Daoze committed on
Commit a1137c8 · verified · 1 Parent(s): dd4df03

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/UC-7gWM77ZY/Initial_manuscript_md/Initial_manuscript.md +55 -0
  2. papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/UC-7gWM77ZY/Initial_manuscript_tex/Initial_manuscript.tex +43 -0
  3. papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/t6HPdtyIaXB/Initial_manuscript_md/Initial_manuscript.md +109 -0
  4. papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/t6HPdtyIaXB/Initial_manuscript_tex/Initial_manuscript.tex +43 -0
  5. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/0gLzHrE_t3z/Initial_manuscript_md/Initial_manuscript.md +486 -0
  6. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/0gLzHrE_t3z/Initial_manuscript_tex/Initial_manuscript.tex +359 -0
  7. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/2f70OXlGQMd/Initial_manuscript_md/Initial_manuscript.md +193 -0
  8. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/2f70OXlGQMd/Initial_manuscript_tex/Initial_manuscript.tex +185 -0
  9. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/EC1vWkJXpjy/Initial_manuscript_md/Initial_manuscript.md +311 -0
  10. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/EC1vWkJXpjy/Initial_manuscript_tex/Initial_manuscript.tex +281 -0
  11. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/G4auHKwZYP0/Initial_manuscript_md/Initial_manuscript.md +421 -0
  12. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/G4auHKwZYP0/Initial_manuscript_tex/Initial_manuscript.tex +343 -0
  13. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/HxIZzQZy_0F/Initial_manuscript_md/Initial_manuscript.md +264 -0
  14. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/HxIZzQZy_0F/Initial_manuscript_tex/Initial_manuscript.tex +185 -0
  15. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/JENSKEEzsoU/Initial_manuscript_md/Initial_manuscript.md +219 -0
  16. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/JENSKEEzsoU/Initial_manuscript_tex/Initial_manuscript.tex +212 -0
  17. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/JQCYcdHfXyJ/Initial_manuscript_md/Initial_manuscript.md +233 -0
  18. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/JQCYcdHfXyJ/Initial_manuscript_tex/Initial_manuscript.tex +332 -0
  19. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/M4wgkxaPcyj/Initial_manuscript_md/Initial_manuscript.md +169 -0
  20. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/M4wgkxaPcyj/Initial_manuscript_tex/Initial_manuscript.tex +209 -0
  21. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/PlUA_mgGaPq/Initial_manuscript_md/Initial_manuscript.md +255 -0
  22. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/PlUA_mgGaPq/Initial_manuscript_tex/Initial_manuscript.tex +171 -0
  23. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/SPxaJuM4Hbz/Initial_manuscript_md/Initial_manuscript.md +204 -0
  24. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/SPxaJuM4Hbz/Initial_manuscript_tex/Initial_manuscript.tex +230 -0
  25. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/VvRbhkiAwR/Initial_manuscript_md/Initial_manuscript.md +253 -0
  26. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/VvRbhkiAwR/Initial_manuscript_tex/Initial_manuscript.tex +232 -0
  27. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/W3Dzaik1ipL/Initial_manuscript_md/Initial_manuscript.md +249 -0
  28. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/W3Dzaik1ipL/Initial_manuscript_tex/Initial_manuscript.tex +321 -0
  29. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/XOkm8xdns5R/Initial_manuscript_md/Initial_manuscript.md +225 -0
  30. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/XOkm8xdns5R/Initial_manuscript_tex/Initial_manuscript.tex +198 -0
  31. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/ZQ_HvBxcdCv/Initial_manuscript_md/Initial_manuscript.md +226 -0
  32. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/ZQ_HvBxcdCv/Initial_manuscript_tex/Initial_manuscript.tex +169 -0
  33. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/dGOeF3y_Weh/Initial_manuscript_md/Initial_manuscript.md +217 -0
  34. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/dGOeF3y_Weh/Initial_manuscript_tex/Initial_manuscript.tex +228 -0
  35. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/mlmwkAdIeK/Initial_manuscript_md/Initial_manuscript.md +153 -0
  36. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/mlmwkAdIeK/Initial_manuscript_tex/Initial_manuscript.tex +144 -0
  37. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/qJYo-Bbxu07/Initial_manuscript_md/Initial_manuscript.md +236 -0
  38. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/qJYo-Bbxu07/Initial_manuscript_tex/Initial_manuscript.tex +179 -0
  39. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/qd51R0JNLl/Initial_manuscript_md/Initial_manuscript.md +172 -0
  40. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/qd51R0JNLl/Initial_manuscript_tex/Initial_manuscript.tex +221 -0
  41. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/ub9_2iAo3D/Initial_manuscript_md/Initial_manuscript.md +143 -0
  42. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/ub9_2iAo3D/Initial_manuscript_tex/Initial_manuscript.tex +168 -0
  43. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/yx-k0ukHzDR/Initial_manuscript_md/Initial_manuscript.md +259 -0
  44. papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/yx-k0ukHzDR/Initial_manuscript_tex/Initial_manuscript.tex +247 -0
  45. papers/ACL/ACL 2022/ACL 2022 Workshop/ACL 2022 Workshop CMCL/B6PlLQtl8Zq/Initial_manuscript_md/Initial_manuscript.md +318 -0
  46. papers/ACL/ACL 2022/ACL 2022 Workshop/ACL 2022 Workshop CMCL/B6PlLQtl8Zq/Initial_manuscript_tex/Initial_manuscript.tex +157 -0
  47. papers/ACL/ACL 2022/ACL 2022 Workshop/ACL 2022 Workshop CMCL/BGMfS7tgIWq/Initial_manuscript_md/Initial_manuscript.md +475 -0
  48. papers/ACL/ACL 2022/ACL 2022 Workshop/ACL 2022 Workshop CMCL/BGMfS7tgIWq/Initial_manuscript_tex/Initial_manuscript.tex +414 -0
  49. papers/ACL/ACL 2022/ACL 2022 Workshop/ACL 2022 Workshop CMCL/B_OII7tlIZ5/Initial_manuscript_md/Initial_manuscript.md +259 -0
  50. papers/ACL/ACL 2022/ACL 2022 Workshop/ACL 2022 Workshop CMCL/B_OII7tlIZ5/Initial_manuscript_tex/Initial_manuscript.tex +193 -0
papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/UC-7gWM77ZY/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,55 @@
1
+ ## Continual Causal Abstractions
2
+
3
+ Anonymous submission
4
+
5
+ ## Abstract
6
+
7
+ This short paper discusses continually updated causal abstractions as a potential direction of future research. The key idea is to revise the existing level of causal abstraction to a different level of detail that is both consistent with the history of observed data and more effective in solving a given task.
8
+
9
+ Overview. [1] discusses the necessity of (a) causal abstractions for effectively solving tasks and (b) continual updates when data starts changing. [2] highlights existing approaches for (a). [3] discusses starting points for (b).
10
+
11
+ ## [1] Motivation
12
+
13
+ Both causality (as defined by Pearl (2009)) and continual learning (Hadsell et al. 2020) seem integral to the quest of understanding (artificial) intelligence. Causality's main contribution has been the formalization of key ideas such as interventions, counterfactuals and structural mechanisms. Continual learning, on the other hand, has raised awareness of the importance of learning stable concepts continuously over time, without access to previous experiences, much like biological systems. The goal of the AAAI 2023 Bridge Program on "Continual Causality" lies in finding answers to the question of what may be found at the intersection of the two subfields, primarily studied in AI and CogSci research. This short paper envisions the extension of existing work on causal abstractions towards a continual learning setting in which the task-solving agent ought to revise its current model abstraction towards a new level of detail, both to remain consistent with the history of observed data and to find more effective decision rules for solving the current task.
14
+
15
+ Why do we need causal abstractions? Let's illustrate with a simple example. Imagine being a chemist analyzing a particular gas. Your advisor tasks you to analyze the temperature and pressure in the volume. Using your thermometer and barometer, you simply measure the desired quantities. The first task is solved. Next, your advisor tasks you to analyze the average velocity of a moving particle within the gas. It quickly becomes apparent to you that the previous level of description for the gas is now obsolete, since the question has shifted from a macro- to a microscopic level of detail in which we suddenly need information on individual particles. Put differently, we are in need of a different level of abstraction. Moreover, we want our new model to still be consistent with our previous observations; that is, if we were to measure the net kinetic energy of all relevant particle combinations, then we would ideally like to see a match with the previous measurements of the thermometer. This requirement is what is captured by the causal part of the abstraction transformation between the two models for each of the tasks.
16
+
17
+ Why do we need to continually update abstractions? Let's illustrate again using an example. The following is a sneak peek into the lead example from the vision schematic presented in Fig. 1. Imagine being a dietitian analyzing the (causal) effect of a particular diet on the risk of heart disease. Your history of clients has taught you that a patient's total cholesterol is characteristic of whether or not that individual's risk of heart disease is increased. This is your initial, base abstraction: if the diet is balanced, the cholesterol levels will drop and the risk of heart disease decreases. Now a new client, a sumo practitioner, enters your diet program but ends up overdoing it and eating three times the amount of items listed in the plan. To your surprise, although the sumo's cholesterol levels increased, his risk of heart disease decreased. To cope with this counterexample to the previous hypothesis, you decide to revise your abstraction, as you found the high- and low-density lipoproteins to be more predictive of the risk of heart disease. In the sumo's case, the former increased, which also increased the total cholesterol levels while still lowering the risk of heart disease. In other words, the dietitian continually updated the current best causal abstraction to comply with the data history while still answering the initial scientific question effectively.
18
+
19
+ ## [2] Existing Work on Causal Abstractions
20
+
21
+ Definitions. The study of causal abstractions is a subfield of Pearlian causality that aims at formalizing the philosophical concept of an abstraction such that the resulting definition is maximally "useful in practice" (commonly taken to mean that examples such as that of the chemist from [1] work with the definition). Rubenstein et al. (2017) conducted pioneering work in establishing a formalism that discusses "exact transformations" between Structural Causal Models (SCMs), which allowed the authors to (i) marginalize out 'irrelevant' variables, (ii) aggregate variables into sensible groups, and even (iii) view dynamic systems as their stationary counterparts. Following that, Beckers and Halpern (2019) fixed several shortcomings by generalizing the former formalism to "(strong) abstractions" that (i) work on SCMs directly, as opposed to probabilistic parameterizations, and (ii) consider all possible interventions of an SCM, as opposed to only a selected subset. To provide a short glimpse into the state-of-the-art formalization, given the standard formulation of an SCM $M = ((\mathcal{U},\mathcal{V},\mathcal{R}), \mathcal{F}, \mathcal{I}, \Pr)$ with a poset $\mathcal{I}$ indicating possible perfect interventions, we have:
22
+
23
+ ![01963d7c-6bc8-749a-ad70-baf71334a452_1_217_150_1359_589_0.jpg](images/01963d7c-6bc8-749a-ad70-baf71334a452_1_217_150_1359_589_0.jpg)
24
+
25
+ Figure 1: Vision of Continual Causal Abstractions (CCA). A schematic illustration for CCA. The scientific question (top purple box) asks about the causal relation (or effect) of a balanced diet onto the risk of heart disease. Based on data consisting of patient records that record various features (see right grey box for a legend), a causal abstraction algorithm provides the initial causal abstraction (middle box) that suggests optimal decision rules (middle left grey box) based on the mediator variable of total cholesterol. With the incoming new example (right teal box), the CCA algorithm provides an updated causal abstraction that uses both HL and LL as mediator variables (middle teal box) with new optimal decision rules (lower left grey box). In summary, the macroscopic view of cholesterol levels in the initial abstraction was sufficient for analyzing the initial two data points; however, the third data point required a more fine-grained abstraction that considers the levels of high- and low-density lipoproteins, since an increase in the former also leads to an increase of TC but actually lowers HD. (Best viewed in color.)
26
+
27
+ Definition 1 For a low-level SCM $M_L$, a $\tau$-abstraction to a high-level SCM $M_H$ is given by
28
+
29
+ 1. surjective $\tau : {\mathcal{R}}_{L}\left( {\mathcal{V}}_{L}\right) \rightarrow {\mathcal{R}}_{H}\left( {\mathcal{V}}_{H}\right)$ s.t. $\tau \left( \mathop{\Pr }\limits_{L}^{i}\right) = \mathop{\Pr }\limits_{H}^{{\omega \left( i\right) }}$ with $i \in {\mathcal{I}}_{L}$ and order-preserving, surjective $\omega : {\mathcal{I}}_{L} \rightarrow {\widetilde{\mathcal{I}}}_{H}$ ,
30
+
31
+ 2. surjective ${\tau }_{\mathcal{U}} : \mathcal{R}\left( {\mathcal{U}}_{L}\right) \rightarrow \mathcal{R}\left( {\mathcal{U}}_{H}\right)$ s.t. $\tau \left( {{M}_{L}\left( {{\mathbf{u}}_{L}, i}\right) }\right) =$ ${M}_{H}\left( {{\tau }_{\mathcal{U}}\left( {\mathbf{u}}_{L}\right) ,\omega \left( i\right) }\right)$ , and
32
+
33
+ 3. ${\mathcal{I}}_{H} = {\omega }_{\tau }\left( {\mathcal{I}}_{L}\right)$ where ${\omega }_{\tau }$ restricts $\omega$ further to subsets of a given intervention.
34
+
35
+ Returning to the chemist example from [1], suitable formalizations for ${M}_{H},{M}_{L}$ can be shown to commute via some $\tau$ -abstraction for a corresponding choice of $\tau ,{\tau }_{\mathcal{U}},\omega$ , and ${\omega }_{\tau }$ .
36
+
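As a minimal, purely illustrative sketch (not taken from the paper), the following Python snippet checks the commutation condition of Definition 1 on a toy pair of SCMs inspired by the chemist example: a low-level model over two "particle" variables and a high-level model over their sum. All names and the restriction to full interventions are assumptions made for the example.

```python
# Toy tau-abstraction check: a low-level SCM over (X1, X2) abstracts to a
# high-level SCM over their total T = X1 + X2.
import itertools

def M_L(u, intervention):           # low level: X1 := U1, X2 := U2
    x = {"X1": u["U1"], "X2": u["U2"]}
    x.update(intervention)          # a perfect intervention overrides the mechanisms
    return x

def M_H(u, intervention):           # high level: T := U_T
    t = {"T": u["U_T"]}
    t.update(intervention)
    return t

tau   = lambda x: {"T": x["X1"] + x["X2"]}      # maps low-level outcomes to high-level ones
tau_U = lambda u: {"U_T": u["U1"] + u["U2"]}    # maps exogenous states

def omega(i):                       # maps low-level interventions to high-level ones
    return {} if not i else {"T": i["X1"] + i["X2"]}

# Check tau(M_L(u, i)) == M_H(tau_U(u), omega(i)) for all exogenous states and a
# small set of full interventions (partial interventions would not commute here).
interventions = [{}, {"X1": 0, "X2": 1}, {"X1": 1, "X2": 1}]
for u1, u2 in itertools.product([0, 1], repeat=2):
    u = {"U1": u1, "U2": u2}
    for i in interventions:
        assert tau(M_L(u, i)) == M_H(tau_U(u), omega(i))
print("toy tau-abstraction commutes on all checked cases")
```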
37
+ Learning. To the best of our knowledge, there exist no works yet on actually learning $\tau$-abstractions, i.e., no automatic verification of abstractions for SCMs. While an end-to-end system will require such an abstraction learner, we are fortunate in that CCA can be researched independently.
38
+
39
+ ## [3] Future Work: Updating Existing Abstractions
40
+
41
+ While even just learning causal abstractions as in Def. 1 remains an open problem, it is not a prerequisite for starting to work on continual causal abstractions, since the interfaces at the input and output of the to-be-developed algorithm are already provided. Furthermore, we can get access to models as in the "initial [and] revised causal abstraction" highlighted in Fig. 1 by simply training neurally parameterized SCMs (see Xia et al. (2021) for a formal introduction) on suitable data. In the next step, we extract decision rules from the neural network modules. While this is a hard problem in general, using suitable assumptions that still cover our problem instance (here, the "DP $\rightarrow$ HD?" question), such as linear structural equations and binary variables, we can easily extract decision rules of the form $\mathrm{DP} = 1 \Rightarrow \mathrm{TC} = 0$. Given such a set of rules (as highlighted in the grey boxes on the left side of Fig. 1), we can simply check our new counterexample data point (here: the sumo practitioner with both $\mathrm{TC} = 1, \mathrm{HD} = 0$) against the extracted rules. If an inconsistency is found, then the update procedure for existing abstractions is triggered. An objective for achieving the desired abstraction could be to again maximize prediction, but subject to the constraint of dropping the previously predictive variable (here: TC).
42
+
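To make the rule-checking step concrete, here is a hedged sketch (not the paper's implementation) of testing a new observation against decision rules extracted from the current abstraction and flagging when the update procedure should trigger. The rule set and variable names (DP, TC, HD) simply follow Fig. 1 and are illustrative.

```python
# Rules of the form "if the antecedent holds, the consequent must hold".
rules = [
    ({"DP": 1}, {"TC": 0}),   # balanced diet plan  =>  total cholesterol low
    ({"TC": 1}, {"HD": 1}),   # high total cholesterol  =>  elevated heart-disease risk
]

def violated_rules(sample, rules):
    """Return rules whose antecedent matches the sample but whose consequent fails."""
    bad = []
    for antecedent, consequent in rules:
        if all(sample.get(k) == v for k, v in antecedent.items()) and \
           any(sample.get(k) != v for k, v in consequent.items()):
            bad.append((antecedent, consequent))
    return bad

# The sumo counterexample from the text: TC = 1 but HD = 0.
sumo = {"TC": 1, "HD": 0}
inconsistent = violated_rules(sumo, rules)
if inconsistent:
    # here the abstraction-update procedure would be triggered, e.g. refitting
    # while dropping the previously predictive variable TC
    print("inconsistency found:", inconsistent)
```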
43
+ In summary we can state that the two key research questions towards CCA seem to be: (1) how do we spot inadequate abstractions (presented idea was checking counterexample against learned decision rules), and (2) how do we update our abstraction to become adequate (presented idea was an objective that maximizes prediction while discarding key variables from the previous abstraction).
44
+
45
+ ## References
46
+
47
+ Beckers, S.; and Halpern, J. Y. 2019. Abstracting causal models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 2678-2685.
48
+
49
+ Hadsell, R.; Rao, D.; Rusu, A. A.; and Pascanu, R. 2020. Embracing change: Continual learning in deep neural networks. Trends in Cognitive Sciences, 24(12): 1028-1040.
50
+
51
+ Pearl, J. 2009. Causality. Cambridge University Press.
52
+
53
+ Rubenstein, P. K.; Weichwald, S.; Bongers, S.; Mooij, J. M.; Janzing, D.; Grosse-Wentrup, M.; and Schölkopf, B. 2017. Causal Consistency of Structural Equation Models. In Elidan, G.; and Kersting, K., eds., Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI-17). AUAI Press.
54
+
55
+ Xia, K.; Lee, K.-Z.; Bengio, Y.; and Bareinboim, E. 2021. The causal-neural connection: Expressiveness, learnability, and inference. Advances in Neural Information Processing Systems, 34: 10823-10836.
papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/UC-7gWM77ZY/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,43 @@
1
+ § CONTINUAL CAUSAL ABSTRACTIONS
2
+
3
+ Anonymous submission
4
+
5
+ § ABSTRACT
6
+
7
+ This short paper discusses continually updated causal abstractions as a potential direction of future research. The key idea is to revise the existing level of causal abstraction to a different level of detail that is both consistent with the history of observed data and more effective in solving a given task.
8
+
9
+ Overview. [1] discusses the necessity of (a) causal abstractions for effectively solving tasks and (b) continual updates when data starts changing. [2] highlights existing approaches for (a). [3] discusses starting points for (b).
10
+
11
+ § [1] MOTIVATION
12
+
13
+ Both causality (as defined by Pearl (2009)) and continual learning (Hadsell et al. 2020) seem integral to the quest of understanding (artificial) intelligence. Causality's main contribution has been the formalization of key ideas such as interventions, counterfactuals and structural mechanisms. Continual learning, on the other hand, has raised awareness of the importance of learning stable concepts continuously over time, without access to previous experiences, much like biological systems. The goal of the AAAI 2023 Bridge Program on "Continual Causality" lies in finding answers to the question of what may be found at the intersection of the two subfields, primarily studied in AI and CogSci research. This short paper envisions the extension of existing work on causal abstractions towards a continual learning setting in which the task-solving agent ought to revise its current model abstraction towards a new level of detail, both to remain consistent with the history of observed data and to find more effective decision rules for solving the current task.
14
+
15
+ Why do we need causal abstractions? Let's illustrate with a simple example. Imagine being a chemist analyzing a particular gas. Your advisor tasks you to analyze the temperature and pressure in the volume. Using your thermometer and barometer, you simply measure the desired quantities. The first task is solved. Next, your advisor tasks you to analyze the average velocity of a moving particle within the gas. It quickly becomes apparent to you that the previous level of description for the gas is now obsolete, since the question has shifted from a macro- to a microscopic level of detail in which we suddenly need information on individual particles. Put differently, we are in need of a different level of abstraction. Moreover, we want our new model to still be consistent with our previous observations; that is, if we were to measure the net kinetic energy of all relevant particle combinations, then we would ideally like to see a match with the previous measurements of the thermometer. This requirement is what is captured by the causal part of the abstraction transformation between the two models for each of the tasks.
16
+
17
+ Why do we need to continually update abstractions? Let's illustrate again using an example. The following is a sneak peek into the lead example from the vision schematic presented in Fig. 1. Imagine being a dietitian analyzing the (causal) effect of a particular diet on the risk of heart disease. Your history of clients has taught you that a patient's total cholesterol is characteristic of whether or not that individual's risk of heart disease is increased. This is your initial, base abstraction: if the diet is balanced, the cholesterol levels will drop and the risk of heart disease decreases. Now a new client, a sumo practitioner, enters your diet program but ends up overdoing it and eating three times the amount of items listed in the plan. To your surprise, although the sumo's cholesterol levels increased, his risk of heart disease decreased. To cope with this counterexample to the previous hypothesis, you decide to revise your abstraction, as you found the high- and low-density lipoproteins to be more predictive of the risk of heart disease. In the sumo's case, the former increased, which also increased the total cholesterol levels while still lowering the risk of heart disease. In other words, the dietitian continually updated the current best causal abstraction to comply with the data history while still answering the initial scientific question effectively.
18
+
19
+ § [2] EXISTING WORK ON CAUSAL ABSTRACTIONS
20
+
21
+ Definitions. The study of causal abstractions is a subfield of Pearlian causality that aims at formalizing the philosophical concept of an abstraction such that the resulting definition is maximally "useful in practice" (commonly taken to mean that examples such as that of the chemist from [1] work with the definition). Rubenstein et al. (2017) conducted pioneering work in establishing a formalism that discusses "exact transformations" between Structural Causal Models (SCMs), which allowed the authors to (i) marginalize out 'irrelevant' variables, (ii) aggregate variables into sensible groups, and even (iii) view dynamic systems as their stationary counterparts. Following that, Beckers and Halpern (2019) fixed several shortcomings by generalizing the former formalism to "(strong) abstractions" that (i) work on SCMs directly, as opposed to probabilistic parameterizations, and (ii) consider all possible interventions of an SCM, as opposed to only a selected subset. To provide a short glimpse into the state-of-the-art formalization, given the standard formulation of an SCM $M = ((\mathcal{U},\mathcal{V},\mathcal{R}), \mathcal{F}, \mathcal{I}, \Pr)$ with a poset $\mathcal{I}$ indicating possible perfect interventions, we have:
22
+
23
+ <graphics>
24
+
25
+ Figure 1: Vision of Continual Causal Abstractions (CCA). A schematic illustration for CCA. The scientific question (top purple box) asks about the causal relation (or effect) of a balanced diet onto the risk of heart disease. Based on data consisting of patient records that record various features (see right grey box for a legend), a causal abstraction algorithm provides the initial causal abstraction (middle box) that suggests optimal decision rules (middle left grey box) based on the mediator variable of total cholesterol. With the incoming new example (right teal box), the CCA algorithm provides an updated causal abstraction that uses both HL and LL as mediator variables (middle teal box) with new optimal decision rules (lower left grey box). In summary, the macroscopic view of cholesterol levels in the initial abstraction was sufficient for analyzing the initial two data points; however, the third data point required a more fine-grained abstraction that considers the levels of high- and low-density lipoproteins, since an increase in the former also leads to an increase of TC but actually lowers HD. (Best viewed in color.)
26
+
27
+ Definition 1 For a low-level SCM $M_L$, a $\tau$-abstraction to a high-level SCM $M_H$ is given by
28
+
29
+ 1. surjective $\tau : {\mathcal{R}}_{L}\left( {\mathcal{V}}_{L}\right) \rightarrow {\mathcal{R}}_{H}\left( {\mathcal{V}}_{H}\right)$ s.t. $\tau \left( \mathop{\Pr }\limits_{L}^{i}\right) = \mathop{\Pr }\limits_{H}^{{\omega \left( i\right) }}$ with $i \in {\mathcal{I}}_{L}$ and order-preserving, surjective $\omega : {\mathcal{I}}_{L} \rightarrow {\widetilde{\mathcal{I}}}_{H}$ ,
30
+
31
+ 2. surjective ${\tau }_{\mathcal{U}} : \mathcal{R}\left( {\mathcal{U}}_{L}\right) \rightarrow \mathcal{R}\left( {\mathcal{U}}_{H}\right)$ s.t. $\tau \left( {{M}_{L}\left( {{\mathbf{u}}_{L},i}\right) }\right) =$ ${M}_{H}\left( {{\tau }_{\mathcal{U}}\left( {\mathbf{u}}_{L}\right) ,\omega \left( i\right) }\right)$ , and
32
+
33
+ 3. ${\mathcal{I}}_{H} = {\omega }_{\tau }\left( {\mathcal{I}}_{L}\right)$ where ${\omega }_{\tau }$ restricts $\omega$ further to subsets of a given intervention.
34
+
35
+ Returning to the chemist example from [1], suitable formalizations for ${M}_{H},{M}_{L}$ can be shown to commute via some $\tau$ -abstraction for a corresponding choice of $\tau ,{\tau }_{\mathcal{U}},\omega$ , and ${\omega }_{\tau }$ .
36
+
37
+ Learning. To the best of our knowledge, there exist no works yet on actually learning $\tau$-abstractions, i.e., no automatic verification of abstractions for SCMs. While an end-to-end system will require such an abstraction learner, we are fortunate in that CCA can be researched independently.
38
+
39
+ § [3] FUTURE WORK: UPDATING EXISTING ABSTRACTIONS
40
+
41
+ While even just learning causal abstractions as in Def. 1 remains an open problem, it is not a prerequisite for starting to work on continual causal abstractions, since the interfaces at the input and output of the to-be-developed algorithm are already provided. Furthermore, we can get access to models as in the "initial [and] revised causal abstraction" highlighted in Fig. 1 by simply training neurally parameterized SCMs (see Xia et al. (2021) for a formal introduction) on suitable data. In the next step, we extract decision rules from the neural network modules. While this is a hard problem in general, using suitable assumptions that still cover our problem instance (here, the "DP $\rightarrow$ HD?" question), such as linear structural equations and binary variables, we can easily extract decision rules of the form $\mathrm{DP} = 1 \Rightarrow \mathrm{TC} = 0$. Given such a set of rules (as highlighted in the grey boxes on the left side of Fig. 1), we can simply check our new counterexample data point (here: the sumo practitioner with both $\mathrm{TC} = 1, \mathrm{HD} = 0$) against the extracted rules. If an inconsistency is found, then the update procedure for existing abstractions is triggered. An objective for achieving the desired abstraction could be to again maximize prediction, but subject to the constraint of dropping the previously predictive variable (here: TC).
42
+
43
+ In summary we can state that the two key research questions towards CCA seem to be: (1) how do we spot inadequate abstractions (presented idea was checking counterexample against learned decision rules), and (2) how do we update our abstraction to become adequate (presented idea was an objective that maximizes prediction while discarding key variables from the previous abstraction).
papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/t6HPdtyIaXB/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,109 @@
1
+ # Towards Causal Replay for Knowledge Rehearsal in Continual Learning
2
+
3
+ Anonymous submission
4
+
5
+ ## Abstract
6
+
7
+ Given the challenges associated with the real-world deployment of Machine Learning (ML) models, especially towards efficiently integrating novel information on-the-go, both Continual Learning (CL) and Causality have been proposed and investigated individually as potent solutions. Despite their complementary nature, the bridge between them is still largely unexplored. In this work, we focus on causality to improve the learning and knowledge preservation capabilities of CL models. In particular, positing Causal Replay for knowledge rehearsal, we discuss how CL-based models can benefit from causal interventions towards improving their ability to replay past knowledge in order to mitigate forgetting.
8
+
9
+ ## 1 Introduction
10
+
11
+ Real-world applications of Machine Learning (ML) solutions require models to dynamically learn and adapt with streams of incrementally acquired data, while preserving past knowledge. Conventional ML-based methods are ill-equipped to meet these challenges as they work under the pivotal assumption that all data is available a priori and drawn from relatively stationary data distributions (Graffieti, Borghi, and Maltoni 2022). This stationarity ensures that training samples are independently and identically distributed (i.i.d.), allowing models to learn in batches of representative distributions. The real world, however, is not stationary and changes continuously (Hadsell et al. 2020). As models continually encounter novel information, violating this i.i.d. assumption, their ability to remember previously learnt tasks progressively deteriorates, resulting in forgetting (McCloskey and Cohen 1989).
12
+
13
+ Continual Learning (CL) (Parisi et al. 2019; Hadsell et al. 2020) aims to address adaptability in ML-based models by enabling them to continually learn and adapt, balancing incremental learning of novel information with the preservation of past knowledge. CL focuses on learning with continuous streams of data acquired from non-stationary or changing distributions (Hadsell et al. 2020). This may be achieved by regulating model updates to control plasticity or rehearsing past knowledge by storing and replaying already seen information to simulate i.i.d learning settings.
14
+
15
+ Given the above, Causality (Pearl 2009), especially addressing adaptability and causal discovery, can complement lifelong learning of information by helping understand the causal structure of the data or task and 'readjust' model learning to cope with changing data distributions (Pearl 2019). Furthermore, it has been posited that the increasingly apparent challenges in ML (such as robustness, generalisation, bias, transparency) are due to conventional ML methods learning correlation-based patterns and relationships (Schölkopf et al. 2021). Causal reasoning tools can contribute towards addressing some of these challenges (Cheng et al. 2022).
16
+
17
+ In this position paper, we focus on knowledge rehearsal as an effective tool for CL-based models to preserve past knowledge, particularly using causal interventions to understand and update data distributions such that only the most relevant data samples (for rehearsal) or features (for pseudo-rehearsal) are used by the model for rehearsal. Such Causal Replay can help improve the efficiency of knowledge rehearsal for continual learning of information.
18
+
19
+ ### 1.1 Knowledge Rehearsal to Mitigate Forgetting
20
+
21
+ Efficient rehearsal of past knowledge can be achieved by physically storing samples from previous tasks in memory buffers and regularly sampling from them (rehearsal; Robins 1993), mixing them with new data. The simplest strategy to achieve this is to fix the size of the memory buffer to be 'large enough' and randomly maintain a fraction of previously seen samples from each task in the buffer for periodic rehearsal (Hsu et al. 2018). However, as the number of tasks increases, fewer samples are available for rehearsal per task. More sophisticated rehearsal methods focus on prioritising replay following certain heuristics, such as feature or classification margins (Hu, Zhang, and Zhu 2021), or on storing exemplars for each task that best approximate the task means (Rebuffi et al. 2017). Despite such 'intelligent' sampling, high data dimensionality and a large number of tasks require a huge amount of memory, making real-world application inefficient (Kwon et al. 2021).
22
+
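For concreteness, a minimal sketch of the fixed-size buffer baseline described above (the per-task quota, sampling policy and sizes are assumptions for illustration, not prescriptions from the paper):

```python
import random

class ReplayBuffer:
    """Fixed-capacity memory that keeps a random subset of samples per seen task."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.per_task = {}                      # task_id -> list of stored (x, y) pairs

    def add_task(self, task_id, samples):
        # with T tasks seen, each task keeps roughly capacity / T samples
        self.per_task[task_id] = list(samples)
        quota = max(1, self.capacity // len(self.per_task))
        for t, stored in self.per_task.items():
            if len(stored) > quota:
                self.per_task[t] = random.sample(stored, quota)

    def sample(self, k):
        pool = [s for stored in self.per_task.values() for s in stored]
        return random.sample(pool, min(k, len(pool)))

# usage: mix replayed samples into the current task's batch
buffer = ReplayBuffer(capacity=200)
buffer.add_task(0, [((float(i),), 0) for i in range(500)])
buffer.add_task(1, [((float(i),), 1) for i in range(500)])
batch = [((float(i),), 1) for i in range(32)] + buffer.sample(32)
```

Note how the per-task quota shrinks as more tasks arrive, which is exactly the limitation pointed out above.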
23
+ Alternatively, generative models that learn the inherent data statistics may be used alongside the learning model, enabling it to draw pseudo-samples to be replayed (pseudo-rehearsal; Robins 1995) along with novel data. Recent advances in generative models (Goodfellow et al. 2014; Kingma and Welling 2013), particularly in their ability to generate high-quality samples, have greatly enhanced the potential of pseudo-rehearsal methods (Shin et al. 2017; Churamani and Gunes 2020). More recent methods focus on generative feature replay (van de Ven, Siegelmann, and Tolias 2020; Stoychev, Churamani, and Gunes 2023), alleviating the need to optimise generators for reconstructing high-dimensional samples. However, as the number of tasks increases, they face capacity saturation and are not able to efficiently learn task-discriminative representations. Furthermore, the generators become harder to train, resulting in an inefficient rehearsal of past knowledge. We believe causality can offer significant improvements in this regard. To date, there is minimal work that explores the synergies between CL and causality (Chu, Rathbun, and Li 2021).
24
+
25
+ ![01964111-b4dd-7ba7-9889-a8fff2a33946_1_243_145_1312_287_0.jpg](images/01964111-b4dd-7ba7-9889-a8fff2a33946_1_243_145_1312_287_0.jpg)
26
+
27
+ Figure 1: Causal Replay for (a) Prioritised Rehearsal and efficient (b) Pseudo-rehearsal of past knowledge.
28
+
29
+ ### 1.2 Causality
30
+
31
+ The study of causality entails a range of tools such as graphical models, the do-operator, counterfactuals as well as structural equations (Pearl 2009). Using these tools, conventional causal research has mostly focused on causal pattern recognition (Vowels, Camgoz, and Bowden 2021) and causal distribution estimation (Yao et al. 2021). Here, we focus on methods to merge conventional causal research with ML to address the existing gaps. Recent works in causal interpretability (Moraffah et al. 2020) and causal fairness (Makhlouf, Zhioua, and Palamidessi 2020) have proven such an approach to be promising. Here, we leverage two main themes: Causal Interventions and Causal Structure Discovery.
32
+
33
+ Following Pearl's notation (Pearl 2009) for a Structural Causal Model (SCM), we have a set of variables $V$ and a set of functions $F$ that encode the causal relations between the variables. Using this framework, causal interventions can either be 'structural' or 'parametric' (Spirtes et al. 2000), representing a continuum from 'harder' to 'softer' interventions. A 'hard' intervention can be understood as a forcible removal of an edge, such that the function ${V}_{i} \leftarrow {f}_{{V}_{i}}$ is modified so that another variable ${V}_{j}$ is no longer a parent of variable ${V}_{i}$. 'Soft' interventions, on the other hand, simply modify the conditional probability distribution of the intervened variable ${V}_{i}$. Depending on the task, we can combine the most appropriate form of causal intervention with CL-based models to preserve past knowledge and update the model using only the relevant features. In addition, we also propose to leverage existing causal discovery methods (Vowels, Camgoz, and Bowden 2021) that can be utilised to discover causal relations within the observational data. We propose to impart the discovered causal knowledge to CL-based methods in order to mitigate forgetting and to learn new relevant features.
34
+
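The distinction between 'hard' and 'soft' interventions can be illustrated on a two-variable SCM $V_j \rightarrow V_i$; the sketch below is an assumed toy model for illustration, not part of the paper:

```python
import random

def sample_scm(hard_value=None, soft_noise=0.0):
    """Draw (V_j, V_i) from a toy SCM V_j -> V_i under optional interventions."""
    v_j = random.gauss(0, 1)                                  # V_j := U_j
    if hard_value is not None:
        v_i = hard_value                                      # hard: edge V_j -> V_i removed, V_i forced
    else:
        v_i = 2.0 * v_j + random.gauss(0, 1.0 + soft_noise)   # soft: keep the edge, alter P(V_i | V_j)
    return v_j, v_i

observational = [sample_scm() for _ in range(5)]
hard_do       = [sample_scm(hard_value=1.0) for _ in range(5)]   # do(V_i = 1)
soft_shifted  = [sample_scm(soft_noise=2.0) for _ in range(5)]   # modified conditional of V_i
```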
35
+ ## 2 Causal Replay for Knowledge Rehearsal
36
+
37
+ ### 2.1 Causal Rehearsal
38
+
39
+ One strategy for augmenting CL with causality can be causality-driven rehearsal (see Figure 1 a). Firstly, we aim to understand the causal structure of task-specific data in order to prioritise samples for rehearsal. As neural networks are capable of representing the input features as well as their respective causal relations to each other within their parameters, we can learn the causal structure of the data during the training phase using a range of existing causal discovery methods (Vowels, Camgoz, and Bowden 2021); an online approach is exemplified by Javed, White, and Bengio (2020). As we are only able to discover causal Directed Acyclic Graphs (DAGs) up to Markov equivalence, we can subsequently leverage causal-scoring methods (Glymour, Zhang, and Spirtes 2019) or causality-based feature selection methods (Yu et al. 2020) to determine which samples to prioritise for rehearsal. Subsequently, as we update the model with each new task, we reprioritise the samples to update the memory buffer as well as the learnt causal structure. As such, the causal model can then also be updated in a continual manner as more data becomes available.
40
+
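A hedged sketch of the prioritisation step described above, using a stand-in relevance score (the scoring rule, the notion of 'causal parents' of the label, and the toy data are assumptions for illustration only):

```python
def causal_relevance(sample, causal_parents):
    """Stand-in score: magnitude of the features selected as causal parents of the label."""
    x, _y = sample
    return sum(abs(x[j]) for j in causal_parents)

def prioritise(samples, causal_parents, budget):
    """Keep the `budget` samples that carry the most signal on causally relevant features."""
    ranked = sorted(samples, key=lambda s: causal_relevance(s, causal_parents), reverse=True)
    return ranked[:budget]

# usage with toy 3-feature samples; features 0 and 2 assumed to be causal parents of y
task_samples = [((0.1, 5.0, 0.2), 0), ((2.0, 0.1, 1.5), 1), ((0.3, 0.2, 0.1), 0)]
memory = prioritise(task_samples, causal_parents=[0, 2], budget=2)
```

In a full system, the causal-parents set would come from the causal discovery / feature selection step, and the buffer would be re-prioritised after each task, as described above.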
41
+ ### 2.2 Causal Pseudo-rehearsal
42
+
43
+ Another opportunity is that of causality-driven pseudo-rehearsal (see Figure 1 b). Here, the goal is to use the learnt causal structure of the data to rehearse information in a principled manner. Attempts to remove unwanted causal relations have proven to be effective in the case of knowledge distillation (Deng and Zhang 2021). However, such an idea has yet to be fully explored in CL. Existing methods largely rely on pattern generation to simulate i.i.d. settings. However, this does not take into account the causal relations between variables. One way of addressing this is to make use of interventions (both 'hard' and 'soft') such that we generate samples from an updated distribution which has been 'intervened' upon. Such an approach has proven to be effective in the domain of disentangled representation learning using Variational Autoencoders (VAEs) (Yang et al. 2021). Instead of simply generating pseudo-samples, we can intervene by updating the parameters of the generative model based on the causal effect estimated or parameterised by the learnt causal structure of the data. These parameters can also be continually updated given new information. By conducting pseudo-rehearsal in this manner, we are able to adapt to the changes in new data whilst preserving old information.
44
+
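As a rough sketch of this idea (the decoder and latent factors below are stand-ins, not a CausalVAE implementation), pseudo-samples are drawn after intervening on a causal factor of the generative model:

```python
import random

# stand-in decoder: maps causal latent factors (a z1 -> z2 structure is assumed) to features
decoder = lambda z: (z["z1"], 0.5 * z["z1"] + z["z2"])

def intervened_pseudo_samples(n, decoder, intervention):
    """Generate pseudo-samples after a 'hard' intervention on a latent causal factor."""
    samples = []
    for _ in range(n):
        z = {"z1": random.random(), "z2": random.random()}
        z.update(intervention)                 # force the intervened factor's value
        samples.append(decoder(z))
    return samples

replay = intervened_pseudo_samples(8, decoder, intervention={"z1": 0.0})
```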
45
+ ## References
46
+
47
+ Cheng, L.; Guo, R.; Moraffah, R.; Sheth, P.; Candan, K. S.; and Liu, H. 2022. Evaluation methods and measures for causal learning algorithms. IEEE Transactions on Artificial Intelligence.
48
+
49
+ Chu, Z.; Rathbun, S.; and Li, S. 2021. Continual Lifelong Causal Effect Inference with Real World Evidence.
50
+
51
+ Churamani, N.; and Gunes, H. 2020. CLIFER: Continual Learning with Imagination for Facial Expression Recognition. In 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG), 322-328. IEEE.
52
+
53
+ Deng, X.; and Zhang, Z. 2021. Comprehensive knowledge distillation with causal intervention. Advances in Neural Information Processing Systems, 34: 22158-22170.
54
+
55
+ Glymour, C.; Zhang, K.; and Spirtes, P. 2019. Review of causal discovery methods based on graphical models. Frontiers in genetics, 10: 524.
56
+
57
+ Goodfellow, I. J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative Adversarial Nets. In Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N.; and Weinberger, K. Q., eds., Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, volume 27 of NIPS'14, 2672-2680. MIT Press, Cambridge, MA, USA: MIT Press.
58
+
59
+ Graffieti, G.; Borghi, G.; and Maltoni, D. 2022. Continual Learning in Real-Life Applications. IEEE Robotics and Automation Letters, 7(3): 6195-6202.
60
+
61
+ Hadsell, R.; Rao, D.; Rusu, A. A.; and Pascanu, R. 2020. Embracing Change: Continual Learning in Deep Neural Networks. Trends in Cognitive Sciences, 24(12): 1028-1040.
62
+
63
+ Hsu, Y.-C.; Liu, Y.-C.; Ramasamy, A.; and Kira, Z. 2018. Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines. In NeurIPS Continual learning Workshop.
64
+
65
+ Hu, G.; Zhang, W.; and Zhu, W. 2021. Prioritized Experience Replay for Continual Learning. In 2021 6th International Conference on Computational Intelligence and Applications (ICCIA), 16-20.
66
+
67
+ Javed, K.; White, M.; and Bengio, Y. 2020. Learning causal models online. arXiv preprint arXiv:2006.07461.
68
+
69
+ Kingma, D. P.; and Welling, M. 2013. Auto-Encoding Variational Bayes. CoRR, abs/1312.6114.
70
+
71
+ Kwon, Y. D.; Chauhan, J.; Kumar, A.; Hui, P.; and Mascolo, C. 2021. Exploring System Performance of Continual Learning for Mobile and Embedded Sensing Applications. CoRR, abs/2110.13290.
72
+
73
+ Makhlouf, K.; Zhioua, S.; and Palamidessi, C. 2020. Survey on causal-based machine learning fairness notions. arXiv preprint arXiv:2010.09553.
74
+
75
+ McCloskey, M.; and Cohen, N. J. 1989. Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem. In Bower, G. H., ed., Psychology of Learning and Motivation, volume 24 of Psychology of Learning and Motivation, 109-165. Academic Press.
76
+
77
+ Moraffah, R.; Karami, M.; Guo, R.; Raglin, A.; and Liu, H. 2020. Causal interpretability for machine learning - problems, methods and evaluation. ACM SIGKDD Explorations Newsletter, 22(1): 18-33.
80
+
81
+ Parisi, G. I.; Kemker, R.; Part, J. L.; Kanan, C.; and Wermter, S. 2019. Continual lifelong learning with neural networks: A review. Neural Networks, 113: 54-71.
82
+
83
+ Pearl, J. 2009. Causality. Cambridge university press.
84
+
85
+ Pearl, J. 2019. The seven tools of causal inference, with reflections on machine learning. Communications of the ACM, 62(3): 54-60.
86
+
87
+ Rebuffi, S.-A.; Kolesnikov, A.; Sperl, G.; and Lampert, C. H. 2017. iCaRL: Incremental Classifier and Representation Learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2001-2010.
88
+
89
+ Robins, A. 1993. Catastrophic forgetting in neural networks: the role of rehearsal mechanisms. In Proceedings 1993 The First New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems, 65-68.
90
+
91
+ Robins, A. 1995. Catastrophic Forgetting, Rehearsal and Pseudorehearsal. Connection Science, 7(2): 123-146.
92
+
93
+ Schölkopf, B.; Locatello, F.; Bauer, S.; Ke, N. R.; Kalchbrenner, N.; Goyal, A.; and Bengio, Y. 2021. Toward causal representation learning. Proceedings of the IEEE, 109(5): 612-634.
94
+
95
+ Shin, H.; Lee, J. K.; Kim, J.; and Kim, J. 2017. Continual Learning with Deep Generative Replay. In Guyon, I.; Luxburg, U. V.; Bengio, S.; Wallach, H.; Fergus, R.; Vishwanathan, S.; and Garnett, R., eds., Advances in Neural Information Processing Systems 30, 2990-2999. Curran Associates, Inc.
96
+
97
+ Spirtes, P.; Glymour, C. N.; Scheines, R.; and Heckerman, D. 2000. Causation, prediction, and search. MIT press.
98
+
99
+ Stoychev, S.; Churamani, N.; and Gunes, H. 2023. Latent Generative Replay for Resource-efficient Continual Learning of Facial Expressions. In 17th International Conference on Automatic Face and Gesture Recognition (FG). IEEE.
100
+
101
+ van de Ven, G. M.; Siegelmann, H. T.; and Tolias, A. S. 2020. Brain-inspired replay for continual learning with artificial neural networks. Nature Communications, 11(1).
102
+
103
+ Vowels, M. J.; Camgoz, N. C.; and Bowden, R. 2021. D'ya like DAGs? A survey on structure learning and causal discovery. ACM Computing Surveys (CSUR).
104
+
105
+ Yang, M.; Liu, F.; Chen, Z.; Shen, X.; Hao, J.; and Wang, J. 2021. CausalVAE: Disentangled representation learning via neural structural causal models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9593-9602.
106
+
107
+ Yao, L.; Chu, Z.; Li, S.; Li, Y.; Gao, J.; and Zhang, A. 2021. A survey on causal inference. ACM Transactions on Knowledge Discovery from Data (TKDD), 15(5): 1-46.
108
+
109
+ Yu, K.; Guo, X.; Liu, L.; Li, J.; Wang, H.; Ling, Z.; and Wu, X. 2020. Causality-based feature selection: Methods and evaluations. ACM Computing Surveys (CSUR), 53(5): 1-36.
papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/t6HPdtyIaXB/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,43 @@
1
+ § TOWARDS CAUSAL REPLAY FOR KNOWLEDGE REHEARSAL IN CONTINUAL LEARNING
2
+
3
+ Anonymous submission
4
+
5
+ § ABSTRACT
6
+
7
+ Given the challenges associated with the real-world deployment of Machine Learning (ML) models, especially towards efficiently integrating novel information on-the-go, both Continual Learning (CL) and Causality have been proposed and investigated individually as potent solutions. Despite their complementary nature, the bridge between them is still largely unexplored. In this work, we focus on causality to improve the learning and knowledge preservation capabilities of CL models. In particular, positing Causal Replay for knowledge rehearsal, we discuss how CL-based models can benefit from causal interventions towards improving their ability to replay past knowledge in order to mitigate forgetting.
8
+
9
+ § 1 INTRODUCTION
10
+
11
+ Real-world applications of Machine Learning (ML) solutions require models to dynamically learn and adapt with streams of incrementally acquired data, while preserving past knowledge. Conventional ML-based methods are ill-equipped to meet these challenges as they work under the pivotal assumption that all data is available a priori and drawn from relatively stationary data distributions (Graffieti, Borghi, and Maltoni 2022). This stationarity ensures that training samples are independently and identically distributed (i.i.d.), allowing models to learn in batches of representative distributions. The real world, however, is not stationary and changes continuously (Hadsell et al. 2020). As models continually encounter novel information, violating this i.i.d. assumption, their ability to remember previously learnt tasks progressively deteriorates, resulting in forgetting (McCloskey and Cohen 1989).
12
+
13
+ Continual Learning (CL) (Parisi et al. 2019; Hadsell et al. 2020) aims to address adaptability in ML-based models by enabling them to continually learn and adapt, balancing incremental learning of novel information with the preservation of past knowledge. CL focuses on learning with continuous streams of data acquired from non-stationary or changing distributions (Hadsell et al. 2020). This may be achieved by regulating model updates to control plasticity or rehearsing past knowledge by storing and replaying already seen information to simulate i.i.d learning settings.
14
+
15
+ Given the above, Causality (Pearl 2009), especially addressing adaptability and causal discovery, can complement lifelong learning of information by helping understand the causal structure of the data or task and 'readjust' model learning to cope with changing data distributions (Pearl 2019). Furthermore, it has been posited that the increasingly apparent challenges in ML (such as robustness, generalisation, bias, transparency) are due to conventional ML methods learning correlation-based patterns and relationships (Schölkopf et al. 2021). Causal reasoning tools can contribute towards addressing some of these challenges (Cheng et al. 2022).
16
+
17
+ In this position paper, we focus on knowledge rehearsal as an effective tool for CL-based models to preserve past knowledge, particularly using causal interventions to understand and update data distributions such that only the most relevant data samples (for rehearsal) or features (for pseudo-rehearsal) are used by the model for rehearsal. Such Causal Replay can help improve the efficiency of knowledge rehearsal for continual learning of information.
18
+
19
+ § 1.1 KNOWLEDGE REHEARSAL TO MITIGATE FORGETTING
20
+
21
+ Efficient rehearsal of past knowledge can be achieved by physically storing samples from previous tasks in memory buffers and regularly sampling from them (rehearsal; Robins 1993), mixing them with new data. The simplest strategy to achieve this is to fix the size of the memory buffer to be 'large enough' and randomly maintain a fraction of previously seen samples from each task in the buffer for periodic rehearsal (Hsu et al. 2018). However, as the number of tasks increases, fewer samples are available for rehearsal per task. More sophisticated rehearsal methods focus on prioritising replay following certain heuristics, such as feature or classification margins (Hu, Zhang, and Zhu 2021), or on storing exemplars for each task that best approximate the task means (Rebuffi et al. 2017). Despite such 'intelligent' sampling, high data dimensionality and a large number of tasks require a huge amount of memory, making real-world application inefficient (Kwon et al. 2021).
22
+
23
+ Alternatively, generative models that learn the inherent data statistics may be used alongside the learning model, enabling it to draw pseudo-samples to be replayed (pseudo-rehearsal; Robins 1995) along with novel data. Recent advances in generative models (Goodfellow et al. 2014; Kingma and Welling 2013), particularly in their ability to generate high-quality samples, have greatly enhanced the potential of pseudo-rehearsal methods (Shin et al. 2017; Churamani and Gunes 2020). More recent methods focus on generative feature replay (van de Ven, Siegelmann, and Tolias 2020; Stoychev, Churamani, and Gunes 2023), alleviating the need to optimise generators for reconstructing high-dimensional samples. However, as the number of tasks increases, they face capacity saturation and are not able to efficiently learn task-discriminative representations. Furthermore, the generators become harder to train, resulting in an inefficient rehearsal of past knowledge. We believe causality can offer significant improvements in this regard. To date, there is minimal work that explores the synergies between CL and causality (Chu, Rathbun, and Li 2021).
24
+
25
+ <graphics>
26
+
27
+ Figure 1: Causal Replay for (a) Prioritised Rehearsal and efficient (b) Pseudo-rehearsal of past knowledge.
28
+
29
+ § 1.2 CAUSALITY
30
+
31
+ The study of causality entails a range of tools such as graphical models, the do-operator, counterfactuals as well as structural equations (Pearl 2009). Using these tools, conventional causal research has mostly focused on causal pattern recognition (Vowels, Camgoz, and Bowden 2021) and causal distribution estimation (Yao et al. 2021). Here, we focus on methods to merge conventional causal research with ML to address the existing gaps. Recent works in causal interpretability (Moraffah et al. 2020) and causal fairness (Makhlouf, Zhioua, and Palamidessi 2020) have proven such an approach to be promising. Here, we leverage two main themes: Causal Interventions and Causal Structure Discovery.
32
+
33
+ Following Pearl's notation (Pearl 2009) for a Structural Causal Model (SCM), we have a set of variables $V$ and a set of functions $F$ that encode the causal relations between the variables. Using this framework, causal interventions can either be 'structural' or 'parametric' (Spirtes et al. 2000), representing a continuum from 'harder' to 'softer' interventions. A 'hard' intervention can be understood as a forcible removal of an edge, such that the function ${V}_{i} \leftarrow {f}_{{V}_{i}}$ is modified so that another variable ${V}_{j}$ is no longer a parent of variable ${V}_{i}$. 'Soft' interventions, on the other hand, simply modify the conditional probability distribution of the intervened variable ${V}_{i}$. Depending on the task, we can combine the most appropriate form of causal intervention with CL-based models to preserve past knowledge and update the model using only the relevant features. In addition, we also propose to leverage existing causal discovery methods (Vowels, Camgoz, and Bowden 2021) that can be utilised to discover causal relations within the observational data. We propose to impart the discovered causal knowledge to CL-based methods in order to mitigate forgetting and to learn new relevant features.
34
+
35
+ § 2 CAUSAL REPLAY FOR KNOWLEDGE REHEARSAL
36
+
37
+ § 2.1 CAUSAL REHEARSAL
38
+
39
+ One strategy for augmenting CL with causality can be causality-driven rehearsal (see Figure 1 a). Firstly, we aim to understand the causal structure of task-specific data in order to prioritise samples for rehearsal. As neural networks are capable of representing the input features as well as their respective causal relations to each other within their parameters, we can learn the causal structure of the data during the training phase using a range of existing causal discovery methods (Vowels, Camgoz, and Bowden 2021); an online approach is exemplified by Javed, White, and Bengio (2020). As we are only able to discover causal Directed Acyclic Graphs (DAGs) up to Markov equivalence, we can subsequently leverage causal-scoring methods (Glymour, Zhang, and Spirtes 2019) or causality-based feature selection methods (Yu et al. 2020) to determine which samples to prioritise for rehearsal. Subsequently, as we update the model with each new task, we reprioritise the samples to update the memory buffer as well as the learnt causal structure. As such, the causal model can then also be updated in a continual manner as more data becomes available.
40
+
41
+ § 2.2 CAUSAL PSEUDO-REHEARSAL
42
+
43
+ Another opportunity is that of causality-driven pseudo-rehearsal (see Figure 1 b). Here, the goal is to use the learnt causal structure of the data to rehearse information in a principled manner. Attempts to remove unwanted causal relations have proven to be effective in the case of knowledge distillation (Deng and Zhang 2021). However, such an idea has yet to be fully explored in CL. Existing methods largely rely on pattern generation to simulate i.i.d. settings. However, this does not take into account the causal relations between variables. One way of addressing this is to make use of interventions (both 'hard' and 'soft') such that we generate samples from an updated distribution which has been 'intervened' upon. Such an approach has proven to be effective in the domain of disentangled representation learning using Variational Autoencoders (VAEs) (Yang et al. 2021). Instead of simply generating pseudo-samples, we can intervene by updating the parameters of the generative model based on the causal effect estimated or parameterised by the learnt causal structure of the data. These parameters can also be continually updated given new information. By conducting pseudo-rehearsal in this manner, we are able to adapt to the changes in new data whilst preserving old information.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/0gLzHrE_t3z/Initial_manuscript_md/Initial_manuscript.md ADDED
1
+ ## CORD-19: The COVID-19 Open Research Dataset
2
+
3
+ Lucy Lu Wang ${}^{1, * }$ Kyle Lo ${}^{1, * }$ Douglas Burdick ${}^{2}$
4
+
5
+ Yannis Katsis ${}^{2}$ Rodney Kinney ${}^{1}$ Yunyao Li ${}^{2}$ Ziyang Liu ${}^{6}$ William Merrill ${}^{1}\;$ Paul Mooney ${}^{5}\;$ Dewey Murdick ${}^{7}\;$ Devvret Rishi ${}^{5}$ Jerry Sheehan ${}^{4}$ Zhihong Shen ${}^{3}$ Brandon Stilson ${}^{1}$ Alex D. Wade ${}^{6}$ Kuansan Wang ${}^{3}$ Nancy Xin Ru Wang ${}^{2}$ Chris Wilhelm ${}^{1}$ Boya Xie ${}^{3}$ Douglas Raymond ${}^{1}\;$ Daniel S. Weld ${}^{1,8}\;$ Oren Etzioni ${}^{1}\;$ Sebastian Kohlmeier ${}^{1}$
6
+
7
+ ${}^{1}$ Allen Institute for AI ${}^{2}$ IBM Research ${}^{3}$ Microsoft Research ${}^{4}$ National Library of Medicine ${}^{5}$ Kaggle ${}^{6}$ Chan Zuckerberg Initiative ${}^{7}$ Georgetown University ${}^{8}$ University of Washington
8
+
9
+ \{lucyw, kylel\}@allenai.org
10
+
11
+ ## Abstract
12
+
13
+ The COVID-19 Open Research Dataset (CORD-19) is a growing ${}^{1}$ resource of scientific papers on COVID-19 and related historical coronavirus research. CORD-19 is designed to facilitate the development of text mining and information retrieval systems over its rich collection of metadata and structured full text papers. Since its release, CORD-19 has been downloaded ${}^{2}$ over ${200}\mathrm{\;K}$ times and has served as the basis of many COVID-19 text mining and discovery systems. In this article, we describe the mechanics of dataset construction, highlighting challenges and key design decisions, provide an overview of how CORD-19 has been used, and describe several shared tasks built around the dataset. We hope this resource will continue to bring together the computing community, biomedical experts, and policy makers in the search for effective treatments and management policies for COVID-19.
14
+
15
+ ## 1 Introduction
16
+
17
+ On March 16, 2020, the Allen Institute for AI (AI2), in collaboration with our partners at The White House Office of Science and Technology Policy (OSTP), the National Library of Medicine (NLM), the Chan Zuckerberg Initiative (CZI), Microsoft Research, and Kaggle, coordinated by Georgetown University's Center for Security and Emerging Technology (CSET), released the first version of CORD-19. This resource is a large and growing collection of publications and preprints on COVID-19 and related historical coronaviruses such as SARS and MERS. The initial release consisted of ${28}\mathrm{\;K}$ papers, and the collection has grown to more than ${140}\mathrm{\;K}$ papers over the subsequent weeks. Papers and preprints from several archives are collected and ingested through the Semantic Scholar literature search engine, ${}^{3}$ metadata are harmonized and deduplicated, and paper documents are processed through the pipeline established in Lo et al. (2020) to extract full text (more than 50% of papers in CORD-19 have full text). We commit to providing regular updates to the dataset until an end to the COVID-19 crisis is foreseeable.
18
+
19
+ ![01963db6-3e6c-788a-a29f-01a4c377ad7d_0_850_801_613_301_0.jpg](images/01963db6-3e6c-788a-a29f-01a4c377ad7d_0_850_801_613_301_0.jpg)
20
+
21
+ Figure 1: Papers and preprints are collected from different sources through Semantic Scholar. Released as part of CORD-19 are the harmonized and deduplicated metadata and full text JSON.
22
+
23
+ CORD-19 aims to connect the machine learning community with biomedical domain experts and policy makers in the race to identify effective treatments and management policies for COVID-19. The goal is to harness these diverse and complementary pools of expertise to discover relevant information more quickly from the literature. Users of the dataset have leveraged AI-based techniques in information retrieval and natural language processing to extract useful information.
24
+
25
+ ---
26
+
27
+ *denotes equal contribution
28
+
29
+ ${}^{1}$ The dataset continues to be updated daily with papers from new sources and the latest publications. Statistics reported in this article are up-to-date as of version 2020-06-14.
30
+
31
+ ${}^{2}$ https://www.semanticscholar.org/cord19
32
+
33
+ ${}^{3}$ https://semanticscholar.org/
34
+
35
+ ---
36
+
37
+ Responses to CORD-19 have been overwhelmingly positive, with the dataset being downloaded over ${200}\mathrm{\;K}$ times in the three months since its release. The dataset has been used by clinicians and clinical researchers to conduct systematic reviews, has been leveraged by data scientists and machine learning practitioners to construct search and extraction tools, and is being used as the foundation for several successful shared tasks. We summarize research and shared tasks in Section 4.
38
+
39
+ In this article, we briefly describe:
40
+
41
+ 1. The content and creation of CORD-19,
42
+
43
+ 2. Design decisions and challenges around creating the dataset,
44
+
45
+ 3. Research conducted on the dataset, and how shared tasks have facilitated this research, and
46
+
47
+ 4. A roadmap for CORD-19 going forward.
48
+
49
+ ## 2 Dataset
50
+
51
+ CORD-19 integrates papers and preprints from several sources (Figure 1), where a paper is defined as the base unit of published knowledge, and a preprint as an unpublished but publicly available counterpart of a paper. Throughout the rest of Section 2, we discuss papers, though the same processing steps are adopted for preprints. First, we ingest into Semantic Scholar paper metadata and documents from each source. Each paper is associated with bibliographic metadata, like title, authors, publication venue, etc, as well as unique identifiers such as a DOI, PubMed Central ID, PubMed ID, the WHO Covidence $\# {,}^{4}$ MAG identifier (Shen et al., 2018), and others. Some papers are associated with documents, the physical artifacts containing paper content; these are the familiar PDFs, XMLs, or physical print-outs we read.
52
+
53
+ For the CORD-19 effort, we generate harmonized and deduplicated metadata as well as structured full text parses of paper documents as output. We provide full text parses in cases where we have access to the paper documents, and where the documents are available under an open access license (e.g. Creative Commons (CC), ${}^{5}$ publisher-specific COVID-19 licenses, ${}^{6}$ or identified as open access through DOI lookup in the Unpaywall ${}^{7}$ database).
54
+
55
+ ### 2.1 Sources of papers
56
+
57
+ Papers in CORD-19 are sourced from PubMed Central (PMC), PubMed, the World Health Organization’s Covid-19 Database, ${}^{4}$ and preprint servers bioRxiv, medRxiv, and arXiv. The PMC Public Health Emergency Covid-19 Initiative ${}^{6}$ expanded access to COVID-19 literature by working with publishers to make coronavirus-related papers discoverable and accessible through PMC under open access license terms that allow for reuse and secondary analysis. BioRxiv and medRxiv preprints were initially provided by CZI, and are now ingested through Semantic Scholar along with all other included sources. We also work directly with publishers such as Elsevier ${}^{8}$ and Springer Nature, ${}^{9}$ to provide full text coverage of relevant papers available in their back catalog.
58
+
59
+ All papers are retrieved given the query ${}^{10}$ :
60
+
61
+ "COVID" OR "COVID-19" OR
62
+
63
+ "Coronavirus" OR "Corona virus"
64
+
65
+ OR "2019-nCoV" OR "SARS-CoV"
66
+
67
+ OR "MERS-COV" OR "Severe Acute
68
+
69
+ Respiratory Syndrome" OR "Middle
70
+
71
+ East Respiratory Syndrome"
72
+
73
+ Papers that match on these keywords in their title, abstract, or body text are included in the dataset. Query expansion is performed by PMC on these search terms, affecting the subset of papers in CORD-19 retrieved from PMC.
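+ For illustration only (this is not the production ingestion code), a keyword filter of this kind can be sketched as follows; note that the PMC-side query expansion mentioned above is not captured here:
+
+ ```python
+ import re
+
+ QUERY_TERMS = [
+     "COVID", "COVID-19", "Coronavirus", "Corona virus", "2019-nCoV", "SARS-CoV",
+     "MERS-CoV", "Severe Acute Respiratory Syndrome", "Middle East Respiratory Syndrome",
+ ]
+ QUERY_PATTERN = re.compile("|".join(re.escape(t) for t in QUERY_TERMS), re.IGNORECASE)
+
+ def matches_query(paper):
+     """True if any query term appears in the paper's title, abstract, or body text."""
+     text = " ".join(paper.get(field, "") for field in ("title", "abstract", "body_text"))
+     return bool(QUERY_PATTERN.search(text))
+
+ print(matches_query({"title": "Transmission dynamics of SARS-CoV-2", "abstract": ""}))  # True
+ ```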
74
+
75
+ ### 2.2 Processing metadata
76
+
77
+ The initial collection of sourced papers suffers from duplication and incomplete or conflicting metadata. We perform the following operations to harmonize and deduplicate all metadata:
78
+
79
+ 1. Cluster papers using paper identifiers
80
+
81
+ 2. Select canonical metadata for each cluster
82
+
83
+ 3. Filter clusters to remove unwanted entries
84
+
85
+ ---
86
+
87
+ ${}^{5}$ https://creativecommons.org/
88
+
89
+ ${}^{6}$ https://www.ncbi.nlm.nih.gov/pmc/about/covid-19/
90
+
91
+ ${}^{7}$ https://unpaywall.org/
92
+
93
+ ${}^{8}$ https://www.elsevier.com/connect/coronavirus-information-center
94
+
95
+ ${}^{9}$ https://www.springernature.com/gp/researchers/ campaigns/coronavirus
96
+
97
+ ${}^{10}$ Adapted from the Elsevier COVID-19 site ${}^{8}$
98
+
99
+ ${}^{4}$ https://www.who.int/emergencies/diseases/novel-coronavirus-2019/global-research-on-novel-coronavirus- 2019-ncov
100
+
101
+ ---
102
+
103
+ Clustering papers We cluster papers if they overlap on any of the following identifiers: {doi, pmc_id, pubmed_id, arxiv_id, who_covidence_id, mag_id}. If two papers from different sources have an identifier in common and no other identifier conflicts between them, we assign them to the same cluster. Each cluster is assigned a unique identifier CORD_UID, which persists between dataset releases. No existing identifier, such as DOI or PMC ID, is sufficient as the primary CORD-19 identifier. Some papers in PMC do not have DOIs; some papers from the WHO, publishers, or preprint servers like arXiv do not have PMC IDs or DOIs.
104
+
105
+ Occasionally, conflicts occur. For example, a paper $c$ with (doi, pmc_id, pubmed_id) identifiers $(x, \text{null}, z^{\prime})$ might share identifier $x$ with a cluster of papers $\{a, b\}$ that has identifiers $(x, y, z)$, but has a conflict $z^{\prime} \neq z$. In this case, we choose to create a new cluster $\{c\}$, containing only paper $c$.${}^{11}$
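+ A simplified sketch of this conservative policy is shown below (illustrative Python, not the actual pipeline): a paper joins an existing cluster only if it shares at least one identifier with it and conflicts on none; otherwise a new cluster is created.
+
+ ```python
+ ID_FIELDS = ["doi", "pmc_id", "pubmed_id", "arxiv_id", "who_covidence_id", "mag_id"]
+
+ def compatible(cluster_ids, paper_ids):
+     """Shared on at least one identifier and conflicting on none."""
+     shared = conflict = False
+     for f in ID_FIELDS:
+         a, b = cluster_ids.get(f), paper_ids.get(f)
+         if a is None or b is None:
+             continue
+         shared |= (a == b)
+         conflict |= (a != b)
+     return shared and not conflict
+
+ def cluster_papers(papers):
+     clusters = []  # each cluster: {"ids": {...}, "members": [...]}
+     for paper in papers:
+         ids = {f: paper.get(f) for f in ID_FIELDS if paper.get(f) is not None}
+         for cluster in clusters:
+             if compatible(cluster["ids"], ids):
+                 cluster["members"].append(paper)
+                 for f, v in ids.items():
+                     cluster["ids"].setdefault(f, v)  # fill identifiers the cluster lacked
+                 break
+         else:
+             clusters.append({"ids": dict(ids), "members": [paper]})  # conflict or no match
+     return clusters
+ ```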
106
+
107
+ Selecting canonical metadata Among each cluster, the canonical entry is selected to prioritize the availability of document files and the most permissive license. For example, between two papers with PDFs, one available under a CC license and one under a more restrictive COVID-19-specific copyright license, we select the CC-licensed paper entry as canonical. If any metadata in the canonical entry are missing, values from other members of the cluster are promoted to fill in the blanks.
108
+
109
+ Cluster filtering Some entries harvested from sources are not papers, and instead correspond to materials like tables of contents, indices, or informational documents. These entries are identified in an ad hoc manner and removed from the dataset.
110
+
111
+ ### 2.3 Processing full text
112
+
113
+ Most papers are associated with one or more PDFs. ${}^{12}$ To extract full text and bibliographies from each PDF, we use the PDF parsing pipeline created for the S2ORC dataset (Lo et al.,2020). ${}^{13}$ In (Lo et al., 2020), we introduce the S2ORC JSON format for representing scientific paper full text, which is used as the target output for paper full text in CORD-19. The pipeline involves:
114
+
115
+ 1. Parse all PDFs to TEI XML files using GROBID${}^{15}$ (Lopez, 2009)
116
+
117
+ 2. Parse all TEI XML files to S2ORC JSON
118
+
119
+ 3. Postprocess to clean up links between inline citations and bibliography entries.
120
+
121
+ We additionally parse JATS XML ${}^{16}$ files available for PMC papers using a custom parser, generating the same target S2ORC JSON format.
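+ As a minimal sketch of step 1 of the pipeline above (assuming a GROBID server running locally on its default port; this is not the CORD-19 production code), a PDF can be converted to TEI XML via GROBID's REST API:
+
+ ```python
+ import requests
+
+ def pdf_to_tei(pdf_path, grobid_url="http://localhost:8070"):
+     """Send one PDF to a local GROBID server and return the TEI XML string."""
+     with open(pdf_path, "rb") as pdf_file:
+         response = requests.post(
+             f"{grobid_url}/api/processFulltextDocument",
+             files={"input": pdf_file},
+             timeout=120,
+         )
+     response.raise_for_status()
+     return response.text  # TEI XML, subsequently converted to S2ORC JSON (step 2)
+ ```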
122
+
123
+ This creates two sets of full text JSON parses associated with the papers in the collection, one set originating from PDFs (available from more sources), and one set originating from JATS XML (available only for PMC papers). Each PDF parse has an associated SHA, the 40-digit SHA-1 of the associated PDF file, while each XML parse is named using its associated PMC ID. Around 48% of CORD-19 papers have an associated PDF parse, and around ${37}\%$ have an XML parse, with the latter nearly a subset of the former. Most PDFs (>90%) are successfully parsed. Around 2.6% of CORD-19 papers are associated with multiple PDF SHA, due to a combination of paper clustering and the existence of supplementary PDF files.
124
+
125
+ ### 2.4 Table parsing
126
+
127
+ Since the May 12, 2020 release of CORD-19, we also release selected HTML table parses. Tables contain important numeric and descriptive information such as sample sizes and results, which are the targets of many information extraction systems. A separate PDF table processing pipeline is used, consisting of table extraction and table understanding. Table extraction is based on the Smart Document Understanding (SDU) capability included in IBM Watson Discovery. ${}^{17}$ SDU converts a given PDF document from its native binary representation into a text-based representation like HTML which includes both identified document structures (e.g., tables, section headings, lists) and formatting information (e.g. positions for extracted text). Table understanding (also part of Watson Discovery) then annotates the extracted tables with additional semantic information, such as column and row headers and table captions. We leverage the Global Table Extractor (GTE) (Zheng et al., 2020), which uses a specialized object detection and clustering technique to extract table bounding boxes and structures.
128
+
129
+ ---
130
+
131
+ ${}^{11}$ This is a conservative clustering policy in which any meta-data conflict prohibits clustering. An alternative policy would be to cluster if any identifier matches, under which $a, b$ , and $c$ would form one cluster with identifiers $\left( {x, y,\left\lbrack {z,{z}^{\prime }}\right\rbrack }\right)$ .
132
+
133
+ ${}^{12}$ PMC papers can have multiple associated PDFs per paper, separating the main text from supplementary materials.
134
+
135
+ ${}^{13}$ One major difference in full text parsing for CORD-19 is that we do not use ScienceParse, ${}^{14}$ as we always derive this metadata from the sources directly.
136
+
137
+ ${}^{14}$ https://github.com/allenai/science-parse
138
+
139
+ ${}^{15}$ https://github.com/kermitt2/grobid
140
+
141
+ ${}^{16}$ https://jats.nlm.nih.gov/
142
+
143
+ ${}^{17}$ https://www.ibm.com/cloud/watson-discovery
144
+
145
+ ---
146
+
147
+ ![01963db6-3e6c-788a-a29f-01a4c377ad7d_3_190_168_617_486_0.jpg](images/01963db6-3e6c-788a-a29f-01a4c377ad7d_3_190_168_617_486_0.jpg)
148
+
149
+ Figure 2: The distribution of papers per year in CORD-19. A spike in publications occurs in 2020 in response to COVID-19.
150
+
151
+ All PDFs are processed through this table extraction and understanding pipeline. If the Jaccard similarity of the table captions from the table parses and CORD-19 parses is above 0.9 , we insert the HTML of the matched table into the full text JSON. We extract ${188}\mathrm{\;K}$ tables from ${54}\mathrm{\;K}$ documents, of which ${33}\mathrm{\;K}$ tables are successfully matched to tables in ${19}\mathrm{\;K}$ (around ${25}\%$ ) full text documents in CORD-19. Based on preliminary error analysis, we find that match failures are primarily due to caption mismatches between the two parse schemes. Thus, we plan to explore alternate matching functions, potentially leveraging table content and document location as additional features. See Appendix A for example table parses.
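+ The caption-matching step can be sketched as follows (token-level Jaccard similarity over captions; the exact tokenisation and matching function used in the pipeline are not specified here, so this is illustrative only):
+
+ ```python
+ def jaccard(caption_a, caption_b):
+     """Token-level Jaccard similarity between two table captions."""
+     tokens_a, tokens_b = set(caption_a.lower().split()), set(caption_b.lower().split())
+     if not tokens_a or not tokens_b:
+         return 0.0
+     return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)
+
+ def match_table(extracted_caption, cord19_captions, threshold=0.9):
+     """Return the best-matching CORD-19 caption if it clears the threshold, else None."""
+     scored = [(jaccard(extracted_caption, c), c) for c in cord19_captions]
+     best_score, best_caption = max(scored, default=(0.0, None))
+     return best_caption if best_score >= threshold else None
+ ```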
152
+
153
+ ### 2.5 Dataset contents
154
+
155
+ CORD-19 has grown rapidly, now consisting of over ${140}\mathrm{\;K}$ papers with over ${72}\mathrm{\;K}$ full texts. Over ${47}\mathrm{\;K}$ papers and $7\mathrm{\;K}$ preprints on COVID-19 and coronaviruses have been released since the start of 2020, comprising nearly 40% of papers in the dataset.
156
+
157
+ Classification of CORD-19 papers to Microsoft Academic Graph (MAG) (Wang et al., 2019, 2020) fields of study (Shen et al., 2018) indicates that the dataset consists predominantly of papers in Medicine (55%), Biology (31%), and Chemistry (3%), which together constitute almost ${90}\%$ of the corpus. ${}^{18}$ A breakdown of the most common MAG subfields (L1 fields of study) represented in CORD-19 is given in Table 1.
158
+
159
+ <table><tr><td>Subfield</td><td>Count</td><td>$\%$ of corpus</td></tr><tr><td>Virology</td><td>29567</td><td>25.5%</td></tr><tr><td>Immunology</td><td>15954</td><td>13.8%</td></tr><tr><td>Surgery</td><td>15667</td><td>13.5%</td></tr><tr><td>Internal medicine</td><td>12045</td><td>10.4%</td></tr><tr><td>Intensive care medicine</td><td>10624</td><td>9.2%</td></tr><tr><td>Molecular biology</td><td>7268</td><td>6.3%</td></tr><tr><td>Pathology</td><td>6611</td><td>5.7%</td></tr><tr><td>Genetics</td><td>5231</td><td>4.5%</td></tr><tr><td>Other</td><td>12997</td><td>11.2%</td></tr></table>
160
+
161
+ Table 1: MAG subfield of study for CORD-19 papers.
162
+
163
+ Figure 2 shows the distribution of CORD-19 papers by date of publication. Coronavirus publications increased during and following the SARS and MERS epidemics, but the number of papers published in the early months of 2020 exploded in response to the COVID-19 epidemic. Using author affiliations in MAG, we identify the countries from which the research in CORD-19 is conducted. Large proportions of CORD-19 papers are associated with institutions based in the Americas (around ${48}\mathrm{\;K}$ papers), Europe (over ${35}\mathrm{\;K}$ papers), and Asia (over ${30}\mathrm{\;K}$ papers).
164
+
165
+ ## 3 Design decisions & challenges
166
+
167
+ A number of challenges come into play in the creation of CORD-19. We summarize the primary design requirements of the dataset, along with challenges implicit within each requirement:
168
+
169
+ Up-to-date Hundreds of new publications on COVID-19 are released every day, and a dataset like CORD-19 can quickly become irrelevant without regular updates. CORD-19 has been updated daily since May 26. A processing pipeline that produces consistent results day to day is vital to maintaining a changing dataset. That is, the metadata and full text parsing results must be reproducible, identifiers must be persistent between releases, and changes or new features should ideally be compatible with previous versions of the dataset.
170
+
171
+ Handles data from multiple sources Papers from different sources must be integrated and harmonized. Each source has its own metadata format, which must be converted to the CORD-19 format, while addressing any missing or extraneous fields. The processing pipeline must also be flexible to adding new sources.
172
+
173
+ ---
174
+
175
+ ${}^{18}$ MAG identifier mappings are provided as a supplement
176
+
177
+ on the CORD-19 landing page.
178
+
179
+ ---
180
+
181
+ Clean canonical metadata Because of the diversity of paper sources, duplication is unavoidable. Once paper metadata from each source is cleaned and organized into CORD-19 format, we apply the deduplication logic described in Section 2.2 to identify similar paper entries from different sources. We apply a conservative clustering algorithm, combining papers only when they have shared identifiers but no conflicts between any particular class of identifiers. We justify this because it is less harmful to retain a few duplicate papers than to remove a document that is potentially unique and useful.
182
+
183
+ Machine readable full text To provide accessible and canonical structured full text, we parse content from PDFs and associated paper documents. The full text is represented in S2ORC JSON format (Lo et al., 2020), a schema designed to preserve most relevant paper structures such as paragraph breaks, section headers, inline references, and citations. S2ORC JSON is simple to use for many NLP tasks, where character-level indices are often employed for annotation of relevant entities or spans. The text and annotation representations in S2ORC share similarities with BioC (Comeau et al., 2019), a JSON schema introduced by the BioCreative community for shareable annotations, with both formats leveraging the flexibility of character-based span annotations. However, S2ORC JSON also provides a schema for representing other components of a paper, such as its metadata fields, bibliography entries, and reference objects for figures, tables, and equations. We leverage this flexible and fairly complete representation of S2ORC JSON for CORD-19. We recognize that converting from PDF or XML to JSON is lossy. However, the benefits of a standard structured format, and the ability to reuse and share annotations made on top of that format, have been critical to the success of CORD-19.
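+ For example, a consumer of the full text JSON can recover an inline citation mention directly from the character offsets stored on each paragraph (field names follow the S2ORC JSON convention; the file path is hypothetical):
+
+ ```python
+ import json
+
+ with open("example_paper.json") as f:   # hypothetical path to one CORD-19 full text parse
+     paper = json.load(f)
+
+ paragraph = paper["body_text"][0]
+ for span in paragraph.get("cite_spans", []):
+     mention = paragraph["text"][span["start"]:span["end"]]   # character-level indices
+     print(mention, "->", span.get("ref_id"))                 # e.g. "[3]" -> a bibliography entry id
+ ```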
184
+
185
+ Observes copyright restrictions Papers in CORD-19 and academic papers more broadly are made available under a variety of copyright licenses. These licenses can restrict or limit the abilities of organizations such as AI2 from redistributing their content freely. Although much of the COVID-19 literature has been made open access by publishers, the provisions on these open access licenses differ greatly across papers. Additionally, many open access licenses grant the ability to read, or "consume" the paper, but may be restrictive in other ways, for example, by not allowing republication of a paper or its redistribution for commercial purposes. The curator of a dataset like CORD- 19 must pass on best-to-our-knowledge licensing information to the end user.
186
+
187
+ Given a query:
188
+
189
+ Does hypertension increase the risks associated with Covid-19?
190
+
191
+ ![01963db6-3e6c-788a-a29f-01a4c377ad7d_4_849_280_612_277_0.jpg](images/01963db6-3e6c-788a-a29f-01a4c377ad7d_4_849_280_612_277_0.jpg)
192
+
193
+ Figure 3: An example information retrieval and extraction system using CORD-19: Given an input query, the system identifies relevant papers (yellow highlighted rows) and extracts text snippets from the full text JSONs as supporting evidence.
194
+
195
+ ## 4 Research directions
196
+
197
+ We provide a survey of various ways researchers have made use of CORD-19. We organize these into four categories: (i) direct usage by clinicians and clinical researchers (§4.1), (ii) tools and systems to assist clinicians (§4.2), (iii) research to support further text mining and NLP research (§4.3), and (iv) shared tasks and competitions (§4.4).
198
+
199
+ ### 4.1 Usage by clinical researchers
200
+
201
+ CORD-19 has been used by medical experts as a paper collection for conducting systematic reviews. These reviews address questions about COVID-19, including infection and mortality rates in different demographics (Han et al., 2020), symptoms of the disease (Parasa et al., 2020), identifying suitable drugs for repurposing (Sadegh et al., 2020), management policies (Yaacoub et al., 2020), and interactions with other diseases (Crisan-Dabija et al., 2020; Popa et al., 2020).
202
+
203
+ ### 4.2 Tools for clinicians
204
+
205
+ Challenges for clinicians and clinical researchers during the current epidemic include (i) keeping up to date with recent papers about COVID-19, (ii) identifying useful papers from historical coronavirus literature, (iii) extracting useful information from the literature, and (iv) synthesizing knowledge from the literature. To facilitate solutions to these challenges, dozens of tools and systems over CORD-19 have already been developed. Most combine elements of text-based information retrieval and extraction, as illustrated in Figure 3. We have compiled a list of these efforts on the CORD-19 public GitHub repository ${}^{19}$ and highlight some systems in Table 2. ${}^{20}$
206
+
207
+ ### 4.3 Text mining and NLP research
208
+
209
+ The following is a summary of resources released by the NLP community on top of CORD-19 to support other research activities.
210
+
211
+ Information extraction To support extractive systems, NER and entity linking of biomedical entities can be useful. NER and linking can be performed using NLP toolkits like ScispaCy (Neumann et al., 2019) or language models like BioBERT-base (Lee et al., 2019) and SciBERT-base (Beltagy et al., 2019) finetuned on biomedical NER datasets. Wang et al. (2020) augments CORD-19 full text with entity mentions predicted from several techniques, including weak supervision using the NLM's Unified Medical Language System (UMLS) Metathesaurus (Bodenreider, 2004).
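+ As a small usage sketch (assuming ScispaCy and its "en_core_sci_sm" model package are installed), biomedical entity mentions and their character offsets can be extracted as follows:
+
+ ```python
+ import spacy
+
+ nlp = spacy.load("en_core_sci_sm")   # ScispaCy biomedical model (installed separately)
+ doc = nlp("Hypertension may increase the severity of COVID-19 in older patients.")
+ for ent in doc.ents:
+     print(ent.text, ent.start_char, ent.end_char)
+ ```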
212
+
213
+ Text classification Some efforts focus on extracting sentences or passages of interest. For example, Liang and Xie (2020) uses BERT (Devlin et al., 2019) to extract sentences from CORD-19 that contain COVID-19-related radiological findings.
214
+
215
+ Pretrained model weights BioBERT and SciBERT have been popular pretrained LMs for COVID-19-related tasks. DeepSet has released a BERT-base model pretrained on CORD-19. ${}^{21}$ SPECTER (Cohan et al., 2020) paper embeddings computed using paper titles and abstracts are being released with each CORD-19 update. SeVeN relation embeddings (Espinosa-Anke and Schockaert, 2018) between word pairs have also been made available for CORD-19. ${}^{22}$
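+ Such weights can be loaded with standard libraries; for instance, a minimal sketch using the Hugging Face transformers API and the DeepSet checkpoint referenced above (availability of the checkpoint is assumed):
+
+ ```python
+ from transformers import AutoModel, AutoTokenizer
+
+ name = "deepset/covid_bert_base"                 # checkpoint named in the footnote above
+ tokenizer = AutoTokenizer.from_pretrained(name)
+ model = AutoModel.from_pretrained(name)
+
+ inputs = tokenizer("What is the incubation period of SARS-CoV-2?", return_tensors="pt")
+ outputs = model(**inputs)                        # outputs.last_hidden_state: token embeddings
+ ```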
216
+
217
+ Knowledge graphs The Covid Graph project ${}^{23}$ releases a COVID-19 knowledge graph built from mining several public data sources, including CORD-19, and is perhaps the largest current initiative in this space. Ahamed and Samad (2020) rely on entity co-occurrences in CORD-19 to construct a graph that enables centrality-based ranking of drugs, pathogens, and biomolecules.
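+ The co-occurrence-and-centrality idea can be sketched in a few lines (illustrative only, not the cited systems' implementations):
+
+ ```python
+ import itertools
+ import networkx as nx
+
+ def cooccurrence_graph(entity_sets):
+     """Build a weighted co-occurrence graph from per-paper sets of entity mentions."""
+     graph = nx.Graph()
+     for entities in entity_sets:
+         for a, b in itertools.combinations(sorted(entities), 2):
+             weight = graph.get_edge_data(a, b, default={}).get("weight", 0) + 1
+             graph.add_edge(a, b, weight=weight)
+     return graph
+
+ graph = cooccurrence_graph([{"remdesivir", "SARS-CoV-2"}, {"ACE2", "SARS-CoV-2"}])
+ ranking = sorted(nx.degree_centrality(graph).items(), key=lambda kv: -kv[1])
+ print(ranking[0])   # most central node, here ("SARS-CoV-2", 1.0)
+ ```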
218
+
219
+ ### 4.4 Competitions and Shared Tasks
220
+
221
+ The adoption of CORD-19 and the proliferation of text mining and NLP systems built on top of the dataset are supported by several COVID-19-related competitions and shared tasks.
222
+
223
+ #### 4.4.1 Kaggle
224
+
225
+ Kaggle hosts the CORD-19 Research Challenge, ${}^{24}$ a text-mining challenge that tasks participants with extracting answers to key scientific questions about COVID-19 from the papers in the CORD-19 dataset. Round 1 was initiated with a set of open-ended questions, e.g., What is known about transmission, incubation, and environmental stability? and What do we know about COVID-19 risk factors?
226
+
227
+ More than 500 teams participated in Round 1 of the Kaggle competition. Feedback from medical experts during Round 1 identified that the most useful contributions took the form of article summary tables. Round 2 subsequently focused on this task of table completion, and resulted in 100 additional submissions. A unique tabular schema is defined for each question, and answers are collected from across different automated extractions. For example, extractions for risk factors should include disease severity and fatality metrics, while extractions for incubation should include time ranges. Sufficient knowledge of COVID-19 is necessary to define these schema, to understand which fields are important to include (and exclude), and also to perform error-checking and manual curation.
228
+
229
+ #### 4.4.2 TREC
230
+
231
+ The TREC-COVID ${}^{25}$ shared task (Roberts et al., 2020; Voorhees et al., 2020) assesses systems on their ability to rank papers in CORD-19 based on their relevance to COVID-19-related topics. Topics are sourced from MedlinePlus searches, Twitter conversations, library searches at OHSU, as well as from direct conversations with researchers, reflecting actual queries made by the community. To emulate real-world surge in publications and rapidly-changing information needs, the shared task is organized in multiple rounds. Each round uses a specific version of CORD-19, has newly added topics, and gives participants one week to submit per-topic document rankings for judgment. Round 1 topics included more general questions such as What is the origin of COVID-19? and What are the initial symptoms of COVID-19? while Round 3 topics have become more focused, e.g., What are the observed mutations in the SARS-CoV-2 genome? and What are the longer-term complications of those who recover from COVID-19? Around 60 medical domain experts, including indexers from NLM and medical students from OHSU and UTHealth, are involved in providing gold rankings for evaluation. TREC-COVID opened using the April 1st CORD-19 version and received submissions from over 55 participating teams.
232
+
233
+ ---
234
+
235
+ ${}^{19}$ https://github.com/allenai/cord19
236
+
237
+ ${}^{20}$ There are many Search and QA systems to survey. We have chosen to highlight the systems that were made publicly-available within a few weeks of the CORD-19 initial release.
238
+
239
+ ${}^{21}$ https://huggingface.co/deepset/covid_bert_base
240
+
241
+ ${}^{22}$ https://github.com/luisespinosaanke/cord-19-seven
242
+
243
+ ${}^{23}$ https://covidgraph.org/
244
+
245
+ ${}^{24}$ https://www.kaggle.com/allen-institute-for-ai/CORD- 19-research-challenge
246
+
247
+ ${}^{25}$ https://ir.nist.gov/covidSubmit/index.html
248
+
249
+ ---
250
+
251
+ <table><tr><td>Task</td><td>Project</td><td>Link</td><td>Description</td></tr><tr><td rowspan="4">Search and discovery</td><td>NEURAL Covidex</td><td>https://covidex.ai/</td><td>Uses a T5-base (Raffel et al., 2019) unsupervised reranker on BM25 (Jones et al., 2000)</td></tr><tr><td>CovidScholar</td><td>https://covidscholar.org/</td><td>Adapts Weston et al. (2019) system for entity- centric queries</td></tr><tr><td>KDCOVID</td><td>http://kdcovid.nl/about.html</td><td>Uses BioSentVec (Chen et al., 2019) similarity to identify relevant sentences</td></tr><tr><td>SPIKE-CORD</td><td>https://spike.covid- 19.apps.allenai.org</td><td>Enables users to define "regular expression"-like queries to directly search over full text</td></tr><tr><td rowspan="2">Question answering</td><td>COVIDASK</td><td>https://covidask.korea.ac.kr/</td><td>Adapts Seo et al. (2019) using BioASQ challenge (Task B) dataset (Tsatsaronis et al., 2015)</td></tr><tr><td>AUEB</td><td>http://cslab241.cs.aueb.gr:5000/</td><td>Adapts McDonald et al. (2018) using Tsatsaronis et al. (2015)</td></tr><tr><td>Summariz- ation</td><td>Vespa</td><td>https://cord19.vespa.ai/</td><td>Generates summaries of paper abstracts using T5 (Raffel et al., 2019)</td></tr><tr><td>Recommend- ation</td><td>Vespa</td><td>https://cord19.vespa.ai/</td><td>Recommends "similar papers" using Sentence- BERT (Reimers and Gurevych, 2019) and SPECTER embeddings (Cohan et al., 2020)</td></tr><tr><td>$\mathbf{{Entailment}}$</td><td>COVID papers browser</td><td>https://github.com/gsarti/covid- papers-browser</td><td>Similar to KDCOVID, but uses embeddings from BERT models trained on NLI datasets</td></tr><tr><td>Claim verification</td><td>SciFact</td><td>https://scifact.apps.allenai.org</td><td>Uses RoBERTa-large (Liu et al., 2019) to find Sup- port/Refute evidence for COVID-19 claims</td></tr><tr><td>Assistive lit. review</td><td>ASReview</td><td>https://github.com/asreview/ asreview-covid19</td><td>Active learning system with a CORD-19 plugin for identifying papers for literature reviews</td></tr><tr><td>Augmented reading</td><td>Sinequa</td><td>https://covidsearch.sinequa.com/ app/covid-search/</td><td>In-browser paper reader with entity highlighting on PDFs</td></tr><tr><td>Visualization</td><td>SciSight</td><td>https://scisight.apps.allenai.org</td><td>Network visualizations for browsing research groups working on COVID-19</td></tr></table>
252
+
253
+ Table 2: Publicly-available tools and systems for medical experts using CORD-19.
254
+
255
+ ## 5 Discussion
256
+
257
+ Several hundred new papers on COVID-19 are now being published every day. Automated methods are needed to analyze and synthesize information over this large quantity of content. The computing community has risen to the occasion, but it is clear that there is a critical need for better infrastructure to incorporate human judgments in the loop. Extractions need expert vetting, and search engines and systems must be designed to serve users.
258
+
259
+ Successful engagement and usage of CORD-19 speaks to our ability to bridge computing and biomedical communities over a common, global cause. From early results of the Kaggle challenge, we have learned which formats are conducive to collaboration, and which questions are the most urgent to answer. However, significant work remains in determining (i) which methods are best to assist textual discovery over the literature, (ii) how best to involve expert curators in the pipeline, and (iii) which extracted results convert to successful COVID-19 treatments and management policies. Shared tasks and challenges, as well as continued analysis and synthesis of feedback, will hopefully provide answers to these outstanding questions.
260
+
261
+ Since the initial release of CORD-19, we have implemented several new features based on community feedback, such as the inclusion of unique identifiers for papers, table parses, more sources, and daily updates. Most substantial outstanding feature requests have been implemented or addressed at this time. We will continue to update the dataset with more sources of papers and newly published literature as resources permit.
262
+
263
+ ### 5.1 Limitations
264
+
265
+ Though we aim to be comprehensive, CORD-19 does not cover many relevant scientific documents on COVID-19. We have restricted ourselves to research papers and preprints, and do not incorporate other types of documents, such as technical reports, white papers, informational publications by governmental bodies, and more. Including these documents is outside the current scope of CORD- 19, but we encourage other groups to curate and publish such datasets.
266
+
267
+ Within the scope of scientific papers, CORD-19 is also incomplete, though we continue to prioritize the addition of new sources. This has motivated the creation of other corpora supporting COVID-19 NLP, such as LitCovid (Chen et al., 2020), which provides complementary materials to CORD-19 derived from PubMed. Though we have since added PubMed as a source of papers in CORD-19, there are other domains such as the social sciences that are not currently represented, and we hope to incorporate these works in future updates.
268
+
269
+ We also note the shortage of foreign language papers in CORD-19, especially Chinese language papers produced during the early stages of the epidemic. These papers may be useful to many researchers, and we are working with collaborators to provide them as supplementary data. However, challenges in both sourcing and licensing these papers for re-publication are additional hurdles.
270
+
271
+ ### 5.2 Call to action
272
+
273
+ Though the full text of many scientific papers is available to researchers through CORD-19, a number of challenges prevent easy application of NLP and text mining techniques to these papers. First, the primary distribution format of scientific papers - PDF - is not amenable to text processing. The PDF file format is designed to share electronic documents rendered faithfully for reading and printing, and mixes visual with semantic information. Significant effort is needed to coerce PDF into a format more amenable to text mining, such as JATS XML, ${}^{26}$ BioC (Comeau et al., 2019), or S2ORC JSON (Lo et al., 2020), which is used in CORD-19. Though there is substantial work in this domain, we can still benefit from better PDF parsing tools for scientific documents. As a complement, scientific papers should also be made available in a structured format like JSON, XML, or HTML.
274
+
275
+ Second, there is a clear need for more scientific content to be made accessible to researchers. Some publishers have made COVID-19 papers openly available during this time, but both the duration and scope of these epidemic-specific licenses are unclear. Papers describing research in related areas (e.g., on other infectious diseases, or relevant biological pathways) have also not been made open access, and are therefore unavailable in CORD-19 or otherwise. Securing release rights for papers not yet in CORD-19 but relevant for COVID-19 research is a significant portion of future work, led by the PMC COVID-19 Initiative. ${}^{6}$
276
+
277
+ Lastly, there is no standard format for representing paper metadata. Existing schemas like the JATS XML NISO standard ${}^{26}$ or library science standards like BIBFRAME ${}^{27}$ or Dublin Core ${}^{28}$ have been adopted to represent paper metadata. However, these standards can be too coarse-grained to capture all necessary paper metadata elements, or may lack a strict schema, causing representations to vary greatly across publishers who use them. To improve metadata coherence across sources, the community must define and agree upon an appropriate standard of representation.
278
+
279
+ ## Summary
280
+
281
+ This project offers a paradigm of how the community can use machine learning to advance scientific research. By allowing computational access to the papers in CORD-19, we increase our ability to perform discovery over these texts. We hope the dataset and projects built on the dataset will serve as a template for future work in this area. We also believe there are substantial improvements that can be made in the ways we publish, share, and work with scientific papers. We offer a few suggestions that could dramatically increase community productivity, reduce redundant effort, and result in better discovery and understanding of the scientific literature.
282
+
283
+ ---
284
+
285
+ ${}^{26}$ https://www.niso.org/publications/z3996-2019-jats
286
+
287
+ ${}^{27}$ https://www.loc.gov/bibframe/
288
+
289
+ ${}^{28}$ https://www.dublincore.org/specifications/dublin-core/dces/
290
+
291
+ ---
292
+
293
+ Through CORD-19, we have learned the importance of bringing together different communities around the same scientific cause. It is clearer than ever that automated text analysis is not the solution, but rather one tool among many that can be directed to combat the COVID-19 epidemic. Crucially, the systems and tools we build must be designed to serve a use case, whether that's improving information retrieval for clinicians and medical professionals, summarizing the conclusions of the latest observational research or clinical trials, or converting these learnings to a format that is easily digestible by healthcare consumers.
294
+
295
+ ## Acknowledgments
296
+
297
+ This work was supported in part by NSF Convergence Accelerator award 1936940, ONR grant N00014-18-1-2193, and the University of Washington WRF/Cable Professorship.
298
+
299
+ We thank The White House Office of Science and Technology Policy, the National Library of Medicine at the National Institutes of Health, Microsoft Research, Chan Zuckerberg Initiative, and Georgetown University's Center for Security and Emerging Technology for co-organizing the CORD-19 initiative. We thank Michael Kratsios, the Chief Technology Officer of the United States, and The White House Office of Science and Technology Policy for providing the initial seed set of questions for the Kaggle CORD-19 research challenge.
300
+
301
+ We thank Kaggle for coordinating the CORD- 19 research challenge. In particular, we acknowledge Anthony Goldbloom for providing feedback on CORD-19 and for involving us in discussions around the Kaggle literature review tables project. We thank the National Institute of Standards and Technology (NIST), National Library of Medicine (NLM), Oregon Health and Science University (OHSU), and University of Texas Health Science Center at Houston (UTHealth) for co-organizing the TREC-COVID shared task. In particular, we thank our co-organizers - Steven Bedrick (OHSU), Aaron Cohen (OHSU), Dina Demner-Fushman (NLM), William Hersh (OHSU), Kirk Roberts (UTHealth), Ian Soboroff (NIST), and Ellen Voorhees (NIST) - for feedback on the design of CORD-19.
302
+
303
+ We acknowledge our partners at Elsevier and Springer Nature for providing additional full text coverage of papers included in the corpus.
304
+
305
+ We thank Bryan Newbold from the Internet Archive for providing feedback on data quality and helpful comments on early drafts of the manuscript.
306
+
307
+ We thank Rok Jun Lee, Hrishikesh Sathe, Dhaval Sonawane and Sudarshan Thitte from IBM Watson AI for their help in table parsing.
308
+
309
+ We also acknowledge and thank our collaborators from AI2: Paul Sayre and Sam Skjonsberg for providing front-end support for CORD-19 and TREC-COVID, Michael Schmitz for setting up the CORD-19 Discourse community forums, Adriana Dunn for creating webpage content and marketing, Linda Wagner for collecting community feedback, Jonathan Borchardt, Doug Downey, Tom Hope, Daniel King, and Gabriel Stanovsky for contributing supplemental data to the CORD-19 effort, Alex Schokking for his work on the Semantic Scholar COVID-19 Research Feed, Darrell Plessas for technical support, and Carissa Schoenick for help with public relations.
310
+
311
+ ## References
312
+
313
+ Sabber Ahamed and Manar D. Samad. 2020. Information mining for covid-19 research from a large volume of scientific literature. ArXiv, abs/2004.02085.
314
+
315
+ Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615- 3620, Hong Kong, China. Association for Computational Linguistics.
316
+
317
+ Olivier Bodenreider. 2004. The unified medical language system (umls): integrating biomedical terminology. Nucleic acids research, 32 Database issue:D267-70.
318
+
319
+ Q. Chen, Y. Peng, and Z. Lu. 2019. Biosentvec: creating sentence embeddings for biomedical texts. In 2019 IEEE International Conference on Healthcare Informatics (ICHI), pages 1-5.
320
+
321
+ Qingyu Chen, Alexis Allot, and Zhiyong Lu. 2020. Keep up with the latest coronavirus research. Nature, 579:193 - 193.
322
+
323
+ Arman Cohan, Sergey Feldman, Iz Beltagy, Doug Downey, and Daniel S. Weld. 2020. Specter: Document-level representation learning using citation-informed transformers. In ACL.
324
+
325
+ Donald C. Comeau, Chih-Hsuan Wei, Rezarta Islamaj Dogan, and Zhiyong Lu. 2019. Pmc text mining subset in bioc: about three million full-text articles and growing. Bioinformatics.
326
+
327
+ Radu Crisan-Dabija, Cristina Grigorescu, Cristina Alice Pavel, Bogdan Artene, Iolanda Valentina Popa, Andrei Cernomaz, and Alexandru Burlacu. 2020. Tuberculosis and covid-19 in 2020: lessons from the past viral outbreaks and possible future outcomes. medRxiv.
328
+
329
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
330
+
331
+ Luis Espinosa-Anke and Steven Schockaert. 2018. SeVeN: Augmenting word embeddings with unsupervised relation vectors. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2653-2665, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
332
+
333
+ M. Fathi, Khatoon Vakili, Fatemeh Sayehmiri, Abdolrahman Mohamadkhani, M. Hajiesmaeili, Mostafa Rezaei-Tavirani, and Owrang Eilami. 2020. Prognostic value of comorbidity for severity of covid-19: A systematic review and meta-analysis study. In medRxiv.
334
+
335
+ Yang Han, Victor O.K. Li, Jacqueline C.K. Lam, Peiyang Guo, Ruiqiao Bai, and Wilton W.T. Fok. 2020. Who is more susceptible to covid-19 infection and mortality in the states? medRxiv.
336
+
337
+ Torsten Hothorn, Marie-Charlotte Bopp, H. F. Guen-thard, Olivia Keiser, Michel Roelens, Caroline E Weibull, and Michael J Crowther. 2020. Relative coronavirus disease 2019 mortality: A swiss population-based study. In medRxiv.
338
+
339
+ Karen Spärck Jones, Steve Walker, and Stephen E. Robertson. 2000. A probabilistic model of information retrieval: development and comparative experiments - part 1. Inf. Process. Manag., 36:779-808.
340
+
341
+ Shubhi Kaushik, Scott I. Aydin, Kim R. Derespina, Pre-rna Bansal, Shanna Kowalsky, Rebecca Trachtman, Jennifer K. Gillen, Michelle M. Perez, Sara H. Sosh-nick, Edward E. Conway, Asher Bercow, Howard S. Seiden, Robert H Pass, Henry Michael Ushay, George Ofori-Amanfo, and Shivanand S Medar. 2020. Multisystem inflammatory syndrome in children (mis-c) associated with sars-cov-2 infection: A multi-institutional study from new york city. The Journal of Pediatrics.
342
+
343
+ Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.
344
+
345
+ Yuxiao Liang and Pengtao Xie. 2020. Identifying radiological findings related to covid-19 from medical literature. ArXiv, abs/2004.01862.
346
+
347
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach.
348
+
349
+ Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Daniel S. Weld. 2020. S2ORC: The Semantic Scholar Open Research Corpus. In Proceedings of ACL.
350
+
351
+ Patrice Lopez. 2009. Grobid: Combining automatic bibliographic data recognition and term extraction for scholarship publications. In ECDL.
352
+
353
+ Luis López-Fando, Paulina Bueno, David Sánchez Car-racedo, Márcio Augusto Averbeck, David Manuel Castro-Díaz, emmanuel chartier-kastler, Francisco Cruz, Roger R Dmochowski, Enrico Finazzi-Agrò, Sakineh Hajebrahimi, John Heesakkers, George R Kasyan, Tufan Tarcan, Benoît Peyronnet, Mauricio Plata, Bárbara Padilla-Fernández, Frank Van der Aa, Salvador Arlandis, and Hashim Hashim. 2020. Management of female and functional urology patients during the covid pandemic. European Urology Focus.
354
+
355
+ Ryan McDonald, Georgios-Ioannis Brokos, and Ion Androutsopoulos. 2018. Deep relevance ranking using enhanced document-query interactions. In EMNLP.
356
+
357
+ Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. ScispaCy: Fast and robust models for biomedical natural language processing. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 319-327, Florence, Italy. Association for Computational Linguistics.
358
+
359
+ Sravanthi Parasa, Madhav Desai, Viveksandeep Thogu-luva Chandrasekar, Harsh Patel, Kevin Kennedy, Thomas Rösch, Marco Spadaccini, Matteo Colombo, Roberto Gabbiadini, Everson L. A. Artifon, Alessandro Repici, and Prateek Sharma. 2020. Prevalence of gastrointestinal symptoms and fecal viral shedding in patients with coronavirus disease 2019. JAMA Network Open, 3.
360
+
361
+ Iolanda Valentina Popa, Mircea Diculescu, Catalina Mihai, Cristina Cijevschi-Prelipcean, and Alexandru Burlacu. 2020. Covid-19 and inflammatory bowel diseases: risk assessment, shared molecular pathways and therapeutic challenges. medRxiv.
362
+
363
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer.
364
+
365
+ Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.
366
+
367
+ Kirk Roberts, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, Kyle Lo, Ian Soboroff, Ellen Voorhees, Lucy Lu Wang, and William R Hersh. 2020. TREC-COVID: Rationale and Structure of an Information Retrieval Shared Task for COVID- 19. Journal of the American Medical Informatics Association. Ocaa091.
368
+
369
+ Sepideh Sadegh, Julian Matschinske, David B. Blumenthal, Gihanna Galindez, Tim Kacprowski, Markus List, Reza Nasirigerdeh, Mhaned Oubounyt, Andreas Pichlmair, Tim Daniel Rose, Marisol Salgado-Albarrán, Julian Späth, Alexey Stukalov, Nina K. Wenke, Kevin Yuan, Josch K. Pauling, and Jan Baumbach. 2020. Exploring the sars-cov-2 virus-host-drug interactome for drug repurposing.
370
+
371
+ Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4430-4441, Florence, Italy. Association for Computational Linguistics.
372
+
373
+ Zhihong Shen, Hao Ma, and Kuansan Wang. 2018. A web-scale system for scientific knowledge exploration. In Proceedings of ACL 2018, System Demonstrations, pages 87-92, Melbourne, Australia. Association for Computational Linguistics.
374
+
375
+ Silvia Stringhini, Ania Wisniak, Giovanni Piumatti, Andrew S. Azman, Stephen A Lauer, Hélène Baysson, David De Ridder, Dusan Petrovic, Stephanie Schrempft, Kailing Marcus, Sabine Yerly, Isabelle Arm Vernez, Olivia Keiser, Samia Hurst, Klara M Posfay-Barbe, Didier Trono, Didier Pit-tet, Laurent Gétaz, François Chappuis, Isabella Eck-erle, Nicolas Vuilleumier, Benjamin Meyer, Antoine Flahault, Laurent Kaiser, and Idris Guessous. 2020. Seroprevalence of anti-sars-cov-2 igg antibodies in geneva, switzerland (serocov-pop): a population-based study. Lancet (London, England).
376
+
377
+ George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R. Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopou-los, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Artières, Axel-Cyrille Ngonga Ngomo, Norman Heino, Éric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015. An overview of the bioasq large-scale biomedical semantic indexing and question answering competition. In ${BMC}$ Bioinformatics.
378
+
379
+ Ellen Voorhees, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, William R Hersh, Kyle Lo, Kirk Roberts, Ian Soboroff, and Lucy Lu Wang. 2020. TREC-COVID: Constructing a pandemic information retrieval test collection. SIGIR Forum, 54.
380
+
381
+ Kuansan Wang, Zhihong Shen, Chiyuan Huang, Chieh-Han Wu, Yuxiao Dong, and Anshul Kanakia. 2020. Microsoft academic graph: When experts are not enough. Quantitative Science Studies, 1(1):396- 413.
382
+
383
+ Kuansan Wang, Zhihong Shen, Chiyuan Huang, Chieh-Han Wu, Darrin Eide, Yuxiao Dong, Junjie Qian, Anshul Kanakia, Alvin Chen, and Richard Rogahn. 2019. A review of microsoft academic services for science of science studies. Frontiers in Big Data, 2.
384
+
385
+ Xuan Wang, Xiangchen Song, Yingjun Guan, Bangzheng Li, and Jiawei Han. 2020. Comprehensive named entity recognition on cord-19 with distant or weak supervision. ArXiv, abs/2003.12218.
386
+
387
+ Leigh Weston, Vahe Tshitoyan, John Dagdelen, Olga Kononova, Kristin Persson, Gerbrand Ceder, and Anubhav Jain. 2019. Named Entity Recognition and Normalization Applied to Large-Scale Information Extraction from the Materials Science Literature.
388
+
389
+ Sally Yaacoub, Holger J Schünemann, Joanne Khabsa, Amena El-Harakeh, Assem M Khamis, Fatimah Chamseddine, Rayane El Khoury, Zahra Saad, Layal Hneiny, Carlos Cuello Garcia, Giovanna Elsa Ute Muti-Schünemann, Antonio Bognanni, Chen Chen, Guang Chen, Yuan Zhang, Hong Zhao, Pierre Abi Hanna, Mark Loeb, Thomas Piggott, Marge Reinap, Nesrine Rizk, Rosa Stalteri, Stephanie Duda, Karla Solo, Derek K Chu, and Elie A Akl. 2020. Safe management of bodies of deceased persons with suspected or confirmed covid-19: a rapid systematic review. BMJ Global Health, 5(5).
390
+
391
+ Xinyi Zheng, Doug Burdick, Lucian Popa, and Xin Ru Nancy Wang. 2020. Global table extractor (gte): A framework for joint table identification and cell structure recognition using visual context. ArXiv, abs/2005.00589.
392
+
393
+ ## A Table parsing results
394
+
395
+ There is high variance in the representation of tables across different paper PDFs. The goal of table parsing is to extract all tables from PDFs and represent them in HTML table format, along with associated titles and headings. In Table 3, we provide several example table parses, showing the high diversity of table representations across documents, the structure of resulting parses, and some common parse errors.
396
+
397
+ PDF Representation HTML Table Parse Source & Description
398
+
399
+ Effect 95% Cl
400
+
401
+ <table><tr><td>Effect</td><td>log-HR</td><td>SE $\times {10}$</td><td>P-value</td><td>HR</td><td>95% CI</td></tr><tr><td>Female</td><td>0</td><td/><td/><td>1</td><td/></tr><tr><td>Male</td><td>0.40</td><td>0.27</td><td>$< {0.001}$</td><td>1.50</td><td>${1.40} - {1.60}$</td></tr><tr><td>Age 65</td><td>0</td><td/><td/><td>1</td><td/></tr><tr><td>Age - 65</td><td>0.09</td><td>0.01</td><td>$< {0.001}$</td><td>1.09</td><td>${1.09} - {1.09}$</td></tr><tr><td>covid-19 × Female</td><td>0</td><td/><td/><td>1</td><td/></tr><tr><td>covid-19 × Male</td><td>0.18</td><td>0.73</td><td>0.05</td><td>1.20</td><td>${1.00} - {1.44}$</td></tr><tr><td>covid-19 $\times$ Age 65</td><td>0</td><td/><td/><td>1</td><td/></tr><tr><td>covid-19 $\times$ Age - 65</td><td>0.04</td><td>0.03</td><td>$< {0.001}$</td><td>1.04</td><td>${1.03} - {1.05}$</td></tr></table>
402
+
403
+ Female 0
404
+
405
+ Male 0.40 0.27 $< {0.00}$ 1.50 ${1.40} - {1.60}$
406
+
407
+ Age 65 0 From Hothorn et al. (2020):
408
+
409
+ Exact Structure; Minimal row
410
+
411
+ covid-19 × Female 1 rules
412
+
413
+ covid-19 × Male 0.18 0.73 0.05 1.20 ${1.00} - {1.44}$
414
+
415
+ covid-19 × Age 65 0
416
+
417
+ covid-19 × Age – 65 0.03 $< {0.001}$ 1.04 ${1.03} - {1.05}$
418
+
419
+ <table><tr><td>Time for surgery</td><td>Priority level</td><td>Functional urology surgeries in this category</td></tr><tr><td>${24}\mathrm{\;h}$</td><td>1a, emergency</td><td>None</td></tr><tr><td>${72}\mathrm{\;h}$</td><td>1b, urgent</td><td>Infected prosthesis/implant</td></tr><tr><td>$4{\mathrm{{wk}}}^{a}$</td><td>2</td><td>None</td></tr><tr><td>$3{\mathrm{\;{mo}}}^{3}$</td><td>3</td><td>None</td></tr><tr><td>$> {3}^{ \circ }{\mathrm{{mo}}}^{a}$</td><td>4</td><td>All the rest (Table 4)</td></tr></table>
420
+
421
+ <table><tr><td>Time for surgery</td><td>Priority level</td><td>Functional urology surgeries in this category</td></tr><tr><td>${24}\mathrm{\;h}$</td><td>la, emergency</td><td>None</td></tr><tr><td>72 h</td><td>1b, urgent</td><td>Infected prosthesis/implant</td></tr><tr><td>4 wk</td><td>2</td><td>None</td></tr><tr><td>3 mo</td><td>3</td><td>None</td></tr><tr><td>>> mo</td><td>4</td><td>All the rest (Table 4)</td></tr></table>
422
+
423
+ From López-Fando et al.
424
+
425
+ (2020): Exact Structure;
426
+
427
+ Colored rows
428
+
429
+ <table><tr><td rowspan="2"/><td colspan="3">SARS-CoV- 2 serology test result</td><td rowspan="2">Relative risk (95% CI)</td><td rowspan="2">pratue</td></tr><tr><td>Positive</td><td>Negative</td><td>Indeterminate</td></tr><tr><td colspan="6">Age group, years</td></tr><tr><td>S-9 (n=123)</td><td>1 (0-8%)</td><td>114 (927%)</td><td>8 (6-5%)</td><td>0-32 (0-11-0-63)</td><td>0.0008</td></tr><tr><td>10-19 (n=332)</td><td>${32}\left( {{9.6}\% }\right)$</td><td>295 (88-9%)</td><td>5 (15%)</td><td>0.86 (0.57-1-22)</td><td>0.37</td></tr><tr><td>20-49 (n-1096)</td><td>108 (9-9%)</td><td>970 (88-5%)</td><td>${18}\left( {{1.6}\% }\right)$</td><td>1 (ref)</td><td>-</td></tr><tr><td>50-64 (n=846)</td><td>63 (7-4%)</td><td>772 (913%)</td><td>11 (1.3%)</td><td>0.79 (0.57-1.04)</td><td>0.090</td></tr><tr><td>a65 (n=369)</td><td>${15}\left( {{44}\mathrm{\;m}}\right)$</td><td>348 (943%)</td><td>$6\left( {{16}\% }\right)$</td><td>0.50 (0.28-0.78)</td><td>0.0020</td></tr><tr><td colspan="6">Sex</td></tr><tr><td>Female $\left( {n = {1454}}\right)$</td><td>101 (6-9%)</td><td>1333 (917%)</td><td>20 (1-4%)</td><td>1 (ref)</td><td>-</td></tr><tr><td>Male (n=1312)</td><td>118 (9-0%)</td><td>${1166}\left( {{88.9}\% }\right)$</td><td>${28}\left( {2 \cdot 1\% }\right)$</td><td>1-26 (1-00-1-58)</td><td>0.054</td></tr></table>
430
+
431
+ From Stringhini et al. (2020): Minor span errors; Partially colored background with minimal row rules
446
+
447
+ From Fathi et al. (2020): Overmerge and span errors; Some section headers have row rules
464
+
465
+ From Kaushik et al. (2020): Over-splitting errors; Full row and column rules with large vertical spacing in cells
484
+
485
+ Table 3: A sample of table parses. Though most table structure is preserved accurately, the diversity of table representations results in some errors.
486
+
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/0gLzHrE_t3z/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,359 @@
1
+ § CORD-19: THE COVID-19 OPEN RESEARCH DATASET
2
+
3
+ Lucy Lu Wang ${}^{1, * }$ Kyle Lo ${}^{1, * }$ Douglas Burdick ${}^{2}$
4
+
5
+ Yannis Katsis ${}^{2}$ Rodney Kinney ${}^{1}$ Yunyao Li ${}^{2}$ Ziyang Liu ${}^{6}$ William Merrill ${}^{1}\;$ Paul Mooney ${}^{5}\;$ Dewey Murdick ${}^{7}\;$ Devvret Rishi ${}^{5}$ Jerry Sheehan ${}^{4}$ Zhihong Shen ${}^{3}$ Brandon Stilson ${}^{1}$ Alex D. Wade ${}^{6}$ Kuansan Wang ${}^{3}$ Nancy Xin Ru Wang ${}^{2}$ Chris Wilhelm ${}^{1}$ Boya Xie ${}^{3}$ Douglas Raymond ${}^{1}\;$ Daniel S. Weld ${}^{1,8}\;$ Oren Etzioni ${}^{1}\;$ Sebastian Kohlmeier
6
+
7
+ ${}^{1}$ Allen Institute for AI ${}^{2}$ IBM Research ${}^{3}$ Microsoft Research ${}^{4}$ National Library of Medicine ${}^{5}$ Kaggle ${}^{6}$ Chan Zuckerberg Initiative ${}^{7}$ Georgetown University ${}^{8}$ University of Washington
8
+
9
+ {lucyw, kylel}@allenai.org
10
+
11
+ § ABSTRACT
12
+
13
+ The COVID-19 Open Research Dataset (CORD-19) is a growing ${}^{1}$ resource of scientific papers on COVID-19 and related historical coronavirus research. CORD-19 is designed to facilitate the development of text mining and information retrieval systems over its rich collection of metadata and structured full text papers. Since its release, CORD-19 has been downloaded ${}^{2}$ over ${200}\mathrm{\;K}$ times and has served as the basis of many COVID-19 text mining and discovery systems. In this article, we describe the mechanics of dataset construction, highlighting challenges and key design decisions, provide an overview of how CORD-19 has been used, and describe several shared tasks built around the dataset. We hope this resource will continue to bring together the computing community, biomedical experts, and policy makers in the search for effective treatments and management policies for COVID-19.
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ On March 16, 2020, the Allen Institute for AI (AI2), in collaboration with our partners at The White House Office of Science and Technology Policy (OSTP), the National Library of Medicine (NLM), the Chan Zuckerberg Initiative (CZI), Microsoft Research, and Kaggle, coordinated by Georgetown University's Center for Security and Emerging Technology (CSET), released the first version of CORD-19. This resource is a large and growing collection of publications and preprints on COVID-19 and related historical coronaviruses such as SARS and MERS. The initial release consisted of ${28}\mathrm{\;K}$ papers, and the collection has grown to more than ${140}\mathrm{\;K}$ papers over the subsequent weeks. Papers and preprints from several archives are collected and ingested through the Semantic Scholar literature search engine, ${}^{3}$ metadata are harmonized and deduplicated, and paper documents are processed through the pipeline established in Lo et al. (2020) to extract full text (more than 50% of papers in CORD-19 have full text). We commit to providing regular updates to the dataset until an end to the COVID-19 crisis is foreseeable.
18
+
19
+ < g r a p h i c s >
20
+
21
+ Figure 1: Papers and preprints are collected from different sources through Semantic Scholar. Released as part of CORD-19 are the harmonized and deduplicated metadata and full text JSON.
22
+
23
+ CORD-19 aims to connect the machine learning community with biomedical domain experts and policy makers in the race to identify effective treatments and management policies for COVID- 19. The goal is to harness these diverse and complementary pools of expertise to discover relevant information more quickly from the literature. Users of the dataset have leveraged AI-based techniques in information retrieval and natural language processing to extract useful information.
24
+
25
+ *denotes equal contribution
26
+
27
+ ${}^{1}$ The dataset continues to be updated daily with papers from new sources and the latest publications. Statistics reported in this article are up-to-date as of version 2020-06-14.
28
+
29
+ ${}^{2}$ https://www.semanticscholar.org/cord19
30
+
31
+ ${}^{3}$ https://semanticscholar.org/
32
+
33
+ Responses to CORD-19 have been overwhelmingly positive, with the dataset being downloaded over ${200}\mathrm{\;K}$ times in the three months since its release. The dataset has been used by clinicians and clinical researchers to conduct systematic reviews, has been leveraged by data scientists and machine learning practitioners to construct search and extraction tools, and is being used as the foundation for several successful shared tasks. We summarize research and shared tasks in Section 4.
34
+
35
+ In this article, we briefly describe:
36
+
37
+ 1. The content and creation of CORD-19,
38
+
39
+ 2. Design decisions and challenges around creating the dataset,
40
+
41
+ 3. Research conducted on the dataset, and how shared tasks have facilitated this research, and
42
+
43
+ 4. A roadmap for CORD-19 going forward.
44
+
45
+ § 2 DATASET
46
+
47
+ CORD-19 integrates papers and preprints from several sources (Figure 1), where a paper is defined as the base unit of published knowledge, and a preprint as an unpublished but publicly available counterpart of a paper. Throughout the rest of Section 2, we discuss papers, though the same processing steps are adopted for preprints. First, we ingest into Semantic Scholar paper metadata and documents from each source. Each paper is associated with bibliographic metadata, like title, authors, publication venue, etc, as well as unique identifiers such as a DOI, PubMed Central ID, PubMed ID, the WHO Covidence $\# {,}^{4}$ MAG identifier (Shen et al., 2018), and others. Some papers are associated with documents, the physical artifacts containing paper content; these are the familiar PDFs, XMLs, or physical print-outs we read.
48
+
49
+ For the CORD-19 effort, we generate harmonized and deduplicated metadata as well as structured full text parses of paper documents as output. We provide full text parses in cases where we have access to the paper documents, and where the documents are available under an open access license (e.g. Creative Commons (CC), ${}^{5}$ publisher-specific COVID-19 licenses, ${}^{6}$ or identified as open access through DOI lookup in the Unpaywall ${}^{7}$ database).
50
+
51
+ § 2.1 SOURCES OF PAPERS
52
+
53
+ Papers in CORD-19 are sourced from PubMed Central (PMC), PubMed, the World Health Organization’s Covid-19 Database, ${}^{4}$ and preprint servers bioRxiv, medRxiv, and arXiv. The PMC Public Health Emergency Covid-19 Initiative ${}^{6}$ expanded access to COVID-19 literature by working with publishers to make coronavirus-related papers discoverable and accessible through PMC under open access license terms that allow for reuse and secondary analysis. BioRxiv and medRxiv preprints were initially provided by CZI, and are now ingested through Semantic Scholar along with all other included sources. We also work directly with publishers such as Elsevier ${}^{8}$ and Springer Nature, ${}^{9}$ to provide full text coverage of relevant papers available in their back catalog.
54
+
55
+ All papers are retrieved given the query ${}^{10}$ :
56
+
57
+ "COVID" OR "COVID-19" OR
58
+
59
+ "Coronavirus" OR "Corona virus"
60
+
61
+ OR "2019-nCoV" OR "SARS-CoV"
62
+
63
+ OR "MERS-COV" OR "Severe Acute
64
+
65
+ Respiratory Syndrome" OR "Middle
66
+
67
+ East Respiratory Syndrome"
68
+
69
+ Papers that match on these keywords in their title, abstract, or body text are included in the dataset. Query expansion is performed by PMC on these search terms, affecting the subset of papers in CORD-19 retrieved from PMC.
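+
+ For illustration only, a minimal sketch of this keyword filter (the paper dictionary and field names are hypothetical, not the actual ingest pipeline):
+
+ ```python
+ import re
+
+ # Search terms from the query above; matching is case-insensitive.
+ COVID_TERMS = [
+     "COVID", "COVID-19", "Coronavirus", "Corona virus", "2019-nCoV",
+     "SARS-CoV", "MERS-CoV", "Severe Acute Respiratory Syndrome",
+     "Middle East Respiratory Syndrome",
+ ]
+ PATTERN = re.compile("|".join(re.escape(term) for term in COVID_TERMS), re.IGNORECASE)
+
+ def matches_query(paper):
+     """Return True if the title, abstract, or body text mentions any search term."""
+     text = " ".join(paper.get(field, "") for field in ("title", "abstract", "body_text"))
+     return bool(PATTERN.search(text))
+ ```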
70
+
71
+ § 2.2 PROCESSING METADATA
72
+
73
+ The initial collection of sourced papers suffers from duplication and incomplete or conflicting metadata. We perform the following operations to harmonize and deduplicate all metadata:
74
+
75
+ 1. Cluster papers using paper identifiers
76
+
77
+ 2. Select canonical metadata for each cluster
78
+
79
+ 3. Filter clusters to remove unwanted entries
80
+
81
+ ${}^{5}$ https://creativecommons.org/
82
+
83
+ ${}^{6}$ https://www.ncbi.nlm.nih.gov/pmc/about/covid-19/
84
+
85
+ ${}^{7}$ https://unpaywall.org/
86
+
87
+ ${}^{8}$ https://www.elsevier.com/connect/coronavirus-information-center
88
+
89
+ ${}^{9}$ https://www.springernature.com/gp/researchers/ campaigns/coronavirus
90
+
91
+ ${}^{10}$ Adapted from the Elsevier COVID-19 site ${}^{8}$
92
+
93
+ ${}^{4}$ https://www.who.int/emergencies/diseases/novel-coronavirus-2019/global-research-on-novel-coronavirus- 2019-ncov
94
+
95
+ Clustering papers We cluster papers if they overlap on any of the following identifiers: {doi, pmc_id, pubmed_id, arxiv_id, who_covidence_id, mag_id}. If two papers from different sources have an identifier in common and no other identifier conflicts between them, we assign them to the same cluster. Each cluster is assigned a unique identifier, CORD_UID, which persists between dataset releases. No existing identifier, such as DOI or PMC ID, is sufficient as the primary CORD-19 identifier. Some papers in PMC do not have DOIs; some papers from the WHO, publishers, or preprint servers like arXiv do not have PMC IDs or DOIs.
96
+
97
+ Occasionally, conflicts occur. For example, a paper $c$ with (doi, pmc_id, pubmed_id) identifiers $(x, \text{null}, z^{\prime})$ might share identifier $x$ with a cluster of papers $\{a, b\}$ that has identifiers $(x, y, z)$, but has a conflict $z^{\prime} \neq z$. In this case, we choose to create a new cluster $\{c\}$, containing only paper $c$.${}^{11}$
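+
+ A simplified sketch of this conservative policy (the record layout and helper names are illustrative, not the production CORD-19 code):
+
+ ```python
+ ID_FIELDS = ["doi", "pmc_id", "pubmed_id", "arxiv_id", "who_covidence_id", "mag_id"]
+
+ def compatible(paper, cluster):
+     """True if the paper shares at least one identifier with the cluster and
+     no identifier of the same type conflicts with any cluster member."""
+     shares = False
+     for member in cluster:
+         for field in ID_FIELDS:
+             a, b = paper.get(field), member.get(field)
+             if a and b:
+                 if a == b:
+                     shares = True
+                 else:  # same identifier type, different value: a conflict
+                     return False
+     return shares
+
+ def cluster_papers(papers):
+     clusters = []
+     for paper in papers:
+         target = next((c for c in clusters if compatible(paper, c)), None)
+         if target is not None:
+             target.append(paper)
+         else:
+             clusters.append([paper])  # no overlap, or a conflict: start a new cluster
+     return clusters
+ ```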
98
+
99
+ Selecting canonical metadata Among each cluster, the canonical entry is selected to prioritize the availability of document files and the most permissive license. For example, between two papers with PDFs, one available under a CC license and one under a more restrictive COVID-19-specific copyright license, we select the CC-licensed paper entry as canonical. If any metadata in the canonical entry are missing, values from other members of the cluster are promoted to fill in the blanks.
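+
+ A sketch of the selection and back-fill step (the license ordering below is invented purely for illustration):
+
+ ```python
+ LICENSE_RANK = {"cc-by": 0, "cc-by-nc": 1, "covid-19-publisher": 2, "unknown": 3}  # illustrative ordering
+
+ def select_canonical(cluster):
+     """Prefer entries with an available document and the most permissive license,
+     then fill missing metadata fields from the remaining cluster members."""
+     ranked = sorted(
+         cluster,
+         key=lambda p: (not p.get("has_pdf", False),
+                        LICENSE_RANK.get(p.get("license", "unknown"), 3)),
+     )
+     canonical = dict(ranked[0])
+     for other in ranked[1:]:
+         for key, value in other.items():
+             if value and not canonical.get(key):
+                 canonical[key] = value
+     return canonical
+ ```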
100
+
101
+ Cluster filtering Some entries harvested from sources are not papers, and instead correspond to materials like tables of contents, indices, or informational documents. These entries are identified in an ad hoc manner and removed from the dataset.
102
+
103
+ § 2.3 PROCESSING FULL TEXT
104
+
105
+ Most papers are associated with one or more PDFs. ${}^{12}$ To extract full text and bibliographies from each PDF, we use the PDF parsing pipeline created for the S2ORC dataset (Lo et al.,2020). ${}^{13}$ In (Lo et al., 2020), we introduce the S2ORC JSON format for representing scientific paper full text, which is used as the target output for paper full text in CORD-19. The pipeline involves:
106
+
107
+ 1. Parse all PDFs to TEI XML files using GRO-BID ${}^{15}$ (Lopez,2009)
108
+
109
+ 2. Parse all TEI XML files to S2ORC JSON
110
+
111
+ 3. Postprocess to clean up links between inline citations and bibliography entries.
112
+
113
+ We additionally parse JATS XML ${}^{16}$ files available for PMC papers using a custom parser, generating the same target S2ORC JSON format.
114
+
115
+ This creates two sets of full text JSON parses associated with the papers in the collection, one set originating from PDFs (available from more sources), and one set originating from JATS XML (available only for PMC papers). Each PDF parse has an associated SHA, the 40-digit SHA-1 of the associated PDF file, while each XML parse is named using its associated PMC ID. Around 48% of CORD-19 papers have an associated PDF parse, and around ${37}\%$ have an XML parse, with the latter nearly a subset of the former. Most PDFs $\left( { > {90}\% }\right)$ are successfully parsed. Around 2.6% of CORD- 19 papers are associated with multiple PDF SHA, due to a combination of paper clustering and the existence of supplementary PDF files.
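+
+ As a rough illustration of the target format (keys abbreviated and values invented; see Lo et al. (2020) for the authoritative schema), a full text parse has approximately this shape:
+
+ ```python
+ # Illustrative only: a heavily abbreviated view of a CORD-19 full text parse.
+ example_parse = {
+     "paper_id": "<40-character SHA of the source PDF, or a PMC ID for XML parses>",
+     "metadata": {"title": "...", "authors": [{"first": "...", "last": "..."}]},
+     "abstract": [{"text": "...", "section": "Abstract", "cite_spans": [], "ref_spans": []}],
+     "body_text": [
+         {
+             "text": "Sentence with an inline citation [1].",
+             "section": "Introduction",
+             "cite_spans": [{"start": 33, "end": 36, "ref_id": "BIBREF0"}],
+             "ref_spans": [],
+         }
+     ],
+     "bib_entries": {"BIBREF0": {"title": "...", "year": 2020}},
+     "ref_entries": {"TABREF0": {"text": "Table 1 caption ...", "type": "table"}},
+ }
+ ```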
116
+
117
+ § 2.4 TABLE PARSING
118
+
119
+ Since the May 12, 2020 release of CORD-19, we also release selected HTML table parses. Tables contain important numeric and descriptive information such as sample sizes and results, which are the targets of many information extraction systems. A separate PDF table processing pipeline is used, consisting of table extraction and table understanding. Table extraction is based on the Smart Document Understanding (SDU) capability included in IBM Watson Discovery. ${}^{17}$ SDU converts a given PDF document from its native binary representation into a text-based representation like HTML which includes both identified document structures (e.g., tables, section headings, lists) and formatting information (e.g. positions for extracted text). Table understanding (also part of Watson Discovery) then annotates the extracted tables with additional semantic information, such as column and row headers and table captions. We leverage the Global Table Extractor (GTE) (Zheng et al., 2020), which uses a specialized object detection and clustering technique to extract table bounding boxes and structures.
120
+
121
+ ${}^{11}$ This is a conservative clustering policy in which any meta-data conflict prohibits clustering. An alternative policy would be to cluster if any identifier matches, under which $a,b$ , and $c$ would form one cluster with identifiers $\left( {x,y,\left\lbrack {z,{z}^{\prime }}\right\rbrack }\right)$ .
122
+
123
+ ${}^{12}$ PMC papers can have multiple associated PDFs per paper, separating the main text from supplementary materials.
124
+
125
+ ${}^{13}$ One major difference in full text parsing for CORD-19 is that we do not use ScienceParse, ${}^{14}$ as we always derive this metadata from the sources directly.
126
+
127
+ ${}^{14}$ https://github.com/allenai/science-parse
128
+
129
+ ${}^{15}$ https://github.com/kermitt2/grobid
130
+
131
+ ${}^{16}$ https://jats.nlm.nih.gov/
132
+
133
+ ${}^{17}$ https://www.ibm.com/cloud/watson-discovery
134
+
135
+ < g r a p h i c s >
136
+
137
+ Figure 2: The distribution of papers per year in CORD-19. A spike in publications occurs in 2020 in response to COVID-19.
138
+
139
+ All PDFs are processed through this table extraction and understanding pipeline. If the Jaccard similarity of the table captions from the table parses and CORD-19 parses is above 0.9, we insert the HTML of the matched table into the full text JSON. We extract ${188}\mathrm{\;K}$ tables from ${54}\mathrm{\;K}$ documents, of which ${33}\mathrm{\;K}$ tables are successfully matched to tables in ${19}\mathrm{\;K}$ (around ${25}\%$ ) full text documents in CORD-19. Based on preliminary error analysis, we find that match failures are primarily due to caption mismatches between the two parse schemes. Thus, we plan to explore alternate matching functions, potentially leveraging table content and document location as additional features. See Appendix A for example table parses.
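+
+ A minimal sketch of the caption-matching criterion (whitespace tokenization and the helper names are illustrative, not the production matcher):
+
+ ```python
+ def jaccard(a, b):
+     """Token-level Jaccard similarity between two captions."""
+     ta, tb = set(a.lower().split()), set(b.lower().split())
+     if not ta and not tb:
+         return 0.0
+     return len(ta & tb) / len(ta | tb)
+
+ def match_table(extracted_caption, cord19_captions, threshold=0.9):
+     """Return the best-matching CORD-19 caption if it clears the threshold."""
+     best = max(cord19_captions, key=lambda c: jaccard(extracted_caption, c), default=None)
+     if best is not None and jaccard(extracted_caption, best) >= threshold:
+         return best
+     return None
+ ```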
140
+
141
+ § 2.5 DATASET CONTENTS
142
+
143
+ CORD-19 has grown rapidly, now consisting of over ${140}\mathrm{\;K}$ papers with over ${72}\mathrm{\;K}$ full texts. Over ${47}\mathrm{\;K}$ papers and $7\mathrm{\;K}$ preprints on COVID-19 and coronaviruses have been released since the start of 2020, comprising nearly 40% of papers in the dataset.
144
+
145
+ Classification of CORD-19 papers to Microsoft Academic Graph (MAG) (Wang et al., 2019, 2020) fields of study (Shen et al., 2018) indicates that the dataset consists predominantly of papers in Medicine (55%), Biology (31%), and Chemistry (3%), which together constitute almost ${90}\%$ of the corpus. ${}^{18}$ A breakdown of the most common MAG subfields (L1 fields of study) represented in CORD-19 is given in Table 1.
146
+
147
+ <table><tr><td>Subfield</td><td>Count</td><td>% of corpus</td></tr><tr><td>Virology</td><td>29567</td><td>25.5%</td></tr><tr><td>Immunology</td><td>15954</td><td>13.8%</td></tr><tr><td>Surgery</td><td>15667</td><td>13.5%</td></tr><tr><td>Internal medicine</td><td>12045</td><td>10.4%</td></tr><tr><td>Intensive care medicine</td><td>10624</td><td>9.2%</td></tr><tr><td>Molecular biology</td><td>7268</td><td>6.3%</td></tr><tr><td>Pathology</td><td>6611</td><td>5.7%</td></tr><tr><td>Genetics</td><td>5231</td><td>4.5%</td></tr><tr><td>Other</td><td>12997</td><td>11.2%</td></tr></table>
179
+
180
+ Table 1: MAG subfield of study for CORD-19 papers.
181
+
182
+ Figure 2 shows the distribution of CORD-19 papers by date of publication. Coronavirus publications increased during and following the SARS and MERS epidemics, but the number of papers published in the early months of 2020 exploded in response to the COVID-19 epidemic. Using author affiliations in MAG, we identify the countries from which the research in CORD-19 is conducted. Large proportions of CORD-19 papers are associated with institutions based in the Americas (around ${48}\mathrm{\;K}$ papers), Europe (over ${35}\mathrm{\;K}$ papers), and Asia (over ${30}\mathrm{\;K}$ papers).
183
+
184
+ § 3 DESIGN DECISION & CHALLENGES
185
+
186
+ A number of challenges come into play in the creation of CORD-19. We summarize the primary design requirements of the dataset, along with challenges implicit within each requirement:
187
+
188
+ Up-to-date Hundreds of new publications on COVID-19 are released every day, and a dataset like CORD-19 can quickly become irrelevant without regular updates. CORD-19 has been updated daily since May 26. A processing pipeline that produces consistent results day to day is vital to maintaining a changing dataset. That is, the metadata and full text parsing results must be reproducible, identifiers must be persistent between releases, and changes or new features should ideally be compatible with previous versions of the dataset.
189
+
190
+ Handles data from multiple sources Papers from different sources must be integrated and harmonized. Each source has its own metadata format, which must be converted to the CORD-19 format, while addressing any missing or extraneous fields. The processing pipeline must also be flexible to adding new sources.
191
+
192
+ ${}^{18}$ MAG identifier mappings are provided as a supplement
193
+
194
+ on the CORD-19 landing page.
195
+
196
+ Clean canonical metadata Because of the diversity of paper sources, duplication is unavoidable. Once paper metadata from each source is cleaned and organized into CORD-19 format, we apply the deduplication logic described in Section 2.2 to identify similar paper entries from different sources. We apply a conservative clustering algorithm, combining papers only when they have shared identifiers but no conflicts between any particular class of identifiers. We justify this because it is less harmful to retain a few duplicate papers than to remove a document that is potentially unique and useful.
197
+
198
+ Machine readable full text To provide accessible and canonical structured full text, we parse content from PDFs and associated paper documents. The full text is represented in S2ORC JSON format (Lo et al., 2020), a schema designed to preserve most relevant paper structures such as paragraph breaks, section headers, inline references, and citations. S2ORC JSON is simple to use for many NLP tasks, where character-level indices are often employed for annotation of relevant entities or spans. The text and annotation representations in S2ORC share similarities with BioC (Comeau et al., 2019), a JSON schema introduced by the BioCre-ative community for shareable annotations, with both formats leveraging the flexibility of character-based span annotations. However, S2ORC JSON also provides a schema for representing other components of a paper, such as its metadata fields, bibliography entries, and reference objects for figures, tables, and equations. We leverage this flexible and somewhat complete representation of S2ORC JSON for CORD-19. We recognize that converting between PDF or XML to JSON is lossy. However, the benefits of a standard structured format, and the ability to reuse and share annotations made on top of that format have been critical to the success of CORD-19.
199
+
200
+ Observes copyright restrictions Papers in CORD-19 and academic papers more broadly are made available under a variety of copyright licenses. These licenses can restrict or limit the abilities of organizations such as AI2 from redistributing their content freely. Although much of the COVID-19 literature has been made open access by publishers, the provisions on these open access licenses differ greatly across papers. Additionally, many open access licenses grant the ability to read, or "consume" the paper, but may be restrictive in other ways, for example, by not allowing republication of a paper or its redistribution for commercial purposes. The curator of a dataset like CORD- 19 must pass on best-to-our-knowledge licensing information to the end user.
201
+
202
+ Given a query:
203
+
204
+ Does hypertension increase the risks associated with Covid-19?
205
+
206
+ < g r a p h i c s >
207
+
208
+ Figure 3: An example information retrieval and extraction system using CORD-19: Given an input query, the system identifies relevant papers (yellow highlighted rows) and extracts text snippets from the full text JSONs as supporting evidence.
209
+
210
+ § 4 RESEARCH DIRECTIONS
211
+
212
+ We provide a survey of various ways researchers have made use of CORD-19. We organize these into four categories:(i)direct usage by clinicians and clinical researchers (§4.1), (ii) tools and systems to assist clinicians (§4.2), (iii) research to support further text mining and NLP research (§4.3), and (iv) shared tasks and competitions (§4.4).
213
+
214
+ § 4.1 USAGE BY CLINICAL RESEARCHERS
215
+
216
+ CORD-19 has been used by medical experts as a paper collection for conducting systematic reviews. These reviews address questions about COVID-19 including infection and mortality rates in different demographics (Han et al., 2020), symptoms of the disease (Parasa et al., 2020), identifying suitable drugs for repurposing (Sadegh et al., 2020), management policies (Yaacoub et al., 2020), and interactions with other diseases (Crisan-Dabija et al., 2020; Popa et al., 2020).
217
+
218
+ § 4.2 TOOLS FOR CLINICIANS
219
+
220
+ Challenges for clinicians and clinical researchers during the current epidemic include (i) keeping up to date with recent papers about COVID-19, (ii) identifying useful papers from historical coronavirus literature, (iii) extracting useful information from the literature, and (iv) synthesizing knowledge from the literature. To facilitate solutions to these challenges, dozens of tools and systems over CORD-19 have already been developed. Most combine elements of text-based information retrieval and extraction, as illustrated in Figure 3. We have compiled a list of these efforts on the CORD-19 public GitHub repository ${}^{19}$ and highlight some systems in Table 2. ${}^{20}$
221
+
222
+ § 4.3 TEXT MINING AND NLP RESEARCH
223
+
224
+ The following is a summary of resources released by the NLP community on top of CORD-19 to support other research activities.
225
+
226
+ Information extraction To support extractive systems, NER and entity linking of biomedical entities can be useful. NER and linking can be performed using NLP toolkits like ScispaCy (Neumann et al., 2019) or language models like BioBERT-base (Lee et al., 2019) and SciBERT-base (Beltagy et al., 2019) finetuned on biomedical NER datasets. Wang et al. (2020) augments CORD-19 full text with entity mentions predicted from several techniques, including weak supervision using the NLM's Unified Medical Language System (UMLS) Metathesaurus (Bodenrei-der, 2004).
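+
+ For example, a minimal biomedical NER pass with ScispaCy might look as follows (illustrative only; this is not the annotation pipeline of Wang et al. (2020), and it assumes the en_core_sci_sm model package is installed):
+
+ ```python
+ import spacy
+
+ # Assumes `pip install scispacy` plus the en_core_sci_sm model package.
+ nlp = spacy.load("en_core_sci_sm")
+ doc = nlp("Hypertension may increase the severity of COVID-19 in older patients.")
+ for ent in doc.ents:
+     print(ent.text, ent.start_char, ent.end_char)
+ ```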
227
+
228
+ Text classification Some efforts focus on extracting sentences or passages of interest. For example, Liang and Xie (2020) uses BERT (Devlin et al., 2019) to extract sentences from CORD-19 that contain COVID-19-related radiological findings.
229
+
230
+ Pretrained model weights BioBERT and SciBERT have been popular pretrained LMs for COVID- 19-related tasks. DeepSet has released a BERT-base model pretrained on CORD-19. ${}^{21}$ SPECTER (Cohan et al., 2020) paper embeddings computed using paper titles and abstracts are being released with each CORD-19 update. SeVeN relation em-beddings (Espinosa-Anke and Schockaert, 2018) between word pairs have also been made available for CORD-19. ${}^{22}$
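+
+ For instance, the DeepSet checkpoint can be loaded with the Hugging Face transformers library (a sketch; task-specific fine-tuning is omitted):
+
+ ```python
+ from transformers import AutoModel, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("deepset/covid_bert_base")
+ model = AutoModel.from_pretrained("deepset/covid_bert_base")
+
+ inputs = tokenizer("CORD-19 is a resource of papers on COVID-19.", return_tensors="pt")
+ outputs = model(**inputs)
+ print(outputs.last_hidden_state.shape)  # contextual token embeddings
+ ```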
231
+
232
+ Knowledge graphs The Covid Graph project ${}^{23}$ releases a COVID-19 knowledge graph built from mining several public data sources, including CORD-19, and is perhaps the largest current initiative in this space. Ahamed and Samad (2020) rely on entity co-occurrences in CORD-19 to construct a graph that enables centrality-based ranking of drugs, pathogens, and biomolecules.
233
+
234
+ § 4.4 COMPETITIONS AND SHARED TASKS
235
+
236
+ The adoption of CORD-19 and the proliferation of text mining and NLP systems built on top of the dataset are supported by several COVID-19-related competitions and shared tasks.
237
+
238
+ § 4.4.1 KAGGLE
239
+
240
+ Kaggle hosts the CORD-19 Research Challenge, ${}^{24}$ a text-mining challenge that tasks participants with extracting answers to key scientific questions about COVID-19 from the papers in the CORD-19 dataset. Round 1 was initiated with a set of open-ended questions, e.g., What is known about transmission, incubation, and environmental stability? and What do we know about COVID-19 risk factors?
241
+
242
+ More than 500 teams participated in Round 1 of the Kaggle competition. Feedback from medical experts during Round 1 identified that the most useful contributions took the form of article summary tables. Round 2 subsequently focused on this task of table completion, and resulted in 100 additional submissions. A unique tabular schema is defined for each question, and answers are collected from across different automated extractions. For example, extractions for risk factors should include disease severity and fatality metrics, while extractions for incubation should include time ranges. Sufficient knowledge of COVID-19 is necessary to define these schema, to understand which fields are important to include (and exclude), and also to perform error-checking and manual curation.
243
+
244
+ § 4.4.2 TREC
245
+
246
+ The TREC-COVID ${}^{25}$ shared task (Roberts et al., 2020; Voorhees et al., 2020) assesses systems on their ability to rank papers in CORD-19 based on their relevance to COVID-19-related topics. Topics are sourced from MedlinePlus searches, Twitter conversations, library searches at OHSU, as well as from direct conversations with researchers, reflecting actual queries made by the community. To emulate real-world surge in publications and rapidly-changing information needs, the shared task is organized in multiple rounds. Each round uses a specific version of CORD-19, has newly added topics, and gives participants one week to submit per-topic document rankings for judgment. Round 1 topics included more general questions such as What is the origin of COVID-19? and What are the initial symptoms of COVID-19? while Round 3 topics have become more focused, e.g., What are the observed mutations in the SARS-CoV-2 genome? and What are the longer-term complications of those who recover from COVID-19? Around 60 medical domain experts, including indexers from NLM and medical students from OHSU and UTHealth, are involved in providing gold rankings for evaluation. TREC-COVID opened using the April 1st CORD-19 version and received submissions from over 55 participating teams.
247
+
248
+ ${}^{19}$ https://github.com/allenai/cord19
249
+
250
+ ${}^{20}$ There are many Search and QA systems to survey. We have chosen to highlight the systems that were made publicly-available within a few weeks of the CORD-19 initial release.
251
+
252
+ ${}^{21}$ https://huggingface.co/deepset/covid_bert_base
253
+
254
+ ${}^{22}$ https://github.com/luisespinosaanke/cord-19-seven
255
+
256
+ ${}^{23}$ https://covidgraph.org/
257
+
258
+ ${}^{24}$ https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge
259
+
260
+ ${}^{25}$ https://ir.nist.gov/covidSubmit/index.html
261
+
262
+ <table><tr><td>Task</td><td>Project</td><td>Link</td><td>Description</td></tr><tr><td rowspan="4">Search and discovery</td><td>Neural Covidex</td><td>https://covidex.ai/</td><td>Uses a T5-base (Raffel et al., 2019) unsupervised reranker on BM25 (Jones et al., 2000)</td></tr><tr><td>CovidScholar</td><td>https://covidscholar.org/</td><td>Adapts Weston et al. (2019) system for entity-centric queries</td></tr><tr><td>KDCOVID</td><td>http://kdcovid.nl/about.html</td><td>Uses BioSentVec (Chen et al., 2019) similarity to identify relevant sentences</td></tr><tr><td>SPIKE-CORD</td><td>https://spike.covid-19.apps.allenai.org</td><td>Enables users to define "regular expression"-like queries to directly search over full text</td></tr><tr><td rowspan="2">Question answering</td><td>COVIDASK</td><td>https://covidask.korea.ac.kr/</td><td>Adapts Seo et al. (2019) using BioASQ challenge (Task B) dataset (Tsatsaronis et al., 2015)</td></tr><tr><td>AUEB</td><td>http://cslab241.cs.aueb.gr:5000/</td><td>Adapts McDonald et al. (2018) using Tsatsaronis et al. (2015)</td></tr><tr><td>Summarization</td><td>Vespa</td><td>https://cord19.vespa.ai/</td><td>Generates summaries of paper abstracts using T5 (Raffel et al., 2019)</td></tr><tr><td>Recommendation</td><td>Vespa</td><td>https://cord19.vespa.ai/</td><td>Recommends "similar papers" using Sentence-BERT (Reimers and Gurevych, 2019) and SPECTER embeddings (Cohan et al., 2020)</td></tr><tr><td>Entailment</td><td>COVID papers browser</td><td>https://github.com/gsarti/covid-papers-browser</td><td>Similar to KDCOVID, but uses embeddings from BERT models trained on NLI datasets</td></tr><tr><td>Claim verification</td><td>SciFact</td><td>https://scifact.apps.allenai.org</td><td>Uses RoBERTa-large (Liu et al., 2019) to find Support/Refute evidence for COVID-19 claims</td></tr><tr><td>Assistive lit. review</td><td>ASReview</td><td>https://github.com/asreview/asreview-covid19</td><td>Active learning system with a CORD-19 plugin for identifying papers for literature reviews</td></tr><tr><td>Augmented reading</td><td>Sinequa</td><td>https://covidsearch.sinequa.com/app/covid-search/</td><td>In-browser paper reader with entity highlighting on PDFs</td></tr><tr><td>Visualization</td><td>SciSight</td><td>https://scisight.apps.allenai.org</td><td>Network visualizations for browsing research groups working on COVID-19</td></tr></table>
306
+
307
+ Table 2: Publicly-available tools and systems for medical experts using CORD-19.
308
+
309
+ § 5 DISCUSSION
310
+
311
+ Several hundred new papers on COVID-19 are now being published every day. Automated methods are needed to analyze and synthesize information over this large quantity of content. The computing community has risen to the occasion, but it is clear that there is a critical need for better infrastructure to incorporate human judgments in the loop. Extractions need expert vetting, and search engines and systems must be designed to serve users.
312
+
313
+ Successful engagement and usage of CORD- 19 speaks to our ability to bridge computing and biomedical communities over a common, global cause. From early results of the Kaggle challenge, we have learned which formats are conducive to collaboration, and which questions are the most urgent to answer. However, there is significant work that remains for determining (i) which methods are best to assist textual discovery over the literature, (ii) how best to involve expert curators in the pipeline, and (iii) which extracted results convert to successful COVID-19 treatments and management policies. Shared tasks and challenges, as well as continued analysis and synthesis of feedback will hopefully provide answers to these outstanding questions.
314
+
315
+ Since the initial release of CORD-19, we have implemented several new features based on community feedback, such as the inclusion of unique identifiers for papers, table parses, more sources, and daily updates. Most substantial outlying features requests have been implemented or addressed at this time. We will continue to update the dataset with more sources of papers and newly published literature as resources permit.
316
+
317
+ § 5.1 LIMITATIONS
318
+
319
+ Though we aim to be comprehensive, CORD-19 does not cover many relevant scientific documents on COVID-19. We have restricted ourselves to research papers and preprints, and do not incorporate other types of documents, such as technical reports, white papers, informational publications by governmental bodies, and more. Including these documents is outside the current scope of CORD- 19, but we encourage other groups to curate and publish such datasets.
320
+
321
+ Within the scope of scientific papers, CORD-19 is also incomplete, though we continue to prioritize the addition of new sources. This has motivated the creation of other corpora supporting COVID-19 NLP, such as LitCovid (Chen et al., 2020), which provide complementary materials to CORD-19 derived from PubMed. Though we have since added PubMed as a source of papers in CORD-19, there are other domains such as the social sciences that are not currently represented, and we hope to incorporate these works as part of future work.
322
+
323
+ We also note the shortage of foreign language papers in CORD-19, especially Chinese language papers produced during the early stages of the epidemic. These papers may be useful to many researchers, and we are working with collaborators to provide them as supplementary data. However, challenges in both sourcing and licensing these papers for re-publication are additional hurdles.
324
+
325
+ § 5.2 CALL TO ACTION
326
+
327
+ Though the full text of many scientific papers are available to researchers through CORD-19, a number of challenges prevent easy application of NLP and text mining techniques to these papers. First, the primary distribution format of scientific papers - PDF - is not amenable to text processing. The PDF file format is designed to share electronic documents rendered faithfully for reading and printing, and mixes visual with semantic information. Significant effort is needed to coerce PDF into a format more amenable to text mining, such as JATS XML, ${}^{26}$ BioC (Comeau et al.,2019), or S2ORC JSON (Lo et al., 2020), which is used in CORD-19. Though there is substantial work in this domain, we can still benefit from better PDF parsing tools for scientific documents. As a complement, scientific papers should also be made available in a structured format like JSON, XML, or HTML.
328
+
329
+ Second, there is a clear need for more scientific content to be made accessible to researchers. Some publishers have made COVID-19 papers openly available during this time, but both the duration and scope of these epidemic-specific licenses are unclear. Papers describing research in related areas (e.g., on other infectious diseases, or relevant biological pathways) have also not been made open access, and are therefore unavailable in CORD-19 or otherwise. Securing release rights for papers not yet in CORD-19 but relevant for COVID-19 research is a significant portion of future work, led by the PMC COVID-19 Initiative. ${}^{6}$
330
+
331
+ Lastly, there is no standard format for representing paper metadata. Existing schemas like the JATS XML NISO standard ${}^{26}$ or library science standards like BIBFRAME ${}^{27}$ or Dublin Core ${}^{28}$ have been adopted to represent paper metadata. However, these standards can be too coarse-grained to capture all necessary paper metadata elements, or may lack a strict schema, causing representations to vary greatly across publishers who use them. To improve metadata coherence across sources, the community must define and agree upon an appropriate standard of representation.
332
+
333
+ § SUMMARY
334
+
335
+ This project offers a paradigm of how the community can use machine learning to advance scientific research. By allowing computational access to the papers in CORD-19, we increase our ability to perform discovery over these texts. We hope the dataset and projects built on the dataset will serve as a template for future work in this area. We also believe there are substantial improvements that can be made in the ways we publish, share, and work with scientific papers. We offer a few suggestions that could dramatically increase community productivity, reduce redundant effort, and result in better discovery and understanding of the scientific literature.
336
+
337
+ ${}^{26}$ https://www.niso.org/publications/z3996-2019-jats
338
+
339
+ ${}^{27}$ https://www.loc.gov/bibframe/
340
+
341
+ ${}^{28}$ https://www.dublincore.org/specifications/dublin-core/dces/
342
+
343
+ Through CORD-19, we have learned the importance of bringing together different communities around the same scientific cause. It is clearer than ever that automated text analysis is not the solution, but rather one tool among many that can be directed to combat the COVID-19 epidemic. Crucially, the systems and tools we build must be designed to serve a use case, whether that's improving information retrieval for clinicians and medical professionals, summarizing the conclusions of the latest observational research or clinical trials, or converting these learnings to a format that is easily digestible by healthcare consumers.
344
+
345
+ § ACKNOWLEDGMENTS
346
+
347
+ This work was supported in part by NSF Convergence Accelerator award 1936940, ONR grant N00014-18-1-2193, and the University of Washington WRF/Cable Professorship.
348
+
349
+ We thank The White House Office of Science and Technology Policy, the National Library of Medicine at the National Institutes of Health, Microsoft Research, Chan Zuckerberg Initiative, and Georgetown University's Center for Security and Emerging Technology for co-organizing the CORD-19 initiative. We thank Michael Kratsios, the Chief Technology Officer of the United States, and The White House Office of Science and Technology Policy for providing the initial seed set of questions for the Kaggle CORD-19 research challenge.
350
+
351
+ We thank Kaggle for coordinating the CORD- 19 research challenge. In particular, we acknowledge Anthony Goldbloom for providing feedback on CORD-19 and for involving us in discussions around the Kaggle literature review tables project. We thank the National Institute of Standards and Technology (NIST), National Library of Medicine (NLM), Oregon Health and Science University (OHSU), and University of Texas Health Science Center at Houston (UTHealth) for co-organizing the TREC-COVID shared task. In particular, we thank our co-organizers - Steven Bedrick (OHSU), Aaron Cohen (OHSU), Dina Demner-Fushman (NLM), William Hersh (OHSU), Kirk Roberts (UTHealth), Ian Soboroff (NIST), and Ellen Voorhees (NIST) - for feedback on the design of CORD-19.
352
+
353
+ We acknowledge our partners at Elsevier and Springer Nature for providing additional full text coverage of papers included in the corpus.
354
+
355
+ We thank Bryan Newbold from the Internet Archive for providing feedback on data quality and helpful comments on early drafts of the manuscript.
356
+
357
+ We thank Rok Jun Lee, Hrishikesh Sathe, Dhaval Sonawane and Sudarshan Thitte from IBM Watson AI for their help in table parsing.
358
+
359
+ We also acknowledge and thank our collaborators from AI2: Paul Sayre and Sam Skjonsberg for providing front-end support for CORD-19 and TREC-COVID, Michael Schmitz for setting up the CORD-19 Discourse community forums, Adriana Dunn for creating webpage content and marketing, Linda Wagner for collecting community feedback, Jonathan Borchardt, Doug Downey, Tom Hope, Daniel King, and Gabriel Stanovsky for contributing supplemental data to the CORD-19 effort, Alex Schokking for his work on the Semantic Scholar COVID-19 Research Feed, Darrell Plessas for technical support, and Carissa Schoenick for help with public relations.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/2f70OXlGQMd/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,193 @@
1
+ # Estimating the effect of COVID-19 on mental health: Linguistic indicators of depression during a global pandemic
2
+
3
+ JT Wolohan
4
+
5
+ Booz Allen Hamilton
6
+
7
+ wolohan-john@bah.com
8
+
9
+ ## Abstract
10
+
11
+ This preliminary analysis uses a deep LSTM neural network with fastText embeddings to predict population rates of depression on Reddit in order to estimate the effect of COVID-19 on mental health. We find that year over year, depression rates on Reddit are up ${50}\%$, suggesting a 15-million person increase in the number of depressed Americans and a \$7.5 billion increase in depression-related spending. This finding comes at a time when uncertainty about the impact of COVID-19 on physical and economic health is still high, and suggests that in addition to those factors, mental health must be considered as well. As data becomes available, further research will be needed to validate the results of this preliminary investigation.
12
+
13
+ ## 1 Introduction
14
+
15
+ The COVID-19 pandemic has already taken a toll on the world's physical health, and while the impact of the novel coronavirus on our mental health is less well understood, it is expected to be negative (Wang et al., 2020; Ammerman et al., 2020). Even those who never get sick during a pandemic can experience a multitude of psychological stressors during a disease outbreak, and those stressors can persist well past the end of the outbreak (Chew et al., 2020). The popular media is aware of the necessity for otherwise healthy people to emphasize "self care", small acts intended to maintain one's mental health or relieve stress, during these uncertain times. This analysis attempts to quantify the impact that COVID-19 has had to date on the population rate of depression through the use of state-of-science depression prediction models and data from the popular social media site Reddit.
16
+
17
+ This paper continues an established line of research in the application of natural language processing techniques to the disease of depression (Guntuku et al., 2017), and mixes it with the rapidly emerging field of COVID-19 research (McKibbin and Fernando, 2020; Duan and Zhu, 2020). Research in the former area is centered around the notion that language use reflects the thought processes of the speaker and that by assessing the words people use, we can gain insight into their thought processes (Fine, 2006). From this, it follows that text classification approaches such as the use of long short-term memory networks (Hochreiter and Schmidhuber, 1997) and word embeddings (Mikolov et al., 2013), such as fastText (Bojanowski et al., 2017), can be used to classify people's mental health status based on their speech. Depression has been studied widely in this way due to its grave impact on those it afflicts (De Choudhury et al., 2013; Coppersmith et al., 2015). Indeed, even subclinical levels of depression have been shown to reduce quality of life in meaningful and measurable ways (Cuijpers and Smit, 2002).
18
+
19
+ ## 2 Method
20
+
21
+ In this paper, we use data from Reddit to measure the potential impact of COVID-19 on depression. We do this in two parts: first, we extend the approach from Wolohan et al. (2018), using an LSTM model to improve the accuracy of depression prediction on social media; then, with this model, we analyze a new dataset-composed of the Reddit comments of 20,000 users across the first six months of 2018,2019, and ${2020}^{1}$ -in order to estimate the population rate of depression during the COVID-19 pandemic.
22
+
23
+ ### 2.1 Data
24
+
25
+ For these analyses we use two datasets of Reddit comments, aggregated at the user level. The first dataset, the Off-Topic Depression dataset, comes from Wolohan et al.; the second is a novel dataset created for this task: the Reddit Pandemic Depression dataset.
26
+
27
+ ---
28
+
29
+ ${}^{1}$ This paper was written in April 2020. More data will be gathered as it becomes available.
30
+
31
+ ---
32
+
33
+ The Off-Topic Depression dataset contains 141 million words from Reddit comments, aggregated by 11,000 users. The text in this dataset was not allowed to come from subreddits (the site-wide term for themed message boards) where discussion of depression or related issues was expected. Therefore, this dataset contains only "off-topic" text, that is, text not on the subject of depression. This step is important for being able to detect depression among the general public, many of whom are reluctant to talk about their depressive symptoms due to depression-related stigma (Manos et al., 2009). We label users as either "depressed" or "not-depressed" based on self-disclosure behavior: authoring posts in depression-related subreddits. The Off-Topic Depression dataset is useful for training a model because of the available baseline, but contains no data from the recent COVID-19 pandemic. This makes it insufficient to estimate the impact of COVID-19 on population rates of depression.
34
+
35
+ #### 2.1.1 Pandemic Depression dataset
36
+
37
+ The Pandemic Depression dataset contains 23 million words generated by 20,000 users over three years. The users for this dataset were selected in a similar fashion to the users from the Off-Topic Depression dataset: a scrape of submission authors to the subreddit r/AskReddit was performed to gather a set of potential users and then a random subset from this set was taken. During the scrape, 29 million users were considered. From that sample, 20,000 were randomly selected for inclusion in the study. This approach attempts to gather "neutral" Reddit users, who are not necessarily associated with any particular subreddit or community of subreddits, and would therefore have no bias towards or away from depression. The data for the Pandemic Depression dataset is broken up by the time of users' activities. We use only the first six months of 2018 and 2019 and the first four months of ${2020}^{2}$.
38
+
39
+ ### 2.2 Deep LSTM with fastText
40
+
41
+ As part of this analysis we trained a deep long short-term memory neural network. The network we used contains five layers: a fastText (Joulin et al., 2016) embedding layer, three LSTM layers, and an output layer. We trained and evaluated the LSTM on the Off-Topic Depression dataset, using about 7,700 users for training and about 4,000 users for testing. This totalled about 100 million words for training and 40 million for testing.
42
+
43
+ <table><tr><td>Month</td><td>2018</td><td>2019</td><td>2020</td></tr><tr><td>Jan</td><td>1.6 mil</td><td>5.5 mil.</td><td>560k</td></tr><tr><td>Feb</td><td>475k</td><td>1.8 mil.</td><td>860k</td></tr><tr><td>Mar</td><td>335k</td><td>1.2 mil</td><td>2.4 mil</td></tr><tr><td>Apr</td><td>258k</td><td>1 mil</td><td>5.6 mil</td></tr><tr><td>May</td><td>211k</td><td>700k</td><td>*</td></tr><tr><td>Jun</td><td>210k</td><td>470k</td><td>*</td></tr><tr><td colspan="4">* Data not yet available.</td></tr></table>
44
+
45
+ Table 1: Pandemic Depression dataset text by month.
46
+
47
+ The first layer of the model, the embedding layer, learns weights that take advantage of the specific 300-dimensional fastText vectors. The second through fourth layers of the model are identical LSTM layers with a ${20}\%$ dropout rate. The LSTM layers use each word as a step. The fifth and final layer of the model is a single-node dense layer with sigmoid activation used for predicting the class of the user. A depiction of this network and the minimal preprocessing can be seen in Figure 1.
48
+
49
+ Text preprocessing for the LSTM was minimal. The vocabulary for all documents was limited to the most-common 10,000 words. Each user was truncated down to or zero-padded up to 750 words, as necessary. We performed no other preprocessing, such as misspelling correction or internet-speak normalization.
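+
+ A minimal Keras sketch of this architecture (the number of LSTM units and the optimizer are placeholders, and initialization from pretrained fastText vectors is omitted):
+
+ ```python
+ from tensorflow.keras import Sequential
+ from tensorflow.keras.layers import Dense, Embedding, LSTM
+
+ VOCAB_SIZE = 10_000  # most-common words retained
+ MAX_LEN = 750        # each user truncated / zero-padded to 750 words
+ EMBED_DIM = 300      # fastText vector dimensionality
+
+ model = Sequential([
+     # In the paper this layer leverages 300-dimensional fastText vectors;
+     # loading of the pretrained weights is omitted in this sketch.
+     Embedding(VOCAB_SIZE, EMBED_DIM),
+     LSTM(128, return_sequences=True, dropout=0.2),
+     LSTM(128, return_sequences=True, dropout=0.2),
+     LSTM(128, dropout=0.2),
+     Dense(1, activation="sigmoid"),  # depressed vs. not-depressed
+ ])
+ model.build(input_shape=(None, MAX_LEN))
+ model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
+ ```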
50
+
51
+ ### 2.3 Comparative time-series analysis
52
+
53
+ In order to assess the impact of the COVID-19 pandemic on language use, we perform a comparative time-series analysis of three periods: two six month periods from before the pandemic ranging from January 2018 and 2019 to June 2018 and 2019 inclusive, and one four month period from January 2020 to April ${2020}^{3}$ . We analyzed the same users for all periods. Rates of depression were estimated for each period by classifying each user as either depressed or not-depressed with the LSTM classifier described in section 2.2.
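+
+ A sketch of how each period's rate follows from per-user classifier outputs (function and variable names are illustrative):
+
+ ```python
+ def depression_rate(user_texts, predict_proba, threshold=0.5):
+     """Share of users in one period whose language the classifier labels depressed."""
+     flags = [predict_proba(text) >= threshold for text in user_texts]
+     return sum(flags) / len(flags)
+
+ # e.g. rates = {period: depression_rate(texts[period], model_score)
+ #               for period in ("2018 Jan-Jun", "2019 Jan-Jun", "2020 Jan-Apr")}
+ ```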
54
+
55
+ It is important to note here that Reddit activity is inconsistent and that, unlike other social media platforms such as Twitter or Facebook, the use of a single account through time is not encouraged by the platform (Leavitt, 2015). This results in many accounts being abandoned and the resulting 2020 subset of data being smaller in terms of total active users than the subset of data for 2019. A more appropriate means of performing this analysis would be to select a random sample of users known to be active during this time period in 2020. Requiring the same users for all periods may introduce bias if users who are likely to be active over long periods of time have a bias towards or away from depression.
56
+
57
+ ---
58
+
59
+ ${}^{2}$ Gaffney and Matias (2018) note that there are issues with historical analysis of Reddit data, perhaps including users deleting depressive content post hoc. Addressing those concerns is outside the scope of this preliminary investigation.
60
+
61
+ ${}^{3}$ Data from April was only available up to April 4; however, data is included because a large enough volume of data was accessible: $\approx$ 50,000 words.
62
+
63
+ ---
64
+
65
+ ![01963dac-7866-701b-a5bf-7369a641b41c_2_234_178_1178_357_0.jpg](images/01963dac-7866-701b-a5bf-7369a641b41c_2_234_178_1178_357_0.jpg)
66
+
67
+ Figure 1: Deep LSTM model for depression prediction.
68
+
69
+ ## 3 Preliminary results
70
+
71
+ In this section, we review the preliminary results. We find that an LSTM with fastText embeddings outperforms the baseline approach in Wolohan et al. Additionally, the LSTM indicates that the population rate of depression may be up by ${50}\%$ in the first four months of 2020 when compared to the first four months of 2019 and 2018.
72
+
73
+ ### 3.1 LSTM with fastText embeddings
74
+
75
+ Comparing the new model for off-topic depression prediction, a deep LSTM with fastText word embeddings, to the model previously used by Wolohan et al., we find that the LSTM outperforms the previous baseline approach in the relevant measures of AUC and F1 score. The LSTM achieved an AUC of 0.93 and an F1 score of 0.92, surpassing the baseline by 18 and 24 points, respectively. The results for this LSTM are competitive with state-of-the-art deep-learning approaches for this task on similar datasets (see: Orabi et al. 2018; Guntuku et al. 2017).
76
+
77
+ <table><tr><td>Model</td><td>AUC</td><td>$\mathbf{{F1}}$</td></tr><tr><td>Wolohan et al.</td><td>0.75</td><td>0.68</td></tr><tr><td>LSTM + fastText</td><td>0.93</td><td>0.92</td></tr></table>
78
+
79
+ Table 2: LSTM performance versus baseline.
80
+
81
+ ### 3.2 Comparative time-series analysis
82
+
83
+ With the LSTM model improved to 0.93 AUC, we then applied the LSTM to the Pandemic Depression dataset. In doing this, we assessed whether or not users' language indicated depression for the first six months in 2018 and 2019, and the first four months of 2020. We found for 2018 and 2019, the user population being studied demonstrated a steady rate of depression around ${33}\% \pm 4\%$ . For 2020, we found the population rate of depression to average ${49}\%$ , with individual months ranging from 42% to 52%.
84
+
85
+ In the first six months of 2018 and 2019, the LSTM suggested a depression rate in the low ${30}\%$ range with only two exceptions: May 2018 and January 2019. May 2018 had the highest estimated depression rate-38% of all users-while January 2019 had the lowest-29% of all users.
86
+
87
+ In 2020, estimated depression rates among Reddit users are again consistent; however, they are consistently 20 percentage points higher than in 2018 and 2019. Of the four months, only April 2020 stands out with a low depression rate: ${42}\%$. At the time of this writing, the data for April 2020 is incomplete.
88
+
89
+ ## 4 Discussion
90
+
91
+ With the global COVID-19 pandemic wreaking havoc on medical systems and economies worldwide, 2020 will be a year in which many people go through significant hardships. If the analysis herein is to be believed, then the fear that many have about a global deterioration in mental health is likely to be a reality as well. But should we believe the analysis presented here, and what are the implications if we do?
92
+
93
+ ![01963dac-7866-701b-a5bf-7369a641b41c_3_197_173_1250_331_0.jpg](images/01963dac-7866-701b-a5bf-7369a641b41c_3_197_173_1250_331_0.jpg)
94
+
95
+ Figure 2: Sampled word count, COVID-19 cases, and modeled depression rate by week.
96
+
97
+ <table><tr><td>Month</td><td>2018</td><td>2019</td><td>2020</td><td>$\Delta$</td></tr><tr><td>January</td><td>.32</td><td>.29</td><td>.52</td><td>+79%</td></tr><tr><td>February</td><td>.34</td><td>.32</td><td>.51</td><td>+69%</td></tr><tr><td>March</td><td>.31</td><td>.32</td><td>.49</td><td>+53%</td></tr><tr><td>April</td><td>.33</td><td>.34</td><td>.42</td><td>+24%</td></tr><tr><td>May</td><td>.38</td><td>.32</td><td>-</td><td>-</td></tr><tr><td>June</td><td>.34</td><td>.34</td><td>-</td><td>-</td></tr><tr><td>Average</td><td>.34</td><td>.32</td><td>.49</td><td>+53%</td></tr></table>
98
+
99
+ Table 3: Estimated depression rate of Reddit users for select months.
100
+
101
+ ### 4.1 Model efficacy
102
+
103
+ There are two reasons that we should tentatively believe the AI-based assessments of population-level depression on Reddit. First, the ${32}\%$ population rate of depression estimated by the model is plausible, given (1) that the LSTM is designed to detect both clinical and subclinical depression and (2) that Reddit has a much younger (read: depression-prone) population than the U.S. at large. Second, the steadiness of the numbers over time is encouraging.
104
+
105
+ First among the reasons that one would avoid dismissing these findings out of hand is the consistency between the numbers projected for 2018 and 2019 and what one would expect for a joint population rate of clinical and subclinical depression on Reddit. According to the U.S. National Institutes of Health (NIH), adult rates of depression range up to ${13}\%$ depending on the demographic ${}^{4}$. At the upper bound of 13% is the 18-25 year-old demographic. More than half of the Reddit-using population falls into this demographic ${}^{5}$. Further, ${25}\%$ of Reddit users are 17 or younger, suggesting they might also have an increased rate of depression. NIH estimates that approximately ${17}\%$ of adolescents between 15 and 17 will experience at least one major depressive episode each year. If one takes ${13}\%$ as the population rate of depression for Reddit users and doubles it to include cases of subclinical depression (Kessler et al., 1997), one is left with a ${26}\%$ rate of total depression for Reddit users for the first six months of 2019. The LSTM predicts a 32% rate of depression for the first six months of 2019, a reasonable amount of error given the accuracy measures in Table 2.
106
+
107
+ Second, the steadiness of the rates of depression in the two control years is encouraging. There is little variation across the first six months of both 2018 and 2019, with only 9 percentage points separating the month with the greatest estimated rate of depression, May '18, from the lowest month, Jan. '19.
108
+
109
+ ### 4.2 Estimating the effect of the COVID-19 pandemic on mental health
110
+
111
+ If we assume that the rates of population-wide clinical and subclinical depression are to increase by ${50}\%$ in 2020, either as a result of the COVID-19 pandemic or otherwise, then we would expect to see the population rate of clinical depression increase from $7\%$ to ${10}\%$, and a similarly sized increase in subclinical depression. In the U.S., this would amount to 15 million more adults suffering from clinical or subclinical depression. This increased rate of depression would amount to a $\$ {7.5}$ billion increase in healthcare spending, assuming $\$ {500}$ per person per year (Kleine-Budde et al., 2013). This assumes that the average case of depression is about as severe as cases are currently.
112
+
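+ The back-of-the-envelope arithmetic behind these figures is sketched below; the U.S. adult population size is an assumption added for illustration, and the remaining numbers come from the text.

```python
# Rough reconstruction of the estimate above (adult population size is assumed).
us_adults = 250_000_000                     # approximate number of U.S. adults (assumption)
extra_clinical = us_adults * (0.10 - 0.07)  # ~7.5 million more clinical cases
extra_subclinical = extra_clinical          # "a similarly sized increase" (assumed equal)
extra_total = extra_clinical + extra_subclinical  # ~15 million people
extra_spending = extra_total * 500          # $500 per person per year
print(f"{extra_total / 1e6:.1f} million people, ${extra_spending / 1e9:.1f} billion")
```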
113
+ ---
114
+
115
+ ${}^{4}$ All population rates of depression come from the NIH: https://www.nimh.nih.gov/health/statistics/major-depression.shtml
116
+
117
+ ${}^{5}$ There are two sources for the demographics of Reddit, both from 2016: Barthel et al. (2016) or /u/HurricaneXriks (2016). The latter is used here.
118
+
119
+ ---
120
+
121
+ Importantly, there are reasons to believe this increased rate is not yet associated with COVID-19. As we can see in Figure 2, the estimated depression rate appears to be declining in April at a time when cases and deaths in the U.S. are rising. Speculatively, this may be associated with the phase of the pandemic that the U.S., and therefore most Reddit users, are currently experiencing. Many of these users will be under stay-at-home orders; however, the full toll of the pandemic, including economic destruction and loss of life, has not yet been felt. Alternatively, we must consider that the active population of Reddit may be changing as stay-at-home orders and unemployment furnish people with additional free time to use the internet.
122
+
123
+ One would expect depression rates to increase as stressors such as unemployment, loneliness, and loss of loved ones begin to impact the population at large. If the population rate of depression is elevated for a reason unrelated to COVID-19, that could spell even further trouble for Americans' mental health.
124
+
125
+ ## 5 Ethical considerations
126
+
127
+ As with all works related to public health and mental health, it is important that we consider the ethical implications of our research and make explicit the ethical justification for the work. In particular, work of this kind, namely public health surveillance, requires special attention because, while disease surveillance is foundational to good public health practice (Fairchild et al., 2007), it also forgoes the notion of informed consent. When discussing a health issue rife with stigma such as depression, the concern about a researcher mismanaging data and revealing the public health information of individuals without their consent is amplified.
128
+
129
+ Klingler et al. (2017) enumerate eight broad categories of ethical arguments by which researchers justify forgoing the traditional informed consent requirement for conducting public health surveillance. Of those, we argue that our work satisfies the effectiveness, necessity, proportionality, and least infringement requirements. First, with respect to effectiveness, this study is, to our knowledge, the first quantitative estimate of the impact of COVID-19 on the population rate of depression. That makes this data valuable to the public health and mental health communities. Second, with respect to necessity, it is noteworthy that this approach is minimally intrusive, requiring no contact with the individuals whose comments are used and no interruption of their usage of the Reddit service. Third, for proportionality, it is important to note that no individual user-level data was shared at any time during this research, and that the identities of Reddit users are hidden behind pseudonyms, offering them an additional layer of protection. Fourth and finally, the work carefully considered the notion of least infringement, collecting only data that was necessary for the analysis herein.
130
+
131
+ Additionally, in a departure from traditional practices in the NLP community, the data underlying this work will only be shared with researchers who both (1) provide a research design or other public-health justification for the use of the data and (2) agree to take the necessary efforts to secure the data.
132
+
133
+ Ultimately, we view this work as ethically justified based on the precautions noted above and the potentially large increase in population-level depression against which this research warns. If depression is, as we predict, to impact 15 million more Americans through 2020 than in previous years, advance warning is valuable to the American mental-health system.
134
+
135
+ ## 6 Conclusion
136
+
137
+ In this paper we show the effectiveness of an LSTM text classifier using fastText word embeddings at predicting user-level depression and use that classifier to estimate the population rate of depression in April 2020, in the midst of the COVID-19 pandemic. We estimate that through the first four months of 2020, the population rate of depression is up $\approx {50}\%$, corresponding to 15 million more depressed Americans. This analysis suffers from a lack of data and will strengthen as more data becomes available. Additional research is needed to confirm or contradict the results presented here, and will be especially valuable when the adjusted population rates of depression are known.
138
+
139
+ ## References
140
+
141
+ Brooke A Ammerman, Taylor A Burke, Ross Jacobucci, and Kenneth McClure. 2020. Preliminary investigation of the association between covid-19 and suicidal thoughts and behaviors in the U.S.
142
+
143
+ Michael Barthel, Galen Stocking, Jesse Holcomb, and Amy Mitchell. 2016. Reddit news users more likely to be male, young and digital in their news preferences. shorturl.at/chsG8. Accessed: 2020-04-07.
144
+
145
+ Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
146
+
147
+ Qian Hui Chew, Ker Chiah Wei, Shawn Vasoo, and Hong Choon Chua. 2020. Narrative synthesis of psychological and coping responses towards emerging infectious disease outbreaks in the general population: practical considerations for the covid-19 pandemic. Singapore medical journal.
148
+
149
+ Glen Coppersmith, Mark Dredze, Craig Harman, Kristy Hollingshead, and Margaret Mitchell. 2015. CLPsych 2015 shared task: Depression and PTSD on Twitter. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 31-39.
150
+
151
+ Pim Cuijpers and Filip Smit. 2002. Excess mortality in depression: a meta-analysis of community studies. Journal of affective disorders, 72(3):227-236.
152
+
153
+ Munmun De Choudhury, Michael Gamon, Scott Counts, and Eric Horvitz. 2013. Predicting depression via social media. In Seventh international AAAI conference on weblogs and social media.
154
+
155
+ Li Duan and Gang Zhu. 2020. Psychological interventions for people affected by the covid-19 epidemic. The Lancet Psychiatry, 7(4):300-302.
156
+
157
+ Amy L Fairchild, Ronald Bayer, James Colgrove, and Daniel Wolfe. 2007. Searching eyes: privacy, the state, and disease surveillance in America, volume 18. Univ of California Press.
158
+
159
+ Jonathan Fine. 2006. Language in psychiatry: A handbook of clinical practice. Equinox London.
160
+
161
+ Devin Gaffney and J Nathan Matias. 2018. Caveat emptor, computational social science: Large-scale missing data in a widely-published reddit corpus. PLoS ONE, 13(7).
162
+
163
+ Sharath Chandra Guntuku, David B Yaden, Margaret L Kern, Lyle H Ungar, and Johannes C Eichstaedt. 2017. Detecting depression and mental illness on social media: an integrative review. Current Opinion in Behavioral Sciences, 18:43-49.
164
+
165
+ Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
168
+
169
+ Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.
170
+
171
+ Ronald C Kessler, Shanyang Zhao, Dan G Blazer, and Marvin Swartz. 1997. Prevalence, correlates, and course of minor depression and major depression in the national comorbidity survey. Journal of affective disorders, 45(1-2):19-30.
172
+
173
+ Katja Kleine-Budde, Romina Müller, Wolfram Kawohl, Anke Bramesfeld, Jörn Moock, and Wulf Rössler. 2013. The cost of depression - a cost analysis from a large database. Journal of affective disorders, 147(1-3):137-143.
174
+
175
+ Corinna Klingler, Diego Steven Silva, Christopher Schuermann, Andreas Alois Reis, Abha Saxena, and Daniel Strech. 2017. Ethical issues in public health surveillance: a systematic qualitative review. BMC Public Health, 17(1):295.
176
+
177
+ Alex Leavitt. 2015. "This is a throwaway account": Temporary technical identities and perceptions of anonymity in a massive online community. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pages 317-327.
178
+
179
+ Rachel C Manos, Laura C Rusch, Jonathan W Kanter, and Lisa M Clifford. 2009. Depression self-stigma as a mediator of the relationship between depression severity and avoidance. Journal of Social and Clinical Psychology, 28(9):1128-1143.
180
+
181
+ Warwick J McKibbin and Roshen Fernando. 2020. The global macroeconomic impacts of covid-19: Seven scenarios.
182
+
183
+ Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.
184
+
185
+ Ahmed Husseini Orabi, Prasadith Buddhitha, Mahmoud Husseini Orabi, and Diana Inkpen. 2018. Deep learning for depression detection of twitter users. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, pages 88-97.
186
+
187
+ /u/HurricaneXriks. 2016. Results of the reddit demographics survey 2016. https://imgur.com/ gallery/cPzlB. Accessed: 2020-04-07.
188
+
189
+ Cuiyan Wang, Riyu Pan, Xiaoyang Wan, Yilin Tan, Linkang Xu, Cyrus S Ho, and Roger C Ho. 2020. Immediate psychological responses and associated factors during the initial stage of the 2019 coronavirus disease (covid-19) epidemic among the general population in China. International Journal of Environmental Research and Public Health, 17(5):1729.
192
+
193
+ JT Wolohan, Misato Hiraga, Atreyee Mukherjee, Zeeshan Ali Sayyed, and Matthew Millard. 2018. Detecting linguistic traces of depression in topic-restricted text: attending to self-stigmatized depression with NLP. In Proceedings of the First International Workshop on Language Cognition and Computational Models, pages 11-21.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/2f70OXlGQMd/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,185 @@
1
+ § ESTIMATING THE EFFECT OF COVID-19 ON MENTAL HEALTH: LINGUISTIC INDICATORS OF DEPRESSION DURING A GLOBAL PANDEMIC
2
+
3
+ JT Wolohan
4
+
5
+ Booz Allen Hamilton
6
+
7
+ wolohan-john@bah.com
8
+
9
+ § ABSTRACT
10
+
11
+ This preliminary analysis uses a deep LSTM neural network with fastText embeddings to predict population rates of depression on Reddit in order to estimate the effect of COVID-19 on mental health. We find that year over year, depression rates on Reddit are up ${50}\%$, suggesting a 15-million person increase in the number of depressed Americans and a $7.5 billion increase in depression-related spending. This finding comes at a time when uncertainty about the impact of COVID-19 on physical and economic health is still high, and suggests that in addition to those factors, mental health must be considered as well. As data becomes available, further research will be needed to validate the results of this preliminary investigation.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ The COVID-19 pandemic has already taken a toll on the physical health of people around the world, and while the impact of the novel coronavirus on our mental health is less well understood, it is expected to be negative (Wang et al., 2020; Ammerman et al., 2020). Even those who never get sick during a pandemic can experience a multitude of psychological stressors during a disease outbreak, and those stressors can persist well past the end of the outbreak (Chew et al., 2020). The popular media is aware of the necessity for otherwise healthy people to emphasize "self care" (small acts intended to maintain one's mental health or relieve stress) during these uncertain times. This analysis attempts to quantify the impact that COVID-19 has had to date on the population rate of depression through the use of state-of-science depression prediction models and data from the popular social media site Reddit.
16
+
17
+ This paper continues an established line of research in the application of natural language processing techniques to the disease of depression (Guntuku et al., 2017), and mixes it with the rapidly emerging field of COVID-19 research (McKibbin and Fernando, 2020; Duan and Zhu, 2020). Research in the former area is centered around the notion that language use reflects the thought processes of the speaker and that by assessing the words people use, we can gain insight into their thought processes (Fine, 2006). From this, it follows that text classification approaches such as the use of long short-term memory networks (Hochreiter and Schmidhuber, 1997) and word embeddings (Mikolov et al., 2013), such as fastText (Bojanowski et al., 2017), can be used to classify people's mental health status based on their speech. Depression has been studied widely in this way due to its grave impact on those it afflicts (De Choudhury et al., 2013; Coppersmith et al., 2015). Indeed, even subclinical levels of depression have been shown to reduce quality of life in meaningful and measurable ways (Cuijpers and Smit, 2002).
18
+
19
+ § 2 METHOD
20
+
21
+ In this paper, we use data from Reddit to measure the potential impact of COVID-19 on depression. We do this in two parts: first, we extend the approach from Wolohan et al. (2018), using an LSTM model to improve the accuracy of depression prediction on social media; then, with this model, we analyze a new dataset, composed of the Reddit comments of 20,000 users across the first six months of 2018, 2019, and ${2020}^{1}$, in order to estimate the population rate of depression during the COVID-19 pandemic.
22
+
23
+ § 2.1 DATA
24
+
25
+ For these analyses we use two datasets of Reddit comments, aggregated at the user level. The first dataset, the Off-Topic Depression dataset, comes from Wolohan et al.; the second is a novel dataset created for this task: the Reddit Pandemic Depression dataset.
26
+
27
+ ${}^{1}$ This paper was written in April 2020. More data will be gathered as it becomes available.
28
+
29
+ The Off-Topic Depression dataset contains 141 million words from Reddit comments, aggregated by 11,000 users. The text in this dataset was not allowed to come from subreddits (the site-wide term for themed message boards) where discussion of depression or related issues was expected. Therefore, this dataset contains only "off-topic" text: text not on the subject of depression. This step is important for being able to detect depression among the general public, many of whom are reluctant to talk about their depressive symptoms due to depression-related stigma (Manos et al., 2009). We label users as either "depressed" or "not-depressed" based on self-disclosure behavior: authoring posts in depression-related subreddits. The Off-Topic Depression dataset is useful for training a model because of the available baseline, but contains no data from the recent COVID-19 pandemic. This makes it insufficient to estimate the impact of COVID-19 on population rates of depression.
30
+
31
+ § 2.1.1 PANDEMIC DEPRESSION DATASET
32
+
33
+ The Pandemic Depression dataset contains 23 million words generated by 20,000 users over three years. The users for this dataset were selected in a similar fashion to the users from the Off-Topic Depression dataset: a scrape of submission authors to the subreddit r/AskReddit was performed to gather a set of potential users, and then a random subset from this set was taken. During the scrape, 29 million users were considered. From that sample, 20,000 were randomly selected for inclusion in the study. This approach attempts to gather "neutral" Reddit users, who are not necessarily associated with any particular subreddit or community of subreddits, and would therefore have no bias towards or away from depression. The data for the Pandemic Depression dataset is broken up by the time of users' activity. We use only the first six months of 2018 and 2019 and the first four months of ${2020}^{2}$.
34
+
35
+ § 2.2 DEEP LSTM WITH FASTTEXT
36
+
37
+ As part of this analysis we trained a deep long short-term memory neural network. The network we used contains five layers: a fastText (Joulin et al., 2016) embedding layer, three LSTM layers, and an output layer. We trained and evaluated the LSTM on the Off-Topic Depression dataset, using about 7,700 users for training and about 4,000 users for testing. This totalled about 100 million words for training and 40 million for testing.
38
+
39
+ Month | 2018 | 2019 | 2020
+ Jan | 1.6 mil | 5.5 mil | 560k
+ Feb | 475k | 1.8 mil | 860k
+ Mar | 335k | 1.2 mil | 2.4 mil
+ Apr | 258k | 1 mil | 5.6 mil
+ May | 211k | 700k | *
+ Jun | 210k | 470k | *
+ * Data not yet available.
+
66
+ Table 1: Pandemic Depression dataset text by month.
67
+
68
+ The first layer of the model, the embedding layer, learns weights that take advantage of the specific 300-dimensional fastText vectors. The second through fourth layers of the model are identical LSTM layers with a ${20}\%$ dropout rate. The LSTM layers use each word as a step. The fifth and final layer of the model is a single-node dense layer with sigmoid activation used for predicting the class of the user. A depiction of this network and the minimal preprocessing can be seen in Figure 1.
69
+
70
+ Text preprocessing for the LSTM was minimal. The vocabulary for all documents was limited to the most-common 10,000 words. Each user was truncated down to or zero-padded up to 750 words, as necessary. We performed no other preprocessing, such as misspelling correction or internet-speak normalization.
71
+
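+ A minimal sketch of a network matching this description, assuming a Keras-style implementation, is given below; the LSTM hidden size and the initialization of the embedding layer from the 300-dimensional fastText vectors are assumptions, as neither is fully specified here.

```python
# Sketch of the five-layer network described above; the LSTM hidden size (128)
# and the use of randomly initialized embedding weights are assumptions.
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

VOCAB_SIZE, EMBED_DIM, SEQ_LEN = 10_000, 300, 750  # values stated in the text

model = Sequential([
    # In the paper this layer builds on 300-dimensional fastText vectors.
    Embedding(input_dim=VOCAB_SIZE, output_dim=EMBED_DIM),
    LSTM(128, return_sequences=True, dropout=0.2),
    LSTM(128, return_sequences=True, dropout=0.2),
    LSTM(128, dropout=0.2),
    Dense(1, activation="sigmoid"),  # depressed / not-depressed
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.build(input_shape=(None, SEQ_LEN))
model.summary()
```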
72
+ § 2.3 COMPARATIVE TIME-SERIES ANALYSIS
73
+
74
+ In order to assess the impact of the COVID-19 pandemic on language use, we perform a comparative time-series analysis of three periods: two six-month periods from before the pandemic, January through June of 2018 and of 2019 inclusive, and one four-month period from January 2020 to April ${2020}^{3}$. We analyzed the same users for all periods. Rates of depression were estimated for each period by classifying each user as either depressed or not-depressed with the LSTM classifier described in section 2.2.
75
+
76
+ It is important to note here that Reddit activity is inconsistent and that, unlike other social media platforms such as Twitter or Facebook, the use of a single account through time is not encouraged by the platform (Leavitt, 2015). This results in many accounts being abandoned and the resulting 2020 subset of data being smaller in terms of total active users than the subset of data for 2019. A more appropriate means of performing this analysis would be to select a random sample of users known to be active during this time period in 2020. Requiring the same users for all periods may introduce bias if users who are likely to be active over long periods of time have a bias towards or away from depression.
77
+
78
+ ${}^{2}$ Gaffney and Matias (2018) note that there are issues with historical analysis of Reddit data, perhaps including users deleting depressive content post hoc. Addressing those concerns is outside the scope of this preliminary investigation.
79
+
80
+ ${}^{3}$ Data from April was only available up to April 4; however data is included because a large enough volume of data was accessible: $\approx {50},{000}$ words.
81
+
82
83
+
84
+ Figure 1: Deep LSTM model for depression prediction.
85
+
86
+ § 3 PRELIMINARY RESULTS
87
+
88
+ In this section, we review the preliminary results. We find that an LSTM with fastText embeddings outperforms the baseline approach in Wolohan et al. Additionally, the LSTM indicates that the population rate of depression may be up by ${50}\%$ in the first four months of 2020 when compared to the first four months of 2019 and 2018.
89
+
90
+ § 3.1 LSTM WITH FASTTEXT EMBEDDINGS
91
+
92
+ Comparing the new model for off-topic depression prediction, a deep LSTM with fastText word embeddings, to the model previously used by Wolohan et al., we find that the LSTM outperforms the previous baseline approach on the relevant measures of AUC and F1 score. The LSTM achieved an AUC of 0.93 and an F1 score of 0.92, surpassing the baseline by 18 and 24 points, respectively. The results for this LSTM are competitive with state-of-the-art deep-learning approaches for this task on similar datasets (see: Orabi et al. 2018; Guntuku et al. 2017).
93
+
94
+ Model | AUC | F1
+ Wolohan et al. | 0.75 | 0.68
+ LSTM + fastText | 0.93 | 0.92
+
106
+ Table 2: LSTM performance versus baseline.
107
+
108
+ § 3.2 COMPARATIVE TIME-SERIES ANALYSIS
109
+
110
+ With the LSTM model improved to 0.93 AUC, we then applied the LSTM to the Pandemic Depression dataset. In doing this, we assessed whether or not users' language indicated depression for the first six months of 2018 and 2019, and the first four months of 2020. We found that for 2018 and 2019, the user population under study demonstrated a steady rate of depression around ${33}\% \pm 4\%$. For 2020, we found the population rate of depression to average ${49}\%$, with individual months ranging from 42% to 52%.
111
+
112
+ In the first six months of 2018 and 2019, the LSTM suggested a depression rate in the low ${30}\%$ range with only two exceptions: May 2018 and January 2019. May 2018 had the highest estimated depression rate (38% of all users), while January 2019 had the lowest (29% of all users).
113
+
114
+ In 2020, estimated depression rates among Reddit users are again consistent; however, they are consistently about 20 percentage points higher than in 2018 and 2019. Of the four months, only April 2020 stands out with a lower depression rate: ${42}\%$. At the time of this writing, the data for April 2020 are incomplete.
115
+
116
+ § 4 DISCUSSION
117
+
118
+ With the global COVID-19 pandemic wreaking havoc on medical systems and economies worldwide, 2020 will be a year in which many people go through significant hardships. If the analysis herein is to be believed, then the fear that many have about a global deterioration in mental health is likely to be a reality as well. But should we believe the analysis presented here, and what are the implications if we do?
119
+
120
121
+
122
+ Figure 2: Sampled word count, COVID-19 cases, and modeled depression rate by week.
123
+
124
+ Month | 2018 | 2019 | 2020 | $\Delta$
+ January | .32 | .29 | .52 | +79%
+ February | .34 | .32 | .51 | +69%
+ March | .31 | .32 | .49 | +53%
+ April | .33 | .34 | .42 | +24%
+ May | .38 | .32 | - | -
+ June | .34 | .34 | - | -
+ Average | .34 | .32 | .49 | +53%
+
151
+ Table 3: Estimated depression rate of Reddit users for select months.
152
+
153
+ § 4.1 MODEL EFFICACY
154
+
155
+ There are two reasons that we should tentatively believe the AI-based assessments of population-level depression on Reddit. First, the ${32}\%$ population rate of depression estimated by the model is plausible, given (1) that the LSTM is designed to detect both clinical and subclinical depression and (2) that Reddit has a much younger (read: depression-prone) population than the U.S. at large. Second, the steadiness of the numbers over time is encouraging.
156
+
157
+ First among the reasons that one would avoid dismissing these findings out of hand is the consistency between the numbers projected for 2018 and 2019 and what one would expect for a joint population rate of clinical and subclinical depression on Reddit. According to the U.S. National Institutes of Health (NIH), adult rates of depression range up to ${13}\%$ depending on the demographic ${}^{4}$. At the upper bound of 13% is the 18-25 year-old demographic. More than half of the Reddit-using population falls into this demographic ${}^{5}$. Further, ${25}\%$ of Reddit users are 17 or younger, suggesting they might also have an increased rate of depression. NIH estimates that approximately ${17}\%$ of adolescents between 15 and 17 will experience at least one major depressive episode each year. If one takes ${13}\%$ as the population rate of depression for Reddit users and doubles it to include cases of subclinical depression (Kessler et al., 1997), one is left with a ${26}\%$ rate of total depression for Reddit users for the first six months of 2019. The LSTM predicts a 32% rate of depression for the first six months of 2019, a reasonable amount of error given the accuracy measures in Table 2.
158
+
159
+ Second, the steadiness of the rates of depression in the two control years is encouraging. There is little variation across the first six months of both 2018 and 2019, with only 9 percentage points separating the month with the greatest estimated rate of depression, May '18, from the lowest month, Jan. '19.
160
+
161
+ § 4.2 ESTIMATING THE EFFECT OF THE COVID-19 PANDEMIC ON MENTAL HEALTH
162
+
163
+ If we assume that the rates of population-wide clinical and subclinical depression are to increase by ${50}\%$ in 2020, either as a result of the COVID-19 pandemic or otherwise, then we would expect to see the population rate of clinical depression increase from $7\%$ to ${10}\%$, and a similarly sized increase in subclinical depression. In the U.S., this would amount to 15 million more adults suffering from clinical or subclinical depression. This increased rate of depression would amount to a $\$ {7.5}$ billion increase in healthcare spending, assuming $\$ {500}$ per person per year (Kleine-Budde et al., 2013). This assumes that the average case of depression is about as severe as cases are currently.
164
+
165
+ ${}^{4}$ All population rates of depression come from the NIH: https://www.nimh.nih.gov/health/statistics/major-depression.shtml
166
+
167
+ ${}^{5}$ There are two sources for the demographics of Reddit, both from 2016: Barthel et al. (2016) or /u/HurricaneXriks (2016). The latter is used here.
168
+
169
+ Importantly, there are reasons to believe this increased rate is not yet associated with COVID-19. As we can see in Figure 2, the estimated depression rate appears to be declining in April at a time when cases and deaths in the U.S. are rising. Speculatively, this may be associated with the phase of the pandemic that the U.S., and therefore most Reddit users, are currently experiencing. Many of these users will be under stay-at-home orders; however, the full toll of the pandemic, including economic destruction and loss of life, has not yet been felt. Alternatively, we must consider that the active population of Reddit may be changing as stay-at-home orders and unemployment furnish people with additional free time to use the internet.
170
+
171
+ One would expect depression rates to increase as stressors such as unemployment, loneliness, and loss of loved ones begin to impact the population at large. If the population rate of depression is elevated for a reason unrelated to COVID-19, that could spell even further trouble for Americans' mental health.
172
+
173
+ § 5 ETHICAL CONSIDERATIONS
174
+
175
+ As with all works related to public health and mental health, it is important that we consider the ethical implications of our research and make explicit the ethical justification for the work. In particular, work of this kind, namely public health surveillance, requires special attention because, while disease surveillance is foundational to good public health practice (Fairchild et al., 2007), it also forgoes the notion of informed consent. When discussing a health issue rife with stigma such as depression, the concern about a researcher mismanaging data and revealing the public health information of individuals without their consent is amplified.
176
+
177
+ Klingler et al. (2017) enumerate eight broad categories of ethical arguments by which researchers justify forgoing the traditional informed consent requirement for conducting public health surveillance. Of those, we argue that our work satisfies the effectiveness, necessity, proportionality, and least infringement requirements. First, with respect to effectiveness, this study is, to our knowledge, the first quantitative estimate of the impact of COVID-19 on the population rate of depression. That makes this data valuable to the public health and mental health communities. Second, with respect to necessity, it is noteworthy that this approach is minimally intrusive, requiring no contact with the individuals whose comments are used and no interruption of their usage of the Reddit service. Third, for proportionality, it is important to note that no individual user-level data was shared at any time during this research, and that the identities of Reddit users are hidden behind pseudonyms, offering them an additional layer of protection. Fourth and finally, the work carefully considered the notion of least infringement, collecting only data that was necessary for the analysis herein.
178
+
179
+ Additionally, in a departure from traditional practices in the NLP community, the data underlying this work will only be shared with researchers who both (1) provide a research design or other public-health justification for the use of the data and (2) agree to take the necessary efforts to secure the data.
180
+
181
+ Ultimately, we view this work as ethically justified based on the precautions noted above and the potentially large increase in population-level depression against which this research warns. If depression is, as we predict, to impact 15 million more Americans through 2020 than in previous years, advance warning is valuable to the American mental-health system.
182
+
183
+ § 6 CONCLUSION
184
+
185
+ In this paper we show the effectiveness of an LSTM text classifier using fastText word embeddings at predicting user-level depression and use that classifier to estimate the population rate of depression in April 2020, in the midst of the COVID-19 pandemic. We estimate that through the first four months of 2020, the population rate of depression is up $\approx {50}\%$, corresponding to 15 million more depressed Americans. This analysis suffers from a lack of data and will strengthen as more data becomes available. Additional research is needed to confirm or contradict the results presented here, and will be especially valuable when the adjusted population rates of depression are known.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/EC1vWkJXpjy/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,311 @@
1
+ # Self-supervised context-aware Covid-19 document exploration through atlas grounding
2
+
3
+ Dusan Grujicic, Gorjan Radevski*, Tinne Tuytelaars, and Matthew B. Blaschko
4
+
5
+ ESAT-PSI, KU Leuven, Kasteelpark Arenberg 10, 3001 Leuven, Belgium
6
+
7
+ \{firstname.lastname\}@esat.kuleuven.be
8
+
9
+ ## Abstract
10
+
11
+ In this paper, we aim to develop a self-supervised grounding of Covid-related medical text based on the actual spatial relationships between the referred anatomical concepts. More specifically, we learn to project sentences into a physical space defined by a three-dimensional anatomical atlas, allowing for a visual approach to navigating Covid-related literature. We design a straightforward and empirically effective training objective to reduce the curated data dependency issue. We use BERT as the main building block of our model and perform a comparison of two BERT variants pre-trained on general-purpose text - ${\mathrm{{BERT}}}_{\text{BASE }}$ and ${\mathrm{{BERT}}}_{\text{SMALL }}$ , with three domain-specific pre-trained alternatives - BIOBERT, SCIBERT and CLINICALBERT. We perform a quantitative analysis that demonstrates that the model learns a context-aware mapping while being trained with self-supervision in the form of medical term occurrences. We illustrate two potential use-cases for our approach, one in interactive, $3\mathrm{D}$ data exploration, and the other in document retrieval. To accelerate research in this direction, we make public all trained models, the data we use, and our codebase. Finally, we also release a web tool for document retrieval and a visualization tool.
12
+
13
+ ## 1 Introduction
14
+
15
+ The quantity of available COVID-19 articles on the internet increases every day. Nonetheless, this literature remains scarce compared to general-domain datasets. Annotating medical data requires the expertise of physicians, and is therefore cost-prohibitive, especially during a pandemic. As a consequence of the lack of available structured data in the medical domain, the machine learning community has mostly focused on developing general-purpose text models.
16
+
17
+ ![01963dab-cdef-75ea-b9b0-110f3aaf79b2_0_906_628_418_409_0.jpg](images/01963dab-cdef-75ea-b9b0-110f3aaf79b2_0_906_628_418_409_0.jpg)
18
+
19
+ Figure 1: Grounding of the sentence "The total number of AM in the lung at ${48}\mathrm{{hr}}$ was significantly $\left( {p < {0.05}}\right)$ reduced as compared to PBS controls” (Hartwig et al., 2014) with the CLINICALBERT model. The dark blue represents the voxels of the left lung, while the light blue area represents the outline of the body. The star denotes the model prediction. See Section 6 for details.
20
+
21
+ The development of BERT (Devlin et al., 2018), and the increased popularity of transfer learning in natural language processing (NLP), have prompted notable works that aim to leverage publicly available medical and scientific articles to develop domain specific pre-trained language models (Lee et al., 2019; Alsentzer et al., 2019; Beltagy et al., 2019). These approaches train models that learn universal sentence embeddings aimed at capturing the semantics and structure of the text data.
22
+
23
+ In contrast, we focus on mapping text to locations in a 3D model of the human body (Figure 1), where the physical proximity of objects reflects their functional and semantic relatedness to a significant degree. Such an embedding is advantageous for several reasons: (i) It allows us to visualize medical text in physically meaningful space, finding clusters of documents organized by anatomy (Figure 5). (ii) It allows us to search for and retrieve text by navigating through a physical space. (iii) There is a statistical advantage to modelling medical text in the 3D space as anatomically related substructures tend to be close to one another.
24
+
25
+ ---
26
+
27
+ *equal contribution
28
+
29
+ ---
30
+
31
+ ![01963dab-cdef-75ea-b9b0-110f3aaf79b2_1_271_213_475_323_0.jpg](images/01963dab-cdef-75ea-b9b0-110f3aaf79b2_1_271_213_475_323_0.jpg)
32
+
33
+ Figure 2: Cross-sections of the RGB volume (left) and the grayscale volume representing segmentation labels (right) (Pommert et al., 2001)
34
+
35
+ In the absence of semantic labels, we use term occurrences as the indication of what the text denotes. For example, the sentence: "The pancreas contains tissue with an endocrine and exocrine role" receives a target of mapping to the location of the pancreas in the 3D space.
36
+
37
+ In order to achieve the goal of grounding medical text into the physical space, a reference location for every medical term of interest is required. Such references can be obtained from a combination of a three-dimensional atlas of human anatomy and contextual information. There are multiple digital anatomical models available. The Virtual Population (Christ et al., 2009; Gosselin et al., 2014) of the IT’IS Foundation ${}^{1}$ contains anatomical models of 10 different persons obtained from MRI procedures. The Segmented Inner Organs (SIO) from the Voxel-Man project (Höhne et al., 2001; Pommert et al., 2001; Schiemann et al., 1997) ${}^{2}$ is based on the Visible Human Male (U.S. National Library of Medicine ${}^{3}$) and contains 202 labeled anatomical objects within the human torso. The model consists of 774 slices obtained by CT and MRI imaging, where each slice contains a cryosection image, a CT image and a segmentation label image where the grayscale level corresponds to a segmentation label of the tissue (Figure 2). In this work, we build on the atlas of Pommert et al. (2001), though the approach is readily extended to other atlases.
38
+
39
+ ## 2 Related Work
40
+
41
+ There are many works that deal with sentence grounding in a limited space, albeit not in the physical 3D space as we do. Most of the approaches exploit multimodal data and limit the projection space to either images or videos (Akbari et al., 2019; Kiela et al., 2017; Chen et al., 2019; Javed et al., 2018; Xiao et al., 2017). These works overcome expensive bounding box or pixel-level annotation, but they cannot be extended to the unsupervised setting where the data are not paired, but rather raw unpaired sentences or images. Even though the image-caption pairs without any region label are commonly referred to as weakly-supervised data in the literature, most of these works have training procedures that are dependent on curated datasets which are hard to obtain.
42
+
43
+ The works of Weyand et al. (2016); Ebrahimi et al. (2018) are probably the most similar to ours. In PlaNet, Weyand et al. (2016) attempt to classify images to a distinct set of geolocations. To do so, they train their model on a dataset of geotagged images where each image belongs to a single class: the index of the partitioned geolocation world cell. In contrast to our approach, the task is formulated as a classification problem where the physical distances and relationships between cells do not affect the way the probability distribution over them is learned. We frame our approach as a regression problem, as the spatial closeness of anatomical concepts implies a degree of semantic and functional affinity. This helps us reason about our approach: in addition to knowing whether the grounding is correct or not, we have insight into how physically close we are to the target.
44
+
45
+ A similar approach, but more related to our work as it also deals with text, is the work of Ebrahimi et al. (2018), where the extracted text representations and metadata were used to classify tweets by geographic region in a fully-supervised setting. Ebrahimi et al. (2018) utilize machine learning to ground sentences in the world atlas. Yet again, their approach is dependent on a carefully structured dataset and the availability of explicit annotations. In our work, we attempt to go one level further and learn to ground sentences by only using unlabeled raw text data, obtained from medical journal articles, while preserving the spatial structure of the sentences. Our supervision comes in the form of implicit organ voxel points in a human atlas space, and words/phrases that make reference to those organs. To the best of our knowledge, so far, there have not been works that attempt to ground sentences in a 3D human atlas space using strictly self-supervision. Additionally, a number of works have applied natural language processing techniques to Covid-19 articles (Zhang et al., 2020; Wang et al., 2020; Liang and Xie, 2020); however, none of them aim to ground text in the $3\mathrm{D}$ atlas space.
46
+
47
+ ---
48
+
49
+ ${}^{1}$ www.itis.swiss/
50
+
51
+ ${}^{2}$ www.voxel-man.com/
52
+
53
+ ${}^{3}$ www.nlm.nih.gov/research/visible/
54
+
55
+ ---
56
+
57
+ ## 3 Methods
58
+
59
+ In this section, we describe the model we use, which is entirely based on BERT, the training objective, and the task that we address in this paper.
60
+
61
+ ### 3.1 The model
62
+
63
+ Bidirectional Encoder Representations from Transformers - BERT (Devlin et al., 2018) is a pre-trained Transformer (Vaswani et al., 2017) based language model. Before BERT, the only way to train deep bidirectional language models was to train a separate forward and backward language model, and in the end, concatenate their learned representations (Peters et al., 2018). BERT alleviates that problem by introducing the concept of Masked Language Modelling (MLM), previously known as cloze task (Taylor, 1953). The scalability of BERT, combined with MLM, led to the increasing popularity of such language models (Keskar et al., 2019; Liu et al., 2019; Lample and Conneau, 2019).
64
+
65
+ Due to the train-test discrepancy that occurs by including the [MASK] token in the MLM, other approaches train transformers in an autoregressive manner (Radford et al., 2019; Yang et al., 2019; Dai et al., 2019). In our work, we use BERT as a backbone in our model due to its simplicity and applicability in a wide range of domains. As we shall see later when we describe the task of text-atlas grounding, the existence of the [MASK] token in the vocabulary can be seamlessly incorporated in our pipeline to fit within the task we solve. In our work, we perform an ablation study with five different pre-trained BERT models. Following the standard practice (Devlin et al., 2018), we take the representation of the [CLS] token as a general representation of the whole sequence. Finally, to obtain the $3\mathrm{D}$ atlas grounding for a piece of medical text, we project BERT's sentence embedding with a linear layer, mapping from BERT's hidden space to the $3\mathrm{D}$ space.
66
+
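+ A minimal sketch of this architecture, assuming the HuggingFace transformers API and a generic pre-trained checkpoint name, is shown below: the [CLS] representation is projected to a 3D atlas coordinate with a single linear layer. The checkpoint name is only an example; the paper compares several pre-trained variants.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class AtlasGrounder(nn.Module):
    """BERT sentence encoder with a linear head mapping [CLS] to 3D coordinates."""
    def __init__(self, pretrained_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(pretrained_name)
        self.project = nn.Linear(self.bert.config.hidden_size, 3)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls_embedding = outputs.last_hidden_state[:, 0]  # [CLS] token representation
        return self.project(cls_embedding)               # (batch, 3) atlas coordinates
```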
67
+ ### 3.2 Text-to-atlas mapping objective
68
+
69
+ Our final objective is to ground medical texts to the anatomic atlas space using only self-supervision in the form of organ appearances in each sentence. More concretely, we have a dataset of sentences, where for each sentence, we can detect the appearances of terms denoted in the human atlas. Then, our desired scenario is that sentences that share the same semantics are mapped in the same region in the human atlas space regardless of whether they make explicit reference to an organ. To achieve that, we tokenize each of the training sentences (Loper and Bird, 2002) and stochastically mask each of the keywords. Each of the keywords (organs) is masked with 0.5 probability. In other words, assuming that we have the sentence "In addition, the kidney mRNA transcript level and serum activity of XOD in the infected group was significantly higher than that of the control group at 8, 15 and ${22}\mathrm{{dpi}}$ (p < 0.05)" (Lin et al., 2015), on average 50% of the time we will replace it with "In addition, the [MASK] mRNA transcript level and serum activity of XOD in the infected group was significantly higher than that of the control group at 8, 15 and 22 dpi $\left( {\mathrm{p} < {0.05}}\right)$ " in the current training batch. We use the [MASK] token, as it is included in BERT's default vocabulary. Next, the sentence words are joined again and tokenized using the WordPiece (Wu et al., 2016) tokenization method as per Devlin et al. (2018). By following the above-mentioned procedure, we are able to obtain context-dependent grounding, such that the model can ground sentences purely based on their context in cases where none of the organ references are present.
70
+
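+ The stochastic keyword masking can be implemented in a few lines, as sketched below; the token list, keyword set, and masking probability used in the example are illustrative, and multi-word organ phrases would need additional handling.

```python
import random

def mask_organ_keywords(tokens, organ_keywords, p=0.5):
    """Replace each organ keyword token with [MASK] with probability p."""
    return ["[MASK]" if tok.lower() in organ_keywords and random.random() < p else tok
            for tok in tokens]

sentence = "In addition , the kidney mRNA transcript level was significantly higher"
print(" ".join(mask_organ_keywords(sentence.split(), {"kidney", "lung", "liver"})))
```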
71
+ ### 3.3 Minimum organ distance loss
72
+
73
+ Ideally, if we had exactly one organ occurrence per sentence, and if we could associate each organ with a single point in the $3\mathrm{D}$ space, we could simply minimize the mean squared error between the $3\mathrm{D}$ coordinates of the organ point $y$ and the model prediction $\widehat{y}$. However, a sentence can contain multiple organ occurrences, while organs themselves are distributed in nature and are characterized by a set of points in 3D space, which capture their position, size and shape. Therefore, the loss function needs to accommodate having more than one point as the target for regression.
74
+
75
+ We calculate the Euclidean distances between the prediction and each organ point, and the soft-min (soft-max across the inputs reversed in sign) across these squared distances as weights for the contributions of individual points. The loss contribution of an organ point (denoted as ${PC}$ ) is the product of its squared distance from the predicted point and its weight:
76
+
77
+ $$
78
+ {PC}\left( {y}_{p}\right) = \frac{\parallel {y}_{p} - \widehat{y}{\parallel }_{2}^{2}\exp \left( {-{\gamma }_{1}{\begin{Vmatrix}{y}_{p} - \widehat{y}\end{Vmatrix}}_{2}^{2}}\right) }{\mathop{\sum }\limits_{{i = 1}}^{P}\exp \left( {-{\gamma }_{1}{\begin{Vmatrix}{y}_{i} - \widehat{y}\end{Vmatrix}}_{2}^{2}}\right) }, \tag{1}
79
+ $$
80
+
81
+ ![01963dab-cdef-75ea-b9b0-110f3aaf79b2_3_203_177_598_407_0.jpg](images/01963dab-cdef-75ea-b9b0-110f3aaf79b2_3_203_177_598_407_0.jpg)
82
+
83
+ Figure 3: Loss isocurves around kidney and lung point clouds projected into 2D using PCA for visualization purposes.
84
+
85
+ where $\widehat{y}$ is the model prediction, ${y}_{p}$ is an organ point, $P$ is the total number of points that characterize a single organ and ${\gamma }_{1}$ is a temperature term. We calculate the loss for one organ (denoted as ${OL}$ ) as the sum of contributions of its points:
86
+
87
+ $$
88
+ {OL} = \mathop{\sum }\limits_{{p = 1}}^{P}{PC}\left( {y}_{p}\right) \tag{2}
89
+ $$
90
+
91
+ To avoid regressing to a point outside of the organ, we shave off the surface of the organ by performing a single binary morphological erosion (Serra, 1983) prior to computing the loss.
92
+
93
+ In the case where more than one organ is present in the sentence, we calculate the loss for each individual organ in the way described above. Then, we compute the soft-min over the set of such loss terms as contribution weights for each organ. The final loss contribution of one organ (denoted as ${OC})$ is the product between its individual loss and its contribution weight:
94
+
95
+ $$
96
+ O{C}_{o} = \frac{O{L}_{o}\exp \left( {-{\gamma }_{2}O{L}_{o}}\right) }{\mathop{\sum }\limits_{{i = 1}}^{O}\exp \left( {-{\gamma }_{2}O{L}_{i}}\right) } \tag{3}
97
+ $$
98
+
99
+ where $O$ is the total number of distinct organs appearing in the sentence, $O{L}_{i}$ is the organ loss for the $i$ th organ, and ${\gamma }_{2}$ is a temperature term. Finally, the total loss for one sample is computed by summing up the loss contributions of organs appearing in its sentence:
100
+
101
+ $$
102
+ \text{ Loss } = \mathop{\sum }\limits_{{o = 1}}^{O}O{C}_{o} \tag{4}
103
+ $$
104
+
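+ A PyTorch sketch of the loss in Equations (1)-(4) is given below, treating each organ as a tensor of voxel coordinates; tensor shapes, variable names, and the toy inputs are illustrative, while the default temperature values follow the experimental setup described later.

```python
import torch

def organ_loss(pred, organ_points, gamma1=0.33):
    """Eq. (1)-(2): soft-min weighted sum of squared distances to one organ's points.
    pred: (3,) predicted coordinate; organ_points: (P, 3) voxel coordinates."""
    d2 = ((organ_points - pred) ** 2).sum(dim=-1)   # squared distances, shape (P,)
    weights = torch.softmax(-gamma1 * d2, dim=-1)    # soft-min weights over points
    return (weights * d2).sum()

def sentence_loss(pred, organs, gamma1=0.33, gamma2=None):
    """Eq. (3)-(4): soft-min weighted sum over the organs mentioned in a sentence.
    organs: list of (P_o, 3) tensors, one per distinct organ."""
    gamma2 = 1.0 / len(organs) if gamma2 is None else gamma2
    per_organ = torch.stack([organ_loss(pred, pts, gamma1) for pts in organs])
    contributions = torch.softmax(-gamma2 * per_organ, dim=-1)
    return (contributions * per_organ).sum()

# Toy usage: one prediction, two candidate organ point clouds.
pred = torch.tensor([10.0, 20.0, 30.0])
organs = [torch.rand(50, 3) * 100, torch.rand(80, 3) * 100]
print(sentence_loss(pred, organs))
```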
105
+ ## 4 Data Collection
106
+
107
+ ### 4.1 Text Corpus
108
+
109
+ The text corpus consists of Covid-19 related articles from the Open Research Dataset Challenge (CORD-19) ${}^{4}$. The version from 20.03.2020, consisting of a csv file with metadata for 29500 papers and 13202 json files with full texts of scientific articles pertaining to Covid-19, was used for training the model. The abstracts and text bodies of full-text articles were included in the corpus and split into sentences, which constitute the samples in the dataset. Both the full-text json files and the metadata csv contain paper abstracts, and in case of a mismatch between the two, we include the abstract version that contains more characters. The sentence length was analyzed, and it was found that ${99.89}\%$ of sentences contain fewer than 128 words. In order to avoid unnecessary memory consumption during training, sentences longer than 128 words were discarded.
110
+
111
+ ### 4.2 Human Body Atlas
112
+
113
+ We utilize the Segmented Inner Organs (SIO) atlas (Pommert et al., 2001). We base the 3D atlas on the segmentation labels of the tissues in the human body, which come in the form of image slices that form a 3D voxel model of the male torso when stacked on top of one another. The stacked images from the torso represent a volume of ${573} \times {330} \times {774}$ voxels, with 1-millimeter resolution along each axis. The value of each voxel represents the segmentation label of its corresponding organ or tissue. The SIO includes a model of the human head as well, which we do not use.
114
+
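+ Under these conventions, the point cloud of a single organ can be extracted from the label volume with a one-liner, as sketched below; the variable names and the example label value are illustrative.

```python
import numpy as np

def organ_point_cloud(label_volume: np.ndarray, organ_label: int) -> np.ndarray:
    """Return an (N, 3) array of voxel indices (1 mm resolution) for one organ."""
    return np.argwhere(label_volume == organ_label).astype(np.float32)

# Toy example with a small random label volume instead of the 573 x 330 x 774 atlas.
toy_volume = np.random.randint(0, 5, size=(20, 20, 20))
print(organ_point_cloud(toy_volume, organ_label=3).shape)
```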
115
+ SIO contains a glossary of medical terms and their associated segmentation labels. A list of synonyms and closely related wordforms for each glossary term was retrieved. The ScispaCy UmlsEntityLinker (Neumann et al., 2019) was used for searching the UMLS Metathesaurus (the Unified Medical Language System) (Bodenreider, 2004) for all word forms of the SIO glossary ${}^{5}$. The parameters of the UmlsEntityLinker were kept at default values.
116
+
117
+ ---
118
+
119
+ ${}^{4}$ https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge
120
+
121
+ ---
122
+
123
+ SIO includes 202 anatomical objects with their distinct segmentation labels. Tissues such as skin, gray matter, white matter, and unclassified tissues were removed from the set of labeled terms, as they denote general medical concepts not characterized by specific compact locations in the human body. The vertebrae, bones, and muscles of the arms and legs were discarded as well. In the case of categories for bilateral organs located symmetrically on both the left and the right side of the body, which are seldom mentioned explicitly in the texts, only the atlas voxels pertaining to the left organ were kept for every bilateral pair. Atlas labels that appear infrequently in medical literature, but are functionally related to other, more frequently occurring organs, or are colloquially referred to under a single, umbrella term, were merged. The aforementioned steps reduced the list of distinct anatomical objects of interest to 67. The full list of organ removals, mergers and renamings can be found at https://github.com/gorjanradevski/macchina/.
124
+
125
+ ### 4.3 Dataset Creation
126
+
127
+ Sentences were chosen as the main units of text that are mapped to three-dimensional locations in the atlas, i.e. the samples consist of sentences and their targets in the human atlas. The voxels of one organ can be characterized by a point cloud in the atlas space, where each point represents the coordinate indices of one voxel (Figure 1).
128
+
129
+ The training set consists of sentences from a randomly chosen 70% of the documents, while the remaining ${30}\%$ of the documents were evenly distributed between the validation and the test set. Consequently, the sentences from the same document are always assigned to the same dataset split. As can be seen in Figure 4, the frequency of the words and phrases referring to the lung, liver, bronchi, stomach, and kidney is significantly higher than that of other organs. Therefore, to balance out the numbers of organ occurrences in the dataset, we include up to 8000 randomly selected sentences that contain these frequently occurring organs and discard the rest, while keeping all the sentences containing less frequently occurring organs. Some sentences contain multiple occurrences of one or different organs, meaning that an organ can still have more than 8000 occurrences in the dataset. Regardless of this, the number of sentences that contain the most frequently occurring organs is significantly reduced, whereas the sentences containing less frequently occurring organs are preserved. The organs with fewer than 100 occurrences are removed. This included 38 organs, leaving a total of 29 anatomical categories as target locations for text mapping. The sentences that do not contain words and phrases that can be associated with the SIO glossary terms are discarded.
130
+
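+ The subsampling scheme described above can be summarized as in the sketch below; the input mapping from organ names to sentence ids is a hypothetical structure used for illustration.

```python
import random

MAX_SENTENCES_PER_ORGAN = 8000  # cap applied to the most frequent organs
MIN_OCCURRENCES = 100           # organs below this threshold are dropped

def balance_dataset(sentences_by_organ):
    """sentences_by_organ: dict mapping organ name -> list of sentence ids."""
    kept = {}
    for organ, sentence_ids in sentences_by_organ.items():
        if len(sentence_ids) < MIN_OCCURRENCES:
            continue  # organ removed from the set of target categories
        if len(sentence_ids) > MAX_SENTENCES_PER_ORGAN:
            sentence_ids = random.sample(sentence_ids, MAX_SENTENCES_PER_ORGAN)
        kept[organ] = sentence_ids
    return kept

# Toy example: the rare organ is dropped, the frequent one is capped at 8000.
toy = {"lung": list(range(12000)), "pancreas": list(range(350)), "rare": list(range(40))}
print({organ: len(ids) for organ, ids in balance_dataset(toy).items()})
```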
131
+ ![01963dab-cdef-75ea-b9b0-110f3aaf79b2_4_858_174_592_424_0.jpg](images/01963dab-cdef-75ea-b9b0-110f3aaf79b2_4_858_174_592_424_0.jpg)
132
+
133
+ Figure 4: Number of occurrences of the 13 most frequent organs
134
+
135
+ ## 5 Experimental setup
136
+
137
+ For the development of our models and pipelines we used the PyTorch library (Paszke et al., 2019) together with the HuggingFace transformers (Wolf et al., 2019) package. For each of the experiments, we start with a pre-trained model and fine-tune the whole architecture. We keep a fixed learning rate of $2 \times {10}^{-5}$ and train the larger models for 20 epochs, and we increase the learning rate to $5 \times {10}^{-5}$ for the ${\mathrm{{BERT}}}_{\text{SMALL }}$ model and train it for 50 epochs. During training we clip the gradients when the global norm exceeds 2.0. For all experiments, our optimizer of choice is Adam (Kingma and Ba, 2014) and the temperature term ${\gamma }_{1}$ is fixed to 0.33. We fix the second temperature term ${\gamma }_{2}$ to $\frac{1}{N}$ where $N$ is the number of distinct organs appearing within a single training instance. As a form of early stopping, during fine-tuning we keep the model that reported the lowest distance to the nearest ground-truth voxel point on the validation set. Aside from early stopping and the dropout (Srivastava et al., 2014) layers present in BERT, we do not perform any other regularization.
138
+
139
+ ---
140
+
141
+ ${}^{5}$ ScispaCy version 0.2.3 and en_core_sci_lg pipeline
142
+
143
+ ---
144
+
145
+ ### 5.1 Metrics and evaluation
146
+
147
+ We perform all evaluations in two different settings, namely Regular and Masked. In the former, we perform atlas grounding on a holdout set of sentences obtained from documents not seen by the model during training. In the latter, we use the same model while masking all the SIO glossary terms and their synonyms, i.e. substituting them with the special token [MASK]. In the Masked setting, we ensure that the model relies on the sentence context instead of making a one-to-one correspondence between the organ that appears in the sentence and the location in the atlas.
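Constructing the Masked evaluation inputs amounts to replacing every glossary term (and synonym) with the [MASK] token. A minimal sketch is given below; the `glossary_terms` mapping from organs to surface forms is an assumed, precomputed structure, and multi-word phrases are handled with a simple longest-match regular expression.

```python
import re

def mask_glossary_terms(sentence, glossary_terms, mask_token="[MASK]"):
    """Replace every SIO glossary term or synonym in the sentence with [MASK].

    glossary_terms: hypothetical dict mapping an organ to its surface forms,
    e.g. {"left lung": ["left lung", "lung", "lungs"], ...}.
    """
    # longest surface forms first, so phrases are masked before their sub-words
    forms = sorted({f for forms in glossary_terms.values() for f in forms},
                   key=len, reverse=True)
    for form in forms:
        sentence = re.sub(r"\b" + re.escape(form) + r"\b", mask_token,
                          sentence, flags=re.IGNORECASE)
    return sentence

# Example (hypothetical glossary):
# mask_glossary_terms("The left lung was inflamed.",
#                     {"left lung": ["left lung", "lung"]})
# -> "The [MASK] was inflamed."
```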
148
+
149
+ Each of the models is evaluated on three metrics: (i) Distance to the nearest voxel of the nearest correct organ (NVD) ${}^{6}$ . (ii) Distance to the nearest correct organ voxel calculated only on the samples for which the projection is outside the organ volume (NVD-O) ${}^{6}$ . (iii) Rate at which the sentences are grounded within the volume of the correct organ, which we denote as Inside Organ Ratio (IOR).
150
+
151
+ We consider the predicted $3\mathrm{D}$ point to be inside the organ volume (hit) when its coordinates, rounded to the nearest integer to represent voxel indices, are within the set of voxels that make up the corresponding organ. In cases where the sentence has more than one organ reference, due to the implicit labeling, we measure a hit when the predicted coordinates correspond to any one of the given organs.
152
+
153
+ When the projection is inside the volume of the organ, the NVD is zero, and otherwise, it is measured as the distance to the surface of the nearest organ in the sentence. The NVD-O metric complements the NVD metric, such that it gives insight into how far off the prediction is when it misses the correct organ.
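The three metrics can be computed from a predicted point and the voxel point clouds of the organs referenced in a sentence. The sketch below is a simplification: it uses the nearest labeled voxel rather than an explicit organ surface, and assumes 1-millimeter voxels so that distances are divided by 10 to be reported in centimeters.

```python
import numpy as np

def evaluate_prediction(pred, organ_point_clouds):
    """pred: (3,) predicted atlas coordinates.
    organ_point_clouds: list of (N_i, 3) integer arrays, one per organ
    referenced in the sentence (voxel indices at 1 mm resolution)."""
    voxel = np.rint(pred)                        # round to the nearest voxel index
    hit = any((voxel == pc).all(axis=1).any() for pc in organ_point_clouds)
    # distance (in cm) to the nearest voxel of the nearest referenced organ
    nvd = 0.0 if hit else min(
        np.linalg.norm(pc - pred, axis=1).min() for pc in organ_point_clouds) / 10.0
    return {"hit": hit, "nvd": nvd}

def aggregate(results):
    nvds = [r["nvd"] for r in results]
    misses = [r["nvd"] for r in results if not r["hit"]]
    return {
        "NVD": float(np.mean(nvds)),
        "NVD-O": float(np.mean(misses)) if misses else 0.0,
        "IOR": 100.0 * float(np.mean([r["hit"] for r in results])),
    }
```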
154
+
155
+ We justify the evaluation metrics according to the type of data we use and the intended use-cases. Firstly, since we leverage unlabeled data exclusively, we assume that a single sentence needs to be grounded inside or near the organ referenced in the sentence. Secondly, we want similar sentences (sentences making reference to a certain body part) to be grounded in similar parts of the human atlas. As a result, we use the distance to the nearest organ voxel as the primary evaluation metric. Therefore, we can expect models with high evaluation scores to be useful for data exploration and document retrieval through the human atlas.
156
+
157
+ ## 6 Quantitative results
158
+
159
+ In this section we report the results of our trained models. Four of the models share the same architecture, with the only difference being the pretraining corpus of BERT. Namely, the ${\mathrm{{BERT}}}_{\mathrm{{BASE}}}$ (Devlin et al., 2018) model has been pre-trained on the BooksCorpus (Zhu et al., 2015) and English Wikipedia. The BIOBERT (Lee et al., 2019) model is obtained by fine-tuning ${\mathrm{{BERT}}}_{\mathrm{{BASE}}}$ on PubMed abstracts and PMC full-text articles as per Lee et al. (2019), while CLINICALBERT is obtained by initializing with BIOBERT's weights and fine-tuning on clinical notes. The SCIBERT model is obtained by fine-tuning ${\mathrm{{BERT}}}_{\mathrm{{BASE}}}$ on ${1.14}\mathrm{M}$ papers from Semantic Scholar (Ammar et al., 2018). Finally, ${\mathrm{{BERT}}}_{\text{SMALL}}$ is obtained by pre-trained distillation (Turc et al., 2019) from ${\mathrm{{BERT}}}_{\mathrm{{BASE}}}$.
160
+
161
+ Additionally, we perform an analysis of the effectiveness of framing the task as classification. Here, we feed the [CLS] representation to an output layer to perform the classification as per Devlin et al. (2018). The model is trained to predict an organ index for every sentence, and the center of the predicted organ is subsequently used as the model prediction and evaluated in the same way as the regression models. We denote this model as CLASSCENTER in the result tables.
162
+
163
+ Finally, we report the results of two naive baselines that aim to exploit the information on the general locations of the organs and on the imbalance in the frequency of organ occurrences in the datasets. In the first baseline (FREQUENCY), we measure the frequency of the organ terms in the training set samples, and always predict a point within the most frequent organ for the test set samples. In the second baseline (CENTER), we use the center of the 3D atlas as the prediction and measure the distance to the closest correct organ for every test sample (the IOR is not relevant).
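Both baselines are straightforward to express in code. The sketch below assumes access to the training-set organ labels, a table of per-organ reference points (e.g. organ centers), and the atlas dimensions from Section 4.2; the function names are illustrative.

```python
import numpy as np
from collections import Counter

def frequency_baseline(train_organ_labels, organ_points):
    """Always predict a fixed point inside the most frequent training organ.

    train_organ_labels: iterable of per-sentence organ label lists.
    organ_points: hypothetical dict mapping organ -> a (3,) point inside it."""
    most_frequent = Counter(
        o for labels in train_organ_labels for o in labels).most_common(1)[0][0]
    return organ_points[most_frequent]

def center_baseline(atlas_shape=(573, 330, 774)):
    """Predict the center of the 3D atlas for every test sentence."""
    return np.array(atlas_shape, dtype=float) / 2.0
```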
164
+
165
+ ## 7 Use-cases
166
+
167
+ By grounding medical sentences in a $3\mathrm{D}$ atlas space, we produce low-dimensional sentence embeddings. We discuss two use-cases of our model, which, either implicitly or explicitly, leverage such embeddings: (i) atlas based point-cloud corpus visualization and (ii) atlas based document retrieval.
168
+
169
+ We built a tool for each of the use cases: one for visualizing and retrieving articles in the text corpus by specifying $3\mathrm{D}$ coordinates, and one for retrieving relevant articles based on a textual query. The data was obtained from the Covid-19 Open Research Dataset Challenge (CORD-19) ${}^{7}$ hosted on Kaggle. The version from 10.04.2020, consisting of 59311 JSON files of scientific articles pertaining to Covid-19 and metadata for 51078 papers, was the latest at the time of writing. The dataset was processed by using paper indexes to match the titles, abstracts and main texts in the JSON files with the information required for retrieving the article in the metadata. This included the source of the publication, authors, date, digital object identifier (DOI) and the URL for each paper - all the relevant information for article retrieval. In the case of both tools, each document abstract was embedded into the $3\mathrm{D}$ space as a point cloud, where each point is the output of the model for one of its sentences. Tools and code can be accessed at https://github.com/gorjanradevski/macchina/.
170
+
171
+ ---
172
+
173
+ ${}^{6}$ Calculated in centimeters
174
+
175
+ ---
176
+
177
+ <table><tr><td>Method</td><td>Regular</td><td>Masked</td></tr><tr><td>BERT</td><td>${0.33} \pm {0.02}$</td><td>${3.31} \pm {0.08}$</td></tr><tr><td>BioBERT</td><td>${0.21} \pm {0.02}$</td><td>${2.92} \pm {0.08}$</td></tr><tr><td>Scibert</td><td>${0.22} \pm {0.02}$</td><td>${3.33} \pm {0.09}$</td></tr><tr><td>BERTSMALL</td><td>${0.51} \pm {0.03}$</td><td>${3.44} \pm {0.08}$</td></tr><tr><td>Clinicalbert</td><td>${0.25} \pm {0.02}$</td><td>${3.11} \pm {0.08}$</td></tr><tr><td>CLASSCENTER</td><td>${0.03} \pm {0.01}$</td><td>${1.66} \pm {0.07}$</td></tr><tr><td>CENTER</td><td>${10.77} \pm {0.10}$</td><td>${10.77} \pm {0.10}$</td></tr><tr><td>FREQUENCY</td><td>${9.49} \pm {0.15}$</td><td>${9.49} \pm {0.15}$</td></tr></table>
178
+
179
+ Table 1: NVD on the Cord-19 dataset - we can infer that all models whose backbone is ${\mathrm{{BERT}}}_{\text{BASE }}$ perform comparably to each other. ${\mathrm{{BERT}}}_{\text{SMALL}}$ performs worse than the other models, as its smaller capacity makes it unable to sufficiently fit the data. The CLASSCENTER model outperforms the rest of the models since it solves an easier task, i.e. predicting a discrete value corresponding to the organ.
180
+
181
+ ### 7.1 Atlas based point-cloud corpus visualization
182
+
183
+ One advantage of text retrieval in the physical 3D space is that we are not limited to textual queries, but can also retrieve information by directly specifying a desired location in the human atlas space. Another advantage is being able to directly observe the relationships between embedded texts in an intuitively meaningful setting.
184
+
185
+ <table><tr><td>Method</td><td>Regular</td><td>Masked</td></tr><tr><td>BERT</td><td>${4.6} \pm {0.26}$</td><td>${7.26} \pm {0.16}$</td></tr><tr><td>Biobert</td><td>${0.99} \pm {0.08}$</td><td>${5.99} \pm {0.15}$</td></tr><tr><td>Scibert</td><td>${2.27} \pm {0.18}$</td><td>${7.7} \pm {0.17}$</td></tr><tr><td>${\text{BERT}}_{\text{SMALL}}$</td><td>${2.11} \pm {0.1}$</td><td>${6.05} \pm {0.14}$</td></tr><tr><td>CLINICALBERT</td><td>${2.69} \pm {0.21}$</td><td>${7.5} \pm {0.18}$</td></tr><tr><td>CLASSCENTER</td><td>${24.94} \pm {6.26}$</td><td>${12.75} \pm {0.34}$</td></tr><tr><td>CENTER</td><td>${10.77} \pm {0.10}$</td><td>${10.77} \pm {0.10}$</td></tr><tr><td>FREQUENCY</td><td>${11.63} \pm {0.17}$</td><td>${11.63} \pm {0.17}$</td></tr></table>
186
+
187
+ Table 2: NVD-O on the Cord-19 dataset - compared to NVD, here we can observe the main shortcoming of the CLASSCENTER model. Namely, when the model fails to predict the correct organ, the error is not mitigated by predicting a point in the vicinity of the correct organ, as is the case with models that ground sentences by projecting them to the $3\mathrm{D}$ atlas.
188
+
189
+ ![01963dab-cdef-75ea-b9b0-110f3aaf79b2_6_886_889_532_518_0.jpg](images/01963dab-cdef-75ea-b9b0-110f3aaf79b2_6_886_889_532_518_0.jpg)
190
+
191
+ Figure 5: Point-cloud corpus visualization tool.
192
+
193
+ The point based tool (Figure 5) accepts a query in the form of $3\mathrm{D}$ coordinates and matches articles based on the proximity of their embeddings in $3\mathrm{D}$ space. The $3\mathrm{D}$ point is queried by selecting a $2\mathrm{D}$ point on two out of three orthogonal cross-sections. The distance between the queried point and the embedded articles is calculated as the distance between the query point and the centroids of article point clouds. The nearest 50 articles are shown as the centroids of their sentence point clouds in the $3\mathrm{D}$ view on the left, allowing the user to navigate between the closest suggestions. The user may zoom in and click on nearby points, after which the information on the corresponding article is displayed.
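The core of the point-based tool is a nearest-centroid lookup. A minimal sketch, assuming each article is already represented by the grounded points of its abstract sentences:

```python
import numpy as np

def nearest_articles(query_point, article_embeddings, k=50):
    """query_point: (3,) atlas coordinates chosen on the cross-sections.
    article_embeddings: dict mapping article id -> (N, 3) array holding the
    grounded sentences of its abstract."""
    centroids = {aid: pts.mean(axis=0) for aid, pts in article_embeddings.items()}
    ranked = sorted(centroids,
                    key=lambda aid: np.linalg.norm(centroids[aid] - query_point))
    return ranked[:k]   # the 50 closest articles shown in the 3D view
```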
194
+
195
+ ---
196
+
197
+ ${}^{7}$ https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/
198
+
199
+ ---
200
+
201
+ <table><tr><td>Method</td><td>Regular</td><td>Masked</td></tr><tr><td>BERT</td><td>${92.76} \pm {0.29}$</td><td>${54.44} \pm {0.56}$</td></tr><tr><td>BioBERT</td><td>${78.21} \pm {0.47}$</td><td>${51.31} \pm {0.57}$</td></tr><tr><td>Scibert</td><td>${90.24} \pm {0.34}$</td><td>${56.76} \pm {0.56}$</td></tr><tr><td>${\text{BERT}}_{\text{SMALL}}$</td><td>${75.75} \pm {0.48}$</td><td>${43.21} \pm {0.56}$</td></tr><tr><td>Clinicalbert</td><td>${90.89} \pm {0.33}$</td><td>${58.56} \pm {0.56}$</td></tr><tr><td>CLASSCENTER</td><td>${99.88} \pm {0.04}$</td><td>${86.96} \pm {0.38}$</td></tr><tr><td>CENTER</td><td>${0.00} \pm {0.00}$</td><td>${0.00} \pm {0.00}$</td></tr><tr><td>FREQUENCY</td><td>${18.41} \pm {0.44}$</td><td>${18.41} \pm {0.44}$</td></tr></table>
202
+
203
+ Table 3: IOR on the Cord-19 dataset - When evaluated on the Inside Organ Ratio, the CLASSCENTER model, since it directly optimizes the IOR metric, significantly outperforms all others. Even though the grounding models approximate this metric during the training process, we can observe that for most of the models, the IOR exceeds 90% in the Regular setting and ${50}\%$ in the Masked setting.
204
+
205
+ ![01963dab-cdef-75ea-b9b0-110f3aaf79b2_7_225_957_553_548_0.jpg](images/01963dab-cdef-75ea-b9b0-110f3aaf79b2_7_225_957_553_548_0.jpg)
206
+
207
+ Figure 6: Text based document retrieval tool.
208
+
209
+ ### 7.2 Atlas based document retrieval
210
+
211
+ The text query based tool (Figure 6) accepts a text query, tokenizes it into sentences and embeds each into a point in the 3D space, creating a point cloud. The embedded point cloud is compared with the point clouds of embedded abstract sentences of each article. The articles are ranked in terms of the distances between the point cloud centroids. The information on the 200 closest articles is retrieved, and it consists of the title, abstract and the link to the publication.
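A minimal sketch of the text-query retrieval logic is shown below. The `ground_sentence` callable stands in for the trained grounding model, the sentence splitter (NLTK here) is an assumption, and `article_embeddings` is the same structure used by the point-based tool.

```python
import numpy as np
from nltk.tokenize import sent_tokenize  # requires the NLTK "punkt" tokenizer data

def retrieve_by_text(query, article_embeddings, ground_sentence, k=200):
    """ground_sentence: callable mapping a sentence to a (3,) atlas point.
    Articles are ranked by the distance between point-cloud centroids."""
    query_cloud = np.stack([ground_sentence(s) for s in sent_tokenize(query)])
    query_centroid = query_cloud.mean(axis=0)
    ranked = sorted(
        article_embeddings,
        key=lambda aid: np.linalg.norm(
            article_embeddings[aid].mean(axis=0) - query_centroid))
    return ranked[:k]   # titles, abstracts and links are looked up in the metadata
```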
212
+
213
+ ## 8 Discussion and Conclusions
214
+
215
+ There are several shortcomings in the current study. First, we only utilized a single male atlas to compute embeddings. Future work should explore multiple embeddings based on different age, gender, and body type (Christ et al., 2009; Gosselin et al., 2014). Additionally, the choice of labels for the atlas was determined separately from the specific task of Covid-19 article embeddings, and may have suboptimal levels of granularity in labeling organ substructures for this specific task. Second, for expedience we only explored training on individual sentences, as opposed to larger bodies of text with label propagation from nearby sentences. Third, we have formulated sentence embeddings in an atlas as a prediction of a single point, but we could also have considered predicting a (multi-modal) distribution over the atlas space per sentence. Finally, the query tools would ideally be validated with a user study. In the current crisis, the medical experts who would form the user group are in high demand, and we therefore postpone this step pending their availability.
216
+
217
+ In this paper, we have presented a self-supervised approach to ground medical texts in a 3D human atlas space. We have relaxed the labeled data constraint and provided an objective that learns semantically aware groundings of sentences. We performed an ablation study of the performance on the sentence grounding task with 5 different BERT backbone models, namely the standard BERT as per Devlin et al. (2018), BIOBERT (Lee et al., 2019), SCIBERT (Beltagy et al., 2019), CLINICALBERT (Alsentzer et al., 2019) and ${\mathrm{{BERT}}}_{\text{SMALL}}$ (Turc et al., 2019). Finally, we described two use-cases that leverage this embedding. Prototype tools for these applications can be obtained at https://github.com/gorjanradevski/macchina/.
218
+
219
+ ## Acknowledgements
220
+
221
+ We acknowledge funding from the Flemish Government under the Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen programme. This work is supported by KU Leuven Internal Funds under the MACCHINA project.
222
+
223
+ ## References
224
+
225
+ Hassan Akbari, Svebor Karaman, Surabhi Bhargava, Brian Chen, Carl Vondrick, and Shih-Fu Chang. 2019. Multi-level multimodal common semantic space for image-phrase grounding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12476-12486.
228
+
229
+ Emily Alsentzer, John R Murphy, Willie Boag, Wei-Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical bert embeddings. arXiv preprint arXiv:1904.03323.
230
+
231
+ Waleed Ammar, Dirk Groeneveld, Chandra Bhagavatula, Iz Beltagy, Miles Crawford, Doug Downey, Jason Dunkelberger, Ahmed Elgohary, Sergey Feldman, Vu Ha, et al. 2018. Construction of the literature graph in semantic scholar. arXiv preprint arXiv:1805.02262.
232
+
233
+ Iz Beltagy, Arman Cohan, and Kyle Lo. 2019. Scibert: Pretrained contextualized embeddings for scientific text. arXiv preprint arXiv:1903.10676.
234
+
235
+ Olivier Bodenreider. 2004. The unified medical language system (umls): integrating biomedical terminology. Nucleic acids research, 32(suppl_1):D267-D270.
236
+
237
+ Lei Chen, Mengyao Zhai, Jiawei He, and Greg Mori. 2019. Object grounding via iterative context reasoning. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 0-0.
238
+
239
+ Andreas Christ, Wolfgang Kainz, Eckhart G Hahn, Katharina Honegger, Marcel Zefferer, Esra Neufeld, Wolfgang Rascher, Rolf Janka, Werner Bautz, Ji Chen, et al. 2009. The virtual family-development of surface-based anatomical models of two adults and two children for dosimetric simulations. Physics in Medicine & Biology, 55(2):N23.
240
+
241
+ Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860.
242
+
243
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
244
+
245
+ Mohammad Ebrahimi, Elaheh ShafieiBavani, Raymond Wong, and Fang Chen. 2018. A unified neural network model for geolocating twitter users. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 42-53.
246
+
247
+ Marie-Christine Gosselin, Esra Neufeld, Heidi Moser, Eveline Huber, Silvia Farcito, Livia Gerber, Maria Jedensjö, Isabel Hilber, Fabienne Di Gennaro, Bryn Lloyd, et al. 2014. Development of a new generation of high-resolution anatomical models for medical device evaluation: the virtual population 3.0. Physics in Medicine & Biology, 59(18):5287.
248
+
249
+ Stacey M. Hartwig, Kaitlyn M. Holman, and Steven M. Varga. 2014. Depletion of alveolar macrophages ameliorates virus-induced disease following a pulmonary coronavirus infection. PLOS ONE, 9(3):1-7.
250
+
251
+ Karl Heinz Höhne, Bernhard Pflesser, Andreas Pommert, Martin Riemer, Rainer Schubert, Thomas Schiemann, Ulf Tiede, and Udo Schumacher. 2001. A realistic model of human structure from the visible human data. Methods of information in medicine, 40(02):83-89.
252
+
253
+ Syed Ashar Javed, Shreyas Saxena, and Vineet Gandhi. 2018. Learning unsupervised visual grounding through semantic self-supervision. arXiv preprint arXiv:1803.06506.
254
+
255
+ Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.
256
+
257
+ Douwe Kiela, Alexis Conneau, Allan Jabri, and Maximilian Nickel. 2017. Learning visually grounded sentence representations. arXiv preprint arXiv:1707.06320.
258
+
259
+ Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
260
+
261
+ Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
262
+
263
+ Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: pre-trained biomedical language representation model for biomedical text mining. arXiv preprint arXiv:1901.08746.
264
+
265
+ Yuxiao Liang and Pengtao Xie. 2020. Identifying radiological findings related to covid-19 from medical literature. ArXiv, abs/2004.01862.
266
+
267
+ H Lin, Q Huang, X Guo, P Liu, W Liu, Y Zou, S Zhu, G Deng, J Kuang, C Zhang, H Cao, and G Hu. 2015. Elevated level of renal xanthine oxidase mRNA transcription after nephropathogenic infectious bronchitis virus infection in growing layers. J Vet Sci., 16(4):423-429.
268
+
269
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
270
+
271
+ Edward Loper and Steven Bird. 2002. Nltk: the natural language toolkit. arXiv preprint cs/0205028.
272
+
273
+ Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. Scispacy: Fast and robust models for biomedical natural language processing.
274
+
275
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024-8035.
278
+
279
+ Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
280
+
281
+ Andreas Pommert, Karl Heinz Höhne, Bernhard Pflesser, Ernst Richter, Martin Riemer, Thomas Schiemann, Rainer Schubert, Udo Schumacher, and Ulf Tiede. 2001. Creating a high-resolution spatial/symbolic model of the inner organs based on the visible human. Medical Image Analysis, 5(3):221-228.
282
+
283
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).
284
+
285
+ Thomas Schiemann, Ulf Tiede, and Karl Heinz Höhne. 1997. Segmentation of the visible human for high-quality volume-based visualization. Medical image analysis, 1(4):263-270.
286
+
287
+ Jean Serra. 1983. Image analysis and mathematical morphology. Academic Press, Inc.
288
+
289
+ Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958.
290
+
291
+ Wilson L Taylor. 1953. "cloze procedure": A new tool for measuring readability. Journalism Bulletin, 30(4):415-433.
292
+
293
+ Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: The impact of student initialization on knowledge distillation. arXiv preprint arXiv:1908.08962.
294
+
295
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
296
+
297
+ Xuan Wang, Xiangchen Song, Yingjun Guan, Bangzheng Li, and Jiawei Han. 2020. Comprehensive Named Entity Recognition on CORD-19 with Distant or Weak Supervision. arXiv e-prints, page arXiv:2003.12218.
298
+
299
+ Tobias Weyand, Ilya Kostrikov, and James Philbin. 2016. Planet-photo geolocation with convolutional neural networks. In European Conference on Computer Vision, pages 37-55. Springer.
300
+
301
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
302
+
303
+ Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
304
+
305
+ Fanyi Xiao, Leonid Sigal, and Yong Jae Lee. 2017. Weakly-supervised visual grounding of phrases with linguistic structures. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5945-5954.
306
+
307
+ Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.
308
+
309
+ Edwin Zhang, Nikhil Gupta, Rodrigo Nogueira, Kyunghyun Cho, and Jimmy Lin. 2020. Rapidly Deploying a Neural Search Engine for the COVID- 19 Open Research Dataset: Preliminary Thoughts and Lessons Learned. arXiv e-prints, page arXiv:2004.05125.
310
+
311
+ Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pages 19-27.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/EC1vWkJXpjy/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,281 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § SELF-SUPERVISED CONTEXT-AWARE COVID-19 DOCUMENT EXPLORATION THROUGH ATLAS GROUNDING
2
+
3
+ Dusan Grujicic, Gorjan Radevski*, Tinne Tuytelaars, and Matthew B. Blaschko
4
+
5
+ ESAT-PSI, KU Leuven, Kasteelpark Arenberg 10, 3001 Leuven, Belgium
6
+
7
+ {firstname.lastname}@esat.kuleuven.be
8
+
9
+ § ABSTRACT
10
+
11
+ In this paper, we aim to develop a self-supervised grounding of Covid-related medical text based on the actual spatial relationships between the referred anatomical concepts. More specifically, we learn to project sentences into a physical space defined by a three-dimensional anatomical atlas, allowing for a visual approach to navigating Covid-related literature. We design a straightforward and empirically effective training objective to reduce the curated data dependency issue. We use BERT as the main building block of our model and perform a comparison of two BERT variants pre-trained on general-purpose text - ${\mathrm{{BERT}}}_{\text{ BASE }}$ and ${\mathrm{{BERT}}}_{\text{ SMALL }}$ , with three domain-specific pre-trained alternatives - BIOBERT, SCIBERT and CLINICALBERT. We perform a quantitative analysis that demonstrates that the model learns a context-aware mapping while being trained with self-supervision in the form of medical term occurrences. We illustrate two potential use-cases for our approach, one in interactive, $3\mathrm{D}$ data exploration, and the other in document retrieval. To accelerate research in this direction, we make public all trained models, the data we use, and our codebase. Finally, we also release a web tool for document retrieval and a visualization tool.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ The quantity of available COVID-19 articles on the internet increases every day. Nonetheless, it remains scarce compared to general domain data sets. Annotating medical data requires the expertise of physicians, and is therefore cost-prohibitive, especially during a pandemic. As a consequence of the lack of available structured data in the medical domain, the machine learning community has mostly focused on developing general-purpose text models.
16
+
17
+ < g r a p h i c s >
18
+
19
+ Figure 1: Grounding of the sentence "The total number of AM in the lung at ${48}\mathrm{{hr}}$ was significantly $\left( {p < {0.05}}\right)$ reduced as compared to PBS controls” (Hartwig et al., 2014) with the CLINICALBERT model. The dark blue represents the voxels of the left lung, while the light blue area represents the outline of the body. The star denotes the model prediction. See Section 6 for details.
20
+
21
+ The development of BERT (Devlin et al., 2018), and the increased popularity of transfer learning in natural language processing (NLP), have prompted notable works that aim to leverage publicly available medical and scientific articles to develop domain specific pre-trained language models (Lee et al., 2019; Alsentzer et al., 2019; Beltagy et al., 2019). These approaches train models that learn universal sentence embeddings aimed at capturing the semantics and structure of the text data.
22
+
23
+ In contrast, we focus on mapping text to locations in a 3D model of the human body (Figure 1), where the physical proximity of objects reflects their functional and semantic relatedness to a significant degree. Such an embedding is advantageous for several reasons: (i) It allows us to visualize medical text in physically meaningful space, finding clusters of documents organized by anatomy (Figure 5). (ii) It allows us to search for and retrieve text by navigating through a physical space. (iii) There is a statistical advantage to modelling medical text in the 3D space as anatomically related substructures tend to be close to one another.
24
+
25
+ *equal contribution
26
+
27
+ < g r a p h i c s >
28
+
29
+ Figure 2: Cross-sections of the RGB volume (left) and the grayscale volume representing segmentation labels (right) (Pommert et al., 2001)
30
+
31
+ In the absence of semantic labels, we use term occurrences as the indication of what the text denotes. For example, the sentence: "The pancreas contains tissue with an endocrine and exocrine role" receives a target of mapping to the location of the pancreas in the 3D space.
32
+
33
+ In order to achieve the goal of grounding medical text into the physical space, a reference location for every medical term of interest is required. Such references can be obtained from a combination of a three-dimensional atlas of human anatomy and contextual information. There are multiple digital anatomical models available. The Virtual Population (Christ et al., 2009; Gosselin et al., 2014) of the IT’IS Foundation ${}^{1}$ contains anatomical models of 10 different persons obtained from MRI procedures. The Segmented Inner Organs (SIO) from the Voxel-Man project (Höhne et al., 2001; Pom-mert et al.,2001; Schiemann et al.,1997) ${}^{2}$ is based on the Visible Human Male (U.S. National Library of Medicine ${}^{3}$ ) and contains 202 labeled anatomical objects within the human torso. The model consists of 774 slices obtained by CT and MRI imaging, where each slice contains a cryosection image, a CT image and a segmentation label image where the grayscale level corresponds to a segmentation label of the tissue (Figure 2). In this work, we build on the atlas of Pommert et al. (2001), though the approach is readily extended to other atlases.
34
+
35
+ § 2 RELATED WORK
36
+
37
+ There are many works that deal with sentence grounding in a limited space, albeit not in the physical 3D space as we do. Most of the approaches exploit multimodal data and limit the projection space to either images or videos (Akbari et al., 2019; Kiela et al., 2017; Chen et al., 2019; Javed et al., 2018; Xiao et al., 2017). These works overcome expensive bounding box or pixel-level annotation, but they cannot be extended to the unsupervised setting where the data are not paired, but rather raw unpaired sentences or images. Even though the image-caption pairs without any region label are commonly referred to as weakly-supervised data in the literature, most of these works have training procedures that are dependent on curated datasets which are hard to obtain.
38
+
39
+ The works of Weyand et al. (2016); Ebrahimi et al. (2018) are probably the most similar to ours. In PlaNet, Weyand et al. (2016) attempt to classify images to a distinct set of geolocations. To do so, they train their model on a dataset of geotagged images where each image belongs to a single class: the index of the partitioned geolocation world cell. In contrast to our approach, the task is formulated as a classification problem where the physical distances and relationships between cells do not affect the way the probability distribution over them is learned. We frame our approach as a regression problem, as the spatial closeness of anatomical concepts implies a degree of semantic and functional affinity. This helps us reason about our approach in a way that in addition to knowing whether the grounding is correct or not, we have insight into how physically close we are to the target.
40
+
41
+ A similar approach, but more related to our work as it also deals with text, is the work of Ebrahimi et al. (2018), where the extracted text representations and metadata were used to classify tweets by geographic region in a fully-supervised setting. Ebrahimi et al. (2018) utilize machine learning to ground sentences in the world atlas. Here again, their approach depends on a carefully structured dataset and the availability of explicit annotations. In our work, we attempt to go one level further and learn to ground sentences using only unlabeled raw text data, obtained from medical journal articles, while preserving the spatial structure of the sentences. Our supervision comes in the form of implicit organ voxel points in a human atlas space, and words/phrases that make reference to those organs. To the best of our knowledge, there have so far been no works that attempt to ground sentences in a 3D human atlas space using strictly self-supervision. Additionally, a number of works have applied natural language processing techniques to Covid-19 articles (Zhang et al., 2020; Wang et al., 2020; Liang and Xie, 2020); however, none of them aim to ground text in the $3\mathrm{D}$ atlas space.
42
+
43
+ 1www.itis.swiss/
44
+
45
+ ${}^{2}$ www.voxel-man.com/
46
+
47
+ ${}^{3}$ www.nlm.nih.gov/research/visible/
48
+
49
+ § 3 METHODS
50
+
51
+ In this section, we describe the model we use which is entirely based on BERT, the training objective and the task that we address in this paper.
52
+
53
+ § 3.1 THE MODEL
54
+
55
+ Bidirectional Encoder Representations from Transformers - BERT (Devlin et al., 2018) is a pre-trained Transformer (Vaswani et al., 2017) based language model. Before BERT, the only way to train deep bidirectional language models was to train a separate forward and backward language model, and in the end, concatenate their learned representations (Peters et al., 2018). BERT alleviates that problem by introducing the concept of Masked Language Modelling (MLM), previously known as cloze task (Taylor, 1953). The scalability of BERT, combined with MLM, led to the increasing popularity of such language models (Keskar et al., 2019; Liu et al., 2019; Lample and Conneau, 2019).
56
+
57
+ Due to the train-test discrepancy that occurs by including the [MASK] token in the MLM, other approaches train transformers in an autoregressive manner (Radford et al., 2019; Yang et al., 2019; Dai et al., 2019). In our work, we use BERT as a backbone in our model due to its simplicity and applicability in a wide range of domains. As we shall see later when we describe the task of text-atlas grounding, the existence of the [MASK] token in the vocabulary can be seamlessly incorporated in our pipeline to fit within the task we solve. In our work, we perform an ablation study with five different pre-trained BERT models. Following the standard practice (Devlin et al., 2018), we take the representation of the [CLS] token as a general representation of the whole sequence. Finally, to obtain the $3\mathrm{D}$ atlas grounding for a piece of medical text, we project BERT's sentence embedding with a linear layer, mapping from BERT's hidden space to the $3\mathrm{D}$ space.
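A minimal sketch of the architecture described in this section, assuming the HuggingFace BertModel interface: the hidden state of the [CLS] token is projected to three coordinates with a single linear layer. The model identifier "bert-base-uncased" stands in for whichever of the five backbones is used.

```python
import torch
from torch import nn
from transformers import BertModel

class AtlasGrounder(nn.Module):
    """BERT backbone + linear projection from the [CLS] embedding to 3D."""

    def __init__(self, pretrained_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(pretrained_name)
        self.project = nn.Linear(self.bert.config.hidden_size, 3)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls_embedding = outputs[0][:, 0]      # hidden state of the [CLS] token
        return self.project(cls_embedding)    # predicted (x, y, z) atlas coordinates
```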
58
+
59
+ § 3.2 TEXT-TO-ATLAS MAPPING OBJECTIVE
60
+
61
+ Our final objective is to ground medical texts to the anatomic atlas space using only self-supervision in the form of organ appearances in each sentence. More concretely, we have a dataset of sentences, where for each sentence, we can detect the appearances of terms denoted in the human atlas. Then, our desired scenario is that sentences that share the same semantics are mapped in the same region in the human atlas space regardless of whether they make explicit reference to an organ. To achieve that, we tokenize each of the training sentences (Loper and Bird, 2002) and stochastically mask each of the keywords. Each of the keywords (organs) is masked with 0.5 probability. In other words, assuming that we have the sentence "In addition, the kidney mRNA transcript level and serum activity of XOD in the infected group was significantly higher than that of the control group at8,15and ${22}\mathrm{{dpi}}$ (p < 0.05)" (Lin et al., 2015) on average, 50% of the time we will replace it with "In addition, the [MASK] mRNA transcript level and serum activity of XOD in the infected group was significantly higher than that of the control group at 8,15 and 22 dpi $\left( {\mathrm{p} < {0.05}}\right)$ " in the current training batch. We use the [MASK] token, as it is included in BERT's default vocabulary. Next, the sentence words are joined again and tokenized using the WordPiece (Wu et al., 2016) tokenization method as per Devlin et al. (2018). By following the above-mentioned procedure, we are able to obtain context-dependent grounding, such that the model can ground sentences purely based on their context in cases where none of the organ references are present.
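The stochastic keyword masking can be sketched as follows. Matching sentence words against the SIO glossary is abstracted into a precomputed set of positions, and multi-word organ phrases are treated as single units for simplicity; only the masking probability of 0.5 and the use of the [MASK] token follow the description above.

```python
import random

def stochastically_mask(words, organ_positions, p=0.5, mask_token="[MASK]"):
    """words: the sentence split into words.
    organ_positions: indices of words matched against the SIO glossary terms."""
    return [mask_token if i in organ_positions and random.random() < p else w
            for i, w in enumerate(words)]

# The masked words are then joined and run through the WordPiece tokenizer, e.g.:
# tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# input_ids = tokenizer.encode(" ".join(masked_words))
```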
62
+
63
+ § 3.3 MINIMUM ORGAN DISTANCE LOSS
64
+
65
+ Ideally, if we had exactly one organ occurrence per sentence, and if we could associate each organ with a single point in the $3\mathrm{D}$ space, we could simply minimize the mean squared error between the $3\mathrm{D}$ coordinates of the organ point $y$ and the model prediction $\widehat{y}$ . However, a sentence can contain multiple organ occurrences, while organs themselves are distributed in nature, and are characterized by a set of points in 3D space, which capture its position, size and shape. Therefore, the loss function needs to accommodate having more than one point as target for regression.
66
+
67
+ We calculate the Euclidean distances between the prediction and each organ point, and the soft-min (soft-max across the inputs reversed in sign) across these squared distances as weights for the contributions of individual points. The loss contribution of an organ point (denoted as ${PC}$ ) is the product of its squared distance from the predicted point and its weight:
68
+
69
+ $$
70
+ {PC}\left( {y}_{p}\right) = \frac{\parallel {y}_{p} - \widehat{y}{\parallel }_{2}^{2}\exp \left( {-{\gamma }_{1}{\begin{Vmatrix}{y}_{p} - \widehat{y}\end{Vmatrix}}_{2}^{2}}\right) }{\mathop{\sum }\limits_{{i = 1}}^{P}\exp \left( {-{\gamma }_{1}{\begin{Vmatrix}{y}_{i} - \widehat{y}\end{Vmatrix}}_{2}^{2}}\right) }, \tag{1}
71
+ $$
72
+
73
+ < g r a p h i c s >
74
+
75
+ Figure 3: Loss isocurves around kidney and lung point clouds projected into 2D using PCA for visualization purposes.
76
+
77
+ where $\widehat{y}$ is the model prediction, ${y}_{p}$ is an organ point, $P$ is the total number of points that characterize a single organ and ${\gamma }_{1}$ is a temperature term. We calculate the loss for one organ (denoted as ${OL}$ ) as the sum of contributions of its points:
78
+
79
+ $$
80
+ {OL} = \mathop{\sum }\limits_{{p = 1}}^{P}{PC}\left( {y}_{p}\right) \tag{2}
81
+ $$
82
+
83
+ To avoid regressing to a point outside of the organ, we shave off the surface of the organ by performing a single binary morphological erosion (Serra, 1983) prior to computing the loss.
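A minimal sketch of this preprocessing step, using SciPy's binary erosion on the voxel mask of one organ (the use of SciPy is an assumption; the paper only specifies a single binary morphological erosion):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def eroded_organ_points(label_volume, organ_label):
    """Return the voxel coordinates of one organ after a single binary erosion,
    so that regression targets lie strictly inside the organ rather than on its surface."""
    mask = label_volume == organ_label
    eroded = binary_erosion(mask)
    # fall back to the original mask if erosion removes a very small organ entirely
    return np.argwhere(eroded if eroded.any() else mask)
```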
84
+
85
+ In the case where more than one organ is present in the sentence, we calculate the loss for each individual organ in the way described above. Then, we compute the soft-min over the set of such loss terms as contribution weights for each organ. The final loss contribution of one organ (denoted as ${OC})$ is the product between its individual loss and its contribution weight:
86
+
87
+ $$
88
+ O{C}_{o} = \frac{O{L}_{o}\exp \left( {-{\gamma }_{2}O{L}_{o}}\right) }{\mathop{\sum }\limits_{{i = 1}}^{O}\exp \left( {-{\gamma }_{2}O{L}_{i}}\right) } \tag{3}
89
+ $$
90
+
91
+ where $O$ is the total number of distinct organs appearing in the sentence, $O{L}_{i}$ is the organ loss for the $i$ th organ, and ${\gamma }_{2}$ is a temperature term. Finally, the total loss for one sample is computed by summing up the loss contributions of organs appearing in its sentence:
92
+
93
+ $$
94
+ \text{ Loss } = \mathop{\sum }\limits_{{o = 1}}^{O}O{C}_{o} \tag{4}
95
+ $$
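Equations (1)-(4) translate directly into a few lines of PyTorch. The sketch below follows the definitions above, with the temperature terms fixed as in Section 5 (gamma1 = 0.33 and gamma2 = 1/O); the point clouds are assumed to be precomputed, eroded voxel coordinates.

```python
import torch

def organ_loss(pred, organ_points, gamma1=0.33):
    """Equations (1)-(2): soft-min weighted squared distances to one organ.
    pred: (3,) prediction; organ_points: (P, 3) eroded voxel point cloud."""
    sq_dists = ((organ_points - pred) ** 2).sum(dim=1)       # (P,)
    weights = torch.softmax(-gamma1 * sq_dists, dim=0)       # soft-min weights
    return (weights * sq_dists).sum()

def minimum_organ_distance_loss(pred, organs):
    """Equations (3)-(4): soft-min over the per-organ losses.
    organs: list of (P_i, 3) point clouds, one per organ in the sentence."""
    organ_losses = torch.stack([organ_loss(pred, pc) for pc in organs])
    gamma2 = 1.0 / len(organs)
    contributions = torch.softmax(-gamma2 * organ_losses, dim=0) * organ_losses
    return contributions.sum()
```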
96
+
97
+ § 4 DATA COLLECTION
98
+
99
+ § 4.1 TEXT CORPUS
100
+
101
+ The text corpus consists of Covid-19 related articles from the Open Research Dataset Challenge (CORD-19) ${}^{4}$ . The version from 20.03.2020., consisting of a csv file with metadata for 29500 papers and 13202 json files with full texts of scientific articles pertaining to Covid-19 was used for training the model. The abstracts and text bodies of full text articles were included in the corpus and split into sentences, which constitute the samples in the dataset. Both the full text json files and the metadata csv contain paper abstracts, and in case when there is a mismatch between the two, we include the abstract version that contains more characters. The sentence length was analyzed and it was found that ${99.89}\%$ sentences contain fewer than 128 words. In order to avoid unnecessary memory consumption during training, sentences longer than 128 words were discarded.
102
+
103
+ § 4.2 HUMAN BODY ATLAS
104
+
105
+ We utilize the Segmented Inner Organs (SIO) atlas (Pommert et al., 2001). We base the 3D atlas on the segmentation labels of the tissues in the human body, which come in the form of image slices that form a 3D voxel model of the male torso when stacked on top of one another. The stacked images from the torso represent a volume of ${573} \times {330} \times {774}$ voxels, with 1 -millimeter resolution along each axis. The value of each voxel represents the segmentation label of its corresponding organ or tissue. The SIO includes the model of the human head as well, that we do not use.
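Once the label slices are stacked into a single volume, the point cloud of any organ is simply the set of voxel indices carrying its label. A minimal sketch, with the loading of the slices abstracted away:

```python
import numpy as np

def organ_point_cloud(label_volume, organ_label):
    """label_volume: (573, 330, 774) array of segmentation labels obtained by
    stacking the SIO label slices; organ_label: the grayscale label of one organ.
    Returns an (N, 3) array of voxel indices (1 mm resolution) for that organ."""
    return np.argwhere(label_volume == organ_label)

# Example with a toy volume:
# volume = np.zeros((573, 330, 774), dtype=np.uint8)
# volume[100:110, 50:60, 200:210] = 42
# points = organ_point_cloud(volume, 42)   # (1000, 3) voxel coordinates
```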
106
+
107
+ SIO contains a glossary of medical terms and their associated segmentation labels. A list of synonyms and closely related wordforms for each glossary term were retrieved. The ScispaCy UmlsEn-tityLinker (Neumann et al., 2019) was used for searching the UMLS Metathesaurus (The Unified Medical Language System) (Bodenreider, 2004) for all word forms of the SIO glossary ${}^{5}$ . The parameters of the UmlsEntityLinker were kept at default values.
108
+
109
+ ${}^{4}$ https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge
110
+
111
+ SIO includes 202 anatomical objects with their distinct segmentation labels. Tissues such as skin, gray matter, white matter, and unclassified tissues were removed from the set of labeled terms, as they denote general medical concepts not characterized by specific compact locations in the human body. The vertebrae, bones, and muscles of the arms and legs were discarded as well. In the case of categories for bilateral organs located symmetrically on both the left and the right side of the body, which are seldom mentioned explicitly in the texts, only the atlas voxels pertaining to the left organ were kept for every bilateral pair. Atlas labels that appear infrequently in medical literature, but are functionally related to other, more frequently occurring organs, or are colloquially referred to under a single, umbrella term, were merged. The aforementioned steps reduced the list of distinct anatomical objects of interest to 67. The full list of organ removals, mergers and renamings can be found at https://github.com/gorjanradevski/macchina/.
112
+
113
+ § 4.3 DATASET CREATION
114
+
115
+ Sentences were chosen as the main units of text that are mapped to three-dimensional locations in the atlas, i.e. the samples consist of sentences and their targets in the human atlas. The voxels of one organ can be characterized by a point cloud in the atlas space, where each point represents the coordinate indices of one voxel (Figure 1).
116
+
117
+ The training set consists of sentences from 70% randomly chosen documents, while the remaining ${30}\%$ of the documents were evenly distributed between the validation and the test set. Consequently, the sentences from the same document are always assigned to the same dataset split. As can be seen on Figure 4, the frequency of the words and phrases referring to the lung, liver, bronchi, stomach, and kidney is significantly higher than that of other organs. Therefore, to balance out the numbers of organ occurrences in the dataset, we include up to 8000 randomly selected sentences that contain these frequently occurring organs and discard the rest, while keeping all the sentences containing less frequently occurring organs. Some sentences contain multiple occurrences of one or different organs, meaning that an organ can still have more than 8000 occurrences in the dataset. Regardless of this, the number of sentences that contain the most frequently occurring organs is significantly reduced, whereas the sentences containing less frequently occurring organs are preserved. The organs with fewer than 100 occurrences are removed. This included 38 organs, leaving a total of 29 anatomical categories as target locations for text mapping. The sentences that do not contain words and phrases that can be associated with the SIO glossary terms are discarded.
118
+
119
+ < g r a p h i c s >
120
+
121
+ Figure 4: Number of occurrences of the 13 most frequent organs
122
+
123
+ § 5 EXPERIMENTAL SETUP
124
+
125
+ For the development of our models and pipelines we used the PyTorch library (Paszke et al., 2019) together with the HuggingFace transformers (Wolf et al., 2019) package. For each of the experiments, we start with a pre-trained model and we fine-tune the whole architecture. We keep a fixed learning rate of $2 \times {10}^{-5}$ and train the larger models for 20 epochs, and we increase the learning rate to $5 \times {10}^{-5}$ for the BERT SMALL model and train it for 50 epochs. During training we clip the gradients when the global norm exceeds 2.0 . For all experiments, our optimizer of choice is Adam (Kingma and Ba,2014) and the temperature term ${\gamma }_{1}$ is fixed to 0.33 . We fix the second temperature term ${\gamma }_{2}$ to $\frac{1}{N}$ where $N$ is the number of distinct organs appearing within an single training instance. During the fine-tuning we keep the model that reported the lowest distance to the nearest ground truth voxel point on the validation set as early stopping. Aside from early stopping and the dropout (Srivastava et al., 2014) layers present in BERT, we do not perform any other regularization.
126
+
127
+ ${}^{5}$ ScispaCy version 0.2.3 and en_core_sci_lg pipeline
128
+
129
+ § 5.1 METRICS AND EVALUATION
130
+
131
+ We perform all evaluations in two different settings, namely Regular and Masked. In the former, we perform atlas grounding on a holdout set of sentences obtained from documents not seen by the model during training. In the latter, we use the same model while masking all the SIO glossary terms and their synonyms, i.e. substituting them with the special token [MASK]. In the Masked setting, we ensure that the model relies on the sentence context instead of making a one-to-one correspondence between the organ that appears in the sentence and the location in the atlas.
132
+
133
+ Each of the models is evaluated on three metrics: (i) Distance to the nearest voxel of the nearest correct organ (NVD) ${}^{6}$ . (ii) Distance to the nearest correct organ voxel calculated only on the samples for which the projection is outside the organ volume (NVD-O) ${}^{6}$ . (iii) Rate at which the sentences are grounded within the volume of the correct organ, which we denote as Inside Organ Ratio (IOR).
134
+
135
+ We consider the predicted $3\mathrm{D}$ point to be inside the organ volume (hit) when its coordinates, rounded to the nearest integer to represent voxel indices, are within the set of voxels that make up the corresponding organ. In cases where the sentence has more than one organ reference, due to the implicit labeling, we measure a hit when the predicted coordinates correspond to any one of the given organs.
136
+
137
+ When the projection is inside the volume of the organ, the NVD is zero, and otherwise, it is measured as the distance to the surface of the nearest organ in the sentence. The NVD-O metric complements the NVD metric, such that it gives insight into how far off the prediction is when it misses the correct organ.
138
+
139
+ We justify the evaluation metrics according to the type of data we use, and the use-cases. Firstly, since we leverage unlabeled data exclusively, we assume that a single sentence needs to be grounded inside/near the organ of reference in the sentence. Secondly, we want similar sentences (sentences making reference to a certain body part), to be grounded in similar parts of the human atlas. As a result, we use the distance to the nearest organ voxel as the primary evaluation metric. Therefore, we can expect that the models with high evaluation scores to be useful for data-exploration and document retrieval through the human atlas.
140
+
141
+ § 6 QUANTITATIVE RESULTS
142
+
143
+ In this section we report the results of our trained models. Four of the models share the same architecture, with the only difference being the pretraining corpus of BERT. Namely, the BERTBASE (Devlin et al., 2018) model has been pre-trained on the BooksCorpus (Zhu et al., 2015) and English Wikipedia. The BIOBERT (Lee et al., 2019) model is obtained by fine-tuning ${\mathrm{{BERT}}}_{\mathrm{{BASE}}}$ on PubMed abstracts and PMC full-text articles as per Lee et al. (2019) while CLINICALBERT is obtained by initializing with the BIOBERT's weights and fine-tuning on clinical notes. The SCIBERT model is obtained by fine-tuning ${\mathrm{{BERT}}}_{\text{ BASE }}$ on ${1.14}\mathrm{M}$ papers from Semantic Scholar (Ammar et al., 2018). Finally, ${\mathrm{{BERT}}}_{\text{ SMALL }}$ , is obtained by pre-trained distillation (Turc et al., 2019) from BERT BASE.
144
+
145
+ Additionally, we perform an analysis of the effectiveness of framing the task as classification. Here, we feed the [CLS] representation to an output layer to perform the classification as per Devlin et al. (2018). The model is trained to predict an organ index for every sentence, and the center of the predicted organ is subsequently used as the model prediction and evaluated in the same way as the regression models. We denote this model as CLASSCENTER in the result tables.
146
+
147
+ Finally, we report the results on two naive baselines that aim to exploit the information on the general locations of the organs and the information on the disbalance in the frequency of organ occurrences that exist in the datasets. In the first baseline (FREQUENCY), we measure the frequency of the organ terms in the training set samples, and always predict the point within the most frequent organ on the test set samples. In the second baseline (CENTER), we use the center of the 3D atlas as the prediction and measure the distance to the closest correct organ for every test sample (the IOR is not relevant).
148
+
149
+ § 7 USE-CASES
150
+
151
+ By grounding medical sentences in a $3\mathrm{D}$ atlas space, we produce low-dimensional sentence embeddings. We discuss two use-cases of our model, which, either implicitly or explicitly, leverage such embeddings: (i) atlas based point-cloud corpus visualization and (ii) atlas based document retrieval.
152
+
153
+ We built a tool for each of the use cases: one for visualizing and retrieving articles in the text corpus by specifying $3\mathrm{D}$ coordinates, and one for retrieving relevant articles based on a textual query. The data was obtained from the Covid-19 Open Research Dataset Challenge (CORD-19) ${}^{7}$ hosted on Kaggle. The version from 10.04.2020, consisting of 59311 JSON files of scientific articles pertaining to Covid-19 and metadata for 51078 papers, was the latest at the time of writing. The dataset was processed by using paper indexes to match the titles, abstracts and main texts in the JSON files with the information required for retrieving the article in the metadata. This included the source of the publication, authors, date, digital object identifier (DOI) and the URL for each paper - all the relevant information for article retrieval. In the case of both tools, each document abstract was embedded into the $3\mathrm{D}$ space as a point cloud, where each point is the output of the model for one of its sentences. Tools and code can be accessed at https://github.com/gorjanradevski/macchina/.
154
+
155
+ ${}^{6}$ Calculated in centimeters
156
+
157
+ \begin{tabular}{lcc}
+ Method & Regular & Masked \\
+ \hline
+ BERT & $0.33 \pm 0.02$ & $3.31 \pm 0.08$ \\
+ BioBERT & $0.21 \pm 0.02$ & $2.92 \pm 0.08$ \\
+ SciBERT & $0.22 \pm 0.02$ & $3.33 \pm 0.09$ \\
+ BERT$_{\text{SMALL}}$ & $0.51 \pm 0.03$ & $3.44 \pm 0.08$ \\
+ ClinicalBERT & $0.25 \pm 0.02$ & $3.11 \pm 0.08$ \\
+ CLASSCENTER & $0.03 \pm 0.01$ & $1.66 \pm 0.07$ \\
+ CENTER & $10.77 \pm 0.10$ & $10.77 \pm 0.10$ \\
+ FREQUENCY & $9.49 \pm 0.15$ & $9.49 \pm 0.15$ \\
+ \end{tabular}
186
+
187
+ Table 1: NVD on the Cord-19 dataset - we can infer that all models whose backbone is ${\mathrm{{BERT}}}_{\text{ BASE }}$ perform comparably to each other. BERT$_{\text{SMALL}}$ performs worse than the other models, as its smaller capacity makes it unable to sufficiently fit the data. The CLASSCENTER model outperforms the rest of the models since it solves an easier task, i.e. predicting a discrete value corresponding to the organ.
188
+
189
+ § 7.1 ATLAS BASED POINT-CLOUD CORPUS VISUALIZATION
190
+
191
+ One advantage of text retrieval in the physical 3D space is that we are not limited to textual queries, but can also retrieve information by directly specifying a desired location in the human atlas space. Another advantage is being able to directly observe the relationships between embedded texts in an intuitively meaningful setting.
192
+
193
+ \begin{tabular}{lcc}
+ Method & Regular & Masked \\
+ \hline
+ BERT & $4.6 \pm 0.26$ & $7.26 \pm 0.16$ \\
+ BioBERT & $0.99 \pm 0.08$ & $5.99 \pm 0.15$ \\
+ SciBERT & $2.27 \pm 0.18$ & $7.7 \pm 0.17$ \\
+ BERT$_{\text{SMALL}}$ & $2.11 \pm 0.1$ & $6.05 \pm 0.14$ \\
+ ClinicalBERT & $2.69 \pm 0.21$ & $7.5 \pm 0.18$ \\
+ CLASSCENTER & $24.94 \pm 6.26$ & $12.75 \pm 0.34$ \\
+ CENTER & $10.77 \pm 0.10$ & $10.77 \pm 0.10$ \\
+ FREQUENCY & $11.63 \pm 0.17$ & $11.63 \pm 0.17$ \\
+ \end{tabular}
222
+
223
+ Table 2: NVD-O on the Cord-19 dataset - compared to NVD, here we can observe the main shortcoming of the CLASSCENTER model. Namely, when the model fails to predict the correct organ, the error is not mitigated by predicting a point in the vicinity of the correct organ, as is the case with models that ground sentences by projecting them to the $3\mathrm{D}$ atlas.
224
+
225
+ < g r a p h i c s >
226
+
227
+ Figure 5: Point-cloud corpus visualization tool.
228
+
229
+ The point based tool (Figure 5) accepts a query in the form of $3\mathrm{D}$ coordinates and matches articles based on the proximity of their embeddings in $3\mathrm{D}$ space. The $3\mathrm{D}$ point is queried by selecting a $2\mathrm{D}$ point on two out of three orthogonal cross-sections. The distance between the queried point and the embedded articles is calculated as the distance between the query point and the centroids of article point clouds. The nearest 50 articles are shown as the centroids of their sentence point clouds in the $3\mathrm{D}$ view on the left, allowing the user to navigate between the closest suggestions. The user may zoom in and click on nearby points, after which the information on the corresponding article is displayed.
230
+
231
+ ${}^{7}$ https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/
232
+
233
+ \begin{tabular}{lcc}
+ Method & Regular & Masked \\
+ \hline
+ BERT & $92.76 \pm 0.29$ & $54.44 \pm 0.56$ \\
+ BioBERT & $78.21 \pm 0.47$ & $51.31 \pm 0.57$ \\
+ SciBERT & $90.24 \pm 0.34$ & $56.76 \pm 0.56$ \\
+ BERT$_{\text{SMALL}}$ & $75.75 \pm 0.48$ & $43.21 \pm 0.56$ \\
+ ClinicalBERT & $90.89 \pm 0.33$ & $58.56 \pm 0.56$ \\
+ CLASSCENTER & $99.88 \pm 0.04$ & $86.96 \pm 0.38$ \\
+ CENTER & $0.00 \pm 0.00$ & $0.00 \pm 0.00$ \\
+ FREQUENCY & $18.41 \pm 0.44$ & $18.41 \pm 0.44$ \\
+ \end{tabular}
262
+
263
+ Table 3: IOR on the Cord-19 dataset - When evaluated on the Inside Organ Ratio, the CLASSCENTER model, since it directly optimizes the IOR metric, significantly outperforms all others. Even though the grounding models approximate this metric during the training process, we can observe that for most of the models, the IOR exceeds 90% in the Regular setting and ${50}\%$ in the Masked setting.
264
+
265
+ < g r a p h i c s >
266
+
267
+ Figure 6: Text based document retrieval tool.
268
+
269
+ § 7.2 ATLAS BASED DOCUMENT RETRIEVAL
270
+
271
+ The text query based tool (Figure 6) accepts a text query, tokenizes it into sentences and embeds each into a point in the 3D space, creating a point cloud. The embedded point cloud is compared with the point clouds of embedded abstract sentences of each article. The articles are ranked in terms of the distances between the point cloud centroids. The information on the 200 closest articles is retrieved, and it consists of the title, abstract and the link to the publication.
272
+
273
+ § 8 DISCUSSION AND CONCLUSIONS
274
+
275
There are several shortcomings in the current study. First, we only utilized a single male atlas to compute embeddings. Future work should explore multiple embeddings based on atlases of different ages, genders, and body types (Christ et al., 2009; Gosselin et al., 2014). Additionally, the choice of labels for the atlas was determined separately from the specific task of COVID-19 article embeddings, and may have suboptimal levels of granularity in labeling organ substructures for this specific task. Second, for expedience we only explored training on individual sentences, as opposed to larger bodies of text with label propagation from nearby sentences. Third, we have formulated sentence embeddings in an atlas as the prediction of a single point, but we could also have considered predicting a (multi-modal) distribution over the atlas space per sentence. Finally, the query tools would ideally be validated with a user study. In the current crisis, the medical experts who would form the user group are in high demand, and we therefore postpone this step pending their availability.
276
+
277
In this paper, we have presented a self-supervised approach to ground medical texts in a 3D human atlas space. We have relaxed the labeled-data constraint and provided an objective that learns semantically aware groundings of sentences. We performed an ablation study of performance on the sentence grounding task with 5 different BERT backbone models, namely the standard BERT as per Devlin et al. (2018), BIOBERT (Lee et al., 2019), SCIBERT (Beltagy et al., 2019), CLINICALBERT (Alsentzer et al., 2019) and BERT$_{\text{SMALL}}$ (Turc et al., 2019). Finally, we described two use cases that leverage this embedding. Prototype tools for these applications can be obtained at https://github.com/gorjanradevski/macchina/.
278
+
279
+ § ACKNOWLEDGEMENTS
280
+
281
We acknowledge funding from the Flemish Government under the Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen programme. This work is supported by KU Leuven Internal Funds under the MACCHINA project.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/G4auHKwZYP0/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,421 @@
1
## COVID-19 press conference search engine using BERT

Anonymous ACL submission
22
+
23
## Abstract

There have been multiple press conferences concerning COVID-19, where governments present their efforts in fighting the pandemic. These briefings provide reporters with a platform for their questions to be answered. This work studies multiple press conferences from different governments and agencies, ranging from the WHO to the White House to different state governors and other governments. It collects the transcripts of these press conferences and, using a custom heuristic, selects the short exchanges between different speakers, which are mostly the exchanges made by reporters. A custom-trained sentence classifier then selects the questions raised by the reporters within these exchanges. The result is a new dataset containing the questions asked by reporters and how they were answered by officials. This dataset can prove useful in a number of applications; in this work we present one of them, building a search engine. The search engine is built over these questions by fine-tuning the state-of-the-art BERT language model on the collected COVID-19 press conference transcript dataset. It can help answer questions raised by the public and show how officials answered them, and it can also help reporters and researchers find how a specific question was answered by different governments. Our goal with this work is to help organize the press questions concerning COVID-19 and to build insight into the different efforts being taken to combat the pandemic.
36
+
37
## 1 Introduction

Press conferences are the channel of communication that governments and agencies use to communicate their efforts in fighting COVID-19 to the world. Studying and analyzing the transcripts of these conferences can provide great insight into the different approaches and efforts these governments and agencies use in their fight against the pandemic.

This work collects the transcripts of multiple press conferences held since late January by different governments and agencies. Using a custom heuristic, the short exchanges made throughout these conferences are selected, which mostly contain the exchanges made by reporters. A sentence classifier (a CNN model trained on a combination of the SQuAD ${}^{1}$ (Rajpurkar et al., 2018) and SPAADIA ${}^{2}$ datasets) is then used to select only the questions raised by the reporters from these exchanges. This builds a dataset containing the questions raised by reporters and how they were answered by officials from different governments. The dataset can prove useful in a number of applications, from building insight into the questions themselves, to analyzing when a certain type of question was most often raised, to comparing the questions put to different governments. In our work we introduce another application of this dataset: a customized search engine capable of finding the most similar questions to a custom query. This can help answer questions raised by the public, and it can also help reporters build insight into how a specific question was answered by different governments.

The search engine is built by fine-tuning BERT (Devlin et al., 2018), a state-of-the-art language model, into a customized language model capable of understanding the context of COVID-19 press conferences. An evaluation was built around BERT to test how well it understands the context of the COVID-Press dataset; we use the evaluation technique proposed by Ein Dor et al. (2018), which tests the ability of BERT to identify similar sentences, and we have built our own similarity dataset from the COVID-19 press context for this evaluation. Then, using the recently proposed SBERT architecture (Reimers and Gurevych, 2019), which provides a mechanism for selecting optimized sentence embeddings from BERT, a search engine is built. Given a user query, it returns the most similar questions and their answers from the built dataset.
72
+
73
+ ---
74
+
75
+ ${}^{1}$ https://rajpurkar.github.io/SQuAD-explorer/
76
+
77
+ ${}^{2}$ http://martinweisser.org/index.html#Amex_a
78
+
79
+ ---
80
+
81
The paper is structured in the following way: Section 2 presents how the dataset was collected and the proposed method for selecting the questions and answers in a press conference. In Section 3, we show how BERT (Devlin et al., 2018) was fine-tuned on the collected dataset. Section 4 describes the architecture used for extracting embeddings from the fine-tuned BERT using the recently proposed SBERT (Reimers and Gurevych, 2019). Section 5 shows some results from running the search engine. We used Google Colab for scraping, fine-tuning and building our search engine; the code ${}^{3}$ is provided as Jupyter notebooks that run seamlessly on Google Colab, and the data ${}^{4}$ is hosted on Google Drive to connect seamlessly with Google Colab.
82
+
83
+ This work opens the opportunity to analyze how a certain question is answered across the different governments, hence building insights on the different fighting efforts being made across the world.
84
+
85
## 2 Building the COVID-19 Press Dataset

This work uses the transcripts provided by REV (https://www.rev.com/blog/transcript-tag/coronavirus-update-transcripts). REV provides the transcripts of the press conferences made by multiple governments and agencies, including:
92
+
93
+ - World Health organization press briefings
94
+
95
+ - United Kingdom Coronavirus briefing
96
+
97
+ - White house press conferences
98
+
99
+ - Justin Trudeau Canada COVID-19 Press Conference
100
+
101
- Press conferences made by multiple US state governors (New York, Iowa, Florida, and many others)
106
+
107
### 2.1 Scraping Transcripts
108
+
109
We built a customized scraper in Python to scrape the exchanges made by the different speakers in a given press conference. We scraped 654 press conferences held between January 23rd and May 12th, obtaining more than 66k exchanges across the collected transcripts. For each exchange we scrape the transcript text, the name of the speaker, and the time at which the exchange was spoken within the press conference. We also record the name and the date of the press conference in addition to its URL (from REV).

Since COVID-19 is a continuously evolving situation, we periodically re-run our scrapers to obtain the most up-to-date transcripts.
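A minimal sketch of such a scraper is shown below. The transcript page structure is assumed rather than known: the CSS selector (`.fl-callout-text p`) and the speaker/timestamp pattern are hypothetical placeholders, and a real scraper would have to be adapted to REV's actual markup.

```python
import re
import requests
from bs4 import BeautifulSoup

SPEAKER_RE = re.compile(r"^(?P<speaker>[^:(]+)\s*\((?P<time>[\d:]+)\):\s*(?P<text>.*)$")

def scrape_transcript(url):
    """Download one REV transcript page and split it into (speaker, time, text) exchanges.
    The selector below is a hypothetical placeholder for the real page structure."""
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    exchanges = []
    for para in soup.select(".fl-callout-text p"):      # assumed transcript container
        match = SPEAKER_RE.match(para.get_text(" ", strip=True))
        if match:
            exchanges.append(match.groupdict())
    return exchanges
```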
118
+
119
### 2.2 Building the COVID-19 questions dataset

This work aims to build a search engine over the questions raised by reporters and how they were answered by officials. However, selecting these exchanges from the scraped dataset proved challenging, as REV does not indicate the identity of each speaker, so extra work is needed to identify who is speaking.

Selecting the questions raised by reporters is therefore broken down into two steps: first selecting all the reporter exchanges, then selecting the questions from these exchanges. We first build a custom heuristic capable of identifying the exchanges made by the reporters, and then a custom sentence classifier to select the raised questions.
128
+
129
#### 2.2.1 Custom heuristic for selecting short exchanges

This work uses a custom heuristic to infer the identity of the speakers, in order to select the exchanges made by the reporters. The heuristic is built on rules about when speakers begin to speak and how much they speak. The proposed rules are:

- The longest exchanges in a press conference are flagged as spoken by the official giving the press conference (president, prime minister, governor or a health official).

- The first exchange is flagged as spoken by the presenter (the conductor of the conference). This can be either a reporter or the official themselves.

- If the main official conducting the conference mentions other speakers, those speakers are flagged as helpers to that official. In most cases these have been found to be either health officials (as with Dr. Fauci in the White House conferences) or other officials (military or financial).

- We are most concerned with flagging the reporter exchanges. These have been found to be speakers with only a few exchanges in a single press conference (each such speaker speaks once or at most twice). When this pattern is found, these exchanges are flagged as made by reporters and are considered to be questions, and the exchange immediately following each one is flagged as its answer.

Using these rules, the previously collected transcript dataset was labeled with the proposed speaker roles (conference conductor, official, helper, or reporter). A dataset is then built that contains only the exchanges made by the reporters and the answers to them. However, not all of the selected reporter exchanges are actually questions, which is why a custom sentence classifier was built to select only the questions.

---

${}^{3}$ https://github.com/theamrzaki/covid-19-press-briefings

${}^{4}$ https://github.com/theamrzaki/covid-19-press-briefings#data

---
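The role-flagging part of this heuristic can be sketched roughly as follows. It assumes each press conference is a list of exchanges with `speaker` and `text` fields (a simplification of the scraped records), it omits the helper-detection rule, and the two-turn reporter threshold simply mirrors the rules above.

```python
from collections import Counter, defaultdict

def flag_speakers(exchanges, max_reporter_turns=2):
    """Assign a role to each speaker of one press conference.
    `exchanges` is a list of dicts with 'speaker' and 'text' keys, in spoken order."""
    turns = Counter(e["speaker"] for e in exchanges)
    total_len = defaultdict(int)
    for e in exchanges:
        total_len[e["speaker"]] += len(e["text"])

    roles = {}
    roles[max(total_len, key=total_len.get)] = "official"      # longest total speech
    roles.setdefault(exchanges[0]["speaker"], "conductor")     # first exchange
    for speaker, count in turns.items():
        if speaker not in roles and count <= max_reporter_turns:
            roles[speaker] = "reporter"                        # few turns -> reporter
    return roles

def question_answer_pairs(exchanges, roles):
    """Pair each reporter exchange with the exchange that immediately follows it."""
    return [(cur["text"], nxt["text"])
            for cur, nxt in zip(exchanges, exchanges[1:])
            if roles.get(cur["speaker"]) == "reporter"]
```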
154
+
155
#### 2.2.2 Sentence classifier for selecting questions

A classifier was built with the goal of correctly identifying the true questions within the reporter exchanges dataset.

We used the model presented in ${}^{5}$, which builds a CNN model on a combination of the SQuAD ${}^{6}$ (Rajpurkar et al., 2018) and SPAADIA ${}^{7}$ datasets. These datasets label sentences with 3 classes (number of sentences per class in parentheses):

1. Command (1,111)

2. Statement (80,167)

3. Question (131,001)

In our work we are only interested in identifying questions, so we treated both "Command" and "Statement" as a single class.

The CNN model was trained on 170,077 sentences of the 3 classes and tested on 42,520 sentences, achieving a test accuracy of 0.9948.

This model was then used to classify the questions within the collected reporter exchanges, and it identified 67.76% of them as questions. The exchanges identified as questions (about 5k) were then selected, together with their answers, into a new dataset that contains only the reporter questions.
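The referenced repository's exact architecture is not reproduced here; the sketch below shows a CNN sentence classifier of this general kind, reduced to a question / non-question decision, with every hyperparameter chosen purely for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN = 20000, 50   # illustrative values, not the original settings

def build_sentence_cnn(num_classes=2):
    """Small 1D-CNN text classifier: embed token ids, convolve, max-pool, classify."""
    model = tf.keras.Sequential([
        layers.Embedding(VOCAB_SIZE, 128),
        layers.Conv1D(128, 5, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_sentence_cnn()
# model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=3)
# x_* are integer token-id sequences padded to MAX_LEN; y_* are 0/1 class ids.
```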
180
+
181
## 3 Fine-Tuning BERT to COVID-19-Press

BERT (Devlin et al., 2018) has proven to be the state-of-the-art architecture for language modeling. It is built as an enhancement of the vanilla Transformer (Vaswani et al., 2017): it contains only an encoder structure and depends solely on self-attention.

BERT is unique in the approach used in its pre-training, where a "masked language model" (MLM) objective is used, inspired by the Cloze task (Taylor, 1953). This approach randomly chooses words from the input text (15% of words), and the training objective is to predict these masked words. This objective enables BERT to be pre-trained in an unsupervised manner, where raw text is supplied to BERT without labels.

The same objective is used for fine-tuning. In our case, the collected dataset (COVID-19-Press, 66k exchanges) is used as the raw training text to fine-tune the pretrained BERT. The Hugging Face library (https://huggingface.co/) was used for fine-tuning, the BERT model provided by Google (https://huggingface.co/google/bert_uncased_L-8_H-256_A-4) was used as our pre-trained BERT, and Google Colab was used as the training platform.
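A minimal sketch of this MLM fine-tuning step with the Hugging Face libraries is shown below; the data file name and the training hyperparameters are illustrative, not the settings used in this work.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

checkpoint = "google/bert_uncased_L-8_H-256_A-4"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# One press-conference exchange per line (illustrative file name).
dataset = load_dataset("text", data_files={"train": "covid_press_exchanges.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

# Randomly masks 15% of tokens, matching BERT's MLM pre-training objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-covid-press", num_train_epochs=3,
                           per_device_train_batch_size=32),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```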
192
+
193
### 3.1 BERT Evaluation for COVID-19 press context

Evaluating a customized language model is challenging, as most of the available evaluation techniques are built for a general language model rather than a customized one. A recent evaluation technique for BERT was proposed by Ein Dor et al. (2018). It evaluates how well BERT measures the similarity of sentences: three sentences are supplied to BERT, two of which are similar and one is not, and the test is whether BERT correctly identifies the similar pair. The key advantage of this technique over other evaluation mechanisms is the ease of producing customized evaluation datasets without manual labeling.

Ein Dor et al. (2018) created a customized similarity dataset by scraping Wikipedia pages. They assumed that sentences from the same paragraph of a Wikipedia article are similar, while a sentence from a different paragraph discusses a different subject and is therefore less similar. They used this to build a customized similarity dataset from Wikipedia articles in their chosen context.

In this work, we used the same approach to build our own similarity dataset from the dataset containing all exchanges between speakers (66k exchanges). We selected every two adjacent exchanges from a press conference as the similar pair, and an exchange from a different press conference as the dissimilar sentence. This produced a dataset of 40k triplets, where each row contains three sentences: two similar ones (from the same press conference) and one different.

BERT was evaluated on this custom evaluation dataset and correctly identified 99.7% of the 40k triplets, which indicates that BERT correctly understands the context of the sentences. We also evaluated our fine-tuned version of BERT, which scored an accuracy of 99.88%, indicating that even the examples that were difficult for the vanilla BERT were handled correctly by our fine-tuned model.

---

${}^{5}$ https://github.com/lettergram/sentence-classification

${}^{6}$ https://rajpurkar.github.io/SQuAD-explorer/

${}^{7}$ http://martinweisser.org/index.html#Amex_a

---
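A sketch of this triplet evaluation is given below. It assumes an `embed(sentence)` helper that returns a fixed-size sentence vector (for example, the mean-pooled BERT output described in Section 4); the triplet input format is an assumption.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_accuracy(triplets, embed):
    """`triplets` is an iterable of (anchor, similar, dissimilar) sentence tuples.
    A triplet counts as correct when the anchor is closer to the similar sentence."""
    correct, total = 0, 0
    for anchor, similar, dissimilar in triplets:
        a, s, d = embed(anchor), embed(similar), embed(dissimilar)
        correct += cosine(a, s) > cosine(a, d)
        total += 1
    return correct / total
```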
218
+
219
## 4 BERT to build a search engine

Using BERT for sentence-pair regression (measuring how similar sentences are to each other, the technique underlying a search engine) is inefficient for multiple reasons.

To begin with, for sentence-pair regression in BERT, the two sentences are provided to BERT with a special separator token [SEP] between them. To build a search engine using this approach, one would need to supply each stored sentence to BERT together with the query sentence. At query time this would require running BERT about 5k times (the size of the dataset) to find the most similar question and its answer, which is simply unsuitable for a search engine.

Another commonly proposed approach is to extract sentence embeddings from BERT: run BERT only once over the 5k questions to obtain their embeddings, and at query time run BERT once on the query and use cosine similarity to find the most similar question and its answer. However, this exposes another disadvantage of BERT: it does not compute independent sentence embeddings, which makes it challenging to extract a good sentence embedding from BERT (Reimers and Gurevych, 2019).

Multiple approaches have been proposed to extract good embeddings from BERT. (May et al., 2019), (Zhang et al., 2019) and (Qiao et al., 2019) proposed using the [CLS] token output of BERT as the fixed-size vector embedding of a sentence. Another approach, used by Reimers and Gurevych (2019), computes the mean of all output vectors.

Reimers and Gurevych (2019) trained a Siamese BERT network on SNLI (Bowman et al., 2015) and Multi-Genre NLI data, and evaluated different pooling strategies for building sentence representations: using the [CLS] token or averaging the output vectors ([MEAN]). They fine-tuned their architecture with a classification objective and then on the STS benchmark with a regression objective, and concluded that the [MEAN] pooling strategy outperforms the [CLS] strategy. For this reason, [MEAN] pooling is the strategy selected in our work.
250
+
251
## 5 Experiments

To build our search engine, we used the fine-tuned BERT to embed the collected 5k questions, saving their embeddings with the [MEAN] pooling strategy. For each test query, we run the fine-tuned BERT with the same pooling strategy and use cosine similarity to retrieve the most similar questions asked in the collected press conferences.
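A minimal sketch of this embed-and-rank step is shown below, using mean pooling over the token embeddings (the [MEAN] strategy) and cosine similarity; the checkpoint path stands in for the fine-tuned model of Section 3, and the batch handling and top-k value are illustrative.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-covid-press")   # fine-tuned checkpoint (placeholder path)
model = AutoModel.from_pretrained("bert-covid-press").eval()

@torch.no_grad()
def embed(sentences):
    """[MEAN] pooling: average the token embeddings, ignoring padding positions."""
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**enc).last_hidden_state                      # (B, T, H)
    mask = enc["attention_mask"].unsqueeze(-1)                   # (B, T, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

def search(query, questions, question_vecs, top_k=2):
    q = embed([query])[0]
    sims = question_vecs @ q / (np.linalg.norm(question_vecs, axis=1) * np.linalg.norm(q))
    return [(questions[i], float(sims[i])) for i in np.argsort(-sims)[:top_k]]

# question_vecs = embed(questions)   # precomputed once for the ~5k reporter questions
```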
258
+
259
To select the test queries, we used an automatic selection mechanism over our corpus, with measures taken to ensure that the selected sentences come from different contexts. The embeddings produced by the fine-tuned BERT were clustered with k-means, so that each cluster conveys a specific context, and the elbow method was used to identify 10 as the optimal number of clusters. The clusters with the most associated sentences were then selected to draw the test sentences from, and a random generator picked one sentence from each of these clusters.
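This selection step can be sketched with scikit-learn as follows; the 10-cluster setting comes from the description above, while the number of clusters kept and the random seed are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def pick_test_queries(embeddings, sentences, n_clusters=10, n_largest=5, seed=0):
    """Cluster sentence embeddings, keep the largest clusters, sample one sentence from each."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(embeddings)
    largest = np.argsort(-np.bincount(labels))[:n_largest]
    rng = np.random.default_rng(seed)
    return [sentences[rng.choice(np.where(labels == c)[0])] for c in largest]

# The elbow method amounts to inspecting KMeans inertia over a range of k values:
# inertias = [KMeans(n_clusters=k, n_init=10).fit(embeddings).inertia_ for k in range(2, 16)]
```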
264
+
265
The following are some examples from the search engine; for each query, the top 2 most similar questions and their answers are shown, together with the name of the press briefing, its date, and the time within the briefing when the exchange was spoken.
266
+
267
+ ## Input Sentence
268
+
269
+ And regarding unemployment, we're hearing stories of people are still not getting returned phone calls within 72 hours,...
270
+
271
+ ## Results
272
+
273
+ ## Score: 0.9296
274
+
275
+ question: Yes, governor, I want to go back to unemployment. We're still hearing from many who are wanting to know when they're going to get their checks and you gave that answer,...
276
+
277
+ answers: Yeah, I think that is right. We have processed and I think most of the checks that are direct deposited have gone out to I think a majority of the people who are in that backlog....
278
+
279
+ header: Transcript: Governor Ned Lamont COVID- 19 Press Conference Transcript April 14
280
+
281
+ date: Apr 14, 2020 (39:31)
282
+
283
## Score: 0.9252
284
+
285
+ question: I'm still healing hearing from some people who are having problems getting through to unemployment and getting the benefits that they feel they're entitled to.....
286
+
287
+ answers: So I think that it's always important to have some perspective here. We've had over a million people become unemployed in the last six weeks. We have been able to make sure that over 820,000 people have gotten the assistance that they've earned...
288
+
289
+ header: Michigan Governor Gretchen Whitmer Press Conference Transcript April 24
290
+
291
+ date:Apr 24, 2020 (30:53)
292
+
293
Table 1: Query 1
294
+
295
## Input Sentence

This morning at the San Mateo county board of supervisors meeting, officials there expressed grave concern about the lack of PPE at Seton Medical Center, and also the need for more staffing. I just want to find out what the state is doing to address those needs?

## Results

## Score: 0.9039

question: Reporters in the room, I'm working on behalf of your colleagues. I'm going to try and get some of their other questions in. We may not have as many confirmed cases downstate but already clusters of cases in a senior home in Taylorville outnumber the available number of ICU beds at the hospital in town....

answers: Our ICU bed situation in the state, as you know, as we move toward the peak of this, we are going to be filling up ICU beds across the state. It isn't the same in every area. There are critical-access hospitals that may have fewer ICU beds. There are other hospitals in other areas of the state that may have more availability,....

header: Illinois Governor J.B. Pritzker COVID-19 Briefing Transcript April 1

date: Apr 1, 2020 (40:52)

## Score: 0.9038

question: The next question is for the Secretary. Dr. Levine from the Capitol Star. HAP said that it was in talks with the administration today about resuming non-emergent services as the lockdown eases. Can you characterize the state of those talks and what you would need to do to allow hospitals to start treating those patients?

answers: Mm-hmm (affirmative). So that is correct. We have had discussions with the hospital association as well as a number of different health systems and hospitals about when would be the right time to allow non-emergent procedures to occur. Now remember, some of those are procedures that really have to happen for people's health and they've been on hold and it's really difficult.

header: Pennsylvania Gov. Tom Wolf Coronavirus Briefing Transcript April 22

date: Apr 22, 2020 (19:00)

As seen in the previous examples, the exchanges that were flagged as questions were indeed questions. This helps indicate that the mechanisms used for selecting questions from the different exchanges were successful.
340
+
341
## 6 Conclusions

In this work we present a new COVID-19 data source, press conference briefings, as a rich source for analyzing different governments' responses to the virus. We also present mechanisms for selecting questions from these press briefings, and we use state-of-the-art language models to build a semantic search engine that retrieves the most similar questions from the briefings. This search engine can help address questions posed by the public concerning COVID-19, and it can also be used by journalists and researchers to compare the efforts made by governments around the world in fighting the pandemic.

Building a search engine is just one of multiple possible applications of this dataset. Further analysis opens the possibility of other uses, such as analyzing the timeline of a certain question: when it was first raised, by whom, and how it was answered.

We believe that this new data source can prove useful in multiple areas of research for understanding and building insight into the different approaches taken by governments in combating this virus.
356
+
357
## References

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding.

Liat Ein Dor, Yosi Mass, Alon Halfon, Elad Venezian, Ilya Shnayderman, Ranit Aharonov, and Noam Slonim. 2018. Learning thematic similarity metric from article sections using triplet networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 49-54, Melbourne, Australia. Association for Computational Linguistics.

Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders.

Yifan Qiao, Chenyan Xiong, Zhenghao Liu, and Zhiyuan Liu. 2019. Understanding the behaviors of BERT in ranking.

Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia. Association for Computational Linguistics.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. Pages 3973-3983.

Wilson L. Taylor. 1953. "Cloze procedure": A new tool for measuring readability. Journalism Quarterly, 30(4):415-433.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/G4auHKwZYP0/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,343 @@
1
+ 000 050
2
+
3
+ § COVID-19 PRESS CONFERENCE SEARCH ENGINE USING BERT
4
+
5
+ 001 051
6
+
7
+ 002 052
8
+
9
+ 003 053
10
+
11
+ 054
12
+
13
+ Anonymous ACL submission 055
14
+
15
+ 056
16
+
17
+ 057
18
+
19
+ 008 058
20
+
21
+ 010
22
+
23
+ § ABSTRACT
24
+
25
+ There have been multiple press conferences concerning COVID-19, where governments present their efforts in fighting the pandemic. These briefings provide reporters with a platform for their questions to be answered. This work studies multiple press conferences from different governments and agencies, ranging
26
+
27
+ 019 from WHO to the Whitehouse to different state governors to even different governments. This work collects the transcripts of these press conferences, then using a custom heuristic, selects short exchanges between different speakers, hence selecting exchanges made by the re-
28
+
29
+ 024 porters. Then using a custom trained sentence-classifier, selects the questions raised by the
30
+
31
+ 026 reporters through these exchanges. This creates a new dataset, which contains the questions asked by reporters and how they were answered by officials. This dataset can prove useful in a number of applications, in this work we present one of these uses, which is build-
32
+
33
+ 031 ing a search engine. This search engine is built on these questions by fine-tuning the state-of- 033 the-art BERT language model on the collected COVID-19 press conference transcript dataset. This search engine can prove helpful in an-
34
+
35
+ 035 swering questions raised by the public and knowing how they were answered by officials, it can also help reporters and researchers in finding how a specific question was answered by the different governments. Our goal by this work is to help organize the press questions concerning COVID-19 to help build an insight on the different efforts being taken to combat 042 the pandemic.
36
+
37
+ § 1 INTRODUCTION
38
+
39
+ Press conferences are the channel of communication that governments/agencies use to communicate their efforts in fighting COVID-19 with the world. Studying and analyzing the transcripts of these conferences would provide great insights on
40
+
41
+ 059
42
+
43
+ 060 the different approaches and efforts these govern-
44
+
45
+ ments/agencies use in their fight against the pan- 062 demic.
46
+
47
+ This work aims to collect the transcript of multiple press conferences made since late January, made by different governments/agencies. Then us-
48
+
49
+ ing a custom heuristic, the short exchanges made 067 throughout these conferences are selected, which
50
+
51
+ mostly contains the exchanges made by reporters. 069 A sentence-classifier (a CNN model trained on a combination of SQuAD ${}^{1}$ (Rajpurkar et al.,2018) and SPAADIA ${}^{2}$ datasets) is then used to only select the questions raised by the reporters from these exchanges. This builds a dataset containing questions raised by reporters and how they were answered
52
+
53
+ by the officials from different governments. This 076 dataset can prove useful in a number of applications, from building an insight on the questions, analyzing when a certain type of question was mostly raised, comparing the questions raised from
54
+
55
+ reporters to different governments. In our work 081 we introduce another application of this dataset,
56
+
57
+ which is building a customized search engine capa- 083 ble of finding the most similar question to a custom
58
+
59
+ query. This can prove helpful in answering ques- 085 tions raised by the public, it can also help reporters build an insight on how a specific question was answered by the different governments.
60
+
61
+ This search engine is built by fine-tuning BERT
62
+
63
+ (Devlin et al., 2018), a state-of-the-art language 090 modeling, to build a customized language model,
64
+
65
+ capable of understanding the context of COVID- 092 19 press conferences. An evaluation was built on BERT to test how well it understood the context of the COVID-Press dataset. We use the evaluation technique proposed by (Ein Dor et al., 2018), which tests the ability of BERT to identity sim-
66
+
67
+ 097
68
+
69
+ 098
70
+
71
+ 099 ilar sentences, we have built our own similarity 101 dataset from COVID-19 press context for evaluation. Then using the recent proposed architec- 103 ture SBERT (Reimers and Gurevych, 2019) (which builds a mechanism for selecting the most optimized embedding for sentences from BERT), a search engine is built. This search engine gets the most similar questions and their answers (from the built dataset) to a user query.
72
+
73
+ ${}^{1}$ https://rajpurkar.github.io/SQuAD-explorer/
74
+
75
+ ${}^{2}$ http://martinweisser.org/index.html#Amex_a
76
+
77
+ The paper is structured in the following way: (Section 2) presents how the dataset was collected and the proposed method for selecting the questions and answers in a press conference. In (section 3), we view how BERT (Devlin et al., 2018) was fine-tuned to the collected dataset. (Section 4) views the used architecture for extracting the embeddings from the fine-tuned BERT using the newly proposed SBERT (Reimers and Gurevych, 2019). (Section 5) views some results on running the search engine. We have used google colab for scrapping, fine-tuning and building our search engine, the code ${}^{3}$ is provided as jupyter notebooks to run seamlessly on google colab. The data ${}^{4}$ is hosted on google drive to connect seamlessly with google colab.
78
+
79
+ This work opens the opportunity to analyze how a certain question is answered across the different governments, hence building insights on the different fighting efforts being made across the world.
80
+
81
+ § 2 BUILDING COVID-19 PRESS DATASET
82
+
83
+ § THIS WORK USES THE TRANSCRIPTS PROVIDED BY REV
84
+
85
+ https://www.rev.com/ blog/transcript-tag/ coronavirus-update-transcripts
86
+
87
+ REV provides the transcripts of the press conferences made by multiple governments and agencies, these are :
88
+
89
+ * World Health organization press briefings
90
+
91
+ * United Kingdom Coronavirus briefing
92
+
93
+ * White house press conferences
94
+
95
+ * Justin Trudeau Canada COVID-19 Press Conference
96
+
97
+ * Press Conferences made by multiple US state 150
98
+
99
+ governors (NewYork, Iowa, Florida, ... and 151 many others)
100
+
101
+ 153
102
+
103
+ § 2.1 SCRAPPING TRANSCRIPTS
104
+
105
+ We have built a customized scrapper using python to scrape the exchanges made by different speakers in a given press conference. We have scrapped 654 press conferences made since 23th January, till 12th May. We were able to obtain more than
106
+
107
+ 66k exchanges throughout the collected transcripts 160 dataset. We tend to scrape the transcript text of
108
+
109
+ each speaker, with the name of that speaker, with 162 the timing of when this exchange was spoken within the press conference. We also record the name and the date of the press conference in addition to its url (from REV).
110
+
111
+ Since COVID-19 is a continuously evolving sit- 167 uation, we would periodically run our scrappers to
112
+
113
+ obtain the most up-to-date transcripts. 169
114
+
115
+ § 2.2 BUILDING COVID-19 QUESTIONS DATASET
116
+
117
+ This work aims to build a search engine on the questions raised by reporters and how they were answered by officials. However selecting these exchanges from the scrapped dataset appeared quite challenging, as REV doesn't provide a guide on the identity of each speaker, so work must be done in order to try and identify the identity of each speaker.
118
+
119
+ To select the questions raised by reporters, our work was broken down into 2 steps. First selecting all the reporter exchanges, then selecting the
120
+
121
+ questions from these exchanges. We first build 183 a custom heuristic capable of identifying the ex-
122
+
123
+ changes made the reporters, then we build a custom 185 sentence-classifier to select the raised questions.
124
+
125
+ § 2.2.1 CUSTOM HEURISTIC FOR SELECTING SHORT EXCHANGES
126
+
127
+ This work uses a custom heuristic to try and identify the identity of the speakers, in order to select the exchanges made the reporters. This heuristic is built over rules of when the speakers begin to speak and the amount their exchanges. The proposed rules are:
128
+
129
+ * The longest exchanges in a press conference are flagged to be spoken by the official giving the press conference (president, prime minis-
130
+
131
+ ter, governor or a health official). 199
132
+
133
+ ${}^{3}$ https://github.com/theamrzaki/covid-19-press-briefings ${}^{4}$ https://github.com/theamrzaki/covid-19-press-briefings#data
134
+
135
+ * The first exchange, is flagged as been spoken by the presenter (the conductor of the conference). This can either be a reporter or the
136
+
137
+ 203 official himself.
138
+
139
+ * If the main official conducting the conference, mentioned other speakers, those speakers are flagged to be helpers to that official. In most cases these have been found to be either health officials (like in case of Dr Fauci in the white
140
+
141
+ 210 house conferences), or other officials (either military or a financial official).
142
+
143
+ * We are most concerned with flagging the reporter exchanges. These have been found to be few exchanges in a single press conference made by each reporter (each speaker speaks either once or twice max). When this pattern is found (few exchanges made by a single speaker), these exchanges are flagged to be made by reporters, and are considered to be questions. Then the exchange right after it is flagged as its answer.
144
+
145
+ Using these rules, the previously collected transcript dataset was flagged with the proposed speakers (either conference-conductor, official, helper, or reporter). A dataset is then built to only contain the exchanges made by the reporters and the answers to them. However, not all of the selected reporter exchanges can be considered as questions, this is why a custom sentence-classifier has been built in order to only select the questions.
146
+
147
+ § 2.2.2 SENTENCE-CLASSIFIER FOR SELECTING QUESTIONS
148
+
149
+ A classifier was built with the goal of correctly identifying the true questions from the built reporter exchanges dataset.
150
+
151
+ We used the model presented by ${}^{5}$ which builds a CNN model on a combination of SQuAD ${}^{6}$ (Ra-jpurkar et al.,2018) and SPAADIA ${}^{7}$ datasets. These datasets classify sentences into 3 classes
152
+
153
+ 1. 1111 Command
154
+
155
+ 2. 80167 Statement
156
+
157
+ § 3. 131001 QUESTION
158
+
159
+ 249
160
+
161
+ In our work we are only interested in classifying 250
162
+
163
+ questions, so we have considered both the "Com- 251 mand" and the "Statement" as the same class.
164
+
165
+ The CNN model was trained on 170077 sentence 253 of the 3 classes, then it was tested on 42520 sentence. It was able to achieve a test accuracy of 0.9948 .
166
+
167
+ This model has then been used to classify the
168
+
169
+ questions from the collected reporter exchanges. 258 It classified that ${67.76}\%$ were indeed questions.
170
+
171
+ These correctly identified as questions (about $5\mathrm{\;k}$ ) 260 where then selected (with their answers) in a new dataset which only contain the reporter questions.
172
+
173
+ § 3 FINE-TUNING BERT TO COVID-19-PRESS
174
+
175
+ BERT (Devlin et al., 2018) has been proven to be the state-of-art architecture for language modeling.
176
+
177
+ It is built as an enhancement to the vanilla trans- 267 former (Vaswani et al., 2017). It is built to only
178
+
179
+ contain an encoder structure, and to depend solely 269 on self-attention.
180
+
181
+ BERT is unique in the approach used in its pretraining, where "masked language model" (MLM) is used as the pre-training objective, inspired by the Cloze task (Taylor, 1953). This approach randomly chooses words from the input text (15% of words), and the training objective is to predict these masked words. This training objective enables BERT to be pre-trained in an unsupervised manner, where raw text is supplied to BERT, without having labels.
182
+
183
+ This training objective is also used in its fine-tuning, in our case, the collected dataset (COVID- 19 press of ${66}\mathrm{k}$ exchanges) is used as the raw training text to fine-tune the pretrained BERT. Hugging Face (https://huggingface.co/) library was used to fine-tune BERT to the collected dataset. The BERT model provided by google (https://huggingface.co/google/ bert_uncased_L-8_H-256_A-4) was used as our pre-trained BERT. Google colab was used as the platform for fine-tuning.
184
+
185
+ § 3.1 BERT EVALUATION FOR COVID-19 PRESS CONTEXT
186
+
187
+ Evaluation of a customized language model proves challenging, as most of the available evaluation techniques are build to cope with a general language model not a customized one. A recent evaluation technique for BERT was recently proposed by (Ein Dor et al., 2018). This technique relies
188
+
189
+ on evaluating how BERT is able to measure the 299 similarity of different sentences, where 3 sentences 301 are supplied to BERT, 2 are similar and 1 is not. The evaluation is made to test if BERT is able to 303 correctly identify the similar sentences. The true breakthrough that this technique offers over other evaluation mechanisms, is the ease of producing customized evaluation datasets without manual labeling.
190
+
191
+ ${}^{5}$ https://github.com/lettergram/sentence-classification
192
+
193
+ ${}^{6}$ https://rajpurkar.github.io/SQuAD-explorer/
194
+
195
+ ${}^{7}$ http://martinweisser.org/index.html#Amex_a
196
+
197
+ In (Ein Dor et al., 2018) they were able to create a customized similarity dataset from scrapping
198
+
199
+ 310 Wikipedia pages. They assumed that sentences from the same paragraph in a Wikipedia article are similar, and a sentence from a different paragraph would talk about a different subject, hence lower similarity. They then used this to build a customized similarity dataset by using a Wikipedia article of their chosen context.
200
+
201
+ 317 In this work, we have used the same approach to build our own similarity dataset. We have used
202
+
203
+ 319 our built dataset, that contain all the exchanges between speakers of all datasets (dataset of ${66}\mathrm{k}$ exchange). Then to build the similarity dataset, we selected every 2 adjacent exchanges from a press conference as the similar sentences, we then used an exchange from a different press conference as the different sentence. By this, a dataset of ${40}\mathrm{k}$ triplets has been created, where each row contains 3 sentences, 2 similar (of the same press conference), and one different.
204
+
205
+ BERT has been evaluated using this custom built evaluation dataset, it was capable of correctly identifying ${99.7}\%$ of the ${40}\mathrm{k}$ triplets. This indicates that BERT was capable to correctly understand thee context of the sentences. We also evaluated our fine-tuned version of BERT, where it scored an accuracy of ${99.88}\%$ , which indicates that even the examples which were quite difficult for the vanilla BERT to correctly identify, were correctly handled by our fine-tuned BERT.
206
+
207
+ § 4 BERT TO BUILD A SEARCH ENGINE
208
+
209
+ Using BERT for sentence-pair regression (measuring how similar sentences are to each other, the technique used to built a search engine), proves to be inefficient for multiple reasons.
210
+
211
+ To begin with, for sentence-pair regression in BERT, the 2 sentences are provided to BERT with a special separator token in between them [SEP]. To build a search engine using this approach, one would need to supply each sentence to BERT (in addition to the query sentence). This would require
212
+
213
+ running BERT each time in deployment for about 350
214
+
215
+ $5\mathrm{\;k}$ times (size of the dataset) to get the most similar 351
216
+
217
+ question and its answer from all the dataset. This 352
218
+
219
+ is simply unsuitable for building a search engine. 353
220
+
221
+ Another approach other than sentence-pair regression is often proposed, which is extracting the sentence embedding from BERT. First running BERT just once on the $5\mathrm{k}$ questions, get-
222
+
223
+ ting their embedding, and in deployment, just run 358 BERT once on the query and use cosine similar-
224
+
225
+ ity to get the most similar question and its answer. 360 However this also exposes another disadvantage
226
+
227
+ in BERT, as in BERT no independent sentence- 362 embedding are computed, this makes it challenging to extract a good embedding from BERT (Reimers and Gurevych, 2019).
228
+
229
+ Multiple approaches were proposed to help ex-
230
+
231
+ tract good embeddings from BERT. ((May et al., 367 2019),(Zhang et al., 2019),(Qiao et al., 2019)) pro-
232
+
233
+ posed using the [CLS] token from BERT as the 369 fixed size vector embedding for a sentence. Another approach used by (Reimers and Gurevych, 2019), computes the mean of all output vectors.
234
+
235
+ In (Reimers and Gurevych, 2019), they trained a Siamese BERT network on SNLI data (Bowman et al., 2015) and on Multi-Genre NLI. They then evaluated different polling approaches to build embedding representation for sentences. Either using [CLS] or by averaging vectors to get [MEAN], they fine-tuned their architecture for classification objective function on the STS benchmark with regression objective function. They concluded that using the [MEAN] polling strategy outperformed that of using [CLS] strategy. This is the reason it was the
236
+
237
+ selected pooling strategy in our work. 383
238
+
239
+ § 5 EXPERIMENTS
240
+
241
+ 385
242
+
243
+ To build our search engine, we fine-tuned BERT on the collected $5\mathrm{\;k}$ questions, saved their embedding using the [MEAN] polling strategy, then for each test query, we run the fine-tuned BERT with the same polling strategy, and using cosine similarity we get the most similar questions asked in the
244
+
245
+ collected press-conferences. 392
246
+
247
+ To select the test queries, we followed a selecting mechanism to automatically select sentences from our corpus. Some measures were taken to ensure that the selected sentences were of different
248
+
249
+ context. The resultant embedding from the fine- 397 tuned BERT were used with k-means to cluster
250
+
251
+ the dataset to multiple clusters, were each of them 399 convey a specific context. Elbow method was used to identity that 10 clusters would be the optimized number of clusters to be used (dataset with clusters as labels). The clusters with the most number of associated sentences were then selected to draw the test sentences from, then using a random generator, a sentence from each cluster was selected.
252
+
253
+ The following are some examples from the search engine, the top 2 most similar questions and their answers are selected. With the name of the press briefing, its date, and the time within the briefing when this exchange was spoken.
254
+
255
+ § INPUT SENTENCE
256
+
257
+ And regarding unemployment, we're hearing stories of people are still not getting returned phone calls within 72 hours,...
258
+
259
+ § RESULTS
260
+
261
+ § SCORE: 0.9296
262
+
263
+ question: Yes, governor, I want to go back to unemployment. We're still hearing from many who are wanting to know when they're going to get their checks and you gave that answer,...
264
+
265
+ answers: Yeah, I think that is right. We have processed and I think most of the checks that are direct deposited have gone out to I think a majority of the people who are in that backlog....
266
+
267
+ header: Transcript: Governor Ned Lamont COVID- 19 Press Conference Transcript April 14
268
+
269
+ date: Apr 14, 2020 (39:31)
270
+
271
+ 433 Score: 0.9252
272
+
273
+ question: I'm still healing hearing from some people who are having problems getting through to unemployment and getting the benefits that they feel they're entitled to.....
274
+
275
+ answers: So I think that it's always important to have some perspective here. We've had over a million people become unemployed in the last six weeks. We have been able to make sure that over 820,000 people have gotten the assistance that they've earned...
276
+
277
+ header: Michigan Governor Gretchen Whitmer Press Conference Transcript April 24
278
+
279
+ date:Apr 24, 2020 (30:53)
280
+
281
+ 449 Table 1: Query 1
282
+
283
+ § INPUT SENTENCE
284
+
285
+ 450
286
+
287
+ This morning at the San Mateo county board of 451
288
+
289
+ supervisors meeting officials there expressed 453 grave concern about the lack of PPE at Seton Medical Center, and also the need for more staffing. I just want to find out what the state is doing to address those needs?
290
+
291
+ § RESULTS
292
+
293
+ § SCORE: 0.9039
294
+
295
+ question: Reporters in the room, I'm working 460 on behalf of your colleagues. I'm going to try and get some of their other questions in.We may not have as many confirmed cases downstate but already clusters of cases in a senior home in Taylorville outnumber the available number of ICU beds at the hospital in town....
296
+
297
+ 467
298
+
299
+ answers: Our ICU bed situation in the state,
300
+
301
+ as you know this is as we move toward the 469 peak of this, we are going to be filling up ICU beds across the state. It isn't the same in every area. There are critical-access hospitals that may have fewer ICU beds. There are other hospitals in other areas of the state that may have more availability,....
302
+
303
+ header: Illinois Governor J.B. Pritzker 476 COVID-19 Briefing Transcript April 1
304
+
305
+ date: Apr 1, 2020 (40:52)
306
+
307
+ § SCORE: 0.9038
308
+
309
+ 480
310
+
311
+ question: The next question is for the Secre- 481 tary. Dr. Levine from the Capitol Star. HAP
312
+
313
+ said that it was in talks with the administration 483 today about resuming non-emergent services
314
+
315
+ as the lockdown eases. Can you characterize 485 the state of those talks and what you would need to do to allow hospitals to start treating those patients?
316
+
317
+ answers: Mm-hmm (affirmative). So that is
318
+
319
+ correct. We have had discussions with the 490 hospital association as well as a number of
320
+
321
+ different health systems and hospitals about 492 when would be the right time to allow nonemergent procedures to occur. Now remember, some of those are procedures that really have to happen for people's health and they've been on hold and it's really difficult.
322
+
323
+ header: Pennsylvania Gov. Tom Wolf Coron- 499 avirus Briefing Transcript April 22
324
+
325
+ date:Apr 22, 2020 (19:00)
326
+
327
+ As seen in the previous examples, the exchanges 501 that were flagged as questions were indeed questions. This helps indicate that the used mechanisms 503 for selecting questions from the different exchanges were successful.
328
+
329
+ § 6 CONCLUSIONS
330
+
331
+ In this work we present a new COVID-19 data 508 source, which is the press conference briefings, as a rich source for analyzing different governments re- 510 sponse for fighting the virus. We also present some mechanisms of selecting questions from these press 512 briefings. We have used the state-of-art language models for building a semantic search engine to get the most similar questions from the press briefings. This search engine can prove helpful in addressing the questions posed by the public concerning 517 COVID-19. It can also be used by journalists and researchers in comparing the different efforts made 519 by the governments around the world in fighting the pandemic.
332
+
333
+ Building a search engine is just one of multiple possible applications of using this dataset. Further analysis of this dataset opens the possibility to
334
+
335
+ 524 multiple other uses, like analyzing the timeline of asking a certain question, when it was first raised,
336
+
337
+ 526 by whom and how it was answered.
338
+
339
+ We believe that this new data source can prove useful in multiple areas of research, to understand and build insights on the different approaches taken
340
+
341
+ 530 by governments in combating this virus.
342
+
343
+ 531
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/HxIZzQZy_0F/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,264 @@
1
+ # Jennifer for COVID-19: An NLP-Powered Chatbot Built for the People and by the People to Combat Misinformation
2
+
3
Yunyao Li${}^{1,*}$  Tyrone Grandison${}^{2,*}$  Patricia Silveyra${}^{3,*}$  Ali Douraghy${}^{4}$

Xinyu Guan${}^{5}$  Thomas Kieselbach${}^{6}$  Chengkai Li${}^{7}$  Haiqi Zhang${}^{7}$
6
+
7
+ ${}^{1}$ IBM Research - Almaden ${}^{2}$ The Data-Driven Institute ${}^{3}$ University of North Carolina - Chapel Hill ${}^{4}$ The National Academies of Sciences, Engineering and Medicine ${}^{5}$ Yale University ${}^{6}$ Umeå University ${}^{7}$ University of Texas - Arlington
8
+
9
+ \{yunyaoli@us.ibm.com, tgrandison@data-driven.institute, patry@email.unc.edu, adouraghy@nas.edu\}
10
+
11
+ ## Abstract
12
+
13
Just as SARS-CoV-2, a new form of coronavirus, continues to infect a growing number of people around the world, harmful misinformation about the outbreak also continues to spread. With the goal of combating misinformation, we designed and built Jennifer, a chatbot maintained by a global group of volunteers. With Jennifer, we hope to learn whether public information from reputable sources could be more effectively organized and shared in the wake of a crisis, as well as to understand the issues the public is most immediately curious about. In this paper, we introduce Jennifer and describe the design of this proof-of-principle system. We also present lessons learned and discuss open challenges. Finally, to facilitate future research, we release COQB-19 (COVID-19 Question Bank)${}^{1}$, a dataset of 3,924 COVID-19-related questions in 944 groups, gathered from our users and volunteers. Jennifer is available at http://bit.ly/jenniferai.${}^{2}$
14
+
15
+ ## 1 Introduction
16
+
17
+ This paper introduces Jennifer, a chatbot created to provide easily accessible information from reliable resources to answer questions related to the current COVID-19 pandemic. Jennifer leverages cutting-edge chatbot technology, as well as a global network of volunteers, to combat misinformation related to the pandemic. The information provided by Jennifer covers a wide variety of topics, ranging from case statistics and food safety to best practices for disease prevention and management.
18
+
19
+ The idea of Jennifer was born in early March 2020 during the semi-annual meeting of New Voices, ${}^{3}$ a project of the National Academies of Sciences, Engineering, and Medicine. ${}^{4}$ The New Voices members, a group of early-career scientists representing a diversity of research, health and policy perspectives, knew that the scientific community could rapidly mobilize its expertise to address the public health challenges the United States would soon face, and called for "rapid collaborations between scientists and the civic tech communities to educate the public" (New Voices, 2020). We envisioned using the latest techniques in artificial intelligence to create a platform of evidence-based information from reliable sources that the public would find easy to interact with.
20
+
21
+ We quickly mobilized to design and build Jennifer and to recruit a global group of volunteer scientists to help test and scale Jennifer's performance. The proof-of-principle system will demonstrate the feasibility of directly crowd-sourcing the global scientific community's expertise for public benefit without the need for intermediaries, and help improve public trust in science.
22
+
23
+ Our core design considerations are:
24
+
25
+ - Rapid Development: Jennifer should be built within a short amount of time to win the race against fast spreading misinformation.
26
+
27
+ - Ease of Access: Jennifer should provide information to the general public in an easily accessible manner across different platforms (e.g. Web and social media).
28
+
29
+ - Ease of Maintenance: Jennifer should be maintainable by a broader and more diverse group of volunteers.
30
+
31
+ - Quality Assurance: Jennifer should provide information from reputable sources and maintain a rigorous process to ensure the quality of information in a consumable and empathetic manner.
+ - Extensibility: Jennifer should be easily extensible to expand its capability with minimal effort.
32
+
33
+ ---
34
+
35
+ *denotes equal contribution
36
+
37
+ ${}^{1}$ https://www.newvoicesnasem.org/data-downloads
38
+
39
+ ${}^{2}$ Facebook http://fb.me/JenniferCOVIDAI
40
+
41
+ ${}^{3}$ Learn more at: www.NewVoicesNASEM.org
42
+
43
+ ${}^{4}$ The opinions expressed here are those of the authors and do not represent positions of the National Academies of Sciences, Engineering, and Medicine or the authors' institutions
44
+
45
+ ---
46
+
47
+ The first version of Jennifer was released on March 8, 2020. Since then, over 160 volunteers from 141 institutions around the globe recruited through the New Voices’ network ${}^{5}$ have helped make daily updates to the chatbot to ensure that its content reflects the latest available information from trusted sources. It is available on the Web and as a Facebook bot. It is also currently embedded in two fact-checking systems. ${}^{6}$ As of June 18, 2020, Jennifer has been asked 1,480 questions (excluding questions selected via menus) and answered 1,059 of them (a response rate of 71%), with an average engagement duration of three minutes and 15 seconds. We plan to conduct a more formal evaluation of Jennifer in the future.
48
+
49
+ ## 2 Jennifer Overview
50
+
51
+ We chose to build Jennifer as a chatbot because chatbots "are able to present concise information from credible sources" and are "less overwhelming than social media or web search engines' long list of results" (Miner et al., 2020). The need for agility and speed necessitated that we leverage an existing chatbot platform rather than building one from scratch. We chose to utilize Juji (Juji, 2020), which supports both task-oriented and social dialogues and allows easy extensions.
52
+
53
+ This platform supports do-it-yourself chatbot making, similar to Chatfuel ${}^{7}$ and Manychat, ${}^{8}$ but with more advanced NLP capabilities for dialog management similar to Google Dialogflow ${}^{9}$ and IBM Watson Assistant (Janarthanam, 2017; Xiao et al.,2020). ${}^{10}$ By leveraging Juji, we were able to build and deploy the first version of Jennifer in less than a day. The resulting chatbot is readily deployable on the Web and as a Facebook bot.
54
+
55
+ ### 2.1 Overall Architecture
56
+
57
+ Figure 1 depicts the overall architecture of Jennifer. As can be seen, Jennifer depends on the Juji base system for dialog management (Zhou et al., 2019). Given a user question, Juji uses a pre-trained machine learning model to identify one or more relevant questions with known answers and, depending on its confidence level, returns an answer or a follow-up question (more in Sec. 2.2). The main capabilities of Jennifer come from the Question-Answer (QA) pairs generated by the extensions specifically implemented for Jennifer, with two modes of ingestion:
58
+
59
+ - Crowdsourced: This mode relies on a repository of Frequently Asked Questions gathered from reliable sources such as the Centers for Disease Control and Prevention (CDC), the World Health Organization (WHO), the University of Washington Bothell, and the Federation of American Scientists. ${}^{11}$ The questions are provided by the users and volunteers of Jennifer, many based on the FAQs. The answers are manually curated by the volunteers of Jennifer via a rigorous process detailed shortly.
60
+
61
+ - Automated: Often, users of Jennifer ask questions on specific statistics such as number of confirmed cases in a country, or the death rate of a state or a city. The number of questions of this nature was significant, and answers to such questions are changing constantly. Therefore, it is labor intensive to manually curate answers or create alternative questions for such questions. Instead, we have built a QA Generator to automatically create such QA pairs, based on structured data pulled from reliable sources such as the CDC on a daily basis and question templates derived from the crowdsourced questions.
62
+
63
+ Most QA pairs come from the crowdsourced mode with significant efforts by our volunteers. Our volunteer base is a selected group of medical experts, scientists, engineers, technologists, and specialists. To ensure efficient delineation of tasks and to preserve scientific integrity, we divide this base into four volunteer groups: Curators, Helpers, Testers, and Admins, as follows.
64
+
65
+ Curators take new, unanswered questions, research current answers from reputable and trustworthy sources, and then craft answers with supporting evidence for inclusion. Curators also update answers that have become obsolete. Given the novelty of COVID-19, this is a critical task. Helpers take a set of existing questions and generate many possible question formulations, i.e., alternative questions. This step helps Jennifer to better answer unseen questions as Juji fine-tunes its underlying QA engine using additional data. Testers evaluate answers for freshness, accuracy, and readability, and monitor other possible quality issues with Jennifer, e.g., formatting issues. Input from volunteers is further validated by Admins before it is deployed into Jennifer. Specifically, Admins validate all answers, first for scientific validity, and then for language fluency and naturalness in response. Admins also validate alternative questions for relevancy and language fluency. Dedicated Slack channels ${}^{12}$ for each volunteer group were used to facilitate discussions and collaborations.
66
+
67
+ ---
68
+
69
+ ${}^{5}$ Learn more about the New Voices Network Tool at https://www.newvoicesnasem.org/the-network
70
+
71
+ ${}^{6}$ https://coronacheck.eurecom.fr/en and https://idir.uta.edu/covid-19/
72
+
73
+ ${}^{7}$ https://chatfuel.com
74
+
75
+ ${}^{8}$ https://manychat.com
76
+
77
+ ${}^{9}$ https://dialogflow.com/
78
+
79
+ ${}^{10}$ https://www.ibm.com/cloud/watson-assistant/
80
+
81
+ ${}^{11}$ www.cdc.gov, www.who.int, www.uwb.edu, and fas.org
82
+
83
+ ---
84
+
85
+ ![01963dae-7f56-7b95-adb7-682c3d3cd864_2_282_176_1102_445_0.jpg](images/01963dae-7f56-7b95-adb7-682c3d3cd864_2_282_176_1102_445_0.jpg)
86
+
87
+ Figure 1: Architecture Overview of Jennifer
88
+
89
+ ![01963dae-7f56-7b95-adb7-682c3d3cd864_2_202_694_611_710_0.jpg](images/01963dae-7f56-7b95-adb7-682c3d3cd864_2_202_694_611_710_0.jpg)
90
+
91
+ Figure 3: Jennifer informs users about additional topics it knows
92
+
93
+ ### 2.2 Chat Design
94
+
95
+ We designed the dialog flow of Jennifer based on two principles: 1. Fostering mixed-initiative interaction (Walker and Whittaker, 1990); 2. Supporting Two-way adaptation: learning from users and also encouraging users to learn what Jennifer can do (Pan et al., 2005)
96
+
97
+ ![01963dae-7f56-7b95-adb7-682c3d3cd864_2_858_671_584_166_0.jpg](images/01963dae-7f56-7b95-adb7-682c3d3cd864_2_858_671_584_166_0.jpg)
98
+
99
+ Figure 4: Jennifer recommends relevant questions.
100
+
101
+ When Jennifer was first launched in early March, most people knew little about COVID-19 or its impact. Thus Jennifer started with a "menu" to inform users about its existing knowledge on most important topics (Fig. 2). After answering a question, Jennifer also volunteers information on additional topics that it knows (Fig. 3). This design aims to address two challenges: 1) the user may not know how to get started or lack knowledge to ask additional questions; 2) Jennifer (or any AI system) will never be perfect; there will always be questions that it cannot answer. By informing users about what it knows, users are more likely to ask questions that Jennifer can answer. If Jennifer is unsure about how to answer a question, it will recommend similar questions to give users a chance to obtain desired answers as well as learn more about Jennifer's capabilities. Fig. 4 shows how it expresses its uncertainty regarding the user's question but proceeds to recommend a list of relevant inquiries. Jennifer will improve its response to similar questions based on user interactions.
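+
+ Viewed abstractly, the behavior shown in Fig. 4 is a confidence threshold over the matcher's scores. The sketch below is only an illustrative approximation, not Juji's implementation: the lexical scorer, threshold value, and tiny QA bank are assumptions standing in for the pre-trained matching model.
+
+ ```python
+ # Illustrative answer-or-recommend logic (cf. Fig. 4). Not Juji's implementation;
+ # the scorer, threshold, and QA bank below are assumptions for this sketch.
+ from difflib import SequenceMatcher
+
+ QA_BANK = {
+     "Are children at risk of getting COVID-19?": "Based on the current data, nobody seems to be immune ...",
+     "How can I protect myself from COVID-19?": "Wash your hands often and keep physical distance ...",
+ }
+ ANSWER_THRESHOLD = 0.75  # assumed value; a real system would tune this empirically
+
+ def score(a, b):
+     """Crude lexical similarity used here in place of a learned matching model."""
+     return SequenceMatcher(None, a.lower(), b.lower()).ratio()
+
+ def respond(user_question):
+     ranked = sorted(QA_BANK, key=lambda q: score(user_question, q), reverse=True)
+     if score(user_question, ranked[0]) >= ANSWER_THRESHOLD:
+         return QA_BANK[ranked[0]]          # confident: answer directly
+     return {"did_you_mean": ranked[:3]}    # unsure: recommend similar questions
+
+ print(respond("can kids catch covid"))
+ ```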
102
+
103
+ Jennifer aims at fostering mixed-initiative interactions. On the one hand, it proactively solicits questions from users; on the other hand, it allows users to initiate their questions any time during the chat flow. Such mixed-initiative interactions keep users engaged while allowing users to obtain information at their own pace.
104
+
105
+ ---
106
+
107
+ ${}^{12}$ https://slack.com
108
+
109
+ ---
110
+
111
+ ### 2.3 QA Pairs
112
+
113
+ As illustrated in Table 1, QA pairs in Jennifer are grouped by their ids ${}^{13}$ ; questions with the same id are regarded as similar and associated with one or more (semantically equivalent) answers. We release COQB-19 with questions gathered from our users and volunteers.
114
+
115
+ To be included in the chatbot, each answer needs to satisfy the following criteria:
116
+
117
+ Easy to understand: The information is presented in language understandable by the general public.
+
+ Accuracy and Openness: The answers must be backed up by data from reliable sources, include references or links to such sources, and be verified by at least one trusted volunteer medical expert. Furthermore, scientific understanding of COVID-19 is quickly evolving; it is important to be explicit about potential uncertainty in the answers.
118
+
119
+ Demonstration of Empathy: The language provided in the answers should emulate natural empathetic conversation, and must acknowledge factors such as stress or anxiety experienced by the users to help foster trust.
120
+
121
+ ### 2.4 Multilingual Support
122
+
123
+ We have received numerous requests from users around the world to offer Jennifer-like capabilities in other languages. On the surface, this task appears to be straightforward. One can translate QAs from Jennifer into another language using machine translation (MT). However, language expansion needs to overcome several major obstacles:
124
+
125
+ - Language Fluency: Results produced by commercially available MT services still require significant manual refinement, particularly for domain-specific text (e.g., many answers provided by Jennifer).
126
+
127
+ - Domain Customization: Specialized domains such as epidemiology and public health often have their own specific terminologies in non-English languages.
128
+
129
+ - Relevancy: Answers to questions should be verified against reliable sources in the language of question. Additionally, cultural aspects and differences among dialects must be considered when crafting answers in different languages.
130
+
131
+ - Models: The Dialog Manager relies on pre-trained machine learning models, which usually perform worse for non-English languages.
132
+
133
+ We therefore chose to expand one language at a time. The first language we selected is Spanish, spoken by ${13.5}\%$ of the US population. ${}^{14}$ Specifically, we designed and built Sofia, ${}^{15}$ also using the Juji platform. The QA collection underlying Sofia consists of QA pairs manually translated from the Jennifer QA pairs. It is maintained and manually curated by a group of bilingual Spanish-English certified medical interpreters, using verified information from the Spanish language websites of the CDC and WHO. Plans to expand Jennifer to other languages are currently under development.
134
+
135
+ ## 3 Discussion
136
+
137
+ Jennifer has successfully demonstrated that, with the right combination of technology and human experts, information from reputable sources can be more quickly and effectively organized and shared at scale. In this section, we share lessons learned and open challenges in the hope of shedding some light on promising future research directions.
138
+
139
+ ### 3.1 Lessons Learned
140
+
141
+ People are eager to help. Many scientists and health professionals are eager to step up and help to better respond to the COVID-19 crisis. Many expressed gratefulness for the opportunity to contribute to Jennifer as a volunteer.
142
+
143
+ Process and Communication are Important. Given the evolving tasks and the large number of volunteers with diverse backgrounds, putting the right process around tasks, workflow, and sequencing (Norheim-Hagtun and Meier, 2010) is key to ensuring efficient use of the volunteers' time to the advantage of the project. It is also important to hold regular dialog with the volunteers to both provide and obtain feedback as well as keep them posted about the progress of the project.
144
+
145
+ Effective and Dedicated Management is Critical. Even with delineation and process optimization, the job of managing volunteers and the intake process requires constant focus and dedication by a few individuals to ensure successful execution. As such, we need to support Jennifer with more dedicated resources along with its large number of volunteers to ensure its long-term success.
146
+
147
+ Human-Machine Conversation requires Proactive Design. Despite the careful chat design described in Sec. 2.2, improvements on our current design are still desired to avoid the perils of over-promising and to encourage users to frame their questions with more specific keywords and simpler sentence structures. We are currently exploring different design options.
148
+
149
+ ---
150
+
151
+ ${}^{14}$ https://www.census.gov/data
152
+
153
+ ${}^{15}$ Available at https://bit.ly/SofiaAI and on Facebook https://fb.me/SofiaCOVIDAI
154
+
155
+ ${}^{13}$ Manually assigned by Testers and validated by Admins
156
+
157
+ ---
158
+
159
+ <table><tr><td>ID</td><td>Question</td><td>Answer</td></tr><tr><td>ChildrenRisk</td><td>Are kids at risk?</td><td>Based on the current data, nobody seems to be immune from COVID-19,</td></tr><tr><td>ChildrenRisk</td><td>Can children be infected?</td><td>including children. It is true that the number of cases in children is so far</td></tr><tr><td>ChildrenRisk</td><td>Are children at risk of getting COVID-19?</td><td>lower than the number of cases in adults. We really don't know why this is.</td></tr><tr><td>ChildrenRisk</td><td>Tell me how COVID-19 affects children?</td><td>The CDC provides answers to commonly asked questions about COVID</td></tr><tr><td>ChildrenRisk</td><td>Tell me if kids get infected?</td><td>-19 in here. For those interested in recent research on the subject, a study</td></tr><tr><td>ChildrenRisk</td><td>Tell me if children get infected?</td><td>describing infections in kids in China is available here.</td></tr></table>
160
+
161
+ Table 1: Example QA Pairs
162
+
163
+ ### 3.2 Open Challenges
164
+
165
+ Coordinating the distribution of information at the national level is critical to prepare for the next pandemic (Alexander, 2020). Jennifer-like chatbots may be a fundamental component of future misinformation resolution strategy. Our experience with Jennifer confirms that it is possible to collaboratively build such chatbots quickly and effectively and to scale these initiatives with the help from many volunteers. However, building these chatbots also comes with its own set of open challenges.
166
+
167
+ Scalable Crowdsourced Fact Checking Platform. Much of the recent research has focused on automating the task of fact checking (e.g., Adair et al. (2017); Pathak and Srihari (2019)). However, in a novel crisis like COVID-19, facts are quickly changing. It is crucial to engage human experts in the loop to ensure the timeliness and accuracy of the answers provided by systems like Jennifer. Much of the development and ongoing maintenance of Jennifer relies on a rigorous, manual process for quality assurance. Though receiving input from a large number of distributed volunteers is desirable, it remains an open challenge to design, construct, and maintain a fact-checking platform that supports a rigorous process to both engage a large number of experts with diverse expertise levels and leverage automation in minimizing human efforts (Hughes and Tapia, 2015).
168
+
169
+ Zero-Shot Empathetic Natural Language Generation (NLG). To ensure accuracy, comprehensibility, and an appropriate level of empathy, answers provided by Jennifer are either manually curated or auto-generated with manually curated templates (Sec. 2). While it is possible to scrape FAQs automatically from reliable resources, how to use the scraped text to generate empathetic answers with little or no training data remains an open problem (Liu et al., 2020), potentially solvable via approaches similar to politeness transfer (Madaan et al., 2020). Identifying multiple resources relevant to a question and composing answers based on them in a coherent and empathetic manner is an even more challenging problem.
170
+
171
+ Competing Information Sources and Public Trust. Tensions among centralized knowledge networks, such as public health organizations and medical academia, and decentralized information sources on social media platforms and independent news sites introduce new challenges for combating misinformation during global crises. Evidence-based, peer-reviewed information has to compete for public attention and public trust (Cary Funk, 2020). Information literacy becomes ever more important. Solving this challenge requires more than technological innovation (Goldstein, 2020).
172
+
173
+ ## 4 Conclusion
174
+
175
+ This paper introduces Jennifer, a chatbot created to provide easily accessible information to answer questions related to the current COVID-19 pandemic. Jennifer leverages cutting-edge chatbot technology, as well as a diverse network of volunteers from around the globe, to combat misinformation related to the pandemic. The information provided by Jennifer covers a wide variety of topics, ranging from updated case statistics to food safety and best practices to prevent the virus spread.
176
+
177
+ ## Acknowledgments
178
+
179
+ The authors would like to acknowledge the National Academies of Sciences, Engineering, and Medicine and the Gordon and Betty Moore Foundation for their generous support of the New Voices project, as well as the guidance and support of the Juji.io team. We would also like to thank our hundreds of volunteers whose efforts have made Jennifer possible.
180
+
181
+ ## References
182
+
183
+ Bill Adair, Chengkai Li, Jun Yang, and Cong Yu. 2017. Progress toward "the holy grail": The continued quest to automate fact-checking. In Proceedings of the 2017 Computation+Journalism Symposium.
184
+
185
+ Archita Pathak and Rohini Srihari. 2019. BREAKING! presenting fake news corpus for automated fact checking. In ACL (Student Research Workshop).
186
+
187
+ Marilyn Walker and Steve Whittaker. 1990. Mixed initiative in dialogue: an investigation into discourse segmentation. In ACL, pages 70-78.
188
+
189
+ Ziang Xiao, Michelle X. Zhou, Wenxi Chen, Huahai Yang, and Changyan Chi. 2020. If I Hear You Correctly: Building and Evaluating Interview Chatbots with Active Listening Skills. In ACM CHI'2020.
190
+
191
+ Michelle X. Zhou, Gloria Mark, Jingyi Li, and Huahai Yang. 2019. Trusting virtual agents: The effect of personality. ACM Trans. Interact. Intell. Syst., 9(2-3):10:1-10:36.
192
+
193
+
194
+
195
+
196
+
197
+ Senator Lamar Alexander. 2020. Preparing for the next pandemic. https://www.alexander.senate.gov.[Online; accessed 16-June-2020].
198
+
199
+ Cary Funk. 2020. Key findings about Americans' confidence in science and their views on scientists' role in society. https://pewrsr.ch/2Hgq31S.[Online; accessed June-2020].
200
+
201
+ Stéphane Goldstein, editor. 2020. Informed Societies. Facet Publishing.
202
+
203
+ Amanda Lee Hughes and Andrea H. Tapia. 2015. Social media in crisis: When professional responders meet digital volunteers. Journal of Homeland Security and Emergency Management, 12.
204
+
205
+ Srini Janarthanam. 2017. Hands-On Chatbots and Conversational UI Development: Build Chatbots and Voice User Interfaces with Chatfuel, Dialogflow, Microsoft Bot Framework, Twilio, and Alexa Skills. Packt Publishing.
206
+
207
+ Juji. 2020. Juji document for chatbot designers. https://docs.juji.io/.[Online; accessed 14- June-2020].
208
+
209
+ Zihan Liu, Genta Indra Winata, Zhaojiang Lin, Peng Xu, and Pascale Fung. 2020. Attention-informed mixed-language training for zero-shot cross-lingual task-oriented dialogue systems. In AAAI, pages 8433-8440. AAAI Press.
210
+
211
+ Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W Black, and Shrimai Prabhumoye. 2020. Politeness transfer: A tag and generate approach. In ACL.
212
+
213
+ Adam S. Miner, Liliana Laranjo, and A. Baki Kocaballi. 2020. Chatbots in the fight against the covid-19 pandemic. npj Digit. Med., 3.
214
+
215
+ New Voices. 2020. How the US must respond to the COVID-19 pandemic. https://blogs.scientificamerican.com/observations/how-the-us-must-respond-to-the-covid-19-pandemic/. [Online; accessed June-2020].
216
+
217
+ I. Norheim-Hagtun and P. Meier. 2010. Crowdsourcing for crisis mapping in Haiti. Innovations: Technology, Governance, Globalization, 5:81-89.
218
+
219
+ Shimei Pan, Siwei Shen, Michelle X. Zhou, and Keith Houck. 2005. Two-way adaptation for robust input interpretation in practical multimodal conversation systems. In IUI, pages 35-42. ACM.
220
+
221
+ ## A Auto-Generation of QA Pairs
222
+
223
+ Below is part of the data from the CDC website on June 30, 2020 (Case and Death data as of Jun 30 2020 12:15PM; Testing data updated as of Jun 25 2020 12:00AM).
224
+
225
+ For statistics-related questions, we apply a simple template-based approach to automatically generate the corresponding QA pairs so that we can update Jennifer with the most current information while minimizing manual efforts. The templates were all manually curated by Admins based on input from other volunteers.
226
+
227
+ Specifically, we currently generate the following statistics-related questions automatically and deploy them on a regular basis:
228
+
229
+ - Number of confirmed cases in a specific U.S. state/jurisdiction using Case and Death data from CDC.
230
+
231
+ - Number of deaths in a specific U.S. state/jurisdiction using Case and Death data from CDC.
232
+
233
+ - Number of confirmed cases in a country using Case data from WHO.
234
+
235
+ Table 2 shows example templates for QA pairs within a single group. Based on data released by the CDC on a daily basis (e.g., Table 3), we automatically generate QA pairs from the templates, as illustrated in Table 4.
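+
+ To make the template-filling step concrete, the sketch below generates grouped QA pairs in the spirit of Tables 2-4. It is a hypothetical re-implementation: the field names mirror Table 3, but the exact templates, URL, and data-loading code used for Jennifer are not given in the paper and are assumed here.
+
+ ```python
+ # Hypothetical sketch of template-based QA generation (cf. Tables 2-4).
+ # Field names follow Table 3; the templates, date, and URL are assumptions.
+ CASE_DATA = [
+     {"jurisdiction": "Alaska", "Total Cases": 904},
+     {"jurisdiction": "Alabama", "Total Cases": 37203},
+ ]
+ DATE = "June 30, 2020"
+ CDC_URL = "https://www.cdc.gov/..."  # truncated, as in the paper's own tables
+
+ QUESTION_TEMPLATES = [
+     "How many confirmed cases in {state}?",
+     "How many COVID-19 cases have been confirmed in {state}?",
+ ]
+ ANSWER_TEMPLATE = (
+     "There were {cases} cases reported in {state} of {date}. However, the numbers are "
+     "changing every day. For regular updates, please go to CDC website ({url})."
+ )
+
+ def generate_qa_pairs(rows):
+     pairs = []
+     for row in rows:
+         state, cases = row["jurisdiction"], row["Total Cases"]
+         answer = ANSWER_TEMPLATE.format(cases=cases, state=state, date=DATE, url=CDC_URL)
+         for template in QUESTION_TEMPLATES:
+             pairs.append({"id": f"Case{state}",
+                           "question": template.format(state=state),
+                           "answer": answer})
+     return pairs
+
+ for pair in generate_qa_pairs(CASE_DATA):
+     print(pair["id"], "|", pair["question"])
+ ```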
236
+
237
+ ## B COQB-19: COVID-19 Question Bank
238
+
239
+ This dataset is derived from Jennifer, a chatbot designed and built to provide information on COVID-19 from trusted resources (available at http://bit.ly/jenniferai and on http://fb.me/JenniferCOVIDAI). The dataset is intended to enable researchers to understand common questions asked by the general public on COVID-19 and to apply recent advances in natural language processing to better answer such questions. The corpus will be updated periodically.
240
+
241
+ ### B.1 Description
242
+
243
+ The current dataset consists of a total of 3,924 questions. Each question is assigned with a Question ID. Questions of the same ID are regarded as similar questions and grouped together in the dataset.
244
+
245
+ Based on the original source of the questions, the dataset is further divided into two subsets:
246
+
247
+ - COQB- ${\mathbf{{19}}}_{\text{crowdsourced}}$ : This set consists of a total of 2,341 questions with 280 unique Question IDs. They are gathered based on input from users and volunteers of Jennifer.
248
+
249
+ - COQB- ${\mathbf{{19}}}_{\text{auto-generated}}$ : This set consists of a total of 1,583 questions with 664 unique Question IDs. They are automatically generated with pre-defined question templates that were designed based on input from users and volunteers of Jennifer.
250
+
251
+ The dataset is available at http://www.NewVoicesNASEM.org/data-downloads with the Open Covid Pledge OCL-PC v1.1 license (https://opencovidpledge.org/licenses/v1-1-ocl-pc/).
252
+
253
+ <table><tr><td>ID</td><td>Questions</td><td>Answers</td></tr><tr><td>Case⟨STATENAME⟩</td><td>How many confirmed cases in ⟨STATENAME⟩?</td><td>There were ⟨NUMCASE⟩ cases</td></tr><tr><td>Case⟨STATENAME⟩</td><td>How many cases have been confirmed in ⟨STATENAME⟩?</td><td>reported in ⟨STATENAME⟩</td></tr><tr><td>Case⟨STATENAME⟩</td><td>How many people in ⟨STATENAME⟩ have it?</td><td>of ⟨DATE⟩. However, the</td></tr><tr><td>Case⟨STATENAME⟩</td><td>Number of confirmed cases of COVID 19 in ⟨STATENAME⟩?</td><td>numbers are changing every day.</td></tr><tr><td>Case⟨STATENAME⟩</td><td>How many COVID-19 cases have been confirmed in ⟨STATENAME⟩?</td><td>For regular updates, please go to</td></tr><tr><td>Case⟨STATENAME⟩</td><td>How many people have been tested positive for COVID-19 in ⟨STATENAME⟩?</td><td>CDC website (⟨URL⟩).</td></tr><tr><td>Case⟨STATENAME⟩</td><td>How many confirmed cases are there in ⟨STATENAME⟩?</td><td/></tr></table>
254
+
255
+ Table 2: Sample QA templates
256
+
257
+ <table><tr><td>$\mathbf{{abbr}}$</td><td>fips</td><td>jurisdiction</td><td>Total Cases</td><td>Total Death</td><td>Death100k</td><td>CasesInLast7Days</td><td>RatePer100000</td></tr><tr><td>AK</td><td>2</td><td>Alaska</td><td>904</td><td>14</td><td>1.9</td><td>149</td><td>122.6</td></tr><tr><td>AL</td><td>1</td><td>Alabama</td><td>37203</td><td>931</td><td>19</td><td>7182</td><td>761.1</td></tr><tr><td>AR</td><td>5</td><td>Arkansas</td><td>20257</td><td>265</td><td>8.8</td><td>4696</td><td>672.1</td></tr></table>
258
+
259
+ Table 3: Partial COVID-19 related Case and Death data updated as of Jun 30 2020 12:15PM on CDC website.
260
+
261
+ <table><tr><td>ID</td><td>Questions</td><td>Answers</td></tr><tr><td>CaseAlaska</td><td>How many confirmed cases in Alaska?</td><td>There were 904</td></tr><tr><td>CaseAlaska</td><td>How many cases have been confirmed in Alaska?</td><td>reported in Alaska</td></tr><tr><td>CaseAlaska</td><td>How many people in Alaska have it?</td><td>of June 30, 2020. However, the</td></tr><tr><td>CaseAlaska</td><td>Number of confirmed cases of COVID 19 in Alaska?</td><td>numbers are changing every day.</td></tr><tr><td>CaseAlaska</td><td>How many COVID-19 cases have been confirmed in Alaska?</td><td>For regular updates, please go to</td></tr><tr><td>CaseAlaska</td><td>How many people have been tested positive for COVID-19 in Alaska?</td><td>CDC website (https://www.cdc.gov/...).</td></tr><tr><td>CaseAlaska</td><td>How many confirmed cases are there in Alaska?</td><td/></tr><tr><td>CaseAlabama</td><td>How many confirmed cases in Alabama?</td><td>There were 37203</td></tr><tr><td>CaseAlabama</td><td>How many cases have been confirmed in Alabama?</td><td>reported in Alabama</td></tr><tr><td>CaseAlabama</td><td>How many people in Alabama have it?</td><td>of June 30, 2020. However, the</td></tr><tr><td>CaseAlabama</td><td>Number of confirmed cases of COVID 19 in Alabama?</td><td>numbers are changing every day.</td></tr><tr><td>CaseAlabama</td><td>How many COVID-19 cases have been confirmed in Alabama?</td><td>For regular updates, please go to</td></tr><tr><td>CaseAlabama</td><td>How many people have been tested positive for COVID-19 in Alabama?</td><td>CDC website (https://www.cdc.gov/...).</td></tr><tr><td>CaseAlabama</td><td>How many confirmed cases are there in Alabama?</td><td/></tr><tr><td>CaseArkansas</td><td>How many confirmed cases in Arkansas?</td><td>There were 20257</td></tr><tr><td>CaseArkansas</td><td>How many cases have been confirmed in Arkansas?</td><td>reported in Arkansas</td></tr><tr><td>CaseArkansas</td><td>How many people in Arkansas have it?</td><td>of June 30, 2020. However, the</td></tr><tr><td>CaseArkansas</td><td>Number of confirmed cases of COVID 19 in Arkansas?</td><td>numbers are changing every day.</td></tr><tr><td>CaseArkansas</td><td>How many COVID-19 cases have been confirmed in Arkansas?</td><td>For regular updates, please go to</td></tr><tr><td>CaseArkansas</td><td>How many people have been tested positive for COVID-19 in Arkansas?</td><td>CDC website (https://www.cdc.gov/...).</td></tr><tr><td>CaseArkansas</td><td>How many confirmed cases are there in Arkansas?</td><td/></tr></table>
262
+
263
+ Table 4: Sample QA pairs generated using templates in Table 2 from data in Table 3
264
+
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/HxIZzQZy_0F/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,185 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § JENNIFER FOR COVID-19: AN NLP-POWERED CHATBOT BUILT FOR THE PEOPLE AND BY THE PEOPLE TO COMBAT MISINFORMATION
2
+
3
+ Yunyao ${\mathrm{{Li}}}^{1, * }$ Tyrone Grandison ${}^{2, * }$ Patricia Silveyra ${}^{3, * }$ Ali Douraghy ${}^{4}$
4
+
5
+ Xinyu Guan ${}^{5}\;$ Thomas Kieselbach ${}^{6}\;$ Chengkai Li ${}^{7}\;$ Haiqi Zhang ${}^{7}$
6
+
7
+ ${}^{1}$ IBM Research - Almaden ${}^{2}$ The Data-Driven Institute ${}^{3}$ University of North Carolina - Chapel Hill ${}^{4}$ The National Academies of Sciences, Engineering and Medicine ${}^{5}$ Yale University ${}^{6}$ Umeå University ${}^{7}$ University of Texas - Arlington
8
+
9
+ {yunyaoli@us.ibm.com, tgrandison@data-driven.institute, patry@email.unc.edu, adouraghy@nas.edu}
10
+
11
+ § ABSTRACT
12
+
13
+ Just as SARS-CoV-2, a new form of coronavirus continues to infect a growing number of people around the world, harmful misinformation about the outbreak also continues to spread. With the goal of combating misinformation, we designed and built Jennifer-a chatbot maintained by a global group of volunteers. With Jennifer, we hope to learn whether public information from reputable sources could be more effectively organized and shared in the wake of a crisis as well as to understand issues that the public were most immediately curious about. In this paper, we introduce Jennifer and describe the design of this proof-of-principle system. We also present lessons learned and discuss open challenges. Finally, to facilitate future research, we release COQB-19 (COVID-19 Question Bank ${)}^{1}$ , a dataset of 3,924 COVID-19-related questions in 944 groups, gathered from our users and volunteers. Jennifer is available at http://bit.ly/jenniferai. ${}^{2}$
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ This paper introduces Jennifer, a chatbot created to provide easily accessible information from reliable resources to answer questions related to the current COVID-19 pandemic. Jennifer leverages cutting-edge chatbot technology, as well as a global network of volunteers, to combat misinformation related to the pandemic. The information provided by Jennifer covers a wide variety of topics, ranging from case statistics and food safety to best practices for disease prevention and management.
18
+
19
+ The idea of Jennifer was born in early March 2020 during the semi-annual meeting of New Voices, ${}^{3}$ a project of the National Academies of Sciences, Engineering, and Medicine. ${}^{4}$ The New Voices members, a group of early-career scientists representing a diversity of research, health and policy perspectives, knew that the scientific community could rapidly mobilize its expertise to address the public health challenges the United States would soon face, and called for "rapid collaborations between scientists and the civic tech communities to educate the public" (New Voices, 2020). We envisioned using the latest techniques in artificial intelligence to create a platform of evidence-based information from reliable sources that the public would find easy to interact with.
20
+
21
+ We quickly mobilized to design and build Jennifer and to recruit a global group of volunteer scientists to help test and scale Jennifer's performance. The proof-of-principle system will demonstrate the feasibility to directly crowd-source the global scientific community's expertise for public benefit without the need for intermediaries, and help improve public trust in science.
22
+
23
+ Our core design considerations are:
24
+
25
+ * Rapid Development: Jennifer should be built within a short amount of time to win the race against fast spreading misinformation.
26
+
27
+ * Ease of Access: Jennifer should provide information to the general public in an easily accessible manner across different platforms (e.g. Web and social media).
28
+
29
+ * Ease of Maintenance: Jennifer should be maintainable by broader and more diverse group of volunteers.
30
+
31
+ * Quality Assurance: Jennifer should provide information from reputable sources and maintain a rigorous process to ensure the quality of information in a consumable and empathetic manner.
+ * Extensibility: Jennifer should be easily extensible to expand its capability with minimal effort.
32
+
33
+ *denotes equal contribution
34
+
35
+ ${}^{1}$ https://www.newvoicesnasem.org/data-downloads
36
+
37
+ ${}^{2}$ Facebook http://fb.me/JenniferCOVIDAI
38
+
39
+ ${}^{3}$ Learn more at: www.NewVoicesNASEM.org
40
+
41
+ ${}^{4}$ The opinions expressed here are those of the authors and do not represent positions of the National Academies of Sciences, Engineering, and Medicine or the authors' institutions
42
+
43
+ The first version of Jennifer was released on March 8, 2020. Since then, over 160 volunteers from 141 institutions around the globe recruited through the New Voices’ network ${}^{5}$ have helped make daily updates to the chatbot to ensure that its content reflects the latest available information from trusted sources. It is available on the Web and as a Facebook bot. It is also currently embedded in two fact checking systems. ${}^{6}$ As of June 18, 2020, Jennifer has been asked 1,480 questions (excluding questions selected via menus and answered 1,059 of them (a response rate of 71%), with an average engagement duration of three minutes and 15 seconds. We plan to conduct more formal evaluation of Jennifer in the future.
44
+
45
+ § 2 JENNIFER OVERVIEW
46
+
47
+ We chose to build Jennifer as a chatbot, because chatbots "are able to present concise information from credible sources" and "less overwhelming than social media or web search engines' long list of results" (Miner et al., 2020). The need for agility and speed necessitated that we leverage an existing chatbot platform; rather than building a chatbot from scratch. We chose to utilize Juji (Juji, 2020) that supports both tasks-oriented and social dialogues and allows easy extensions.
48
+
49
+ This platform supports do-it-yourself chatbot making, similar to Chatfuel ${}^{7}$ and Manychat, ${}^{8}$ but with more advanced NLP capabilities for dialog management similar to Google Dialogflow ${}^{9}$ and IBM Watson Assistant (Janarthanam, 2017; Xiao et al.,2020). ${}^{10}$ By leveraging Juji, we were able to build and deploy the first version of Jennifer in less than a day. The resulting chatbot is readily deployable on the Web and as a Facebook bot.
50
+
51
+ § 2.1 OVERALL ARCHITECTURE
52
+
53
+ Figure 1 depicts the overall architecture of Jennifer. As can be seen, Jennifer depends on the Juji base system for dialog management (Zhou et al., 2019). Given a user question, Juji uses a pre-trained machine learning model to identify one or more relevant question with known answers and depending on its confidence level returns an answer or a follow-up question (more in Sec. 2.2. The main capabilities of Jennifer come from the Question-Answer(QA) pairs generated by the extensions specifically implemented for Jennifer with two modes of ingestion:
54
+
55
+ * Crowdsourced: This mode relies on a repository of Frequently Asked Questions gathered from reliable sources such as the Centers for Disease Control and Prevention (CDC), the World Health Organization (WHO), the University of Washington Bothell, and the Federation of American Scientists. ${}^{11}$ The questions are provided by the users and volunteers of Jennifer, many based on the FAQs. The answers are manually curated by the volunteers of Jennifer via a rigorous process detailed shortly.
56
+
57
+ * Automated: Often, users of Jennifer ask questions on specific statistics such as number of confirmed cases in a country, or the death rate of a state or a city. The number of questions of this nature was significant, and answers to such questions are changing constantly. Therefore, it is labor intensive to manually curate answers or create alternative questions for such questions. Instead, we have built a QA Generator to automatically create such QA pairs, based on structured data pulled from reliable sources such as the CDC on a daily basis and question templates derived from the crowdsourced questions.
58
+
59
+ Most QA pairs come from the crowdsourced mode with significant efforts by our volunteers. Our volunteer base is a selected group of medical experts, scientists, engineers, technologists, and specialists. To ensure efficient delineation of tasks and to preserve scientific integrity, we divide this base into four volunteer groups: Curators, Helpers, Testers, and Admins, as follows.
60
+
61
+ Curators take new, unanswered questions, research current answers from reputable and trustworthy sources, and then craft answers with supporting evidence for inclusion. Curators also update answers that have become obsolete. Given the novelty of COVID-19, this is a critical task. Helpers take a set of existing questions and generate many possible question formulations, i.e., alternative questions. This step helps Jennifer to better answer unseen questions as Juji fine-tunes its underlying QA engine using additional data. Testers evaluate answers for freshness, accuracy, readability and monitor other possible quality issues with Jennifer, e.g., format issue. Input from volunteers is further validated by Admins before it is deployed into Jennifer. Specifically, Admins validate all answers, first for scientific validity, and then for language fluency and naturalness in response. Admins also validates alternative questions for relevancy and language fluency. Dedicated Slack channels ${}^{12}$ for each volunteer groups were used to facilitate discussions and collaborations.
62
+
63
+ ${}^{5}$ Learn more about the New Voices Network Tool at https://www.newvoicesnasem.org/the-network
64
+
65
+ ${}^{6}$ https://coronacheck.eurecom.fr/en and https://idir.uta.edu/covid-19/
66
+
67
+ ${}^{7}$ https://chatfuel.com
68
+
69
+ ${}^{8}$ https://manychat.com
70
+
71
+ ${}^{9}$ https://dialogflow.com/
72
+
73
+ ${}^{10}$ https://www.ibm.com/cloud/ watson-assistant/
74
+
75
+ ${}^{11}$ www.cdc.gov, www.who.int, www.uwb.edu, and fas.org
76
+
77
+ < g r a p h i c s >
78
+
79
+ Figure 1: Architecture Overview of Jennifer
80
+
81
+ < g r a p h i c s >
82
+
83
+ Figure 3: Jennifer informs users about additional topics it knows
84
+
85
+ § 2.2 CHAT DESIGN
86
+
87
+ We designed the dialog flow of Jennifer based on two principles: 1. Fostering mixed-initiative interaction (Walker and Whittaker, 1990); 2. Supporting Two-way adaptation: learning from users and also encouraging users to learn what Jennifer can do (Pan et al., 2005)
88
+
89
+ < g r a p h i c s >
90
+
91
+ Figure 4: Jennifer recommends relevant questions.
92
+
93
+ When Jennifer was first launched in early March, most people knew little about COVID-19 or its impact. Thus Jennifer started with a "menu" to inform users about its existing knowledge on most important topics (Fig. 2). After answering a question, Jennifer also volunteers information on additional topics that it knows (Fig. 3). This design aims to address two challenges: 1) the user may not know how to get started or lack knowledge to ask additional questions; 2) Jennifer (or any AI system) will never be perfect; there will always be questions that it cannot answer. By informing users about what it knows, users are more likely to ask questions that Jennifer can answer. If Jennifer is unsure about how to answer a question, it will recommend similar questions to give users a chance to obtain desired answers as well as learn more about Jennifer's capabilities. Fig. 4 shows how it expresses its uncertainty regarding the user's question but proceeds to recommend a list of relevant inquiries. Jennifer will improve its response to similar questions based on user interactions.
94
+
95
+ Jennifer aims at fostering mixed-initiative interactions. On the one hand, it proactively solicits questions from users; on the other hand, it allows users to initiate their questions any time during the chat flow. Such mixed-initiative interactions keep users engaged while allowing users to obtain information at their own pace.
96
+
97
+ ${}^{12}$ https://slack.com
98
+
99
+ § 2.3 QA PAIRS
100
+
101
+ As illustrated in Table 1, QA pairs in Jennifer are grouped by their ids ${}^{13}$ ; questions with the same id are regarded as similar and associated with one or more (semantically equivalent) answers. We release COQB-19 with questions gathered from our users and volunteers.
102
+
103
+ To be included in the chatbot, each answer needs to satisfy the following criteria:
104
+
105
+ Easy to understand: The information is presented in language understandable by the general public. Accuracy and Openness: The answers must be backed up by data from reliable sources, include references or links to such sources, and be verified by at least one trusted volunteer medical expert. Furthermore, scientific understanding of COVID- 19 is quickly evolving; it is important to be explicit about potential uncertainty in the answers.
106
+
107
+ Demonstration of Empathy: The language provided in the answers should emulate natural empathetic conversation, and must acknowledge factors such as stress or anxiety experienced by the users to help foster trust.
108
+
109
+ § 2.4 MULTILINGUAL SUPPORT
110
+
111
+ We have received numerous requests from users around the world to offer Jennifer-like capabilities in other languages. On the surface, this task appears to be straightforward. One can translate QAs from Jennifer into another language using machine translation (ML). However, language expansion needs to overcome several major obstacles:
112
+
113
+ * Language Fluency: Results produced by commercially available ML services still require significant manual refinement, particularly for domain-specific text (e.g., many answers provided by Jennifer).
114
+
115
+ * Domain Customization: Specialized domains such as epidemiology and public health often have their own specific terminologies in non-English languages.
116
+
117
+ * Relevancy: Answers to questions should be verified against reliable sources in the language of question. Additionally, cultural aspects and differences among dialects must be considered when crafting answers in different languages.
118
+
119
+ * Models: The Dialog Manager relies on pre-trained machine learning models, which usually perform worse for non-English languages.
120
+
121
+ We therefore chose to expand one language at a time. The first language we selected is Spanish, spoken by ${13.5}\%$ of the US population. ${}^{14}$ Specifically, we designed and built so fia, ${}^{15}$ also using the Juji platform. The QA collection underlying Sofia consists of QA pairs manually translated from the Jennifer QA pairs. It is maintained and manually curated by a group of bilingual Spanish-English certified medical interpreters, using verified information from the Spanish language websites of the CDC and WHO. Plans to expand Jennifer to other languages are currently under development.
122
+
123
+ § 3 DISCUSSION
124
+
125
+ Jennifer has successfully demonstrated that, with the right combination of technology and human experts, information from reputable sources can be more quickly and effectively organized and shared at scale. In this section, we share lessons learned and open challenges in the hope to shed some light on promising future research directions.
126
+
127
+ § 3.1 LESSONS LEARNED
128
+
129
+ People are eager to help. Many scientists and health professionals are eager to step up and help to better respond to the COVID-19 crisis. Many expressed gratefulness for the opportunity to contribute to Jennifer as a volunteer.
130
+
131
+ Process and Communication is Important. Given the evolving tasks and the large number of volunteers with diverse background, putting the right process around tasks, workflow, and sequencing (Norheim-Hagtun and Meier, 2010) is key to ensuring efficient use of the volunteers' time to the advantage of the project. It is also important to hold regular dialog with the volunteers to both provide and obtain feedback as well as keep them posted about the progress of the project.
132
+
133
+ Effective and Dedicated Management is Critical. Even with delineation and process optimization, the job of managing volunteers and the intake process requires constant focus and dedication by a few individuals to ensure successful execution. As such, we need to support Jennifer with more dedicated resources along with its large number of volunteers to ensure its long-term success.
134
+
135
+ Human-Machine Conversation requires Proactive Design. Despite the careful chat design described in Sec. 2.2, improvements on our current design are still desired to avoid the perils of over-promising and to encourage users to frame their questions with more specific keywords and simpler sentence structures. We are currently exploring different design options.
136
+
137
+ ${}^{14}$ https://www.census.gov/data
138
+
139
+ ${}^{15}$ Available at https://bit.ly/SofiaAI and on Facebook https://fb.me/SofiaCOVIDAI
140
+
141
+ ${}^{13}$ Manually assigned by Testers and validated by Admins
142
+
143
+ max width=
144
+
145
+ ID Question Answer
146
+
147
+ 1-3
148
+ ChildrenRisk Are kids at risk? Based on the current data, nobody seems to be immune from COVID-19,
149
+
150
+ 1-3
151
+ ChildrenRisk Can children be infected? including children. It is true that the number of cases in children is so far
152
+
153
+ 1-3
154
+ ChildrenRisk Are children at risk of getting COVID-19? lower than the number of cases in adults. We really don't know why this is.
155
+
156
+ 1-3
157
+ ChildrenRisk Tell me how COVID-19 affects children? The CDC provides answers to commonly asked questions about COVID
158
+
159
+ 1-3
160
+ ChildrenRisk Tell me if kids get infected? -19 in here. For those interested in recent research on the subject, a study
161
+
162
+ 1-3
163
+ ChildrenRisk Tell me if children get infected? describing infections in kids in China is available here.
164
+
165
+ 1-3
166
+
167
+ Table 1: Example QA Pairs
168
+
169
+ § 3.2 OPEN CHALLENGES
170
+
171
+ Coordinating the distribution of information at the national level is critical to prepare for the next pandemic (Alexander, 2020). Jennifer-like chatbots may be a fundamental component of future misinformation resolution strategy. Our experience with Jennifer confirms that it is possible to collaboratively build such chatbots quickly and effectively and to scale these initiatives with the help from many volunteers. However, building these chatbots also comes with its own set of open challenges.
172
+
173
+ Scalable Crowdsourced Fact Checking Platform. Much of the recent research has focused on automating the task of fact checking (e.g., Adair et al. (2017); Pathak and Srihari (2019)). However, in a novel crisis like COVID-19, facts are quickly changing. It is crucial to engage human experts in the loop to ensure the timeliness and accuracy of the answers provided by systems like Jennifer. Much of the development and ongoing maintenance of Jennifer relies on a rigorous, manual process for quality assurance. Though receiving input from a large number of distributed volunteers is desirable, it remains an open challenge to design, construct, and maintain a fact-checking platform that supports a rigorous process to both engage a large number of experts with diverse expertise levels and leverage automation in minimizing human efforts (Hughes and Tapia, 2015).
174
+
175
+ Zero-Shot Empathetic Natural Language Generation (NLG). To ensure accuracy, comprehensibility, and appropriate level of empathy, answers provided by Jennifer are either manually curated or auto-generated with manually curated templates (Sec. 2). While it is possible to scrape FAQs automatically from reliable resources, how to use the scraped text to generate empathetic answers with little or no training data remains an open problem (Liu et al., 2020), potentially solvable via approaches similar to politeness transfer (Madaan et al., 2020). Identifying multiple resources relevant to a question and composing answers based on them in a coherent and emphathetic manner is an even more challenging problem.
176
+
177
+ Competing Information Sources and Public Trust. Tensions among centralized knowledge networks, such as public health organizations and medical academia, and decentralized information sources on social media platforms and independent news sites introduce new challenges for combating misinformation during global crises. Evidence-based, peer-reviewed information has to compete for public attention and public trust (Cary Funk, 2020). Information literacy becomes ever more important. Solving this challenge requires more than technological innovation (Goldstein, 2020).
178
+
179
+ § 4 CONCLUSION
180
+
181
+ This paper introduces Jennifer, a chatbot created to provide easily accessible information to answer questions related to the current COVID-19 pandemic. Jennifer leverages cutting-edge chatbot technology, as well as a diverse network of volunteers from around the globe, to combat misinformation related to the pandemic. The information provided by Jennifer covers a wide variety of topics, ranging from updated case statistics to food safety and best practices to prevent the virus spread.
182
+
183
+ § ACKNOWLEDGMENTS
184
+
185
+ The authors would like to acknowledge the National Academies of Sciences, Engineering, and Medicine and the Gordon and Betty Moore Foundation for their generous support of the New Voices project, as well as the guidance and support of the Juji.io team. We would also like to thank our hundreds of volunteers whose efforts have made Jennifer possible.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/JENSKEEzsoU/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,219 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # COVID-QA: A Question Answering Dataset for COVID-19
2
+
3
+ Timo Möller*
4
+
5
+ deepset GmbH
6
+
7
+ Berlin, Germany
8
+
9
+ G Anthony Reina*
10
+
11
+ Intel Corporation
12
+
13
+ Santa Clara, CA, USA
14
+
15
+ Raghavan Jayakumar
16
+
17
+ Lawrence Livermore
18
+
19
+ National Laboratory (retired)
20
+
21
+ Livermore, CA, USA
22
+
23
+ Malte Pietsch
24
+
25
+ deepset GmbH
26
+
27
+ Berlin, Germany
28
+
29
+ ## Abstract
30
+
31
+ We present COVID-QA, a Question Answering dataset that is based on the COVID-19 Open Research Dataset. Our dataset consists of 2,019 question/answer pairs annotated by volunteer biomedical experts and saved in the popular SQuAD format. This is, to date, the largest manually created Question Answering dataset on COVID-19-related material. We began with a RoBERTa base model that was initially fine-tuned on SQuAD and continued training the model on our COVID-QA dataset. We found that the additional training on this domain-specific data leads to significant gains in performance. Both the trained model and the annotated COVID-QA dataset have been open-sourced and made available at: https://github.com/deepset-ai/COVID-QA. Our hope is that researchers use COVID-QA to improve or evaluate search systems for COVID-related scientific texts.
32
+
33
+ ## 1 Introduction
34
+
35
+ In 2019 a novel coronavirus (SARS-CoV-2) caused a worldwide pandemic (COVID-19) that has spread to over 8 million people and led to over 400,000 deaths to date (Pascarella et al., 2020; Chen and Li, 2020). On March 16, 2020 the White House Office of Science and Technology Policy released the COVID-19 Open Research Dataset called CORD-19 (The White House Office of Science and Technology Policy, 2020 (accessed May 9, 2020). CORD-19 is an open-sourced corpus of over 69,000 scholarly articles that focus on COVID-19, SARS-CoV-2, the Coronavirus group and other viral outbreaks. The CORD-19 project is a collaboration of the Allen Institute for AI, Chan Zuckerberg Initiative (CZI), Georgetown University's Center for Security and Emerging Technology (CSET), Microsoft, and the National Library of Medicine (NLM) at the National Institutes of Health.
36
+
37
+ COVID-QA is an open-source, volunteer-led project to develop a Question Answering (QA) dataset based on text from CORD-19 articles. 2,019 extractive question/answer pairs have been created by medical experts in a fashion similar to the prominent SQuAD dataset. The annotations have been used for assessing the performance of QA systems trained on other general-domain QA datasets. Furthermore, these general QA systems could be improved by training and evaluating on COVID-QA in a k-fold cross-validation manner. A special form of k-fold cross validation for QA models has been implemented as part of this work to optimally leverage the dataset. The COVID-QA annotations are released under the Creative Commons Attribution 4.0 International License (CC-BY 4.0). The goal is to provide a QA dataset for researchers to benchmark or improve the functionality of their COVID-specific search systems or QA models.
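+
+ For readers unfamiliar with the SQuAD layout mentioned above, the snippet below shows the general shape of such a record as a Python literal. The values are invented for illustration, and field details (e.g., the document identifier) are assumptions that may differ from the released COVID-QA file.
+
+ ```python
+ # Illustrative shape of a SQuAD-style record; values are invented and field
+ # details are assumptions, not a verbatim excerpt of the COVID-QA file.
+ example_record = {
+     "data": [
+         {
+             "paragraphs": [
+                 {
+                     "document_id": 123,  # assumed link back to a CORD-19 article
+                     "context": "SARS-CoV-2 is transmitted primarily via respiratory droplets ...",
+                     "qas": [
+                         {
+                             "id": 1,
+                             "question": "How is SARS-CoV-2 primarily transmitted?",
+                             "is_impossible": False,
+                             "answers": [
+                                 {"text": "respiratory droplets", "answer_start": 40}
+                             ],
+                         }
+                     ],
+                 }
+             ]
+         }
+     ]
+ }
+ ```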
38
+
39
+ ## 2 Related Work
40
+
41
+ ### 2.1 Datasets
42
+
43
+ There are many large, extractive QA datasets, including prominent ones such as SQuAD (Rajpurkar et al., 2016) and Natural Questions (Kwiatkowski et al., 2019). Typically, these are crowd-sourced, general-purpose datasets. Each contains more than a hundred thousand question/answer pairs. This makes them suitable for training deep learning models from scratch.
44
+
45
+ In contrast, there are also many smaller, domain-specific QA datasets created by expert annotators. Although their small sizes limit their usefulness in the de novo training of QA models, their domain-specific annotations make them good candidates for evaluating existing QA systems. For example, the MRQA Shared Task (Fisch et al., 2019), which focuses on the generalizability of QA systems, contains domain-specific datasets such as BioASQ (Tsatsaronis et al., 2015) (~3,000 QA pairs) and BioProcess (Berant et al., 2014) (219 QA pairs).
46
+
47
+ ---
48
+
49
+ * These authors contributed equally.
50
+
51
+ ---
52
+
53
+ ![01963db5-bdd7-79d5-a472-2c6db169362a_1_192_166_640_380_0.jpg](images/01963db5-bdd7-79d5-a472-2c6db169362a_1_192_166_640_380_0.jpg)
54
+
55
+ Figure 1: The COVID-19 QA Annotator is a lightweight, web-based interface which allows multiple experts to quickly apply SQuAD-style labels to research articles. The annotator uses the mouse to highlight the answer (pink) and creates a question based on that answer (box on left). The only system requirement is a web browser and an internet connection. Annotations can be exported either in Microsoft Excel or SQuAD (JSON) format.
56
+
57
+ Since the COVID-19 pandemic began there have also been efforts in assembling relevant QA datasets, such as CovidQA (Tang et al., 2020) and the CORD-19 Information Aggregator; or dataset collections, such as Cord-19 PubAnnotation and Social Media for Public Health. Additionally, there is an ongoing IR shared task dataset TREC-COVID (Voorhees et al., 2020) that is, at the time of writing, under construction.
58
+
59
+ ### 2.2 Tools
60
+
61
+ New tools are emerging to make search (or Question Answering) over COVID-19-related literature possible. Recently, Google released the COVID-19 Research Explorer, a semantic search interface on top of the CORD-19 dataset. Neural Covidex (Zhang et al., 2020) is a similar initiative.
62
+
63
+ We view our work as a complementary effort to aid in the evaluation and/or improvement to these existing tools.
64
+
65
+ ## 3 COVID-QA Dataset Creation
66
+
67
+ The CORD-19 dataset was compiled so that researchers could apply "recent advances in natural language processing (NLP) and other AI techniques to generate new insights in support of the ongoing fight against this infectious disease" (The White House Office of Science and Technology Policy, 2020 (accessed May 9, 2020)). NLP and AI techniques are especially important since the large volume of COVID-19-related research articles will continue to grow daily as more resources are rallied to combat the pandemic.
68
+
69
+ Although the CORD-19 dataset itself provides a valuable start for researchers, it remains a completely unstructured resource. The dataset is simply the text from published scientific articles that were compiled from a keyword search of the Chan-Zuckerberg Initiative (CZI), PubMed Central (PMC), BioRxiv, and medRxiv online databases.
70
+
71
+ We decided to build an automated tool that could answer any information-seeking question about COVID-19 using scientific literature. In order to achieve this, we needed to provide a sufficient number of annotated questions/answers for the CORD-19 dataset. Our inspiration for this project came, in part, from previous works, such as the BioASQ datasets and the BioBERT and SciBERT biomedical NLP models (Tsatsaronis et al., 2015; Lee et al., 2020; Beltagy et al., 2019). These projects demonstrated that fine-tuning an existing baseline NLP model on a highly domain-specific text corpus could significantly improve the accuracy of the predictions.
72
+
73
+ ### 3.1 Article and Annotator Selection
74
+
75
+ Within 10 days of the CORD-19 dataset release, we began annotation. First, we selected the subset of the CORD-19 dataset that had full text and a commercially-friendly copyright license. On manual inspection of the subset, we determined that the majority of the scientific articles were not highly specific to COVID or coronavirus, but instead related more generally to viral outbreaks and genomics. This lack of COVID-19 specificity in the articles is understandable given the newness of the pandemic. Our domain experts agreed that scientific research on related viral pandemics, including SARS, MERS, and HIV-1, was relevant to COVID-19 researchers. Nevertheless, they also agreed to balance the dataset as much as possible between COVID-19 and generic viral pandemic-related articles. We, therefore, extracted 245 articles by searching the commercially-friendly subset for the phrases "Covid", "Wuhan" and "Coronavirus". We extracted an additional 200 random articles from the commercially-friendly subset. Finally, our annotators uploaded 5 additional scientific articles based on their knowledge of the domain. Of these 450 scientific articles, 147 were annotated over the 8-week period prior to this report. Although the expert annotators were volunteers, it was required that all have at least a Master's degree in biomedical sciences to ensure a basic understanding of the scientific articles. The annotation team was led by a medical doctor (G.A.R.) who recruited 15 volunteers and vetted their credentials. Annotators were told that the COVID-QA dataset was not simply a database of COVID-19 facts and figures, but instead, datapoints to train a model that could answer questions given any scientific article about COVID-19 or similar viral outbreaks. Annotators focused their questions mainly on viral/genomic structure, antigenic targets, epidemiological characteristics, symptoms, and pharmaceutical treatment. We presume most of the questions are relevant for any scientific report of a viral outbreak.
76
+
77
+ ![01963db5-bdd7-79d5-a472-2c6db169362a_2_196_169_638_450_0.jpg](images/01963db5-bdd7-79d5-a472-2c6db169362a_2_196_169_638_450_0.jpg)
78
+
79
+ Figure 2: Sample questions and answers from the annotated CORD-19 dataset. The COVID-QA annotation tool allows for this data to be reviewed/corrected by other annotators. The experts that volunteered as annotators were required to have at least a Master's degree in one of the biomedical sciences.
80
+
81
+ ### 3.2 QA Annotation Process
82
+
83
+ We used an existing, web-based annotation tool that had been created by deepset for their Neural Search Framework Haystack. Figure 1 shows an image of the COVID-QA Annotation tool. The tool was used by annotators to mark answer text and formulate corresponding questions while sifting through the article. In addition to being a lightweight, scalable interface, the tool was easy enough to use that even non-technical experts were quickly comfortable with it. We created a series of online videos and released annotation guidelines to demonstrate the usage of the tool and the concepts behind the QA annotations. The annotation tool allowed the model developers to export the labels in SQuAD format for convenient fine-tuning of QA models (a minimal sketch of this format is shown below). The annotators generated 2,019 question/answer pairs based on the annotations of 147 articles from the dataset. All 2,019 annotations were verified, corrected or removed by the annotation team leader. Figure 2 shows several examples of these question/answer pairs.
84
+
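+ The exported SQuAD format stores, per article, the full text as `context` together with a list of question/answer pairs whose answers are character spans inside that context. The following is a minimal, hypothetical sketch of such a record written in Python; the article title, question, answer, and context are invented for illustration and are not taken from COVID-QA:
+
+ ```python
+ import json
+
+ context = "COVID-19 is the disease caused by the SARS-CoV-2 coronavirus."
+ answer_text = "SARS-CoV-2"
+
+ record = {
+     "version": "v2.0",
+     "data": [{
+         "title": "example_cord19_article",   # hypothetical article title
+         "paragraphs": [{
+             "context": context,              # in COVID-QA this is the full article text
+             "qas": [{
+                 "id": "example-0",
+                 "question": "Which virus causes COVID-19?",
+                 "is_impossible": False,
+                 "answers": [{
+                     "text": answer_text,
+                     # character offset of the answer span inside the context
+                     "answer_start": context.find(answer_text),
+                 }],
+             }],
+         }],
+     }],
+ }
+
+ print(json.dumps(record, indent=2))
+ ```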
85
+ The annotators reported the annotation process to be relatively easy and enjoyed the experience because it provided them an opportunity to contribute to the COVID-19 response while reading interesting and informative research papers. While the annotators needed to be familiar with the biomedical field and recent broad progress, we found that intimate knowledge of the subject of the article was not essential. Instead, it is adequate for the annotator to understand the material well enough to figure out the broad relationship between the subjects and objects that are being linked and the purported logic in it. Where an annotator was not certain of the relationship or importance of a statement, they simply omitted an annotation. A Slack workspace was created to allow annotators to share best practices, post issues with existing labels, and recommend enhancements to the annotation process.
86
+
87
+ ### 3.3 Dataset Analysis
88
+
89
+ Although most of the annotations were found to be straightforward, producing short, simple question/answer pairs, we found a large number of articles where it was difficult to properly phrase the question or frame the answer. Specifically, the annotators found that some question/answer pairs relied on context that spanned the whole article. Figure 3 shows that although the majority of question/answer pairs contained short sentences, there was also a very long tail of question/answer pairs where the answers could span more than 30 words. The annotators reported this issue to the model developers in order to prepare the following experiments that would address these long question/answer pairs during training and inference.
90
+
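+ The length statistics summarized in Figure 3 can be reproduced from the released SQuAD-format file with a few lines of Python. The following is a minimal sketch that assumes the annotation file has been downloaded locally as `COVID-QA.json` and uses simple whitespace word counts:
+
+ ```python
+ import json
+
+ # Assumes the released annotation file (see Section 7) has been downloaded locally.
+ with open("COVID-QA.json") as f:
+     data = json.load(f)["data"]
+
+ q_lens, a_lens = [], []
+ for article in data:
+     for paragraph in article["paragraphs"]:
+         for qa in paragraph["qas"]:
+             q_lens.append(len(qa["question"].split()))
+             for answer in qa["answers"]:
+                 a_lens.append(len(answer["text"].split()))
+
+ # Whitespace word counts; the paper reports a mean answer length of 13.9 words.
+ print("mean question length:", sum(q_lens) / len(q_lens))
+ print("mean answer length:  ", sum(a_lens) / len(a_lens))
+ ```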
91
+ The annotators also discussed new labelling schemes, such as single-question/multiple-answer annotations and semantic variations of the same question for a single answer. Although these were considered to be valid research arms for future work, we chose to keep the annotation instructions as simplified as possible for the current work.
92
+
93
+ ![01963db5-bdd7-79d5-a472-2c6db169362a_3_231_176_537_240_0.jpg](images/01963db5-bdd7-79d5-a472-2c6db169362a_3_231_176_537_240_0.jpg)
94
+
95
+ Figure 3: The COVID-19 QA dataset statistics. Although the majority of the 2,019 questions or answers contain fewer than 10 words, the answer distribution tends to have a much longer tail.
96
+
97
+ ## 4 Experiments
98
+
99
+ ### 4.1 Models
100
+
101
+ We used pretrained, transformer-based Language Models that had been fine-tuned to QA datasets. It has been shown that models trained on the SQuAD dataset have the best generalization performance to out-of-domain data compared to other large QA datasets (Longpre et al., 2019).
102
+
103
+ We chose a RoBERTa-base architecture (Liu et al., 2019) for its good accuracy vs model size trade-off and fine-tuned it on the SQuAD dataset. The resulting model and its evaluation can be accessed at Hugging Face's model hub under "deepset/roberta-base-squad2". We refer to this model as the baseline model.
104
+
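+ As an illustration, the baseline model can be loaded and queried with the Hugging Face transformers question-answering pipeline. The question and context below are invented examples rather than items from COVID-QA, and the exact answer and score will depend on the library version:
+
+ ```python
+ from transformers import pipeline
+
+ # Baseline model referenced above: RoBERTa-base fine-tuned on SQuAD 2.0.
+ qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
+
+ result = qa(
+     question="How long is the incubation period?",  # illustrative question
+     context="The median incubation period was estimated to be 5.1 days.",
+ )
+ print(result["answer"], round(result["score"], 3))
+ ```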
105
+ We further fine-tuned the baseline model using the COVID-QA annotations introduced in this paper. We used the FARM framework to fine-tune the model in a k-fold cross-validation manner. The training parameters are given in Table 1. We disabled the option to return "no answer" during testing since in COVID-QA each question has a dedicated text answer. We will refer to this model as the COVID-QA model.
106
+
107
+ ### 4.2 Metric
108
+
109
+ Most open-domain Question Answering systems, such as DrQA (Chen et al., 2017) or REALM (Guu et al., 2020), use the Exact Match (EM) between label and prediction text (not the position) as the evaluation criterion, where the answer is often an unambiguous, single entity (Q: "How many feet are in one mile?" A: "5,280"). A less sensitive metric than EM is needed in cases where there are long answers, long articles, and/or only a single answer annotation per question.
110
+
111
+ We chose to take multiple model predictions and any positional overlap between prediction and label as the metric and refer to it as top-n accuracy. Top-n accuracy compares the gold label against $n$ model predictions and looks for any overlap between prediction and answer positions to create a binary hit-or-miss scenario from which accuracy can be computed (see the sketch below). We also report the EM and F1 scores of our models to simplify comparison to existing methods. In the following paragraphs, we outline the reasons for choosing top-n accuracy on COVID-QA data.
112
+
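+ A minimal sketch of top-n accuracy as described above, assuming that the gold answer and the ranked model predictions are given as character (start, end) offsets; the function names and data layout are illustrative, not the paper's actual implementation:
+
+ ```python
+ def spans_overlap(pred, gold):
+     """True if two (start, end) character spans share at least one position."""
+     return pred[0] < gold[1] and gold[0] < pred[1]
+
+ def top_n_accuracy(predictions_per_question, gold_spans, n=3):
+     """predictions_per_question: list (one per question) of ranked (start, end) spans;
+     gold_spans: one gold (start, end) span per question."""
+     hits = sum(
+         any(spans_overlap(pred, gold) for pred in preds[:n])
+         for preds, gold in zip(predictions_per_question, gold_spans)
+     )
+     return hits / len(gold_spans)
+
+ # The second-ranked prediction overlaps the gold span, so this question counts as a hit.
+ print(top_n_accuracy([[(0, 5), (40, 60)]], [(50, 70)], n=3))  # 1.0
+ ```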
113
+ As an example of variation in long answers, consider a question like "Does C-C chemokine receptor type 5 (CCR5) affect the transmission of HIV- 1?" given the context "Genetic variants in CCR5 have been shown to influence vertical transmission of HIV-1. CCR5 promoter variants resulting in higher expression of the receptor were associated with increased risk of MTCT of HIV-1 [...].". A valid answer could be either the first, the second or both sentences altogether.
114
+
115
+ In long scientific articles, the abstract, results, and conclusion may contain some of the same information but with slightly different wording. This is, in fact, part of the design that goes into writing a good scientific article. A metric such as EM might not be the best measure of model performance in these cases. When only one annotation is present, it cannot, by nature, capture all possible variations in the answers and their positions in the article. The chances of multiple correct answer spans are increased when working with long documents and, therefore, a robust metric should be able to account for these variations.
116
+
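+ For reference, the reported EM and F1 scores are string-based: EM checks for an exact match of the answer strings, while F1 measures token overlap between prediction and gold answer. A minimal sketch follows; it omits the official SQuAD normalization of punctuation and articles, which is an assumption for brevity:
+
+ ```python
+ from collections import Counter
+
+ def exact_match(prediction, gold):
+     return int(prediction.strip().lower() == gold.strip().lower())
+
+ def f1_score(prediction, gold):
+     pred_tokens = prediction.lower().split()
+     gold_tokens = gold.lower().split()
+     overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
+     if overlap == 0:
+         return 0.0
+     precision = overlap / len(pred_tokens)
+     recall = overlap / len(gold_tokens)
+     return 2 * precision * recall / (precision + recall)
+
+ print(exact_match("5.1 days", "5.1 days"))               # 1
+ print(round(f1_score("about 5.1 days", "5.1 days"), 2))  # 0.8
+ ```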
117
+ ### 4.3 Evaluation
118
+
119
+ Evaluating Language Models trained on downstream tasks is difficult because the performance varies across runs (Dodge et al., 2020) - especially for difficult tasks and/or small dataset sizes. We therefore implemented k-fold cross validation for QA to get a better estimate of how well the performance will generalize.
120
+
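+ A minimal sketch of such a document-level k-fold split is shown below. It uses scikit-learn's KFold purely for illustration, whereas the paper's actual implementation is the FARM cross-validation example linked in Section 7; splitting on whole articles keeps question/answer pairs from the same document from appearing in both train and test folds:
+
+ ```python
+ from sklearn.model_selection import KFold
+
+ # Placeholders for the 147 annotated articles.
+ documents = [f"article_{i}" for i in range(147)]
+
+ kfold = KFold(n_splits=5, shuffle=True, random_state=42)
+ for fold, (train_idx, test_idx) in enumerate(kfold.split(documents)):
+     train_docs = [documents[i] for i in train_idx]
+     test_docs = [documents[i] for i in test_idx]
+     # Fine-tune on train_docs, evaluate on test_docs, then average metrics over the folds.
+     print(f"fold {fold}: {len(train_docs)} train / {len(test_docs)} test articles")
+ ```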
121
+ Another complication for training QA models on long documents is the restricted sequence length of Transformer-based Language Models. Long documents are split into multiple (overlapping) chunks, with mostly one chunk containing the answer, while the other chunks are negative examples where no answer can be found. We observed that having too many negative examples both substantially increases training time and prevents the model from converging. We therefore included a down-sampling option to keep only $d$ negative examples during training (a minimal sketch is shown below). For evaluation, the whole article was processed and no down-sampling was applied.
122
+
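+ The following is a minimal sketch of this sliding-window chunking and negative down-sampling. It operates on a word-level approximation of tokens with the max-seq-len and doc-stride values from Table 1, whereas the real implementation works on subword tokens:
+
+ ```python
+ import random
+
+ def make_chunks(tokens, answer_positions, max_len=384, stride=192, keep_negatives=1):
+     """Split a long token sequence into overlapping windows and keep only
+     `keep_negatives` windows that contain no answer (the final partial window
+     is ignored in this sketch)."""
+     positives, negatives = [], []
+     for start in range(0, max(len(tokens) - max_len, 0) + 1, stride):
+         window = (start, start + max_len)
+         has_answer = any(window[0] <= pos < window[1] for pos in answer_positions)
+         (positives if has_answer else negatives).append(window)
+     random.shuffle(negatives)
+     return positives + negatives[:keep_negatives]
+
+ # Toy example: a 1,000-token "document" with the answer around token 700.
+ print(make_chunks(list(range(1000)), answer_positions=[700]))
+ ```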
123
+ ![01963db5-bdd7-79d5-a472-2c6db169362a_4_194_174_632_331_0.jpg](images/01963db5-bdd7-79d5-a472-2c6db169362a_4_194_174_632_331_0.jpg)
124
+
125
+ Figure 4: Per fold cross validation scores for the COVID-QA model (RoBERTa-base fine-tuned on SQuAD, continued with cross validation on COVID-QA annotations). The bars represent top-3 accuracy, Exact Match and F1 scores for each of the 5 folds. The horizontal lines depict average values for each metric.
126
+
127
+ <table><tr><td>Parameter</td><td>Value</td></tr><tr><td>baseline model</td><td>deepset/roberta-base-squad2</td></tr><tr><td>batch size</td><td>80</td></tr><tr><td>epochs</td><td>2</td></tr><tr><td>learning rate</td><td>3e-5</td></tr><tr><td>max seq len</td><td>384</td></tr><tr><td>doc stride</td><td>192</td></tr><tr><td>no answer</td><td>-100 (disabled)</td></tr><tr><td>cross val folds</td><td>5</td></tr><tr><td>hardware</td><td>4x V100</td></tr></table>
128
+
129
+ Table 1: Parameters used for training and evaluating the COVID-QA model. Note that disabling the option of "no answer" only applies to the test setting.
130
+
131
+ For evaluation, we used k-fold cross validation with one negative example per question/answer pair during training and the full article for testing.
132
+
133
+ ### 4.4 Results
134
+
135
+ Table 2 shows that fine-tuning the baseline model on the domain data with the parameters specified in Table 1 results in an absolute improvement in top-3 accuracy of about +5% (85.59 versus 80.68) and in the F1 score of slightly over +10% (59.53 versus 49.43). The scores reported for the COVID-QA model are averaged across all five folds, with Figure 4 showing the scores for each fold.
136
+
137
+ <table><tr><td>Model</td><td>EM</td><td>F1</td><td>Top-3 Acc.</td></tr><tr><td>Baseline</td><td>21.84</td><td>49.43</td><td>80.68</td></tr><tr><td>COVID-QA model</td><td>25.90</td><td>59.53</td><td>85.59</td></tr></table>
138
+
139
+ Table 2: Comparison of baseline model to the model cross validated on COVID-QA labels. The scores for the COVID-QA model are averaged across the 5 folds.
140
+
141
+ ## 5 Discussion & Future Work
142
+
143
+ In this section, we highlight shortcomings of, and potential improvements to, the QA models used as well as the generated dataset itself.
144
+
145
+ ### 5.1 Interpretation of Results
146
+
147
+ As shown in Table 2, all scores improved significantly when using COVID-QA labels for continued training. F1 and EM scores are low compared to metrics reported on SQuAD because of the different text domain as well as an approximately 40× larger text size. Our baseline model created an average of 6,118.5 tokens for each COVID-QA document compared to 153.2 tokens for SQuAD. The relatively larger increase in F1 score (compared to EM) might be a consequence of the model adjusting to the much longer answers that are present in our dataset. As shown in Fig. 3, the mean number of words in COVID-QA answers is 13.9, whereas for SQuAD (training set, excluding the "no answer" option) it is only 3.2 words per answer.
148
+
149
+ ### 5.2 Modelling Improvements
150
+
151
+ We believe substantial improvements to the QA model can be achieved by using other baseline models, like the large version of RoBERTa fine-tuned on SQuAD. Combined fine-tuning of the baseline model on multiple QA datasets might also be beneficial for the transfer to the CORD-19 domain. Adjusting the underlying Language Model to the CORD-19 domain by training it from scratch or by Language Model adaptation is another option, though one has to consider that such domain-specific Language Models cannot be fitted to SQuAD as well as Language Models pretrained on the general domain.
152
+
153
+ Improvements to the metric are also imaginable. One possible direction is a semantic comparison of prediction to gold label by using vector representations from Language Models as described in BLEURT (Sellam et al., 2020), a newly proposed metric for evaluating text generation systems.
154
+
155
+ One issue with using transformer-based models for QA is the need for vast amounts of computational power, especially when asking a question not just over a short paragraph but over a whole collection of documents. To speed up the QA process, one usually uses a two-stage procedure to first retrieve a set of candidate documents and then apply more powerful QA models to extract the actual answer, as is done with DrQA (Chen et al., 2017) or inside Haystack, an open-source framework for doing QA at scale. Another alternative to speed up inference is to index all possible phrases (including potential answers) and project the question into this index to retrieve the closest answers, as done in the Dense-Sparse Phrase Index (Seo et al., 2019).
156
+
157
+ ### 5.3 Dataset Improvements
158
+
159
+ We are confident about the quality of the generated question/answer pairs because the annotation team leader, a trained medical doctor, manually verified each question/answer pair. However, we also see much potential for improvement. First of all, the dataset does not contain multiple labels by different annotators for any of the questions. These would be needed to determine inter-annotator agreement as well as whether there are multiple possible answers per question. The dataset furthermore does not contain unanswerable questions, i.e. questions where the annotator was sure the given article does not contain an answer. Another shortcoming is the constructed tone of the questions, a result of the SQuAD-style labelling process in which annotators need to create a question while reading the article. This is a major criticism of SQuAD in the paper accompanying Natural Questions (Kwiatkowski et al., 2019). A fast way to create less constructed questions would be to have other annotators rephrase existing questions without access to the underlying article. Another minor improvement could be made to the answer annotation spans. These spans sometimes start or end inside a word that does not belong to the answer.
160
+
161
+ ## 6 Conclusion
162
+
163
+ We have created a SQuAD-style QA dataset based on 2,019 annotations of medical experts and released it to the public. We have shown that the created dataset can be used for improving QA systems by either evaluating existing QA models or continued training. We also believe the created dataset could be used for improving or evaluating existing search systems that browse through the CORD-19 dataset.
164
+
165
+ Based on improved QA systems we hope researchers can find information inside COVID-19 related literature much quicker and therefore utilize the existing knowledge more efficiently for finding solutions to the COVID-19 pandemic.
166
+
167
+ ## 7 Source Code
168
+
169
+ The annotations are available at https://github.com/deepset-ai/COVID-QA/tree/master/data/question-answering/COVID-QA.json. Source code for performing cross-validation on QA models is available at https://github.com/deepset-ai/FARM/blob/master/examples/question_answering_crossvalidation.py
170
+
171
+ The annotation tool is available at https://github.com/deepset-ai/haystack
172
+
173
+ ## 8 Acknowledgements
174
+
175
+ This effort has been successful thanks to the hard work of many people, including, but not limited to, the following (in alphabetical order of last name): Victor Alm, Laura Barrera, Branden Chan, Suha Chari, Simon Fakir, Ling Hsin, Archana Jayakumar, Tripti Kataria, Bogdan Kostic, Pete Logan, Bilawal Nadeem, Travis Nesbit, Milind Pandit, Jeanine Renne, Milos Rusic, Jack Seksenyan, Kyle Shannon, Ajeet Singh, Tanay Soni, and Narayan Sundararajan. We are also grateful for the support of deepset, Intel, NVIDIA and AWS.
176
+
177
+ ## References
178
+
179
+ Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3606- 3611.
180
+
181
+ Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, Peter Clark, and Christopher D Manning. 2014. Modeling biological processes for reading comprehension. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1499-1510.
182
+
183
+ Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870- 1879.
184
+
185
+ Yu Chen and Lanjuan Li. 2020. SARS-CoV-2: virus dynamics and host response. The Lancet, pages 515-516.
188
+
189
+ Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. arXiv preprint arXiv:2002.06305.
190
+
191
+ Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. Mrqa 2019 shared task: Evaluating generalization in reading comprehension. In EMNLP 2019 MRQA Workshop, page 1.
192
+
193
+ Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrieval-augmented language model pre-training. arXiv preprint arXiv:2002.08909.
194
+
195
+ Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics.
196
+
197
+ Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
198
+
199
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
200
+
201
+ Shayne Longpre, Yi Lu, Zhucheng Tu, and Chris DuBois. 2019. An exploration of data augmentation and sampling techniques for domain-agnostic question answering. In EMNLP 2019 MRQA Workshop, page 220.
202
+
203
+ Giuseppe Pascarella, Alessandro Strumia, Chiara Piliego, Federica Bruno, Romualdo Del Buono, Fabio Costa, Simone Scarlata, and Felice Agrò. 2020. Covid-19 diagnosis and management: A comprehensive review. Journal of Internal Medicine.
204
+
205
+ Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392.
206
+
207
+ Thibault Sellam, Dipanjan Das, and Ankur P Parikh. 2020. Bleurt: Learning robust metrics for text generation. arXiv preprint arXiv:2004.04696.
208
+
209
+ Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4430-4441.
210
+
211
+ Raphael Tang, Rodrigo Nogueira, Edwin Zhang, Nikhil Gupta, Phuong Cam, Kyunghyun Cho, and Jimmy Lin. 2020. Rapidly bootstrapping a question answering dataset for covid-19. arXiv preprint arXiv:2004.11339.
212
+
213
+ The White House Office of Science and Technology Policy. 2020 (accessed May 9, 2020). Call to Action to the Tech Community on New Machine Readable COVID-19 Dataset.
214
+
215
+ George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, et al. 2015. An overview of the bioasq large-scale biomedical semantic indexing and question answering competition. BMC Bioinformatics, 16(1):138.
216
+
217
+ Ellen Voorhees, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, William R Hersh, Kyle Lo, Kirk Roberts, Ian Soboroff, and Lucy Lu Wang. 2020. Trec-covid: Constructing a pandemic information retrieval test collection. arXiv preprint arXiv:2005.04474.
218
+
219
+ Edwin Zhang, Nikhil Gupta, Rodrigo Nogueira, Kyunghyun Cho, and Jimmy Lin. 2020. Rapidly deploying a neural search engine for the covid-19 open research dataset: Preliminary thoughts and lessons learned. arXiv preprint arXiv:2004.05125.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/JENSKEEzsoU/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,212 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § COVID-QA: A QUESTION ANSWERING DATASET FOR COVID-19
2
+
3
+ Timo Möller*
4
+
5
+ deepset GmbH
6
+
7
+ Berlin, Germany
8
+
9
+ G Anthony Reina*
10
+
11
+ Intel Corporation
12
+
13
+ Santa Clara, CA, USA
14
+
15
+ Raghavan Jayakumar
16
+
17
+ Lawrence Livermore
18
+
19
+ National Laboratory (retired)
20
+
21
+ Livermore, CA, USA
22
+
23
+ Malte Pietsch
24
+
25
+ deepset GmbH
26
+
27
+ Berlin, Germany
28
+
29
+ § ABSTRACT
30
+
31
+ We present COVID-QA, a Question Answering dataset that is based on the COVID-19 Open Research Dataset. Our dataset consists of 2,019 question/answer pairs annotated by volunteer biomedical experts and saved in the popular SQuAD format. This is, to date, the largest manually created Question Answering dataset on COVID-19-related material. We began with a RoBERTa base model that was initially fine-tuned on SQuAD and continued training the model on our COVID-QA dataset. We found that the additional training on this domain-specific data leads to significant gains in performance. Both the trained model and the COVID-QA annotated dataset have been open-sourced and made available at: https://github.com/deepset-ai/COVID-QA. Our hope is that researchers use COVID-QA to improve or evaluate search systems over COVID-19-related scientific texts.
32
+
33
+ § 1 INTRODUCTION
34
+
35
+ In 2019, a novel coronavirus (SARS-CoV-2) caused a worldwide pandemic (COVID-19) that has spread to over 8 million people and led to over 400,000 deaths to date (Pascarella et al., 2020; Chen and Li, 2020). On March 16, 2020, the White House Office of Science and Technology Policy released the COVID-19 Open Research Dataset called CORD-19 (The White House Office of Science and Technology Policy, 2020 (accessed May 9, 2020)). CORD-19 is an open-sourced corpus of over 69,000 scholarly articles that focus on COVID-19, SARS-CoV-2, the Coronavirus group and other viral outbreaks. The CORD-19 project is a collaboration of the Allen Institute for AI, the Chan Zuckerberg Initiative (CZI), Georgetown University's Center for Security and Emerging Technology (CSET), Microsoft, and the National Library of Medicine (NLM) at the National Institutes of Health.
36
+
37
+ COVID-QA is an open-source, volunteer-led project to develop a Question Answering (QA) dataset based on text from CORD-19 articles. 2,019 extractive question/answer pairs have been created by medical experts in a fashion similar to the prominent SQuAD dataset. The annotations have been used for assessing the performance of QA systems trained on other general-domain QA datasets. Furthermore, these general QA systems can be improved by training and evaluating on COVID-QA in a k-fold cross-validation manner. A special form of k-fold cross-validation for QA models has been implemented as part of this work to optimally leverage the dataset. The COVID-QA annotations are released under the Creative Commons Attribution 4.0 International License (CC-BY 4.0). The goal is to provide a QA dataset for researchers to benchmark or improve the functionality of their COVID-specific search systems or QA models.
38
+
39
+ § 2 RELATED WORK
40
+
41
+ § 2.1 DATASETS
42
+
43
+ There are many large, extractive QA datasets, including prominent ones such as SQuAD (Rajpurkar et al., 2016) and Natural Questions (Kwiatkowski et al., 2019). Typically, these are crowd-sourced, general-purpose datasets. Each contains more than a hundred thousand question/answer pairs. This makes them suitable for training deep learning models from scratch.
44
+
45
+ In contrast, there are also many smaller, domain-specific QA datasets created by expert annotators. Although their small sizes limit their usefulness in the de novo training of QA models, their domain-specific annotations make them good candidates for evaluating existing QA systems. For example, the MRQA Shared Task (Fisch et al., 2019), which focuses on the generalizability of QA systems, contains domain-specific datasets such as BioASQ (Tsatsaronis et al., 2015) (~3,000 QA pairs) and BioProcess (Berant et al., 2014) (219 QA pairs).
46
+
47
+ * These authors contributed equally.
48
+
49
+
50
+
51
+ Figure 1: The COVID-19 QA Annotator is a lightweight, web-based interface which allows multiple experts to quickly apply SQuAD-style labels to research articles. The annotator uses the mouse to highlight the answer (pink) and creates a question based on that answer (box on left). The only system requirement is a web browser and an internet connection. Annotations can be exported either in Microsoft Excel or SQuAD (JSON) format.
52
+
53
+ Since the COVID-19 pandemic began there have also been efforts in assembling relevant QA datasets, such as CovidQA (Tang et al., 2020) and the CORD-19 Information Aggregator; or dataset collections, such as Cord-19 PubAnnotation and Social Media for Public Health. Additionally, there is an ongoing IR shared task dataset TREC-COVID (Voorhees et al., 2020) that is, at the time of writing, under construction.
54
+
55
+ § 2.2 TOOLS
56
+
57
+ New tools are emerging to make search (or Question Answering) over COVID-19-related literature possible. Recently, Google released the COVID-19 Research Explorer, a semantic search interface on top of the CORD-19 dataset. Neural Covidex (Zhang et al., 2020) is a similar initiative.
58
+
59
+ We view our work as a complementary effort to aid in the evaluation and/or improvement to these existing tools.
60
+
61
+ § 3 COVID-QA DATASET CREATION
62
+
63
+ The CORD-19 dataset was compiled so that researchers could apply "recent advances in natural language processing (NLP) and other AI techniques to generate new insights in support of the ongoing fight against this infectious disease" (The White House Office of Science and Technology Policy, 2020 (accessed May 9, 2020)). NLP and AI techniques are especially important since the large volume of COVID-19-related research articles will continue to grow daily as more resources are rallied to combat the pandemic.
64
+
65
+ Although the CORD-19 dataset itself provides a valuable start for researchers, it remains a completely unstructured resource. The dataset is simply the text from published scientific articles that were compiled from a keyword search of the Chan-Zuckerberg Initiative (CZI), PubMed Central (PMC), BioRxiv, and medRxiv online databases.
66
+
67
+ We decided to build an automated tool that could answer any information-seeking question about COVID-19 using scientific literature. In order to achieve this, we needed to provide a sufficient number of annotated questions/answers for the CORD-19 dataset. Our inspiration for this project came, in part, from previous works, such as the BioASQ datasets and the BioBERT and SciBERT biomedical NLP models (Tsatsaronis et al., 2015; Lee et al., 2020; Beltagy et al., 2019). These projects demonstrated that fine-tuning an existing baseline NLP model on a highly domain-specific text corpus could significantly improve the accuracy of the predictions.
68
+
69
+ § 3.1 ARTICLE AND ANNOTATOR SELECTION
70
+
71
+ Within 10 days of the CORD-19 dataset release, we began annotation. First, we selected the subset of the CORD-19 dataset that had full text and a commercially-friendly copyright license. On manual inspection of the subset, we determined that the majority of the scientific articles were not highly specific to COVID or coronavirus, but instead related more generally to viral outbreaks and genomics. This lack of COVID-19 specificity in the articles is understandable given the newness of the pandemic. Our domain experts agreed that scientific research on related viral pandemics, including SARS, MERS, and HIV-1, was relevant to COVID-19 researchers. Nevertheless, they also agreed to balance the dataset as much as possible between COVID-19 and generic viral pandemic-related articles. We, therefore, extracted 245 articles by searching the commercially-friendly subset for the phrases "Covid", "Wuhan" and "Coronavirus". We extracted an additional 200 random articles from the commercially-friendly subset. Finally, our annotators uploaded 5 additional scientific articles based on their knowledge of the domain. Of these 450 scientific articles, 147 were annotated over the 8-week period prior to this report. Although the expert annotators were volunteers, it was required that all have at least a Master's degree in biomedical sciences to ensure a basic understanding of the scientific articles. The annotation team was led by a medical doctor (G.A.R.) who recruited 15 volunteers and vetted their credentials. Annotators were told that the COVID-QA dataset was not simply a database of COVID-19 facts and figures, but instead, datapoints to train a model that could answer questions given any scientific article about COVID-19 or similar viral outbreaks. Annotators focused their questions mainly on viral/genomic structure, antigenic targets, epidemiological characteristics, symptoms, and pharmaceutical treatment. We presume most of the questions are relevant for any scientific report of a viral outbreak.
72
+
73
+
74
+
75
+ Figure 2: Sample questions and answers from the annotated CORD-19 dataset. The COVID-QA annotation tool allows for this data to be reviewed/corrected by other annotators. The experts that volunteered as annotators were required to have at least a Master's degree in one of the biomedical sciences.
76
+
77
+ § 3.2 QA ANNOTATION PROCESS
78
+
79
+ We used an existing, web-based annotation tool that had been created by deepset for their Neural Search Framework Haystack. Figure 1 shows an image of the COVID-QA Annotation tool. The tool was used by annotators to mark answer text and formulate corresponding questions while sifting through the article. In addition to being a lightweight, scalable interface, the tool was easy enough to use that even non-technical experts were quickly comfortable with it. We created a series of online videos and released annotation guidelines to demonstrate the usage of the tool and the concepts behind the QA annotations. The annotation tool allowed the model developers to export the labels in SQuAD format for convenient fine-tuning of QA models. The annotators generated 2,019 question/answer pairs based on the annotations of 147 articles from the dataset. All 2,019 annotations were verified, corrected or removed by the annotation team leader. Figure 2 shows several examples of these question/answer pairs.
80
+
81
+ The annotators reported the annotation process to be relatively easy and enjoyed the experience because it provided them an opportunity to contribute to the COVID-19 response while reading interesting and informative research papers. While the annotators needed to be familiar with the biomedical field and recent broad progress, we found that intimate knowledge of the subject of the article was not essential. Instead, it is adequate for the annotator to understand the material well enough to figure out the broad relationship between the subjects and objects that are being linked and the purported logic in it. Where an annotator was not certain of the relationship or importance of a statement, they simply omitted an annotation. A Slack workspace was created to allow annotators to share best practices, post issues with existing labels, and recommend enhancements to the annotation process.
82
+
83
+ § 3.3 DATASET ANALYSIS
84
+
85
+ Although most of the annotations were found to be straightforward, producing short, simple question/answer pairs, we found a large number of articles where it was difficult to properly phrase the question or frame the answer. Specifically, the annotators found that some question/answer pairs relied on context that spanned the whole article. Figure 3 shows that although the majority of question/answer pairs contained short sentences, there was also a very long tail of question/answer pairs where the answers could span more than 30 words. The annotators reported this issue to the model developers in order to prepare the following experiments that would address these long question/answer pairs during training and inference.
86
+
87
+ The annotators also discussed new labelling schemes, such as single-question/multiple-answer annotations and semantic variations of the same question for a single answer. Although these were considered to be valid research arms for future work, we chose to keep the annotation instructions as simplified as possible for the current work.
88
+
89
+
90
+
91
+ Figure 3: The COVID-19 QA dataset statistics. Although the majority of the 2,019 questions or answers contain fewer than 10 words, the answer distribution tends to have a much longer tail.
92
+
93
+ § 4 EXPERIMENTS
94
+
95
+ § 4.1 MODELS
96
+
97
+ We used pretrained, transformer-based Language Models that had been fine-tuned to QA datasets. It has been shown that models trained on the SQuAD dataset have the best generalization performance to out-of-domain data compared to other large QA datasets (Longpre et al., 2019).
98
+
99
+ We chose a RoBERTa-base architecture (Liu et al., 2019) for its good accuracy vs model size trade-off and fine-tuned it on the SQuAD dataset. The resulting model and its evaluation can be accessed at Hugging Face's model hub under "deepset/roberta-base-squad2". We refer to this model as the baseline model.
100
+
101
+ We further fine-tuned the baseline model using the COVID-QA annotations introduced in this paper. We used the FARM framework to fine-tune the model in a k-fold cross-validation manner. The training parameters are given in Table 1. We disabled the option to return "no answer" during testing since in COVID-QA each question has a dedicated text answer. We will refer to this model as the COVID-QA model.
102
+
103
+ § 4.2 METRIC
104
+
105
+ Most open-domain Question Answering systems, such as DrQA (Chen et al., 2017) or REALM (Guu et al., 2020), use the Exact Match (EM) between label and prediction text (not the position) as the evaluation criterion, where the answer is often an unambiguous, single entity (Q: "How many feet are in one mile?" A: "5,280"). A less sensitive metric than EM is needed in cases where there are long answers, long articles, and/or only a single answer annotation per question.
106
+
107
+ We chose to take multiple model predictions and any positional overlap between prediction and label as the metric and refer to it as top-n accuracy. Top-n accuracy compares the gold label against $n$ model predictions and looks for any overlap between prediction and answer positions to create a binary hit-or-miss scenario from which accuracy can be computed. We also report the EM and F1 scores of our models to simplify comparison to existing methods. In the following paragraphs, we outline the reasons for choosing top-n accuracy on COVID-QA data.
108
+
109
+ As an example of variation in long answers, consider a question like "Does C-C chemokine receptor type 5 (CCR5) affect the transmission of HIV- 1?" given the context "Genetic variants in CCR5 have been shown to influence vertical transmission of HIV-1. CCR5 promoter variants resulting in higher expression of the receptor were associated with increased risk of MTCT of HIV-1 [...].". A valid answer could be either the first, the second or both sentences altogether.
110
+
111
+ In long scientific articles, the abstract, results, and conclusion may contain some of the same information but with slightly different wording. This is, in fact, part of the design that goes into writing a good scientific article. A metric such as EM might not be the best measure of model performance in these cases. When only one annotation is present, it cannot, by nature, capture all possible variations in the answers and their positions in the article. The chances of multiple correct answer spans are increased when working with long documents and, therefore, a robust metric should be able to account for these variations.
112
+
113
+ § 4.3 EVALUATION
114
+
115
+ Evaluating Language Models trained on downstream tasks is difficult because the performance varies across runs (Dodge et al., 2020) - especially for difficult tasks and/or small dataset sizes. We therefore implemented k-fold cross validation for QA to get a better estimate of how well the performance will generalize.
116
+
117
+ Another complication for training QA models on long documents is the restricted sequence length of Transformer-based Language Models. Long documents are split into multiple (overlapping) chunks, with mostly one chunk containing the answer, while the other chunks are negative examples where no answer can be found. We observed that having too many negative examples both substantially increases training time and prevents the model from converging. We therefore included a down-sampling option to keep only $d$ negative examples during training. For evaluation, the whole article was processed and no down-sampling was applied.
118
+
119
+
120
+
121
+ Figure 4: Per fold cross validation scores for the COVID-QA model (RoBERTa-base fine-tuned on SQuAD, continued with cross validation on COVID-QA annotations). The bars represent top-3 accuracy, Exact Match and F1 scores for each of the 5 folds. The horizontal lines depict average values for each metric.
122
+
123
+ Parameter         Value
+ -----------------------------------------------
+ baseline model    deepset/roberta-base-squad2
+ batch size        80
+ epochs            2
+ learning rate     3e-5
+ max seq len       384
+ doc stride        192
+ no answer         -100 (disabled)
+ cross val folds   5
+ hardware          4x V100
155
+
156
+ Table 1: Parameters used for training and evaluating the COVID-QA model. Note that disabling the option of "no answer" only applies to the test setting.
157
+
158
+ For evaluation, we used k-fold cross validation with one negative example per question/answer pair during training and the full article for testing.
159
+
160
+ § 4.4 RESULTS
161
+
162
+ Table 2 shows that fine-tuning the baseline model on the domain data with the parameters specified in Table 1 results in an absolute improvement in top-3 accuracy of about +5% (85.59 versus 80.68) and in the F1 score of slightly over +10% (59.53 versus 49.43). The scores reported for the COVID-QA model are averaged across all five folds, with Figure 4 showing the scores for each fold.
163
+
164
+ Model             EM      F1      Top-3 Acc.
+ ------------------------------------------------
+ Baseline          21.84   49.43   80.68
+ COVID-QA model    25.90   59.53   85.59
175
+
176
+ Table 2: Comparison of baseline model to the model cross validated on COVID-QA labels. The scores for the COVID-QA model are averaged across the 5 folds.
177
+
178
+ § 5 DISCUSSION & FUTURE WORK
179
+
180
+ In this section, we highlight shortcomings of, and potential improvements to, the QA models used as well as the generated dataset itself.
181
+
182
+ § 5.1 INTERPRETATION OF RESULTS
183
+
184
+ As shown in Table 2, all scores improved significantly when using COVID-QA labels for continued training. F1 and EM scores are low compared to metrics reported on SQuAD because of the different text domain as well as an approximately 40× larger text size. Our baseline model created an average of 6,118.5 tokens for each COVID-QA document compared to 153.2 tokens for SQuAD. The relatively larger increase in F1 score (compared to EM) might be a consequence of the model adjusting to the much longer answers that are present in our dataset. As shown in Fig. 3, the mean number of words in COVID-QA answers is 13.9, whereas for SQuAD (training set, excluding the "no answer" option) it is only 3.2 words per answer.
185
+
186
+ § 5.2 MODELLING IMPROVEMENTS
187
+
188
+ We believe substantial improvements to the QA model can be achieved by using other baseline models, like the large version of RoBERTa fine-tuned on SQuAD. Combined fine-tuning of the baseline model on multiple QA datasets might also be beneficial for the transfer to the CORD-19 domain. Adjusting the underlying Language Model to the CORD-19 domain by training it from scratch or by Language Model adaptation is another option, though one has to consider that such domain-specific Language Models cannot be fitted to SQuAD as well as Language Models pretrained on the general domain.
189
+
190
+ Improvements to the metric are also imaginable. One possible direction is a semantic comparison of prediction to gold label by using vector representations from Language Models as described in BLEURT (Sellam et al., 2020), a newly proposed metric for evaluating text generation systems.
191
+
192
+ One issue with using transformer-based models for QA is the need for vast amounts of computational power, especially when asking a question not just over a short paragraph but over a whole collection of documents. To speed up the QA process, one usually uses a two-stage procedure to first retrieve a set of candidate documents and then apply more powerful QA models to extract the actual answer, as is done with DrQA (Chen et al., 2017) or inside Haystack, an open-source framework for doing QA at scale. Another alternative to speed up inference is to index all possible phrases (including potential answers) and project the question into this index to retrieve the closest answers, as done in the Dense-Sparse Phrase Index (Seo et al., 2019).
193
+
194
+ § 5.3 DATASET IMPROVEMENTS
195
+
196
+ We are confident about the quality of the generated question/answer pairs because the annotation team leader, a trained medical doctor, manually verified each question/answer pair. However, we also see much potential for improvement. First of all, the dataset does not contain multiple labels by different annotators for any of the questions. These would be needed to determine inter-annotator agreement as well as whether there are multiple possible answers per question. The dataset furthermore does not contain unanswerable questions, i.e. questions where the annotator was sure the given article does not contain an answer. Another shortcoming is the constructed tone of the questions, a result of the SQuAD-style labelling process in which annotators need to create a question while reading the article. This is a major criticism of SQuAD in the paper accompanying Natural Questions (Kwiatkowski et al., 2019). A fast way to create less constructed questions would be to have other annotators rephrase existing questions without access to the underlying article. Another minor improvement could be made to the answer annotation spans. These spans sometimes start or end inside a word that does not belong to the answer.
197
+
198
+ § 6 CONCLUSION
199
+
200
+ We have created a SQuAD-style QA dataset based on 2,019 annotations of medical experts and released it to the public. We have shown that the created dataset can be used for improving QA systems by either evaluating existing QA models or continued training. We also believe the created dataset could be used for improving or evaluating existing search systems that browse through the CORD-19 dataset.
201
+
202
+ Based on improved QA systems we hope researchers can find information inside COVID-19 related literature much quicker and therefore utilize the existing knowledge more efficiently for finding solutions to the COVID-19 pandemic.
203
+
204
+ § 7 SOURCE CODE
205
+
206
+ The annotations are available at https://github.com/deepset-ai/COVID-QA/tree/master/data/question-answering/COVID-QA.json. Source code for performing cross-validation on QA models is available at https://github.com/deepset-ai/FARM/blob/master/examples/question_answering_crossvalidation.py
207
+
208
+ The annotation tool is available at https://github.com/deepset-ai/haystack
209
+
210
+ § 8 ACKNOWLEDGEMENTS
211
+
212
+ This effort has been successful thanks to the hard work of many people, including, but not limited to, the following (in alphabetical order of last name): Victor Alm, Laura Barrera, Branden Chan, Suha Chari, Simon Fakir, Ling Hsin, Archana Jayakumar, Tripti Kataria, Bogdan Kostic, Pete Logan, Bilawal Nadeem, Travis Nesbit, Milind Pandit, Jeanine Renne, Milos Rusic, Jack Seksenyan, Kyle Shannon, Ajeet Singh, Tanay Soni, and Narayan Sundararajan. We are also grateful for the support of deepset, Intel, NVIDIA and AWS.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/JQCYcdHfXyJ/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,233 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Measuring Emotions in the COVID-19 Real World Worry Dataset
2
+
3
+ Bennett Kleinberg ${}^{1,2}$ Isabelle van der Vegt ${}^{1}$ Maximilian Mozes ${}^{1,2,3}$
4
+
5
+ ${}^{1}$ Department of Security and Crime Science ${}^{2}$ Dawes Centre for Future Crime ${}^{3}$ Department of Computer Science
6
+
7
+ University College London
8
+
9
+ \{bennett.kleinberg, isabelle.vandervegt, maximilian.mozes\}@ucl.ac.uk
10
+
11
+ ## Abstract
12
+
13
+ The COVID-19 pandemic is having a dramatic impact on societies and economies around the world. With various measures of lockdowns and social distancing in place, it becomes important to understand emotional responses on a large scale. In this paper, we present the first ground truth dataset of emotional responses to COVID-19. We asked participants to indicate their emotions and express these in text. This resulted in the Real World Worry Dataset of 5,000 texts (2,500 short + 2,500 long texts). Our analyses suggest that emotional responses correlated with linguistic measures. Topic modeling further revealed that people in the UK worry about their family and the economic situation. Tweet-sized texts functioned as a call for solidarity, while longer texts shed light on worries and concerns. Using predictive modeling approaches, we were able to approximate the emotional responses of participants from text within 14% of their actual value. We encourage others to use the dataset and improve how we can use automated methods to learn about emotional responses and worries about an urgent problem.
14
+
15
+ ## 1 Introduction
16
+
17
+ The outbreak of the SARS-CoV-2 virus in late 2019 and subsequent evolution of the COVID-19 disease has affected the world on an enormous scale. While hospitals are at the forefront of trying to mitigate the life-threatening consequences of the disease, practically all societal levels are dealing directly or indirectly with an unprecedented situation. Most countries are - at the time of writing this paper - in various stages of a lockdown. Schools and universities are closed or operate online-only, and merely essential shops are kept open.
18
+
19
+ At the same time, lockdown measures such as social distancing (e.g., keeping a distance of at least 1.5 meters from one another and only socializing with two people at most) might have a direct impact on people's mental health. With an uncertain outlook on the development of the COVID-19 situation and its preventative measures, it is of vital importance to understand how governments, NGOs, and social organizations can help those who are most affected by the situation. That implies, at the first stage, understanding the emotions, worries, and concerns that people have and possible coping strategies. Since a majority of online communication is recorded in the form of text data, measuring the emotions around COVID-19 will be a central part of understanding and addressing the impacts of the COVID-19 situation on people. This is where computational linguistics can play a crucial role.
20
+
21
+ In this paper, we present and make publicly available a high quality, ground truth text dataset of emotional responses to COVID-19. We report initial findings on linguistic correlates of emotions, topic models, and prediction experiments.
22
+
23
+ ### 1.1 Ground truth emotions datasets
24
+
25
+ Tasks like emotion detection (Seyeditabari et al., 2018) and sentiment analysis (Liu, 2015) typically rely on labeled data in one of two forms. Either a corpus is annotated on a document level, where individual documents are judged according to a predefined set of emotions (Strapparava and Mihalcea, 2007; Preotiuc-Pietro et al., 2016), or individual $n$-grams sourced from a dictionary are categorised or scored with respect to their emotional value (Bradley et al., 1999; Strapparava and Valitutti, 2004). These annotations are done (semi-)automatically (e.g., exploiting hashtags such as #happy) (Mohammad and Kiritchenko, 2015; Abdul-Mageed and Ungar, 2017) or manually through third persons (Mohammad and Turney, 2010). While these approaches are common practice and have accelerated the progress made in the field, they are limited in that they propagate a pseudo ground truth. This is problematic because, as we argue, the core aim of emotion detection is to make an inference about the author's emotional state. The text as the product of an emotional state then functions as a proxy for the latter. For example, rather than wanting to know whether a Tweet is written in a pessimistic tone, we are interested in learning whether the author of the text actually felt pessimistic.
26
+
27
+ The limitation inherent to third-person annotation, then, is that they might not be adequate measurements of the emotional state of interest. The solution, albeit a costly one, lies in ground truth datasets. Whereas real ground truth would require - in its strictest sense - a random assignment of people to experimental conditions (e.g., one group that is given a positive product experience, and another group with a negative experience), variations that rely on self-reported emotions can also mitigate the problem. A dataset that relies on self-reports is the International Survey on Emotion Antecedents and Reactions (ISEAR) ${}^{1}$ , which asked participants to recall from memory situations that evoked a set of emotions. The COVID-19 situation is unique and calls for novel datasets that capture people's affective responses to it while it is happening.
28
+
29
+ ### 1.2 Current COVID-19 datasets
30
+
31
+ Several datasets mapping how the public responds to the pandemic have been made available. For example, tweets relating to the Coronavirus have been collected since March 11, 2020, yielding about 4.4 million tweets a day (Banda et al., 2020). Tweets were collected through the Twitter stream API, using keywords such as 'coronavirus' and 'COVID-19'. Another Twitter dataset of Coronavirus tweets has been collected since January 22, 2020, in several languages, including English, Spanish, and Indonesian (Chen et al., 2020). Further efforts include the ongoing Pandemic Project${}^{2}$, which has people write about the effect of the coronavirus outbreak on their everyday lives.
32
+
33
+ ### 1.3 The COVID-19 Real World Worry Dataset
34
+
35
+ This paper reports initial findings for the Real World Worry Dataset (RWWD), which captured the emotional responses of UK residents to COVID-19 at a point in time when the impact of the COVID-19 situation affected the lives of all individuals in the UK. The data were collected on the 6th and 7th of April 2020, a time at which the UK was under "lockdown" (ITV News, 2020) and death tolls were increasing. On April 6, 5,373 people in the UK had died of the virus, and 51,608 had tested positive (Walker et al., 2020). On the day before data collection, the Queen addressed the nation via a television broadcast (The Guardian, 2020). Furthermore, it was also announced that Prime Minister Boris Johnson had been admitted to intensive care for COVID-19 symptoms (Lyons, 2020).
36
+
37
+ The RWWD is a ground truth dataset that used a direct survey method and obtained written accounts from people alongside data on their felt emotions while writing. As such, the dataset does not rely on third-person annotation but on direct, self-reported emotions. We present two versions of the RWWD, each consisting of 2,500 English texts representing the participants' genuine emotional responses to the Corona situation in the UK: the Long RWWD consists of texts that were open-ended in length, for which participants were asked to express their feelings as they wished. The Short RWWD asked the same people to also express their feelings in Tweet-sized texts. The latter was chosen to facilitate the use of this dataset for Twitter data research.
38
+
39
+ The dataset is publicly available.${}^{3}$
40
+
41
+ ## 2 Data
42
+
43
+ We collected the data of $n = 2,500$ participants (94.46% native English speakers) via the crowdsourcing platform Prolific${}^{4}$. Every participant provided consent in line with the local IRB. The sample requirements were that participants were residents of the UK and Twitter users. In the data collection task, all participants were asked to indicate how they felt about the current COVID-19 situation using 9-point scales (1 = not at all, 5 = moderately, 9 = very much). Specifically, each participant rated how worried they were about the Corona/COVID-19 situation and how much anger, anxiety, desire, disgust, fear, happiness, relaxation, and sadness (Harmon-Jones et al., 2016) they felt about their situation at this moment. They also had to choose which of the eight emotions (excluding worry) best represented their feeling at this moment.
44
+
45
+ ---
46
+
47
+ ${}^{1}$ https://www.unige.ch/cisa/research/materials-and-online-research/research-material/
48
+
49
+ ${}^{2}$ https://utpsyc.org/covid19/index.html
50
+
51
+ ${}^{3}$ Data: https://github.com/ben-aaron188/covid19worry and https://osf.io/awy7r/
52
+
53
+ ${}^{4}$ https://www.prolific.co/
54
+
55
+ ---
56
+
57
+ All participants were then asked to write two texts. First, we instructed them to "write in a few sentences how you feel about the Corona situation at this very moment. This text should express your feelings at this moment" (min. 500 characters). The second part asked them to express their feelings in Tweet form (max. 240 characters) with otherwise identical instructions. Finally, the participants indicated on 9-point scales how well they felt they could express their feelings (in general/in the long text/in the Tweet-length text), how often they used Twitter (1 = never, 5 = every month, 9 = every day), and whether English was their native language. The overall corpus size of the dataset was 2,500 long texts (320,372 tokens) and 2,500 short texts (69,171 tokens). In the long and short texts, only 6 and 17 emoticons (e.g., ":(", "<3") were found, respectively. Because of this low frequency, emoticons were not considered further in our analysis.
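+ As a reading aid, the following is a minimal sketch of how corpus descriptives of the kind reported in Table 1 (tokens, types, type-token ratio, characters) can be computed. The whitespace tokeniser, the pandas-based summary, and the placeholder texts are illustrative assumptions; the paper does not state which tokeniser was used.
+
+ ```python
+ import pandas as pd
+
+ def describe_texts(texts):
+     """Return per-text token, type, TTR, and character counts."""
+     rows = []
+     for text in texts:
+         tokens = text.lower().split()  # naive whitespace tokenisation (assumption)
+         types = set(tokens)
+         rows.append({
+             "tokens": len(tokens),
+             "types": len(types),
+             "ttr": len(types) / len(tokens) if tokens else 0.0,
+             "chars": len(text),
+         })
+     return pd.DataFrame(rows)
+
+ # Placeholder texts standing in for the long and short statements
+ stats = describe_texts(["I feel worried about my family.", "Stay home, save lives."])
+ print(stats.mean().round(2), stats.std().round(2), sep="\n")
+ ```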
58
+
59
+ ### 2.1 Excerpts
60
+
61
+ Below are two excerpts from the dataset:
62
+
63
+ Long text: I am 6 months pregnant, so I feel worried about the impact that getting the virus would have on me and the baby. My husband also has asthma so that is a concern too. I am worried about the impact that the lockdown will have on my ability to access the healthcare I will need when having the baby, and also about the exposure to the virus [...] There is just so much uncertainty about the future and what the coming weeks and months will hold for me and the people I care about.
64
+
65
+ Tweet-sized text: Proud of our NHS and keyworkers who are working on the frontline at the moment. I'm optimistic about the future, IF EVERYONE FOLLOWS THE RULES. We need to unite as a country, by social distancing and stay in.
66
+
67
+ ### 2.2 Descriptive statistics
68
+
69
+ We excluded nine participants who padded the long text with punctuation or letter repetitions. The dominant feelings of participants were anxiety/worry, sadness, and fear (see Table 1)${}^{5}$. For all emotions, the participants' self-ratings spanned the whole spectrum (from "not at all" to "very much"). The final sample consisted of 65.15% females${}^{6}$, with an overall mean age of 33.84 years ($SD = 22.04$).
70
+
71
+ The participants' self-reported ability to express their feelings, in general, was $M = 6.88$ ($SD = 1.69$). When specified for both types of texts separately, we find that the ability to express themselves in the long text ($M = 7.12$, $SD = 1.78$) was higher than that for short texts ($M = 5.91$, $SD = 2.12$), Bayes factor > 1e+96.
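+ A paired comparison of this kind could be sketched as below; the Bayesian t-test via the pingouin library and the simulated ratings are assumptions for illustration only, not the authors' actual analysis code.
+
+ ```python
+ import numpy as np
+ import pingouin as pg  # provides a t-test with a Bayes factor (BF10) column
+
+ rng = np.random.default_rng(0)
+ # Placeholder self-ratings of expression ability on the 9-point scale
+ long_ability = np.clip(rng.normal(7.1, 1.8, size=2491), 1, 9)
+ short_ability = np.clip(rng.normal(5.9, 2.1, size=2491), 1, 9)
+
+ result = pg.ttest(long_ability, short_ability, paired=True)
+ print(result[["T", "p-val", "BF10"]])
+ ```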
72
+
73
+ The participants reported using Twitter almost weekly ($M = 6.26$, $SD = 2.80$), tweeting themselves rarely to about once per month ($M = 3.67$, $SD = 2.52$), and actively participating in conversations at a similar frequency ($M = 3.41$, $SD = 2.40$). Our participants were thus familiar with Twitter as a platform but not overly active in tweeting themselves.
74
+
75
+ <table><tr><td>Variable</td><td>$\mathbf{{Mean}}$</td><td>SD</td></tr><tr><td colspan="3">Corpus descriptives</td></tr><tr><td>Tokens (long text)</td><td>127.75</td><td>39.67</td></tr><tr><td>Tokens (short text)</td><td>27.70</td><td>15.98</td></tr><tr><td>Types (long text)</td><td>82.69</td><td>18.24</td></tr><tr><td>Types (short text)</td><td>23.50</td><td>12.21</td></tr><tr><td>TTR (long text)</td><td>0.66</td><td>0.06</td></tr><tr><td>TTR (short text)</td><td>0.88</td><td>0.09</td></tr><tr><td>Chars. (long text)</td><td>632.54</td><td>197.75</td></tr><tr><td>Chars. (short text)</td><td>137.21</td><td>78.40</td></tr><tr><td colspan="3">Emotions</td></tr><tr><td>Worry</td><td>${6.55}^{a}$</td><td>1.76</td></tr><tr><td>Anger1 (4.33%)</td><td>${3.91}^{b}$</td><td>2.24</td></tr><tr><td>Anxiety (55.36%)</td><td>${6.49}^{a}$</td><td>2.28</td></tr><tr><td>Desire (1.09%)</td><td>${2.97}^{b}$</td><td>2.04</td></tr><tr><td>Disgust (0.69%)</td><td>${3.23}^{b}$</td><td>2.13</td></tr><tr><td>Fear (9.22%)</td><td>${5.67}^{a}$</td><td>2.27</td></tr><tr><td>Happiness (1.58%)</td><td>${3.62}^{b}$</td><td>1.89</td></tr><tr><td>Relaxation (13.38%)</td><td>${3.95}^{b}$</td><td>2.13</td></tr><tr><td>Sadness (14.36%)</td><td>${5.59}^{a}$</td><td>2.31</td></tr></table>
76
+
77
+ Table 1: Descriptive statistics of text data and emotion ratings. ${}^{1}$ Brackets indicate how often the emotion was chosen as the best fit for the current feeling about COVID-19. ${}^{a}$ The value is larger than the neutral midpoint with Bayes factors > 1e+32. ${}^{b}$ The value is smaller than the neutral midpoint with BF > 1e+115. TTR = type-token ratio.
78
+
79
+ ---
80
+
81
+ ${}^{5}$ For correlations among the emotions, see the online supplement
82
+
83
+ ${}^{6}$ For an analysis of gender differences using this dataset, see van der Vegt and Kleinberg (2020).
84
+
85
+ ---
86
+
87
+ ## 3 Findings and experiments
88
+
89
+ ### 3.1 Correlations of emotions with LIWC categories
90
+
91
+ We correlated the self-reported emotions to matching categories of the LIWC2015 lexicon (Pennebaker et al., 2015). The overall matching rate was high (92.36% and 90.11% for short and long texts, respectively). Across all correlations, we see that the extent to which the linguistic variables explain variance in the emotion values (indicated by the ${R}^{2}$ ) is larger in long texts than in Tweet-sized short texts (see Table 2). There are significant positive correlations for all affective LIWC variables with their corresponding self-reported emotions (i.e., higher LIWC scores accompanied higher emotion scores, and vice versa). These correlations imply that the linguistic variables explain up to ${10}\%$ and $3\%$ of the variance in the emotion scores for long and short texts, respectively.
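+ For illustration, a single cell of Table 2 could be computed along the following lines; the Fisher-z confidence interval and the placeholder LIWC scores are assumptions, since the LIWC dictionary itself is proprietary.
+
+ ```python
+ import numpy as np
+ from scipy import stats
+
+ def pearson_with_ci(x, y, conf=0.99):
+     """Pearson's r with a Fisher-z confidence interval and explained variance."""
+     r, _ = stats.pearsonr(x, y)
+     z, se = np.arctanh(r), 1.0 / np.sqrt(len(x) - 3)
+     crit = stats.norm.ppf(1 - (1 - conf) / 2)
+     lo, hi = np.tanh(z - crit * se), np.tanh(z + crit * se)
+     return round(r, 2), (round(lo, 2), round(hi, 2)), round(r ** 2 * 100, 2)
+
+ rng = np.random.default_rng(1)
+ liwc_anx = rng.normal(size=2491)                  # placeholder LIWC "anx" scores
+ anxiety = 0.3 * liwc_anx + rng.normal(size=2491)  # placeholder self-reported anxiety
+ print(pearson_with_ci(liwc_anx, anxiety))         # r, 99% CI, explained variance in %
+ ```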
92
+
93
+ The LIWC also contains categories intended to capture areas that concern people (not necessarily in a negative sense), which we correlated with the self-reported worry score. Positive (negative) correlations would suggest that the higher (lower) the worry score of the participants, the larger their score on the respective LIWC category. We found no correlations for the categories "work", "money", and "death", suggesting that the worry people reported was not associated with these categories. Significant positive correlations emerged in the long texts for "family" and "friend": the more worried people were, the more they spoke about family and, to a lesser degree, friends.
94
+
95
+ ### 3.2 Topic models of people's worries
96
+
97
+ We constructed topic models for the long and short texts separately using the stm package in R (Roberts et al., 2014a). The text data were lower-cased; punctuation, stopwords, and numbers were removed; and all words were stemmed. For the long texts, we chose a topic model with 20 topics, as determined by the semantic coherence and exclusivity values for the model (Mimno et al., 2011; Roberts et al., 2014a,b). Table 3 shows the five most prevalent topics with ten associated frequent terms for each topic (see the online supplement for all 20 topics). The most prevalent topic seems to relate to following the rules of the lockdown. In contrast, the second most prevalent topic appears to relate to worries about employment and the economy. For the Tweet-sized texts, we selected a model with 15 topics. The most common topic bears a resemblance to the government slogan "Stay at home, protect the NHS, save lives." The second most prevalent topic seems to relate to calls for others to adhere to social distancing rules.
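+ The topic models above were fitted with the stm package in R; the sketch below is only a rough Python analogue (an assumption, not the authors' pipeline) that applies comparable preprocessing and fits an LDA model with a fixed number of topics.
+
+ ```python
+ from nltk.stem.porter import PorterStemmer
+ from sklearn.decomposition import LatentDirichletAllocation
+ from sklearn.feature_extraction.text import CountVectorizer
+
+ stemmer = PorterStemmer()
+
+ def preprocess(text):
+     # lower-case, keep alphabetic tokens only, and stem (mirrors the stm preprocessing)
+     return " ".join(stemmer.stem(t) for t in text.lower().split() if t.isalpha())
+
+ texts = ["I worry about my family and my job", "Please stay home and protect the NHS"]
+ vectorizer = CountVectorizer(stop_words="english")
+ dtm = vectorizer.fit_transform(preprocess(t) for t in texts)
+
+ lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)  # e.g. 20 for long texts
+ terms = vectorizer.get_feature_names_out()
+ for k, topic in enumerate(lda.components_):
+     print(k, [terms[i] for i in topic.argsort()[-5:]])
+ ```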
98
+
99
+ ### 3.3 Predicting emotions about COVID-19
100
+
101
+ It is worth noting that the current literature on automatic emotion detection mainly casts this problem as a classification task, where words or documents are classified into emotional categories (Buechel and Hahn, 2016; Demszky et al., 2020). Our fine-grained annotations allow for estimating emotional values on a continuous scale. Previous works on emotion regression utilise supervised models such as linear regression for this task (Preotiuc-Pietro et al., 2016), and more recent efforts employ neural network-based methods (Wang et al., 2016; Zhu et al., 2019). However, the latter typically require larger amounts of annotated data, and are hence less applicable to our collected dataset.
102
+
103
+ We, therefore, use linear regression models to predict the reported emotional values (i.e., anxiety, fear, sadness, worry) based on text properties. Specifically, we applied regularised ridge regression models${}^{7}$ using TFIDF and part-of-speech (POS) features extracted from long and short texts separately. TFIDF features were computed based on the 1,000 most frequent words in the vocabulary of each corpus; POS features were extracted using a predefined scheme of 53 POS tags in spaCy${}^{8}$.
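+ A sketch of these two feature sets is given below. The spaCy model name, the use of fine-grained tags via `token.tag_`, and the toy texts are assumptions for illustration; the paper only states that 1,000 TFIDF terms and 53 spaCy POS tags were used.
+
+ ```python
+ import spacy
+ from sklearn.feature_extraction.text import TfidfVectorizer
+
+ nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
+
+ def pos_counts(text):
+     """Count fine-grained POS tags, roughly the tag scheme referred to above."""
+     counts = {}
+     for token in nlp(text):
+         counts[token.tag_] = counts.get(token.tag_, 0) + 1
+     return counts
+
+ texts = ["I am worried about my family.", "Stay at home and protect the NHS."]
+ tfidf = TfidfVectorizer(max_features=1000).fit_transform(texts)
+ pos_features = [pos_counts(t) for t in texts]
+ print(tfidf.shape, pos_features[0])
+ ```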
104
+
105
+ We process the resulting feature representations using principal component analysis and assess performance using the mean absolute error (MAE) and the coefficient of determination $R^{2}$. Each experiment is conducted using five-fold cross-validation, and the arithmetic means of all five folds are reported as the final performance results.
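+ Putting the pieces together, a minimal end-to-end sketch of this evaluation setup (TFIDF features, a PCA step, ridge regression, and five-fold cross-validation scored with MAE and $R^2$) could look as follows. The toy ratings and texts, the number of components, and the ridge penalty are placeholders rather than the authors' exact settings.
+
+ ```python
+ import numpy as np
+ from sklearn.decomposition import PCA
+ from sklearn.feature_extraction.text import TfidfVectorizer
+ from sklearn.linear_model import Ridge
+ from sklearn.model_selection import cross_validate
+ from sklearn.pipeline import make_pipeline
+ from sklearn.preprocessing import FunctionTransformer
+
+ rng = np.random.default_rng(0)
+ worry = rng.uniform(1, 9, size=100)  # placeholder 9-point worry ratings
+ texts = [
+     ("so worried anxious scared " if w > 5 else "calm relaxed fine ") + "about my family and work"
+     for w in worry
+ ]
+
+ model = make_pipeline(
+     TfidfVectorizer(max_features=1000),
+     FunctionTransformer(lambda X: X.toarray(), accept_sparse=True),  # densify for PCA
+     PCA(n_components=5),
+     Ridge(alpha=1.0),
+ )
+ scores = cross_validate(model, texts, worry, cv=5,
+                         scoring=("neg_mean_absolute_error", "r2"))
+ print(round(-scores["test_neg_mean_absolute_error"].mean(), 2),
+       round(scores["test_r2"].mean(), 2))
+ ```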
106
+
107
+ Table 4 shows the performance results in both long and short texts. We observe MAEs ranging between 1.26 (worry with TFIDF) and 1.88 (sadness with POS) for the long texts, and between 1.37 (worry with POS) and 1.91 (sadness with POS) for the short texts. We furthermore observe that the models perform best in predicting the worry scores for both long and short texts. The models explain up to ${16}\%$ of the variance for the emotional response variables on the long texts, but only up to
108
+
109
+ ---
110
+
111
+ ${}^{7}$ We used the scikit-learn python library (Pedregosa et al., 2011).
112
+
113
+ 8 https://spacy.io
114
+
115
+ ---
116
+
117
+ <table><tr><td>Correlates</td><td>Long texts</td><td>Short texts</td></tr><tr><td colspan="3">Affective processes</td></tr><tr><td>Anger - LIWC "anger"</td><td>0.28 [0.23; 0.32] (7.56%)</td><td>0.09 [0.04; 0.15] (0.88%)</td></tr><tr><td>Sadness - LIWC "sad"</td><td>0.21 [0.16; 0.26] (4.35%)</td><td>0.13 [0.07; 0.18] (1.58%)</td></tr><tr><td>Anxiety - LIWC "anx"</td><td>0.33 [0.28; 0.37] (10.63%)</td><td>0.18 [0.13; 0.23] (3.38%)</td></tr><tr><td>Worry - LIWC "anx"</td><td>0.30 [0.26; 0.35] (9.27%)</td><td>0.18 [0.13; 0.23] (3.30%)</td></tr><tr><td>Happiness - LIWC "posemo"</td><td>0.22 [0.17; 0.26] (4.64%)</td><td>0.13 [0.07; 0.18] (1.56%)</td></tr><tr><td colspan="3"/></tr><tr><td colspan="3">Concern sub-categories</td></tr><tr><td>Worry - LIWC "work"</td><td>-0.03 [-0.08; 0.02] (0.01%)</td><td>-0.03 [-0.08; 0.02] (0.10%)</td></tr><tr><td>Worry - LIWC "money"</td><td>0.00 [-0.05; 0.05] (0.00%)</td><td>-0.01 [-0.06; 0.04] (0.00%)</td></tr><tr><td>Worry - LIWC "death"</td><td>0.05 [-0.01; 0.10] (0.26%)</td><td>0.05 [0.00; 0.10] (0.29%)</td></tr><tr><td>Worry - LIWC "family"</td><td>0.18 [0.13; 0.23] (3.12%)</td><td>0.06 [0.01; 0.11] (0.40%)</td></tr><tr><td>Worry - LIWC "friend"</td><td>0.07 [0.01; 0.12] (0.42%)</td><td>-0.01 [-0.06; 0.05] (0.00%)</td></tr></table>
118
+
119
+ Table 2: Correlations (Pearson's $r$, 99% CI, $R^{2}$ in %) between LIWC variables and emotions.
120
+
121
+ <table><tr><td>Docs</td><td>Terms</td></tr><tr><td colspan="2">Long texts</td></tr><tr><td>9.52</td><td>people, take, think, rule, stay, serious, follow, virus, mani, will</td></tr><tr><td>8.35</td><td>will, worri, job, long, also, economy, concern, impact, famili, situat</td></tr><tr><td>7.59</td><td>feel, time, situat, relax, quit, moment, sad, thing, like, also</td></tr><tr><td>6.87</td><td>feel, will, anxious, know, also, famili, worri, friend, like, sad</td></tr><tr><td>5.69</td><td>work, home, worri, famili, friend, abl, time, miss, school, children</td></tr><tr><td colspan="2">Short texts</td></tr><tr><td>10.70</td><td>stay, home, safe, live, pleas, insid, save, protect, nhs, everyone</td></tr><tr><td>8.27</td><td>people, need, rule, dont, stop, selfish, social, die, distance, spread</td></tr><tr><td>7.96</td><td>get, can, just, back, wish, normal, listen, lockdown, follow, sooner</td></tr><tr><td>7.34</td><td>famili, anxious, worri, scare, friend, see, want, miss, concern, covid</td></tr><tr><td>6.81</td><td>feel, situat, current, anxious, frustrat, help, also, away, may, extrem</td></tr></table>
122
+
123
+ Table 3: The five most prevalent topics for long and short texts.
124
+
125
+ $1\%$ on Tweet-sized texts.
126
+
127
+ <table><tr><td rowspan="2">Model</td><td colspan="2">Long</td><td colspan="2">Short</td></tr><tr><td>MAE</td><td>${R}^{2}$</td><td>MAE</td><td>${R}^{2}$</td></tr><tr><td>Anxiety - TFIDF</td><td>1.65</td><td>0.16</td><td>1.82</td><td>-0.01</td></tr><tr><td>Anxiety - POS</td><td>1.79</td><td>0.04</td><td>1.84</td><td>0.00</td></tr><tr><td>Fear - TFIDF</td><td>1.71</td><td>0.15</td><td>1.85</td><td>0.00</td></tr><tr><td>Fear - POS</td><td>1.83</td><td>0.05</td><td>1.87</td><td>0.01</td></tr><tr><td>Sadness - TFIDF</td><td>1.75</td><td>0.12</td><td>1.90</td><td>-0.02</td></tr><tr><td>Sadness - POS</td><td>1.88</td><td>0.02</td><td>1.91</td><td>-0.01</td></tr><tr><td>Worry - TFIDF</td><td>1.26</td><td>0.16</td><td>1.38</td><td>-0.03</td></tr><tr><td>Worry - POS</td><td>1.35</td><td>0.03</td><td>1.37</td><td>0.01</td></tr></table>
128
+
129
+ Table 4: Results for regression modeling for long and short texts.
130
+
131
+ ## 4 Discussion
132
+
133
+ This paper introduced a ground truth dataset of emotional responses in the UK to the Corona pandemic. We reported initial findings on the linguistic correlates of emotional states, used topic modeling to understand what people in the UK are concerned about, and ran prediction experiments to infer emotional states from text using machine learning. These analyses provided several core findings: (1) some emotional states correlated with word lists made to measure these constructs, (2) longer texts were more useful than shorter texts for identifying patterns in language that relate to emotions, (3) Tweet-sized texts served as a means to call for solidarity during lockdown measures, while longer texts gave insights into people's worries, and (4) preliminary regression experiments indicate that we can infer emotional responses from the texts with an absolute error of 1.26 on a 9-point scale (14%).
134
+
135
+ ### 4.1 Linguistic correlates of emotions and worries
136
+
137
+ Emotional reactions to the Coronavirus were obtained through self-reported scores. When we used psycholinguistic word lists that measure these emotions, we found weak positive correlations. The lexicon approach was best at measuring anger, anxiety, and worry, and did so better for longer texts than for Tweet-sized texts. That difference is not surprising given that the LIWC was not constructed for micro-blogging and very short documents. In behavioral and cognitive research, small effects (here: a maximum of 10.63% of explained variance) are the rule rather than the exception (Gelman, 2017; Yarkoni and Westfall, 2017). It is essential, however, to interpret them as such: if 10% of the variance in the anxiety score is explained through a linguistic measurement, 90% is not. An explanation for the imperfect correlations - aside from random measurement error - might lie in the inadequate expression of someone's felt emotion in the form of written text. The latter is partly corroborated by the even smaller effects for shorter texts, which may have been too short to allow for the expression of one's emotions.
138
+
139
+ It is also important to look at the overlap in emotions. A correlational follow-up analysis (see online supplement) among the self-reported emotions showed high correlations of worry with fear ($r = 0.70$) and anxiety ($r = 0.66$), suggesting that these are not clearly separate constructs in our dataset. Other high correlations were evident between anger and disgust ($r = 0.67$), fear and anxiety ($r = 0.78$), and happiness and relaxation ($r = 0.68$). Although the chosen emotions (with our addition of "worry") were adopted from previous work (Harmon-Jones et al., 2016), it merits attention in future work to disentangle the emotions and assess, for example, common n-grams per cluster of emotions (e.g., as in Demszky et al., 2020).
140
+
141
+ ### 4.2 Topics of people's worries
142
+
143
+ Prevalent topics in our corpus showed that people worry about their jobs and the economy, as well as their friends and family - the latter of which is also corroborated by the LIWC analysis. For example, people discussed the potential impact of the situation on their family, as well as their children missing school. Participants also discussed the lockdown and social distancing measures. In the Tweet-sized texts, in particular, people encouraged others to stay at home and adhere to lockdown rules in order to slow the spread of the virus, save lives and/or protect the NHS. Thus, people used the shorter texts as a means to call for solidarity, while longer texts offered insights into their actual worries (for recent work on gender differences, see van der Vegt and Kleinberg, 2020).
144
+
145
+ While there are various ways to select the ideal number of topics, we have relied on assessing the semantic coherence of topics and exclusivity of topic words. Since there does not seem to be a consensus on the best practice for selecting topic numbers, we encourage others to examine different approaches or models with varying numbers of topics.
146
+
147
+ ### 4.3 Predicting emotional responses
148
+
149
+ Prediction experiments revealed that ridge regression models can be used to approximate emotional responses to COVID-19 based on textual features extracted from the participants' statements. Similar to the correlational and topic modeling findings, there is a stark difference between the long and short texts: the regression models are more accurate and explain more variance for longer than for shorter texts. Additional experiments are required to further investigate the expressiveness of the collected textual statements for the prediction of emotional values. The best predictions were obtained for the reported worry score (MAE = 1.26, MAPE = 14.00%). An explanation for why worry was the easiest to predict could be that it was the highest reported emotion overall with the lowest standard deviation, thus potentially biasing the model. More fine-grained prediction analyses, out of the scope of this initial paper, could examine this further.
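+ For clarity, the MAPE figure quoted above is the mean absolute percentage error of predicted relative to actual ratings, as in the short worked example below (the numbers are placeholders, not values from the dataset).
+
+ ```python
+ import numpy as np
+
+ actual = np.array([7.0, 6.0, 8.0, 5.0])     # placeholder worry ratings
+ predicted = np.array([6.0, 6.8, 7.1, 5.9])  # placeholder model predictions
+ mape = np.mean(np.abs(actual - predicted) / actual) * 100
+ print(f"MAPE = {mape:.2f}%")
+ ```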
150
+
151
+ ### 4.4 Suggestions for future research
152
+
153
+ The current analysis leaves several research questions untouched. First, to mitigate the limitations of lexicon-approaches, future work on inferring emotions around COVID-19 could expand on the prediction approach (e.g., using different feature sets and models). Carefully validated models could help to provide the basis for large scale, real-time measurements of emotional responses. Of particular importance is a solution to the problem hinted at in the current paper: the shorter, Tweet-sized texts contained much less information, had a different function, and were less suitable for predictive modeling. However, it must be noted that the experimental setup of this study did not fully mimic a 'natural' Twitter experience. Whether the results are generalisable to actual Twitter data is an important empirical question for follow-up work. Nevertheless, with much of today's stream of text data coming in the form of (very) short messages, it is important to understand the limitations of using that kind of data and worthwhile examining how we can better make inferences from that information.
154
+
155
+ Second, with a lot of research attention paid to readily available Twitter data, we hope that future studies also focus on non-Twitter data to capture emotional responses of those who are underrepresented (or non-represented) on social media but are at heightened risk.
156
+
157
+ Third, future research may focus on manually annotating topics to more precisely map out what people worry about with regards to COVID-19. Several raters could assess frequent terms for each topic, then assign a label. Then through discussion or majority votes, final topic labels can be assigned to obtain a model of COVID-19 real-world worries.
158
+
159
+ Fourth, future efforts may aim for sampling over a longer period to capture how emotional responses develop over time. Ideally, using high-frequency sampling (e.g., daily for several months), future work could account for the large number of events that may affect emotions.
160
+
161
+ Lastly, it is worthwhile to utilise other approaches to measuring psychological constructs in text. Although the rate of out-of-vocabulary terms for the LIWC in our data was low, other dictionaries may be able to capture other relevant constructs. For instance, the tool Empath (Fast et al., 2016) could help measure emotions not available in the LIWC (e.g., nervousness and optimism). We hope that future work will use the current dataset (and extensions thereof) to go further so we can better understand emotional responses in the real world.
162
+
163
+ ## 5 Conclusions
164
+
165
+ This paper introduced the first ground truth dataset of emotional responses to COVID-19 in text form. Our findings highlight the potential of inferring concerns and worries from text data but also show some of the pitfalls, in particular, when using concise texts as data. We encourage the research community to use the dataset so we can better understand the impact of the pandemic on people's lives.
166
+
167
+ ## Acknowledgments
168
+
169
+ This research was supported by the Dawes Centre for Future Crime at UCL.
170
+
171
+ ## References
172
+
173
+ Muhammad Abdul-Mageed and Lyle Ungar. 2017. EmoNet: Fine-grained emotion detection with gated recurrent neural networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 718-728, Vancouver, Canada. Association for Computational Linguistics.
174
+
175
+ Juan M. Banda, Ramya Tekumalla, Guanyu Wang, Jingyuan Yu, Tuo Liu, Yuning Ding, and Gerardo Chowell. 2020. A Twitter Dataset of 150+ million tweets related to COVID-19 for open research. Dataset.
176
+
177
+ Margaret M. Bradley, Peter J. Lang, Margaret M. Bradley, and Peter J. Lang. 1999. Affective norms for english words (anew): Instruction manual and affective ratings.
178
+
179
+ Sven Buechel and Udo Hahn. 2016. Emotion analysis as a regression problem - dimensional models and their implications on emotion representation and metrical evaluation. In Proceedings of the Twenty-Second European Conference on Artificial Intelligence, ECAI'16, page 1114-1122, NLD. IOS Press.
180
+
181
+ Emily Chen, Kristina Lerman, and Emilio Ferrara. 2020. #COVID-19: The First Public Coronavirus Twitter Dataset. Original-date: 2020-03-15T17:32:03Z.
182
+
183
+ Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. GoEmotions: A Dataset of Fine-Grained Emotions. arXiv:2005.00547 [cs].
184
+
185
+ Ethan Fast, Binbin Chen, and Michael S. Bernstein. 2016. Empath: Understanding Topic Signals in Large-Scale Text. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pages 4647-4657, San Jose California USA. ACM.
186
+
187
+ Andrew Gelman. 2017. The piranha problem in social psychology / behavioral economics: The "take a pill" model of science eats itself - Statistical Modeling, Causal Inference, and Social Science.
188
+
189
+ The Guardian. 2020. Coronavirus latest: 5 April at a glance. The Guardian.
190
+
191
+ Cindy Harmon-Jones, Brock Bastian, and Eddie Harmon-Jones. 2016. The Discrete Emotions Questionnaire: A New Tool for Measuring State Self-Reported Emotions. PLOS ONE, 11(8):e0159915.
194
+
195
+ Bing Liu. 2015. Sentiment analysis: mining opinions, sentiments, and emotions. Cambridge University Press, New York, NY.
196
+
197
+ Kate Lyons. 2020. Coronavirus latest: at a glance. The Guardian.
198
+
199
+ David Mimno, Hanna Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing Semantic Coherence in Topic Models. page 11.
200
+
201
+ Saif Mohammad and Peter Turney. 2010. Emotions Evoked by Common Words and Phrases: Using Mechanical Turk to Create an Emotion Lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 26-34, Los Angeles, CA. Association for Computational Linguistics.
202
+
203
+ Saif M. Mohammad and Svetlana Kiritchenko. 2015. Using Hashtags to Capture Fine Emotion Categories from Tweets. Computational Intelligence, 31(2):301-326.
204
+
205
+ ITV News. 2020. Police can issue 'unlimited fines' to those flouting coronavirus social distancing rules, says Health Secretary. www.itv.com.
206
+
207
+ F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.
208
+
209
+ James W. Pennebaker, Ryan L. Boyd, Kayla Jordan, and Kate Blackburn. 2015. The development and psychometric properties of LIWC2015. Technical report.
210
+
211
+ Daniel Preotiuc-Pietro, H. Andrew Schwartz, Gregory Park, Johannes Eichstaedt, Margaret Kern, Lyle Ungar, and Elisabeth Shulman. 2016. Modelling valence and arousal in Facebook posts. In Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 9-15, San Diego, California. Association for Computational Linguistics.
212
+
213
+ Margaret E Roberts, Brandon M Stewart, and Dustin Tingley. 2014a. stm: R Package for Structural Topic Models. Journal of Statistical Software, page 41.
214
+
215
+ Margaret E. Roberts, Brandon M. Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis, Shana Kushner Gadarian, Bethany Albertson, and David G. Rand. 2014b. Structural Topic Models for Open-Ended Survey Responses. American Journal of Political Science, 58(4):1064-1082.
218
+
219
+ Armin Seyeditabari, Narges Tabari, and Wlodek Zadrozny. 2018. Emotion Detection in Text: a Review. arXiv:1806.00674 [cs]. ArXiv: 1806.00674.
220
+
221
+ Carlo Strapparava and Rada Mihalcea. 2007. SemEval-2007 task 14: Affective text. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 70-74, Prague, Czech Republic. Association for Computational Linguistics.
222
+
223
+ Carlo Strapparava and Alessandro Valitutti. 2004. WordNet Affect: an affective extension of WordNet. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04), Lisbon, Portugal. European Language Resources Association (ELRA).
224
+
225
+ Isabelle van der Vegt and Bennett Kleinberg. 2020. Women worry about family, men about the economy: Gender differences in emotional responses to COVID-19. arXiv:2004.08202 [cs]. ArXiv: 2004.08202.
226
+
227
+ Amy Walker, Matthew Weaver, Steven Morris, Jamie Grierson, Mark Brown, and Pete Pattisson. 2020. UK coronavirus live: Boris Johnson remains in hospital 'for observation' after 'comfortable night'. The Guardian.
228
+
229
+ Jin Wang, Liang-Chih Yu, K. Robert Lai, and Xuejie Zhang. 2016. Dimensional sentiment analysis using a regional CNN-LSTM model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 225-230, Berlin, Germany. Association for Computational Linguistics.
230
+
231
+ Tal Yarkoni and Jacob Westfall. 2017. Choosing Prediction Over Explanation in Psychology: Lessons From Machine Learning. Perspectives on Psychological Science, 12(6):1100-1122.
232
+
233
+ Suyang Zhu, Shoushan Li, and Guodong Zhou. 2019. Adversarial attention modeling for multidimensional emotion regression. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 471-480, Florence, Italy. Association for Computational Linguistics.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/JQCYcdHfXyJ/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,332 @@
1
+ § MEASURING EMOTIONS IN THE COVID-19 REAL WORLD WORRY DATASET
2
+
3
+ Bennett Kleinberg ${}^{1,2}$ Isabelle van der ${\mathbf{{Vegt}}}^{1}$ Maximilian Mozes ${}^{1,2,3}$
4
+
5
+ ${}^{1}$ Department of Security and Crime Science ${}^{2}$ Dawes Centre for Future Crime ${}^{3}$ Department of Computer Science
6
+
7
+ University College London
8
+
9
+ {bennett.kleinberg, isabelle.vandervegt, maximilian.mozes}@ucl.ac.uk
10
+
11
+ § ABSTRACT
12
+
13
+ The COVID-19 pandemic is having a dramatic impact on societies and economies around the world. With various measures of lockdowns and social distancing in place, it becomes important to understand emotional responses on a large scale. In this paper, we present the first ground truth dataset of emotional responses to COVID-19. We asked participants to indicate their emotions and express these in text. This resulted in the Real World Worry Dataset of 5,000 texts (2,500 short + 2,500 long texts). Our analyses suggest that emotional responses correlated with linguistic measures. Topic modeling further revealed that people in the UK worry about their family and the economic situation. Tweet-sized texts functioned as a call for solidarity, while longer texts shed light on worries and concerns. Using predictive modeling approaches, we were able to approximate the emotional responses of participants from text within 14% of their actual value. We encourage others to use the dataset and improve how we can use automated methods to learn about emotional responses and worries about an urgent problem.
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ The outbreak of the SARS-CoV-2 virus in late 2019 and subsequent evolution of the COVID-19 disease has affected the world on an enormous scale. While hospitals are at the forefront of trying to mitigate the life-threatening consequences of the disease, practically all societal levels are dealing directly or indirectly with an unprecedented situation. Most countries are - at the time of writing this paper - in various stages of a lockdown. Schools and universities are closed or operate online-only, and merely essential shops are kept open.
18
+
19
+ At the same time, lockdown measures such as social distancing (e.g., keeping a distance of at least 1.5 meters from one another and only socializing with two people at most) might have a direct impact on people's mental health. With an uncertain outlook on the development of the COVID-19 situation and its preventative measures, it is of vital importance to understand how governments, NGOs, and social organizations can help those who are most affected by the situation. That implies, at the first stage, understanding the emotions, worries, and concerns that people have and possible coping strategies. Since a majority of online communication is recorded in the form of text data, measuring the emotions around COVID-19 will be a central part of understanding and addressing the impacts of the COVID-19 situation on people. This is where computational linguistics can play a crucial role.
20
+
21
+ In this paper, we present and make publicly available a high quality, ground truth text dataset of emotional responses to COVID-19. We report initial findings on linguistic correlates of emotions, topic models, and prediction experiments.
22
+
23
+ § 1.1 GROUND TRUTH EMOTIONS DATASETS
24
+
25
+ Tasks like emotion detection (Seyeditabari et al., 2018) and sentiment analysis (Liu, 2015) typically rely on labeled data in one of two forms. Either a corpus is annotated on a document-level, where individual documents are judged according to a predefined set of emotions (Strapparava and Mi-halcea, 2007; Preotiuc-Pietro et al., 2016) or individual $n$ -grams sourced from a dictionary are categorised or scored with respect to their emotional value (Bradley et al., 1999; Strapparava and Valitutti, 2004). These annotations are done (semi) automatically (e.g., exploiting hashtags such as #happy) (Mohammad and Kiritchenko, 2015; Abdul-Mageed and Ungar, 2017) or manually through third persons (Mohammad and Turney, 2010). While these approaches are common practice and have accelerated the progress that was made in the field, they are limited in that they propagate a pseudo ground truth. This is problematic because, as we argue, the core aim of emotion detection is to make an inference about the author's emotional state. The text as the product of an emotional state then functions as a proxy for the latter. For example, rather than wanting to know whether a Tweet is written in a pessimistic tone, we are interested in learning whether the author of the text actually felt pessimistic.
26
+
27
+ The limitation inherent to third-person annotation, then, is that they might not be adequate measurements of the emotional state of interest. The solution, albeit a costly one, lies in ground truth datasets. Whereas real ground truth would require - in its strictest sense - a random assignment of people to experimental conditions (e.g., one group that is given a positive product experience, and another group with a negative experience), variations that rely on self-reported emotions can also mitigate the problem. A dataset that relies on self-reports is the International Survey on Emotion Antecedents and Reactions (ISEAR) ${}^{1}$ , which asked participants to recall from memory situations that evoked a set of emotions. The COVID-19 situation is unique and calls for novel datasets that capture people's affective responses to it while it is happening.
28
+
29
+ § 1.2 CURRENT COVID-19 DATASETS
30
+
31
+ Several datasets mapping how the public responds to the pandemic have been made available. For example, tweets relating to the Coronavirus have been collected since March 11, 2020, yielding about 4.4 million tweets a day (Banda et al., 2020). Tweets were collected through the Twitter stream API, using keywords such as 'coronavirus' and 'COVID- 19'. Another Twitter dataset of Coronavirus tweets has been collected since January 22, 2020, in several languages, including English, Spanish, and Indonesian (Chen et al., 2020). Further efforts include the ongoing Pandemic Project ${}^{2}$ which has people write about the effect of the coronavirus outbreak on their everyday lives.
32
+
33
+ § 1.3 THE COVID-19 REAL WORLD WORRY DATASET
34
+
35
+ This paper reports initial findings for the Real World Worry Dataset (RWWD), which captured the emotional responses of UK residents to COVID-19 at a point in time when the impact of the COVID-19 situation affected the lives of all individuals in the UK. The data were collected on the 6th and 7th of April 2020, a time at which the UK was under "lockdown" (ITV News, 2020) and death tolls were increasing. On April 6, 5,373 people in the UK had died of the virus, and 51,608 had tested positive (Walker et al., 2020). On the day before data collection, the Queen addressed the nation via a television broadcast (The Guardian, 2020). Furthermore, it was also announced that Prime Minister Boris Johnson had been admitted to intensive care for COVID-19 symptoms (Lyons, 2020).
36
+
37
+ The RWWD is a ground truth dataset that used a direct survey method and obtained written accounts of people alongside data of their felt emotions while writing. As such, the dataset does not rely on third-person annotation but can resort to direct self-reported emotions. We present two versions of RWWD, each consisting of 2,500 English texts representing the participants' genuine emotional responses to Corona situation in the UK: the Long RWWD consists of texts that were open-ended in length and asked the participants to express their feelings as they wish. The Short RWWD asked the same people also to express their feelings in Tweet-sized texts. The latter was chosen to facilitate the use of this dataset for Twitter data research.
38
+
39
+ The dataset is publicly available. ${}^{3}$ .
40
+
41
+ § 2 DATA
42
+
43
+ We collected the data of $n = {2500}$ participants (94.46% native English speakers) via the crowd-sourcing platform Prolific ${}^{4}$ . Every participant provided consent in line with the local IRB. The sample requirements were that the participants were resident in the UK and a Twitter user. In the data collection task, all participants were asked to indicate how they felt about the current COVID-19 situation using 9 -point scales $(1 =$ not at all, $5 =$ moderately, $9 =$ very much). Specifically, each participant rated how worried they were about the Corona/COVID-19 situation and how much anger, anxiety, desire, disgust, fear, happiness, relaxation, and sadness (Harmon-Jones et al., 2016) they felt about their situation at this moment. They also had to choose which of the eight emotions (except worry) best represented their feeling at this moment.
44
+
45
+ ${}^{1}$ https://www.unige.ch/cisa/research/materials-and-online-research/research-material/
46
+
47
+ ${}^{2}$ https://utpsyc.org/covid19/index.html
48
+
49
+ ${}^{3}$ Data: https://github.com/ben-aaron188/ covid19worry and https://osf.io/awy7r/
50
+
51
+ 4https://www.prolific.co/
52
+
53
+ All participants were then asked to write two texts. First, we instructed them to "write in a few sentences how you feel about the Corona situation at this very moment. This text should express your feelings at this moment" (min. 500 characters). The second part asked them to express their feelings in Tweet form (max. 240 characters) with otherwise identical instructions. Finally, the participants indicated on a 9-point scale how well they felt they could express their feelings (in general/in the long text/in the Tweet-length text) and how often they used Twitter (from 1=never,5=every month, 9=every day) and whether English was their native language. The overall corpus size of the dataset was 2500 long texts (320,372 tokens) and 2500 short texts (69,171 tokens). In long and short texts, only 6 and 17 emoticons (e.g. ":(", "<3") were found, respectively. Because of the low frequency of emoticons, these were not focused on in our analysis.
54
+
55
+ § 2.1 EXCERPTS
56
+
57
+ Below are two excerpts from the dataset:
58
+
59
+ Long text: I am 6 months pregnant, so I feel worried about the impact that getting the virus would have on me and the baby. My husband also has asthma so that is a concern too. I am worried about the impact that the lockdown will have on my ability to access the healthcare I will need when having the baby, and also about the exposure to the virus [...] There is just so much uncertainty about the future and what the coming weeks and months will hold for me and the people I care about.
60
+
61
+ Tweet-sized text: Proud of our NHS and keyworkers who are working on the frontline at the moment. I'm optimistic about the future, IF EVERYONE FOLLOWS THE RULES. We need to unite as a country, by social distancing and stay in.
62
+
63
+ § 2.2 DESCRIPTIVE STATISTICS
64
+
65
+ We excluded nine participants who padded the long text with punctuation or letter repetitions. The dominant feelings of participants were anxiety/worry, sadness, and fear (see Table 1) ${}^{5}$ . For all emotions, the participants' self-rating ranged across the whole spectrum (from "not at all" to "very much"). The final sample consisted to ${65.15}\%$ of females ${}^{6}$ with an overall mean age of 33.84 years $\left( {{SD} = {22.04}}\right)$ .
66
+
67
+ The participants' self-reported ability to express their feelings, in general, was $M = {6.88}({SD} =$ 1.69). When specified for both types of texts separately, we find that the ability to express themselves in the long text $\left( {M = {7.12},{SD} = {1.78}}\right)$ was higher than that for short texts $(M = {5.91}$ , ${SD} = {2.12}$ ), Bayes factor $> {1e} + {96}$ .
68
+
69
+ The participants reported to use Twitter almost weekly $\left( {M = {6.26},{SD} = {2.80}}\right)$ , tweeted themselves rarely to once per month $(M = {3.67},{SD} =$ 2.52), and actively participated in conversations in a similar frequency $\left( {M = {3.41},{SD} = {2.40}}\right)$ . Our participants were thus familiar with Twitter as a platform but not overly active in tweeting themselves.
70
+
71
+ \begin{tabular}{lrr}
+ \hline
+ Variable & Mean & SD \\
+ \hline
+ \multicolumn{3}{c}{Corpus descriptives} \\
+ \hline
+ Tokens (long text) & 127.75 & 39.67 \\
+ Tokens (short text) & 27.70 & 15.98 \\
+ Types (long text) & 82.69 & 18.24 \\
+ Types (short text) & 23.50 & 12.21 \\
+ TTR (long text) & 0.66 & 0.06 \\
+ TTR (short text) & 0.88 & 0.09 \\
+ Chars. (long text) & 632.54 & 197.75 \\
+ Chars. (short text) & 137.21 & 78.40 \\
+ \hline
+ \multicolumn{3}{c}{Emotions} \\
+ \hline
+ Worry & $6.55^{a}$ & 1.76 \\
+ Anger$^{1}$ (4.33\%) & $3.91^{b}$ & 2.24 \\
+ Anxiety (55.36\%) & $6.49^{a}$ & 2.28 \\
+ Desire (1.09\%) & $2.97^{b}$ & 2.04 \\
+ Disgust (0.69\%) & $3.23^{b}$ & 2.13 \\
+ Fear (9.22\%) & $5.67^{a}$ & 2.27 \\
+ Happiness (1.58\%) & $3.62^{b}$ & 1.89 \\
+ Relaxation (13.38\%) & $3.95^{b}$ & 2.13 \\
+ Sadness (14.36\%) & $5.59^{a}$ & 2.31 \\
+ \hline
+ \end{tabular}
133
+
134
+ Table 1: Descriptive statistics of text data and emotion ratings. ${}^{1}$ brackets indicate how often the emotion was chosen as the best fit for the current feeling about COVID-19. ${}^{a}$ the value is larger than the neutral midpoint with Bayes factors $> {1e} + {32}.\;{}^{b}$ the value is smaller than the neutral midpoint with $\mathrm{{BF}} > {1e} + {115}$ . TTR = type-token ratio.
135
+
136
+ ${}^{5}$ For correlations among the emotions, see the online supplement
137
+
138
+ ${}^{6}$ For an analysis of gender differences using this dataset, see van der Vegt and Kleinberg (2020).
139
+
140
+ § 3 FINDINGS AND EXPERIMENTS
141
+
142
+ § 3.1 CORRELATIONS OF EMOTIONS WITH LIWC CATEGORIES
143
+
144
+ We correlated the self-reported emotions to matching categories of the LIWC2015 lexicon (Pennebaker et al., 2015). The overall matching rate was high (92.36% and 90.11% for short and long texts, respectively). Across all correlations, we see that the extent to which the linguistic variables explain variance in the emotion values (indicated by the ${R}^{2}$ ) is larger in long texts than in Tweet-sized short texts (see Table 2). There are significant positive correlations for all affective LIWC variables with their corresponding self-reported emotions (i.e., higher LIWC scores accompanied higher emotion scores, and vice versa). These correlations imply that the linguistic variables explain up to ${10}\%$ and $3\%$ of the variance in the emotion scores for long and short texts, respectively.
145
+
146
+ The LIWC also contains categories intended to capture areas that concern people (not necessarily in a negative sense), which we correlated to the self-reported worry score. Positive (negative) correlations would suggest that the higher (lower) the worry score of the participants, the larger their score on the respective LIWC category. We found no correlation between the categories "work", "money" and "death" suggesting that the worry people reported was not associated with these categories. Significant positive correlations emerged for long texts for "family" and "friend": the more people were worried, the more they spoke about family and - to a lesser degree — friends.
147
+
148
+ § 3.2 TOPIC MODELS OF PEOPLE'S WORRIES
149
+
150
+ We constructed topic models for both the long and short texts separately using the stm package in $\mathrm{R}$ (Roberts et al., 2014a). The text data were lower-cased, punctuation, stopwords and numbers were removed, and all words were stemmed. For the long texts, we chose a topic model with 20 topics as determined by semantic coherence and exclusivity values for the model (Mimno et al., 2011; Roberts et al., 2014b, a). Table 3 shows the five most prevalent topics with ten associated frequent terms for each topic (see online supplement for all 20 topics). The most prevalent topic seems to relate to following the rules related to the lockdown. In contrast, the second most prevalent topic appears to relate to worries about employment and the economy. For the Tweet-sized texts, we selected a model with 15 topics. The most common topic bears a resemblance to the government slogan "Stay at home, protect the NHS, save lives." The second most prevalent topic seems to relate to calls for others to adhere to social distancing rules.
151
+
152
+ § 3.3 PREDICTING EMOTIONS ABOUT COVID-19
153
+
154
+ It is worth noting that the current literature on automatic emotion detection mainly casts this problem as a classification task, where words or documents are classified into emotional categories (Buechel and Hahn, 2016; Demszky et al., 2020). Our fine-grained annotations allow for estimating emotional values on a continuous scale. Previous works on emotion regression utilise supervised models such as linear regression for this task (Preotiuc-Pietro et al., 2016), and more recent efforts employ neural network-based methods (Wang et al., 2016; Zhu et al., 2019). However, the latter typically require larger amounts of annotated data, and are hence less applicable to our collected dataset.
155
+
156
+ We, therefore, use linear regression models to predict the reported emotional values (i.e., anxiety, fear, sadness, worry) based on text properties. Specifically, we applied regularised ridge regression models${}^{7}$ using TFIDF and part-of-speech (POS) features extracted from long and short texts separately. TFIDF features were computed based on the 1,000 most frequent words in the vocabulary of each corpus; POS features were extracted using a predefined scheme of 53 POS tags in spaCy${}^{8}$.
157
+
158
+ We process the resulting feature representations using principal component analysis and assess the performances using the mean absolute error (MAE) and the coefficient of determination ${R}^{2}$ . Each experiment is conducted using five-fold cross-validation, and the arithmetic means of all five folds are reported as the final performance results.
159
+
160
+ Table 4 shows the performance results in both long and short texts. We observe MAEs ranging between 1.26 (worry with TFIDF) and 1.88 (sadness with POS) for the long texts, and between 1.37 (worry with POS) and 1.91 (sadness with POS) for the short texts. We furthermore observe that the models perform best in predicting the worry scores for both long and short texts. The models explain up to ${16}\%$ of the variance for the emotional response variables on the long texts, but only up to
161
+
162
+ ${}^{7}$ We used the scikit-learn python library (Pedregosa et al., 2011).
163
+
164
+ 8 https://spacy.io
165
+
166
+ \begin{tabular}{lll}
+ \hline
+ Correlates & Long texts & Short texts \\
+ \hline
+ \multicolumn{3}{c}{Affective processes} \\
+ \hline
+ Anger - LIWC ``anger'' & 0.28 [0.23; 0.32] (7.56\%) & 0.09 [0.04; 0.15] (0.88\%) \\
+ Sadness - LIWC ``sad'' & 0.21 [0.16; 0.26] (4.35\%) & 0.13 [0.07; 0.18] (1.58\%) \\
+ Anxiety - LIWC ``anx'' & 0.33 [0.28; 0.37] (10.63\%) & 0.18 [0.13; 0.23] (3.38\%) \\
+ Worry - LIWC ``anx'' & 0.30 [0.26; 0.35] (9.27\%) & 0.18 [0.13; 0.23] (3.30\%) \\
+ Happiness - LIWC ``posemo'' & 0.22 [0.17; 0.26] (4.64\%) & 0.13 [0.07; 0.18] (1.56\%) \\
+ \hline
+ \multicolumn{3}{c}{Concern sub-categories} \\
+ \hline
+ Worry - LIWC ``work'' & -0.03 [-0.08; 0.02] (0.01\%) & -0.03 [-0.08; 0.02] (0.10\%) \\
+ Worry - LIWC ``money'' & 0.00 [-0.05; 0.05] (0.00\%) & -0.01 [-0.06; 0.04] (0.00\%) \\
+ Worry - LIWC ``death'' & 0.05 [-0.01; 0.10] (0.26\%) & 0.05 [0.00; 0.10] (0.29\%) \\
+ Worry - LIWC ``family'' & 0.18 [0.13; 0.23] (3.12\%) & 0.06 [0.01; 0.11] (0.40\%) \\
+ Worry - LIWC ``friend'' & 0.07 [0.01; 0.12] (0.42\%) & -0.01 [-0.06; 0.05] (0.00\%) \\
+ \hline
+ \end{tabular}
210
+
211
+ Table 2: Correlations (Pearson’s $r,{99}\%$ CI, $R$ -squared in %) between LIWC variables and emotions.
212
+
213
+ \begin{tabular}{ll}
+ \hline
+ Docs & Terms \\
+ \hline
+ \multicolumn{2}{c}{Long texts} \\
+ \hline
+ 9.52 & people, take, think, rule, stay, serious, follow, virus, mani, will \\
+ 8.35 & will, worri, job, long, also, economy, concern, impact, famili, situat \\
+ 7.59 & feel, time, situat, relax, quit, moment, sad, thing, like, also \\
+ 6.87 & feel, will, anxious, know, also, famili, worri, friend, like, sad \\
+ 5.69 & work, home, worri, famili, friend, abl, time, miss, school, children \\
+ \hline
+ \multicolumn{2}{c}{Short texts} \\
+ \hline
+ 10.70 & stay, home, safe, live, pleas, insid, save, protect, nhs, everyone \\
+ 8.27 & people, need, rule, dont, stop, selfish, social, die, distance, spread \\
+ 7.96 & get, can, just, back, wish, normal, listen, lockdown, follow, sooner \\
+ 7.34 & famili, anxious, worri, scare, friend, see, want, miss, concern, covid \\
+ 6.81 & feel, situat, current, anxious, frustrat, help, also, away, may, extrem \\
+ \hline
+ \end{tabular}
254
+
255
+ Table 3: The five most prevalent topics for long and short texts.
256
+
257
+ $1\%$ on Tweet-sized texts.
258
+
259
+ \begin{tabular}{lrrrr}
+ \hline
+ Model & \multicolumn{2}{c}{Long} & \multicolumn{2}{c}{Short} \\
+  & MAE & $R^{2}$ & MAE & $R^{2}$ \\
+ \hline
+ Anxiety - TFIDF & 1.65 & 0.16 & 1.82 & -0.01 \\
+ Anxiety - POS & 1.79 & 0.04 & 1.84 & 0.00 \\
+ Fear - TFIDF & 1.71 & 0.15 & 1.85 & 0.00 \\
+ Fear - POS & 1.83 & 0.05 & 1.87 & 0.01 \\
+ Sadness - TFIDF & 1.75 & 0.12 & 1.90 & -0.02 \\
+ Sadness - POS & 1.88 & 0.02 & 1.91 & -0.01 \\
+ Worry - TFIDF & 1.26 & 0.16 & 1.38 & -0.03 \\
+ Worry - POS & 1.35 & 0.03 & 1.37 & 0.01 \\
+ \hline
+ \end{tabular}
291
+
292
+ Table 4: Results for regression modeling for long and short texts.
293
+
294
+ § 4 DISCUSSION
295
+
296
+ This paper introduced a ground truth dataset of emotional responses in the UK to the Corona pandemic. We reported initial findings on the linguistic correlates of emotional states, used topic modeling to understand what people in the UK are concerned about, and ran prediction experiments to infer emotional states from text using machine learning. These analyses provided several core findings: (1) Some emotional states correlated with word lists made to measure these constructs, (2) longer texts were more useful to identify patterns in language that relate to emotions than shorter texts, (3) Tweet-sized texts served as a means to call for solidarity during lockdown measures while longer texts gave insights to people's worries, and (4) preliminary regression experiments indicate that we can infer from the texts the emotional responses with an absolute error of 1.26 on a 9-point scale (14%).
297
+
298
+ § 4.1 LINGUISTIC CORRELATES OF EMOTIONS AND WORRIES
299
+
300
+ Emotional reactions to the Coronavirus were obtained through self-reported scores. When we used psycholinguistic word lists that measure these emotions, we found weak positive correlations. The lexicon-approach was best at measuring anger, anxiety, and worry and did so better for longer texts than for Tweet-sized texts. That difference is not surprising given that the LIWC was not constructed for micro-blogging and very short documents. In behavioral and cognitive research, small effects (here: a maximum of ${10.63}\%$ of explained variance) are the rule rather than the exception (Gelman, 2017; Yarkoni and Westfall, 2017). It is essential, however, to interpret them as such: if 10% of the variance in the anxiety score are explained through a linguistic measurement, ${90}\%$ are not. An explanation for the imperfect correlations - aside from random measurement error - might lie in the inadequate expression of someone's felt emotion in the form of written text. The latter is partly corroborated by even smaller effects for shorter texts, which may have been too short to allow for the expression of one's emotion.
301
+
302
+ It is also important to look at the overlap in emotions. Correlational follow-up analysis (see online supplement) among the self-reported emotions showed high correlations of worry with fear $\left( {r = {0.70}}\right)$ and anxiety $\left( {r = {0.66}}\right)$ suggesting that these are not clearly separate constructs in our dataset. Other high correlations were evident between anger and disgust $\left( {r = {0.67}}\right)$ , fear and anxiety $\left( {r = {0.78}}\right)$ , and happiness and relaxation $\left( {r = {0.68}}\right)$ . Although the chosen emotions (with our addition of "worry") were adopted from previous work (Harmon-Jones et al., 2016), it merits attention in future work to disentangle the emotions and assess, for example, common ngrams per cluster of emotions (e.g. as in Demszky et al., 2020).
303
+
304
+ § 4.2 TOPICS OF PEOPLE'S WORRIES
305
+
306
+ Prevalent topics in our corpus showed that people worry about their jobs and the economy, as well as their friends and family - the latter of which is also corroborated by the LIWC analysis. For example, people discussed the potential impact of the situation on their family, as well as their children missing school. Participants also discussed the lockdown and social distancing measures. In the Tweet-sized texts, in particular, people encouraged others to stay at home and adhere to lockdown rules in order to slow the spread of the virus, save lives and/or protect the NHS. Thus, people used the shorter texts as a means to call for solidarity, while longer texts offered insights into their actual worries (for recent work on gender differences, see van der Vegt and Kleinberg, 2020).
307
+
308
+ While there are various ways to select the ideal number of topics, we have relied on assessing the semantic coherence of topics and exclusivity of topic words. Since there does not seem to be a consensus on the best practice for selecting topic numbers, we encourage others to examine different approaches or models with varying numbers of topics.
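+
+ As an illustration only (the paper does not prescribe a toolchain), a coherence sweep over candidate topic numbers could look as follows with gensim; the toy `tokenized_texts`, the candidate counts, and the `c_v` coherence measure are our assumptions, and exclusivity would need a separate measure (e.g., via the R stm package):
+
+ ```python
+ from gensim.corpora import Dictionary
+ from gensim.models import LdaModel
+ from gensim.models.coherencemodel import CoherenceModel
+
+ tokenized_texts = [
+     ["work", "home", "worri", "famili", "school"],
+     ["stay", "home", "safe", "protect", "nhs"],
+     ["famili", "friend", "miss", "worri", "anxious"],
+     ["people", "rule", "distanc", "spread", "stop"],
+ ] * 5                                   # toy stand-in for the preprocessed statements
+
+ dictionary = Dictionary(tokenized_texts)
+ corpus = [dictionary.doc2bow(doc) for doc in tokenized_texts]
+
+ for k in (2, 3, 4, 5):                  # candidate numbers of topics
+     lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=0)
+     coherence = CoherenceModel(model=lda, texts=tokenized_texts, dictionary=dictionary,
+                                coherence="c_v").get_coherence()
+     print(k, round(coherence, 3))       # pick k with high semantic coherence
+ ```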
309
+
310
+ § 4.3 PREDICTING EMOTIONAL RESPONSES
311
+
312
+ Prediction experiments revealed that ridge regression models can be used to approximate emotional responses to COVID-19 based on textual features extracted from the participants' statements. Similar to the correlational and topic modeling findings, there is a stark difference between the long and short texts: the regression models are more accurate and explain more variance for longer than for shorter texts. Additional experiments are required to further investigate the expressiveness of the collected textual statements for the prediction of emotional values. The best predictions were obtained for the reported worry score (MAE $= {1.26}$ , MAPE $= {14.00}\%$ ). An explanation for why worry was the easiest to predict could be that it was the highest reported emotion overall with the lowest standard deviation, thus potentially biasing the model. More fine-grained prediction analyses, which are beyond the scope of this initial paper, could examine this further.
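+
+ A minimal sketch of such a prediction setup (not the authors' exact pipeline; the toy texts, TF-IDF features, and hyperparameters are our assumptions) could look like this:
+
+ ```python
+ import numpy as np
+ from sklearn.feature_extraction.text import TfidfVectorizer
+ from sklearn.linear_model import Ridge
+ from sklearn.metrics import mean_absolute_error
+ from sklearn.model_selection import cross_val_predict
+ from sklearn.pipeline import make_pipeline
+
+ # Toy stand-ins: long texts paired with self-reported worry scores on the 9-point scale.
+ texts = ["I am worried about my family and my job",
+          "Stay home, protect the NHS, save lives",
+          "I feel anxious and frustrated about the situation",
+          "I wish things would get back to normal soon"] * 10
+ worry = np.random.default_rng(0).integers(1, 10, size=len(texts)).astype(float)
+
+ model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
+ pred = cross_val_predict(model, texts, worry, cv=10)
+ print("MAE :", mean_absolute_error(worry, pred))
+ print("MAPE:", 100 * np.mean(np.abs(worry - pred) / worry), "%")
+ ```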
313
+
314
+ § 4.4 SUGGESTIONS FOR FUTURE RESEARCH
315
+
316
+ The current analysis leaves several research questions untouched. First, to mitigate the limitations of lexicon approaches, future work on inferring emotions around COVID-19 could expand on the prediction approach (e.g., using different feature sets and models). Carefully validated models could help to provide the basis for large-scale, real-time measurements of emotional responses. Of particular importance is a solution to the problem hinted at in the current paper: the shorter, Tweet-sized texts contained much less information, had a different function, and were less suitable for predictive modeling. However, it must be noted that the experimental setup of this study did not fully mimic a 'natural' Twitter experience. Whether the results are generalisable to actual Twitter data is an important empirical question for follow-up work. Nevertheless, with much of today's stream of text data coming in the form of (very) short messages, it is important to understand the limitations of using that kind of data and worthwhile to examine how we can better make inferences from that information.
317
+
318
+ Second, with a lot of research attention paid to readily available Twitter data, we hope that future studies also focus on non-Twitter data to capture emotional responses of those who are underrepresented (or non-represented) on social media but are at heightened risk.
319
+
320
+ Third, future research may focus on manually annotating topics to more precisely map out what people worry about with regard to COVID-19. Several raters could assess frequent terms for each topic and assign a label; final topic labels could then be assigned through discussion or majority vote to obtain a model of COVID-19 real-world worries.
321
+
322
+ Fourth, future efforts may aim for sampling over a longer period to capture how emotional responses develop over time. Ideally, using high-frequency sampling (e.g., daily for several months), future work could account for the large number of events that may affect emotions.
323
+
324
+ Lastly, it is worthwhile to utilise other approaches to measuring psychological constructs in text. Although the rate of out-of-vocabulary terms for the LIWC in our data was low, other dictionaries may be able to capture other relevant constructs. For instance, the tool Empath (Fast et al., 2016) could help measure emotions not available in the LIWC (e.g., nervousness and optimism). We hope that future work will use the current dataset (and extensions thereof) to go further so we can better understand emotional responses in the real world.
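+
+ For illustration, a hedged sketch of scoring such constructs with the Empath library (the example sentence and the chosen built-in categories are ours):
+
+ ```python
+ # pip install empath
+ from empath import Empath
+
+ lexicon = Empath()
+ text = "I feel nervous about the lockdown but hopeful things will improve soon."
+ # "nervousness" and "optimism" are built-in Empath categories not covered by LIWC.
+ scores = lexicon.analyze(text, categories=["nervousness", "optimism"], normalize=True)
+ print(scores)
+ ```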
325
+
326
+ § 5 CONCLUSIONS
327
+
328
+ This paper introduced the first ground truth dataset of emotional responses to COVID-19 in text form. Our findings highlight the potential of inferring concerns and worries from text data but also show some of the pitfalls, in particular, when using concise texts as data. We encourage the research community to use the dataset so we can better understand the impact of the pandemic on people's lives.
329
+
330
+ § ACKNOWLEDGMENTS
331
+
332
+ This research was supported by the Dawes Centre for Future Crime at UCL.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/M4wgkxaPcyj/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,169 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # NLP-based Feature Extraction for the Detection of COVID-19 Misinformation Videos on YouTube
2
+
3
+ Juan Carlos Medina Serrano, Orestis Papakyriakopoulos, Simon Hegelich
4
+
5
+ Technical University of Munich, Germany
6
+
7
+ \{juan.medina, orestis.p\} @tum.de, simon.hegelich @hfp.tum.de
8
+
9
+ ## Abstract
10
+
11
+ We present a simple NLP methodology for detecting COVID-19 misinformation videos on YouTube by leveraging user comments. We use transfer-learning pre-trained models to generate a multi-label classifier that can categorize conspiratorial content. We use the percentage of misinformation comments on each video as a new feature for video classification. We show that the inclusion of this feature in simple models yields an accuracy of up to 82.2%. Furthermore, we verify the significance of the feature by performing a Bayesian analysis. Finally, we show that adding the first hundred comments as tf-idf features increases the video classifier accuracy to up to ${89.4}\%$.
12
+
13
+ ## 1 Introduction
14
+
15
+ The COVID-19 health crisis was accompanied by a misinfodemic: the limited knowledge on the nature and origin of the virus gave ample space for the emergence of conspiracy theories, which were diffused on YouTube and other online social networks. Although YouTube accelerated attempts to detect and filter related misinformation, these efforts yielded only moderate results (Li et al., 2020; Frenkel et al., 2020).
16
+
17
+ In this study, we present a simple NLP-based methodology that can support fact-checkers in detecting COVID-19 misinformation on YouTube. Instead of training models on the videos themselves and predicting their nature, we exploit the vast amount of available comments on each YouTube video and extract features that can be used in misinformation detection. Our methodology comes with the advantage that labeling comments is simpler and faster than video labeling. Additionally, no complex neural architecture is needed for the classification of videos.
18
+
19
+ Our study provides the following contributions:
20
+
21
+ - We create a multi-label classifier based on transfer-learning that can detect conspiracy-laden comments. We find that misinformation videos contain a significantly higher proportion of conspiratorial comments.
22
+
23
+ - Based on this information, we use the percentage of conspiracy comments as feature for the detection of COVID-19 misinformation videos. We verify its efficiency by deploying simple machine learning models for misinformation detection. We validate feature significance by Bayesian analysis.
24
+
25
+ - We show that including the first hundred comments as tf-idf features in the classifier increases the accuracy from 82.2% to 89.4%.
26
+
27
+ ## 2 Related Work
28
+
29
+ Previous research studies have extensively investigated the possibilities and limits of NLP for detecting misinformation. Researchers have provided theoretical frameworks for understanding the linguistic and contextual properties of various types of misinformation, such as rumors, false news, and propaganda (Li et al., 2019; Thorne and Vlachos, 2018; Rubin et al.; Zhou and Zafarani, 2018). Given the general difficulty in detecting misinformation, scientists have also developed dedicated benchmark datasets to evaluate the effectiveness of NLP architectures in misinformation-related classification tasks (Pérez-Rosas et al., 2018; Hanselowski et al., 2018). Given the vast amount of misinformation appearing in online social networks, various research studies propose case-specific NLP methodologies for tracing misinformation. For example, Della Vedova et al. (2018) and Popat et al. (2018) combined linguistic properties of articles and other meta-data for the detection of false news. Volkova et al. (2017), Qazvinian et al. (2011) and Kumar and Carley (2019) created special architectures that take into consideration the microblogging structure of online social networks, while De Sarkar et al. (2018) and Gupta et al. (2019) exploited sentence-level semantics for misinformation detection.
30
+
31
+ Despite the deployment of such architectures for fact-checking, locating malicious content and promptly removing it remains an open challenge (Gillespie, 2018; Roberts, 2019). In the case of COVID-19 misinformation, a large share of conspiratorial content remains online on YouTube and other platforms, influencing the public, despite content moderation practices (Li et al., 2020; Frenkel et al., 2020; Ferrara, 2020). Given this, it is important to develop case-specific NLP tools that can assist policymakers and researchers in the process of detecting COVID-19 misinformation and managing it accordingly. Towards this end, we illustrate how NLP-based feature extraction (Shu et al., 2017; Jiang et al., 2020; Lendvai and Reichel, 2016) based on user comments can be effectively used for this task. User comment data has been employed to annotate social media objects (Momeni et al., 2013), infer the political leaning of news articles (Park et al., 2011), and predict popularity (Kim et al., 2016). Jiang and Wilson (2018) previously analyzed user comments to detect misinformation. However, they focused on linguistic signals and concluded that users' comments were not strong signals for detecting misinformation.
32
+
33
+ ## 3 Methodology and Experiments
34
+
35
+ ### 3.1 Dataset
36
+
37
+ The first step of the study consisted of obtaining a set of YouTube videos that included either misinformation or debunking content. We decided not to use YouTube's search function as previous studies found little conspiratorial content among the top results (Marchal et al., 2020). We preferred to search for YouTube videos through user-generated content on social media platforms. For this, we queried the pushshift Reddit API (Baumgartner et al., 2020), and CrowdTangle's historical data of public Facebook posts (Silverman, 2019) using the query "COVID-19 OR coronavirus". Additionally, we downloaded the COVID-19 Twitter dataset developed by Chen et al. (2020). The total dataset included over 85 million posts generated between January and April 2020. We significantly reduced this dataset by querying the posts with "biowarfare OR biological weapon OR bioweapon OR manmade OR human origin". From the remaining posts, we extracted and expanded the URLs. We identified 1,672 unique YouTube videos. 10% of these videos had been blocked by YouTube as of April 2020. For the rest of the videos, we watched them, excluded the non-English videos, and manually labeled them as either misinformation, factual, or neither. To label a video as misinformation, we validated that its message was conveying with certainty a conspiracy theory regarding the origin of the coronavirus, such as it being a man-made bioweapon or caused by $5\mathrm{G}$. We did not classify videos that questioned its origin but showed no certainty about a hoax (which included well-known and verified news media videos) as misinformation. We classified as factual those videos that included debunking of conspiracy theories or presented scientific results on the origins and causes of COVID-19. We labeled the rest of the videos as neither. Two of the authors (JCMS, OP) performed the labeling procedure independently. For the cases where the labels did not agree, the third author (SH) was consulted.
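+
+ As a hedged illustration of the URL-to-video step (the regular expression, the toy posts, and the variable names are ours, not the authors' code), unique YouTube video IDs can be pulled from the expanded post URLs along these lines:
+
+ ```python
+ import re
+
+ # Any 11-character YouTube ID after "watch?v=" or "youtu.be/" (our own, simplified pattern).
+ YOUTUBE_ID = re.compile(r"(?:youtube\.com/watch\?(?:[^\s]*&)?v=|youtu\.be/)([A-Za-z0-9_-]{11})")
+
+ posts = [
+     "is this a bioweapon? https://www.youtube.com/watch?v=Abc12345678",
+     "debunked here: https://youtu.be/Abc12345678 please share",
+ ]
+ video_ids = {m.group(1) for post in posts for m in YOUTUBE_ID.finditer(post)}
+ print(video_ids)   # unique IDs, to be watched and labeled manually
+ ```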
38
+
39
+ Afterward, we collected the comments on both misinformation and factual videos using YouTube's Data ${\mathrm{{API}}}^{1}$ . For this study, we only included videos with more than twenty comments. The final dataset consisted of 113 misinformation and 67 factual videos, with 32,273 and 119,294 total comments respectively. We selected a ten percent random sample of the comments from the misinformation videos and proceeded to label them. This labeling procedure was performed in the same manner as the video classification to assure data quality. For each comment, we collected two labels. First, we labeled whether the comment expressed agreement (1) or not (0). Agreement comments included comments such as "this is the video I was looking for", or "save and share this video before YouTube puts it down". The second label considered if comments amplified misinformation with a conspiracy theory/misinformation comment (1) or without one (0). Comments that questioned the conspiracies (such as "could it be a bioweapon?") were not labeled as misinformation. ${19.7}\%$ of the comments in the sample were labeled as conspiracy comments and ${12.5}\%$ as agreement comments. Only 2.2% of the comments were classified as both agreement and conspiratorial. Although both agreement and conspiracy labeled comments express the same message of believing in the misinformation content from the videos, we decided to keep them apart due to their different linguistic properties. To compare the collection of agree-labeled comments and conspiracy-labeled comments, we tokenized and created a bag-of-words model. The two collections share ${19.4}\%$ of their vocabulary. However, only ${1.95}\%$ of the vocabulary has more than four occurrences in both collections. We applied ${\chi }^{2}$ tests for each of these remaining words and observed that ${50}\%$ occur in significantly different proportions. In the end, only 0.96% of the vocabulary has a significantly similar number of occurrences in the two datasets. The YouTube comments dataset without user data can be accessed in this GitHub repository ${}^{2}$ , alongside a Google Colab notebook with the code.
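+
+ A minimal sketch of the per-word ${\chi }^{2}$ comparison between the two comment collections (toy token lists and variable names are ours, not the authors' code) might look as follows:
+
+ ```python
+ from collections import Counter
+ from scipy.stats import chi2_contingency
+
+ agree_tokens = "save and share the video before youtube takes the video down".split()
+ consp_tokens = "the virus was made in a lab and released as a bioweapon".split()
+
+ agree_counts, consp_counts = Counter(agree_tokens), Counter(consp_tokens)
+ n_agree, n_consp = sum(agree_counts.values()), sum(consp_counts.values())
+
+ for word in sorted(set(agree_counts) & set(consp_counts)):
+     # 2x2 contingency table: occurrences of this word vs. all other tokens, per collection.
+     table = [[agree_counts[word], n_agree - agree_counts[word]],
+              [consp_counts[word], n_consp - consp_counts[word]]]
+     chi2, p, _, _ = chi2_contingency(table)
+     print(word, round(p, 3))   # a small p suggests significantly different usage proportions
+ ```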
40
+
41
+ ---
42
+
43
+ ${}^{1}$ https://developers.google.com/youtube/v3
44
+
45
+ ---
46
+
47
+ ### 3.2 Classification of Users Comments
48
+
49
+ We first performed a multi-label classification on the 10% sample of the misinformation videos' comments. We split the annotated data into training $\left( {{80}\% }\right)$ and test $\left( {{20}\% }\right)$ datasets. We employed state-of-the-art neural transfer-learning for the classification by fine-tuning three pre-trained models: XLNet base (Yang et al., 2019), BERT base (Devlin et al., 2018) and RoBERTa base (Liu et al., 2019). The fine-tuning consists of initializing the model's pre-trained weights and re-training on labeled data. We ran the models for four epochs using the same hyperparameters as the base models. For the experiments, we used 0.5 as a decision threshold. Additionally, we trained two simpler models as baselines: a logistic regression model using LIWC's lexicon-derived frequencies (Tausczik and Pennebaker, 2010) as features, and a multinomial naive Bayes model using bag-of-words vectors as features. Table 1 shows the average micro- ${F}_{1}$ scores for the three transformer models after performing the fine-tuning five times. RoBERTa is the best performing model for the training and test dataset on the conspiracy classification, as well as for the test data on the agreement label. BERT is the best performing model only for the training data on the agree label. The three transformer models outperform the baseline models. This predictive superiority is more evident in the precision-recall curves (with corresponding binary- ${F}_{1}$ scores) of the five models on the test data (Figure 1).
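+
+ A hedged sketch of this fine-tuning setup with the Hugging Face transformers library (toy comments and labels are ours; the authors' exact training code may differ):
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ tokenizer = AutoTokenizer.from_pretrained("roberta-base")
+ model = AutoModelForSequenceClassification.from_pretrained(
+     "roberta-base", num_labels=2, problem_type="multi_label_classification")
+
+ # Toy comments with [agree, conspiracy] labels; the real data are the annotated 10% sample.
+ comments = ["save and share this video before it gets taken down",
+             "the virus was clearly made in a lab"]
+ labels = torch.tensor([[1.0, 0.0], [0.0, 1.0]])
+
+ batch = tokenizer(comments, padding=True, truncation=True, return_tensors="pt")
+ optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
+
+ model.train()
+ for _ in range(4):                              # four epochs, as reported in the paper
+     loss = model(**batch, labels=labels).loss   # BCE loss via the multi-label problem type
+     loss.backward()
+     optimizer.step()
+     optimizer.zero_grad()
+
+ model.eval()
+ with torch.no_grad():
+     probs = torch.sigmoid(model(**batch).logits)
+ preds = (probs > 0.5).int()                     # 0.5 decision threshold, as in the paper
+ print(preds)
+ ```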
50
+
51
+ ${}^{2}$ https://github.com/JuanCarlosCSE/YouTube_misinfo
52
+
53
+ <table><tr><td rowspan="2"/><td colspan="2">Agree</td><td colspan="2">Conspiracy</td></tr><tr><td>Train</td><td>Test</td><td>Train</td><td>Test</td></tr><tr><td>LIWC</td><td>88.7</td><td>88.6</td><td>81</td><td>78.2</td></tr><tr><td>$\mathbf{{NB}}$</td><td>94.2</td><td>82.4</td><td>94.3</td><td>78.8</td></tr><tr><td>XLNet</td><td>${97} \pm {0.1}$</td><td>93.1±0.3</td><td>93.9±0.5</td><td>${84.8} \pm {0.6}$</td></tr><tr><td>BERT</td><td>98.5±0.1</td><td>93.3±0.5</td><td>96.3±0.3</td><td>${83.8} \pm {0.9}$</td></tr><tr><td>RoBERTa</td><td>${98.1} \pm {0.2}$</td><td>$\mathbf{{93.9}} \pm {0.4}$</td><td>96.4±0.3</td><td>$\mathbf{{86.7}} \pm {0.5}$</td></tr></table>
54
+
55
+ Table 1: Train and test micro ${F}_{1}$ scores (mean and standard deviation) from multi-label classification models: LIWC with logistic regression and Naive Bayes as baselines, and three transformer models with five runs.
56
+
57
+ ![01963db9-d143-7d03-98dc-de3bd7f8cff6_2_932_168_428_884_0.jpg](images/01963db9-d143-7d03-98dc-de3bd7f8cff6_2_932_168_428_884_0.jpg)
58
+
59
+ Figure 1: Precision and recall curves for binary ${F}_{1}$ scores for the conspiracy (upper figure) and agreement (lower figure) label. The plot shows the results for three neural-transfer classifiers.
60
+
61
+ We employed the fine-tuned RoBERTa model to predict the labels of the remaining comments from the misinformation and factual videos. We then calculated the percentage of conspiracy comments per video. We also obtained this percentage for the agreement label. Figure 2 shows the resulting density distributions from misinformation and factual videos. We observe a difference between the distributions from the two types of videos. We confirmed this by performing Welch's t-test for independent samples. For the conspiracy comments percentage, the t-test was significant $\left( {\mathrm{p} < {0.000}}\right)$ , indicating that the samples come from different distributions. The t-test was not significant for the agreement percentage (p>0.1).
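+
+ For illustration, the per-video percentages and Welch's t-test can be computed along these lines (toy data and variable names are ours):
+
+ ```python
+ import pandas as pd
+ from scipy.stats import ttest_ind
+
+ # Toy per-comment predictions; in the study these come from the fine-tuned RoBERTa model.
+ comments = pd.DataFrame({
+     "video_id":   ["a", "a", "a", "b", "b", "c", "c", "d", "d", "d"],
+     "video_type": ["misinfo"] * 5 + ["factual"] * 5,
+     "pred_conspiracy": [1, 1, 0, 1, 0, 0, 0, 0, 1, 0],
+ })
+
+ per_video = comments.groupby(["video_id", "video_type"])["pred_conspiracy"].mean() * 100
+ misinfo = per_video.xs("misinfo", level="video_type")
+ factual = per_video.xs("factual", level="video_type")
+
+ t, p = ttest_ind(misinfo, factual, equal_var=False)   # Welch's t-test for unequal variances
+ print(per_video.round(1), f"t = {t:.2f}, p = {p:.3f}", sep="\n")
+ ```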
62
+
63
+ ![01963db9-d143-7d03-98dc-de3bd7f8cff6_3_233_175_526_685_0.jpg](images/01963db9-d143-7d03-98dc-de3bd7f8cff6_3_233_175_526_685_0.jpg)
64
+
65
+ Figure 2: Probability densities of misinformation and factual videos regarding the percentage of conspiratorial comments (upper) agreement comments (lower).
66
+
67
+ ### 3.3 Classification of YouTube Videos
68
+
69
+ The next step consisted of classifying the set of YouTube videos to detect misinformation. For this, we employed the percentage of conspiracy comments of each video as a feature. Additionally, we extracted content features from the videos' titles and from the raw first hundred comments per video (or all the comments for videos with fewer than 100 comments). For this, we preprocessed the titles and comments with tokenization, removal of stopwords, and the usage of the standard term frequency-inverse document frequency (tf-idf) weighting for word frequencies to create a document term matrix, whose columns serve as input features. We selected six feature settings for our experiments: each set of features alone and the three possible combinations between them. For each setting, we employed three classification models: logistic regression, support vector machine (SVM), and random forest. For the SVM models, we tried the linear, sigmoid, and RBF kernel. For both SVM and random forest, we performed a grid search to obtain the best hyperparameters. In each run, we performed 10-fold cross-validation and report the mean accuracy in Table 2. We observe that the SVM model has the highest accuracy for all the settings except for one. The conspiracy feature alone achieves an accuracy of 81.1. Using the tf-idf comment features, the accuracy is slightly better at 83.9. However, the conspiracy feature and comments combined achieve the highest accuracy of 89.4. We observe that the models with all the features combined have lower accuracy than the models omitting the title features. This may be due to overfitting and the title repeating information from the other two sets of features. Interestingly, the accuracy for the best model is still high (85.5%) when taking into consideration only videos with fewer than 100 comments. This implies that our methodology is appropriate for the early detection of misinformation videos.
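+
+ A minimal sketch of this feature construction and model selection (toy data; the grids, preprocessing details, and variable names are assumptions, not the authors' code):
+
+ ```python
+ import numpy as np
+ from scipy.sparse import hstack
+ from sklearn.feature_extraction.text import TfidfVectorizer
+ from sklearn.model_selection import GridSearchCV, cross_val_score
+ from sklearn.svm import SVC
+
+ # One document per video: its first comments concatenated (toy stand-ins below).
+ video_comments = ["stay safe everyone, great factual explainer",
+                   "this proves it was made in a lab, share before it is deleted"] * 20
+ conspiracy_pct = np.array([[0.05], [0.60]] * 20)   # per-video conspiracy-comment share
+ is_misinfo = np.array([0, 1] * 20)
+
+ # For brevity the vectorizer is fit once on all videos; a Pipeline would avoid this leakage.
+ X = hstack([TfidfVectorizer().fit_transform(video_comments), conspiracy_pct]).tocsr()
+ grid = GridSearchCV(SVC(), {"kernel": ["linear", "sigmoid", "rbf"], "C": [0.1, 1, 10]}, cv=5)
+ accuracy = cross_val_score(grid, X, is_misinfo, cv=10)   # 10-fold CV around the grid search
+ print(accuracy.mean())
+ ```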
70
+
71
+ <table><tr><td/><td>LR</td><td>SVM</td><td>$\mathbf{{RF}}$</td></tr><tr><td>title</td><td>62.7</td><td>65.6 (l)</td><td>64.4</td></tr><tr><td>conspiracy %</td><td>62.7</td><td>81.1 (r)</td><td>72.2</td></tr><tr><td>comments</td><td>66.7</td><td>83.9 (r)</td><td>82.8</td></tr><tr><td>title + conspiracy %</td><td>64.4</td><td>77.7 (s)</td><td>82.2</td></tr><tr><td>comments + conspiracy %</td><td>73.3</td><td>89.4 (l)</td><td>84.44</td></tr><tr><td>all</td><td>73.3</td><td>84.4 (l)</td><td>82.7</td></tr></table>
72
+
73
+ Table 2: Classification accuracy for logistic regression, support vector machines, and random forest models for six feature settings. For the SVM, we applied three kernels: linear (l), sigmoid (s) and RBF (r). The kernel with the best accuracy appears in parenthesis.
74
+
75
+ ### 3.4 Bayesian Modeling
76
+
77
+ To assess the statistical validity of the conspiracy percentage feature, we turned to Bayesian modeling as it allows us to obtain the full posterior distribution of feature coefficients. We performed inference on three Bayesian logistic regression models using a Hamiltonian Monte Carlo solver. A simple model considered only the conspiracy percentage feature. A second model included this feature and the ten most relevant word features from the random forest model trained only on the title and conspiracy percentage. A third model included the conspiracy feature and the top ten most relevant words from the linear SVM trained on the conspiracy feature and the first 100 comments. The first column of Tables 3 and 4 shows the importance of each of the features in the random forest and linear SVM model, respectively. The two tables also show the statistics of the posterior probability distributions of the model coefficients: the mean, standard deviation, and the $1\%$ and ${99}\%$ quantiles. For the three models, the coefficient distributions converged (the $\widehat{R}$ diagnostic (Vehtari et al., 2019) was equal to one). We specifically selected logistic regression models for their interpretability. We observe that for the model based on the title word features, the posterior distribution of the conspiracy percentage feature coefficient is the only one that does not include zero in its ${98}\%$ highest posterior density interval (Table 3). Although this is not equivalent to traditional p-values, it conveys significance in a Bayesian setting. The model based on the 100 comments word features (Table 4) maintains the conspiracy feature as significant. However, three coefficients from the word features also exclude zero in their 98% interval. The model's coefficients are negative for covid19 and lab, and positive for god.
78
+
79
+ Finally, we compare the three Bayesian models using the WAIC information criterion, which estimates out-of-sample expectation and corrects for the effective number of parameters to avoid overfitting (Watanabe and Opper, 2010). Figure 3 shows the resulting deviance of the three models. We observe that the second model is slightly better than the simple model. However, the difference lies within the standard error of the title-words model. This is not true for the simple model and the model including the comments features. In this case, the full model outperforms the model based only on the conspiracy feature. This indicates that there is important information in the videos' first hundred comments that is not explained by the conspiracy percentage feature on its own.
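+
+ A hedged sketch of one such model in PyMC/ArviZ (assumed tooling; the paper does not name its probabilistic programming library), fitting the conspiracy-percentage-only logistic regression with NUTS (an HMC variant), summarising the posterior, and computing WAIC on toy data:
+
+ ```python
+ import arviz as az
+ import numpy as np
+ import pymc as pm
+
+ # Toy stand-ins for the 180 labeled videos (113 misinformation + 67 factual).
+ rng = np.random.default_rng(0)
+ conspiracy_pct = rng.uniform(0, 1, size=180)
+ is_misinfo = rng.binomial(1, 1 / (1 + np.exp(-(8 * conspiracy_pct - 3))))
+
+ with pm.Model():
+     alpha = pm.Normal("intercept", 0, 10)
+     beta = pm.Normal("conspiracy_pct", 0, 10)
+     p = pm.math.sigmoid(alpha + beta * conspiracy_pct)
+     pm.Bernoulli("misinfo", p=p, observed=is_misinfo)
+     idata = pm.sample(1000, tune=1000, target_accept=0.9, random_seed=0,
+                       idata_kwargs={"log_likelihood": True})
+
+ print(az.summary(idata, hdi_prob=0.98))   # mean, sd, 98% interval, r_hat diagnostic
+ print(az.waic(idata))                     # az.compare({...}, ic="waic") ranks several fitted models
+ ```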
80
+
81
+ ## 4 Discussion
82
+
83
+ We have leveraged large quantities of user comments to extract a simple feature that is effective in the prediction of misinformation videos. Given that the classifier is also accurate for videos with few comments, it can be used for online detection. For example, the user comments of coronavirus-related videos can be tracked and classified as they are posted. High levels of conspiracy comments could then indicate that the video includes misinformation claims. For this to work, a conspiracy classifier with perfect accuracy is not necessary, given that the conspiracy comment percentage feature is based on aggregated classifications. An improved classifier would be able to define a threshold that allows a balanced number of false positives and true negatives. The average percentage of conspiratorial comments would be maintained, irrespective of the wrong classifications. On the other hand, the accuracy of the video classifier is more critical. We found that using simple classifiers on the raw content of the videos' first 100 comments significantly improves the accuracy of misinformation video detection from 82.2% to 89.4%. However, in large-scale settings, it may be prohibitive to store the raw comments and continuously perform batch classification. In contrast, the conspiracy percentage feature only requires storing a conspiracy comment counter per video. Future research could leverage the video content to increase the classifier accuracy. The detection of misinformation on social media remains an open challenge, and further research is needed to understand how the COVID-19 misinfodemic spread in order to prevent future ones.
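+
+ As a sketch of the monitoring idea described above (the threshold, minimum comment count, and variable names are illustrative, not from the paper):
+
+ ```python
+ from collections import defaultdict
+
+ counts = defaultdict(lambda: [0, 0])   # video_id -> [conspiracy comments, total comments]
+ FLAG_THRESHOLD = 0.30                  # illustrative review threshold, not from the paper
+
+ def observe(video_id, is_conspiracy):
+     entry = counts[video_id]
+     entry[0] += int(is_conspiracy)
+     entry[1] += 1
+     # Mirror the paper's inclusion rule of at least ~20 comments before judging a video.
+     if entry[1] >= 20 and entry[0] / entry[1] > FLAG_THRESHOLD:
+         print(f"flag {video_id}: {100 * entry[0] / entry[1]:.0f}% conspiratorial comments")
+
+ for i in range(30):                    # toy stream of incoming comments for one video
+     observe("video_1", is_conspiracy=(i % 2 == 0))
+ ```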
84
+
85
+ <table><tr><td/><td>$\mathbf{{RF}}$</td><td>mean</td><td>SD</td><td>1%</td><td>99%</td></tr><tr><td>conspiracy %</td><td>19.2</td><td>28.25</td><td>4.8</td><td>18.19</td><td>39.94</td></tr><tr><td>coronavirus</td><td>2.95</td><td>-7.45</td><td>3.4</td><td>-15.57</td><td>0.01</td></tr><tr><td>covid19</td><td>2.81</td><td>-5.17</td><td>2.4</td><td>-11.08</td><td>0.10</td></tr><tr><td>china</td><td>1.42</td><td>-4.28</td><td>3</td><td>-11.23</td><td>2.63</td></tr><tr><td>man</td><td>1.24</td><td>-6.04</td><td>2.8</td><td>-12.25</td><td>0.52</td></tr><tr><td>bioweapon</td><td>1.24</td><td>4.81</td><td>5.5</td><td>-6.40</td><td>19.32</td></tr><tr><td>conspiracy</td><td>1.1</td><td>-4.24</td><td>3.7</td><td>-13.96</td><td>3.72</td></tr><tr><td>new</td><td>1.03</td><td>-5.13</td><td>5.4</td><td>-18.93</td><td>6.39</td></tr><tr><td>update</td><td>0.87</td><td>-0.15</td><td>2.5</td><td>-6.57</td><td>5.69</td></tr><tr><td>cases</td><td>0.83</td><td>-12.37</td><td>6.3</td><td>-26.75</td><td>2.10</td></tr><tr><td>outbreak</td><td>0.72</td><td>-1.25</td><td>2.9</td><td>-8.31</td><td>5.66</td></tr></table>
86
+
87
+ Table 3: Top eleven features from the random forest model with the conspiracy and title as feature with the statistics of the coefficients' posterior probability distributions. The first column shows the percentage of feature importance.
88
+
89
+ <table><tr><td/><td>SVM</td><td>mean</td><td>SD</td><td>1%</td><td>99%</td></tr><tr><td>conspiracy %</td><td>2.82</td><td>34.96</td><td>6.2</td><td>20.56</td><td>50.09</td></tr><tr><td>virus</td><td>0.93</td><td>-6.70</td><td>5.3</td><td>-19.64</td><td>4.82</td></tr><tr><td>covid19</td><td>0.84</td><td>-28.8</td><td>10</td><td>-54.33</td><td>-6.20</td></tr><tr><td>god</td><td>0.75</td><td>19.29</td><td>7.6</td><td>3.39</td><td>37.54</td></tr><tr><td>allah</td><td>0.73</td><td>-40.09</td><td>26</td><td>-103.18</td><td>1.32</td></tr><tr><td>china</td><td>0.72</td><td>-4.64</td><td>3.9</td><td>-14.60</td><td>3.76</td></tr><tr><td>gates</td><td>0.69</td><td>3.39</td><td>16</td><td>-32.39</td><td>42.94</td></tr><tr><td>amir</td><td>0.68</td><td>-8.57</td><td>6.6</td><td>-24.66</td><td>5.81</td></tr><tr><td>lab</td><td>0.68</td><td>-20.70</td><td>8.2</td><td>-40.57</td><td>-2.28</td></tr><tr><td>cases</td><td>0.66</td><td>-22.41</td><td>14</td><td>-57.26</td><td>8.48</td></tr><tr><td>trump</td><td>0.63</td><td>14.53</td><td>9.6</td><td>-7.23</td><td>36.92</td></tr></table>
90
+
91
+ Table 4: Top eleven features from the SVM model with conspiracy and first 100 comments as features with the statistics of the coefficients' posterior probability distributions. The first column shows the SVM coefficients.
92
+
93
+ ![01963db9-d143-7d03-98dc-de3bd7f8cff6_4_845_1315_608_184_0.jpg](images/01963db9-d143-7d03-98dc-de3bd7f8cff6_4_845_1315_608_184_0.jpg)
94
+
95
+ Figure 3: Deviance using WAIC as model selection metric. Black error bars represent the standard error.
96
+
97
+ ## References
98
+
99
+ Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020. The pushshift reddit dataset. In Proceedings of the International AAAI Conference on Web and Social Media, volume 14, pages 830-839.
100
+
101
+ Emily Chen, Kristina Lerman, and Emilio Ferrara. 2020. Covid-19: The first public coronavirus twitter dataset. arXiv preprint arXiv:2003.07372.
102
+
103
+ Sohan De Sarkar, Fan Yang, and Arjun Mukherjee. 2018. Attending sentences to detect satirical fake news. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3371- 3380, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
104
+
105
+ Marco L Della Vedova, Eugenio Tacchini, Stefano Moret, Gabriele Ballarin, Massimo DiPierro, and Luca de Alfaro. 2018. Automatic online fake news detection combining content and social signals. In 2018 22nd Conference of Open Innovations Association (FRUCT), pages 272-279. IEEE.
106
+
107
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
108
+
109
+ Emilio Ferrara. 2020. What types of covid-19 conspiracies are populated by Twitter bots? First Monday.
110
+
111
+ Sheera Frenkel, Ben Decker, and Davey Alba. 2020. How the 'plandemic' movie and its falsehoods spread widely online.
112
+
113
+ Tarleton Gillespie. 2018. Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
114
+
115
+ Pankaj Gupta, Khushbu Saxena, Usama Yaseen, Thomas Runkler, and Hinrich Schütze. 2019. Neural architectures for fine-grained propaganda detection in news. In Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda, pages 92-97, Hong Kong, China. Association for Computational Linguistics.
116
+
117
+ Andreas Hanselowski, Avinesh PVS, Benjamin Schiller, Felix Caspelherr, Debanjan Chaudhuri, Christian M. Meyer, and Iryna Gurevych. 2018. A retrospective analysis of the fake news challenge stance-detection task. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1859-1874, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
118
+
119
+ Shan Jiang, Miriam Metzger, Andrew Flanagin, and Christo Wilson. 2020. Modeling and measuring expressed (dis) belief in (mis) information. In Proceedings of the International AAAI Conference on Web and Social Media, volume 14, pages 315-326.
120
+
121
+ Shan Jiang and Christo Wilson. 2018. Linguistic signals under misinformation and fact-checking: Evidence from user comments on social media. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW):1-23.
122
+
123
+ Young Bin Kim, Jun Gi Kim, Wook Kim, Jae Ho Im, Tae Hyeong Kim, Shin Jin Kang, and Chang Hun Kim. 2016. Predicting fluctuations in cryptocur-rency transactions based on user comments and replies. PloS one, 11(8):e0161197.
124
+
125
+ Sumeet Kumar and Kathleen Carley. 2019. Tree LSTMs with convolution units to predict stance and rumor veracity in social media conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5047-5058, Florence, Italy. Association for Computational Linguistics.
126
+
127
+ Piroska Lendvai and Uwe Reichel. 2016. Contradiction detection for rumorous claims. In Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics (ExProM), pages 31-40, Osaka, Japan. The COLING 2016 Organizing Committee.
128
+
129
+ Heidi Oi-Yee Li, Adrian Bailey, David Huynh, and James Chan. 2020. YouTube as a source of information on covid-19: a pandemic of misinformation? BMJ Global Health, 5(5).
130
+
131
+ Quanzhi Li, Qiong Zhang, Luo Si, and Yingchi Liu. 2019. Rumor detection on social media: Datasets, methods and opportunities. In Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda, pages 66-75, Hong Kong, China. Association for Computational Linguistics.
132
+
133
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
134
+
135
+ Nahema Marchal, Hubert Au, and Philip N Howard. 2020. Coronavirus news and information on YouTube. Health, 1(1):0-3.
136
+
137
+ Elaheh Momeni, Claire Cardie, and Myle Ott. 2013. Properties, prediction, and prevalence of useful user-generated comments for descriptive annotation of social media objects. In Seventh International AAAI Conference on Weblogs and Social Media.
138
+
139
+ Souneil Park, Minsam Ko, Jungwoo Kim, Ying Liu, and Junehwa Song. 2011. The politics of comments: predicting political orientation of news stories with commenters' sentiment patterns. In Proceedings of the ACM 2011 conference on Computer supported cooperative work, pages 113-122.
140
+
141
+ Verónica Pérez-Rosas, Bennett Kleinberg, Alexandra Lefevre, and Rada Mihalcea. 2018. Automatic detection of fake news. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3391-3401, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
144
+
145
+ Kashyap Popat, Subhabrata Mukherjee, Andrew Yates, and Gerhard Weikum. 2018. DeClarE: Debunking fake news and false claims using evidence-aware deep learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 22-32, Brussels, Belgium. Association for Computational Linguistics.
146
+
147
+ Vahed Qazvinian, Emily Rosengren, Dragomir Radev, and Qiaozhu Mei. 2011. Rumor has it: Identifying misinformation in microblogs. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1589-1599.
148
+
149
+ Sarah T Roberts. 2019. Behind the screen: Content moderation in the shadows of social media. Yale University Press.
150
+
151
+ Victoria Rubin, Niall Conroy, Yimin Chen, and Sarah Cornwell. Fake news or truth? using satirical cues to detect potentially misleading news.
152
+
153
+ Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake news detection on social media: A data mining perspective. ACM SIGKDD explorations newsletter, 19(1):22-36.
154
+
155
+ B Silverman. 2019. Crowdtangle for academics and researchers.
156
+
157
+ Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: Liwc and computerized text analysis methods. Journal of language and social psychology, 29(1):24-54.
158
+
159
+ James Thorne and Andreas Vlachos. 2018. Automated fact checking: Task formulations, methods and future directions. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3346-3359, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
160
+
161
+ Aki Vehtari, Andrew Gelman, Daniel Simpson, Bob Carpenter, and Paul-Christian Bürkner. 2019. Rank-normalization, folding, and localization: An improved $\mathrm{r}$ for assessing convergence of mcmc. arXiv preprint arXiv:1903.08008.
162
+
163
+ Svitlana Volkova, Kyle Shaffer, Jin Yea Jang, and Nathan Hodas. 2017. Separating facts from fiction: Linguistic models to classify suspicious and trusted news posts on Twitter. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 647-653.
164
+
165
+ Sumio Watanabe and Manfred Opper. 2010. Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of machine learning research, 11(12).
166
+
167
+ Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car-bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5753-5763.
168
+
169
+ Xinyi Zhou and Reza Zafarani. 2018. Fake news: A survey of research, detection methods, and opportunities. arXiv preprint arXiv:1812.00315.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/M4wgkxaPcyj/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,209 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § NLP-BASED FEATURE EXTRACTION FOR THE DETECTION OF COVID-19 MISINFORMATION VIDEOS ON YOUTUBE
2
+
3
+ Juan Carlos Medina Serrano, Orestis Papakyriakopoulos, Simon Hegelich
4
+
5
+ Technical University of Munich, Germany
6
+
7
+ {juan.medina, orestis.p} @tum.de, simon.hegelich @hfp.tum.de
8
+
9
+ § ABSTRACT
10
+
11
+ We present a simple NLP methodology for detecting COVID-19 misinformation videos on YouTube by leveraging user comments. We use transfer-learning pre-trained models to generate a multi-label classifier that can categorize conspiratorial content. We use the percentage of misinformation comments on each video as a new feature for video classification. We show that the inclusion of this feature in simple models yields an accuracy of up to 82.2%. Furthermore, we verify the significance of the feature by performing a Bayesian analysis. Finally, we show that adding the first hundred comments as tf-idf features increases the video classifier accuracy to up to ${89.4}\%$.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ The COVID-19 health crisis was accompanied by a misinfodemic: The limited knowledge on the nature and origin of the virus gave ample space for the emergence of conspiracy theories, which were diffused on YouTube, and online social networks. Although YouTube accelerated attempts to detect and filter related misinformation, it yielded moderate results (Li et al., 2020; Frenkel et al., 2020).
16
+
17
+ In this study, we present a simple NLP-based methodology that can support fact-checkers in detecting COVID-19 misinformation on YouTube. Instead of training models on the videos themselves and predicting their nature, we exploit the vast amount of available comments on each YouTube video and extract features that can be used in misinformation detection. Our methodology comes with the advantage that labeling comments is simpler and faster than video labeling. Additionally, no complex neural architecture is needed for the classification of videos.
18
+
19
+ Our study provides the following contributions:
20
+
21
+ * We create a multi-label classifier based on transfer-learning that can detect conspiracy-laden comments. We find that misinformation videos contain a significantly higher proportion of conspiratorial comments.
22
+
23
+ * Based on this information, we use the percentage of conspiracy comments as feature for the detection of COVID-19 misinformation videos. We verify its efficiency by deploying simple machine learning models for misinformation detection. We validate feature significance by Bayesian analysis.
24
+
25
+ * We show that including the first hundred comments as tf-idf features in the classifier increases the accuracy from 82.2% to 89.4%.
26
+
27
+ § 2 RELATED WORK
28
+
29
+ Previous research studies have extensively investigated the possibilities and limits of NLP for detecting misinformation. Researchers have provided theoretical frameworks for understanding the lingual and contextual properties of various types of misinformation, such as rumors, false news, and propaganda (Li et al., 2019; Thorne and Vlachos, 2018; Rubin et al.; Zhou and Zafarani, 2018). Given the general difficulty in detecting misinformation, scientists have also developed dedicated benchmark datasets to evaluate the effectiveness of NLP architectures in misinformation related classification tasks (Pérez-Rosas et al., 2018; Hanselowski et al., 2018). Given the vast amount of misinformation appearing in online social networks, various research studies propose case-specific NLP methodologies for tracing misinformation. For example, Della Vedova et al. (2018) and Popat et al. (2018) combined lingual properties of articles and other meta-data for the detection of false news. Volkova et al. (2017), Qazvinian et al. (2011) and Kumar and Carley (2019) created special architectures that take into consideration the microblogging structure of online social networks, while De Sarkar et al. (2018) and Gupta et al. (2019) exploited sentence-level semantics for misinformation detection.
30
+
31
+ Despite the deployment of such architectures for fact-checking, locating malicious content and promptly removing them remains an open challenge (Gillespie, 2018; Roberts, 2019). In the case of Covid-19 misinformation, a large share of conspiratorial contents remain online on YouTube and other platforms, influencing the public, despite content moderation practices (Li et al., 2020; Frenkel et al., 2020; Ferrara, 2020). Given this, it is important to develop case-specific NLP tools that can assist policymakers and researchers in the process of detecting COVID-19 misinformation and managing it accordingly. Towards this end, we illustrate how NLP-based feature extraction (Shu et al., 2017; Jiang et al., 2020; Lendvai and Reichel, 2016) based on user comments can be effectively used for this task. User comment data has been employed to annotate social media objects (Momeni et al., 2013), infer the political leaning of news articles (Park et al., 2011), and to predict popularity (Kim et al., 2016). Jiang and Wilson (2018) previously analyzed user comments to detect misinformation. However, they focused on linguistic signals and concluded that users' comments were not strong signals for detecting misinformation.
32
+
33
+ § 3 METHODOLOGY AND EXPERIMENTS
34
+
35
+ § 3.1 DATASET
36
+
37
+ The first step of the study consisted of obtaining a set of YouTube videos that included either misinformation or debunking content. We decided not to use YouTube's search function as previous studies found few conspiratorial content on the top results (Marchal et al., 2020). We preferred to search for YouTube videos through user-generated content on social media platforms. For this, we queried the pushhift Reddit API (Baumgartner et al., 2020), and Crowdtangle's historical data of public Facebook posts (Silverman, 2019) using the query "COVID-19 OR coronavirus". Additionally, we downloaded the COVID-19 Twitter dataset developed by Chen et al. (2020). The total dataset included over 85 million posts generated between January and April 2020. We significantly reduced this dataset by querying the posts with "biowarfare OR biological weapon OR bioweapon OR manmade OR human origin". From the remaining posts, we extracted and expanded the URLs. We identified 1,672 unique YouTube videos. 10% of these videos had been blocked by YouTube as of April 2020. For the rest of the videos, we watched them, excluded the non-English videos, and manually labeled them as either misinformation, factual, or neither. To label a video as misinformation, we validated that its message was conveying with certainty a conspiracy theory regarding the origin of the coronavirus, as a man-made bioweapon or being caused by $5\mathrm{G}$ . We did not classify videos that questioned its origin but showed no certainty about a hoax (which included well-known and verified news media videos) as misinformation. We classified as factual those videos that included debunking of conspiracy theories or presented scientific results on the origins and causes of COVID-19. We labeled the rest of the videos as neither. Two of the authors (JCMS, OP) performed the labeling procedure independently. For the cases where the labels did not agree, the third author was consulted (SH).
38
+
39
+ Afterward, we collected the comments on both misinformation and factual videos using YouTube's Data ${\mathrm{{API}}}^{1}$ . For this study, we only included videos with more than twenty comments. The final dataset consisted of 113 misinformation and 67 factual videos, with 32,273 and 119,294 total comments respectively. We selected a ten percent random sample of the comments from the misinformation videos and proceeded to label them. This labeling procedure was performed in the same manner as the video classification to assure data quality. For each comment, we collected two labels. First, we gave a label if the comment expressed agreement (1) or not (0). Agreement comments included comments such as "this is the video I was looking for", or "save and share this video before YouTube puts it down". The second label considered if comments amplified misinformation with a conspiracy theory/misinformation comment (1) or without one (0). Comments that questioned the conspiracies (such as "could it be a bioweapon?") were not labeled as misinformation. ${19.7}\%$ of the comments in the sample were labeled as conspiracy comment and ${12.5}\%$ as agreement comment. Only 2.2% of the comments were classified as both agreement and conspiratorial. Although both agreement and conspiracy labeled comments express the same message of believing in the misinformation content from the videos, we decided to keep them apart due to their different linguistic properties. To compare the collection of agree-labeled comments and conspiracy-labeled comments, we tokenized and created a bag-of-words model. The two collections share ${19.4}\%$ of their vocabulary. However, only ${1.95}\%$ of the vocabulary has more than four occurrences in both collections. We applied ${\chi }^{2}$ tests for each of these remaining words and observe that ${50}\%$ occur in significantly different proportions. At the end, only 0.96% of the vocabulary has a significant similar number of occurrences in the two datasets. The YouTube comments dataset without user data can be accessed in this GitHub repository ${}^{2}$ , alongside a Google Colab notebook with the code.
40
+
41
+ ${}^{1}$ https://developers.google.com/youtube/v3
42
+
43
+ § 3.2 CLASSIFICATION OF USERS COMMENTS
44
+
45
+ We first performed a multi-label classification on the 10% sample of the misinformation videos' comments. We split the annotated data into training $\left( {{80}\% }\right)$ and test $\left( {{20}\% }\right)$ datasets. We employed state-of-the-art neural transfer-learning for the classification by fine-tuning three pre-trained models: XLNet base (Yang et al., 2019), BERT base (Devlin et al., 2018) and RoBERTa base (Liu et al., 2019). The fine-tuning consists of initializing the model's pre-trained weights and re-training on labeled data. We ran the models for four epochs using the same hyperparameters as the base models. For the experiments, we used 0.5 as a decision threshold. Additionally, we train two simpler models as baselines: a logistic regression model using LIWC's lexicon-derived frequencies (Tausczik and Pennebaker, 2010) as features, and a multinomial naive Bayes model using bag-of-words vectors as features. Table 1 shows the average micro- ${F}_{1}$ scores for the three transformer models after performing the fine-tuning five times. RoBERTa is the best performing model for the training and test dataset on the conspiracy classification as for the test data on the agreement label. BERT is the best performing model only for the training data on the agree label. The three transformer models outperform the baseline models. This predictive superiority is more evident in the precision-recall curves (with corresponding binary- ${F}_{1}$ scores) of the five models on the test data (Figure 1).
46
+
47
+ ${}^{2}$ https://github.com/JuanCarlosCSE/YouTube_misinfo
48
+
+ Model      Agree Train   Agree Test   Conspiracy Train   Conspiracy Test
+
+ LIWC       88.7          88.6         81                 78.2
+ NB         94.2          82.4         94.3               78.8
+ XLNet      97 ± 0.1      93.1 ± 0.3   93.9 ± 0.5         84.8 ± 0.6
+ BERT       98.5 ± 0.1    93.3 ± 0.5   96.3 ± 0.3         83.8 ± 0.9
+ RoBERTa    98.1 ± 0.2    93.9 ± 0.4   96.4 ± 0.3         86.7 ± 0.5
+
73
+ Table 1: Train and test micro ${F}_{1}$ scores (mean and standard deviation) from multi-label classification models: LIWC with logistic regression and Naive Bayes as baselines, and three transformer models with five runs.
74
+
75
+ < g r a p h i c s >
76
+
77
+ Figure 1: Precision and recall curves for binary ${F}_{1}$ scores for the conspiracy (upper figure) and agreement (lower figure) label. The plot shows the results for three neural-transfer classifiers.
78
+
79
+ We employed the fine-tuned RoBERTa model to predict the labels of the remaining comments from the misinformation and factual videos. We then calculated the percentage of conspiracy comments per video. We also obtained this percentage for the agreement label. Figure 2 shows the resulting density distributions from misinformation and factual videos. We observe a difference between the distributions from the two types of videos. We confirmed this by performing Welch's t-test for independent samples. For the conspiracy comments percentage, the t-test was significant $\left( {\mathrm{p} < {0.000}}\right)$ , indicating that the samples come from different distributions. The t-test was not significant for the agreement percentage (p>0.1).
80
+
81
+ < g r a p h i c s >
82
+
83
+ Figure 2: Probability densities of misinformation and factual videos regarding the percentage of conspiratorial comments (upper) agreement comments (lower).
84
+
85
+ § 3.3 CLASSIFICATION OF YOUTUBE VIDEOS
86
+
87
+ The next step consisted of classifying the set of YouTube videos to detect misinformation. For this, we employed the percentage of conspiracy comments of each video as a feature. Additionally, we extracted content features from the videos' titles and from the raw first hundred comments per video (or all the comments for videos with fewer than 100 comments). For this, we preprocessed the titles and comments with tokenization, removal of stopwords, and the usage of the standard term frequency-inverse document (tf-idf) weighting for word frequencies to create a document term matrix, whose columns serve as input features. We selected six feature settings for our experiments: each of the set of features alone and the three possible combination between them . For each setting, we employed three classification models: logistic regression, support vector machine (SVM), and random forest. For the SVM models, we tried the linear, sigmoid, and RBF kernel. For both SVM and random forest, we performed a grid search to obtain the best hyperparameters. In each run, we performed 10-fold cross-validation and report the mean accuracy in Table 2. We observe that the SVM model has the highest accuracy for all the settings except for one. The conspiracy feature alone achieves an accuracy of 81.1 . Using the tf-idf comment features the accuracy is slightly better with 83.9. However, the conspiracy feature and comments combined achieve the highest accuracy of 89.4. We observe that the models with all the features combined have lower accuracy than the models omitting the title features. This may be due to overfitting and the title repeating information from the other two sets of features. Interestingly, the accuracy for the best model is still high (85.5%) when taking into consideration only videos with less than 100 comments. This implies that our methodology is appropriate for the early detection of misinformation videos.
88
+
+ Feature setting            LR     SVM        RF
+
+ title                      62.7   65.6 (l)   64.4
+ conspiracy %               62.7   81.1 (r)   72.2
+ comments                   66.7   83.9 (r)   82.8
+ title + conspiracy %       64.4   77.7 (s)   82.2
+ comments + conspiracy %    73.3   89.4 (l)   84.44
+ all                        73.3   84.4 (l)   82.7
+
113
+ Table 2: Classification accuracy for logistic regression, support vector machines, and random forest models for six feature settings. For the SVM, we applied three kernels: linear (l), sigmoid (s) and RBF (r). The kernel with the best accuracy appears in parenthesis.
114
+
115
+ § 3.4 BAYESIAN MODELING
116
+
117
+ To find the statistical validity of the conspiracy percentage feature, we turned to Bayesian modeling as it allows us to obtain the full posterior distribution of feature coefficients. We performed inference on three Bayesian logistic regression models using a Hamiltonian Monte Carlo solver. A simple model considered only the conspiracy percentage feature. A second model included this feature and the ten most relevant word features from the random forest model trained only on the title and conspiracy percentage. A third model included the conspiracy feature, and the top ten most relevant words from the linear SVM trained on the conspiracy feature and the first 100 comments. The first column of Table 3 and 4 shows the importance of each of the features in the random forest and linear SVM model, respectively. The two tables also show the statistics of the posterior probability distributions of the model coefficients: the mean, standard deviation, and the $1\%$ and ${99}\%$ quantiles. For the three models, the coefficients distribution converged (the $\widehat{R}$ diagnostic (Vehtari et al.,2019) was equal to one). We specifically selected logistic regression models for their interpretability. We observe that for the model based on the title word features, the posterior distribution of the conspiracy percentage feature coefficient is the only one that does not include zero in its ${98}\%$ highest posterior density interval (Table 3). Although this is not equivalent to traditional p-values, it conveys significance in a Bayesian setting. The model based on the 100 comments word features (Table 4), maintains the conspiracy feature as significant. However, also three coefficients from the word features avoid zero in their 98% interval. The model's coefficients are negative for covid19 and lab, and positive for god.
118
+
119
+ Finally, we compare the three Bayesian models using the WAIC information criterion, which estimates the out-of-sample expectation and corrects for the effective number of parameters to avoid overfitting (Watanabe and Opper, 2010). Figure 3 shows the resulting deviance of the three models. We observe that the second model is slightly better than the simple model; however, the difference lies within the standard error of the title-words model. This is not the case for the comparison between the simple model and the model that includes the comment features: there, the fuller model outperforms the model based only on the conspiracy feature. This indicates that the videos' first hundred comments carry important information that is not explained by the conspiracy percentage feature on its own.
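+ As a hedged illustration of this modeling step (PyMC and ArviZ are assumed here and may differ from the authors' tooling; the data are invented stand-ins), the simplest of the three models is a Bayesian logistic regression on the conspiracy percentage, sampled with NUTS, a Hamiltonian Monte Carlo sampler. The same trace yields the posterior summaries, the $\widehat{R}$ diagnostic, and the WAIC; the second and third models follow by adding the top word features as extra regressors and comparing the resulting traces.

```python
import arviz as az
import numpy as np
import pymc as pm

# Toy stand-in data: conspiracy-comment fraction per video and a binary label.
rng = np.random.default_rng(0)
conspiracy_pct = rng.uniform(0.0, 1.0, size=90)
is_misinformation = (conspiracy_pct + rng.normal(0.0, 0.2, size=90) > 0.5).astype(int)

with pm.Model():
    intercept = pm.Normal("intercept", mu=0.0, sigma=10.0)
    beta = pm.Normal("conspiracy_pct", mu=0.0, sigma=10.0)
    p = pm.math.sigmoid(intercept + beta * conspiracy_pct)
    pm.Bernoulli("label", p=p, observed=is_misinformation)
    # NUTS, a Hamiltonian Monte Carlo sampler, is PyMC's default
    trace = pm.sample(1000, tune=1000, chains=2,
                      idata_kwargs={"log_likelihood": True})

# Posterior means, standard deviations, 98% intervals, and the R-hat diagnostic
print(az.summary(trace, hdi_prob=0.98))
# WAIC, the criterion used above to compare the three models
print(az.waic(trace))
```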
120
+
121
+ § 4 DISCUSSION
122
+
123
+ We have leveraged large quantities of user comments to extract a simple feature that is effective for predicting misinformation videos. Given that the classifier is also accurate for videos with few comments, it can be used for online learning. For example, the user comments of videos about the coronavirus can be tracked and classified as they are posted. High levels of conspiracy comments could then indicate that the video includes misinformation claims. For this to work, a conspiracy comment classifier with perfect accuracy is not necessary, because the conspiracy percentage feature is based on aggregated classifications. An improved classifier would allow us to define a threshold that balances false positives and false negatives, and the average percentage of conspiratorial comments would be preserved irrespective of individual misclassifications. On the other hand, the accuracy of the video classifier is more critical. We found that using simple classifiers on the raw content of the videos' first 100 comments significantly improves the accuracy of misinformation video detection from 82.2% to 89.4%. However, in large-scale settings, it may be prohibitive to store the raw comments and continuously perform batch classification. In contrast, the conspiracy percentage feature only requires storing a conspiracy comment counter per video. Future research could leverage the video content itself to increase the classifier accuracy. The detection of misinformation on social media remains an open challenge, and further research is needed to understand how the COVID-19 misinfodemic spread, in order to prevent future ones.
124
+
125
+ | Feature | RF importance (%) | Mean | SD | 1% | 99% |
+ | --- | --- | --- | --- | --- | --- |
+ | conspiracy % | 19.2 | 28.25 | 4.8 | 18.19 | 39.94 |
+ | coronavirus | 2.95 | -7.45 | 3.4 | -15.57 | 0.01 |
+ | covid19 | 2.81 | -5.17 | 2.4 | -11.08 | 0.10 |
+ | china | 1.42 | -4.28 | 3 | -11.23 | 2.63 |
+ | man | 1.24 | -6.04 | 2.8 | -12.25 | 0.52 |
+ | bioweapon | 1.24 | 4.81 | 5.5 | -6.40 | 19.32 |
+ | conspiracy | 1.1 | -4.24 | 3.7 | -13.96 | 3.72 |
+ | new | 1.03 | -5.13 | 5.4 | -18.93 | 6.39 |
+ | update | 0.87 | -0.15 | 2.5 | -6.57 | 5.69 |
+ | cases | 0.83 | -12.37 | 6.3 | -26.75 | 2.10 |
+ | outbreak | 0.72 | -1.25 | 2.9 | -8.31 | 5.66 |
+
164
+ Table 3: Top eleven features from the random forest model trained on the conspiracy percentage and title features, with statistics of the coefficients' posterior probability distributions. The first column shows the feature importance as a percentage.
165
+
166
+ | Feature | SVM coefficient | Mean | SD | 1% | 99% |
+ | --- | --- | --- | --- | --- | --- |
+ | conspiracy % | 2.82 | 34.96 | 6.2 | 20.56 | 50.09 |
+ | virus | 0.93 | -6.70 | 5.3 | -19.64 | 4.82 |
+ | covid19 | 0.84 | -28.8 | 10 | -54.33 | -6.20 |
+ | god | 0.75 | 19.29 | 7.6 | 3.39 | 37.54 |
+ | allah | 0.73 | -40.09 | 26 | -103.18 | 1.32 |
+ | china | 0.72 | -4.64 | 3.9 | -14.60 | 3.76 |
+ | gates | 0.69 | 3.39 | 16 | -32.39 | 42.94 |
+ | amir | 0.68 | -8.57 | 6.6 | -24.66 | 5.81 |
+ | lab | 0.68 | -20.70 | 8.2 | -40.57 | -2.28 |
+ | cases | 0.66 | -22.41 | 14 | -57.26 | 8.48 |
+ | trump | 0.63 | 14.53 | 9.6 | -7.23 | 36.92 |
+
205
+ Table 4: Top eleven features from the SVM model trained on the conspiracy percentage and first-100-comments features, with statistics of the coefficients' posterior probability distributions. The first column shows the SVM coefficients.
206
+
207
208
+
209
+ Figure 3: Deviance using WAIC as model selection metric. Black error bars represent the standard error.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/PlUA_mgGaPq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,255 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Rapidly Deploying a Neural Search Engine for the COVID-19 Open Research Dataset: Preliminary Thoughts and Lessons Learned
2
+
3
+ Edwin Zhang, ${}^{1}$ Nikhil Gupta, ${}^{1}$ Rodrigo Nogueira, ${}^{1}$ Kyunghyun Cho, ${}^{2,3,4,5}$ and Jimmy Lin ${}^{1}$
4
+
5
+ ${}^{1}$ David R. Cheriton School of Computer Science, University of Waterloo
6
+
7
+ ${}^{2}$ Courant Institute of Mathematical Sciences, New York University
8
+
9
+ ${}^{3}$ Center for Data Science, New York University
10
+
11
+ ${}^{4}$ Facebook AI Research ${}^{5}$ CIFAR Associate Fellow
12
+
13
+ ## Abstract
14
+
15
+ We present the Neural Covidex, a search engine that exploits the latest neural ranking architectures to provide information access to the COVID-19 Open Research Dataset curated by the Allen Institute for AI. This web application exists as part of a suite of tools that we have developed over the past few weeks to help domain experts tackle the ongoing global pandemic. We hope that improved information access capabilities to the scientific literature can inform evidence-based decision making and insight generation. This paper describes our initial efforts and offers a few thoughts about lessons we have learned along the way.
16
+
17
+ ## 1 Introduction
18
+
19
+ As a response to the worldwide COVID-19 pandemic, on March 13, 2020, the Allen Institute for AI released the COVID-19 Open Research Dataset (CORD-19) in partnership with a coalition of research groups. ${}^{1}$ With weekly updates since the initial release, the corpus currently contains over 47,000 scholarly articles, including over 36,000 with full text, about COVID-19 and coronavirus-related research more broadly (for example, SARS and MERS), drawn from a variety of sources including PubMed, a curated list of articles from the WHO, as well as preprints from bioRxiv and medRxiv. The stated goal of the effort is "to mobilize researchers to apply recent advances in natural language processing to generate new insights in support of the fight against this infectious disease". We responded to this call to arms.
20
+
21
+ In approximately two weeks, our team was able to build, deploy, and share with the research community a number of components that support information access to this corpus. We have also assembled these components into two end-to-end search applications that are available online at covidex.ai: a keyword-based search engine that supports faceted browsing and the Neural Covidex, a search engine that exploits the latest advances in deep learning and neural architectures for ranking. This paper describes our initial efforts.
22
+
23
+ We have several goals for this paper: First, we discuss our motivation and approach, articulating how, hopefully, better information access capabilities can contribute to the fight against this global pandemic. Second, we provide a technical description of what we have built. Previously, this information was scattered on different web pages, in tweets, and ephemeral discussions with colleagues over video conferences and email. Gathering all this information in one place is important for other researchers who wish to evaluate and build on our work. Finally, we reflect on our journey so far-discussing the evaluation of our system and offering some lessons learned that might inform future efforts in building technologies to aid in rapidly developing crises.
24
+
25
+ ## 2 Motivation and Approach
26
+
27
+ Our team was assembled on March 21, 2020 over Slack, comprising members of two research groups from the University of Waterloo and New York University. This was a natural outgrowth of existing collaborations, and thus we had rapport from the very beginning. Prior to these discussions, we had known about the CORD-19 dataset, but had not yet undertaken any serious attempt to build a research project around it.
28
+
29
+ Motivating our efforts, we believed that information access capabilities (search, question answering, etc.)-broadly, the types of technologies that our team works on-could be applied to provide users with high-quality information from the scientific literature, to inform evidence-based decision making and to support insight generation. Examples might include public health officials assessing the efficacy of population-level interventions, clinicians conducting meta-analyses to update care guidelines based on emerging clinical studies, and virologists probing the genetic structure of COVID-19 in search of vaccines. We hope to contribute to these efforts by building better information access capabilities and packaging them into useful applications.
30
+
31
+ ---
32
+
33
+ ${}^{1}$ https://pages.semanticscholar.org/coronavirus-research
34
+
35
+ ---
36
+
37
+ At the outset, we adopted a two-pronged strategy to build both end-to-end applications as well as modular, reusable components. The intended users of our systems are domain experts (e.g., clinicians and virologists) who would naturally demand responsive web applications with intuitive, easy-to-use interfaces. However, we also wished to build component technologies that could be shared with the research community, so that others can build on our efforts without "reinventing the wheel". To this end, we have released software artifacts (e.g., Java package in Maven Central, Python module on PyPI) that encapsulate some of our capabilities, complete with sample notebooks demonstrating their use. These notebooks support one-click replicability and provide a springboard for extensions.
38
+
39
+ ## 3 Technical Description
40
+
41
+ Multi-stage search architectures represent the most common design for modern search engines, with work in academia dating back over a decade (Matveeva et al., 2006; Wang et al., 2011; Asadi and Lin, 2013). Known production deployments of this architecture include the Bing web search engine (Pedersen, 2010) as well as Alibaba's e-commerce search engine (Liu et al., 2017).
42
+
43
+ The idea behind multi-stage ranking is straightforward: instead of a monolithic ranker, ranking is decomposed into a series of stages. Typically, the pipeline begins with an initial retrieval stage, most often using "bag of words" queries against an inverted index. One or more subsequent stages reranks and refines the candidate set successively until the final results are presented to the user.
44
+
45
+ This multi-stage ranking design provides a nice organizing structure for our efforts-in particular, it provides a clean interface between basic keyword search and subsequent neural reranking components. This allowed us to make progress independently in a decoupled manner, but also presents natural integration points.
46
+
47
+ ### 3.1 Modular and Reusable Keyword Search
48
+
49
+ In our design, initial retrieval is performed by the Anserini IR toolkit (Yang et al., 2017, 2018), ${}^{2}$ which we have been developing for several years and which powers a number of our previous systems that incorporate various neural architectures (Yang et al., 2019; Yilmaz et al., 2019). Anserini represents an effort to better align real-world search applications with academic information retrieval research: under the covers, it builds on the popular and widely-deployed open-source Lucene search library, on top of which we provide a number of missing features for conducting research on modern IR test collections.
50
+
51
+ Anserini provides an abstraction for document collections, and comes with a variety of adaptors for different corpora and formats: web pages in WARC containers, XML documents in tarballs, JSON objects in text files, etc. Providing simple keyword search over CORD-19 required only writing an adaptor for the corpus that allows Anserini to ingest the documents. We were able to implement such an adaptor in a short amount of time.
52
+
53
+ However, one important issue that immediately arose with CORD-19 concerned the granularity of indexing, i.e., what should we consider a "document", as the "atomic unit" of indexing and retrieval? One complication stems from the fact that the corpus contains a mix of articles that vary widely in length, not only in terms of natural variations, but also because the full text is not available for some documents. It is well known in the IR literature, dating back several decades (e.g., Singhal et al. 1996), that length normalization plays an important role in retrieval effectiveness.
54
+
55
+ Here, however, the literature does provide some guidance: previous work (Lin, 2009) showed that paragraph-level indexing can be more effective than the two other obvious alternatives of (a) indexing only the title and abstract of articles and (b) indexing each full-text article as a single, individual document. Based on this previous work, in addition to the two above conditions (for comparison purposes), we built (c) a paragraph-level index as follows: each full text article is segmented into paragraphs (based on existing annotations), and for each paragraph, we create a "document" for indexing comprising the title, abstract, and that paragraph. Thus, a full-text article comprising $n$ paragraphs yields $n + 1$ separate "retrievable units" in the index. To be consistent with standard IR parlance, we call each of these retrieval units a document, in a generic sense, despite their composite structure. An article for which we do not have the full text is represented by an individual document in this scheme. Note that while fielded search (dividing the text into separate fields and performing scoring separately for each field) can yield better results, for expediency we did not implement this. Following best practice, documents are ranked using the BM25 scoring function.
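+ As an illustration of this indexing scheme (a sketch with invented field names and identifiers, not the actual CORD-19 adaptor), an article with $n$ paragraphs can be expanded into $n + 1$ JSON records of the kind Anserini's JSON collection support can ingest:

```python
import json

def to_retrievable_units(article):
    """Expand one article into n + 1 'documents' for paragraph-level indexing."""
    title, abstract = article["title"], article["abstract"]
    units = [{"id": article["id"], "contents": f"{title} {abstract}"}]
    for i, paragraph in enumerate(article.get("paragraphs", [])):
        units.append({"id": f"{article['id']}.{i:05d}",
                      "contents": f"{title} {abstract} {paragraph}"})
    return units

article = {"id": "toy0001", "title": "A toy article title",
           "abstract": "A toy abstract.",
           "paragraphs": ["First body paragraph.", "Second body paragraph."]}

# One JSON object per line; Anserini can then index the file as a JSON collection.
with open("docs.jsonl", "w") as f:
    for unit in to_retrievable_units(article):
        f.write(json.dumps(unit) + "\n")
```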
56
+
57
+ ---
58
+
59
+ 2 http://anserini.io/
60
+
61
+ ---
62
+
63
+ Based on "eyeballing the results" using sample information needs (manually formulated into keyword queries) from the Kaggle challenge associated with CORD-19, ${}^{3}$ results from the paragraph index did appear to be better (see Section 4 for more discussion). In particular, the full-text index, i.e., condition (b) above, overly favored long articles, which were often book chapters and other material of a pedagogical nature, less likely to be relevant in our context. The paragraph index often retrieves multiple paragraphs from the same article, but we consider this to be a useful feature, since duplicates of the same underlying article can provide additional signals for evidence combination by downstream components.
64
+
65
+ Since Anserini is built on top of Lucene, which is implemented in Java, our tools are designed to run on the Java Virtual Machine (JVM). However, TensorFlow (Abadi et al., 2016) and PyTorch (Paszke et al., 2019), the two most popular neural network toolkits, use Python as their main language. More broadly, Python-with its diverse and mature ecosystem-has emerged as the language of choice for most data scientists today. Anticipating this gap, our team had been working on Pyserini, ${}^{4}$ Python bindings for Anserini, since late 2019. Pyserini is released as a Python module on PyPI and easily installable via pip. ${}^{5}$
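+ For illustration, keyword (BM25) retrieval over a prebuilt index then takes only a few lines of Pyserini. The class name and index path below are placeholders, and the exact API has varied across Pyserini releases, so treat this as a sketch rather than the calls used in the deployed system:

```python
from pyserini.search import SimpleSearcher

searcher = SimpleSearcher("indexes/cord19-paragraph")   # hypothetical local index path
hits = searcher.search("incubation period of the coronavirus", k=10)
for rank, hit in enumerate(hits, start=1):
    print(f"{rank:2d} {hit.docid} {hit.score:.4f}")      # BM25-ranked results
```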
66
+
67
+ Putting all the pieces together, by March 23, a scant two days after the formation of our team, we were able to release modular and reusable baseline keyword search components for accessing the CORD-19 collection. ${}^{6}$ Specifically, we shared pre-built Anserini indexes for CORD-19 and released an updated version of Anserini (the underlying IR toolkit, as a Maven artifact in the Maven Central Repository) as well as Pyserini (the Python interface, as a Python module on PyPI) that provided basic keyword search. Furthermore, these capabilities were demonstrated in online notebooks, so that other researchers can replicate our results and continue to build on them.
68
+
69
+ Finally, we demonstrated, also via a notebook, how basic keyword search can be seamlessly integrated with modern neural modeling techniques. On top of initial candidate documents retrieved from Pyserini, we implemented a simple unsupervised sentence highlighting technique to draw a reader's attention to the most pertinent passages in a document, using the pretrained BioBERT model (Lee et al., 2020) from the HuggingFace Transformer library (Wolf et al., 2019). We used BioBERT to convert sentences from the retrieved candidates and the query (which we treat as a sequence of keywords) into sets of hidden vectors. ${}^{7}$ We compute the cosine similarity between every combination of hidden states from the two sets, corresponding to a sentence and the query. We choose the top- $K$ words in the context, and then highlight the top sentences that contain those words. Despite its unsupervised nature, this approach appeared to accurately identify pertinent sentences based on context. Originally meant as a simple demonstration of how keyword search can be seamlessly integrated with neural network components, this notebook provided the basic approach for sentence highlighting that we would eventually deploy in the Neural Covidex (details below).
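+ The sketch below illustrates the idea (it is not the deployed highlighting code; the BioBERT checkpoint name and the example texts are stand-ins): encode the query and a candidate sentence with BioBERT through the HuggingFace library, compare token-level hidden states from a late layer by cosine similarity, and keep the best-matching words to drive the highlighting.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "dmis-lab/biobert-base-cased-v1.1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True)
model.eval()

def token_states(text):
    """Return per-token hidden states from a late layer, plus the tokens."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[-2][0]
    return hidden, tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

query_vecs, _ = token_states("incubation period of the coronavirus")
sent_vecs, sent_tokens = token_states(
    "The median incubation period was estimated to be about five days.")

# Cosine similarity between every (sentence token, query token) pair
sims = torch.nn.functional.cosine_similarity(
    sent_vecs.unsqueeze(1), query_vecs.unsqueeze(0), dim=-1)
best_match = sims.max(dim=1).values
top_k = best_match.topk(3).indices           # top-K words driving the highlight
print([sent_tokens[i] for i in top_k])
```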
70
+
71
+ ### 3.2 Keyword Search with Faceted Browsing
72
+
73
+ Python modules and notebooks are useful for fellow researchers, but it would be unreasonable to expect end users (for example, clinicians) to use them directly. Thus, we considered it a priority to deploy an end-to-end search application over CORD-19 with an easy-to-use interface.
74
+
75
+ Fortunately, our team had also been working on this, dating back to early 2019. In Clancy et al. (2019), we described integrating Anserini with Solr, so that we can use Anserini as a frontend to index directly into the Solr search platform. As Solr is also built on Lucene, such integration was not very onerous. On top of Solr, we were able to deploy the Blacklight search interface, ${}^{8}$ which is an application written in Ruby on Rails. In addition to providing basic support for query entry and results rendering, Blacklight also supports faceted browsing out of the box. With this combination-which had already been implemented for other corpora-our team was able to rapidly create a fully-featured search application on CORD-19, which we shared with the public on March 23 over social media. ${}^{9}$
76
+
77
+ ---
78
+
79
+ ${}^{3}$ https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge
80
+
81
+ 4http://pyserini.io/
82
+
83
+ 5https://pypi.org/project/pyserini/
84
+
85
+ 6https://twitter.com/lintool/status/1241881933031841800
86
+
87
+ ${}^{7}$ We used the hidden activations from the penultimate layer immediately before the final softmax layer.
88
+
89
+ ---
90
+
91
+ ![01963db7-e083-7059-9ec3-935a3902444e_3_347_187_960_829_0.jpg](images/01963db7-e083-7059-9ec3-935a3902444e_3_347_187_960_829_0.jpg)
92
+
93
+ Figure 1: Screenshot of our "basic" Covidex keyword search application, which builds on Anserini, Solr, and Blacklight, providing basic BM25 ranking and faceting browsing.
94
+
95
+ A screenshot of this interface is shown in Figure 1. Beyond standard "type in a query and get back a list of results" capabilities, it is worthwhile to highlight the faceted browsing feature. From CORD-19, we were able to easily expose facets corresponding to year, authors, journal, and source. Navigating by year, for example, would allow a user to focus on older coronavirus research (e.g., on SARS) or the latest research on COVID-19, and a combination of the journal and source facets would allow a user to differentiate between pre-prints and the peer-reviewed literature, and between venues with different reputations.
96
+
97
+ ### 3.3 The Neural Covidex
98
+
99
+ The Neural Covidex is a search engine that takes advantage of the latest advances in neural ranking architectures, representing a culmination of our current efforts. Even before embarking on this project, our team had been active in exploring neural architectures for information access problems, particularly deep transformer models that have been pretrained on language modeling objectives: We were the first to apply BERT (Devlin et al., 2019) to the passage ranking problem. BERTserini (Yang et al., 2019) was among the first to apply deep transformer models to the retrieval-based question answering directly on large corpora. Birch (Yilmaz et al., 2019) represents the state of the art in document ranking (as of EMNLP 2019). All of these systems were built on Anserini.
100
+
101
+ In this project, however, we decided to incorporate our latest work based on ranking with sequence-to-sequence models (Nogueira et al., 2020). Our reranker, which consumes the candidate documents retrieved from CORD-19 by Pyserini using BM25 ranking, is based on the T5-base model (Raffel et al., 2019) that has been modified to perform a ranking task. Given a query $q$ and a set of candidate documents $d \in D$, we construct the following input sequence to feed into T5-base:
102
+
103
+ ---
104
+
105
+ 8 https://projectblacklight.org/
106
+
107
+ 9https://twitter.com/lintool/status/1242085391123066880
108
+
109
+ ---
110
+
111
+ Query: $q$ Document: $d$ Relevant: (1)
112
+
113
+ The model is fine-tuned to produce either "true" or "false" depending on whether the document is relevant or not to the query. That is, "true" and "false" are the ground truth predictions in the sequence-to-sequence task, what we call the "target words".
114
+
115
+ At inference time, to compute probabilities for each query-document pair (in a reranking setting), we apply a softmax only on the logits of the "true" and "false" tokens. We rerank the candidate documents according to the probabilities assigned to the "true" token. See Nogueira et al. (2020) for additional details about this logit normalization trick and the effects of different target words.
116
+
117
+ Since we do not have training data specific to CORD-19, we fine-tuned our model on the MS MARCO passage dataset (Nguyen et al., 2016), which comprises ${8.8}\mathrm{M}$ passages obtained from the top 10 results retrieved by the Bing search engine (based on around $1\mathrm{M}$ queries). The training set contains approximately ${500}\mathrm{k}$ pairs of query and relevant documents, where each query has one relevant passage on average; non-relevant documents for training are also provided as part of the training data. Nogueira et al. (2020) and Yilmaz et al. (2019) had both previously demonstrated that models trained on MS MARCO can be directly applied to other document ranking tasks. We hoped that this would also be the case for CORD-19.
118
+
119
+ We fine-tuned our T5-base model with a constant learning rate of ${10}^{-3}$ for ${10}\mathrm{k}$ iterations with class-balanced batches of size 256. We used a maximum of 512 input tokens and one output token (i.e., either "true" or "false", as described above). In the MS MARCO passage dataset, none of the inputs required truncation when using this length limit. Training the model takes approximately 4 hours on a single Google TPU v3-8.
120
+
121
+ For the Neural Covidex, we used the paragraph index built by Anserini over CORD-19 (see Section 3.1). Since some of the documents are longer than the length restrictions of the model, it is not feasible to directly apply our method to the entire text at once. To address this issue, we first segment each document into spans by applying a sliding window of 10 sentences with a stride of 5. We then obtain a probability of relevance for each span by performing inference on it independently. We select the highest probability among these spans as the relevance probability of the document. Note that with the paragraph index, keyword search might retrieve multiple paragraphs from the same underlying article; our technique essentially takes the highest-scoring span across all these retrieved results as the score for that article to produce a final ranking of articles. That is, in the final interface, we deduplicate paragraphs so that each article only appears once in the results.
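+ A hedged sketch of this scoring step is shown below, using the publicly released castorini/monot5-base-msmarco checkpoint as a stand-in for the fine-tuned T5 described above (the deployed model and code may differ). In the full pipeline, this function would be applied to every 10-sentence span of a document and the maximum probability taken as the document score.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "castorini/monot5-base-msmarco"   # stand-in checkpoint, not the deployed model
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)
model.eval()

true_id = tokenizer.encode("true")[0]
false_id = tokenizer.encode("false")[0]

def relevance_probability(query, passage):
    # Input template from the paper: "Query: q Document: d Relevant:"
    text = f"Query: {query} Document: {passage} Relevant:"
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    start = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=start).logits[0, -1]
    # Softmax over the "true"/"false" logits only; P("true") is the relevance score
    probs = torch.softmax(logits[[true_id, false_id]], dim=0)
    return probs[0].item()

print(relevance_probability(
    "what is the incubation period of covid-19",
    "The median incubation period was estimated to be about five days."))
```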
122
+
123
+ A screenshot of the Neural Covidex is shown in Figure 2. By default, the abstract of each article is displayed, but the user can click to reveal the relevant paragraph from that article (for those with full text). The most salient sentence is highlighted, using exactly the technique described in Section 3.1 that we initially prototyped in a notebook.
124
+
125
+ Architecturally, the Neural Covidex is currently built as a monolith (with future plans to refactor into more modular microservices), where all incoming API requests are handled by a service that performs searching, reranking, and text highlighting. Search is performed with Pyserini (as discussed in Section 3.1), reranking with T5 (discussed above), and text highlighting with BioBERT (also discussed in Section 3.1). The system is built using the FastAPI Python web framework, which was chosen for speed and ease of use. ${}^{10}$ The frontend UI is built with React to support the use of modular, declarative JavaScript components, ${}^{11}$ taking advantage of its vast ecosystem.
126
+
127
+ The system is currently deployed across a small cluster of servers, each with two NVIDIA V100 GPUs, as our pipeline requires neural network inference at query time (T5 for reranking, BioBERT for highlighting). Each server runs the complete software stack in a simple replicated setup (no partitioning). On top of this, we leverage Cloudflare as a simple load balancer, which uses a round robin scheme to dispatch requests across the different servers. ${}^{12}$ The end-to-end latency for a typical query is around two seconds.
128
+
129
+ On April 2, 2020, a little more than a week after publicly releasing the basic keyword search interface and associated components, we launched the Neural Covidex on social media. ${}^{13}$
130
+
131
+ ---
132
+
133
+ 10 https://fastapi.tiangolo.com/
134
+
135
+ 11https://reactjs.org/
136
+
137
+ 12https://www.cloudflare.com/
138
+
139
+ 13https://twitter.com/lintool/status/1245749445930688514
140
+
141
+ ---
142
+
143
+ ![01963db7-e083-7059-9ec3-935a3902444e_5_348_189_959_825_0.jpg](images/01963db7-e083-7059-9ec3-935a3902444e_5_348_189_959_825_0.jpg)
144
+
145
+ Figure 2: Screenshot of our Neural Covidex application, which builds on BM25 rankings from Pyserini, neural reranking using T5, and unsupervised sentence highlighting using BioBERT.
146
+
147
+ ## 4 Evaluation or the Lack Thereof
148
+
149
+ It is, of course, expected that papers today have an evaluation section that attempts to empirically quantify the effectiveness of their proposed techniques and to support the claims to innovation made by the authors. Is our system any good? Quite honestly, we don't know.
150
+
151
+ At this point, all we can do is to point to previous work, in which nearly all the components that comprise our Neural Covidex have been evaluated separately, in their respective contexts (which of course is very different from the present application). While previous papers support our assertion that we are deploying state-of-the-art neural models, we currently have no conclusive evidence that they are effective for the CORD-19 corpus, previous results on cross-domain transfer notwithstanding (Yilmaz et al., 2019; Nogueira et al., 2020).
152
+
153
+ The evaluation problem, however, is far more complex than this. Since Neural Covidex is, at its core, a search engine, the impulse would be to evaluate it as such: using well-established methodologies based on test collections-comprising topics (information needs) and relevance judgments (human annotations). It is not clear if existing test collections-such as resources from the TREC Precision Medicine Track (Roberts et al., 2019) and other TREC evaluations dating even further back, or the BioASQ challenge (Tsatsaronis et al., 2015)—are useful for information needs against CORD-19. If no appropriate test collections exist, the logical chain of reasoning would compel the creation of one, and indeed, there are efforts underway to do exactly this. ${}^{14}$
154
+
155
+ Such an approach-which will undoubtedly provide the community with valuable resources-presupposes that better ranking is needed. While improved ranking would always be welcomed, it is not clear that better ranking is the most urgent "missing ingredient" that will address the information access problem faced by stakeholders today. For example, in anecdotal feedback we've received, users remarked that they liked the highlighting that our interface provides to draw attention to the most salient passages. An evaluation of ranking would not cover this presentational aspect of an end-to-end system.
156
+
157
+ ---
158
+
159
+ ${}^{14}$ https://dmice.ohsu.edu/hersh/ COVIDSearch.html
160
+
161
+ ---
162
+
163
+ One important lesson from the information retrieval literature, dating back two decades, ${}^{15}$ is that batch retrieval evaluations (e.g., measuring mAP, nDCG, etc.) often yield very different conclusions than end-to-end, human-in-the-loop evaluations (Hersh et al., 2000; Turpin and Hersh, 2001). As an example, a search engine that provides demonstrably inferior ranking might actually be quite useful from a task completion perspective because it provides other features and supports user behaviors that compensate for any deficiencies (Lin and Smucker, 2008).
164
+
165
+ Even more broadly, it could very well be the case that search is completely the wrong capability to pursue. For example, it might be the case that users really want a filtering and notification service in which they "register" a standing query, and desire that a system "push" them relevant information as it becomes available (for example, in an email digest). Something along the lines of the recent TREC Microblog Tracks (Lin et al., 2015) might be a better model of the information needs. Such filtering and notification capabilities may even be more critical than user-initiated search in the present context due to the rapidly growing literature.
166
+
167
+ Our point is: we don't actually know how our systems (or any of their individual components) can concretely contribute to efforts to tackle the ongoing pandemic until we receive guidance from real users who are engaged in those efforts. Of course, they're all on the frontlines and have no time to provide feedback. Therein lies the challenge: how to build improved fire-fighting capabilities for tomorrow without bothering those who are trying to fight the fires that are already raging in front of us.
168
+
169
+ Now that we have a basic system in place, our efforts have shifted to broader engagement with potential stakeholders to solicit additional guidance, while trying to balance exactly the tradeoff discussed above. For our project, and for the community as a whole, we argue that informal "hallway usability testing" (virtually, of course) is still highly informative and insightful. Until we have a better sense of what users really need, discussions of performance in terms of nDCG, BLEU, and ${\mathrm{F}}_{1}$ (pick your favorite metric) are premature. We believe the system we have deployed will assist us in understanding the true needs of those who are on the frontlines.
170
+
171
+ ## 5 Lessons Learned
172
+
173
+ First and foremost, the rapid development and deployment of the Neural Covidex and all the associated software components is a testament to the power of open source, open science, and the maturity of the modern software ecosystem. For example, our project depends on Apache Lucene, Apache Solr, Project Blacklight, React, FastAPI, PyTorch, TensorFlow, the HuggingFace Transformers library, and more. These existing projects represent countless hours of effort by numerous individuals with very different skill sets, at all levels of the software stack. We are indebted to the contributors of all these software projects, without which our own systems could not have gotten off the ground so quickly.
174
+
175
+ In addition to software components, our efforts would not have been possible without the community culture of open data sharing-starting, of course, from CORD-19 itself. The Allen Institute for AI deserves tremendous credit for their tireless efforts in curating the articles, incrementally expanding the corpus, and continuously improving the data quality (data cleaning, as we all know, is ${80}\%$ of data science). The rapid recent advances in neural architectures for NLP largely come from transformers that have been pretrained with language modeling objectives. Pretraining, of course, requires enormous amounts of hardware resources, and the fact that our community has developed an open culture where these models are freely shared has broadened and accelerated advances tremendously. We are beneficiaries of this sharing. Pretrained models then need to be fine-tuned for the actual downstream task, and for search-related tasks, the single biggest driver of recent progress has been Microsoft's release of the MS MARCO dataset (Nguyen et al., 2016). Without exaggeration, much of our recent work would not exist without this treasure trove.
176
+
177
+ Second, we learned from this experience that preparation matters, in the sense that an emphasis on good software engineering practices in our research groups (that long predate the present crisis) have paid off in enabling our team to rapidly retarget existing components to CORD-19. This is especially true of the "foundational" components at the bottom of our stack: Anserini has been in development for several years, with an emphasis on providing easily replicable and reusable keyword search capabilities. The Pyserini interface to Anserini had also been in development since late 2019, providing a clean Python interface to Anserini. While the ability to rapidly explore new research ideas is important, investments in software engineering best practices are worthwhile and pay large dividends in the long run.
178
+
179
+ ---
180
+
181
+ ${}^{15}$ Which means that students have likely not heard of this work and researchers might have likely forgotten it.
182
+
183
+ ---
184
+
185
+ These practices go hand-in-hand with open-source release of software artifacts that allow others to replicate results reported in research papers. While open-sourcing research code has already emerged as a norm in our community, to us this is more than a "code dump". Refactoring research code into software artifacts that have at least some semblance of interface abstractions for reusability, writing good documentation to aid replication efforts, and other thankless tasks consume enormous amounts of effort-and without a faculty advisor's strong insistence, often never happens. Ultimately, we feel this is a matter of the "culture" of a research group- and cannot be instilled overnight-but our team's rapid progress illustrates that building such cultural norms is worthwhile.
186
+
187
+ Finally, these recent experiences have refreshed a lesson that we've already known, but needed reminding: there's a large gap between code for producing results in research papers and a real, live, deployed system. We illustrate with two examples:
188
+
189
+ Our reranking necessitates computationally-expensive neural network inference on GPUs at query time. If we were simply running experiments for a research paper, this would not be a concern, since evaluations could be conducted in batch, and we would not be concerned with how long inference took to generate the results. However, in a live system, both latency (where we test the patience of an individual user) and throughput (which dictates how many concurrent users we could serve) are critical. Even after the initial implementation of the Neural Covidex had been completed-and we had informally shared the system with colleagues-it required several more days of effort until we were reasonably confident that we could handle a public release, with potentially concurrent usage. During this time, we focused on issues such as hardware provisioning, load balancing, load testing, deploy processes, and other important operational concerns. Researchers simply wishing to write papers need not worry about any of these issues.
190
+
191
+ Furthermore, in a live system, presentational details become disproportionately important. In our initial deployment, rendered text contained artifacts of the underlying tokenization by the neural models; for example, "COVID-19" appeared as "COVID - 19" with added spaces. Also, we had minor issues with the highlighting service, in that sometimes the highlights did not align perfectly with the underlying sentences. These were no doubt relatively trivial matters of software engineering, but in initial informal evaluations, users kept mentioning these imperfections over and over again-to the extent, we suspect, that it was distracting them from considering the underlying quality of the ranking. Once again, these were issues that would have never cropped up if our end goal was to simply write research papers, not deploy a live system to serve users.
192
+
193
+ ## 6 Conclusions
194
+
195
+ This paper describes our initial efforts in building the Neural Covidex, which incorporates the latest neural architectures to provide information access capabilities to AI2's CORD-19. We hope that our systems and components can prove useful in the fight against this global pandemic, and that the capabilities we've developed can be applied to analyzing the scientific literature more broadly.
196
+
197
+ ## 7 Acknowledgments
198
+
199
+ This research was supported in part by the Canada First Research Excellence Fund, the Natural Sciences and Engineering Research Council (NSERC) of Canada, NVIDIA, and eBay. We'd like to thank Kyle Lo from AI2 for helpful discussions and Colin Raffel from Google for his assistance with T5.
200
+
201
+ ## References
202
+
203
+ Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI '16), pages 265-283.
204
+
205
+ Nima Asadi and Jimmy Lin. 2013. Effectiveness/efficiency tradeoffs for candidate generation in multi-stage retrieval architectures. In Proceedings of the 36th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2013), pages 997-1000, Dublin, Ireland.
206
+
207
+ Ryan Clancy, Toke Eskildsen, Nick Ruest, and Jimmy Lin. 2019. Solr integration in the Anserini information retrieval toolkit. In Proceedings of the 42nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2019), pages 1285-1288, Paris, France.
210
+
211
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota.
212
+
213
+ William R. Hersh, Andrew Turpin, Susan Price, Benjamin Chan, Dale Kramer, Lynetta Sacherek, and Daniel Olson. 2000. Do batch and user evaluations give the same results? In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2000), pages 17-24, Athens, Greece.
214
+
215
+ Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
216
+
217
+ Jimmy Lin. 2009. Is searching full text more effective than searching abstracts? BMC Bioinformatics, 10:46.
218
+
219
+ Jimmy Lin, Miles Efron, Yulu Wang, Garrick Sherman, and Ellen Voorhees. 2015. Overview of the TREC-2015 Microblog Track. In Proceedings of the Twenty-Fourth Text REtrieval Conference (TREC 2015), Gaithersburg, Maryland.
220
+
221
+ Jimmy Lin and Mark D. Smucker. 2008. How do users find things with PubMed? Towards automatic utility evaluation with user simulations. In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2008), pages 19-26, Singapore.
222
+
223
+ Shichen Liu, Fei Xiao, Wenwu Ou, and Luo Si. 2017. Cascade ranking for operational e-commerce search. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD 2017), pages 1557-1565, Halifax, Nova Scotia, Canada.
224
+
225
+ Irina Matveeva, Chris Burges, Timo Burkard, Andy Laucius, and Leon Wong. 2006. High accuracy retrieval with multiple nested ranker. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2006), pages 437-444, Seattle, Washington.
226
+
227
+ Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: a human-generated machine reading comprehension dataset.
228
+
229
+ Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. arXiv:2003.06713.
230
+
231
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024-8035.
232
+
233
+ Jan Pedersen. 2010. Query understanding at Bing. In Industry Track Keynote at the 33rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2010), Geneva, Switzerland.
234
+
235
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. In arXiv:1910.10683.
236
+
237
+ Kirk Roberts, Dina Demner-Fushman, Ellen M. Voorhees, William R. Hersh, Steven Bedrick, Alexander J. Lazar, Shubham Pant, and Funda Meric-Bernstam. 2019. Overview of the TREC 2019 precision medicine track. In Proceedings of the Twenty-Eighth Text REtrieval Conference (TREC 2019), Gaithersburg, Maryland.
238
+
239
+ Amit Singhal, Chris Buckley, and Mandar Mitra. 1996. Pivoted document length normalization. In Proceedings of the 19th Annual International ACM SI-GIR Conference on Research and Development in Information Retrieval (SIGIR 1996), pages 21-29, Zürich, Switzerland.
240
+
241
+ George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R. Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, et al. 2015. An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition. BMC Bioinformatics, 16(1):138.
242
+
243
+ Andrew Turpin and William R. Hersh. 2001. Why batch and user evaluations do not give the same results. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2001), pages 225-231, New Orleans, Louisiana.
244
+
245
+ Lidan Wang, Jimmy Lin, and Donald Metzler. 2011. A cascade ranking model for efficient ranked retrieval. In Proceedings of the 34th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2011), pages 105-114, Beijing, China.
246
+
247
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Transformers: State-of-the-art natural language processing. arXiv:1910.03771.
248
+
249
+ Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: enabling the use of Lucene for information retrieval research. In Proceedings of the 40th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2017), pages 1253-1256, Tokyo, Japan.
250
+
251
+ Peilin Yang, Hui Fang, and Jimmy Lin. 2018. Anserini: reproducible ranking baselines using Lucene. Journal of Data and Information Quality, 10(4):Article 16.
252
+
253
+ Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with BERTserini. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 72-77, Minneapolis, Minnesota.
254
+
255
+ Zeynep Akkalyoncu Yilmaz, Wei Yang, Haotian Zhang, and Jimmy Lin. 2019. Cross-domain modeling of sentence-level evidence for document retrieval. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3481-3487, Hong Kong, China.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/PlUA_mgGaPq/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,171 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § RAPIDLY DEPLOYING A NEURAL SEARCH ENGINE FOR THE COVID-19 OPEN RESEARCH DATASET: PRELIMINARY THOUGHTS AND LESSONS LEARNED
2
+
3
+ Edwin Zhang, ${}^{1}$ Nikhil Gupta, ${}^{1}$ Rodrigo Nogueira, ${}^{1}$ Kyunghyun Cho, ${}^{2,3,4,5}$ and Jimmy Lin ${}^{1}$
4
+
5
+ ${}^{1}$ David R. Cheriton School of Computer Science, University of Waterloo
6
+
7
+ ${}^{2}$ Courant Institute of Mathematical Sciences, New York University
8
+
9
+ ${}^{3}$ Center for Data Science, New York University
10
+
11
+ ${}^{4}$ Facebook AI Research ${}^{5}$ CIFAR Associate Fellow
12
+
13
+ § ABSTRACT
14
+
15
+ We present the Neural Covidex, a search engine that exploits the latest neural ranking architectures to provide information access to the COVID-19 Open Research Dataset curated by the Allen Institute for AI. This web application exists as part of a suite of tools that we have developed over the past few weeks to help domain experts tackle the ongoing global pandemic. We hope that improved information access capabilities to the scientific literature can inform evidence-based decision making and insight generation. This paper describes our initial efforts and offers a few thoughts about lessons we have learned along the way.
16
+
17
+ § 1 INTRODUCTION
18
+
19
+ As a response to the worldwide COVID-19 pandemic, on March 13, 2020, the Allen Institute for AI released the COVID-19 Open Research Dataset (CORD-19) in partnership with a coalition of research groups. ${}^{1}$ With weekly updates since the initial release, the corpus currently contains over 47,000 scholarly articles, including over 36,000 with full text, about COVID-19 and coronavirus-related research more broadly (for example, SARS and MERS), drawn from a variety of sources including PubMed, a curated list of articles from the WHO, as well as preprints from bioRxiv and medRxiv. The stated goal of the effort is "to mobilize researchers to apply recent advances in natural language processing to generate new insights in support of the fight against this infectious disease". We responded to this call to arms.
20
+
21
+ In approximately two weeks, our team was able to build, deploy, and share with the research community a number of components that support information access to this corpus. We have also assembled these components into two end-to-end search applications that are available online at covidex.ai: a keyword-based search engine that supports faceted browsing and the Neural Covidex, a search engine that exploits the latest advances in deep learning and neural architectures for ranking. This paper describes our initial efforts.
22
+
23
+ We have several goals for this paper: First, we discuss our motivation and approach, articulating how, hopefully, better information access capabilities can contribute to the fight against this global pandemic. Second, we provide a technical description of what we have built. Previously, this information was scattered on different web pages, in tweets, and ephemeral discussions with colleagues over video conferences and email. Gathering all this information in one place is important for other researchers who wish to evaluate and build on our work. Finally, we reflect on our journey so far-discussing the evaluation of our system and offering some lessons learned that might inform future efforts in building technologies to aid in rapidly developing crises.
24
+
25
+ § 2 MOTIVATION AND APPROACH
26
+
27
+ Our team was assembled on March 21, 2020 over Slack, comprising members of two research groups from the University of Waterloo and New York University. This was a natural outgrowth of existing collaborations, and thus we had rapport from the very beginning. Prior to these discussions, we had known about the CORD-19 dataset, but had not yet undertaken any serious attempt to build a research project around it.
28
+
29
+ Motivating our efforts, we believed that information access capabilities (search, question answering, etc.)-broadly, the types of technologies that our team works on-could be applied to provide users with high-quality information from the scientific literature, to inform evidence-based decision making and to support insight generation. Examples might include public health officials assessing the efficacy of population-level interventions, clinicians conducting meta-analyses to update care guidelines based on emerging clinical studies, virologist probing the genetic structure of COVID-19 in search of vaccines. We hope to contribute to these efforts by building better information access capabilities and packaging them into useful applications.
30
+
31
+ 'https://pages.semanticscholar.org/ coronavirus-research
32
+
33
+ At the outset, we adopted a two-pronged strategy to build both end-to-end applications as well as modular, reusable components. The intended users of our systems are domain experts (e.g., clinicians and virologists) who would naturally demand responsive web applications with intuitive, easy-to-use interfaces. However, we also wished to build component technologies that could be shared with the research community, so that others can build on our efforts without "reinventing the wheel". To this end, we have released software artifacts (e.g., Java package in Maven Central, Python module on PyPI) that encapsulate some of our capabilities, complete with sample notebooks demonstrating their use. These notebooks support one-click replicability and provide a springboard for extensions.
34
+
35
+ § 3 TECHNICAL DESCRIPTION
36
+
37
+ Multi-stage search architectures represent the most common design for modern search engines, with work in academia dating back over a decade (Matveeva et al., 2006; Wang et al., 2011; Asadi and Lin, 2013). Known production deployments of this architecture include the Bing web search engine (Pedersen, 2010) as well as Alibaba's e-commerce search engine (Liu et al., 2017).
38
+
39
+ The idea behind multi-stage ranking is straightforward: instead of a monolithic ranker, ranking is decomposed into a series of stages. Typically, the pipeline begins with an initial retrieval stage, most often using "bag of words" queries against an inverted index. One or more subsequent stages reranks and refines the candidate set successively until the final results are presented to the user.
40
+
41
+ This multi-stage ranking design provides a nice organizing structure for our efforts-in particular, it provides a clean interface between basic keyword search and subsequent neural reranking components. This allowed us to make progress independently in a decoupled manner, but also presents natural integration points.
42
+
43
+ § 3.1 MODULAR AND REUSABLE KEYWORD SEARCH
44
+
45
+ In our design, initial retrieval is performed by the Anserini IR toolkit (Yang et al.,2017,2018), ${}^{2}$ which we have been developing for several years and powers a number of our previous systems that incorporates various neural architectures (Yang et al., 2019; Yilmaz et al., 2019). Anserini represents an effort to better align real-world search applications with academic information retrieval research: under the covers, it builds on the popular and widely-deployed open-source Lucene search library, on top of which we provide a number of missing features for conducting research on modern IR test collections.
46
+
47
+ Anserini provides an abstraction for document collections, and comes with a variety of adaptors for different corpora and formats: web pages in WARC containers, XML documents in tarballs, JSON objects in text files, etc. Providing simple keyword search over CORD-19 required only writing an adaptor for the corpus that allows Anserini to ingest the documents. We were able to implement such an adaptor in a short amount of time.
48
+
49
+ However, one important issue that immediately arose with CORD-19 concerned the granularity of indexing, i.e., what should we consider a "document", as the "atomic unit" of indexing and retrieval? One complication stems from the fact that the corpus contains a mix of articles that vary widely in length, not only in terms of natural variations, but also because the full text is not available for some documents. It is well known in the IR literature, dating back several decades (e.g., Singhal et al. 1996), that length normalization plays an important role in retrieval effectiveness.
50
+
51
+ Here, however, the literature does provide some guidance: previous work (Lin, 2009) showed that paragraph-level indexing can be more effective than the two other obvious alternatives of (a) indexing only the title and abstract of articles and (b) indexing each full-text article as a single, individual document. Based on this previous work, in addition to the two above conditions (for comparison purposes), we built (c) a paragraph-level index as follows: each full text article is segmented into paragraphs (based on existing annotations), and for each paragraph, we create a "document" for indexing comprising the title, abstract, and that paragraph. Thus, a full-text article comprising $n$ paragraphs yields $n + 1$ separate "retrievable units" in the index. To be consistent with standard IR parlance, we call each of these retrieval units a document, in a generic sense, despite their composite structure. An article for which we do not have the full text is represented by an individual document in this scheme. Note that while fielded search (dividing the text into separate fields and performing scoring separately for each field) can yield better results, for expediency we did not implement this. Following best practice, documents are ranked using the BM25 scoring function.
52
+
53
+ 2 http://anserini.io/
54
+
55
+ Based on "eyeballing the results" using sample information needs (manually formulated into keyword queries) from the Kaggle challenge associated with CORD-19, ${}^{3}$ results from the paragraph index did appear to be better (see Section 4 for more discussion). In particular, the full-text index, i.e., condition (b) above, overly favored long articles, which were often book chapters and other material of a pedagogical nature, less likely to be relevant in our context. The paragraph index often retrieves multiple paragraphs from the same article, but we consider this to be a useful feature, since duplicates of the same underlying article can provide additional signals for evidence combination by downstream components.
56
+
57
+ Since Anserini is built on top of Lucene, which is implemented in Java, our tools are designed to run on the Java Virtual Machine (JVM). However, TensorFlow (Abadi et al., 2016) and PyTorch (Paszke et al., 2019), the two most popular neural network toolkits, use Python as their main language. More broadly, Python, with its diverse and mature ecosystem, has emerged as the language of choice for most data scientists today. Anticipating this gap, our team had been working on Pyserini, ${}^{4}$ Python bindings for Anserini, since late 2019. Pyserini is released as a Python module on PyPI and is easily installable via pip. ${}^{5}$
58
+
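+ As a rough illustration of the kind of keyword search these components enable (not the exact notebooks we released), the sketch below issues a BM25 query with Pyserini; the index path is a placeholder and the module layout differs somewhat across Pyserini releases.
+
+ ```python
+ # Hedged sketch: BM25 keyword search over a CORD-19 index with Pyserini.
+ # The index path is assumed; point it at wherever a pre-built Anserini index lives.
+ from pyserini.search import SimpleSearcher
+
+ searcher = SimpleSearcher('indexes/cord19-paragraph')
+ hits = searcher.search('incubation period of COVID-19', k=10)   # BM25 by default
+
+ for rank, hit in enumerate(hits, start=1):
+     print(f'{rank:2} {hit.docid:30} {hit.score:.4f}')
+ ```
+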
59
+ Putting all the pieces together, by March 23, a scant two days after the formation of our team, we were able to release modular and reusable baseline keyword search components for accessing the CORD-19 collection. ${}^{6}$ Specifically, we shared pre-built Anserini indexes for CORD-19 and released updated versions of Anserini (the underlying IR toolkit, as a Maven artifact in the Maven Central Repository) as well as Pyserini (the Python interface, as a Python module on PyPI) that provided basic keyword search. Furthermore, these capabilities were demonstrated in online notebooks, so that other researchers could replicate our results and continue to build on them.
60
+
61
+ Finally, we demonstrated, also via a notebook, how basic keyword search can be seamlessly integrated with modern neural modeling techniques. On top of initial candidate documents retrieved from Pyserini, we implemented a simple unsupervised sentence highlighting technique to draw a reader's attention to the most pertinent passages in a document, using the pretrained BioBERT model (Lee et al., 2020) from the HuggingFace Transformers library (Wolf et al., 2019). We used BioBERT to convert sentences from the retrieved candidates and the query (which we treat as a sequence of keywords) into sets of hidden vectors. ${}^{7}$ We computed the cosine similarity between every combination of hidden states from the two sets, corresponding to a sentence and the query. We then chose the top- $K$ words in the context and highlighted the top sentences that contain those words. Despite its unsupervised nature, this approach appeared to accurately identify pertinent sentences based on context. Originally meant as a simple demonstration of how keyword search can be seamlessly integrated with neural network components, this notebook provided the basic approach for sentence highlighting that we would eventually deploy in the Neural Covidex (details below).
62
+
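+ The snippet below is an illustrative reconstruction of that unsupervised highlighting idea using a community BioBERT checkpoint from the HuggingFace hub; it follows the description above (penultimate-layer vectors, pairwise cosine similarities, top-scoring words) but is not the exact notebook code.
+
+ ```python
+ # Illustrative reconstruction of the unsupervised highlighting technique; not the
+ # exact notebook code. The checkpoint name is one publicly available BioBERT port.
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+
+ name = "monologg/biobert_v1.1_pubmed"
+ tokenizer = AutoTokenizer.from_pretrained(name)
+ model = AutoModel.from_pretrained(name, output_hidden_states=True)
+ model.eval()
+
+ def token_vectors(text):
+     """Tokens and their hidden vectors from the penultimate encoder layer."""
+     enc = tokenizer(text, return_tensors="pt", truncation=True)
+     with torch.no_grad():
+         hidden = model(**enc).hidden_states[-2].squeeze(0)
+     return tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), hidden
+
+ q_toks, q_vecs = token_vectors("coronavirus incubation period")
+ s_toks, s_vecs = token_vectors("The median incubation period was estimated to be 5.1 days.")
+
+ # Cosine similarity between every (sentence token, query token) pair.
+ sims = torch.nn.functional.cosine_similarity(
+     s_vecs.unsqueeze(1), q_vecs.unsqueeze(0), dim=-1)        # shape [len(sent), len(query)]
+ top = sims.max(dim=1).values.topk(k=3).indices               # top-K words in the context
+ print([s_toks[i] for i in top])                              # candidate words for highlighting
+ ```
+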
63
+ § 3.2 KEYWORD SEARCH WITH FACETED BROWSING
64
+
65
+ Python modules and notebooks are useful for fellow researchers, but it would be unreasonable to expect end users (for example, clinicians) to use them directly. Thus, we considered it a priority to deploy an end-to-end search application over CORD-19 with an easy-to-use interface.
66
+
67
+ Fortunately, our team had also been working on this, dating back to early 2019. In Clancy et al. (2019), we described integrating Anserini with Solr, so that we can use Anserini as a frontend to index directly into the Solr search platform. As Solr is also built on Lucene, such integration was not very onerous. On top of Solr, we were able to deploy the Blacklight search interface, ${}^{8}$ which is an application written in Ruby on Rails. In addition to providing basic support for query entry and results rendering, Blacklight also supports faceted browsing out of the box. With this combination-which had already been implemented for other corpora-our team was able to rapidly create a fully-featured search application on CORD-19, which we shared with the public on March 23 over social media. ${}^{9}$
68
+
69
+ ${}^{3}$ https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge
70
+
71
+ ${}^{4}$ http://pyserini.io/
72
+
73
+ ${}^{5}$ https://pypi.org/project/pyserini/
74
+
75
+ ${}^{6}$ https://twitter.com/lintool/status/1241881933031841800
76
+
77
+ ${}^{7}$ We used the hidden activations from the penultimate layer immediately before the final softmax layer.
78
+
79
80
+
81
+ Figure 1: Screenshot of our "basic" Covidex keyword search application, which builds on Anserini, Solr, and Blacklight, providing basic BM25 ranking and faceted browsing.
82
+
83
+ A screenshot of this interface is shown in Figure 1. Beyond standard "type in a query and get back a list of results" capabilities, it is worthwhile to highlight the faceted browsing feature. From CORD-19, we were able to easily expose facets corresponding to year, authors, journal, and source. Navigating by year, for example, would allow a user to focus on older coronavirus research (e.g., on SARS) or the latest research on COVID-19, and a combination of the journal and source facets would allow a user to differentiate between pre-prints and the peer-reviewed literature, and between venues with different reputations.
84
+
85
+ § 3.3 THE NEURAL COVIDEX
86
+
87
+ The Neural Covidex is a search engine that takes advantage of the latest advances in neural ranking architectures, representing a culmination of our current efforts. Even before embarking on this project, our team had been active in exploring neural architectures for information access problems, particularly deep transformer models that have been pretrained on language modeling objectives: We were the first to apply BERT (Devlin et al., 2019) to the passage ranking problem. BERTserini (Yang et al., 2019) was among the first to apply deep transformer models to retrieval-based question answering directly on large corpora. Birch (Yilmaz et al., 2019) represents the state of the art in document ranking (as of EMNLP 2019). All of these systems were built on Anserini.
88
+
89
+ In this project, however, we decided to incorporate our latest work based on ranking with sequence-to-sequence models (Nogueira et al., 2020). Our reranker, which consumes the candidate documents retrieved from CORD-19 by Pyserini using BM25 ranking, is based on the T5-base model (Raffel et al., 2019) that has been modified to perform a ranking task. Given a query $q$ and a set of candidate documents $d \in D$ , we construct the following input sequence to feed into T5-base:
90
+
91
+ ${}^{8}$ https://projectblacklight.org/
92
+
93
+ ${}^{9}$ https://twitter.com/lintool/status/1242085391123066880
94
+
95
+ $$\text{Query: } q \quad \text{Document: } d \quad \text{Relevant:} \tag{1}$$
96
+
97
+ The model is fine-tuned to produce either "true" or "false" depending on whether the document is relevant to the query or not. That is, "true" and "false" are the ground-truth predictions in the sequence-to-sequence task; these are what we call the "target words".
98
+
99
+ At inference time, to compute probabilities for each query-document pair (in a reranking setting), we apply a softmax only on the logits of the "true" and "false" tokens. We rerank the candidate documents according to the probabilities assigned to the "true" token. See Nogueira et al. (2020) for additional details about this logit normalization trick and the effects of different target words.
100
+
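+ A hedged sketch of this scoring scheme is shown below, using HuggingFace's T5 classes; the stock "t5-base" checkpoint stands in for our MS MARCO fine-tuned reranker, so the code illustrates the mechanics rather than reproducing the deployed model.
+
+ ```python
+ # Sketch of sequence-to-sequence reranking in the style of Nogueira et al. (2020):
+ # softmax over only the "true"/"false" logits of the first decoded token.
+ # "t5-base" is a stand-in; the deployed reranker was fine-tuned on MS MARCO.
+ import torch
+ from transformers import T5ForConditionalGeneration, T5Tokenizer
+
+ tokenizer = T5Tokenizer.from_pretrained("t5-base")
+ model = T5ForConditionalGeneration.from_pretrained("t5-base")
+ model.eval()
+
+ # First sub-token of each target word.
+ true_id, false_id = tokenizer.encode("true")[0], tokenizer.encode("false")[0]
+
+ def relevance_score(query, document):
+     text = f"Query: {query} Document: {document} Relevant:"
+     inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
+     start = torch.full((1, 1), model.config.decoder_start_token_id)
+     with torch.no_grad():
+         logits = model(**inputs, decoder_input_ids=start).logits[0, 0]
+     return torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()   # P("true")
+
+ candidates = ["Remdesivir shortened time to recovery ...",
+               "A history of coronavirus taxonomy ..."]
+ reranked = sorted(candidates, key=lambda d: relevance_score("COVID-19 treatment", d),
+                   reverse=True)
+ ```
+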
101
+ Since we do not have training data specific to CORD-19, we fine-tuned our model on the MS MARCO passage dataset (Nguyen et al., 2016), which comprises ${8.8}\mathrm{M}$ passages obtained from the top 10 results retrieved by the Bing search engine (based on around $1\mathrm{M}$ queries). The training set contains approximately ${500}\mathrm{k}$ pairs of query and relevant documents, where each query has one relevant passage on average; non-relevant documents for training are also provided as part of the training data. Nogueira et al. (2020) and Yilmaz et al. (2019) had both previously demonstrated that models trained on MS MARCO can be directly applied to other document ranking tasks. We hoped that this would also be the case for CORD-19.
102
+
103
+ We fine-tuned our T5-base model with a constant learning rate of ${10}^{-3}$ for ${10}\mathrm{k}$ iterations with class-balanced batches of size 256. We used a maximum of 512 input tokens and one output token (i.e., either "true" or "false", as described above). In the MS MARCO passage dataset, none of the inputs required truncation when using this length limit. Training the model takes approximately 4 hours on a single Google TPU v3-8.
104
+
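+ For concreteness, a single fine-tuning step of this setup (reusing the tokenizer and model from the sketch above) might look like the following; this illustrates the training objective, not our actual TPU training script.
+
+ ```python
+ # Illustrative single training step for the ranking task: the target sequence is the
+ # single word "true" for a relevant pair ("false" for a non-relevant one).
+ model.train()
+ inputs = tokenizer(
+     "Query: covid incubation Document: The median incubation period was 5.1 days. Relevant:",
+     return_tensors="pt", truncation=True, max_length=512)
+ labels = tokenizer("true", return_tensors="pt").input_ids     # one target word (plus EOS)
+ loss = model(**inputs, labels=labels).loss                    # standard seq2seq cross-entropy
+ loss.backward()
+ ```
+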
105
+ For the Neural Covidex, we used the paragraph index built by Anserini over CORD-19 (see Section 3.1). Since some of the documents are longer than the length restrictions of the model, it is not feasible to directly apply our method to the entire text at once. To address this issue, we first segment each document into spans by applying a sliding window of 10 sentences with a stride of 5 . We then obtain a probability of relevance for each span by performing inference on it independently. We select the highest probability among these spans as the relevance probability of the document. Note that with the paragraph index, keyword search might retrieve multiple paragraphs from the same underlying article; our technique essentially takes the highest-scoring span across all these retrieved results as the score for that article to produce a final ranking of articles. That is, in the final interface, we deduplicate paragraphs so that each article only appears once in the results.
106
+
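+ The following sketch illustrates the span segmentation, max-pooling of span scores, and per-article deduplication just described; sentence splitting via NLTK and the paragraph docid scheme are assumptions made for illustration.
+
+ ```python
+ # Sketch of document scoring with a sliding window of sentences plus max-pooling,
+ # as described above (10-sentence windows, stride 5); illustrative only.
+ from nltk.tokenize import sent_tokenize      # assumes NLTK's punkt models are installed
+
+ def document_score(query, text, score_fn, window=10, stride=5):
+     sentences = sent_tokenize(text)
+     spans = [" ".join(sentences[i:i + window])
+              for i in range(0, max(len(sentences), 1), stride)]
+     # e.g., score_fn = relevance_score from the T5 sketch above
+     return max(score_fn(query, span) for span in spans)
+
+ # Deduplicate paragraph hits from the same article: keep the best span score per article.
+ paragraph_scores = [("example_uid.00003", 0.91), ("example_uid.00007", 0.84)]  # toy values
+ best = {}
+ for docid, score in paragraph_scores:
+     article_id = docid.split(".")[0]
+     best[article_id] = max(best.get(article_id, 0.0), score)
+ ```
+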
107
+ A screenshot of the Neural Covidex is shown in Figure 2. By default, the abstract of each article is displayed, but the user can click to reveal the relevant paragraph from that article (for those with full text). The most salient sentence is highlighted, using exactly the technique described in Section 3.1 that we initially prototyped in a notebook.
108
+
109
+ Architecturally, the Neural Covidex is currently built as a monolith (with future plans to refactor into more modular microservices), where all incoming API requests are handled by a service that performs searching, reranking, and text highlighting. Search is performed with Pyserini (as discussed in Section 3.1), reranking with T5 (discussed above), and text highlighting with BioBERT (also discussed in Section 3.1). The system is built using the FastAPI Python web framework, which was chosen for speed and ease of use. ${}^{10}$ The frontend UI is built with React to support the use of modular, declarative JavaScript components, ${}^{11}$ taking advantage of its vast ecosystem.
110
+
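+ As a rough sketch of this monolithic design (reusing the `searcher` and `relevance_score` objects from the earlier sketches; the endpoint path and response shape are invented for illustration), a minimal FastAPI service might look like this:
+
+ ```python
+ # Hypothetical FastAPI wiring of the pipeline stages; not the actual Covidex service.
+ from fastapi import FastAPI
+
+ app = FastAPI()
+
+ @app.get("/api/search")
+ def search(query: str, k: int = 10):
+     hits = searcher.search(query, k=k)                                  # Pyserini BM25 candidates
+     scored = [(h.docid, relevance_score(query, h.raw or "")) for h in hits]  # T5 reranking
+     scored.sort(key=lambda pair: pair[1], reverse=True)
+     return [{"docid": d, "score": s} for d, s in scored]
+ # Run with: uvicorn <module_name>:app
+ ```
+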
111
+ The system is currently deployed across a small cluster of servers, each with two NVIDIA V100 GPUs, as our pipeline requires neural network inference at query time (T5 for reranking, BioBERT for highlighting). Each server runs the complete software stack in a simple replicated setup (no partitioning). On top of this, we leverage Cloudflare as a simple load balancer, which uses a round robin scheme to dispatch requests across the different servers. ${}^{12}$ The end-to-end latency for a typical query is around two seconds.
112
+
113
+ On April 2, 2020, a little more than a week after publicly releasing the basic keyword search interface and associated components, we launched the Neural Covidex on social media. ${}^{13}$
114
+
115
+ ${}^{10}$ https://fastapi.tiangolo.com/
116
+
117
+ ${}^{11}$ https://reactjs.org/
118
+
119
+ ${}^{12}$ https://www.cloudflare.com/
120
+
121
+ ${}^{13}$ https://twitter.com/lintool/status/1245749445930688514
122
+
123
124
+
125
+ Figure 2: Screenshot of our Neural Covidex application, which builds on BM25 rankings from Pyserini, neural reranking using T5, and unsupervised sentence highlighting using BioBERT.
126
+
127
+ § 4 EVALUATION OR THE LACK THEREOF
128
+
129
+ It is, of course, expected that papers today have an evaluation section that attempts to empirically quantify the effectiveness of their proposed techniques and to support the claims to innovation made by the authors. Is our system any good? Quite honestly, we don't know.
130
+
131
+ At this point, all we can do is to point to previous work, in which nearly all the components that comprise our Neural Covidex have been evaluated separately, in their respective contexts (which of course is very different from the present application). While previous papers support our assertion that we are deploying state-of-the-art neural models, we currently have no conclusive evidence that they are effective for the CORD-19 corpus, previous results on cross-domain transfer notwithstanding (Yilmaz et al., 2019; Nogueira et al., 2020).
132
+
133
+ The evaluation problem, however, is far more complex than this. Since Neural Covidex is, at its core, a search engine, the impulse would be to evaluate it as such: using well-established methodologies based on test collections-comprising topics (information needs) and relevance judgments (human annotations). It is not clear if existing test collections-such as resources from the TREC Precision Medicine Track (Roberts et al., 2019) and other TREC evaluations dating even further back, or the BioASQ challenge (Tsatsaronis et al., 2015)—are useful for information needs against CORD-19. If no appropriate test collections exist, the logical chain of reasoning would compel the creation of one, and indeed, there are efforts underway to do exactly this. ${}^{14}$
134
+
135
+ Such an approach, which will undoubtedly provide the community with valuable resources, presupposes that better ranking is needed. While improved ranking would always be welcomed, it is not clear that better ranking is the most urgent "missing ingredient" that will address the information access problem faced by stakeholders today. For example, in anecdotal feedback we've received, users remarked that they liked the highlighting that our interface provides to draw attention to the most salient passages. An evaluation of ranking would not cover this presentational aspect of an end-to-end system.
136
+
137
+ ${}^{14}$ https://dmice.ohsu.edu/hersh/COVIDSearch.html
138
+
139
+ One important lesson from the information retrieval literature, dating back two decades, ${}^{15}$ is that batch retrieval evaluations (e.g., measuring mAP, nDCG, etc.) often yield very different conclusions than end-to-end, human-in-the-loop evaluations (Hersh et al., 2000; Turpin and Hersh, 2001). As an example, a search engine that provides demonstrably inferior ranking might actually be quite useful from a task completion perspective because it provides other features and supports user behaviors that compensate for any deficiencies (Lin and Smucker, 2008).
140
+
141
+ Even more broadly, it could very well be the case that search is completely the wrong capability to pursue. For example, it might be the case that users really want a filtering and notification service in which they "register" a standing query, and desire that a system "push" them relevant information as it becomes available (for example, in an email digest). Something along the lines of the recent TREC Microblog Tracks (Lin et al., 2015) might be a better model of the information needs. Such filtering and notification capabilities may even be more critical than user-initiated search in the present context due to the rapidly growing literature.
142
+
143
+ Our point is: we don't actually know how our system (or any of its individual components) can concretely contribute to efforts to tackle the ongoing pandemic until we receive guidance from real users who are engaged in those efforts. Of course, they're all on the frontlines and have no time to provide feedback. Therein lies the challenge: how to build improved fire-fighting capabilities for tomorrow without bothering those who are trying to fight the fires that are already raging in front of us.
144
+
145
+ Now that we have a basic system in place, our efforts have shifted to broader engagement with potential stakeholders to solicit additional guidance, while trying to balance exactly the tradeoff discussed above. For our project, and for the community as a whole, we argue that informal "hallway usability testing" (virtually, of course) is still highly informative and insightful. Until we have a better sense of what users really need, discussions of performance in terms of nDCG, BLEU, and ${\mathrm{F}}_{1}$ (pick your favorite metric) are premature. We believe the system we have deployed will assist us in understanding the true needs of those who are on the frontlines.
146
+
147
+ § 5 LESSONS LEARNED
148
+
149
+ First and foremost, the rapid development and deployment of the Neural Covidex and all the associated software components is a testament to the power of open source, open science, and the maturity of the modern software ecosystem. For example, our project depends on Apache Lucene, Apache Solr, Project Blacklight, React, FastAPI, PyTorch, TensorFlow, the HuggingFace Transformers library, and more. These existing projects represent countless hours of effort by numerous individuals with very different skill sets, at all levels of the software stack. We are indebted to the contributors of all these software projects, without which our own systems could not have gotten off the ground so quickly.
150
+
151
+ In addition to software components, our efforts would not have been possible without the community culture of open data sharing, starting, of course, with CORD-19 itself. The Allen Institute for AI deserves tremendous credit for their tireless efforts in curating the articles, incrementally expanding the corpus, and continuously improving the data quality (data cleaning, as we all know, is ${80}\%$ of data science). The rapid recent advances in neural architectures for NLP largely come from transformers that have been pretrained with language modeling objectives. Pretraining, of course, requires enormous amounts of hardware resources, and the fact that our community has developed an open culture where these models are freely shared has broadened and accelerated advances tremendously. We are beneficiaries of this sharing. Pretrained models then need to be fine-tuned for the actual downstream task, and for search-related tasks, the single biggest driver of recent progress has been Microsoft's release of the MS MARCO dataset (Nguyen et al., 2016). Without exaggeration, much of our recent work would not exist without this treasure trove.
152
+
153
+ Second, we learned from this experience that preparation matters, in the sense that an emphasis on good software engineering practices in our research groups (one that long predates the present crisis) has paid off in enabling our team to rapidly retarget existing components to CORD-19. This is especially true of the "foundational" components at the bottom of our stack: Anserini has been in development for several years, with an emphasis on providing easily replicable and reusable keyword search capabilities. Pyserini had also been in development since late 2019, providing a clean Python interface to Anserini. While the ability to rapidly explore new research ideas is important, investments in software engineering best practices are worthwhile and pay large dividends in the long run.
154
+
155
+ ${}^{15}$ Which means that students have likely not heard of this work and researchers might have likely forgotten it.
156
+
157
+ These practices go hand-in-hand with open-source release of software artifacts that allow others to replicate results reported in research papers. While open-sourcing research code has already emerged as a norm in our community, to us this is more than a "code dump". Refactoring research code into software artifacts that have at least some semblance of interface abstractions for reusability, writing good documentation to aid replication efforts, and other thankless tasks consume enormous amounts of effort and, without a faculty advisor's strong insistence, often never happen. Ultimately, we feel this is a matter of the "culture" of a research group, which cannot be instilled overnight, but our team's rapid progress illustrates that building such cultural norms is worthwhile.
158
+
159
+ Finally, these recent experiences have refreshed a lesson that we've already known, but needed reminding: there's a large gap between code for producing results in research papers and a real, live, deployed system. We illustrate with two examples:
160
+
161
+ Our reranking necessitates computationally-expensive neural network inference on GPUs at query time. If we were simply running experiments for a research paper, this would not be a concern, since evaluations could be conducted in batch, and we would not be concerned with how long inference took to generate the results. However, in a live system, both latency (where we test the patience of an individual user) and throughput (which dictates how many concurrent users we could serve) are critical. Even after the initial implementation of the Neural Covidex had been completed-and we had informally shared the system with colleagues-it required several more days of effort until we were reasonably confident that we could handle a public release, with potentially concurrent usage. During this time, we focused on issues such as hardware provisioning, load balancing, load testing, deploy processes, and other important operational concerns. Researchers simply wishing to write papers need not worry about any of these issues.
162
+
163
+ Furthermore, in a live system, presentational details become disproportionately important. In our initial deployment, rendered text contained artifacts of the underlying tokenization by the neural models; for example, "COVID-19" appeared as "COVID - 19" with added spaces. Also, we had minor issues with the highlighting service, in that sometimes the highlights did not align perfectly with the underlying sentences. These were no doubt relatively trivial matters of software engineering, but in initial informal evaluations, users kept mentioning these imperfections over and over again-to the extent, we suspect, that it was distracting them from considering the underlying quality of the ranking. Once again, these were issues that would have never cropped up if our end goal was to simply write research papers, not deploy a live system to serve users.
164
+
165
+ § 6 CONCLUSIONS
166
+
167
+ This paper describes our initial efforts in building the Neural Covidex, which incorporates the latest neural architectures to provide information access capabilities to AI2's CORD-19. We hope that our systems and components can prove useful in the fight against this global pandemic, and that the capabilities we've developed can be applied to analyzing the scientific literature more broadly.
168
+
169
+ § 7 ACKNOWLEDGMENTS
170
+
171
+ This research was supported in part by the Canada First Research Excellence Fund, the Natural Sciences and Engineering Research Council (NSERC) of Canada, NVIDIA, and eBay. We'd like to thank Kyle Lo from AI2 for helpful discussions and Colin Raffel from Google for his assistance with T5.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/SPxaJuM4Hbz/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,204 @@
1
+ # Document Classification for COVID-19 Literature
2
+
3
+ Bernal Jiménez Gutiérrez, Juncheng Zeng, Dongdong Zhang, Ping Zhang, Yu Su
4
+
5
+ The Ohio State University
6
+
7
+ \{jimenezgutierrez.1, zeng.671, zhang.11069,
8
+
9
+ zhang.10631, su.809\}@osu.edu
10
+
11
+ ## Abstract
12
+
13
+ The global pandemic has made it more important than ever to quickly and accurately retrieve relevant scientific literature for effective consumption by researchers in a wide range of fields. We provide an analysis of several multi-label document classification models on the LitCovid dataset, a growing collection of 8,000 research papers regarding the novel 2019 coronavirus. We find that pre-trained language models fine-tuned on this dataset outperform all other baselines and that the BioBERT and novel Longformer models surpass all others with almost equivalent micro-F1 and accuracy scores of around ${81}\%$ and ${69}\%$ on the test set. We evaluate the data efficiency and generalizability of these models as essential features of any system prepared to deal with an urgent situation like the current health crisis. Finally, we explore 50 errors made by the best performing models on LitCovid documents and find that they often (1) correlate certain labels too closely together and (2) fail to focus on discriminative sections of the articles; both of which are important issues to address in future work. Both data and code are available on GitHub ${}^{1}$ .
14
+
15
+ ## 1 Introduction
16
+
17
+ The COVID-19 pandemic has made it a global priority for research on the subject to be developed at unprecedented rates. Researchers in a wide variety of fields, from clinicians to epidemiologists to policy makers, must all have effective access to the most up to date publications in their respective areas. Automated document classification can play an important role in organizing the stream of articles by fields and topics to facilitate the search process and speed up research efforts.
18
+
19
+ To explore how document classification models can help organize COVID-19 research papers, we use the LitCovid dataset (Chen et al., 2020), a collection of 8,000 newly released scientific papers compiled by the NIH to facilitate access to the literature on all aspects of the virus. This dataset is updated daily and every new article is manually assigned one or more of the following 8 categories: General, Transmission Dynamics (Transmission), Treatment, Case Report, Epidemic Forecasting (Forecasting), Prevention, Mechanism and Diagnosis. We leverage these annotations and the articles made available by LitCovid to compile a timely new dataset for multi-label document classification.
20
+
21
+ Apart from addressing the pressing needs of the pandemic, this dataset also offers an interesting document classification dataset which spans different biomedical specialities while sharing one overarching topic. This setting is distinct from other biomedical document classification datasets which tend to exclusively distinguish between biomedical topics such as hallmarks of cancer (Baker et al., 2016), chemical exposure methods (Baker, 2017) or diagnosis codes (Du et al., 2019). The dataset's shared focus on the COVID-19 pandemic also sets it apart from open-domain datasets and academic paper classification datasets such as IMDB or the arXiv Academic Paper Dataset (AAPD) (Yang et al., 2018) in which no shared topic can be found in most of the documents, and it poses unique challenges for document classification models.
22
+
23
+ We evaluate a number of models on the LitCovid dataset and find that fine-tuning pre-trained language models yields higher performance than traditional machine learning approaches and neural models such as LSTMs (Adhikari et al., 2019b; Kim, 2014; Liu et al., 2017). We also notice that BioBERT (Lee et al., 2019), a BERT model pre-trained on the original corpus for BERT plus a large set of PubMed articles, performed slightly better than the original BERT base model. We also observe that the novel Longformer (Beltagy et al., 2020) model, which allows for processing longer sequences, matches BioBERT's performance when 1024 subwords are used instead of 512 , the maximum for BERT models.
24
+
25
+ ---
26
+
27
+ ${}^{1}$ https://github.com/dki-lab/covid19-classification
28
+
29
+ ---
30
+
31
+ | | LitCovid | CORD-19 Test |
+ |---|---|---|
+ | # of Classes | 8 | 8 |
+ | # of Articles | 8,002 | 100 |
+ | Avg. sentences | 51 | 109 |
+ | Avg. tokens | 1,221 | 2861 |
+ | Total # of tokens | 9,771,284 | 286,065 |
32
+
33
+ Table 1: Dataset statistics for the LitCovid and Test CORD-19 Datasets.
34
+
35
+ We then explore the data efficiency and generalizability of these models as crucial aspects to address for document classification to become a useful tool against outbreaks like this one. Finally, we discuss some issues found in our error analysis such as current models often (1) correlating certain categories too closely with each other and (2) failing to focus on discriminative sections of a document and get distracted by introductory text about COVID-19, which suggest venues for future improvement.
36
+
37
+ ## 2 Datasets
38
+
39
+ In this section, we describe the LitCovid dataset in more detail and briefly introduce the CORD-19 dataset which we sampled to create a small test set to evaluate model generalizability.
40
+
41
+ ### 2.1 LitCovid
42
+
43
+ The LitCovid dataset is a collection of recently published PubMed articles which are directly related to the 2019 novel Coronavirus. The dataset contains upwards of 14,000 articles and approximately 2,000 new articles are added every week, making it a comprehensive resource for keeping researchers up to date with the current COVID-19 crisis.
44
+
45
+ For a large portion of the articles in LitCovid, either the full article or at least the abstract can be downloaded directly from their website. For our document classification dataset, we select 8,002 from the original 14,000+ articles which contain full texts or abstracts. As seen in Table 1, these selected articles contain on average approximately 51 sentences and 1,200 tokens, reflecting the roughly even split between abstracts and full articles we observe from inspection.
46
+
47
+ Each article in LitCovid is assigned one or more of the following 8 topic labels: Prevention, Treatment, Diagnosis, Mechanism, Case Report, Transmission, Forecasting and General. Even though every article in the corpus can be labelled with multiple tags, most articles, around 76%, contain only one label. Table 2 shows the label distribution for the subset of LitCovid which is used in the present work. We note that there is a large class imbalance, with the most frequently occurring label appearing almost 20 times as much as the least frequent one. We split the LitCovid dataset into train, dev, test with the ratio 7:1:2.
48
+
49
+ | Class | LitCovid | CORD-19 Set |
+ |---|---|---|
+ | Prevention | 3807 | 12 |
+ | Treatment | 2149 | 20 |
+ | Diagnosis | 1570 | 25 |
+ | Mechanism | 1199 | 70 |
+ | Case Report | 621 | 2 |
+ | Transmission | 455 | 6 |
+ | General | 222 | 7 |
+ | Forecasting | 205 | 2 |
50
+
51
+ Table 2: Number of documents in each category for the LitCovid and CORD-19 Test Datasets.
52
+
53
+ ### 2.2 CORD-19
54
+
55
+ The COVID-19 Open Research Dataset (CORD-19) (Wang et al., 2020) was one of the earliest datasets released to facilitate cooperation between the computing community and the many relevant actors of the COVID-19 pandemic. It consists of approximately 60,000 papers related to COVID-19 and similar coronaviruses such as SARS and MERS since the SARS epidemic of 2002. Due to their differences in scope, this dataset shares only around 1,200 articles with the LitCovid dataset.
56
+
57
+ In order to test how our models generalize to a different setting, we asked biomedical experts to label a small set of 100 articles found only in CORD-19. Each article was labelled independently by two annotators. For articles which received two different annotations (around 15%), a third annotator broke ties. Table 1 shows the statistics of this small set and Table 2 shows its category distribution.
58
+
59
+ ## 3 Models
60
+
61
+ In the following section we provide a brief description of each model and the implementations used. We use micro-F1 (F1) and accuracy (Acc.) as our evaluation metrics, as done in (Adhikari et al., 2019a). All reproducibility information can be found in Appendix A.
62
+
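+ For reference, these two metrics can be computed with scikit-learn as sketched below; here "accuracy" is read as exact-match accuracy over the full label set, which is our understanding of the Hedwig evaluation, and the toy labels are illustrative.
+
+ ```python
+ # Micro-F1 and exact-match accuracy for multi-label predictions (binary indicator matrices).
+ from sklearn.metrics import accuracy_score, f1_score
+
+ y_true = [[1, 0, 0], [1, 1, 0]]          # toy multi-hot labels over 3 categories
+ y_pred = [[1, 0, 0], [1, 0, 0]]
+ print(f1_score(y_true, y_pred, average="micro"))   # micro-F1
+ print(accuracy_score(y_true, y_pred))              # exact-match (subset) accuracy
+ ```
+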
63
+ ### 3.1 Traditional Machine Learning Models
64
+
65
+ To compare with simpler but competitive traditional baselines, we use the default scikit-learn (Pedregosa et al., 2011) implementations of logistic regression and a linear support vector machine (SVM) for multi-label classification, which train one classifier per class using a one-vs-rest scheme. Both models use TF-IDF weighted bag-of-words as input.
66
+
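+ A minimal sketch of these baselines is shown below; the documents and labels are toy examples invented for illustration.
+
+ ```python
+ # Sketch of the TF-IDF + one-vs-rest baselines; the data shown here is illustrative.
+ from sklearn.feature_extraction.text import TfidfVectorizer
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.multiclass import OneVsRestClassifier
+ from sklearn.pipeline import make_pipeline
+ from sklearn.preprocessing import MultiLabelBinarizer
+
+ docs = ["Contact tracing and masks slowed community spread ...",
+         "Remdesivir showed antiviral activity against SARS-CoV-2 ..."]
+ labels = [["Prevention"], ["Treatment", "Mechanism"]]
+
+ mlb = MultiLabelBinarizer()
+ y = mlb.fit_transform(labels)
+ clf = make_pipeline(TfidfVectorizer(),
+                     OneVsRestClassifier(LogisticRegression(max_iter=1000)))
+ clf.fit(docs, y)
+ print(mlb.inverse_transform(clf.predict(["Trial of hydroxychloroquine for treatment ..."])))
+ ```
+
+ Swapping `LogisticRegression` for a linear SVM classifier gives the corresponding SVM baseline.
+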
67
+ | Model | Dev Acc. | Dev F1 | Test Acc. | Test F1 |
+ |---|---|---|---|---|
+ | LR | 53.3 | 67.5 | 58.5 | 72.2 |
+ | SVM | 58.8 | 72.4 | 62.6 | 76.0 |
+ | LSTM | 57.7 ± 0.7 | 75.8 ± 0.5 | 59.1 ± 1.3 | 76.1 ± 0.5 |
+ | LSTM reg | 59.4 ± 2.4 | 74.6 ± 1.2 | 61.7 ± 1.9 | 75.9 ± 1.2 |
+ | KimCNN | 59.3 ± 1.1 | 75.7 ± 0.4 | 61.0 ± 0.1 | 76.2 ± 0.2 |
+ | XML-CNN | 61.9 ± 1.0 | 77.2 ± 0.3 | 64.6 ± 0.4 | 77.9 ± 0.3 |
+ | BERT base | 66.1 ± 1.3 | 79.1 ± 0.1 | 68.1 ± 0.9 | 80.6 ± 0.2 |
+ | BERT large | 66.4 ± 0.5 | 79.0 ± 0.7 | 68.1 ± 1.1 | 79.5 ± 1.2 |
+ | Longformer | 66.7 ± 1.1 | 79.9 ± 0.5 | 69.2 ± 0.2 | 80.7 ± 0.7 |
+ | BioBERT | 66.5 ± 0.6 | 80.2 ± 0.1 | 68.5 ± 1.0 | 81.2 ± 0.3 |
68
+
69
+ Table 3: Performance for each model expressed as mean $\pm$ standard deviation across three training runs.
70
+
71
+ ### 3.2 Conventional Neural Models
72
+
73
+ Using Hedwig ${}^{2}$ , a document classification toolkit, we evaluate the following models: KimCNN (Kim, 2014), XML-CNN (Liu et al., 2017) as well as an unregularized and a regularized LSTM (Adhikari et al., 2019b). We notice that they all perform similarly and slightly better than traditional methods.
74
+
75
+ ### 3.3 Pre-Trained Language Models
76
+
77
+ Using the same Hedwig document classification toolkit, we evaluate the performance of DocBERT (Adhikari et al., 2019a) on this task with a few different pre-trained language models. We fine-tune BERT base, BERT large (Devlin et al., 2019) and BioBERT (Lee et al., 2019), a version of BERT base which was further pre-trained on a collection of PubMed articles. We find that all BERT models achieve their best performance with their highest possible sequence length of 512 subwords. Additionally, we fine-tune the pre-trained Longformer (Beltagy et al., 2020) in the same way and find that it performs best when a maximum sequence length of 1024 is used. As in the original Longformer paper, we use global attention on the [CLS] token for document classification but find that performance improves by around $1\%$ if we use the average of all tokens as input instead of only the [CLS] representation. We hypothesize that this effect can be observed because the LitCovid dataset contains longer documents on average than the Hyperpartisan dataset used in the original Longformer paper.
78
+
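+ The sketch below shows the general shape of such multi-label fine-tuning with a HuggingFace sequence-classification head and a sigmoid/BCE objective; it is a simplified stand-in for the Hedwig/DocBERT training code actually used, and `problem_type="multi_label_classification"` assumes a reasonably recent transformers release.
+
+ ```python
+ # Simplified multi-label fine-tuning sketch (sigmoid outputs + BCE loss over the 8
+ # LitCovid categories); a stand-in for the Hedwig/DocBERT setup, not the exact code.
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ name = "monologg/biobert_v1.1_pubmed"
+ tokenizer = AutoTokenizer.from_pretrained(name)
+ model = AutoModelForSequenceClassification.from_pretrained(
+     name, num_labels=8, problem_type="multi_label_classification")
+
+ batch = tokenizer(["Contact tracing and masks slowed community spread ..."],
+                   truncation=True, max_length=512, return_tensors="pt")
+ labels = torch.tensor([[1., 0., 0., 0., 0., 0., 0., 0.]])     # multi-hot over 8 categories
+ loss = model(**batch, labels=labels).loss                      # BCE-with-logits
+ loss.backward()
+ ```
+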
79
+ ![01963da8-bb4a-7877-8652-55d961fa8897_2_849_197_604_414_0.jpg](images/01963da8-bb4a-7877-8652-55d961fa8897_2_849_197_604_414_0.jpg)
80
+
81
+ Figure 1: Data efficiency analysis. Pre-trained language models achieve their maximum performance on only ${20}\%$ of the training data.
82
+
83
+ We find that all pre-trained language models outperform the previous traditional and neural methods by a sizable margin in both accuracy and micro-F1 score. The best performing models are the Longformer and BioBERT, both achieving a similar micro-F1 score of around ${81}\%$ on the test set and an accuracy of 69.2% and 68.5% respectively.
84
+
85
+ ## 4 Results & Discussion
86
+
87
+ In this section, we explore data efficiency, model generalizability and discuss potential ways to improve performance on this task in future work.
88
+
89
+ ### 4.1 Data Efficiency
90
+
91
+ During a sudden healthcare crisis like this pandemic, it is essential for models to obtain useful results as soon as possible. Since labelling biomedical articles is a very time-consuming process, achieving peak performance using less data becomes highly desirable. We thus evaluate the data efficiency of these models by training each of the models shown in Figure 1 using 1%, 5%, 10%, 20% and 50% of our training data and report the micro-F1 score on the dev set. When selecting the data subsets, we sample each category independently to make sure they are all represented.
92
+
93
+ We observe that pre-trained models are much more data-efficient than other models and that BioBERT is the most efficient, demonstrating the importance of domain-specific pre-training. We also notice that BioBERT performs worse than other pre-trained models on $1\%$ of the data, suggesting that its pre-training prevents it from leveraging potentially spurious patterns when there is very little data available.
94
+
95
+ ---
96
+
97
+ ${}^{2}$ https://github.com/castorini/hedwig
98
+
99
+ ---
100
+
101
+ <table><tr><td>Article</td><td>Label</td><td>Prediction</td></tr><tr><td>Analysis on epidemic situation and spatiotemporal changes of COVID-19 in Anhui. ... We mapped the spatiotemporal changes of confirmed cases, fitted the epidemic situation by the population growth curve at different stages and took statistical description and analysis of the epidemic situation in Anhui province.</td><td>Forecasting</td><td>Prevention Forecasting</td></tr><tr><td>2019 Novel coronavirus: where we are and what we know. There is a current worldwide outbreak of a new type of coronavirus (2019-nCoV), which originated from Wuhan in China and has now spread to 17 other countries. ... This paper aggregates and consolidates the virology, epidemiology, clinical management strategies ... In addition, by fitting the number of infections with a single-term exponential model ...</td><td>Treatment Mechanism Transmission Forecasting</td><td>Prevention Forecasting</td></tr><tr><td>Managing Cancer Care During the COVID-19 Pandemic: Agility and Collaboration Toward a Common Goal. The first confirmed case of coronavirus disease 2019 (COVID-19) in the United States was reported on January 20, 2020, in Snohomish County, Washington. ...</td><td>Treatment</td><td>Prevention</td></tr></table>
102
+
103
+ Table 4: LitCovid Error Samples. Sentences relevant to the article's category are highlighted with blue and general ones with red.
104
+
105
+ ### 4.2 CORD-19 Generalizability
106
+
107
+ To effectively respond to this pandemic, experts must not only learn as much as possible about the current virus but also thoroughly understand past epidemics and similar viruses. Thus, it is crucial for models trained on the LitCovid dataset to successfully categorize articles about related epidemics. We therefore evaluate some of our baselines on such articles using our labelled CORD-19 subset. We find that the micro-F1 and accuracy metrics drop by around 10 and 30 points respectively. This massive drop in performance from a minor change in domain indicates that the models have trouble ignoring the overarching COVID-19 topic and isolating relevant signals from each category.
108
+
109
+ | | Acc. | F1 |
+ |---|---|---|
+ | SVM | 26.0 | 55.6 |
+ | LSTM reg | 31.3 ± 2.5 | 62.9 ± 2.4 |
+ | Longformer | 37.3 ± 4.9 | 66.9 ± 2.1 |
+ | BioBERT | 39.7 ± 3.1 | 68.1 ± 1.3 |
110
+
111
+ Table 5: Performance on the CORD-19 Test Set expressed as mean $\pm$ standard deviation across three training runs.
112
+
113
+ It is interesting to note that Mechanism is the only category for which BioBERT performs better in CORD-19 than in LitCovid. This could be due to Mechanism articles using technical language and there being enough samples for the models to learn; in contrast with Forecasting which also uses specific language but has far fewer training examples. BioBERT's binary F1 scores for each category on both datasets can be found in Appendix B.
114
+
115
+ ### 4.3 Error Analysis
116
+
117
+ We analyze 50 errors made by the two highest-scoring models, BioBERT and the Longformer, on LitCovid documents to better understand their performance. We find that ${34}\%$ of these were annotation errors which our best performing model predicted correctly. We also find that ${10}\%$ of the errors were nearly impossible to classify using only the text available on LitCovid, and that the full articles are needed to make better-informed predictions. From the rest of the errors we identify some aspects of this task which should be addressed in future work.
118
+
119
+ We first note that these models often correlate certain categories, namely Prevention, Transmission and Forecasting, much more closely than necessary. Even though these categories are semantically related and some overlap exists, the Transmission and Forecasting tags are predicted in conjunction with the Prevention tag much more frequently than is observed in the labels, as can be seen from the table in Appendix C. Future work should attempt to explicitly model the correlation between categories to help the model recognize the particular cases in which labels should occur together. The first row in Table 4 shows a document labelled as Forecasting which is also incorrectly predicted with a Prevention label, exemplifying this issue.
120
+
121
+ Finally, we observe that models have trouble identifying discriminative sections of the document due to how much introductory content on the pandemic can be found in most articles. Future work should explicitly model the gap in relevance between introductory sections and crucial sentences such as thesis statements and article titles. In Table 4, the second and third examples would be more easily classified correctly if certain sentences were ignored while others were attended to more thoroughly. This could also increase interpretability, facilitating analysis and further improvement.
122
+
123
+ ## 5 Conclusion
124
+
125
+ We provide an analysis of document classification models on the LitCovid dataset for the COVID- 19 literature. We determine that fine-tuning pre-trained language models yields the best performance on this task. We study the generalizability and data efficiency of these models and discuss some important issues to address in future work.
126
+
127
+ ## References
128
+
129
+ Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019a. Docbert: Bert for document classification. ArXiv, abs/1904.08398.
130
+
131
+ Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019b. Rethinking complex neural network architectures for document classification. In NAACL-HLT.
132
+
133
+ Simon Baker. 2017. Corpus and Software.
134
+
135
+ Simon Baker, Ilona Silins, Yufan Guo, Imran Ali, Johan Högberg, Ulla Stenius, and Anna Korhonen. 2016. Automatic semantic classification of scientific literature according to the hallmarks of cancer. Bioinformatics, 32(3):432-440.
136
+
137
+ Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150.
138
+
139
+ Q. Chen, A. Allot, and Z. Lu. 2020. Keep up with the latest coronavirus research. Nature, 579(7798):193.
140
+
141
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. ArXiv, abs/1810.04805.
142
+
143
+ Jingcheng Du, Qingyu Chen, Yifan Peng, Yang Xiang, Cui Tao, and Zhiyong Lu. 2019. Ml-net: multi-label classification of biomedical texts with deep neural networks. Journal of the American Medical Informatics Association : JAMIA.
144
+
145
+ Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP.
146
+
147
+ Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.
148
+
149
+ Jingzhou Liu, Wei-Cheng Chang, Yuexin Wu, and Yiming Yang. 2017. Deep learning for extreme multi-label text classification. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval.
150
+
151
+ Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Gilles Louppe, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jacob VanderPlas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay. 2011. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res., 12:2825-2830.
152
+
153
+ Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Darrin Eide, Kathryn Funk, Rodney Michael Kinney, Ziyang Liu, William Merrill, Paul Mooney, Dewey A. Murdick, Devvret
154
+
155
+ Rishi, Jerry Sheehan, Zhihong Shen, Brandon Stilson, Alex D. Wade, Kuansan Wang, Christopher Wilhelm, Boya Xie, Douglas M. Raymond, Daniel S. Weld, Oren Etzioni, and Sebastian Kohlmeier. 2020. CORD-19: The COVID-19 Open Research Dataset. ArXiv, abs/2004.10706.
156
+
157
+ Pengcheng Yang, Xu Sun, Wei Li, Shuming Ma, Wei Wu, and Houfeng Wang. 2018. SGM: Sequence generation model for multi-label classification. In COLING.
158
+
159
+ ## A Experimental Set-up
160
+
161
+ We split the LitCovid dataset into train, dev, test with the ratio 7:1:2.
162
+
163
+ We adopt micro-F1 and accuracy as our evaluation metrics, same as (Adhikari et al., 2019a). We use scikit-learn (Pedregosa et al., 2011) and Hedwig evaluation scripts to evaluate all the models. For preprocessing, tokenization and sentence segmentation, we use the NLTK library.
164
+
165
+ All the document classification models used in the paper, logistic regression, ${}^{1}$ SVM, ${}^{2}$ DocBERT, ${}^{3}$ LSTM, ${}^{4}$ Reg-LSTM, ${}^{5}$ XML-CNN, ${}^{6}$ and Kim CNN, ${}^{7}$ are run based on the implementations listed here, strictly following their instructions. We used the following pre-trained language models: BioBERT, ${}^{8}$ BERT base, ${}^{9}$ BERT large, ${}^{10}$ and the Longformer. ${}^{11}$
166
+
167
+ For reproducibility, we list all the key hyperparameters, the tuning bounds and the $\#$ of parameters for each model in Table A1. For the logistic regression and the SVM, all hyperparameters used were the scikit-learn defaults and are therefore excluded from this table. For all models we train for a maximum of 30 epochs with a patience of 5. We used the micro-F1 score for all hyperparameter tuning. All models were run on NVIDIA GeForce GTX 1080 GPUs.
+
+ ## B Performance by Category
168
+
169
+ ---
170
+
171
+ ${}^{1}$ https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
172
+
173
+ ${}^{2}$ https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html
174
+
175
+ ${}^{3}$ https://github.com/castorini/hedwig/blob/master/models/bert
176
+
177
+ ${}^{4}$ https://github.com/castorini/hedwig/blob/master/models/reg_lstm
178
+
179
+ ${}^{5}$ https://github.com/castorini/hedwig/blob/master/models/reg_lstm
180
+
181
+ ${}^{6}$ https://github.com/castorini/hedwig/blob/master/models/xml_cnn
182
+
183
+ ${}^{7}$ https://github.com/castorini/hedwig/blob/master/models/kim_cnn
184
+
185
+ ${}^{8}$ https://huggingface.co/monologg/biobert_v1.1_pubmed
186
+
187
+ ${}^{9}$ https://huggingface.co/bert-base-uncased
+
+ ${}^{10}$ https://huggingface.co/bert-large-uncased
+
+ ${}^{11}$ https://github.com/allenai/longformer
188
+
189
+ ---
190
+
191
+ <table><tr><td>Model</td><td>Hyperparameters</td><td>Hyperparameter bounds</td><td>Number of Parameters</td></tr><tr><td>Kim CNN</td><td>batch size: 32 learning rate: 0.001 dropout: 0.1 mode: static output channel: 100 word dimension: 300 embedding dimension: 300 epoch decay: 15 weight decay: 0</td><td>batch size:(16,32,64) learning rate:(0.01,0.001,0.0001) dropout:(0.1,0.5,0.7)</td><td>362,708</td></tr><tr><td>XML-CNN</td><td>batch size: 32 learning rate: 0.001 dropout: 0.7 dynamic pool length: 8 mode: static output channel: 100 word dimension: 300 embedding dimension: 300 epoch decay: 15 weight decay: 0 hidden bottleneck dimension: 512</td><td>batch size:(32,64) learning rate: $\left( {{0.001},{0.0001},1 \times {10}^{-5}}\right)$ dropout:(0.1,0.5,0.7) dynamic pool length:(8,16,32)</td><td>1,653,716</td></tr><tr><td>$\mathbf{{LSTM}}$</td><td>batch size: 8 learning rate: 0.001 dropout: 0.1 hidden dimension: 512 mode: static output channel: 100 word dimension: 300 embedding dimension: 300 number of layers: 1 epoch decay: 15 weight decay: 0 bidirectional: true bottleneck layer: true weight drop: 0.1 embedding dropout: 0.2 temporal averaging: 0.0 temporal activation regularization: 0.0 activation regularization: 0.0</td><td>learning rate:(0.01,0.001,0.0001) hidden dimension:(300,512)</td><td>3,342,344</td></tr><tr><td>${\mathbf{{LSTM}}}_{\mathbf{{Reg}}}$</td><td>batch size: 8 learning rate: 0.001 dropout: 0.5 hidden dimension: 300 temporal averaging: 0.99 mode: static output channel: 100 word dimension: 300 embedding dimension: 300 number of layers: 1 epoch decay: 15 weight decay: 0 bidirectional: true bottleneck layer: true weight drop: 0.1 embedding dropout: 0.2 temporal activation regularization: 0.0 activation regularization: 0.0</td><td>batch size:(8,16) learning rate:(0.01,0.001,0.0001) hidden dimension:(300,512) dropout:(0.5,0.6)</td><td>1,449,608</td></tr><tr><td>${\mathbf{{BERT}}}_{\mathbf{{base}}}$</td><td>learning rate: $2 \times {10}^{-5}$ max sequence length: 512 batch size: 6 model: bert-base-uncased warmup proportion: 0.1 gradient accumulation steps: 1</td><td>learning rate: $({0.001},{0.0001}$ , $2 \times {10}^{-5},1 \times {10}^{-6}$ ) maximum sequence length:(256,512)</td><td>110M</td></tr><tr><td>${\mathbf{{BERT}}}_{\mathbf{{large}}}$</td><td>learning rate: $2 \times {10}^{-5}$ max sequence length: 512 batch size: 2 model: bert-large-uncased warmup proportion: 0.1 gradient accumulation steps: 1</td><td>learning rate: $({0.001},{0.0001}$ , $2 \times {10}^{-5},1 \times {10}^{-6}$ ) maximum sequence length:(256,512)</td><td>336M</td></tr><tr><td>BioBERT</td><td>learning rate: $2 \times {10}^{-5}$ max sequence length: 512 batch size: 6 model: monologg/biobert_v1.1_pubmed warmup proportion: 0.1 gradient accumulation steps: 1</td><td>learning rate: $({0.001},{0.0001}$ , $2 \times {10}^{-5},1 \times {10}^{-6})$ ) maximum sequence length:(256,512)</td><td>108M</td></tr><tr><td>Longformer</td><td>learning rate: $2 \times {10}^{-5}$ max sequence length: 1024 batch size: 3 model: longformer-base-4096 warmup proportion: 0.1 gradient accumulation steps: 1</td><td>learning rate: $({0.001},{0.0001}$ , $2 \times {10}^{-5},1 \times {10}^{-6})$ ) maximum sequence length:(1024,3584)</td><td>148M</td></tr></table>
192
+
193
+ Table A1: Hyperparameters, tuning bounds and number of parameters for each method.
194
+
195
+ | Category | Binary F1 (LitCovid Dev) | Binary F1 (CORD-19 Set) |
+ |---|---|---|
+ | Prevention | 88.2 ± 0.2 | 65.8 ± 2.9 |
+ | Case Report | 87.2 ± 1.1 | 66.7 ± 0.0 |
+ | Treatment | 81.5 ± 0.5 | 60.5 ± 4.2 |
+ | Diagnosis | 75.7 ± 2.0 | 58.0 ± 1.4 |
+ | Mechanism | 71.1 ± 1.6 | 81.4 ± 3.8 |
+ | Forecasting | 70.9 ± 1.1 | 0.0 ± 0.0 |
+ | General | 64.4 ± 8.6 | 0.0 ± 0.0 |
+ | Transmission | 48.3 ± 3.7 | 52.0 ± 11.0 |
196
+
197
+ Table A2: BioBERT Binary F1 scores per category on the LitCovid dev set and the CORD-19 test set. Scores are given as mean $\pm$ standard deviation across three BioBERT training runs.
198
+
199
+ ## C Category Correlation
200
+
201
+ | Category | Full Label | % of Docs (Label) | % of Docs (Prediction) |
+ |---|---|---|---|
+ | Forecasting | Single Label | 39.1 | 23.7 |
+ | Forecasting | + Prevention | 43.4 | 71.1 |
+ | Transmission | Single Label | 17.3 | 3.4 |
+ | Transmission | + Prevention | 48.0 | 55.0 |
202
+
203
+ Table A3: This table shows how the Longformer model predicts (Forecasting & Prevention) and (Transmission & Prevention) much more frequently than can be found in the labels. The numbers are percentages of total number of documents with that category label.
204
+
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/SPxaJuM4Hbz/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,230 @@
1
+ § DOCUMENT CLASSIFICATION FOR COVID-19 LITERATURE
2
+
3
+ Bernal Jiménez Gutiérrez, Juncheng Zeng, Dongdong Zhang, Ping Zhang, Yu Su
4
+
5
+ The Ohio State University
6
+
7
+ {jimenezgutierrez.1, zeng.671, zhang.11069,
8
+
9
+ zhang.10631, su.809}@osu.edu
10
+
11
+ § ABSTRACT
12
+
13
+ The global pandemic has made it more important than ever to quickly and accurately retrieve relevant scientific literature for effective consumption by researchers in a wide range of fields. We provide an analysis of several multi-label document classification models on the LitCovid dataset, a growing collection of 8,000 research papers regarding the novel 2019 coronavirus. We find that pre-trained language models fine-tuned on this dataset outperform all other baselines and that the BioBERT and novel Longformer models surpass all others with almost equivalent micro-F1 and accuracy scores of around ${81}\%$ and ${69}\%$ on the test set. We evaluate the data efficiency and generalizability of these models as essential features of any system prepared to deal with an urgent situation like the current health crisis. Finally, we explore 50 errors made by the best performing models on LitCovid documents and find that they often (1) correlate certain labels too closely together and (2) fail to focus on discriminative sections of the articles; both of which are important issues to address in future work. Both data and code are available on GitHub ${}^{1}$ .
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ The COVID-19 pandemic has made it a global priority for research on the subject to be developed at unprecedented rates. Researchers in a wide variety of fields, from clinicians to epidemiologists to policy makers, must all have effective access to the most up to date publications in their respective areas. Automated document classification can play an important role in organizing the stream of articles by fields and topics to facilitate the search process and speed up research efforts.
18
+
19
+ To explore how document classification models can help organize COVID-19 research papers, we use the LitCovid dataset (Chen et al., 2020), a collection of 8,000 newly released scientific papers compiled by the NIH to facilitate access to the literature on all aspects of the virus. This dataset is updated daily and every new article is manually assigned one or more of the following 8 categories: General, Transmission Dynamics (Transmission), Treatment, Case Report, Epidemic Forecasting (Forecasting), Prevention, Mechanism and Diagnosis. We leverage these annotations and the articles made available by LitCovid to compile a timely new dataset for multi-label document classification.
20
+
21
+ Apart from addressing the pressing needs of the pandemic, this dataset also offers an interesting document classification dataset which spans different biomedical specialities while sharing one overarching topic. This setting is distinct from other biomedical document classification datasets, which tend to exclusively distinguish between biomedical topics such as hallmarks of cancer (Baker et al., 2016), chemical exposure methods (Baker, 2017) or diagnosis codes (Du et al., 2019). The dataset's shared focus on the COVID-19 pandemic also sets it apart from open-domain datasets and academic paper classification datasets such as IMDB or the arXiv Academic Paper Dataset (AAPD) (Yang et al., 2018), in which no shared topic can be found in most of the documents, and it poses unique challenges for document classification models.
22
+
23
+ We evaluate a number of models on the LitCovid dataset and find that fine-tuning pre-trained language models yields higher performance than traditional machine learning approaches and neural models such as LSTMs (Adhikari et al., 2019b; Kim, 2014; Liu et al., 2017). We also notice that BioBERT (Lee et al., 2019), a BERT model pre-trained on the original corpus for BERT plus a large set of PubMed articles, performed slightly better than the original BERT base model. We also observe that the novel Longformer (Beltagy et al., 2020) model, which allows for processing longer sequences, matches BioBERT's performance when 1024 subwords are used instead of 512, the maximum for BERT models.
24
+
25
+ ${}^{1}$ https://github.com/dki-lab/covid19-classification
26
+
27
+ \begin{tabular}{lrr}
+ \hline
+  & \textbf{LitCovid} & \textbf{CORD-19 Test} \\
+ \hline
+ \# of Classes & 8 & 8 \\
+ \# of Articles & 8,002 & 100 \\
+ Avg. sentences & 51 & 109 \\
+ Avg. tokens & 1,221 & 2,861 \\
+ Total \# of tokens & 9,771,284 & 286,065 \\
+ \hline
+ \end{tabular}
+
+ Table 1: Dataset statistics for the LitCovid and CORD-19 Test datasets.
49
+
50
+ We then explore the data efficiency and generalizability of these models as crucial aspects to address for document classification to become a useful tool against outbreaks like this one. Finally, we discuss some issues found in our error analysis, such as current models often (1) correlating certain categories too closely with each other and (2) failing to focus on discriminative sections of a document and getting distracted by introductory text about COVID-19, which suggest avenues for future improvement.
51
+
52
+ § 2 DATASETS
53
+
54
+ In this section, we describe the LitCovid dataset in more detail and briefly introduce the CORD-19 dataset which we sampled to create a small test set to evaluate model generalizability.
55
+
56
+ § 2.1 LITCOVID
57
+
58
+ The LitCovid dataset is a collection of recently published PubMed articles which are directly related to the 2019 novel Coronavirus. The dataset contains upwards of 14,000 articles and approximately 2,000 new articles are added every week, making it a comprehensive resource for keeping researchers up to date with the current COVID-19 crisis.
59
+
60
+ For a large portion of the articles in LitCovid, either the full article or at least the abstract can be downloaded directly from their website. For our document classification dataset, we select 8,002 of the original 14,000+ articles which contain full texts or abstracts. As seen in Table 1, these selected articles contain on average approximately 51 sentences and 1,200 tokens, reflecting the roughly even split between abstracts and full articles we observe from inspection.
61
+
62
+ Each article in LitCovid is assigned one or more of the following 8 topic labels: Prevention, Treatment, Diagnosis, Mechanism, Case Report, Transmission, Forecasting and General. Even though every article in the corpus can be labelled with multiple tags, most articles, around 76%, contain only one label. Table 2 shows the label distribution for the subset of LitCovid which is used in the present work. We note that there is a large class imbalance, with the most frequently occurring label appearing almost 20 times as often as the least frequent one. We split the LitCovid dataset into train, dev, and test sets with a 7:1:2 ratio.
63
+
64
+ \begin{tabular}{lrr}
+ \hline
+ \textbf{Class} & \textbf{LitCovid} & \textbf{CORD-19 Test Set} \\
+ \hline
+ Prevention & 3807 & 12 \\
+ Treatment & 2149 & 20 \\
+ Diagnosis & 1570 & 25 \\
+ Mechanism & 1199 & 70 \\
+ Case Report & 621 & 2 \\
+ Transmission & 455 & 6 \\
+ General & 222 & 7 \\
+ Forecasting & 205 & 2 \\
+ \hline
+ \end{tabular}
+
+ Table 2: Number of documents in each category for the LitCovid and CORD-19 Test datasets.
95
+
96
+ § 2.2 CORD-19
97
+
98
+ The COVID-19 Open Research Dataset (CORD-19) (Wang et al., 2020) was one of the earliest datasets released to facilitate cooperation between the computing community and the many relevant actors of the COVID-19 pandemic. It consists of approximately 60,000 papers related to COVID-19 and similar coronaviruses such as SARS and MERS since the SARS epidemic of 2002. Due to their differences in scope, this dataset shares only around 1,200 articles with the LitCovid dataset.
99
+
100
+ In order to test how our models generalize to a different setting, we asked biomedical experts to label a small set of 100 articles found only in CORD-19. Each article was labelled independently by two annotators. For articles which received two different annotations (around 15%), a third annotator broke ties. Table 1 shows the statistics of this small set and Table 2 shows its category distribution.
101
+
102
+ § 3 MODELS
103
+
104
+ In the following section we provide a brief description of each model and the implementations used. We use micro-F1 (F1) and accuracy (Acc.) as our evaluation metrics, as done in (Adhikari et al., 2019a). All reproducibility information can be found in Appendix A.
105
+
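+ As an illustration (not taken from the paper), both metrics can be computed with scikit-learn on binary label-indicator matrices, assuming "accuracy" here denotes exact label-set match, as is common for multi-label document classification:
+
+ \begin{verbatim}
+ # Illustrative metric computation; y_true and y_pred are assumed to be
+ # binary indicator arrays of shape (num_documents, 8).
+ import numpy as np
+ from sklearn.metrics import f1_score, accuracy_score
+
+ y_true = np.array([[1, 0, 0, 0, 0, 0, 0, 0],
+                    [1, 1, 0, 0, 0, 0, 0, 0]])
+ y_pred = np.array([[1, 0, 0, 0, 0, 0, 0, 0],
+                    [1, 0, 0, 0, 0, 0, 0, 0]])
+
+ micro_f1 = f1_score(y_true, y_pred, average="micro")
+ exact_match = accuracy_score(y_true, y_pred)  # correct only if all 8 labels match
+ \end{verbatim}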
106
+ § 3.1 TRADITIONAL MACHINE LEARNING MODELS
107
+
108
+ To compare with simpler but competitive traditional baselines, we use the default scikit-learn (Pedregosa et al., 2011) implementations of logistic regression and a linear support vector machine (SVM) for multi-label classification, which train one classifier per class using a one-vs-rest scheme. Both models use TF-IDF-weighted bag-of-words features as input.
109
+
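+ As a concrete sketch (our illustration; the paper relies on the default scikit-learn settings but does not list code), the SVM baseline can be assembled as follows, where texts is a list of article strings and labels a binary matrix with one column per category:
+
+ \begin{verbatim}
+ # Minimal sketch of the TF-IDF + one-vs-rest SVM baseline.
+ from sklearn.feature_extraction.text import TfidfVectorizer
+ from sklearn.multiclass import OneVsRestClassifier
+ from sklearn.pipeline import make_pipeline
+ from sklearn.svm import LinearSVC
+
+ svm_baseline = make_pipeline(
+     TfidfVectorizer(),                 # TF-IDF weighted bag-of-words features
+     OneVsRestClassifier(LinearSVC()),  # one binary classifier per category
+ )
+ # svm_baseline.fit(texts, labels); predictions = svm_baseline.predict(texts)
+ \end{verbatim}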
110
+ \begin{tabular}{lcccc}
+ \hline
+ \multirow{2}{*}{\textbf{Model}} & \multicolumn{2}{c}{\textbf{Dev Set}} & \multicolumn{2}{c}{\textbf{Test Set}} \\
+  & \textbf{Acc.} & \textbf{F1} & \textbf{Acc.} & \textbf{F1} \\
+ \hline
+ LR & 53.3 & 67.5 & 58.5 & 72.2 \\
+ SVM & 58.8 & 72.4 & 62.6 & 76.0 \\
+ LSTM & 57.7 $\pm$ 0.7 & 75.8 $\pm$ 0.5 & 59.1 $\pm$ 1.3 & 76.1 $\pm$ 0.5 \\
+ LSTM$_{\mathrm{reg}}$ & 59.4 $\pm$ 2.4 & 74.6 $\pm$ 1.2 & 61.7 $\pm$ 1.9 & 75.9 $\pm$ 1.2 \\
+ KimCNN & 59.3 $\pm$ 1.1 & 75.7 $\pm$ 0.4 & 61.0 $\pm$ 0.1 & 76.2 $\pm$ 0.2 \\
+ XML-CNN & 61.9 $\pm$ 1.0 & 77.2 $\pm$ 0.3 & 64.6 $\pm$ 0.4 & 77.9 $\pm$ 0.3 \\
+ BERT$_{\mathrm{base}}$ & 66.1 $\pm$ 1.3 & 79.1 $\pm$ 0.1 & 68.1 $\pm$ 0.9 & 80.6 $\pm$ 0.2 \\
+ BERT$_{\mathrm{large}}$ & 66.4 $\pm$ 0.5 & 79.0 $\pm$ 0.7 & 68.1 $\pm$ 1.1 & 79.5 $\pm$ 1.2 \\
+ Longformer & 66.7 $\pm$ 1.1 & 79.9 $\pm$ 0.5 & 69.2 $\pm$ 0.2 & 80.7 $\pm$ 0.7 \\
+ BioBERT & 66.5 $\pm$ 0.6 & 80.2 $\pm$ 0.1 & 68.5 $\pm$ 1.0 & 81.2 $\pm$ 0.3 \\
+ \hline
+ \end{tabular}
+
+ Table 3: Performance for each model expressed as mean $\pm$ standard deviation across three training runs.
150
+
151
+ § 3.2 CONVENTIONAL NEURAL MODELS
152
+
153
+ Using Hedwig ${}^{2}$ , a document classification toolkit, we evaluate the following models: KimCNN (Kim, 2014), XML-CNN (Liu et al., 2017) as well as an unregularized and a regularized LSTM (Adhikari et al., 2019b). We notice that they all perform similarly and slightly better than traditional methods.
154
+
155
+ § 3.3 PRE-TRAINED LANGUAGE MODELS
156
+
157
+ Using the same Hedwig document classification toolkit, we evaluate the performance of DocBERT (Adhikari et al., 2019a) on this task with a few different pre-trained language models. We fine-tune BERT base, BERT large (Devlin et al., 2019) and BioBERT (Lee et al., 2019), a version of BERT base which was further pre-trained on a collection of PubMed articles. We find all BERT models achieve their best performance with their highest possible sequence length of 512 subwords. Additionally, we fine-tune the pre-trained Longformer (Beltagy et al., 2020) in the same way and find that it performs best when a maximum sequence length of 1024 is used. As in the original Longformer paper, we use global attention on the [CLS] token for document classification but find that performance improves by around $1\%$ if we use the average of all tokens as input instead of only the [CLS] representation. We hypothesize that this effect occurs because the LitCovid dataset contains longer documents on average than the Hyperpartisan dataset used in the original Longformer paper.
158
+
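+ A minimal sketch of this multi-label fine-tuning setup (ours, not the Hedwig/DocBERT implementation used for the reported results) treats the task as eight independent binary decisions with one sigmoid output per category:
+
+ \begin{verbatim}
+ # Illustrative classification head on top of a pre-trained encoder such as
+ # BERT base or BioBERT; encoder name and pooling choice are assumptions.
+ from torch import nn
+ from transformers import AutoModel
+
+ NUM_LABELS = 8   # LitCovid categories
+
+ class DocumentClassifier(nn.Module):
+     def __init__(self, encoder_name="bert-base-uncased"):
+         super().__init__()
+         self.encoder = AutoModel.from_pretrained(encoder_name)
+         self.head = nn.Linear(self.encoder.config.hidden_size, NUM_LABELS)
+
+     def forward(self, input_ids, attention_mask):
+         hidden = self.encoder(input_ids=input_ids,
+                               attention_mask=attention_mask).last_hidden_state
+         pooled = hidden[:, 0]     # [CLS] token; averaging all tokens is the variant
+         return self.head(pooled)  # logits; apply a sigmoid per label at inference
+
+ loss_fn = nn.BCEWithLogitsLoss()  # binary cross-entropy over the 8 categories
+ \end{verbatim}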
159
+ < g r a p h i c s >
160
+
161
+ Figure 1: Data efficiency analysis. Pre-trained language models achieve their maximum performance on only ${20}\%$ of the training data.
162
+
163
+ We find that all pre-trained language models outperform the previous traditional and neural methods by a sizable margin in both accuracy and micro-F1 score. The best performing models are the Longformer and BioBERT, both achieving a similar micro-F1 score of around ${81}\%$ on the test set, with accuracies of 69.2% and 68.5%, respectively.
164
+
165
+ § 4 RESULTS & DISCUSSION
166
+
167
+ In this section, we explore data efficiency, model generalizability and discuss potential ways to improve performance on this task in future work.
168
+
169
+ § 4.1 DATA EFFICIENCY
170
+
171
+ During a sudden healthcare crisis like this pandemic it is essential for models to obtain useful results as soon as possible. Since labelling biomedical articles is a very time-consuming process, achieving peak performance using less data becomes highly desirable. We thus evaluate the data efficiency of these models by training each of the models shown in Figure 1 using 1%, 5%, 10%, 20% and 50% of our training data and report the micro-F1 score on the dev set. When selecting the data subsets, we sample each category independently to make sure they are all represented.
172
+
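+ One simple way to realize this category-aware subsampling (an illustration; the paper does not spell out its exact procedure) is to draw the chosen fraction of documents separately for every label:
+
+ \begin{verbatim}
+ # Illustrative per-category subsampling; each doc carries a set of label ids.
+ import random
+
+ def subsample(docs, fraction, num_labels=8, seed=0):
+     rng = random.Random(seed)
+     keep = set()
+     for label in range(num_labels):
+         pool = [i for i, doc in enumerate(docs) if label in doc["labels"]]
+         if not pool:
+             continue
+         k = max(1, int(fraction * len(pool)))  # keep every category represented
+         keep.update(rng.sample(pool, k))
+     return [docs[i] for i in sorted(keep)]
+ \end{verbatim}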
173
+ We observe that pre-trained models are much more data-efficient than other models and that BioBERT is the most efficient, demonstrating the importance of domain-specific pre-training. We also notice that BioBERT performs worse than other pre-trained models on $1\%$ of the data, suggesting that its pre-training prevents it from leveraging potentially spurious patterns when there is very little data available.
174
+
175
+ ${}^{2}$ https://github.com/castorini/hedwig
176
+
177
+ Article, gold label, and model prediction:
+
+ 1. Analysis on epidemic situation and spatiotemporal changes of COVID-19 in Anhui. ... We mapped the spatiotemporal changes of confirmed cases, fitted the epidemic situation by the population growth curve at different stages and took statistical description and analysis of the epidemic situation in Anhui province. (Label: Forecasting. Prediction: Prevention, Forecasting.)
+
+ 2. 2019 Novel coronavirus: where we are and what we know. There is a current worldwide outbreak of a new type of coronavirus (2019-nCoV), which originated from Wuhan in China and has now spread to 17 other countries. ... This paper aggregates and consolidates the virology, epidemiology, clinical management strategies ... In addition, by fitting the number of infections with a single-term exponential model ... (Label: Treatment, Mechanism, Transmission, Forecasting. Prediction: Prevention, Forecasting.)
+
+ 3. Managing Cancer Care During the COVID-19 Pandemic: Agility and Collaboration Toward a Common Goal. The first confirmed case of coronavirus disease 2019 (COVID-19) in the United States was reported on January 20, 2020, in Snohomish County, Washington. ... (Label: Treatment. Prediction: Prevention.)
+
+ Table 4: LitCovid Error Samples. Sentences relevant to the article's category are highlighted with blue and general ones with red.
193
+
194
+ § 4.2 CORD-19 GENERALIZABILITY
195
+
196
+ To effectively respond to this pandemic, experts must not only learn as much as possible about the current virus but also thoroughly understand past epidemics and similar viruses. Thus, it is crucial for models trained on the LitCovid dataset to successfully categorize articles about related epidemics. We therefore evaluate some of our baselines on such articles using our labelled CORD-19 subset. We find that the micro-F1 and accuracy metrics drop by around 10 and 30 points respectively. This massive drop in performance from a minor change in domain indicates that the models have trouble ignoring the overarching COVID-19 topic and isolating relevant signals from each category.
197
+
198
+ \begin{tabular}{lcc}
+ \hline
+ \textbf{Model} & \textbf{Acc.} & \textbf{F1} \\
+ \hline
+ SVM & 26.0 & 55.6 \\
+ LSTM$_{\mathrm{reg}}$ & 31.3 $\pm$ 2.5 & 62.9 $\pm$ 2.4 \\
+ Longformer & 37.3 $\pm$ 4.9 & 66.9 $\pm$ 2.1 \\
+ BioBERT & 39.7 $\pm$ 3.1 & 68.1 $\pm$ 1.3 \\
+ \hline
+ \end{tabular}
+
+ Table 5: Performance on the CORD-19 Test Set expressed as mean $\pm$ standard deviation across three training runs.
217
+
218
+ It is interesting to note that Mechanism is the only category for which BioBERT performs better in CORD-19 than in LitCovid. This could be due to Mechanism articles using technical language and there being enough samples for the models to learn from, in contrast with Forecasting, which also uses specific language but has far fewer training examples. BioBERT's binary F1 scores for each category on both datasets can be found in Appendix B.
219
+
220
+ § 4.3 ERROR ANALYSIS
221
+
222
+ We analyze 50 errors made by the two highest-scoring models, BioBERT and the Longformer, on LitCovid documents to better understand their performance. We find that ${34}\%$ of these were annotation errors which our best performing model predicted correctly. We also find that ${10}\%$ of the errors were nearly impossible to classify using only the text available on LitCovid, and the full articles are needed to make better-informed predictions. From the rest of the errors we identify some aspects of this task which should be addressed in future work.
223
+
224
+ We first note that these models often correlate certain categories, namely Prevention, Transmission and Forecasting, much more closely than necessary. Even though these categories are semantically related and some overlap exists, the Transmission and Forecasting tags are predicted in conjunction with the Prevention tag much more frequently than is observed in the labels, as can be seen from the table in Appendix C. Future work should attempt to explicitly model correlation between categories to help the model recognize the particular cases in which labels should occur together. The first row in Table 4 shows a document labelled as Forecasting which is also incorrectly predicted with a Prevention label, exemplifying this issue.
225
+
226
+ Finally, we observe that models have trouble identifying discriminative sections of the document due to how much introductory content on the pandemic can be found in most articles. Future work should explicitly model the gap in relevance between introductory sections and crucial sentences such as thesis statements and article titles. In Table 4, the second and third examples would be more easily classified correctly if specific sentences were ignored while others were attended to more thoroughly. This could also increase interpretability, facilitating analysis and further improvement.
227
+
228
+ § 5 CONCLUSION
229
+
230
+ We provide an analysis of document classification models on the LitCovid dataset for the COVID-19 literature. We determine that fine-tuning pre-trained language models yields the best performance on this task. We study the generalizability and data efficiency of these models and discuss some important issues to address in future work.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/VvRbhkiAwR/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,253 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Cross-language sentiment analysis of European Twitter messages during the COVID-19 pandemic
2
+
3
+ Anna Kruspe
4
+
5
+ German Aerospace Center (DLR)
6
+
7
+ Institute of Data Science
8
+
9
+ Jena, Germany
10
+
11
+ anna.kruspe@dlr.de
+
+ Matthias Häberle
+
+ Technical University of Munich (TUM)
+
+ Signal Processing in Earth Observation (SiPEO)
+
+ Munich, Germany
12
+
13
+ matthias.haeberle@tum.de
14
+
15
+ Iona Kuhn
16
+
17
+ German Aerospace Center (DLR)
18
+
19
+ Institute of Data Science Jena, Germany
20
+
21
+ iona.kuhn@dlr.de
22
+
23
+ Xiao Xiang Zhu
24
+
25
+ German Aerospace Center (DLR)
26
+
27
+ Remote Sensing Technology Institute (IMF) Oberpfaffenhofen, Germany
28
+
29
+ xiaoxiang.zhu@dlr.de
30
+
31
+ ## Abstract
32
+
33
+ Social media data can be a very salient source of information during crises. User-generated messages provide a window into people's minds during such times, allowing us insights about their moods and opinions. Due to the vast amounts of such messages, a large-scale analysis of population-wide developments becomes possible.
34
+
35
+ In this paper, we analyze Twitter messages (tweets) collected during the first months of the COVID-19 pandemic in Europe with regard to their sentiment. This is implemented with a neural network for sentiment analysis using multilingual sentence embeddings. We separate the results by country of origin, and correlate their temporal development with events in those countries. This allows us to study the effect of the situation on people's moods. We see, for example, that lock-down announcements correlate with a deterioration of mood in almost all surveyed countries, which recovers within a short time span.
36
+
37
+ ## 1 Introduction
38
+
39
+ The COVID-19 pandemic has led to a worldwide situation with a large number of unknowns. Many heretofore unseen events occurred within a short time span, and governments have had to make quick decisions for containing the spread of the disease. Due to the extreme novelty of the situation, the outcomes of many of these events have not been studied well so far. This is true with regards to their medical effect, as well as the effect on people's perceptions and moods.
40
+
41
+ First studies about the effect the pandemic has on people's lives are being published at the moment (e.g. Betsch et al., 2020), mainly focusing on surveys and polls. Naturally, such studies are limited to relatively small numbers of participants and focus on specific regions (e.g. countries).
42
+
43
+ In contrast, social media provides a large amount of user-created messages reflective of those users' moods and opinions. The issue with this data source is the difficulty of analysis - social media messages are extremely noisy and idiosyncratic, and the amount of incoming data is much too large to analyze manually. We therefore need automatic methods to extract meaningful insights.
44
+
45
+ In this paper, we describe a data set collected from Twitter during the months of December 2019 through April 2020, and present an automatic method for determining the sentiments contained in these messages. We then calculate the development of these sentiments over time, segment the results by country, and correlate them with events that took place in each country during those five months.
46
+
47
+ ## 2 Related work
48
+
49
+ Since the pandemic outbreak and lockdown measures, numerous studies have been published to investigate the impact of the corona pandemic on Twitter.
50
+
51
+ Feng and Zhou (2020) analyzed tweets from the US on a state and county level. First, they detected differences in temporal tweeting patterns and found that people tweeted more about COVID-19 during working hours as the pandemic progressed. Furthermore, they conducted a sentiment analysis over time, including an event-specific subtask, reporting negative sentiment when the 1,000th death was announced and positive sentiment when the lockdown measures were eased in the states.
52
+
53
+ Lyu et al. (2020) looked into US tweets which contained the terms "Chinese-virus" or "Wuhan-virus" in reference to the COVID-19 pandemic in order to perform a user characterization. They compared the results to users who did not make use of such controversial vocabulary. The findings suggest that there are noticeable differences in age group, geolocation, and followed politicians.
54
+
55
+ Chen et al. (2020) focused on sentiment analysis and topic modelling on COVID-19 tweets containing the term "Chinese-virus" (controversial) and contrasted them against tweets without such terms (non-controversial). Tweets containing "Chinese-virus" discussed more topics related to China, whereas tweets without such terms stressed how to defend against the virus. The sentiment analysis revealed negative sentiment for both groups, yet with a slightly more positive and analytical tone for the non-controversial tweets. Furthermore, the non-controversial group focused more on the future and on what the group itself can do to fight the disease. In contrast, the controversial group focused more on the past and concentrated on what others should do.
56
+
57
+ ## 3 Data collection
58
+
59
+ For our study, we used the freely available Twitter API to collect the tweets from December 2019 to April 2020. The free API allows streaming of $1\%$ of the total tweet amount. To cover the largest possible area, we used a bounding box which includes the entire world. From this data, we sub-sampled 4,683,226 geo-referenced tweets in 60 languages located in Europe. To create the Europe sample, we downloaded a shapefile of the earth ${}^{1}$, then filtered by country by performing a point-in-polygon test using the Python package Shapely ${}^{2}$. Figure 1 depicts the Europe Twitter activity in total numbers. Most tweets come from the U.K. Tweets are not filtered by topic, i.e. many of them are going to be about other topics than COVID-19. This is by design. As we will describe later, we also apply a simple keyword filter to detect tweets that are probably COVID-19-related for further analysis.
60
+
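+ As an illustration (not the authors' exact script), the country assignment can be realized as a point-in-polygon test; the shapefile name and the "ADMIN" attribute below are assumptions based on the Natural Earth download referenced above:
+
+ ```python
+ # Illustrative country assignment for a geo-referenced tweet.
+ import geopandas as gpd
+ from shapely.geometry import Point
+
+ countries = gpd.read_file("ne_10m_admin_0_countries.shp")  # assumed file name
+
+ def country_of(lon, lat):
+     point = Point(lon, lat)                      # Shapely expects (x, y) = (lon, lat)
+     hits = countries[countries.contains(point)]  # point-in-polygon test per country
+     return hits["ADMIN"].iloc[0] if not hits.empty else None
+ ```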
61
+ ## 4 Analysis method
62
+
63
+ We now describe how the automatic sentiment analysis was performed, and the considerations involved in this method.
64
+
65
+ ### 4.1 Sentiment modeling
66
+
67
+ In order to analyze these large amounts of data, we focus on an automatic method for sentiment analysis. We train a neural network for sentiment analysis on tweets. The text input layer of the network is followed by a pre-trained word or sentence embedding. The resulting embedding vectors are fed into a 128-dimensional fully-connected ReLU layer with ${50}\%$ dropout, followed by a regression output layer with sigmoid activation. Mean squared error is used as loss. The model is visualized in figure 2.
68
+
69
+ This network is trained on the Sentiment140 dataset (Go et al., 2009). This dataset contains around 1.5 million tweets collected through keyword search, and then annotated automatically by detecting emoticons. Tweets are determined to have positive, neutral, or negative sentiment. We map these sentiments to the values 1.0, 0.5, and 0.0 for the regression. Sentiment for unseen tweets is then represented on a continuous scale at the output.
70
+
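+ A minimal sketch of this model in TensorFlow/Keras (illustrative; the paper does not publish its code, and any of the TF-Hub embeddings listed below can be plugged in via the URL):
+
+ ```python
+ # Illustrative sentiment regressor: frozen sentence embedding -> 128-d ReLU
+ # layer with 50% dropout -> sigmoid regression output trained with MSE.
+ import tensorflow as tf
+ import tensorflow_hub as hub
+ import tensorflow_text  # noqa: F401, registers ops used by multilingual encoders
+
+ EMBEDDING_URL = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
+
+ model = tf.keras.Sequential([
+     tf.keras.layers.InputLayer(input_shape=[], dtype=tf.string),
+     hub.KerasLayer(EMBEDDING_URL, trainable=False),  # frozen embedding weights
+     tf.keras.layers.Dense(128, activation="relu"),
+     tf.keras.layers.Dropout(0.5),
+     tf.keras.layers.Dense(1, activation="sigmoid"),  # sentiment score in [0, 1]
+ ])
+ model.compile(optimizer="adam", loss="mse")
+ # Sentiment140 labels mapped to 0.0 (negative), 0.5 (neutral), 1.0 (positive).
+ ```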
71
+ We test variants of the model using the following pre-trained word- and sentence-level embeddings:
72
+
73
+ - A skip-gram version of word2vec (Mikolov et al., 2013) trained on the English-language Wikipedia ${}^{3}$
74
+
75
+ - A multilingual version of BERT (Devlin et al., 2018) trained on Wikipedia data ${}^{4}$
76
+
77
+ - A multilingual version of BERT trained on 160 million tweets containing COVID-19 keywords ${}^{5}$ (Müller et al., 2020)
78
+
79
+ ---
80
+
81
+ ${}^{3}$ https://tfhub.dev/google/Wiki-words-250/2
82
+
83
+ ${}^{4}$ https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/2
84
+
85
+ ${}^{5}$ https://tfhub.dev/digitalepidemiologylab/covid-twitter-bert/1
90
+
91
+ ${}^{1}$ https://www.naturalearthdata.com/downloads/10m-cultural-vectors/10m-admin-0-countries/
92
+
93
+ ${}^{2}$ https://pypi.org/project/Shapely/
94
+
95
+ ---
96
+
97
+ ![01963dad-f148-707a-aea4-43d0a85f622a_2_342_168_945_396_0.jpg](images/01963dad-f148-707a-aea4-43d0a85f622a_2_342_168_945_396_0.jpg)
98
+
99
+ Figure 1: Treemap of Twitter activity in Europe during the time period of December 2019 to April 2020.
100
+
101
+ ![01963dad-f148-707a-aea4-43d0a85f622a_2_238_731_502_612_0.jpg](images/01963dad-f148-707a-aea4-43d0a85f622a_2_238_731_502_612_0.jpg)
102
+
103
+ Figure 2: Architecture of the sentiment analysis model.
104
+
105
+ ![01963dad-f148-707a-aea4-43d0a85f622a_2_179_1520_577_450_0.jpg](images/01963dad-f148-707a-aea4-43d0a85f622a_2_179_1520_577_450_0.jpg)
106
+
107
+ Figure 3: MSE for different models on the Sentiment140 test dataset.
108
+
109
+ - An ELMO model (Peters et al., 2018) trained on the 1 Billion Word Benchmark dataset ${}^{6}$
110
+
111
+ - The Multilingual Universal Sentence Encoder (MUSE) ${}^{7}$ (Yang et al.,2019)
112
+
113
+ We train each sentiment analysis model on the Sentiment140 dataset for 10 epochs. Mean squared error results on the unseen test portion of the same dataset are shown in figure 3. For comparison, we also include an analysis conducted by VADER which is a rule-based sentiment reasoner designed for social media messages (Hutto and Gilbert, 2014).
114
+
115
+ Interestingly, most neural network results are in the range of the rule-based approach. BERT delivers better results than the word2vec model, with ELMO and the COVID-19-specific version also leading to improvements. However, the best result is achieved with the pre-trained multilingual USE model, which can embed whole sentences rather than (contextualized) words. We therefore perform the subsequent sentiment analysis with the MUSE-based model.
116
+
117
+ An interesting side note here is that the dataset only contains English-language tweets, but the sentence embedding is multilingual (for 16 languages). We freeze the embedding weights to prevent them from over-adapting to English. Due to the cross-lingual semantic representation capabilities of the pre-trained embedding, we expect the model to be able to detect sentiment in other languages just as well.
118
+
119
+ With the created model, we perform sentiment analysis on the 4.6 million tweets collected from December to April, and then aggregate the results over time. This provides us with a representation of the development of Twitter messages' average sentiment over time. We specifically consider all collected tweets rather than just those determined to be topically related to COVID-19 because we are interested in the effect on people's moods in general, not just with regards to the pandemic. Additionally, we also filter the tweets by COVID-19-associated keywords, and analyze their sentiments as well. The chosen keywords are listed in figure 4.
120
+
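+ A sketch of this aggregation step with pandas (illustrative; column names are assumptions):
+
+ ```python
+ # Weekly mean sentiment for all tweets and for the keyword-filtered subset,
+ # assuming a DataFrame indexed by tweet timestamp with 'text' and 'sentiment'.
+ import pandas as pd
+
+ KEYWORDS = ("corona", "covid", "wuhan", "korona")  # subset of the Figure 4 list
+
+ def weekly_curves(df: pd.DataFrame) -> pd.DataFrame:
+     has_kw = df["text"].str.lower().str.contains("|".join(KEYWORDS))
+     return pd.DataFrame({
+         "all_tweets": df["sentiment"].resample("W").mean(),
+         "keyword_tweets": df.loc[has_kw, "sentiment"].resample("W").mean(),
+     })
+ ```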
121
+ ---
122
+
123
+ ${}^{6}$ https://tfhub.dev/google/elmo/3
124
+
125
+ ${}^{7}$ https://tfhub.dev/google/universal-sentence-encoder-multilingual/3
128
+
129
+ ---
130
+
131
+ ### 4.2 Considerations
132
+
133
+ There are some assumptions implicit in this analysis method that we want to address here. First of all, we only consider tweets containing a geolocation. This applies to less than $1\%$ of the whole tweet stream, but according to Sloan et al. (2013), the amount of geolocated tweets closely follows the geographic population distribution. According to Graham et al. (2014), there probably are factors determining which users share their locations and which ones do not, but there is no systematic study of these.
134
+
135
+ Other assumptions arise from the analysis method itself. For one, we assume that the model is able to extract meaningful sentiment values from the data. However, sentiment is subjective, and the model may fail for certain constructs (e.g. negations, sarcasm). Additionally, modeling sentiment on a binary scale does not tell the whole story. "Positive" sentiment encompasses, for example, happy or hopeful tweets, "negative" angry or sad tweets, and "neutral" tweets can be news tweets, for example. A more fine-grained analysis would be of interest in the future.
136
+
137
+ We also assume a somewhat similar perception of sentiment across languages. Finally, we assume that the detected sentiments as a whole are reflective of the mood within the community; on the other hand, mood is not quantifiable in the first place. All of these assumptions can be called into question. Nevertheless, while they may not be applicable for every single tweet, we hope to detect interesting effects on a large scale. When analyzing thousands of tweets within each time frame, random fluctuations become less likely. We believe that this analysis can provide useful insights into people's thoughts, and form an interesting basis for future studies from psychological or sociological perspectives.
140
+
141
+ corona, covid, wuhan, koroona, korona, корона, コロナ, 冠狀病毒, 武漢, 코로나, …
142
+
143
+ Figure 4: Keywords used for filtering the tweets (not case sensitive).
144
+
145
+ ## 5 Results
146
+
147
+ In the following, we present the detected sentiment developments over time over-all and for select countries, and correlate them with events that took place within these months. Results for some other countries would have been interesting as well, but were not included because the main spoken language is not covered by MUSE (e.g. Sweden, Denmark). Others were excluded because there was not enough material available; we only analyze countries with at least 300,000 recorded tweets. As described in section 3 , tweets are filtered geographically, not by language (i.e. Italian tweets may also be in other languages than Italian).
148
+
149
+ ### 5.1 Over-all
150
+
151
+ In total, we analyzed around 4.6 million tweets, of which around 79,000 contained at least one COVID-19 keyword. Figure 5 shows the development of the sentiment over time for all tweets and for those with keywords, as well as the development of the number of keyworded tweets. The sentiment results are smoothed on a weekly basis (otherwise, we would be seeing a lot of movement during the week, e.g. an increase on the weekends). For the average over all tweets, we see a slight decrease in sentiment over time, indicating possibly that users' moods deteriorated over the last few months. There are some side effects that need to be considered here. For example, the curve rises slightly for holidays like Christmas and Easter (April 12). Interestingly, we see a clear dip around mid-March. Most European countries started implementing strong social distancing measures around this time. We will talk about this in more detail in the next sections.
152
+
153
+ We see that keywords were used very rarely before mid-January, and only saw a massive increase in usage around the beginning of March. Lately, usage has been decreasing again, indicating a loss of interest over time. Consequently, the sentiment analysis for keyword tweets is not expressive in the beginning. Starting with the more frequent usage in February, the associated sentiment drops massively, indicating that these tweets are now used in relation with the pandemic. Interestingly, the sentiment recovers with the increased use in March - it is possible that users were starting to think about the risks and handling of the situation in a more relaxed way over time. Still, the sentiment curve for keyword tweets lies significantly below the average one, which is to be expected for this all-around rather negative topic.
154
+
155
+ ### 5.2 Analysis by country
156
+
157
+ We next aggregated the tweets by country as described in section 3 and performed the same analysis for each country. The country-wise curves are shown jointly in figure 6. Comparing the absolute average sentiment values between countries is difficult as they may be influenced by language or cultural factors. However, the relative development is interesting. We see that all curves progress in a relatively similar fashion, with peaks around Christmas and Easter, a strong dip in the middle of March, and a general slow decrease in sentiment. In the following, we will have a closer look at each country's development. (Note that the keyword-only curves are cut off in the beginning for some countries due to a low number of keyword tweets).
158
+
159
+ #### 5.2.1 Italy
160
+
161
+ Figure 7 shows the average sentiment for all Italian tweets and all Italian keyword tweets, as well as the development of keyword tweets in Italy. In total, around 400,000 Italian tweets are contained in the data set, of which around 12,000 have a keyword. Similar to the over-all curves described in section 5.1, the sentiment curve slowly decreases over time; keywords are not used frequently before the end of January, when the first cases in Italy were confirmed. Sentiment in the keyword tweets starts out very negative and then increases again. Interestingly, we see a dip in sentiment on March 9, which is exactly when the Italian lockdown was announced. Keywords were also used most frequently during that week. The dip is not visible in the keyword-only sentiment curve, suggesting that the negative sentiment was actually caused by the higher prevalence of coronavirus-related tweets.
162
+
163
+ #### 5.2.2 Spain
164
+
165
+ For Spain, around 780,000 tweets were collected in total with around 14,000 keyword tweets. The curves are shown in figure 8 . The heavier usage of keywords starts around the same time as in Italy, where the first domestic cases were publicized at the same time. The spike in keyword-only sentiment in mid-February is actually an artifact of the low number of keyworded tweets in combination with the fact that "corona" is a word with other meanings in Spanish (in contrast to the other languages). With more keyword mentions, the sentiment drops as in other countries.
166
+
167
+ From there onwards, the virus progressed somewhat slower in Spain, which is reflected in the curves as well. A lockdown was announced in Spain on March 14, corresponding to a dip in the sentiment curve. As with the Italian data, this dip is not present in the keyword-only sentiments.
168
+
169
+ #### 5.2.3 France
170
+
171
+ Analyses for the data from France are shown in figure 9. For France, around 309,000 tweets and around 4,600 keyword tweets were collected. Due to the lower number of data points, the curves are somewhat less smooth. Despite the first European COVID-19 case being detected in France in January, cases did not increase significantly until the end of February, which once again is also seen in the start of increased keyword usage here. The French lockdown was announced on March 16 and extended on April 13, both reflected in dips in the sentiment curve. Towards the end of the considered period, keyword-only sentiment actually starts to increase, which is also seen in Italy and Germany. This could indicate a shift to a more hopeful outlook with regards to the pandemic.
172
+
173
+ #### 5.2.4 Germany
174
+
175
+ For Germany, around 415,000 tweets and around 5,900 keyword tweets were collected. The analysis results are shown in figure 10. After very few first cases at the end of January, Germany's case count did not increase significantly until early March, which is again when keyword usage increased. The decrease in the sentiment curve actually arrives around the same time as in France and Spain, which is a little surprising because social distancing measures were not introduced by the government until March 22 (extended on March 29). German users were likely influenced by the situation in their neighboring countries here. In general, the curve is flatter than in other countries. One possible reason for this might be the lower severity of measures in Germany, e.g. there were no strict curfews.
176
+
177
+ ![01963dad-f148-707a-aea4-43d0a85f622a_5_250_181_1161_563_0.jpg](images/01963dad-f148-707a-aea4-43d0a85f622a_5_250_181_1161_563_0.jpg)
178
+
179
+ Figure 5: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.
180
+
181
+ ![01963dad-f148-707a-aea4-43d0a85f622a_5_241_877_1175_1141_0.jpg](images/01963dad-f148-707a-aea4-43d0a85f622a_5_241_877_1175_1141_0.jpg)
182
+
183
+ Figure 7: Italy: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.
184
+
185
+ ![01963dad-f148-707a-aea4-43d0a85f622a_6_247_330_1164_544_0.jpg](images/01963dad-f148-707a-aea4-43d0a85f622a_6_247_330_1164_544_0.jpg)
186
+
187
+ Figure 8: Spain: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.
188
+
189
+ ![01963dad-f148-707a-aea4-43d0a85f622a_6_245_1307_1166_554_0.jpg](images/01963dad-f148-707a-aea4-43d0a85f622a_6_245_1307_1166_554_0.jpg)
190
+
191
+ Figure 9: France: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.
192
+
193
+ In contrast to all other considered countries, the keyword-only sentiment curve is not significantly below the sentiment curve for all tweets in Germany after the beginning of March. There are some possible explanations for this. For one, governmental response to the situation was generally applauded in Germany (Betsch et al., 2020), and, as mentioned above, was not as strict as in other countries, possibly not impacting people as much. On the other hand, the over-all German curve is lower than its counterparts from other countries, i.e. German tweets have lower average sentiment values in general, possibly caused by cultural factors.
194
+
195
+ #### 5.2.5 United Kingdom
196
+
197
+ Curves for the United Kingdom are shown in figure 11, calculated on around 1,380,000 tweets including around 22,000 keyword tweets. Higher keyword usage starts somewhat earlier than expected here, already in February, whereas a significant increase in cases did not occur until March. Once again, keyword-only sentiment starts out very negative and then increases over time.
198
+
199
+ The British government handled the situation somewhat differently. In early March, only recommendations were given, and a lockdown was explicitly avoided to prevent economic consequences. This may be a cause for the sentiment peak seen at this time. However, the curve falls until mid-March, when other European countries did implement lock-downs. The government finally did announce a lockdown starting on March 26. This did not lead to a significant change in average sentiment anymore, but in contrast with other countries, the curve does not swing back to a significantly more positive level in the considered period, and actually decreases towards the end.
200
+
201
+ ## 6 Conclusion
202
+
203
+ In this paper, we presented the results of a sentiment analysis of 4.6 million geotagged Twitter messages collected during the months of December 2019 through April 2020. This analysis was performed with a neural network trained on an unrelated Twitter sentiment data set. The tweets were then tagged with sentiment on a scale from 0 to 1 using this network. The results were aggregated by country, and averaged over time. Additionally, the sentiments of tweets containing COVID-19-related keywords were aggregated separately.
204
+
205
+ We find several interesting results in the data. First of all, there is a general downward trend in sentiment in the last few months corresponding to the COVID-19 pandemic, with clear dips at times of lockdown announcements and a slow recovery in the following weeks in most countries. COVID-19 keywords were used rarely before February, and correlate with a rise in cases in each country. The sentiment of keyworded tweets starts out very negative at the beginning of increased keyword usage, and becomes more positive over time. However, it remains significantly below the average sentiment in all countries except Germany. Interestingly, there is a slight upward development in sentiment in most countries towards the end of the considered period.
206
+
207
+ ## 7 Future work
208
+
209
+ We will continue this study by also analyzing the development in the weeks since May 1st and the coming months. More countries will also be added. It will be very interesting to compare the shown European results to those of countries like China, South Korea, Japan, New Zealand, or even individual US states, which were impacted by the pandemic at different times and in different ways, and where the governmental and societal response was different from that of Europe.
210
+
211
+ There are also many other interesting research questions that could be answered on a large scale with this data - for example, regarding people's trust in published COVID-19 information, their concrete opinions on containment measures, or their situation during an infection. Other data sets have also been published in the meantime, including ones that contain hundreds of millions of tweets at the time of writing (e.g. Qazi et al., 2020; Banda et al., 2020). These data sets are much larger because collection was not restricted to geotagged tweets. In Qazi et al. (2020), geolocations were instead completed from outside sources.
212
+
213
+ These studies could also be extended to elucidate more detailed factors in each country. One possibility here is an analysis of Twitter usage and tweet content by country. Another, as mentioned above, lies in moving from the binary sentiment scale to a more complex model.
214
+
215
+ ![01963dad-f148-707a-aea4-43d0a85f622a_8_253_341_1138_552_0.jpg](images/01963dad-f148-707a-aea4-43d0a85f622a_8_253_341_1138_552_0.jpg)
216
+
217
+ Figure 10: Germany: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.
218
+
219
+ ![01963dad-f148-707a-aea4-43d0a85f622a_8_247_1336_1165_533_0.jpg](images/01963dad-f148-707a-aea4-43d0a85f622a_8_247_1336_1165_533_0.jpg)
220
+
221
+ Figure 11: United Kingdom: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.
222
+
223
+ ## References
224
+
225
+ Juan M. Banda, Ramya Tekumalla, Guanyu Wang, Jingyuan Yu, Tuo Liu, Yuning Ding, Katya Artemova, Elena Tutubalina, and Gerardo Chowell. 2020. A large-scale COVID-19 Twitter chatter dataset for open scientific research - an international collaboration. https://doi.org/10.5281/zenodo.3723939.
226
+
227
+ Cornelia Betsch, Lars Korn, Lisa Felgendreff, Sarah Eitze, Philipp Schmid, Philipp Sprengholz, Lothar Wieler, Patrick Schmich, Volker Stollorz, Michael Ramharter, Michael Bosnjak, Saad B. Omer, Heidrun Thaiss, Freia De Bock, Ursula von Rüden, Cara Ebert, Janina Steinert, and Martin Bruder. 2020. German COVID-19 Snapshot MOnitoring (COSMO Germany). https://www.psycharchives.org/handle/20.500.12034/2
228
+
229
+ Long Chen, Hanjia Lyu, Tongyu Yang, Yu Wang, and Jiebo Luo. 2020. In the eyes of the beholder: Sentiment and topic analyses on social media use of neutral and controversial terms for covid-19. arXiv:2004.10225.
230
+
231
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
232
+
233
+ Yunhe Feng and Wenjun Zhou. 2020. Is Working From Home The New Norm? An Observational Study Based on a Large Geo-tagged COVID-19 Twitter Dataset. arXiv:2006.08581.
234
+
235
+ Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. Technical report, Stanford University.
236
+
237
+ Mark Graham, A. Scott Hale, and Devin Gaffney. 2014. Where in the World are You? Geolocation and Language Identification in Twitter. The Professional Geographer.
238
+
239
+ C.J. Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In Eighth International Conference on Weblogs and Social Media (ICWSM-14).
240
+
241
+ Hanjia Lyu, Long Chen, Yu Wang, and Jiebo Luo. 2020. Sense and Sensibility: Characterizing Social Media Users Regarding the Use of Controversial Terms for COVID-19. arXiv:2004.06307.
242
+
243
+ Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In International Conference on Learning Representations (ICLR), Scottsdale, AZ, USA.
244
+
245
+ Martin Müller, Marcel Salathé, and Per E. Kummervold. 2020. COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter. arXiv:2005.07503.
246
+
247
+ Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. CoRR, abs/1802.05365.
248
+
249
+ Umair Qazi, Muhammad Imran, and Ferda Ofli. 2020. Geocov19: A dataset of hundreds of millions of multilingual covid-19 tweets with location information. ACM SIGSPATIAL Special, 12(1):6-15.
250
+
251
+ Luke Sloan, Jeffrey Morgan, William Housley, Matthew Williams, Adam Edwards, Pete Burnap, and Omer Rana. 2013. Knowing the tweeters: Deriving sociologically relevant demographics from twitter. Sociological Research Online, 18(3):7.
252
+
253
+ Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2019. Multilingual universal sentence encoder for semantic retrieval. arXiv:1907.04307.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/VvRbhkiAwR/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,232 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § CROSS-LANGUAGE SENTIMENT ANALYSIS OF EUROPEAN TWITTER MESSAGES DURING THE COVID-19 PANDEMIC
2
+
3
+ Anna Kruspe
4
+
5
+ German Aerospace Center (DLR)
6
+
7
+ Institute of Data Science
8
+
9
+ Jena, Germany
10
+
11
+ anna.kruspe@dlr.de
+
+ Matthias Häberle
+
+ Technical University of Munich (TUM)
+
+ Signal Processing in Earth Observation (SiPEO)
+
+ Munich, Germany
12
+
13
+ matthias.haeberle@tum.de
14
+
15
+ Iona Kuhn
16
+
17
+ German Aerospace Center (DLR)
18
+
19
+ Institute of Data Science Jena, Germany
20
+
21
+ iona.kuhn@dlr.de
22
+
23
+ Xiao Xiang Zhu
24
+
25
+ German Aerospace Center (DLR)
26
+
27
+ Remote Sensing Technology Institute (IMF) Oberpfaffenhofen, Germany
28
+
29
+ xiaoxiang.zhu@dlr.de
30
+
31
+ § ABSTRACT
32
+
33
+ Social media data can be a very salient source of information during crises. User-generated messages provide a window into people's minds during such times, allowing us insights about their moods and opinions. Due to the vast amounts of such messages, a large-scale analysis of population-wide developments becomes possible.
34
+
35
+ In this paper, we analyze Twitter messages (tweets) collected during the first months of the COVID-19 pandemic in Europe with regard to their sentiment. This is implemented with a neural network for sentiment analysis using multilingual sentence embeddings. We separate the results by country of origin, and correlate their temporal development with events in those countries. This allows us to study the effect of the situation on people's moods. We see, for example, that lock-down announcements correlate with a deterioration of mood in almost all surveyed countries, which recovers within a short time span.
36
+
37
+ § 1 INTRODUCTION
38
+
39
+ The COVID-19 pandemic has led to a worldwide situation with a large number of unknowns. Many heretofore unseen events occurred within a short time span, and governments have had to make quick decisions for containing the spread of the disease. Due to the extreme novelty of the situation, the outcomes of many of these events have not been studied well so far. This is true with regards to their medical effect, as well as the effect on people's perceptions and moods.
40
+
41
+ First studies about the effect the pandemic has on people's lives are being published at the moment (e.g. Betsch et al., 2020), mainly focusing on surveys and polls. Naturally, such studies are limited to relatively small numbers of participants and focus on specific regions (e.g. countries).
42
+
43
+ In contrast, social media provides a large amount of user-created messages reflective of those users' moods and opinions. The issue with this data source is the difficulty of analysis - social media messages are extremely noisy and idiosyncratic, and the amount of incoming data is much too large to analyze manually. We therefore need automatic methods to extract meaningful insights.
44
+
45
+ In this paper, we describe a data set collected from Twitter during the months of December 2019 through April 2020, and present an automatic method for determining the sentiments contained in these messages. We then calculate the development of these sentiments over time, segment the results by country, and correlate them with events that took place in each country during those five months.
46
+
47
+ § 2 RELATED WORK
48
+
49
+ Since the pandemic outbreak and lockdown measures, numerous studies have been published to investigate the impact of the corona pandemic on Twitter.
50
+
51
+ Feng and Zhou (2020) analyzed tweets from the US on a state and county level. First, they detected differences in temporal tweeting patterns and found that people tweeted more about COVID-19 during working hours as the pandemic progressed. Furthermore, they conducted a sentiment analysis over time, including an event-specific subtask, reporting negative sentiment when the 1,000th death was announced and positive sentiment when the lockdown measures were eased in the states.
52
+
53
+ Lyu et al. (2020) looked into US tweets which contained the terms "Chinese-virus" or "Wuhan-virus" in reference to the COVID-19 pandemic in order to perform a user characterization. They compared the results to users who did not make use of such controversial vocabulary. The findings suggest that there are noticeable differences in age group, geolocation, and followed politicians.
54
+
55
+ Chen et al. (2020) focused on sentiment analysis and topic modelling on COVID-19 tweets containing the term "Chinese-virus" (controversial) and contrasted them against tweets without such terms (non-controversial). Tweets containing "Chinese-virus" discussed more topics related to China, whereas tweets without such terms stressed how to defend against the virus. The sentiment analysis revealed negative sentiment for both groups, yet with a slightly more positive and analytical tone for the non-controversial tweets. Furthermore, the non-controversial group focused more on the future and on what the group itself can do to fight the disease. In contrast, the controversial group focused more on the past and concentrated on what others should do.
56
+
57
+ § 3 DATA COLLECTION
58
+
59
+ For our study, we used the freely available Twitter API to collect the tweets from December 2019 to April 2020. The free API allows streaming of $1\%$ of the total tweet amount. To cover the largest possible area, we used a bounding box which includes the entire world. From this data, we sub-sampled 4,683,226 geo-referenced tweets in 60 languages located in Europe. To create the Europe sample, we downloaded a shapefile of the earth ${}^{1}$, then filtered by country by performing a point-in-polygon test using the Python package Shapely ${}^{2}$. Figure 1 depicts the Europe Twitter activity in total numbers. Most tweets come from the U.K. Tweets are not filtered by topic, i.e. many of them are going to be about other topics than COVID-19. This is by design. As we will describe later, we also apply a simple keyword filter to detect tweets that are probably COVID-19-related for further analysis.
60
+
61
+ § 4 ANALYSIS METHOD
62
+
63
+ We now describe how the automatic sentiment analysis was performed, and the considerations involved in this method.
64
+
65
+ § 4.1 SENTIMENT MODELING
66
+
67
+ In order to analyze these large amounts of data, we focus on an automatic method for sentiment analysis. We train a neural network for sentiment analysis on tweets. The text input layer of the network is followed by a pre-trained word or sentence embedding. The resulting embedding vectors are fed into a 128-dimensional fully-connected ReLU layer with ${50}\%$ dropout, followed by a regression output layer with sigmoid activation. Mean squared error is used as loss. The model is visualized in figure 2.
68
+
69
+ This network is trained on the Sentiment140 dataset (Go et al., 2009). This dataset contains around 1.5 million tweets collected through keyword search, and then annotated automatically by detecting emoticons. Tweets are determined to have positive, neutral, or negative sentiment. We map these sentiments to the values 1.0, 0.5, and 0.0 for the regression. Sentiment for unseen tweets is then represented on a continuous scale at the output.
70
+
71
+ We test variants of the model using the following pre-trained word- and sentence-level embeddings:
72
+
73
+ * A skip-gram version of word2vec (Mikolov et al., 2013) trained on the English-language Wikipedia ${}^{3}$
74
+
75
+ * A multilingual version of BERT (Devlin et al., 2018) trained on Wikipedia data ${}^{4}$
76
+
77
+ * A multilingual version of BERT trained on 160 million tweets containing COVID-19 keywords ${}^{5}$ (Müller et al., 2020)
78
+
79
+ ${}^{3}$ https://tfhub.dev/google/Wiki-words-250/2
80
+
81
+ ${}^{4}$ https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/2
82
+
83
+ ${}^{5}$ https://tfhub.dev/digitalepidemiologylab/covid-twitter-bert/1
88
+
89
+ ${}^{1}$ https://www.naturalearthdata.com/downloads/10m-cultural-vectors/10m-admin-0-countries/
90
+
91
+ ${}^{2}$ https://pypi.org/project/Shapely/
92
+
93
+ < g r a p h i c s >
94
+
95
+ Figure 1: Treemap of Twitter activity in Europe during the time period of December 2019 to April 2020.
96
+
97
+ < g r a p h i c s >
98
+
99
+ Figure 2: Architecture of the sentiment analysis model.
100
+
101
+ < g r a p h i c s >
102
+
103
+ Figure 3: MSE for different models on the Sentiment140 test dataset.
104
+
105
+ * An ELMO model (Peters et al., 2018) trained on the 1 Billion Word Benchmark dataset ${}^{6}$
106
+
107
+ * The Multilingual Universal Sentence Encoder (MUSE) ${}^{7}$ (Yang et al.,2019)
108
+
109
+ We train each sentiment analysis model on the Sentiment140 dataset for 10 epochs. Mean squared error results on the unseen test portion of the same dataset are shown in figure 3. For comparison, we also include an analysis conducted by VADER which is a rule-based sentiment reasoner designed for social media messages (Hutto and Gilbert, 2014).
110
+
111
+ Interestingly, most neural network results are in the range of the rule-based approach. BERT delivers better results than the word2vec model, with ELMO and the COVID-19-specific version also leading to improvements. However, the best result is achieved with the pre-trained multilingual USE model, which can embed whole sentences rather than (contextualized) words. We therefore perform the subsequent sentiment analysis with the MUSE-based model.
112
+
113
+ An interesting side note here is that the dataset only contains English-language tweets, but the sentence embedding is multilingual (for 16 languages). We freeze the embedding weights to prevent them from over-adapting to English. Due to the cross-lingual semantic representation capabilities of the pre-trained embedding, we expect the model to be able to detect sentiment in other languages just as well.
114
+
115
+ With the created model, we perform sentiment analysis on the 4.6 million tweets collected from December to April, and then aggregate the results over time. This provides us with a representation of the development of Twitter messages' average sentiment over time. We specifically consider all collected tweets rather than just those determined to be topically related to COVID-19 because we are interested in the effect on people's moods in general, not just with regards to the pandemic. Additionally, we also filter the tweets by COVID-19-associated keywords, and analyze their sentiments as well. The chosen keywords are listed in figure 4.
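+
+ A minimal sketch of the keyword filter and the temporal aggregation, with a toy data frame standing in for the collected tweets and the model's predictions (the full keyword list is given in Figure 4):
+
```python
import pandas as pd

KEYWORDS = ["corona", "covid", "wuhan", "koroona", "korona"]  # lowercase subset of Figure 4

def has_keyword(text: str) -> bool:
    text = text.lower()
    return any(k in text for k in KEYWORDS)

df = pd.DataFrame({
    "date": pd.to_datetime(["2020-03-01", "2020-03-01", "2020-03-02"]),
    "text": ["stay safe everyone", "corona cases rising", "nice weather today"],
    "sentiment": [0.71, 0.24, 0.80],   # stand-in for the model's predictions
})
df["is_covid"] = df["text"].map(has_keyword)

daily_all = df.groupby(pd.Grouper(key="date", freq="D"))["sentiment"].mean()
daily_keyword = df[df["is_covid"]].groupby(pd.Grouper(key="date", freq="D"))["sentiment"].mean()
keyword_counts = df.groupby(pd.Grouper(key="date", freq="D"))["is_covid"].sum()
```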
116
+
117
+ ${}^{6}$ https://tfhub.dev/google/elmo/3
118
+
119
+ ${}^{7}$ https://tfhub.dev/google/universal-sentence-encoder-multilingual/3
122
+
123
+ § 4.2 CONSIDERATIONS
124
+
125
+ There are some assumptions implicit in this analysis method that we want to address here. First of all, we only consider tweets containing a geolocation. This applies to less than $1\%$ of the whole tweet stream, but according to Sloan et al. (2013), the amount of geolocated tweets closely follows the geographic population distribution. According to Graham et al. (2014), there are probably factors determining which users share their locations and which ones do not, but there is no systematic study of these.
126
+
127
+ Other assumptions arise from the analysis method itself. For one, we assume that the model is able to extract meaningful sentiment values from the data. However, sentiment is subjective, and the model may fail for certain constructs (e.g. negations, sarcasm). Additionally, modeling sentiment on a binary scale does not tell the whole story. "Positive" sentiment encompasses, for example, happy or hopeful tweets, "negative" sentiment angry or sad tweets, and "neutral" tweets can be news tweets, for example. A more fine-grained analysis would be of interest in the future.
128
+
129
+ We also assume a somewhat similar perception of sentiment across languages. Finally, we assume that the detected sentiments as a whole are reflective of the mood within the community; on the other hand, mood is not quantifiable in the first place. All of these assumptions can be called into question. Nevertheless, while they may not hold for every single tweet, we hope to detect interesting effects on a large scale. When analyzing thousands of tweets within each time frame, random fluctuations become less likely. We believe that this analysis can provide useful insights into people's thoughts, and form an interesting basis for future studies from psychological or sociological perspectives.
130
+
132
+
133
+ corona, コロナ, covid, 冠狀病毒, wuhan, 武漢, koroona, корона, 코로나, korona, …
154
+ Figure 4: Keywords used for filtering the tweets (not case sensitive).
155
+
156
+ § 5 RESULTS
157
+
158
+ In the following, we present the detected sentiment developments over time, over-all and for selected countries, and correlate them with events that took place within these months. Results for some other countries would have been interesting as well, but were not included because the main spoken language is not covered by MUSE (e.g. Sweden, Denmark). Others were excluded because there was not enough material available; we only analyze countries with at least 300,000 recorded tweets. As described in section 3, tweets are filtered geographically, not by language (i.e. tweets from Italy may also be in languages other than Italian).
159
+
160
+ § 5.1 OVER-ALL
161
+
162
+ In total, we analyzed around 4.6 million tweets, of which around 79,000 contained at least one COVID-19 keyword. Figure 5 shows the development of the sentiment over time for all tweets and for those with keywords, as well as the development of the number of keyworded tweets. The sentiment results are smoothed on a weekly basis (otherwise, we would be seeing a lot of movement during the week, e.g. an increase on the weekends). For the average over all tweets, we see a slight decrease in sentiment over time, indicating possibly that users' moods deteriorated over the last few months. There are some side effects that need to be considered here. For example, the curve rises slightly for holidays like Christmas and Easter (April 12). Interestingly, we see a clear dip around mid-March. Most European countries started implementing strong social distancing measures around this time. We will talk about this in more detail in the next sections.
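+
+ The weekly smoothing can be done, for instance, by resampling the daily averages (a sketch with toy values):
+
```python
import pandas as pd

daily = pd.Series(
    [0.62, 0.58, 0.55, 0.60, 0.57, 0.66, 0.64],                  # toy daily average sentiment
    index=pd.date_range("2020-03-09", periods=7, freq="D"))
weekly = daily.resample("W").mean()                              # one smoothed value per week
```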
163
+
164
+ We see that keywords were used very rarely before mid-January, and only saw a massive increase in usage around the beginning of March. Lately, usage has been decreasing again, indicating a loss of interest over time. Consequently, the sentiment analysis for keyword tweets is not expressive in the beginning. Starting with the more frequent usage in February, the associated sentiment drops massively, indicating that these tweets are now used in relation with the pandemic. Interestingly, the sentiment recovers with the increased use in March - it is possible that users were starting to think about the risks and handling of the situation in a more relaxed way over time. Still, the sentiment curve for keyword tweets lies significantly below the average one, which is to be expected for this all-around rather negative topic.
165
+
166
+ § 5.2 ANALYSIS BY COUNTRY
167
+
168
+ We next aggregated the tweets by country as described in section 3 and performed the same analysis by country. The country-wise curves are shown jointly in figure 6. Comparing the absolute average sentiment values between countries is difficult as they may be influenced by language or cultural factors. However, the relative development is interesting. We see that all curves progress in a relatively similar fashion, with peaks around Christmas and Easter, a strong dip in the middle of March, and a general slow decrease in sentiment. In the following, we will have a closer look at each country's development. (Note that the keyword-only curves are cut off at the beginning for some countries due to a low number of keyword tweets.)
169
+
170
+ § 5.2.1 ITALY
171
+
172
+ Figure 7 shows the average sentiment for all Italian tweets and all Italian keyword tweets, as well as the development of keyword tweets in Italy. In total, around 400,000 Italian tweets are contained in the data set, of which around 12,000 have a keyword. Similar to the over-all curves described in section 5.1, the sentiment curve slowly decreases over time; keywords are not used frequently before the end of January, when the first cases in Italy were confirmed. Sentiment in the keyword tweets starts out very negative and then increases again. Interestingly, we see a dip in sentiment on March 9, which is exactly when the Italian lockdown was announced. Keywords were also used most frequently during that week. The dip is not visible in the keyword-only sentiment curve, suggesting that the negative sentiment was actually caused by the higher prevalence of coronavirus-related tweets.
173
+
174
+ § 5.2.2 SPAIN
175
+
176
+ For Spain, around 780,000 tweets were collected in total, with around 14,000 keyword tweets. The curves are shown in figure 8. Heavier usage of keywords starts around the same time as in Italy, since the first domestic cases were publicized at around the same time. The spike in keyword-only sentiment in mid-February is actually an artifact of the low number of keyworded tweets in combination with the fact that "corona" is a word with other meanings in Spanish (in contrast to the other languages). With more keyword mentions, the sentiment drops as in other countries.
177
+
178
+ From there onwards, the virus progressed somewhat slower in Spain, which is reflected in the curves as well. A lockdown was announced in Spain on March 14, corresponding to a dip in the sentiment curve. As with the Italian data, this dip is not present in the keyword-only sentiments.
179
+
180
+ § 5.2.3 FRANCE
181
+
182
+ Analyses for the data from France are shown in figure 9. For France, around 309,000 tweets and around 4,600 keyword tweets were collected. Due to the lower number of data points, the curves are somewhat less smooth. Despite the first European COVID-19 case being detected in France in January, cases did not increase significantly until the end of February, which once again is also seen in the start of increased keyword usage here. The French lockdown was announced on March 16 and extended on April 13, both reflected in dips in the sentiment curve. Towards the end of the considered period, keyword-only sentiment actually starts to increase, which is also seen in Italy and Germany. This could indicate a shift to a more hopeful outlook with regards to the pandemic.
183
+
184
+ § 5.2.4 GERMANY
185
+
186
+ For Germany, around 415,000 tweets and around 5,900 keyword tweets were collected. The analysis results are shown in figure 10. After only a few initial cases at the end of January, Germany's case count did not increase significantly until early March, which is again when keyword usage increased. The decrease in the sentiment curve actually arrives around the same time as in France and Spain, which is a little surprising because social distancing measures were not introduced by the government until March 22 (extended on March 29). German users were likely influenced by the situation in their neighboring countries here. In general, the curve is flatter than in other countries. One possible reason for this might be the lower severity of measures in Germany, e.g. there were no strict curfews.
187
+
188
+ < g r a p h i c s >
189
+
190
+ Figure 5: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.
191
+
192
+ < g r a p h i c s >
193
+
194
+ Figure 7: Italy: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.
195
+
196
+ < g r a p h i c s >
197
+
198
+ Figure 8: Spain: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.
199
+
200
+ < g r a p h i c s >
201
+
202
+ Figure 9: France: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.
203
+
204
+ In contrast to all other considered countries, the keyword-only sentiment curve is not significantly below the sentiment curve for all tweets in Germany after the beginning of March. There are some possible explanations for this. For one, governmental response to the situation was generally applauded in Germany (Betsch et al., 2020), and, as mentioned above, was not as strict as in other countries, possibly not impacting people as much. On the other hand, the over-all German curve is lower than its counterparts from other countries, i.e. German tweets have lower average sentiment values in general, possibly caused by cultural factors.
205
+
206
+ § 5.2.5 UNITED KINGDOM
207
+
208
+ Curves for the United Kingdom are shown in figure 11, calculated on around 1,380,000 tweets including around 22,000 keyword tweets. Higher keyword usage starts somewhat earlier than expected here, in February, whereas a significant increase in cases did not occur until March. Once again, keyword-only sentiment starts out very negative and then increases over time.
209
+
210
+ The British government handled the situation somewhat differently. In early March, only recommendations were given, and a lockdown was explicitly avoided to prevent economic consequences. This may be a cause for the sentiment peak seen at this time. However, the curve falls until mid-March, when other European countries did implement lockdowns. The government finally announced a lockdown starting on March 26. By then, this no longer led to a significant change in average sentiment, but in contrast with other countries, the curve does not swing back to a significantly more positive level in the considered period, and actually decreases towards the end.
211
+
212
+ § 6 CONCLUSION
213
+
214
+ In this paper, we presented the results of a sentiment analysis of 4.6 million geotagged Twitter messages collected during the months of December 2019 through April 2020. This analysis was performed with a neural network trained on an unrelated Twitter sentiment data set. The tweets were then tagged with sentiment on a scale from 0 to 1 using this network. The results were aggregated by country, and averaged over time. Additionally, the sentiments of tweets containing COVID-19-related keywords were aggregated separately.
215
+
216
+ We find several interesting results in the data. First of all, there is a general downward trend in sentiment in the last few months corresponding to the COVID-19 pandemic, with clear dips at times of lockdown announcements and a slow recovery in the following weeks in most countries. COVID-19 keywords were used rarely before February, and correlate with a rise in cases in each country. The sentiment of keyworded tweets starts out very negative at the beginning of increased keyword usage, and becomes more positive over time. However, it remains significantly below the average sentiment in all countries except Germany. Interestingly, there is a slight upward development in sentiment in most countries towards the end of the considered period.
217
+
218
+ § 7 FUTURE WORK
219
+
220
+ We will continue this study by also analyzing the development in the weeks since May 1st and the coming months. More countries will also be added. It will be very interesting to compare the shown European results to those of countries like China, South Korea, Japan, New Zealand, or even individual US states, which were impacted by the pandemic at different times and in different ways, and where the governmental and societal response was different from that of Europe.
221
+
222
+ There are also many other interesting research questions that could be answered on a large scale with this data - for example, regarding people's trust in published COVID-19 information, their concrete opinions on containment measures, or their situation during an infection. Other data sets have also been published in the meantime, including ones that contain hundreds of millions of tweets at the time of writing (e.g. Qazi et al., 2020; Banda et al., 2020). These data sets are much larger because collection was not restricted to geotagged tweets. In Qazi et al. (2020), geolocations were instead filled in from outside sources.
223
+
224
+ These studies could also be extended to elucidate more detailed factors in each country. One possibility here is an analysis of Twitter usage and tweet content by country. Another, as mentioned above, lies in moving from the binary sentiment scale to a more complex model.
225
+
226
+ < g r a p h i c s >
227
+
228
+ Figure 10: Germany: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.
229
+
230
+ < g r a p h i c s >
231
+
232
+ Figure 11: United Kingdom: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/W3Dzaik1ipL/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,249 @@
1
+ # Information Retrieval and Extraction on COVID-19 Clinical Articles Using Graph Community Detection and Bio-BERT Embeddings
2
+
3
+ Debasmita Das, Yatin Katyal, Janu Verma Shashank Dubey, Aakash Deep Singh, Kushagra Agarwal, Sourojit Bhaduri, Rajesh Kumar Ranjan
4
+
5
+ Mastercard AI Garage, Gurgaon, India
6
+
7
+ \{firstname.secondname\}@mastercard.com
8
+
9
+ ## Abstract
10
+
11
+ In this paper, we present an information retrieval system on a corpus of scientific articles related to COVID-19. We build a similarity network on the articles where similarity is determined via shared citations and biological domain-specific sentence embeddings. Ego-splitting community detection on the article network is employed to cluster the articles and then the queries are matched with the clusters. Extractive summarization using BERT and PageRank methods is used to provide responses to the query. We also provide a Question-Answer bot on a small set of intents to demonstrate the efficacy of our model for an information extraction module.
12
+
13
+ ## 1 Introduction
14
+
15
+ Novel coronavirus (COVID-19) has resulted in a pandemic in a short span of time owing to its quick transmission. A lot of scientific attention has been directed towards understanding the causes and impacts of the virus. This has resulted in a large number of research articles being published every day, and extracting relevant information (Asai et al., 2019) from such a huge pool of textual articles remains challenging. It is, thus, of particular importance to have systems that can retrieve relevant answers to queries. For example, it is useful to ask
16
+
17
+ What is known about the transmission, incubation, and environmental stability of COVID-19 ?
18
+
19
+ But finding information relevant to this query is quite challenging owing to the plethora of research articles being published, and the diversity, specificity of the query space. In this work, we address the problem of extracting relevant answers (Chen et al., 2017) from a corpus of clinical articles on COVID-19 in response to a query.
20
+
21
+ Information retrieval, i.e. finding relevant documents in response to a query, is a standard problem with applications in web search engines, e.g. Google, Bing, etc. Models for retrieval systems rely heavily on word embeddings (Mikolov et al., 2013), which provide a vector representation for every word in the corpus. The techniques developed for web search might not extend to the case of clinical articles, since the distribution of words over these documents is quite different from that of typical documents. There has been some work in building retrieval systems for scientific articles in a specific field, e.g. biological or clinical papers, which are powered by domain-specific word embeddings (Mikolov et al., 2013), e.g. BioBERT (Lee et al., 2020). However, the direct use of BioBERT is not optimal; for example, there might not even be an embedding for COVID-19. The domain of biological articles in general is still not specific enough to be used directly for our task. Thus, to build a useful information extraction system for COVID-19, it is important to fine-tune embeddings to align them to their distribution in COVID-19-related articles.
22
+
23
+ In this paper, we propose a system to extract information from a corpus of COVID-19 articles which is relevant to a query (Srihari and Li, 2000). Our approach has two main modules.
24
+
25
+ - Graph-based Clustering: This involves building a graph of the research articles in the corpus using the citations and textual similarity between them. Biological sentence vector embeddings (BioSentVec) are used to compute similarity. Graph-based community detection (Schaeffer, 2007) algorithms are employed to cluster the large number of documents into a relatively small number of clusters. We provide a detailed qualitative evaluation of the resulting clusters, and try to provide interpretable labels for the clusters. We find the best-matching clusters for a query by computing the similarities of their BioSentVec vectors.
26
+
27
+ - BERT-based Extractive Summarization: This module extracts relevant sentences from the best-matched documents within the top clusters. Contextual embeddings (Si et al., 2019) of BERT-type trained on a corpus of biological articles are used to generate vectorial representation of sentences and the documents. Our output is a set of sentences from the summaries that are ranked in their degree of relevance to the query.
28
+
29
+ We demonstrate our model using COVID-19 Open Research Dataset (CORD-19) which was made available by the White House, Allen Institute for AI and a coalition of research groups. This data is available on Kaggle ${}^{1}$ as a part of their open research challenge (See Section 3.1). We also demonstrate the efficacy of our clustering and the summarization method by experimenting with a Question-Answer (QA) system where we provide precise answers to specific questions e.g. What is the incubation period of COVID-19 ? We provide evidence that our system can be employed, with minor modifications, on a much larger data to build a useful QA system or a chat-bot.
30
+
31
+ Concretely, we make following contributions:
32
+
33
+ 1. Using graph-based clustering on a network of articles in the corpus.
34
+
35
+ 2. Qualitative analysis of the clusters, and human-assisted labelling on the clusters.
36
+
37
+ 3. Biological BERT based extractive summarization of the articles to find informative portions which are relevant to a query.
38
+
39
+ 4. Proof-of-concept for a Question-Answer system on a limited set of intents.
40
+
41
+ The rest of this paper is organized as follows. Section 2 describes our method in detail, Section 3 covers the results, and Section 4 presents the evaluation and discussion.
42
+
43
+ ## 2 Method
44
+
45
+ In this section, we will discuss the different components of our model. The starting point of our methodology is a graph of the articles.
46
+
47
+ ### 2.1 Construction of the graph
48
+
49
+ We build a citation network (Price, 1965) of the articles in the corpus, where nodes correspond to the papers in the corpus and the edges are determined by the citations of the papers. There are many ways a citation network can be constructed, e.g. if paper A cites paper B, then there is a directed edge from A to B. We use the transversality of the citation relations to create edges, i.e. if paper A and paper B both cite common papers, then this is a signal that A and B are likely to be discussing similar topics.
50
+
51
+ In addition to the similarity of two papers in terms of their mutual citations, the semantic similarity of the documents (Lee et al., 2005) is also a valuable factor. Moreover, some articles might have only a few of their citations in the corpus, and some articles can have none of their citations in the corpus. Thus, we use semantic similarity between a pair of articles to add new edges and further enhance the coverage of the network over the corpus. Word and sentence embeddings (Arora et al., 2016) have emerged as the standard way to obtain semantic representations of textual documents (Lau and Baldwin, 2016), where the documents are projected onto a low-dimensional space that preserves the semantic relationships. For this work, we use BioSentVec (Chen et al., 2019), which is trained on a corpus of about 30 million clinical and bio-medical research articles from the public databases PubMed and MIMIC-III. BioSentVec provides 700-dimensional sentence embeddings. We separately compute the pairwise cosine similarity of the article abstracts and of the papers, and take their average as the semantic similarity (Muflikhah and Baharudin, 2009) between the papers. If this similarity for a pair of papers is greater than a threshold, we add an edge between them to the citation-based network.
52
+
53
+ Thus, we obtain a larger similarity network of the papers containing undirected edges. For simplicity of the discussion, we treat both types of edges, i.e. citation-based and semantic-based, as indistinguishable and work with a homogeneous graph. The network, thus built, can have multiple edges between two nodes, e.g. if they have multiple citations in common and an edge can be formed via any of the shared citations, or if both types of edges are present. We ignore this multiplicity and consider at most one edge between any two nodes. It is possible to develop a heterogeneous network (Shi et al., 2014) of different edge types, and edges can be weighted according to the number of shared citations. We do not consider these approaches in this work.
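+
+ A minimal sketch of this graph construction, with random vectors standing in for the BioSentVec embeddings and the 0.9 similarity threshold mentioned in Section 3.1:
+
```python
import itertools
import networkx as nx
import numpy as np

# Toy corpus: each paper has a set of cited works and a (stand-in) 700-d BioSentVec vector.
papers = {
    "p1": {"cites": {"x", "y"}, "emb": np.random.rand(700)},
    "p2": {"cites": {"y", "z"}, "emb": np.random.rand(700)},
    "p3": {"cites": {"q"},      "emb": np.random.rand(700)},
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

G = nx.Graph()
G.add_nodes_from(papers)
for a, b in itertools.combinations(papers, 2):
    shared_citation = bool(papers[a]["cites"] & papers[b]["cites"])
    semantically_close = cosine(papers[a]["emb"], papers[b]["emb"]) > 0.9
    if shared_citation or semantically_close:
        G.add_edge(a, b)   # at most one undirected edge; edge types are not distinguished
```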
54
+
55
+ ---
56
+
57
+ ${}^{1}$ https://www.kaggle.com/allen-institute-for-ai/CORD- 19-research-challenge
58
+
59
+ ---
60
+
61
+ ### 2.2 Clustering of the papers
62
+
63
+ We employ community detection (Chen et al., 2011) on our graph of citation-based and semantic-based edges. Community detection is a useful technique to extract relationships between nodes in a complex graph. Nodes within a community are 'strongly' connected to each other than to those in different communities, and the nodes can be classified into communities or modules (Fortunato, 2010). For example, in a collaboration network of scientists, where nodes are scientists, edges corresponds to co-authorship, communities can indicate research areas. There is a plethora of community detection algorithms, each with their set of assumptions and workings. We will use community and cluster interchangeably.
64
+
65
+ For this task, we will use ego-splitting (Epasto et al., 2017) which provides a scalable and flexible community detection algorithm for complex networks. It employs local structures known as ego-nets which are the sub-graphs induced by the neighborhood of each node.
66
+
67
+ 1. Local ego-net clustering involves the construction of an ego-net for each node and then clustering of the ego-nets. For each cluster thus obtained, we add new nodes (personas) which are copies of the original node but are each uniquely associated with one community. Then a new graph (the persona graph) is constructed, which contains these multiple copies of the nodes and whose edges correspond to the edges in the original network.
68
+
69
+ 2. Global network partitioning involves partitioning the persona graph and mapping the resulting partitions back to the original graph.
70
+
71
+ This algorithm can be run at different levels of resolution - lower resolutions generate more granular clusters (a higher number of clusters) and higher resolutions produce fewer, higher-level clusters.
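+
+ One publicly available implementation of ego-splitting is the EgoNetSplitter class in the karateclub package; using it here, and assuming its resolution parameter plays the role described above, is our assumption, as the paper does not name a specific implementation.
+
```python
import networkx as nx
from karateclub import EgoNetSplitter

graph = nx.karate_club_graph()                 # stand-in for the article graph (nodes 0..n-1)
splitter = EgoNetSplitter(resolution=0.3)      # resolution value reported in Section 3.2
splitter.fit(graph)
memberships = splitter.get_memberships()       # node id -> list of (overlapping) community ids
```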
72
+
73
+ ### 2.3 Mapping Queries to the Clusters
74
+
75
+ We next describe the method to map queries to the clusters, i.e. for any given query we find the clusters that are closely related to it. We employ Bio-BERT embeddings to map the query and each document into dense vectors. Bio-BERT is a domain-specific BERT (Devlin et al., 2018) (Bidirectional Encoder Representations from Transformers) for biomedical text mining; it is trained on a corpus of PubMed and PMC full-text articles. It has been shown that Bio-BERT outperforms other embedding approaches as well as vanilla BERT on clinical data for a variety of tasks, e.g. entity recognition (Nadeau and Sekine, 2007), relation extraction (GuoDong et al., 2005), and question answering. This mapping is done in the following steps (a minimal code sketch is given after the list):
76
+
77
+ 1. Map title of each article in the corpus to a 768-dimensional vector using pre-trained Bio-BERT embeddings.
78
+
79
+ 2. Obtain the Bio-BERT embedding for the given query.
80
+
81
+ 3. Find top-40 most similar titles to the query in terms of their cosine similarities with the query.
82
+
83
+ 4. This gives a distribution of cluster labels over the top-40 papers.
84
+
85
+ 5. Based on a threshold on the similarity score or on the fraction of the top-40 papers in a cluster matching the query, we tag the query with a set of cluster labels.
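+
+ A minimal sketch of these steps; the Bio-BERT checkpoint name and the mean pooling over token vectors are assumptions, and the titles and cluster labels are toy stand-ins.
+
```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")  # assumed checkpoint
encoder = AutoModel.from_pretrained("dmis-lab/biobert-base-cased-v1.1")

def embed(text: str) -> np.ndarray:
    """768-dimensional mean-pooled Bio-BERT embedding of a short text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state        # shape (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

titles = ["Progress in respiratory virus vaccine development",
          "Importation and human-to-human transmission of a novel coronavirus"]
cluster_of = ["Studies: Vaccine Development",
              "Spread and Transmission of Viral Infections"]   # cluster label per title
title_vecs = [embed(t) for t in titles]

query = "Approaches to evaluate risk for enhanced disease after vaccination"
query_vec = embed(query)
scores = [cosine(query_vec, v) for v in title_vecs]
top = np.argsort(scores)[::-1][:40]                             # indices of the top-40 titles
candidate_clusters = {cluster_of[i] for i in top}               # clusters assigned to the query
```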
86
+
87
+ This mapping helps in reducing the search space of the query and to retrieve more refined and focused results. It is worth noting here that the cluster assignment to the queries is done using only the titles, which might not capture the full relevance. But the assigned clusters do provide a direction and a smaller set of papers to explore further for better and faster search results.
88
+
89
+ Another purpose of this mapping of the query to the clusters is to purpose labels for each query in lieu of the supervised multi-label classification which is not possible due to the lack of the ground truth labels. More discussion in the 3.5 and 3.7.
90
+
91
+ ### 2.4 Information Retrieval
92
+
93
+ We have reduced the set of possible articles that are relevant to a query to the union of the articles in the top-k clusters. Now, we will describe the process of retrieving the articles that best match the query. We again use the pre-trained Bio-BERT embeddings to obtain a vector representation of the whole document. This representation is different from the one used in the cluster mapping, where only the title embedding is used. Also, we only consider the articles in the selected clusters; call this the candidate set. The Bio-BERT embedding of the query is used to compute its cosine similarity with the articles in the candidate set. The top-100 articles from the candidate set, ranked by their cosine similarity with the query, are selected to be returned in response to the query.
94
+
95
+ We also return a set of sentences that best match the query and are deemed most informative. For this, we create a graph of the sentences in the top-100 articles, based on the cosine similarities of their Bio-BERT embeddings. The edges in this graph are weighted by the pairwise cosine similarities of the node sentences. Finally, the sentence nodes are ranked by their PageRank (Xing and Ghorbani, 2004) in this graph and the top seven sentences are reported.
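+
+ A sketch of this sentence-ranking step with networkx, using random vectors as stand-ins for the Bio-BERT sentence embeddings:
+
```python
import itertools
import networkx as nx
import numpy as np

sentences = ["Candidate sentence one.", "Candidate sentence two.", "Candidate sentence three."]
vectors = [np.random.rand(768) for _ in sentences]       # stand-ins for Bio-BERT embeddings

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

G = nx.Graph()
G.add_nodes_from(range(len(sentences)))
for i, j in itertools.combinations(range(len(sentences)), 2):
    G.add_edge(i, j, weight=cosine(vectors[i], vectors[j]))

scores = nx.pagerank(G, weight="weight")
best = sorted(scores, key=scores.get, reverse=True)[:7]   # indices of the top-7 sentences
top_sentences = [sentences[i] for i in best]
```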
96
+
97
+ ### 2.5 Question-Answer System
98
+
99
+ To explore the efficacy of our work for more refined information extraction, we experimented with a Question-Answer bot (Nomoto et al., 2004) which takes in a question and attempts to find the precise answer to it. For the input question, we employ our model to find relevant articles and passages which are most likely to contain the answer. The question and the passage are then concatenated and fed to a BERT-type transformer (Devlin et al., 2018) with pre-trained BioBERT embeddings as the input. The output layer is a sequence of the same length as the input, with a softmax layer trained to compute the probability of each input token being the start or the end of the answer span.
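+
+ The sketch below uses the Hugging Face question-answering pipeline with a generic SQuAD-tuned reader as a stand-in; the authors describe a BioBERT-based reader, and the checkpoint shown here is only a placeholder for illustration.
+
```python
from transformers import pipeline

# Placeholder reader: a generic SQuAD-distilled model, not the authors' BioBERT-based one.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

passage = ("In the studied cohort the incubation period of the virus was estimated "
           "to be around five days.")
result = qa(question="What is the incubation period of COVID-19?", context=passage)
print(result["answer"], result["score"])   # extracted span and its confidence
```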
100
+
101
+ ### 2.6 Extractive Summarization and Information Extraction
102
+
103
+ As a further enhancement and an application of the system, we provide extractive summarization (Padmakumar and Saran, 2016) of the best-matched papers returned by the system. We attempt to produce a coherent summary of each paper by extracting important sentences from it. We used the approach of (Miller, 2019) for this task, which uses pre-trained BERT (Devlin et al., 2018) embeddings to obtain sentence-level embeddings; K-means clustering (Wagstaff et al., 2001) of the sentences is then performed. Finally, the sentences closest to the centroids are selected.
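+
+ A minimal sketch of this summarization step (cluster sentence embeddings with K-means and keep the sentence closest to each centroid), with random vectors standing in for the BERT sentence embeddings:
+
```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

sentences = ["First sentence.", "Second sentence.", "Third sentence.", "Fourth sentence."]
X = np.random.rand(len(sentences), 768)          # stand-ins for BERT sentence embeddings

k = 2                                            # desired summary length in sentences
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
closest, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_, X)
summary = [sentences[i] for i in sorted(set(closest))]   # keep original sentence order
```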
104
+
105
+ ## 3 Results and Discussions
106
+
107
+ In this section, we provide a discussion on the results and comment on the evaluation and broad utilization of this work.
108
+
109
+ ### 3.1 Data
110
+
111
+ We used a corpus of scientific articles named the COVID-19 Open Research Dataset (CORD-19), which was collected by the Allen Institute for AI and a coalition of research groups. Specifically, our motivation was the open research challenge hosted on Kaggle to build useful text mining tools to assist the medical community in developing answers to high-priority scientific questions. CORD-19 contains 134,000 research articles, including 60,000 full-text articles about COVID-19, SARS-CoV-2, etc. Some important intents or tasks have been identified and there are multiple sub-tasks within each task. These represent a set of high-importance topics and sub-topics for which relevant information is to be retrieved from the given corpus.
112
+
113
+ As described in Section 2.1, we built a network of the papers where each paper is represented as a node, and an edge between two nodes implies that they either share a citation or that the cosine similarity of the BioSentVec embeddings of their abstracts and titles is greater than 0.9.
114
+
115
+ ### 3.2 Clustering Results
116
+
117
+ We performed the ego-splitting community detection algorithm on the article graph at various levels of resolution ranging from 0.001 to 1. The clustering that we report here was performed at a resolution of 0.3 and produced 661 clusters of non-uniform sizes, covering around 38k papers.
118
+
119
+ We also attempt to provide human-understandable labels for some of the clusters. For each cluster, we select the top-5 papers using PageRank on the papers as nodes in the subgraph corresponding to the cluster. The keywords in these top articles provide us with candidate labels for the cluster. These potential labels are then manually checked against the cluster to refine them and to reduce noise in the label assignment. Some examples of cluster labels are as follows:
120
+
121
+ - Travel, Mass Gathering & Social Mixing during Epidemics (224 Research Papers).
122
+
123
+ - Clinical Management during Epidemics (223 Research Papers)
124
+
125
+ - Spread and Transmission of Viral Infections (3,347 Research Papers)
126
+
127
+ - Hospital Emergency Management (824 Research Papers)
128
+
129
+ - Social, Media/Newspapers & Political Impact on Viral Epidemics (95 Research Papers)
130
+
131
+ It must be noted that the labels do not faithfully match every single article in a cluster, but a substantial majority of the articles can be described by the assigned labels. We also do not claim to have 100\% coverage, since there are a lot of clusters and we do not always have sufficient information to find consistent labels. Having a smaller set of labels helps in better bookkeeping. We plan to do a more careful study of the labels - automatically and manually - to further refine the results and increase the coverage over articles. We are also making public a set of articles with their labels. We hope that this set can be used to study the articles related to COVID-19 in a supervised manner and to employ modern developments in NLP to develop techniques that help the community in various tasks.
132
+
133
+ ### 3.3 Cluster Mapping
134
+
135
+ A sample of the cluster mapping results are shown in the Table 2. We take some examples of queries i.e. sub-tasks provided with the Kaggle competition and find their best-matching clusters via the procedure explained in Section 2.3. The results for all the sub-tasks are being made available.
136
+
137
+ ### 3.4 Retrieval Results
138
+
139
+ A sample of results of the retrieval system for the sub-tasks provided with the Kaggle competition are shown in the Table 3. Consider the query subtask : Approaches to evaluate risk for enhanced disease & vaccinations starting after vaccination, for which we find the best matching clusters as
140
+
141
+ 1. Studies: Vaccine Development
142
+
143
+ 2. Spread & Transmission of Viral Infections
144
+
145
+ From these clusters, the retrieval system finds the articles best matching to the sub-task as explained in 2.4.
146
+
147
+ 1. Vaccines and Vaccination Practices: Key Food Systems to Sustainable Animal Production.
148
+
149
+ 2. Canine Vaccination
150
+
151
+ 3. Progress in Respiratory Virus Vaccine Development.
152
+
153
+ ### 3.5 Discussion on Evaluation
154
+
155
+ Finally, we would like to address issues around the evaluation and applicability of this work. Since no ground-truth data on articles matching the queries was provided, it was not possible to evaluate the system quantitatively. We thus have no quantitative way to show the superiority of our methods, nor do we claim any. In fact, our motivation was to quickly prototype a retrieval system using modern advances in NLP, like contextual embeddings, e.g. BERT. We have used human intervention throughout this process, both while building the model and for limited evaluation. We also propose potential evaluation methods for this situation.
156
+
157
+ We performed unsupervised clustering of the articles, the evaluation of which is inherently difficult, e.g. clustering is in the eyes of the beholder (Estivill-Castro, 2002). We do provide a qualitative study of the clusters by confirming that, for most clusters, articles within a cluster are 'more' similar to each other than to those in other clusters. It is possible to compute statistics like the Silhouette coefficient, the gap statistic, etc. that provide a quantitative evaluation, but these statistics are often not useful, and which of them should be used is not obvious; see e.g. Clustervision (Kwon et al., 2018). We also provide names/tags for the clusters based on finding the top papers in each cluster, in terms of their PageRank values in a small graph, and the keywords that figure prominently in these documents. Furthermore, we evaluated the tags by looking inside the clusters and comparing the papers against the proposed tags. Topic modeling, e.g. LDA, could be another approach to find tags for the articles and thus for the clusters. Our approach is much simpler and is also computationally efficient.
158
+
159
+ The information retrieval system that we proposed here works by finding articles that are best matched to a query. We manually investigate the results for a set of queries, i.e. the sub-tasks in the Kaggle competition. First, the mapping of the queries to the clusters is done, then the best-matching documents are returned.
160
+
161
+ For the QA system, we provide short, precise answers to the questions. We make no claim on the correctness of the answers, and only restrict ourselves to extracting the answers from the papers. Note that it is possible that different answers to the same question are reported in different papers.
162
+
163
+ <table><tr><td>Cluster</td><td>Example Papers</td><td>Journal</td><td>Published</td></tr><tr><td>Travel & Mass Gathering</td><td>1. Mass gathering and globalization of respiratory pathogens during 2013 Hajj</td><td>Clinical Microbiology and Infection</td><td>2015-06-30</td></tr><tr><td/><td>2. Travel implications of emerging coronavirus SARS and MERS-CoV</td><td>Travel Medicine & Infectious Disease</td><td>2014-10-31</td></tr><tr><td/><td>3. Respiratory tract infections among French Hajj pilgrims from 2014 to 2017</td><td>Sci Rep</td><td>2019-11-28</td></tr><tr><td>Studies: Vaccine</td><td>1. Immunoinformatics and Vaccine</td><td>Immunotargets Ther</td><td>2020-02-26</td></tr><tr><td>Development</td><td>Development: An Overview 2. Immunization recommendations and safety & immunogenicity on the delayed vaccination of non-national immunization program for the coronavirus disease 2019 in China</td><td>Chinese Journal of Pediatrics</td><td>2020-02-27</td></tr><tr><td>Spread and Transmission of Viral Infections</td><td>1. Prediction of COVID-19 Spreading Profiles in South Korea, Italy and Iran by Data-Driven Coding</td><td>medRxiv</td><td>2020-03-10</td></tr><tr><td/><td>2. Importation and Human-to-Human Transmission of a Novel Coronavirus in Vietnam</td><td>New England Journal of Medicine</td><td>2020-02-27</td></tr><tr><td/><td>3. Temperature significant change COVID-19 Transmission in 429 cities</td><td>-</td><td>2020-02-25</td></tr><tr><td>Impacts on Pregnancy</td><td>1. From mice to women : the conundrum of immunity to infection during pregnancy 2. Influenza and pneumonia in pregnancy</td><td>Journal of Reproductive Immunology Clinics in Perinatology</td><td>2013-03-31 2005-09-30</td></tr><tr><td/><td>3. Pregnancy and perinatal outcomes of women with SARS</td><td>American Journal of Obstetrics and Gynecology</td><td>2004-07-31</td></tr><tr><td rowspan="3">Impact of Social Media</td><td>1. Social media engagement analysis of U.S. Federal health agencies on Facebook</td><td>BMC Med Inform Decis Mak</td><td>2017-04-21</td></tr><tr><td>2. Social Media as a Sensor of Air Quality and Public Response in China</td><td>J Med Internet Res</td><td>2015-03-26</td></tr><tr><td>3. Scoping Review on Search Queries and Social Media for Disease Surveillance: A Chronology of Innovation height</td><td>J Med Internet Res</td><td>2013-07-18</td></tr></table>
164
+
165
+ Table 1: Sample of clusters and corresponding papers.
166
+
167
+ <table><tr><td>Sub-task</td><td>Clusters</td><td>No. of Documents</td></tr><tr><td>Approaches to evaluate risk for enhanced disease after vaccination</td><td>1.Studies: Vaccine Development 2. Spread & Transmission of Viral Infections</td><td>638 835</td></tr><tr><td>Seasonality of Transmission</td><td>1.Seasonality of Viral Infections 2. Spread & Transmission of Viral Infections</td><td>138 835</td></tr><tr><td>Age-adjusted mortality data for Acute</td><td>1.Viral Infections - Studies</td><td>3,347</td></tr><tr><td>Respiratory Distress Syndrome (ARDS)</td><td>2.Hospital Emergency Management</td><td>824</td></tr><tr><td>with or without other organ failure - particularly for viral etiologies</td><td>3.Severe Pneumonia</td><td>567</td></tr></table>
168
+
169
+ Table 2: Sub-tasks and their corresponding clusters
170
+
171
+ <table><tr><td rowspan="4">Subtask Approaches to evaluate risk for enhanced disease vaccination</td><td>Top 3 Sentences 1. Cattle receive many vaccinations starting after 3 months of age after maternal immunity no longer interferes with vaccination Animal by neutralizing the vaccine virus Production 2. Although protection againstVeterinary Clinics</td><td>Title of the Paper Vaccines and Vaccination Practices: Key to Sustainable</td><td>Journal Encyclopedia of Agriculture and Food Systems</td></tr><tr><td>most agents develops after routine vaccination programs, vaccines against some agents such as herpesvirus or Reovirus are not available.</td><td>Canine Vaccination</td><td>of North America: Small Animal Practice</td></tr><tr><td>3. A realistic goal of</td><td>Progress in</td><td>Seminars in</td></tr><tr><td>immunization has to be a reduction of severe disease rather than induction of sterilizing immunity, similar to what has been achieved with rotavirus vaccines</td><td>Respiratory Virus Vaccine Development</td><td>Respiratory and Critical Care Medicine</td></tr><tr><td rowspan="3">Co-infections (determine whether co-existing respiratory/viral infections make the virus more transmissible or virulent) and other co-morbidities</td><td>1.The September epidemic of asthma-Observational studies have also been used to investigate the association of respiratory viruses with asthma morbidity</td><td>The Impact of Respiratory Viral Infection on Wheezing Illnesses & Asthma Exacerbations</td><td>Immunology and Allergy Clinics of North America</td></tr><tr><td>2.Their relatively short incubation times and efficient transmission via small droplets among comorbid patients highlight the need for better understanding of respiratory viral infections in hospital settings.</td><td>Laboratory-based surveillance of hospital-acquired respiratory virus infection in a tertiary care hospital</td><td>American Journal of Infection Control</td></tr><tr><td>3.However, such studies could not take into account possible episodes of mild or moderate illness that did not require inpatient medical care and could not address whether asymptomatic community spread played a role in the 2003 epidemic.</td><td>SARS-CoV Antibody Prevalence in all Hong Kong Patient Contacts</td><td>Emerg Infect Dis</td></tr></table>
172
+
173
+ Table 3: Sub-tasks & corresponding articles returned by the System
174
+
175
+ <table><tr><td>Query</td><td>Answers</td><td>Confidence</td></tr><tr><td rowspan="2">Transmission Risks</td><td>1. High Community Prevalence</td><td>99.92</td></tr><tr><td>2. VIral Infections</td><td>99.84</td></tr><tr><td rowspan="3">Animal Host to Human</td><td>1. Virus</td><td>99.79</td></tr><tr><td>2. Pediculus lice</td><td>99.64</td></tr><tr><td>3. Anthropods</td><td>98.974</td></tr><tr><td rowspan="3">Risk Reduction Strategies</td><td>1. Hygiene Measures</td><td>98.235</td></tr><tr><td>2. Antioxidant Vitamin Supplements</td><td>97.746</td></tr><tr><td>3. Quarantine</td><td>97.471</td></tr><tr><td rowspan="4">What are neonates risk?</td><td>1. cardiac failure</td><td>96.925</td></tr><tr><td>2. Diarrhoea</td><td>96.778</td></tr><tr><td>3. Allergic Disorders</td><td>96.424</td></tr><tr><td>4. Serious Illness or Death</td><td>95.439</td></tr><tr><td rowspan="2">What are extrapulmonary manifestations of COVID-19?</td><td>1. Orbital sinus bleeding</td><td>92.95</td></tr><tr><td>2. Invasive Devices</td><td>90.21</td></tr><tr><td rowspan="2">Coronavirus Survival</td><td>1. 3 hours</td><td>97.753</td></tr><tr><td>3. 4 days on surfaces</td><td>97.271</td></tr></table>
176
+
177
+ Table 4: Questions and Answers with Confidence Score
178
+
179
+ ### 3.6 Possible Evaluation
180
+
181
+ In the absence of ground truth, a possible evaluation of the retrieval system could be to perform a user study on the relevance of the results to a query. This means measuring how relevant the results are to a query as ascertained by a set of unbiased users, e.g. via Mechanical Turk. If we show the titles and abstracts of the top-k articles to the subjects, we can calculate the average number of articles marked relevant by them over a set of queries. This can be loosely interpreted as an estimate of the Precision@$k$ of the system. Since it is not known how many relevant articles there are in the corpus, due to the lack of a notion of relevance, such an evaluation cannot estimate recall. It must be noted that such an estimate is far from perfect, since the variance across users and queries is not factored in. Careful estimation of the sample size is another point of contention. At the least, such a study can help us reach a consensus notion of relevance and possibly build a small set of labeled data. Due to lack of time and of complete clarity on the procedure, we have not performed an evaluation of this kind, and restricted ourselves to our team members investigating the results.
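+
+ In other words, the quantity being estimated is essentially the precision at $k$ averaged over the query set $Q$:
+
+ $$\text{P@}k = \frac{1}{|Q|} \sum_{q \in Q} \frac{\left|\{\text{articles among the top-}k\text{ for } q \text{ marked relevant}\}\right|}{k}$$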
182
+
183
+ ### 3.7 Further
184
+
185
+ We want to highlight that for various components of this system, we used human intervention to label data and tried to resolve some ambiguities. We are releasing these labeled pieces along with this paper. We hope this brings us closer to constructing labelled data for various tasks, e.g. classification of articles and queries into cluster categories, entity recognition via top keywords, information retrieval & extraction, and a QA system.
186
+
187
+ ## 4 Conclusion
188
+
189
+ In this work, we presented the problem of building an information retrieval system for scientific papers on COVID-19. This system, based on network-analytic methods and modern developments in contextual word embeddings, e.g. BERT, extracts articles and the sections therein relevant to a given query. We used human intervention in an attempt to attach interpretable labels to the data, e.g. articles and queries. We also discussed challenges and possible avenues for evaluating such a system in the absence of ground truth data.
190
+
191
+ ## References
192
+
193
+ Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2016. A simple but tough-to-beat baseline for sentence embeddings.
194
+
195
+ Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2019. Learning to retrieve reasoning paths over wikipedia graph for question answering. arXiv preprint arXiv:1911.10470.
198
+
199
+ Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051.
200
+
201
+ Qingyu Chen, Yifan Peng, and Zhiyong Lu. 2019. Biosentvec: creating sentence embeddings for biomedical texts. In 2019 IEEE International Conference on Healthcare Informatics (ICHI), pages 1- 5. IEEE.
202
+
203
+ Wei Chen, Zhenming Liu, Xiaorui Sun, and Yajun Wang. 2011. Community detection in social networks through community formation games. In Twenty-Second International Joint Conference on Artificial Intelligence.
204
+
205
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
206
+
207
+ Alessandro Epasto, Silvio Lattanzi, and Renato Paes Leme. 2017. Ego-splitting framework: From non-overlapping to overlapping clusters. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 145-154.
208
+
209
+ Vladimir Estivill-Castro. 2002. Why so many clustering algorithms: A position paper. SIGKDD Explor. Newsl., 4(1):65-75.
210
+
211
+ Santo Fortunato. 2010. Community detection in graphs. Physics reports, 486(3-5):75-174.
212
+
213
+ Zhou GuoDong, Su Jian, Zhang Jie, and Zhang Min. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 427-434. Association for Computational Linguistics.
214
+
215
+ Bum Chul Kwon, Ben Eysenbach, Janu Verma, Kenney Ng, Christopher deFilippi, Walter F. Stewart, and Adam Perer. 2018. Clustervision: Visual supervision of unsupervised clustering. IEEE Transactions on Visualization and Computer Graphics, PP(1):1-1.
216
+
217
+ Jey Han Lau and Timothy Baldwin. 2016. An empirical evaluation of doc2vec with practical insights into document embedding generation. arXiv preprint arXiv:1607.05368.
218
+
219
+ Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
220
+
221
+ Michael D Lee, Brandon Pincombe, and Matthew Welsh. 2005. An empirical evaluation of models of text document similarity. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 27.
224
+
225
+ Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.
226
+
227
+ Derek Miller. 2019. Leveraging bert for extractive text summarization on lectures. ArXiv, abs/1906.04165.
228
+
229
+ Lailil Muflikhah and Baharum Baharudin. 2009. Document clustering using concept space and cosine similarity measurement. In 2009 International Conference on Computer Technology and Development, volume 1, pages 58-62. IEEE.
230
+
231
+ David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvis-ticae Investigationes, 30(1):3-26.
232
+
233
+ Masako Nomoto, Mitsuhiro Sato, and Hiroyuki Suzuki. 2004. Document retrieval system and question answering system. US Patent App. 10/637,498.
234
+
235
+ Aishwarya Padmakumar and Akanksha Saran. 2016. Unsupervised text summarization using sentence embeddings. Technical report, University of Texas at Austin.
236
+
237
+ Derek J De Solla Price. 1965. Networks of scientific papers. Science, pages 510-515.
238
+
239
+ Satu Elisa Schaeffer. 2007. Graph clustering. Computer science review, 1(1):27-64.
240
+
241
+ Chuan Shi, Ran Wang, Yitong Li, Philip S Yu, and Bin Wu. 2014. Ranking-based clustering on general heterogeneous information networks by network projection. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pages 699-708.
242
+
243
+ Yuqi Si, Jingqi Wang, Hua Xu, and Kirk Roberts. 2019. Enhancing clinical concept extraction with contextual embeddings. Journal of the American Medical Informatics Association, 26(11):1297-1304.
244
+
245
+ Rohini K Srihari and Wei Li. 2000. A question answering system supported by information extraction. In Sixth Applied Natural Language Processing Conference, pages 166-172.
246
+
247
+ Kiri Wagstaff, Claire Cardie, Seth Rogers, Stefan Schrödl, et al. 2001. Constrained k-means clustering with background knowledge. In Icml, volume 1, pages 577-584.
248
+
249
+ Wenpu Xing and Ali Ghorbani. 2004. Weighted pagerank algorithm. In Proceedings. Second Annual Conference on Communication Networks and Services Research, 2004, pages 305-314. IEEE.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/W3Dzaik1ipL/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,321 @@
1
+ § INFORMATION RETRIEVAL AND EXTRACTION ON COVID-19 CLINICAL ARTICLES USING GRAPH COMMUNITY DETECTION AND BIO-BERT EMBEDDINGS
2
+
3
+ Debasmita Das, Yatin Katyal, Janu Verma Shashank Dubey, Aakash Deep Singh, Kushagra Agarwal, Sourojit Bhaduri, Rajesh Kumar Ranjan
4
+
5
+ Mastercard AI Garage, Gurgaon, India
6
+
7
+ {firstname.secondname}@mastercard.com
8
+
9
+ § ABSTRACT
10
+
11
+ In this paper, we present an information retrieval system on a corpus of scientific articles related to COVID-19. We build a similarity network on the articles where similarity is determined via shared citations and biological domain-specific sentence embeddings. Ego-splitting community detection on the article network is employed to cluster the articles and then the queries are matched with the clusters. Extractive summarization using BERT and PageRank methods is used to provide responses to the query. We also provide a Question-Answer bot on a small set of intents to demonstrate the efficacy of our model for an information extraction module.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Novel coronavirus (COVID-19) has resulted in a pandemic in a short span of time owing to its quick transmission. A lot of scientific attention has been directed towards understanding the causes and impacts of the virus. This has resulted in a large number of research articles being published every day, and extracting relevant information (Asai et al., 2019) from such a huge pool of textual articles remains challenging. It is, thus, of particular importance to have systems that can retrieve relevant answers to queries. For example, it is useful to ask
16
+
17
+ What is known about the transmission, incubation, and environmental stability of COVID-19 ?
18
+
19
+ But finding information relevant to this query is quite challenging owing to the plethora of research articles being published, and the diversity, specificity of the query space. In this work, we address the problem of extracting relevant answers (Chen et al., 2017) from a corpus of clinical articles on COVID-19 in response to a query.
20
+
21
+ Information retrieval, i.e. finding relevant documents in response to a query, is a standard problem with applications in web search engines, e.g. Google, Bing, etc. Models for retrieval systems rely heavily on word embeddings (Mikolov et al., 2013), which provide a vector representation for every word in the corpus. The techniques developed for web search might not extend to the case of clinical articles, since the distribution of words over these documents is quite different from that of typical documents. There has been some work in building retrieval systems for scientific articles in a specific field, e.g. biological or clinical papers, which are powered by domain-specific word embeddings (Mikolov et al., 2013), e.g. BioBERT (Lee et al., 2020). However, the direct use of BioBERT is not optimal; for example, there might not even be an embedding for COVID-19. The domain of biological articles in general is still not specific enough to be used directly for our task. Thus, to build a useful information extraction system for COVID-19, it is important to fine-tune embeddings to align them to their distribution in COVID-19-related articles.
22
+
23
+ In this paper, we propose a system to extract information from a corpus of COVID-19 articles which is relevant to a query (Srihari and Li, 2000). Our approach has two main modules.
24
+
25
+ * Graph-based Clustering: This involves building a graph of the research articles in the corpus using the citations and textual similarity between them. Biological sentence vector embeddings (BioSentVec) are used to compute similarity. Graph-based community detection (Schaeffer, 2007) algorithms are employed to cluster the large number of documents into a relatively small number of clusters. We provide a detailed qualitative evaluation of the resulting clusters, and try to provide interpretable labels for the clusters. We find the best-matching clusters for a query by computing the similarities of their BioSentVec vectors.
26
+
27
+ * BERT-based Extractive Summarization: This module extracts relevant sentences from the best-matched documents within the top clusters. Contextual embeddings (Si et al., 2019) of BERT type trained on a corpus of biological articles are used to generate vectorial representations of the sentences and the documents. Our output is a set of sentences from the summaries that are ranked by their degree of relevance to the query.
28
+
29
+ We demonstrate our model using the COVID-19 Open Research Dataset (CORD-19), which was made available by the White House, the Allen Institute for AI, and a coalition of research groups. This data is available on Kaggle${}^{1}$ as part of their open research challenge (see Section 3.1). We also demonstrate the efficacy of our clustering and summarization methods by experimenting with a Question-Answer (QA) system where we provide precise answers to specific questions, e.g., What is the incubation period of COVID-19? We provide evidence that our system can be employed, with minor modifications, on much larger data to build a useful QA system or a chat-bot.
30
+
31
+ Concretely, we make the following contributions:
32
+
33
+ 1. Using graph-based clustering on a network of articles in the corpus.
34
+
35
+ 2. Qualitative analysis of the clusters, and human-assisted labelling on the clusters.
36
+
37
+ 3. Biological BERT based extractive summarization of the articles to find informative portions which are relevant to a query.
38
+
39
+ 4. Proof-of-concept for a Question-Answer system on a limited set of intents.
40
+
41
+ The rest of this paper is organized as follows. Section 2 describes our method in detail, Section 3 covers the results, and Section 4 presents the evaluation and discussion.
42
+
43
+ § 2 METHOD
44
+
45
+ In this section, we discuss the different components of our model. The starting point of our methodology is a graph of the articles.
46
+
47
+ § 2.1 CONSTRUCTION OF THE GRAPH
48
+
49
+ We build a citation network (Price, 1965) of the articles in the corpus where nodes correspond to the papers and the edges are determined by the citations of the papers. There are many ways a citation network can be constructed, e.g., if paper A cites paper B, then there is a directed edge from A to B. We instead use shared citations to create edges, i.e., if paper A and paper B both cite common papers, then this is a signal that A and B are likely to be discussing similar topics.
50
+
51
+ In addition to the similarity of two papers in terms of their mutual citations, the semantic similarity of the documents (Lee et al., 2005) is also a valuable factor. Moreover, some articles might have only a few of their citations in the corpus, and some articles can have none of their citations in the corpus. Thus, we use semantic similarity between a pair of articles to add new edges and further enhance the coverage of the network over the corpus. Word and sentence embeddings (Arora et al., 2016) have emerged as the standard way to obtain semantic representations of textual documents (Lau and Baldwin, 2016), where the documents are projected onto a low-dimensional space that preserves the semantic relationship. For this work, we use BioSentVec (Chen et al., 2019), which is trained on a corpus of about 30 million clinical and bio-medical research articles from the public databases PubMed and MIMIC-III. BioSentVec provides 700-dimensional sentence embeddings. We separately compute the pairwise cosine similarities of the article abstracts and titles, and take their average as the semantic similarity (Muflikhah and Baharudin, 2009) between the papers. If this similarity for a pair of papers is greater than a threshold, we add an edge between them to the citation-based network.
52
+
53
+ Thus, we obtain a larger similarity network of the papers containing undirected edges. For simplicity of the discussion, we treat both types of edges, i.e., citation-based and semantic-based, as indistinguishable and work with a homogeneous graph. The network, thus built, can have multiple edges between two nodes, e.g., if they have multiple citations in common and an edge can be formed via any of the shared citations, or if both types of edges are present. We ignore this multiplicity and consider at most one edge between any two nodes. It is possible to develop a heterogeneous network (Shi et al., 2014) of different edge types, and edges can be weighted according to the number of shared citations. We do not consider these approaches in this work.
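+ To make this construction concrete, the following is a minimal sketch (not the authors' code) of building such a similarity network with NetworkX. It assumes each paper already comes with its set of cited works and a single BioSentVec-style vector; in the paper, the abstract and title similarities are averaged rather than using one vector:
+
+ ```python
+ import itertools
+ import networkx as nx
+ import numpy as np
+
+ def build_article_graph(papers, sim_threshold=0.9):
+     """papers: dict paper_id -> {"citations": set of ids, "vec": BioSentVec-style vector}."""
+     graph = nx.Graph()
+     graph.add_nodes_from(papers)
+     for a, b in itertools.combinations(papers, 2):
+         # Citation-based edge: the two papers cite at least one common reference.
+         if papers[a]["citations"] & papers[b]["citations"]:
+             graph.add_edge(a, b)
+             continue
+         # Semantic edge: cosine similarity of the sentence vectors exceeds the threshold.
+         va, vb = papers[a]["vec"], papers[b]["vec"]
+         cos = float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))
+         if cos > sim_threshold:
+             graph.add_edge(a, b)
+     return graph  # nx.Graph keeps at most one undirected edge per node pair
+ ```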
54
+
55
+ ${}^{1}$ https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge
56
+
57
+ § 2.2 CLUSTERING OF THE PAPERS
58
+
59
+ We employ community detection (Chen et al., 2011) on our graph of citation-based and semantic-based edges. Community detection is a useful technique to extract relationships between nodes in a complex graph. Nodes within a community are more 'strongly' connected to each other than to those in different communities, and the nodes can thus be classified into communities or modules (Fortunato, 2010). For example, in a collaboration network of scientists, where nodes are scientists and edges correspond to co-authorship, communities can indicate research areas. There is a plethora of community detection algorithms, each with its own set of assumptions and workings. We use community and cluster interchangeably.
60
+
61
+ For this task, we will use ego-splitting (Epasto et al., 2017) which provides a scalable and flexible community detection algorithm for complex networks. It employs local structures known as ego-nets which are the sub-graphs induced by the neighborhood of each node.
62
+
63
+ 1. Local ego-net clustering involves constructing the ego-net of each node and then clustering the ego-nets. For each cluster thus obtained, we add new nodes (personas) which are copies of the original nodes but are now uniquely associated with a community. Then a new graph (the persona graph) is constructed where there are multiple copies of the nodes and the edges correspond to the edges in the original network.
64
+
65
+ 2. Global network partitioning involves partitioning the persona graph and mapping the resulting partitions back to the original graph.
66
+
67
+ This algorithm can be run at different levels of resolution - lower resolutions generate more granular clusters (a higher number of clusters) and higher resolutions produce fewer, higher-level clusters.
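+ A small sketch of running ego-splitting on this graph; `EgoNetSplitter` from the `karateclub` package is one publicly available implementation used here purely for illustration (the exact API may differ across versions), with the resolution value 0.3 taken from the setting reported in Section 3.2:
+
+ ```python
+ import networkx as nx
+ from karateclub import EgoNetSplitter  # one available ego-splitting implementation
+
+ def cluster_articles(article_graph, resolution=0.3):
+     # karateclub expects nodes indexed 0..n-1, so relabel and remember the paper ids.
+     g = nx.convert_node_labels_to_integers(article_graph, label_attribute="paper_id")
+     splitter = EgoNetSplitter(resolution=resolution)
+     splitter.fit(g)
+     memberships = splitter.get_memberships()  # node index -> list of community ids
+     clusters = {}
+     for node, communities in memberships.items():
+         paper_id = g.nodes[node]["paper_id"]
+         for c in communities:
+             clusters.setdefault(c, []).append(paper_id)
+     return clusters  # cluster id -> list of paper ids
+ ```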
68
+
69
+ § 2.3 MAPPING QUERIES TO THE CLUSTERS
70
+
71
+ We next describe the method to map queries to clusters, i.e., for any given query we find the clusters that are closely related to it. We employ Bio-BERT embeddings to map the query and each document into dense vectors. Bio-BERT is a domain-specific BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2018) for biomedical text mining; it is trained on a corpus of PubMed and PMC full-text articles. It has been shown that Bio-BERT outperforms other embedding approaches as well as vanilla BERT on clinical data for a variety of tasks, e.g., entity recognition (Nadeau and Sekine, 2007), relation extraction (GuoDong et al., 2005), and question answering. This mapping is done in the following steps:
72
+
73
+ 1. Map title of each article in the corpus to a 768-dimensional vector using pre-trained Bio-BERT embeddings.
74
+
75
+ 2. Obtain the Bio-BERT embedding for the given query.
76
+
77
+ 3. Find the top-40 titles most similar to the query in terms of cosine similarity.
78
+
79
+ 4. This gives a distribution of cluster labels over the top-40 papers.
80
+
81
+ 5. Based on a threshold on the similarity score or on the fraction of the top-40 papers in a cluster matching the query, we tag the query with a set of cluster labels.
82
+
83
+ This mapping helps in reducing the search space of the query and in retrieving more refined and focused results. It is worth noting here that the cluster assignment for the queries is done using only the titles, which might not capture the full relevance. But the assigned clusters do provide a direction and a smaller set of papers to explore further for better and faster search results.
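+ A minimal sketch of this query-to-cluster mapping; `biobert_embed` (any function that returns a 768-dimensional vector for a piece of text) and the `cluster_of` lookup are illustrative names, and the fraction threshold is an assumption rather than a value reported in the paper:
+
+ ```python
+ from collections import Counter
+ import numpy as np
+
+ def map_query_to_clusters(query, titles, title_vecs, cluster_of, biobert_embed,
+                           top_n=40, min_fraction=0.1):
+     """titles: paper titles; title_vecs: (n, 768) Bio-BERT title vectors;
+     cluster_of: title -> cluster label; biobert_embed: text -> (768,) vector."""
+     q = biobert_embed(query)
+     sims = title_vecs @ q / (np.linalg.norm(title_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
+     top_idx = np.argsort(-sims)[:top_n]
+     label_counts = Counter(cluster_of[titles[i]] for i in top_idx)
+     # Tag the query with clusters holding at least a minimum fraction of the top titles.
+     return [label for label, n in label_counts.items() if n / top_n >= min_fraction]
+ ```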
84
+
85
+ Another purpose of this mapping of the query to the clusters is to propose labels for each query in lieu of supervised multi-label classification, which is not possible due to the lack of ground truth labels. See Sections 3.5 and 3.7 for more discussion.
86
+
87
+ § 2.4 INFORMATION RETRIEVAL
88
+
89
+ We have reduced the set of possible articles that are relevant to a query to the union of the articles in the top-k clusters. Now, we describe the process of retrieving the articles that best match the query. We again use the pre-trained Bio-BERT embeddings to obtain a vector representation of the whole document. This representation is different from the one used in the cluster mapping, where only the title embedding is used. Also, we only consider the articles in the selected clusters; we call this the candidate set. The Bio-BERT embedding of the query is used to compute its cosine similarity with the articles in the candidate set. The top-100 articles from the candidate set, ranked by their cosine similarity with the query, are selected to be returned in response to the query.
90
+
91
+ We also return a set of sentences that best match the query and are deemed to be most informative. For this, we create a graph of the sentences in the top-100 articles based on the cosine similarities of their Bio-BERT embeddings. The edges in this graph are weighted by the pairwise cosine similarities of the node sentences. Finally, the sentence nodes are ranked by their PageRank (Xing and Ghorbani, 2004) in this graph, and the top seven sentences are reported.
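+ The sentence-ranking step can be sketched as follows, assuming the candidate sentences and their Bio-BERT vectors are already available; the similarity cut-off used to sparsify the graph is illustrative:
+
+ ```python
+ import itertools
+ import networkx as nx
+ import numpy as np
+
+ def top_sentences(sentences, sent_vecs, k=7, min_sim=0.5):
+     """Rank sentences by PageRank over a cosine-similarity graph of their embeddings."""
+     sent_vecs = sent_vecs / (np.linalg.norm(sent_vecs, axis=1, keepdims=True) + 1e-9)
+     graph = nx.Graph()
+     graph.add_nodes_from(range(len(sentences)))
+     for i, j in itertools.combinations(range(len(sentences)), 2):
+         sim = float(sent_vecs[i] @ sent_vecs[j])
+         if sim > min_sim:  # keep only reasonably similar pairs so the graph stays sparse
+             graph.add_edge(i, j, weight=sim)
+     scores = nx.pagerank(graph, weight="weight")
+     ranked = sorted(scores, key=scores.get, reverse=True)[:k]
+     return [sentences[i] for i in ranked]
+ ```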
92
+
93
+ § 2.5 QUESTION-ANSWER SYSTEM
94
+
95
+ To explore the efficacy of our work for more refined information extraction, we experimented with a Question-Answer bot (Nomoto et al., 2004) which takes in a question and attempts to find the precise answer to it. For the input question, we employ our model to find the relevant articles and passages which are most likely to contain the answer. The question and the passage are then concatenated and fed to a BERT-type transformer (Devlin et al., 2018) with pre-trained BioBERT embeddings as the input. The output layer is a sequence of the same length as the input, with a softmax layer that is trained to compute the probability of the corresponding input token being the start or the end of the answer.
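+ A sketch of such a span-prediction step using the Hugging Face `question-answering` pipeline; the model identifier below is a placeholder for any BioBERT checkpoint fine-tuned for extractive QA and is not the exact model used in this work:
+
+ ```python
+ from transformers import pipeline
+
+ # Placeholder model id; substitute a BioBERT checkpoint fine-tuned on a QA dataset.
+ qa = pipeline("question-answering", model="path-or-id-of-biobert-qa-checkpoint")
+
+ def answer(question, passages, top_k=3):
+     """Run the start/end span predictor over retrieved passages, keep the most confident spans."""
+     candidates = [qa(question=question, context=passage) for passage in passages]
+     candidates.sort(key=lambda r: r["score"], reverse=True)
+     return [(r["answer"], round(r["score"] * 100, 2)) for r in candidates[:top_k]]
+ ```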
96
+
97
+ § 2.6 EXTRACTIVE SUMMARIZATION AND INFORMATION EXTRACTION
98
+
99
+ As a further enhancement and an application of the system, we provide extractive summarization (Padmakumar and Saran, 2016) of the best-matched papers returned by the system. We attempt to produce a coherent summary of a paper by extracting important sentences from it. We used the approach of Miller (2019) for this task, which uses pre-trained BERT (Devlin et al., 2018) embeddings to obtain sentence-level embeddings; K-means clustering (Wagstaff et al., 2001) of the sentences is then performed. Finally, the sentences closest to the centroids are selected.
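+ A compact sketch of this Miller-style extractive summarizer, assuming sentence embeddings have already been computed:
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import KMeans
+
+ def extractive_summary(sentences, sent_vecs, n_sentences=5):
+     """Cluster sentence embeddings and pick the sentence closest to each centroid,
+     returning the picks in their original order."""
+     k = min(n_sentences, len(sentences))
+     km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(sent_vecs)
+     picked = set()
+     for centroid in km.cluster_centers_:
+         picked.add(int(np.argmin(np.linalg.norm(sent_vecs - centroid, axis=1))))
+     return [sentences[i] for i in sorted(picked)]
+ ```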
100
+
101
+ § 3 RESULTS AND DISCUSSIONS
102
+
103
+ In this section, we provide a discussion on the results and comment on the evaluation and broad utilization of this work.
104
+
105
+ § 3.1 DATA
106
+
107
+ We used a corpus of scientific articles named the COVID-19 Open Research Dataset (CORD-19), which was collected by the Allen Institute for AI and a coalition of research groups. Specifically, our motivation was the open research challenge hosted on Kaggle to build useful text mining tools to assist the medical community in developing answers to high-priority scientific questions. CORD-19 contains 134,000 research articles, including 60,000 full-text articles about COVID-19, SARS-CoV-2, etc. Some important intents or tasks have been identified, and there are multiple sub-tasks within each task. These represent a set of high-importance topics and sub-topics for which relevant information is to be retrieved from the given corpus.
108
+
109
+ As described in Section 2.1, we built a network of the papers where each paper is represented as a node, and an edge between two nodes implies that they either share a citation or that the cosine similarity of the BioSentVec embeddings of their abstracts and titles is greater than 0.9.
110
+
111
+ § 3.2 CLUSTERING RESULTS
112
+
113
+ We ran the ego-splitting community detection algorithm on the article graph at various levels of resolution ranging from 0.001 to 1. The clustering that we report here was performed at a resolution of 0.3 and produced 661 clusters of non-uniform sizes covering around 38k papers.
114
+
115
+ We also attempt to provide human-understandable labels for some of the clusters. For each cluster, we select the top-5 papers using PageRank on the sub-graph corresponding to the cluster. The keywords in these top articles provide us with candidate labels for the cluster. These potential labels are then manually checked against the cluster to refine them and to reduce noise in the label assignment. Some examples of cluster labels are as follows:
116
+
117
+ * Travel, Mass Gathering & Social Mixing during Epidemics (224 Research Papers).
118
+
119
+ * Clinical Management during Epidemics (223 Research Papers)
120
+
121
+ * Spread and Transmission of Viral Infections (3,347 Research Papers)
122
+
123
+ * Hospital Emergency Management (824 Research Papers)
124
+
125
+ * Social Media/Newspapers & Political Impact on Viral Epidemics (95 Research Papers)
126
+
127
+ It must be noted that the labels do not faithfully match every single article in a cluster, but a substantial majority of the articles can be described by the assigned labels. We also do not claim to have 100% coverage, since there are a lot of clusters and we do not always have sufficient information to find consistent labels. Having a smaller set of labels helps in better bookkeeping. We plan to do a more careful study of the labels - automatically and manually - to further refine the results and increase the coverage over articles. We are also making public a set of articles with their labels. We hope that this set can be used to study the articles related to COVID-19 in a supervised manner and to employ modern developments in NLP to develop techniques to help the community in various tasks.
128
+
129
+ § 3.3 CLUSTER MAPPING
130
+
131
+ A sample of the cluster mapping results is shown in Table 2. We take some example queries, i.e., sub-tasks provided with the Kaggle competition, and find their best-matching clusters via the procedure explained in Section 2.3. The results for all the sub-tasks are being made available.
132
+
133
+ § 3.4 RETRIEVAL RESULTS
134
+
135
+ A sample of the results of the retrieval system for the sub-tasks provided with the Kaggle competition is shown in Table 3. Consider the query sub-task: Approaches to evaluate risk for enhanced disease after vaccination, for which we find the best-matching clusters to be
136
+
137
+ 1. Studies: Vaccine Development
138
+
139
+ 2. Spread & Transmission of Viral Infections
140
+
141
+ From these clusters, the retrieval system finds the articles best matching the sub-task, as explained in Section 2.4:
142
+
143
+ 1. Vaccines and Vaccination Practices: Key to Sustainable Animal Production.
144
+
145
+ 2. Canine Vaccination
146
+
147
+ 3. Progress in Respiratory Virus Vaccine Development.
148
+
149
+ § 3.5 DISCUSSION ON EVALUATION
150
+
151
+ Finally, we would like to address issues around the evaluation and applicability of this work. Since no ground truth data on articles matching the queries was provided, it was not possible to evaluate the system quantitatively. We thus have no quantitative way to show the superiority of our methods, nor do we claim any. In fact, our motivation was to quickly prototype a retrieval system using modern advances in NLP such as contextual embeddings, e.g., BERT. We have used human intervention throughout this process, both while building the model and for limited evaluation. We also propose potential evaluation methods for this situation.
152
+
153
+ We performed unsupervised clustering of the articles, the evaluation of which is inherently difficult; e.g., clustering is in the eye of the beholder (Estivill-Castro, 2002). We do provide a qualitative study of the clusters by confirming that, for most clusters, articles within a cluster are 'more' similar to each other than to those in other clusters. It is possible to compute statistics like the Silhouette coefficient, gap statistic, etc. that provide a quantitative evaluation, but these statistics are often not useful, and which of them should be used is not obvious; see, e.g., Clustervision (Kwon et al., 2018). We also provide names/tags for the clusters based on finding the top papers in each cluster in terms of their PageRank values in a small graph and the keywords that figure prominently in these documents. Furthermore, we evaluated the tags by looking inside the clusters and comparing the papers against the proposed tags. Topic modeling, e.g., LDA, could be another approach to find tags for the articles and thus for the clusters. Our approach is much simpler and is also computationally efficient.
154
+
155
+ The information retrieval system that we proposed here works by finding articles that are best matched to a query. We manually investigate the results for a set of queries, i.e., sub-tasks in the Kaggle competition. First, the mapping of the queries to the clusters is done; then, the best-matching documents are returned.
156
+
157
+ For the QA system, we provide short, precise answers to the questions. We make no claim on the correctness of the answers, and only restrict ourselves to extracting the answers from the papers. Note that different papers may report different answers to the same question.
+
+ | Cluster | Example Papers | Journal | Published |
+ | --- | --- | --- | --- |
+ | Travel & Mass Gathering | 1. Mass gathering and globalization of respiratory pathogens during 2013 Hajj | Clinical Microbiology and Infection | 2015-06-30 |
+ | | 2. Travel implications of emerging coronavirus SARS and MERS-CoV | Travel Medicine & Infectious Disease | 2014-10-31 |
+ | | 3. Respiratory tract infections among French Hajj pilgrims from 2014 to 2017 | Sci Rep | 2019-11-28 |
+ | Studies: Vaccine Development | 1. Immunoinformatics and Vaccine Development: An Overview | Immunotargets Ther | 2020-02-26 |
+ | | 2. Immunization recommendations and safety & immunogenicity on the delayed vaccination of non-national immunization program for the coronavirus disease 2019 in China | Chinese Journal of Pediatrics | 2020-02-27 |
+ | Spread and Transmission of Viral Infections | 1. Prediction of COVID-19 Spreading Profiles in South Korea, Italy and Iran by Data-Driven Coding | medRxiv | 2020-03-10 |
+ | | 2. Importation and Human-to-Human Transmission of a Novel Coronavirus in Vietnam | New England Journal of Medicine | 2020-02-27 |
+ | | 3. Temperature significant change COVID-19 Transmission in 429 cities | - | 2020-02-25 |
+ | Impacts on Pregnancy | 1. From mice to women: the conundrum of immunity to infection during pregnancy | Journal of Reproductive Immunology | 2013-03-31 |
+ | | 2. Influenza and pneumonia in pregnancy | Clinics in Perinatology | 2005-09-30 |
+ | | 3. Pregnancy and perinatal outcomes of women with SARS | American Journal of Obstetrics and Gynecology | 2004-07-31 |
+ | Impact of Social Media | 1. Social media engagement analysis of U.S. Federal health agencies on Facebook | BMC Med Inform Decis Mak | 2017-04-21 |
+ | | 2. Social Media as a Sensor of Air Quality and Public Response in China | J Med Internet Res | 2015-03-26 |
+ | | 3. Scoping Review on Search Queries and Social Media for Disease Surveillance: A Chronology of Innovation | J Med Internet Res | 2013-07-18 |
+
+ Table 1: Sample of clusters and corresponding papers.
205
+
206
+ | Sub-task | Clusters | No. of Documents |
+ | --- | --- | --- |
+ | Approaches to evaluate risk for enhanced disease after vaccination | 1. Studies: Vaccine Development | 638 |
+ | | 2. Spread & Transmission of Viral Infections | 835 |
+ | Seasonality of Transmission | 1. Seasonality of Viral Infections | 138 |
+ | | 2. Spread & Transmission of Viral Infections | 835 |
+ | Age-adjusted mortality data for Acute Respiratory Distress Syndrome (ARDS) with or without other organ failure, particularly for viral etiologies | 1. Viral Infections - Studies | 3,347 |
+ | | 2. Hospital Emergency Management | 824 |
+ | | 3. Severe Pneumonia | 567 |
+
+ Table 2: Sub-tasks and their corresponding clusters
228
+
229
+ | Sub-task | Top 3 Sentences | Title of the Paper | Journal |
+ | --- | --- | --- | --- |
+ | Approaches to evaluate risk for enhanced disease after vaccination | 1. Cattle receive many vaccinations starting after 3 months of age after maternal immunity no longer interferes with vaccination by neutralizing the vaccine virus | Vaccines and Vaccination Practices: Key to Sustainable Animal Production | Encyclopedia of Agriculture and Food Systems |
+ | | 2. Although protection against most agents develops after routine vaccination programs, vaccines against some agents such as herpesvirus or Reovirus are not available. | Canine Vaccination | Veterinary Clinics of North America: Small Animal Practice |
+ | | 3. A realistic goal of immunization has to be a reduction of severe disease rather than induction of sterilizing immunity, similar to what has been achieved with rotavirus vaccines | Progress in Respiratory Virus Vaccine Development | Seminars in Respiratory and Critical Care Medicine |
+ | Co-infections (determine whether co-existing respiratory/viral infections make the virus more transmissible or virulent) and other co-morbidities | 1. The September epidemic of asthma - Observational studies have also been used to investigate the association of respiratory viruses with asthma morbidity | The Impact of Respiratory Viral Infection on Wheezing Illnesses & Asthma Exacerbations | Immunology and Allergy Clinics of North America |
+ | | 2. Their relatively short incubation times and efficient transmission via small droplets among comorbid patients highlight the need for better understanding of respiratory viral infections in hospital settings. | Laboratory-based surveillance of hospital-acquired respiratory virus infection in a tertiary care hospital | American Journal of Infection Control |
+ | | 3. However, such studies could not take into account possible episodes of mild or moderate illness that did not require inpatient medical care and could not address whether asymptomatic community spread played a role in the 2003 epidemic. | SARS-CoV Antibody Prevalence in all Hong Kong Patient Contacts | Emerg Infect Dis |
+
+ Table 3: Sub-tasks & corresponding articles returned by the system
254
+
255
+ | Query | Answers | Confidence |
+ | --- | --- | --- |
+ | Transmission Risks | 1. High Community Prevalence | 99.92 |
+ | | 2. Viral Infections | 99.84 |
+ | Animal Host to Human | 1. Virus | 99.79 |
+ | | 2. Pediculus lice | 99.64 |
+ | | 3. Anthropods | 98.974 |
+ | Risk Reduction Strategies | 1. Hygiene Measures | 98.235 |
+ | | 2. Antioxidant Vitamin Supplements | 97.746 |
+ | | 3. Quarantine | 97.471 |
+ | What are neonates' risks? | 1. Cardiac failure | 96.925 |
+ | | 2. Diarrhoea | 96.778 |
+ | | 3. Allergic Disorders | 96.424 |
+ | | 4. Serious Illness or Death | 95.439 |
+ | What are extrapulmonary manifestations of COVID-19? | 1. Orbital sinus bleeding | 92.95 |
+ | | 2. Invasive Devices | 90.21 |
+ | Coronavirus Survival | 1. 3 hours | 97.753 |
+ | | 2. 4 days on surfaces | 97.271 |
+
+ Table 4: Questions and Answers with Confidence Score
310
+
311
+ § 3.6 POSSIBLE EVALUATION
312
+
313
+ In the absence of ground truth, a possible evaluation of the retrieval system could be to perform a user study on the relevance of the results to the query. This means measuring how relevant the results are to a query as ascertained by a set of unbiased users, e.g., via Mechanical Turk. If we show the titles and abstracts of the top-k articles to the subjects, we can calculate the average number of articles marked relevant by them over a set of queries. This can be loosely interpreted as an estimate of the Precision@k of the system. Since it is not known how many relevant articles there are in the corpus, due to the lack of a notion of relevance, such an evaluation cannot estimate recall. It must be noted that such estimation is far from perfect, since the variance across users and queries is not factored in. Careful estimation of the sample size is another point of contention. At the least, such a study can help us reach a consensus notion of relevance and possibly build a small set of labeled data. Due to lack of time and complete clarity on the procedure, we have not performed an evaluation of this kind, and restricted ourselves to our team members investigating the results.
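+ As an illustration, such a user study would yield a rough Precision@k estimate by averaging the fraction of results marked relevant over raters and queries:
+
+ ```python
+ def precision_at_k(judgments):
+     """judgments: one list of 0/1 relevance marks per (rater, query) for the top-k results."""
+     return sum(sum(marks) / len(marks) for marks in judgments) / len(judgments)
+
+ # Two raters each judging the top-5 results of one query (illustrative values):
+ print(precision_at_k([[1, 1, 0, 1, 0], [1, 0, 0, 1, 1]]))  # 0.6
+ ```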
314
+
315
+ § 3.7 FURTHER
316
+
317
+ We want to highlight that in various components of this system, we used human intervention to label data and to resolve some ambiguities. We are releasing these labeled pieces along with this paper. We hope this brings us closer to constructing labelled data for various tasks, e.g., classification of articles and queries into cluster categories, entity recognition via top keywords, information retrieval and extraction, and a QA system.
318
+
319
+ § 4 CONCLUSION
320
+
321
+ In this work, we presented the problem of building an information retrieval system for scientific papers on COVID-19. This system, based on network analytical methods and modern developments in contextual word embeddings, e.g., BERT, extracts articles and the sections therein that are relevant to a given query. We used human intervention in attempts to attach interpretable labels to the data, e.g., articles and queries. We also discussed challenges and possible avenues for evaluating such a system in the absence of ground truth data.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/XOkm8xdns5R/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,225 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # CODA-19: Reliably Annotating Research Aspects on 10,000+ CORD-19 Abstracts Using a Non-Expert Crowd
2
+
3
+ Ting-Hao (Kenneth) Huang ${}^{1}$ , Chieh-Yang Huang ${}^{1}$ , Chien-Kuang Cornelia Ding ${}^{2}$ , Yen-Chia Hsu ${}^{3}$ , C. Lee Giles ${}^{1}$
4
+
5
+ ${}^{1}$ Pennsylvania State University, University Park, PA, USA
6
+
7
+ \{txh710, chiehyang, clg20\}@psu.edu
8
+
9
+ ${}^{2}$ University of California, San Francisco, CA, USA. Cornelia.Ding@ucsf.edu
10
+
11
+ ${}^{3}$ Carnegie Mellon University, Pittsburgh, PA, USA. yenchiah@andrew.cmu.edu
12
+
13
+ ## Abstract
14
+
15
+ This paper introduces CODA-19${}^{1}$, a human-annotated dataset that codes the Background, Purpose, Method, Finding/Contribution, and Other sections of 10,966 English abstracts in the COVID-19 Open Research Dataset. CODA-19 was created by 248 crowd workers from Amazon Mechanical Turk within 10 days, achieving a label quality comparable to that of experts. Each abstract was annotated by nine different workers, and the final labels were obtained by majority vote. The inter-annotator agreement (Cohen's kappa) between the crowd and the biomedical expert (0.741) is comparable to inter-expert agreement (0.788). CODA-19's labels have an accuracy of 82.2% when compared to the biomedical expert's labels, while the accuracy between experts was 85.0%. Reliable human annotations help scientists to understand the rapidly accelerating coronavirus literature and also serve as the battery of AI/NLP research, but obtaining expert annotations can be slow. We demonstrated that a non-expert crowd can be rapidly employed at scale to join the fight against COVID-19.
16
+
17
+ ## 1 Introduction
18
+
19
+ Just as COVID-19 is rapidly spreading worldwide, the rapid acceleration in new coronavirus literature makes it hard to keep up with. Researchers have thus teamed up with the White House to release the COVID-19 Open Research Dataset (CORD- 19) (Wang et al., 2020), containing over 59,000 related scholarly articles (as of May 1, 2020). The Open Research Dataset Challenge has also been launched on Kaggle to encourage researchers to use cutting-edge techniques to gain new insights from these papers. However, it often requires large-scale human annotations for automated language understanding, relation extraction, and question answering to reach good performance levels. Producing such annotations for thousands of papers can be a prolonged process if we only employ expert annotators, whose availability is limited.
20
+
21
+ ![01963db8-96d3-7aed-94e1-1e4c10135a2f_0_856_616_603_723_0.jpg](images/01963db8-96d3-7aed-94e1-1e4c10135a2f_0_856_616_603_723_0.jpg)
22
+
23
+ Figure 1: An example of the final crowd annotation for the abstract of (Hubbs et al., 2019).
24
+
25
+ Data sparsity is one of the challenges for text mining in the biomedical domain because text annotations on scholarly articles were mainly produced by small groups of experts. For example, two researchers manually created the ACL RD-TEC 2.0, a dataset that contains 300 scientific abstracts (QasemiZadeh and Schumann, 2016); a group of annotators "with rich experience in biomedical content curation" created MedMentions, a corpus containing 4,000 abstracts (Mohan and Li, 2019); and several datasets used in biomedical NLP shared tasks were manually created by the organizers and/or their students, such as the ScienceIE in SemEval'17 (Augenstein et al., 2017) and Relation Extraction in SemEval'18 (Gábor et al., 2018). Obtaining expert annotations can be too slow to respond to COVID-19, so we explore an alternative approach: using non-expert crowds, such as workers on Amazon Mechanical Turk (MTurk), to produce high-quality, useful annotations for thousands of scientific papers.
26
+
27
+ ---
28
+
29
+ ${}^{1}$ COVID-19 Research Aspect Dataset (CODA-19): https://github.com/windx0303/CODA-19
30
+
31
+ ---
32
+
33
+ This paper introduces CODA-19, the COVID- 19 Research Aspect Dataset, presenting the first outcome of our exploration in using non-expert crowds for large-scale scholarly article annotation. CODA-19 contains 10,966 abstracts randomly selected from CORD-19. Each abstract was segmented into sentences, which were further divided into one or more shorter text fragments. All 168,286 text fragments in CODA-19 were labeled with a "research aspect," i.e., Background, Purpose, Method, Finding/Contribution, or Other. This annotation scheme was adapted from SOLVENT (Chan et al., 2018), with minor changes.
34
+
35
+ In our project, 248 crowd workers from MTurk were recruited and annotated the whole CODA-19 within ten days. ${}^{2}$ Each abstract was annotated by nine different workers. We aggregated the crowd labels for each text segment using majority voting.
36
+
37
+ The resulting crowd labels had a label accuracy of 82% when compared against the expert labels on 129 abstracts. The inter-annotator agreement (Cohen's kappa) was 0.741 between the crowd labels and the expert labels, while it was 0.788 between two experts. We also established several classification baselines, showing the feasibility of automating such annotation tasks.
38
+
39
+ ## 2 Annotation Scheme
40
+
41
+ CODA-19 uses a five-class annotation scheme to denote research aspects in scientific articles: Background, Purpose, Method, Finding/Contribution, or Other. Table 1 shows the full annotation guidelines we developed to instruct workers. We updated and expanded this guideline daily during the annotation process to address workers' questions and feedback.
42
+
43
+ This scheme was adapted from SOLVENT (Chan et al., 2018), with three changes. First, we added an "Other" category. Articles in CORD-19 are broad and diverse (Colavizza et al., 2020), so it is unrealistic to govern all cases with only four categories. We are also aware that CORD-19's data came with occasional formatting or segmenting errors. These cases were also to be put into the "Other" category. Second, we replaced the "Mechanism" category with "Method." Chan et al. created SOLVENT with the aim of discovering the analogies between research papers at scale. Our goal was to better understand the contribution of each paper, so we decided to use a more general word, "Method," to include the research methods and procedures that cannot be characterized as "Mechanisms." Also, biomedical literature widely used the word "mechanism," which could also be confusing to workers. Third, we modified the name "Finding" to "Finding/Contribution" to allow broader contributions that are not usually viewed as "findings." Our scheme is also similar to that of DISA (Huang and Chen, 2017), which has an additional "Conclusion" category.
44
+
45
+ We selected this scheme because it balances the richness of information and the difficulty level for workers to annotate. We are aware of the long history of research (Kilicoglu, 2018) on composing structured abstracts (Hartley, 2004), identifying argumentative zones (Teufel et al., 1999; Mizuta et al., 2006; Liakata et al., 2010), analyzing scientific discourse (de Waard and Maat, 2012; Dasigi et al., 2017; Banerjee et al., 2020), supporting paper writing (Wang et al., 2019), and representing papers to reduce information overload (de Waard et al., 2009). However, most of these schemes assumed expert annotators rather than crowd workers. We eventually narrowed our focus down to two annotation schemes: SOLVENT and the "Information Type" (Focus, Polarity, Certainty, Evidence, Trend) proposed by Wilbur et al. (2006). SOLVENT is easier to annotate and has been tested with workers from MTurk and Upwork, while Wilbur's scheme is informative and specialized for biomedical articles. We implemented annotation interfaces for both schemes and launched a few tasks on MTurk for testing. Workers accomplished the SOLVENT tasks much faster with reasonable label accuracy, while only a few workers accomplished the Information Type annotation task. Therefore, we decided to adapt the SOLVENT scheme.
46
+
47
+ ---
48
+
49
+ ${}^{2}$ From April 19, 2020 to April 29, 2020, including the time for worker training and the post-task survey.
50
+
51
+ ---
52
+
53
+ <table><tr><td>Aspect</td><td>Annotation Guideline</td></tr><tr><td>Background</td><td>"Background" text segments answer one or more of these questions: - Why is this problem important? - What relevant works have been created before? - What is still missing in the previous works? - What are the high-level research questions? - How might this help other research or researchers?</td></tr><tr><td>Purpose</td><td>"Purpose" text segments answer one or more of these questions: - What specific things do the researchers want to do? - What specific knowledge do the researchers want to gain? - What specific hypothesis do the researchers want to test?</td></tr><tr><td>$\mathbf{{Method}}$</td><td>"Method" text segments answer one or more of these questions: - How did the researchers do the work or find what they sought? - What are the procedures and steps of the research?</td></tr><tr><td>Finding/ Contribution</td><td>"Finding/Contribution" text segments answer one or more of these questions: - What did the researchers find out? - Did the proposed methods work? - Did the thing behave as the researchers expected?</td></tr><tr><td>Other</td><td>- Text segments that do not fit into any of the four categories above. - Text segments that are not part of the article. - Text segments that are not in English. - Text segments that contain only reference marks (e.g., "[1,2,3,4,5") or dates (e.g., "April 20, 2008"). - Captions for figures and tables (e.g. "Figure 1: Experimental Result of ...") - Formatting errors. - Text segments the annotator does not know or is not sure about.</td></tr></table>
54
+
55
+ Table 1: CODA-19's annotation guideline for crowd workers.
56
+
57
+ ## 3 CODA-19 Dataset Construction
58
+
59
+ CODA-19 has 10,966 abstracts that contain a total of 2,703,174 tokens and 103,978 sentences, which were divided into 168,286 segments. The data is released as an 80/10/10 train/dev/test split.
60
+
61
+ ### 3.1 Data Preparation
62
+
63
+ We used Stanford CoreNLP (Manning et al., 2014) to tokenize and segment sentences for all the abstracts in CORD-19. We further used comma (,), semicolon (;), and period (.) to split each sentence into shorter fragments, where a fragment has no fewer than six tokens (including punctuation marks) and has no orphan parentheses.
64
+
65
+ As of April 15, 2020, 29,306 articles in CORD-19 had a non-empty abstract. An average abstract had 9.73 sentences (SD = 8.44), which were further divided into 15.75 text segments (SD = 13.26). Each abstract had 252.36 tokens (SD = 192.89) on average. We filtered out the 538 (1.84%) abstracts with only one sentence because many of them had formatting errors. We also removed the 145 (0.49%) abstracts that had more than 1,200 tokens to keep the working time for each task under five minutes (see Section 3.3). We randomly selected 11,000 abstracts from the remaining data for annotation. During the annotation process, workers informed us that a few articles were not in English. We identified these automatically using langdetect${}^{3}$ and excluded them.
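+ The fragment-splitting rule can be sketched as follows; this is a simplified illustration, and the exact CODA-19 preprocessing may handle edge cases differently:
+
+ ```python
+ def split_into_fragments(tokens, min_len=6):
+     """Split a tokenized sentence at , ; . boundaries into fragments of at least
+     `min_len` tokens (punctuation included) with balanced parentheses."""
+     fragments, current = [], []
+     for token in tokens:
+         current.append(token)
+         at_boundary = token in {",", ";", "."}
+         long_enough = len(current) >= min_len
+         balanced = current.count("(") == current.count(")")
+         if at_boundary and long_enough and balanced:
+             fragments.append(current)
+             current = []
+     if current:  # attach any short leftover tail to the previous fragment
+         if fragments:
+             fragments[-1].extend(current)
+         else:
+             fragments.append(current)
+     return fragments
+ ```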
66
+
67
+ ### 3.2 Interface Design
68
+
69
+ Figure 2 shows the worker interface, which we designed to guide workers to read and label all the text segments in an abstract. The interface showed the instruction on the top (Figure 2a) and presented the task in three steps: In Step 1, the worker was instructed to spend ten seconds to take a quick glance at the abstract. The goal was to get a high-level sense of the topic rather than to fully understand the abstract. In Step 2, we showed the main annotation interface (Figure 2b), where the worker can go through each text segment and select the most appropriate category for each segment one by one. In Step 3, the worker can review the labeled text segments (Figure 2c) and go back to Step 2 to fix any problems.
70
+
71
+ ### 3.3 Annotation Procedure
72
+
73
+ Worker Training and Recruitment We first created a qualification Human Intelligence Task (HIT) to recruit workers on MTurk (\$1/HIT). The workers needed to watch a five-minute video to learn the scheme, go through an interactive tutorial to learn the interface, and sign a consent form to obtain the qualification. We granted custom qualifications to 400 workers who accomplished the qualification HIT. Only the workers with this qualification could do our tasks. ${}^{4}$
74
+
75
+ ---
76
+
77
+ ${}^{3}$ langdetect: https://github.com/Mimino666/langdetect
78
+
79
+ ---
80
+
81
+ ![01963db8-96d3-7aed-94e1-1e4c10135a2f_3_195_174_1264_638_0.jpg](images/01963db8-96d3-7aed-94e1-1e4c10135a2f_3_195_174_1264_638_0.jpg)
82
+
83
+ Figure 2: The worker interface used to construct CODA-19.
84
+
85
+ Posting Tasks in Smaller Batches We divided 11,000 abstracts into smaller batches, where each batch has no more than 1,000 abstracts. Each abstract forms a single HIT. We recruited nine different workers through nine assignments to label each abstract. Our strategy was to post one batch at a time. When a batch was finished, we assessed its data quality, sent feedback to workers to guide them, or blocked workers who constantly had low accuracy before proceeding with the next batch.
86
+
87
+ Worker Wage and Total Cost We aimed to pay an hourly wage of \$10. The working time for an abstract was estimated using the average reading speed of English native speakers, i.e., 200-300 words per minute (Siegenthaler et al., 2012). For an abstract, we rounded up (#tokens/250) to an integer as the estimated working time in minutes and paid (\$0.05 + Estimated Working Minutes × \$0.17) for it. As a result, 59.49% of our HITs were priced at \$0.22, 36.41% at \$0.39, 2.74% at \$0.56, 0.81% at \$0.73, and 0.55% at \$0.90. We posted nine assignments per HIT. Adding the 20% MTurk fee, coding each abstract (using nine workers) cost \$3.21 on average.
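+ The pricing rule amounts to the following small computation (a sketch that reproduces the reward tiers above):
+
+ ```python
+ import math
+
+ def hit_reward(n_tokens, base=0.05, per_minute=0.17, words_per_minute=250):
+     """Estimated reading time, rounded up to whole minutes, converted into a HIT reward."""
+     minutes = math.ceil(n_tokens / words_per_minute)
+     return round(base + minutes * per_minute, 2)
+
+ print(hit_reward(240))  # 1 minute  -> 0.22
+ print(hit_reward(600))  # 3 minutes -> 0.56
+ ```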
88
+
89
+ ### 3.4 Label Aggregation
90
+
91
+ The final labels in CODA-19 were obtained by majority voting over crowd labels, excluding the labels from blocked workers. For each batch of HITs, we manually examined the labels from workers who frequently disagreed with the majority-voted labels (Section 3.3). If a worker had abnormally low accuracy or was apparently spamming, we retracted the worker's qualification to prevent him/her from taking future tasks. We excluded the labels from these removed workers when aggregating the final labels. Note that there can be ties when two or more aspects received the same highest number of votes (e.g., $4/4/1$ or $3/3/3$ ). We resolved ties by using the following tiebreakers, in order: Finding, Method, Purpose, Background, Other.
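+ The aggregation rule can be sketched as follows (an illustration of the majority vote with the fixed tie-breaking order):
+
+ ```python
+ from collections import Counter
+
+ # Tie-breaking priority: Finding > Method > Purpose > Background > Other.
+ TIEBREAK = ["Finding", "Method", "Purpose", "Background", "Other"]
+
+ def aggregate_labels(worker_labels):
+     """Majority vote over one segment's crowd labels; ties resolved by the priority order."""
+     counts = Counter(worker_labels)
+     best = max(counts.values())
+     tied = [label for label, n in counts.items() if n == best]
+     return min(tied, key=TIEBREAK.index)
+
+ print(aggregate_labels(["Finding", "Method", "Finding", "Background"]))  # Finding
+ print(aggregate_labels(["Method", "Purpose", "Method", "Purpose"]))      # Method (tie broken)
+ ```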
92
+
93
+ ## 4 Data Quality Assessment
94
+
95
+ We worked with a biomedical expert and a computer scientist to assess label quality; both experts are co-authors of this paper. The biomedical expert (the "Bio" Expert in Table 2) is an MD and also a PhD in Genetics and Genomics. She is now a resident physician in pathology at the University of California, San Francisco. The other expert (the "CS" Expert in Table 2) has a PhD in Computer Science and is currently a Project Scientist at Carnegie Mellon University.
96
+
97
+ ---
98
+
99
+ ${}^{4}$ Four built-in MTurk qualifications were also used: Locale (US Only), HIT Approval Rate ($\geq 98\%$), Number of Approved HITs ($\geq 3000$), and the Adult Content Qualification.
100
+
101
+ ---
102
+
103
+ <table><tr><td rowspan="2">Eval. Label</td><td rowspan="2">Gold Label</td><td colspan="4">Background</td><td colspan="3">Purpose</td><td colspan="3">$\mathbf{{Method}}$</td><td colspan="3">Finding</td><td colspan="2">Other</td><td rowspan="2">acc</td><td rowspan="2">kappa</td></tr><tr><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>Crowd</td><td>Bio</td><td>.827</td><td>.911</td><td>.867</td><td>.427</td><td>.662</td><td>.519</td><td>.783</td><td>.710</td><td>.744</td><td>.874</td><td>.838</td><td>.856</td><td>.986</td><td>.609</td><td>.753</td><td>.822</td><td>.741</td></tr><tr><td>Crowd</td><td>CS</td><td>.846</td><td>.883</td><td>.864</td><td>.700</td><td>.611</td><td>.653</td><td>.818</td><td>.633</td><td>.714</td><td>.800</td><td>.931</td><td>.860</td><td>.986</td><td>.619</td><td>.761</td><td>.821</td><td>.745</td></tr><tr><td>CS</td><td>Bio</td><td>.915</td><td>.966</td><td>.940</td><td>.421</td><td>.746</td><td>.538</td><td>.670</td><td>.785</td><td>.723</td><td>.958</td><td>.789</td><td>.865</td><td>.867</td><td>.852</td><td>.860</td><td>.850</td><td>.788</td></tr></table>
104
+
105
+ Table 2: Crowd performance using both Bio Expert and CS Expert as the gold standard. CODA-19's labels have an accuracy of 0.82 and a kappa of 0.74 , when compared against two experts' labels. It is noteworthy that when we compared labels between two experts, the accuracy (0.850) and kappa (0.788) were only slightly higher.
106
+
107
+ Both experts annotated the same 129 abstracts randomly selected from CODA-19. The experts used the same interface as that of the workers (Figure 2). The inter-annotator agreement (Cohen's kappa) between the two experts was 0.788 . Table 2 shows the aggregated crowd label's accuracy, along with the precision, recall, and F1-score of each class. CODA-19's labels have an accuracy of 0.82 and a kappa of 0.74 when compared against the two experts' labels. It is noteworthy that when we compared labels between the two experts, the accuracy (0.850) and kappa (0.788) were only slightly higher. The crowd workers performed best in labeling "Background" and "Finding," and they had nearly perfect precision for the "Other" category. Figure 3 shows the normalized confusion matrix for the aggregated crowd labels versus the biomedical expert's labels. Many "Purpose" segments were mislabeled as "Background," which might indicate more ambiguous cases between these two categories. During the annotation period, we received several emails from workers asking about the distinctions between these two aspects. For example, do "potential applications of the proposed work" count as "Background" or "Purpose"?
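+ For reference, the agreement statistics reported here can be computed directly from two annotators' segment-level label lists, e.g. with scikit-learn; the label lists below are illustrative:
+
+ ```python
+ from sklearn.metrics import cohen_kappa_score
+
+ crowd  = ["Background", "Purpose", "Method", "Finding", "Finding", "Other"]
+ expert = ["Background", "Background", "Method", "Finding", "Finding", "Other"]
+
+ print(cohen_kappa_score(crowd, expert))  # chance-corrected inter-annotator agreement
+ ```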
108
+
109
+ ## 5 Classification Baselines
110
+
111
+ We further examined machines' capacity for annotating research aspects automatically. Six baseline models were implemented: Linear SVM, Random Forest, CNN, LSTM, BERT, and SciBERT.
112
+
113
+ Data Preprocessing The tf-idf feature was used. We turned all words into lowercase and removed those with frequency lower than 5. The final tf-idf feature contained 16,775 dimensions. For deep-learning approaches, the vocabulary size was 16,135, where tokens with frequency lower than 5 were replaced by <UNK>. Sequences were padded with <PAD> if they contained fewer than 60 tokens and were truncated if they contained more than 60 tokens.
114
+
115
+ ![01963db8-96d3-7aed-94e1-1e4c10135a2f_4_857_562_588_596_0.jpg](images/01963db8-96d3-7aed-94e1-1e4c10135a2f_4_857_562_588_596_0.jpg)
116
+
117
+ Figure 3: The normalized confusion matrix for the CODA-19 labels versus the biomedical expert's labels.
118
+
119
+ Models Machine-learning approaches were implemented using Scikit-learn (Pedregosa et al., 2011) and deep-learning approaches were implemented using PyTorch (Paszke et al., 2019). The following are the training setups; a sketch of the classical tf-idf + SVM baseline follows the list.
120
+
121
+ - Linear SVM: We did a grid search over hyper-parameters and found that C = 1, tol = 0.001, and hinge loss yielded the best results.
122
+
123
+ - Random Forest: With the grid search, 150 estimators yielded the best result.
124
+
125
+ - CNN: The classic CNN (Kim, 2014) was implemented. Three kernel sizes (3, 4, 5) were used, each with 100 filters. The word embedding size was 256. A dropout rate of 0.3 and L2 regularization with weight $10^{-6}$ were used when training. We used the Adam optimizer, with a learning rate of 0.00005. The model was trained for 50 epochs and the one with the highest validation score was kept for testing.
126
+
127
+ <table><tr><td rowspan="2">Model</td><td colspan="3">Background</td><td colspan="3">Purpose</td><td colspan="3">Method</td><td colspan="3">Finding</td><td colspan="3">Other</td><td rowspan="2">Accuracy</td></tr><tr><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>#Sample</td><td/><td>5062</td><td/><td/><td>821</td><td/><td/><td>2140</td><td/><td/><td>6890</td><td/><td/><td>562</td><td/><td>15475</td></tr><tr><td>SVM</td><td>.658</td><td>.703</td><td>.680</td><td>.621</td><td>.446</td><td>.519</td><td>.615</td><td>.495</td><td>.549</td><td>.697</td><td>.729</td><td>.712</td><td>.729</td><td>.699</td><td>.714</td><td>.672</td></tr><tr><td>$\mathbf{{RF}}$</td><td>.671</td><td>.632</td><td>.651</td><td>.696</td><td>.365</td><td>.479</td><td>.716</td><td>.350</td><td>.471</td><td>.630</td><td>.787</td><td>.699</td><td>.674</td><td>.742</td><td>.706</td><td>.652</td></tr><tr><td>CNN</td><td>.649</td><td>.706</td><td>.676</td><td>.612</td><td>.512</td><td>.557</td><td>.596</td><td>.562</td><td>.579</td><td>.726</td><td>.702</td><td>.714</td><td>.743</td><td>.795</td><td>.768</td><td>.677</td></tr><tr><td>LSTM</td><td>.655</td><td>.706</td><td>.680</td><td>.700</td><td>.464</td><td>.558</td><td>.634</td><td>.508</td><td>.564</td><td>.700</td><td>.724</td><td>.711</td><td>.682</td><td>.770</td><td>.723</td><td>.676</td></tr><tr><td>BERT</td><td>.719</td><td>.759</td><td>.738</td><td>.585</td><td>.639</td><td>.611</td><td>.680</td><td>.612</td><td>.644</td><td>.777</td><td>.752</td><td>.764</td><td>.773</td><td>.874</td><td>.820</td><td>.733</td></tr><tr><td>SciBERT</td><td>.733</td><td>.768</td><td>.750</td><td>.616</td><td>.636</td><td>.626</td><td>.715</td><td>.636</td><td>.673</td><td>.783</td><td>.775</td><td>.779</td><td>.794</td><td>.852</td><td>.822</td><td>.749</td></tr></table>
128
+
129
+ Table 3: Baseline performance of automatic labeling using the crowd labels of CODA-19. SciBERT achieves the highest accuracy of 0.749 and outperforms the other models in every aspect.
130
+
131
+ - LSTM: We used 10 LSTM layers to encode the sequence. The encoded vector was then passed through a dense layer for classification. The word embedding size and LSTM hidden size were both 256. The rest of the hyper-parameters and training settings were the same as those of the CNN model.
132
+
133
+ - BERT: Hugging Face's implementation (Wolf et al., 2019) of the Pretrained BERT (Devlin et al., 2018) was used for fine-tuning. We fine-tuned the pretrained model with a learning rate of $3 \times 10^{-7}$ for 50 epochs. Early stopping was used when no improvement occurred in the validation accuracy for five consecutive epochs. The model with the highest validation score was kept for testing.
134
+
135
+ - SciBERT: Hugging Face's implementation (Wolf et al., 2019) of the Pretrained SciBERT (Beltagy et al., 2019) was used for fine-tuning. The fine-tuning setting is the same as that of the BERT model.
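+ As referenced above, here is a sketch of the classical tf-idf + linear SVM baseline with the reported hyper-parameters; it is a simplified pipeline, not the authors' exact code:
+
+ ```python
+ from sklearn.feature_extraction.text import TfidfVectorizer
+ from sklearn.metrics import classification_report
+ from sklearn.pipeline import make_pipeline
+ from sklearn.svm import LinearSVC
+
+ def run_svm_baseline(train_texts, train_labels, test_texts, test_labels):
+     """Lowercased tf-idf features (min frequency 5) fed into a linear SVM."""
+     model = make_pipeline(
+         TfidfVectorizer(lowercase=True, min_df=5),
+         LinearSVC(C=1, tol=0.001, loss="hinge"),
+     )
+     model.fit(train_texts, train_labels)
+     print(classification_report(test_labels, model.predict(test_texts), digits=3))
+ ```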
136
+
137
+ Result Table 3 shows the results for the six baseline models: SciBERT performed the best in overall accuracy. When looking at each aspect, all the models performed better in classifying "Background," "Finding," and "Other," while identifying "Purpose" and "Method" was more challenging.
138
+
139
+ ## 6 What's Next?
140
+
141
+ One obvious future direction is to improve classification performance. We evaluated the automatic labels against the biomedical expert's labels, and the SciBERT model achieved an accuracy of 0.774 and a Cohen's kappa of 0.667, indicating room for further improvement. Our baseline approaches did not use any contextual information or domain knowledge. We expect that the classification performance can be further boosted, allowing researchers to label future papers automatically.
142
+
143
+ How can these annotations help search and information extraction? Several search engines have been quickly developed and deployed. These engines allow users to navigate CORD-19 more efficiently and could potentially support decision-making. One motivation for spotting research aspects automatically is to help search and information extraction (Teufel et al., 1999). We have teamed up with the group who created CovidSeer${}^{5}$ to explore the possible uses of CODA-19 in such systems.
144
+
145
+ What other types of biomedical annotations can be crowdsourced? Many prior works that used crowd workers to annotate medical documents (Khare et al., 2016) focused on images (Heim et al., 2018) or named entities (e.g., medical terms (Mohan and Li, 2019), diseases (Good et al., 2014), or medicines (Abaho et al., 2019)). We will explore what other types of annotations can be created using non-expert workers.
146
+
147
+ ## Acknowledgments
148
+
149
+ This project is supported by the Huck Institutes of the Life Sciences' Coronavirus Research Seed Fund (CRSF) at Penn State University and the College of IST COVID-19 Seed Fund at Penn State University. We thank the crowd workers for participating in this project and providing useful feedback. We thank VoiceBunny Inc. for granting a 20% discount for the voiceover for the worker tutorial video to support projects relevant to COVID-19. We also thank Tiffany Knearem, Shih-Hong (Alan) Huang, Joseph Chee Chang, and Frank Ritter for the great discussion and useful feedback.
150
+
151
+ ---
152
+
153
+ ${}^{5}$ CovidSeer: https://covidseer.ist.psu.edu/
154
+
155
+ ---
156
+
157
+ ## References
158
+
159
+ Micheal Abaho, Danushka Bollegala, Paula Williamson, and Susanna Dodd. 2019. Correcting crowdsourced annotations to improve detection of outcome types in evidence based medicine. In CEUR Workshop Proceedings, volume 2429, pages 1-5.
160
+
161
+ Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, and Andrew McCallum. 2017. SemEval 2017 task 10: ScienceIE - extracting keyphrases and relations from scientific publications. arXiv preprint arXiv:1704.02853.
162
+
163
+ Soumya Banerjee, Debarshi Kumar Sanyal, Samiran Chattopadhyay, Plaban Kumar Bhowmick, and Parthapratim Das. 2020. Segmenting scientific abstracts into discourse categories: A deep learning-based approach for sparse labeled data. arXiv preprint arXiv:2005.05414.
164
+
165
+ Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: Pretrained language model for scientific text. In EMNLP.
166
+
167
+ Joel Chan, Joseph Chee Chang, Tom Hope, Dafna Shahaf, and Aniket Kittur. 2018. Solvent: A mixed initiative system for finding analogies between research papers. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW):1-21.
168
+
169
+ Giovanni Colavizza, Rodrigo Costas, Vincent A. Traag, Nees Jan van Eck, Thed van Leeuwen, and Ludo Waltman. 2020. A scientometric overview of CORD-19. bioRxiv.
170
+
171
+ Pradeep Dasigi, Gully APC Burns, Eduard Hovy, and Anita de Waard. 2017. Experiment segmentation in scientific discourse as clause-level structured prediction using recurrent neural networks. arXiv preprint arXiv:1702.05398.
172
+
173
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
174
+
175
+ Kata Gábor, Davide Buscaldi, Anne-Kathrin Schumann, Behrang QasemiZadeh, Haifa Zargayouna, and Thierry Charnois. 2018. Semeval-2018 task 7: Semantic relation extraction and classification in scientific papers. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 679-688.
176
+
177
+ Benjamin M Good, Max Nanis, Chunlei Wu, and Andrew I Su. 2014. Microtask crowdsourcing for disease mention annotation in pubmed abstracts. In Pacific Symposium on Biocomputing Co-Chairs, pages 282-293. World Scientific.
178
+
179
+ James Hartley. 2004. Current findings from research on structured abstracts. Journal of the Medical Library Association, 92(3):368.
180
+
181
+ Eric Heim, Tobias Roß, Alexander Seitel, Keno März, Bram Stieltjes, Matthias Eisenmann, Johannes Lebert, Jasmin Metzger, Gregor Sommer, Alexander W Sauter, et al. 2018. Large-scale medical image annotation with crowd-powered algorithms. Journal of Medical Imaging, 5(3):034002.
184
+
185
+ Hen-Hsen Huang and Hsin-Hsi Chen. 2017. Disa: A scientific writing advisor with deep information structure analysis. In IJCAI, pages 5229-5231.
186
+
187
+ Natalia B Hubbs, Mareena M Whisby-Pitts, and Jonathan L McMurry. 2019. Kinetic analysis of bacteriophage Sf6 binding to outer membrane protein A using whole virions. bioRxiv, page 509141.
188
+
189
+ Ritu Khare, Benjamin M Good, Robert Leaman, Andrew I Su, and Zhiyong Lu. 2016. Crowdsourcing in biomedicine: challenges and opportunities. Briefings in bioinformatics, 17(1):23-32.
190
+
191
+ Halil Kilicoglu. 2018. Biomedical text mining for research rigor and integrity: tasks, challenges, directions. Briefings in bioinformatics, 19(6):1400-1414.
192
+
193
+ Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.
194
+
195
+ Maria Liakata, Simone Teufel, Advaith Siddharthan, and Colin Batchelor. 2010. Corpora for the conceptualisation and zoning of scientific papers. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10).
196
+
197
+ Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60.
198
+
199
+ Yoko Mizuta, Anna Korhonen, Tony Mullen, and Nigel Collier. 2006. Zone analysis in biology articles as a basis for information extraction. International journal of medical informatics, 75(6):468-487.
200
+
201
+ Sunil Mohan and Donghui Li. 2019. MedMentions: a large biomedical corpus annotated with UMLS concepts. arXiv preprint arXiv:1902.09476.
202
+
203
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.
204
+
205
+ Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. 2011. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res., 12:2825-2830.
206
+
207
+ Behrang QasemiZadeh and Anne-Kathrin Schumann. 2016. The ACL RD-TEC 2.0: A language resource for evaluating term extraction and entity recognition methods. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1862-1868.
208
+
209
+ Eva Siegenthaler, Yves Bochud, Per Bergamin, and Pascal Wurtz. 2012. Reading on lcd vs e-ink displays: effects on fatigue and visual strain. Ophthalmic and Physiological Optics, 32(5):367-374.
210
+
211
+ Simone Teufel et al. 1999. Argumentative zoning: Information extraction from scientific text. Ph.D. thesis, Citeseer.
212
+
213
+ A. de Waard, S. Buckingham Shum, A. Carusi, J. Park, M. Samwald, and Á. Sándor. 2009. Hypotheses, evidence and relationships: The hyper approach for representing scientific knowledge claims. In Proceedings 8th International Semantic Web Conference, Workshop on Semantic Web Applications in Scientific Discourse. Lecture Notes in Computer Science, Springer Verlag: Berlin.
214
+
215
+ Anita de Waard and Henk Pander Maat. 2012. Verb form indicates discourse segment type in biological research papers: Experimental evidence. Journal of English for Academic Purposes, 11(4):357-366.
216
+
217
+ Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Darrin Eide, Kathryn Funk, Rodney Michael Kinney, Ziyang Liu, William Merrill, Paul Mooney, Dewey A. Murdick, Devvret Rishi, Jerry Sheehan, Zhihong Shen, Brandon Stilson, Alex D. Wade, Kuansan Wang, Christopher Wilhelm, Boya Xie, Douglas M. Raymond, Daniel S. Weld, Oren Etzioni, and Sebastian Kohlmeier. 2020. CORD-19: The COVID-19 open research dataset. ArXiv, abs/2004.10706.
218
+
219
+ Qingyun Wang, Lifu Huang, Zhiying Jiang, Kevin Knight, Heng Ji, Mohit Bansal, and Yi Luan. 2019. Paperrobot: Incremental draft generation of scientific ideas. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1980-1991.
220
+
221
+ W John Wilbur, Andrey Rzhetsky, and Hagit Shatkay. 2006. New directions in biomedical text annotation: definitions, guidelines and corpus construction. BMC bioinformatics, 7(1):356.
222
+
223
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/XOkm8xdns5R/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,198 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § CODA-19: RELIABLY ANNOTATING RESEARCH ASPECTS ON 10,000+ CORD-19 ABSTRACTS USING A NON-EXPERT CROWD
2
+
3
+ Ting-Hao (Kenneth) Huang ${}^{1}$ , Chieh-Yang Huang ${}^{1}$ , Chien-Kuang Cornelia Ding ${}^{2}$ , Yen-Chia Hsu ${}^{3}$ , C. Lee Giles ${}^{1}$
4
+
5
+ ${}^{1}$ Pennsylvania State University, University Park, PA, USA
6
+
7
+ {txh710, chiehyang, clg20}@psu.edu
8
+
9
+ ${}^{2}$ University of California, San Francisco, CA, USA. Cornelia.Ding@ucsf.edu
10
+
11
+ ${}^{3}$ Carnegie Mellon University, Pittsburgh, PA, USA. yenchiah@andrew.cmu.edu
12
+
13
+ § ABSTRACT
14
+
15
+ This paper introduces CODA-19${}^{1}$, a human-annotated dataset that codes the Background, Purpose, Method, Finding/Contribution, and Other sections of 10,966 English abstracts in the COVID-19 Open Research Dataset. CODA-19 was created by 248 crowd workers from Amazon Mechanical Turk within 10 days, achieving a label quality comparable to that of experts. Each abstract was annotated by nine different workers, and the final labels were obtained by majority vote. The inter-annotator agreement (Cohen's kappa) between the crowd and the biomedical expert (0.741) is comparable to inter-expert agreement (0.788). CODA-19's labels have an accuracy of 82.2% when compared to the biomedical expert's labels, while the accuracy between experts was 85.0%. Reliable human annotations help scientists to understand the rapidly accelerating coronavirus literature and also serve as the battery of AI/NLP research, but obtaining expert annotations can be slow. We demonstrated that a non-expert crowd can be rapidly employed at scale to join the fight against COVID-19.
16
+
17
+ § 1 INTRODUCTION
18
+
19
+ Just as COVID-19 is spreading rapidly worldwide, the coronavirus literature is also growing rapidly, making it hard to keep up with. Researchers have thus teamed up with the White House to release the COVID-19 Open Research Dataset (CORD-19) (Wang et al., 2020), containing over 59,000 related scholarly articles (as of May 1, 2020). The Open Research Dataset Challenge has also been launched on Kaggle to encourage researchers to use cutting-edge techniques to gain new insights from these papers. However, automated language understanding, relation extraction, and question answering often require large-scale human annotations to reach good performance levels. Producing such annotations for thousands of papers can be a prolonged process if we only employ expert annotators, whose availability is limited.
20
+
21
+ [Figure 1 graphic]
22
+
23
+ Figure 1: An example of the final crowd annotation for the abstract of (Hubbs et al., 2019).
24
+
25
+ Data sparsity is one of the challenges for text mining in the biomedical domain because text annotations on scholarly articles were mainly produced by small groups of experts. For example, two researchers manually created the ACL RD-TEC 2.0, a dataset that contains 300 scientific abstracts (QasemiZadeh and Schumann, 2016); a group of annotators "with rich experience in biomedical content curation" created MedMentions, a corpus containing 4,000 abstracts (Mohan and Li, 2019); and several datasets used in biomedical NLP shared tasks were manually created by the organizers and/or their students, such as ScienceIE in SemEval'17 (Augenstein et al., 2017) and Relation Extraction in SemEval'18 (Gábor et al., 2018). Obtaining expert annotations can be too slow to respond to COVID-19, so we explore an alternative approach: using non-expert crowds, such as workers on Amazon Mechanical Turk (MTurk), to produce high-quality, useful annotations for thousands of scientific papers.
26
+
27
+ ${}^{1}$ COVID-19 Research Aspect Dataset (CODA-19): https://github.com/windx0303/CODA-19
28
+
29
+ This paper introduces CODA-19, the COVID- 19 Research Aspect Dataset, presenting the first outcome of our exploration in using non-expert crowds for large-scale scholarly article annotation. CODA-19 contains 10,966 abstracts randomly selected from CORD-19. Each abstract was segmented into sentences, which were further divided into one or more shorter text fragments. All 168,286 text fragments in CODA-19 were labeled with a "research aspect," i.e., Background, Purpose, Method, Finding/Contribution, or Other. This annotation scheme was adapted from SOLVENT (Chan et al., 2018), with minor changes.
30
+
31
+ In our project, 248 crowd workers from MTurk were recruited and annotated the whole CODA-19 within ten days. ${}^{2}$ Each abstract was annotated by nine different workers. We aggregated the crowd labels for each text segment using majority voting.
32
+
33
+ The resulting crowd labels had a label accuracy of 82% when compared against the expert labels on 129 abstracts. The inter-annotator agreement (Cohen's kappa) was 0.741 between the crowd labels and the expert labels, while it was 0.788 between two experts. We also established several classification baselines, showing the feasibility of automating such annotation tasks.
34
+
35
+ § 2 ANNOTATION SCHEME
36
+
37
+ CODA-19 uses a five-class annotation scheme to denote research aspects in scientific articles: Background, Purpose, Method, Finding/Contribution, or Other. Table 1 shows the full annotation guidelines we developed to instruct workers. We updated and expanded this guideline daily during the annotation process to address workers' questions and feedback.
38
+
39
+ This scheme was adapted from SOLVENT (Chan et al., 2018), with three changes. First, we added an "Other" category. Articles in CORD-19 are broad and diverse (Colavizza et al., 2020), so it is unrealistic to govern all cases with only four categories. We are also aware that CORD-19's data came with occasional formatting or segmenting errors. These cases were also to be put into the "Other" category. Second, we replaced the "Mechanism" category with "Method." Chan et al. created SOLVENT with the aim of discovering the analogies between research papers at scale. Our goal was to better understand the contribution of each paper, so we decided to use a more general word, "Method," to include the research methods and procedures that cannot be characterized as "Mechanisms." Also, biomedical literature widely used the word "mechanism," which could also be confusing to workers. Third, we modified the name "Finding" to "Finding/Contribution" to allow broader contributions that are not usually viewed as "findings." Our scheme is also similar to that of DISA (Huang and Chen, 2017), which has an additional "Conclusion" category.
40
+
41
+ We selected this scheme because it balances the richness of information and the difficulty level for workers to annotate. We are aware of the long history of research (Kilicoglu, 2018) on composing structured abstracts (Hartley, 2004), identifying argumentative zones (Teufel et al., 1999; Mizuta et al., 2006; Liakata et al., 2010), analyzing scientific discourse (de Waard and Maat, 2012; Dasigi et al., 2017; Banerjee et al., 2020), supporting paper writing (Wang et al., 2019), and representing papers to reduce information overload (de Waard et al., 2009). However, most of these schemes assumed expert annotators rather than crowd workers. We eventually narrowed our focus down to two annotation schemes: SOLVENT and the "Information Type" (Focus, Polarity, Certainty, Evidence, Trend) proposed by Wilbur et al. (2006). SOLVENT is easier to annotate and has been tested with workers from MTurk and Upwork, while Wilbur's scheme is informative and specialized for biomedical articles. We implemented annotation interfaces for both schemes and launched a few tasks on MTurk for testing. Workers accomplished the SOLVENT tasks much faster with reasonable label accuracy, while only a few workers accomplished the Information Type annotation task. Therefore, we decided to adapt the SOLVENT scheme.
42
+
43
+ ${}^{2}$ From April 19, 2020 to April 29, 2020, including the time for worker training and post-task survey.
44
+
45
46
+
47
+ Aspect Annotation Guideline
48
+
49
50
+ Background "Background" text segments answer one or more of these questions: - Why is this problem important? - What relevant works have been created before? - What is still missing in the previous works? - What are the high-level research questions? - How might this help other research or researchers?
51
+
52
53
+ Purpose "Purpose" text segments answer one or more of these questions: - What specific things do the researchers want to do? - What specific knowledge do the researchers want to gain? - What specific hypothesis do the researchers want to test?
54
+
55
56
+ Method "Method" text segments answer one or more of these questions: - How did the researchers do the work or find what they sought? - What are the procedures and steps of the research?
57
+
58
59
+ Finding/Contribution "Finding/Contribution" text segments answer one or more of these questions: - What did the researchers find out? - Did the proposed methods work? - Did the thing behave as the researchers expected?
60
+
61
62
+ Other - Text segments that do not fit into any of the four categories above. - Text segments that are not part of the article. - Text segments that are not in English. - Text segments that contain only reference marks (e.g., "[1,2,3,4,5") or dates (e.g., "April 20, 2008"). - Captions for figures and tables (e.g. "Figure 1: Experimental Result of ...") - Formatting errors. - Text segments the annotator does not know or is not sure about.
63
+
64
65
+
66
+ Table 1: CODA-19's annotation guideline for crowd workers.
67
+
68
+ § 3 CODA-19 DATASET CONSTRUCTION
69
+
70
+ CODA-19 has 10,966 abstracts that contain a total of 2,703,174 tokens and 103,978 sentences, which were divided into 168,286 segments. The data is released as an 80/10/10 train/dev/test split.
71
+
72
+ § 3.1 DATA PREPARATION
73
+
74
+ We used Stanford CoreNLP (Manning et al., 2014) to tokenize and segment sentences for all the abstracts in CORD-19. We further used comma (,), semicolon (;), and period (.) to split each sentence into shorter fragments, where a fragment has no fewer than six tokens (including punctuation marks) and has no orphan parentheses.
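+
+ To make the fragment rule concrete, the sketch below reimplements it in plain Python on an already-tokenized sentence. It is an illustrative approximation: the actual preprocessing relied on Stanford CoreNLP for tokenization and sentence splitting, and the exact merging heuristics may differ.
+
+ ```python
+ # Sketch only: split a tokenized sentence at , ; . and merge pieces that are
+ # shorter than six tokens or contain an orphan parenthesis (assumed heuristics).
+ def split_into_fragments(tokens, min_len=6):
+     fragments, current = [], []
+     for tok in tokens:
+         current.append(tok)
+         if tok in {",", ";", "."}:
+             fragments.append(current)
+             current = []
+     if current:
+         fragments.append(current)
+     merged = []
+     for frag in fragments:
+         if merged and (len(merged[-1]) < min_len or
+                        merged[-1].count("(") != merged[-1].count(")")):
+             merged[-1].extend(frag)   # previous piece is too short or has an orphan "("
+         else:
+             merged.append(frag)
+     if len(merged) > 1 and len(merged[-1]) < min_len:
+         tail = merged.pop()
+         merged[-1].extend(tail)       # fold a short trailing piece back into its predecessor
+     return [" ".join(frag) for frag in merged]
+
+ tokens = "The patient cohort was recruited in March 2020 , and all samples were processed within 24 hours .".split()
+ print(split_into_fragments(tokens))   # two fragments, split at the comma
+ ```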
75
+
76
+ As of April 15, 2020, 29,306 articles in CORD-19 had a non-empty abstract. An average abstract had 9.73 sentences (SD = 8.44), which were further divided into 15.75 text segments (SD = 13.26). Each abstract had 252.36 tokens (SD = 192.89) on average. We filtered out the 538 (1.84%) abstracts with only one sentence because many of them had formatting errors. We also removed the 145 (0.49%) abstracts that had more than 1,200 tokens to keep the working time for each task under five minutes (see Section 3.3). We randomly selected 11,000 abstracts from the remaining data for annotation. During the annotation process, workers informed us that a few articles were not in English. We identified these automatically using langdetect${}^{3}$ and excluded them.
77
+
78
+ § 3.2 INTERFACE DESIGN
79
+
80
+ Figure 2 shows the worker interface, which we designed to guide workers to read and label all the text segments in an abstract. The interface showed the instruction on the top (Figure 2a) and presented the task in three steps: In Step 1, the worker was instructed to spend ten seconds to take a quick glance at the abstract. The goal was to get a high-level sense of the topic rather than to fully understand the abstract. In Step 2, we showed the main annotation interface (Figure 2b), where the worker can go through each text segment and select the most appropriate category for each segment one by one. In Step 3, the worker can review the labeled text segments (Figure 2c) and go back to Step 2 to fix any problems.
81
+
82
+ § 3.3 ANNOTATION PROCEDURE
83
+
84
+ Worker Training and Recruitment We first created a qualification Human Intelligence Task (HIT) to recruit workers on MTurk ($1/HIT). The workers needed to watch a five-minute video to learn the scheme, go through an interactive tutorial to learn the interface, and sign a consent form to obtain the qualification. We granted custom qualifications to 400 workers who accomplished the qualification HIT. Only the workers with this qualification could do our tasks. ${}^{4}$
85
+
86
+ ${}^{3}$ langdetect: https://github.com/Mimino666/langdetect
87
+
88
+ [Figure 2 graphic]
89
+
90
+ Figure 2: The worker interface used to construct CODA-19.
91
+
92
+ Posting Tasks in Smaller Batches We divided 11,000 abstracts into smaller batches, where each batch has no more than 1,000 abstracts. Each abstract forms a single HIT. We recruited nine different workers through nine assignments to label each abstract. Our strategy was to post one batch at a time. When a batch was finished, we assessed its data quality, sent feedback to workers to guide them, or blocked workers who constantly had low accuracy before proceeding with the next batch.
93
+
94
+ Worker Wage and Total Cost We aimed to pay an hourly wage of $10. The working time of an abstract was estimated by the average reading speed of English native speakers, i.e., 200-300 words per minute (Siegenthaler et al., 2012). For an abstract, we rounded up (#tokens / 250) to an integer as the estimated working time in minutes and paid ($0.05 + estimated working minutes × $0.17) for it. As a result, 59.49% of our HITs were priced at $0.22, 36.41% at $0.39, 2.74% at $0.56, 0.81% at $0.73, and 0.55% at $0.90. We posted nine assignments per HIT. Adding the 20% MTurk fee, coding each abstract (using nine workers) cost $3.21 on average.
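+
+ The per-HIT price follows directly from the formula above; the short sketch below (hypothetical helper name, not the authors' code) reproduces the listed price points from an abstract's token count.
+
+ ```python
+ import math
+
+ def hit_price(n_tokens, base=0.05, per_minute=0.17, wpm=250):
+     # Estimated working minutes = ceil(#tokens / 250), per the reading-speed assumption.
+     minutes = math.ceil(n_tokens / wpm)
+     return round(base + minutes * per_minute, 2)
+
+ print(hit_price(180))   # short abstract   -> 0.22
+ print(hit_price(252))   # average abstract -> 0.39
+ # Nine assignments plus the 20% MTurk fee give the per-abstract cost for this HIT;
+ # averaged over the whole mix of prices, this came to about $3.21 per abstract.
+ print(round(hit_price(252) * 9 * 1.2, 2))   # 4.21 for a two-minute abstract
+ ```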
95
+
96
+ § 3.4 LABEL AGGREGATION
97
+
98
+ The final labels in CODA-19 were obtained by majority voting over crowd labels, excluding the labels from blocked workers. For each batch of HITs, we manually examined the labels from workers who frequently disagreed with the majority-voted labels (Section 3.3). If a worker had abnormally low accuracy or was apparently spamming, we retracted the worker's qualification to prevent him/her from taking future tasks. We excluded the labels from these removed workers when aggregating the final labels. Note that there can be ties when two or more aspects received the same highest number of votes (e.g., 4/4/1 or 3/3/3). We resolved ties by using the following tiebreakers, in order: Finding, Method, Purpose, Background, Other.
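+
+ A minimal sketch of this aggregation step (illustrative only; label names abbreviated):
+
+ ```python
+ from collections import Counter
+
+ # Tie-breaking preference, from most to least preferred.
+ TIE_ORDER = ["finding", "method", "purpose", "background", "other"]
+
+ def aggregate(labels):
+     counts = Counter(labels)
+     top = max(counts.values())
+     tied = [label for label, n in counts.items() if n == top]
+     return min(tied, key=TIE_ORDER.index)   # majority label, ties broken by TIE_ORDER
+
+ print(aggregate(["finding"] * 5 + ["method"] * 4))                 # finding (clear majority)
+ print(aggregate(["method"] * 4 + ["background"] * 4 + ["other"]))  # method (4/4/1 tie)
+ ```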
99
+
100
+ § 4 DATA QUALITY ASSESSMENT
101
+
102
+ We worked with a biomedical expert and a computer scientist to assess label quality; both experts are co-authors of this paper. The biomedical expert (the "Bio" Expert in Table 2) is an MD and also a PhD in Genetics and Genomics. She is now a resident physician in pathology at the University of California, San Francisco. The other expert (the "CS" Expert in Table 2) has a PhD in Computer Science and is currently a Project Scientist at Carnegie Mellon University.
103
+
104
+ ${}^{4}$ Four built-in MTurk qualifications were also used: Locale (US Only), HIT Approval Rate (≥ 98%), Number of Approved HITs (≥ 3,000), and the Adult Content Qualification.
105
+
106
107
+
108
+ Eval. Label | Gold Label | Background (P R F1) | Purpose (P R F1) | Method (P R F1) | Finding (P R F1) | Other (P R F1) | acc | kappa
109
+
110
111
112
+
113
114
+ Crowd | Bio | .827 .911 .867 | .427 .662 .519 | .783 .710 .744 | .874 .838 .856 | .986 .609 .753 | .822 | .741
115
+
116
117
+ Crowd | CS | .846 .883 .864 | .700 .611 .653 | .818 .633 .714 | .800 .931 .860 | .986 .619 .761 | .821 | .745
118
+
119
120
+ CS | Bio | .915 .966 .940 | .421 .746 .538 | .670 .785 .723 | .958 .789 .865 | .867 .852 .860 | .850 | .788
121
+
122
123
+
124
+ Table 2: Crowd performance using both Bio Expert and CS Expert as the gold standard. CODA-19's labels have an accuracy of 0.82 and a kappa of 0.74, when compared against two experts' labels. It is noteworthy that when we compared labels between two experts, the accuracy (0.850) and kappa (0.788) were only slightly higher.
125
+
126
+ Both experts annotated the same 129 abstracts randomly selected from CODA-19. The experts used the same interface as that of the workers (Figure 2). The inter-annotator agreement (Cohen's kappa) between the two experts was 0.788 . Table 2 shows the aggregated crowd label's accuracy, along with the precision, recall, and F1-score of each class. CODA-19's labels have an accuracy of 0.82 and a kappa of 0.74 when compared against the two experts' labels. It is noteworthy that when we compared labels between the two experts, the accuracy (0.850) and kappa (0.788) were only slightly higher. The crowd workers performed best in labeling "Background" and "Finding," and they had nearly perfect precision for the "Other" category. Figure 3 shows the normalized confusion matrix for the aggregated crowd labels versus the biomedical expert's labels. Many "Purpose" segments were mislabeled as "Background," which might indicate more ambiguous cases between these two categories. During the annotation period, we received several emails from workers asking about the distinctions between these two aspects. For example, do "potential applications of the proposed work" count as "Background" or "Purpose"?
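+
+ The agreement figures in Table 2 can be reproduced with standard scikit-learn metrics; the toy labels below are placeholders, not the CODA-19 data.
+
+ ```python
+ from sklearn.metrics import accuracy_score, cohen_kappa_score
+
+ # One label per text segment, aligned between the expert and the aggregated crowd.
+ expert = ["background", "purpose", "method", "finding", "finding", "other"]
+ crowd  = ["background", "background", "method", "finding", "finding", "other"]
+
+ print(accuracy_score(expert, crowd))     # fraction of segments with matching labels
+ print(cohen_kappa_score(expert, crowd))  # chance-corrected inter-annotator agreement
+ ```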
127
+
128
+ § 5 CLASSIFICATION BASELINES
129
+
130
+ We further examined machines' capacity for annotating research aspects automatically. Six baseline models were implemented: Linear SVM, Random Forest, CNN, LSTM, BERT, and SciBERT.
131
+
132
+ Data Preprocessing The tf-idf feature was used. We turned all words into lowercase and removed those with frequency lower than 5. The final tf-idf feature contained 16,775 dimensions. For deep-learning approaches, the vocabulary size was 16,135, where tokens with frequency lower than 5 were replaced by <UNK>. Sequences were padded with <PAD> if they contained fewer than 60 tokens and truncated if they contained more than 60 tokens.
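+
+ A rough sketch of this preprocessing combined with the SVM baseline is shown below (scikit-learn; min_df here stands in for the frequency cutoff described above, and the two toy documents are placeholders, not the CODA-19 training data).
+
+ ```python
+ from sklearn.feature_extraction.text import TfidfVectorizer
+ from sklearn.pipeline import make_pipeline
+ from sklearn.svm import LinearSVC
+
+ train_texts = ["coronaviruses have caused several outbreaks in the past decade",
+                "we aim to test whether the proposed inhibitor blocks viral entry"]
+ train_labels = ["background", "purpose"]
+
+ model = make_pipeline(
+     TfidfVectorizer(lowercase=True, min_df=1),           # min_df=5 on the full corpus approximates the cutoff
+     LinearSVC(C=1, tol=0.001, loss="hinge", dual=True),  # hyper-parameters reported for the SVM baseline
+ )
+ model.fit(train_texts, train_labels)
+ print(model.predict(["this study aims to evaluate a new antiviral compound"]))
+ ```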
133
+
134
+ [Figure 3 graphic]
135
+
136
+ Figure 3: The normalized confusion matrix for the CODA-19 labels versus the biomedical expert's labels.
137
+
138
+ Models Machine-learning approaches were implemented using Scikit-learn (Pedregosa et al., 2011) and deep-learning approaches were implemented using PyTorch (Paszke et al., 2019). The following are the training setups.
139
+
140
+ * Linear SVM: We did a grid search for hyper-parameters and found that C = 1, tol = 0.001, and hinge loss yielded the best results.
141
+
142
+ * Random Forest: With the grid search, 150 estimators yielded the best result.
143
+
144
+ * CNN: The classic CNN (Kim, 2014) was implemented. Three kernel sizes (3, 4, 5) were used, each with 100 filters. The word embedding size was 256. A dropout rate of 0.3 and L2 regularization with weight $10^{-6}$ were used when training. We used the Adam optimizer with a learning rate of 0.00005. The model was trained for 50 epochs and the one with the highest validation score was kept for testing.
145
+
146
147
+
148
+ Model | Background (P R F1) | Purpose (P R F1) | Method (P R F1) | Finding (P R F1) | Other (P R F1) | Accuracy
149
+
150
151
152
+
153
154
+ #Sample | 5062 | 821 | 2140 | 6890 | 562 | 15475
155
+
156
157
+ SVM | .658 .703 .680 | .621 .446 .519 | .615 .495 .549 | .697 .729 .712 | .729 .699 .714 | .672
158
+
159
160
+ RF | .671 .632 .651 | .696 .365 .479 | .716 .350 .471 | .630 .787 .699 | .674 .742 .706 | .652
161
+
162
163
+ CNN | .649 .706 .676 | .612 .512 .557 | .596 .562 .579 | .726 .702 .714 | .743 .795 .768 | .677
164
+
165
166
+ LSTM | .655 .706 .680 | .700 .464 .558 | .634 .508 .564 | .700 .724 .711 | .682 .770 .723 | .676
167
+
168
169
+ BERT | .719 .759 .738 | .585 .639 .611 | .680 .612 .644 | .777 .752 .764 | .773 .874 .820 | .733
170
+
171
172
+ SciBERT | .733 .768 .750 | .616 .636 .626 | .715 .636 .673 | .783 .775 .779 | .794 .852 .822 | .749
173
+
174
175
+
176
+ Table 3: Baseline performance of automatic labeling using the crowd labels of CODA-19. SciBERT achieves the highest accuracy of 0.749 and outperforms the other models in every aspect.
177
+
178
+ * LSTM: We used 10 LSTM layers to encode the sequence. The encoded vector was then passed through a dense layer for classification. Word embedding size and LSTM hidden size were both 256. The rest of the hyper-parameters and training settings were the same as those of the CNN model.
179
+
180
+ * BERT: Hugging Face's implementation (Wolf et al., 2019) of the pretrained BERT (Devlin et al., 2018) was used for fine-tuning. We fine-tuned the pretrained model with a learning rate of $3 \times 10^{-7}$ for 50 epochs. Early stopping was used when no improvement occurred in the validation accuracy for five consecutive epochs. The model with the highest validation score was kept for testing.
181
+
182
+ * SciBERT: Hugging Face's implementation (Wolf et al., 2019) of the pretrained SciBERT (Beltagy et al., 2019) was used for fine-tuning. The fine-tuning setting is the same as that of the BERT model; a minimal illustrative sketch of this setup is shown below.
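+
+ A minimal sketch of this fine-tuning setup (Hugging Face transformers + PyTorch; the model name, toy batch, and loop length are illustrative, and the authors' actual training script may differ):
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ name = "allenai/scibert_scivocab_uncased"   # swap in "bert-base-uncased" for the BERT baseline
+ tokenizer = AutoTokenizer.from_pretrained(name)
+ model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=5)
+
+ texts = ["The outbreak spread rapidly worldwide.", "We aim to test this hypothesis."]
+ labels = torch.tensor([0, 1])               # e.g., 0 = Background, 1 = Purpose
+ batch = tokenizer(texts, padding=True, truncation=True, max_length=60, return_tensors="pt")
+
+ optimizer = torch.optim.AdamW(model.parameters(), lr=3e-7)
+ model.train()
+ for _ in range(3):                          # the paper trains up to 50 epochs with early stopping
+     out = model(**batch, labels=labels)
+     out.loss.backward()
+     optimizer.step()
+     optimizer.zero_grad()
+ print(out.loss.item())
+ ```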
183
+
184
+ Result Table 3 shows the results for the six baseline models: SciBERT performed the best in overall accuracy. When looking at each aspect, all the models performed better in classifying "Background," "Finding," and "Other," while identifying "Purpose" and "Method" was more challenging.
185
+
186
+ § 6 WHAT'S NEXT?
187
+
188
+ One obvious future direction is to improve classification performance. We evaluated the automatic labels against the biomedical expert's labels, and the SciBERT model achieved an accuracy of 0.774 and a Cohen's kappa of 0.667, indicating room for further improvement. Our baseline approaches did not use any contextual information or domain knowledge. We expect that the classification performance can be further boosted, allowing researchers to label future papers automatically.
189
+
190
+ How can these annotations help search and information extraction? Several search engines have been quickly developed and deployed. These engines allow users to navigate CORD-19 more efficiently and could potentially support decision-making. One motivation for spotting research aspects automatically is to help search and information extraction (Teufel et al., 1999). We have teamed up with the group that created CovidSeer${}^{5}$ to explore the possible uses of CODA-19 in such systems.
191
+
192
+ What other types of biomedical annotations can be crowdsourced? Many prior works that used crowd workers to annotate medical documents (Khare et al., 2016) focused on images (Heim et al., 2018) or named entities (e.g., medical terms (Mohan and Li, 2019), diseases (Good et al., 2014), or medicines (Abaho et al., 2019)). We will explore what other types of annotations can be created using non-expert workers.
193
+
194
+ § ACKNOWLEDGMENTS
195
+
196
+ This project is supported by the Huck Institutes of the Life Sciences' Coronavirus Research Seed Fund (CRSF) at Penn State University and the College of IST COVID-19 Seed Fund at Penn State University. We thank the crowd workers for participating in this project and providing useful feedback. We thank VoiceBunny Inc. for granting a 20% discount for the voiceover for the worker tutorial video to support projects relevant to COVID-19. We also thank Tiffany Knearem, Shih-Hong (Alan) Huang, Joseph Chee Chang, and Frank Ritter for the great discussion and useful feedback.
197
+
198
+ ${}^{5}$ CovidSeer: https://covidseer.ist.psu.edu/
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/ZQ_HvBxcdCv/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,226 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # A Natural Language Processing System for National COVID-19 Surveillance in the US Department of Veterans Affairs
2
+
3
+ Alec B Chapman ${}^{1,2}$ , Kelly S Peterson ${}^{1,2}$ , Augie Turano ${}^{3}$ , Tamára L Box ${}^{4}$ ,
4
+
5
+ Katherine S Wallace ${}^{5}$ , Makoto Jones ${}^{1,2}$
6
+
7
+ ${}^{1}$ Veterans Affairs (VA) Salt Lake City Health Care System
8
+
9
+ ${}^{2}$ Division of Epidemiology, University of Utah
10
+
11
+ ${}^{3}$ VA Office of EHR Modernization
12
+
13
+ ${}^{4}$ VA Office of Clinical Systems Development and Evaluation (CSDE)
14
+
15
+ ${}^{5}$ VA Office of Biosurveillance, VA Central Office, Washington, DC
16
+
17
+ ## Abstract
18
+
19
+ Timely and accurate accounting of positive cases has been an important part of the response to the COVID-19 pandemic. While most positive cases within Veterans Affairs (VA) are identified through structured laboratory results, some patients are tested or diagnosed outside VA so their clinical status is documented only in free-text narratives. We developed a Natural Language Processing pipeline for identifying positively diagnosed COVID- 19 patients and deployed this system to accelerate chart review. As part of the VA national response to COVID-19, this process identified 6,360 positive cases which did not have corresponding laboratory data. These cases accounted for 36.1% of total confirmed positive cases in VA to date. With available data, performance of the system is estimated as 82.4% precision and 94.2% recall. A public-facing implementation is released as open source and available to the community.
20
+
21
+ ## 1 Introduction
22
+
23
+ A robust pandemic response is contingent on timely and accurate information (Morse 2012). During the COVID-19 pandemic, public health institutions have established surveillance systems to monitor and track case counts over time.
24
+
25
+ COVID-19 is typically diagnosed using laboratory tests. The test results are frequently used as a source for surveillance systems. However, such systems typically only capture laboratory results from the same healthcare system. Patients may also be diagnosed with COVID-19 in the community, such as in external hospital networks or drive-through testing. These patients may be missed by laboratory-based surveillance methods and therefore go unrepresented in overall case counts.
26
+
27
+ Patient health information needed for biosurveillance is often recorded in free-text narratives in the Electronic Health Record (EHR) (Chapman et al. 2011), offering an alternative source of COVID-19 status when structured lab evidence is absent.
28
+
29
+ In this work we developed a Natural Language Processing (NLP) system to extract potential positive COVID-19 cases from clinical text within the Department of Veterans Affairs (VA). Following review by a clinical expert, positively identified patients are included in official VA surveillance counts. Since the VA EHR includes data from hospitals and clinics across the United States, this system enables a unique capability for collecting data for national surveillance purposes.
30
+
31
+ ## 2 Background
32
+
33
+ Manual information gathering draws effort away from patient care priorities and can impede timely and effective responses to public health threats. Automated approaches for processing clinical notes have been applied for public health purposes when data is needed as quickly as possible.
34
+
35
+ Gesteland et al (2003) developed an automated syndromic surveillance system using clinical text to identify anomalies in symptoms as rapidly as possible. Several examples in the literature have utilized clinical text including chief complaints to perform early detection of infectious disease (Brillman et al. 2005; Chapman, Dowling, and Wagner 2004; Ivanov et al. 2003; Matheny et al. 2012; Pineda et al. 2015).
36
+
37
+ Typical data sources for COVID-19 surveillance include government announcements, scientific publications, and news articles (Xu et al. 2020). Most literature to date for NLP related to COVID-19 has involved public data sources such as research publications (Wang et al. 2020). Others have examined social media sources including Twitter to examine sentiment or misinformation related to the virus (Rajput, Grover, and Rathi 2020; Singh et al. 2020). In this work, the objective was to identify the diagnosis of COVID-19 in clinical documents to report complete case counts of the disease for public health surveillance in VA.
38
+
39
+ ## 3 Methods
40
+
41
+ ### 3.1 Dataset
42
+
43
+ Veterans Health Administration (VHA) includes medical centers and clinics across the United States${}^{1}$. The VA Corporate Data Warehouse (CDW) includes electronic clinical data for these sites in a unified architecture. This work included clinical data from January 1 through June 15, 2020.
44
+
45
+ ### 3.2 NLP Pipeline
46
+
47
+ The primary objective of our NLP system is to classify whether a clinical document contains a positive COVID-19 case. To do this, we designed a rule-based pipeline which extracted target entities related to COVID-19, asserted certain attributes for each entity, and finally classified documents as either positive or negative based on the entities within the document. We prioritized minimizing false negatives in order to identify as many positive cases as possible. However, as the volume of data increased, it became important to reduce false positives in order to minimize manual chart review.
48
+
49
+ The pipeline was implemented in Python using the spaCy framework${}^{2}$. All processing steps except for tokenization, part-of-speech tagging, and dependency parsing were implemented using custom spaCy components, a feature available in version 2.0 and later. Each component may contain its own rules or knowledge base. Several components are available as part of medspaCy${}^{3}$, an open source project for clinical NLP using spaCy, and a publicly available version of the pipeline is released on GitHub${}^{4}$.
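+
+ As an illustration of the custom-component mechanism (not the deployed VA pipeline, which also relies on medspaCy, cycontext, and a sectionizer), a small spaCy v3-style target matcher might look like this:
+
+ ```python
+ import spacy
+ from spacy.language import Language
+ from spacy.matcher import PhraseMatcher
+ from spacy.tokens import Span
+ from spacy.util import filter_spans
+
+ nlp = spacy.blank("en")   # tokenizer-only pipeline, for illustration
+ terms = ["COVID-19", "novel coronavirus", "SARS-COV-2", "ncov"]
+ matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
+ matcher.add("COVID_TARGET", [nlp.make_doc(t) for t in terms])
+
+ @Language.component("covid_target_matcher")
+ def covid_target_matcher(doc):
+     # Mimics the Target Matcher step: tag COVID-19 mentions as entities.
+     spans = [Span(doc, start, end, label="COVID-19")
+              for _, start, end in matcher(doc)]
+     doc.ents = filter_spans(spans)
+     return doc
+
+ nlp.add_pipe("covid_target_matcher")
+ doc = nlp("Patient admitted for respiratory failure secondary to COVID-19.")
+ print([(ent.text, ent.label_) for ent in doc.ents])
+ ```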
50
+
51
+ The following describes each of the custom components in the pipeline, shown visually in Appendix A:
52
+
53
+ - Preprocessor: Modifies the underlying text before text processing. This step removes semi-structured templated texts and questionnaires which can cause false positives and replaces certain abbreviations and misspellings to simplify later processing steps.
54
+
55
+ - Target Matcher: Extracts entities related to COVID-19 based on linguistic patterns. This includes terms such as "COVID-19", "novel coronavirus", "ncov", and "SARS-COV-2".
56
+
57
+ - Context: Identifies semantic modifiers and attributes such as negation, uncertainty, and experiencer. This step was performed using cycontext ${}^{5}$ , a spaCy implementation of the ConText algorithm (Chapman, Dowling, and Chu 2007). Figure 1 shows a visualization of the ConText algorithm.
58
+
59
+ - Sectionizer: Detects section boundaries in the text, such as "Visit Diagnoses" or "Past Medical History".
60
+
61
+ - Postprocessor: Modifies or removes entities based on business logic. This component allows the pipeline to handle edge cases or more complex logic using the results of previous components.
62
+
63
+ - Document Classifier: Assigns a label of "Positive" or "Negative" to each document based on the entities and attributes extracted from the text.
64
+
65
+ The following is a brief description of classification logic at both entity level and document level. Entities are excluded if any of the following attributes are present:
66
+
67
+ - Uncertain
68
+
69
+ - Negated
70
+
71
+ - Experienced by someone other than the patient
72
+
73
+ Entities are marked as "positive" when any of the following conditions are met:
74
+
75
+ - Associated with a positive modifier, such as "diagnosed with" or "is positive"
76
+
77
+ - Occurring in certain sections of a note, such as "Diagnoses:"
78
+
79
+ - Mentioned with a specific associated condition, such as "COVID-19 pneumonia"
80
+
81
+ ---
82
+
83
+ ${}^{1}$ https://www.va.gov/health/
84
+
85
+ ${}^{2}$ https://spacy.io/
86
+
87
+ ${}^{3}$ https://github.com/medspacy
88
+
89
+ ${}^{4}$ https://github.com/abchapman93/VA_COVID- 19_NLP_BSV
90
+
91
+ ${}^{5}$ https://github.com/medspacy/cycontext
92
+
93
+ ---
94
+
95
+ ![01963db3-d7b7-7054-9855-b75a83eddfaf_2_206_213_1221_430_0.jpg](images/01963db3-d7b7-7054-9855-b75a83eddfaf_2_206_213_1221_430_0.jpg)
96
+
97
+ Figure 1. Visualizations provided in medSpaCy allowed us to view the output of our system and inspect linguistic patterns in the text. Target and modifier concepts are highlighted in text and arrows between them show relationships indicating whether the patient experienced COVID-19.
98
+
99
+ Based on the entities and corresponding attributes, we then classify the document as "Positive" or "Negative". In our current implementation, a document is classified as "Positive" if it has at least one positive, non-excluded entity.
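+
+ The entity- and document-level rules above can be summarized in a small sketch (simplified attributes; in the deployed system these flags come from ConText, the sectionizer, and the postprocessor):
+
+ ```python
+ from dataclasses import dataclass
+
+ @dataclass
+ class CovidMention:
+     text: str
+     negated: bool = False
+     uncertain: bool = False
+     other_experiencer: bool = False
+     positive: bool = False   # e.g., "diagnosed with", "is positive", a "Diagnoses:" section
+
+ def excluded(m: CovidMention) -> bool:
+     return m.negated or m.uncertain or m.other_experiencer
+
+ def classify_document(mentions) -> str:
+     # "Positive" if at least one positive, non-excluded mention remains.
+     if any(m.positive and not excluded(m) for m in mentions):
+         return "Positive"
+     return "Negative"
+
+ mentions = [
+     CovidMention("COVID-19", positive=True),                # "Diagnoses: COVID-19 B34.9"
+     CovidMention("COVID-19", positive=True, negated=True),  # "has not tested positive for COVID-19"
+ ]
+ print(classify_document(mentions))   # -> Positive
+ ```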
100
+
101
+ ### 3.3 Deployment
102
+
103
+ Our system was deployed to process clinical notes in VA CDW beginning January 21, 2020, the day after the first case was confirmed in the United States (Holshue et al. 2020). All documents containing keywords related to COVID-19 were included in document processing. Documents were retrieved and processed regularly to facilitate daily operations.
104
+
105
+ ### 3.4 Clinical Review
106
+
107
+ When a patient's document was classified by text processing as positive, the document was reviewed by a clinical validator. Using an internally developed web-based tool, reviewers viewed a marked-up summary of the processed clinical documents. If the patient fit a clinical definition of COVID-19, the reviewer accepted the suggestion and the patient was added to VA's COVID-19 counts.
108
+
109
+ Due to an increasing volume of data and limited resources for review, later iterations accelerated validation and improved precision by assigning documents to "High" and "Low" priority groups using other indicators such as a relevant ICD-10 code. This allowed reviewers to prioritize review of those patients who were likely to be valid cases and to minimize the review of false positives.
110
+
111
+ ## 4 Results
112
+
113
+ ### 4.1 Document Processing
114
+
115
+ Keywords such as coronavirus, novel coronavirus, COVID-19, SARS-CoV-2, and others were found in 17 million documents in VA CDW between January 1 and June 15, 2020. The median document length of this document set was 1,383 characters. Figure 2 shows the weekly volume of documents matching these keywords.
116
+
117
+ ![01963db3-d7b7-7054-9855-b75a83eddfaf_2_226_1638_1204_388_0.jpg](images/01963db3-d7b7-7054-9855-b75a83eddfaf_2_226_1638_1204_388_0.jpg)
118
+
119
+ Figure 2. Frequency of documents matching COVID-19 related keywords from January through June 15, 2020. Some key dates are marked for reference.
120
+
121
+ The phrase novel coronavirus was first observed in clinical notes the week of January 15. On February 11, 2020, World Health Organization (WHO) announced terminology of SARS-CoV-2 for the virus and COVID-19 as the disease it causes (World Health Organization 2020a). On March 11, WHO declared the COVID-19 situation as a pandemic (World Health Organization 2020b). In our dataset, the term COVID-19 occurred nearly 50,000 times the week of March 11 and increased to over 250,000 mentions the following week.
122
+
123
+ As of June 15, 2020, our system had processed documents from 3.6 million patients. Table 1 presents several illustrations of example text processed and classified by our system. After clinical review, a total of 6,360 patients without laboratory evidence were confirmed to be positive for COVID-19. This accounted for 36.1% of the total 17,624 positive cases identified in VA at the time.
124
+
125
+ <table><tr><td colspan="2">Text Classifications</td></tr><tr><td>Positive</td><td>"Patient admitted to hospital for respiratory failure secondary to COVID-19.” "Diagnoses: COVID-19 B34.9" "The patient reports that they have been diagnosed with COVID-19.”</td></tr><tr><td>Negative</td><td>"Requested that patient be screened for COVID-19 via telephone." "Studies have shown that some COVID-19 patients ha prolonged baseline." "Has the patient been diagnosed with COVID-19? Y/N”</td></tr></table>
126
+
127
+ Table 1. Examples of positive and negative classified text.
128
+
129
+ ### 4.2 System Performance
130
+
131
+ To evaluate the performance of our pipeline, we estimated precision and recall. Due to constraints, we calculated precision at a document level and recall at a patient level.
132
+
133
+ For precision, we manually reviewed 500 randomly selected documents classified as positive with an entry date on or later than May 1. We considered a document a true positive if the patient was stated to have been positive for COVID-19 and thus appropriate to review for validation.
134
+
135
+ Measuring recall is more complicated as the actual number of positive cases is not known. To estimate recall, we evaluated performance of our system for patients with positive laboratory results and at least one document containing previously mentioned keywords. We considered recall to be the percentage of these patients who had at least one document classified as positive by our system. All positive COVID-19 laboratory results completed between May 1 and June 15 were included in this analysis.
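+
+ In code, this patient-level recall estimate reduces to a simple set calculation (hypothetical patient identifiers shown):
+
+ ```python
+ # Patients with a positive lab result and at least one keyword document.
+ lab_positive = {"p01", "p02", "p03", "p04"}
+ # Patients with at least one document classified "Positive" by the NLP system.
+ nlp_positive = {"p01", "p02", "p04", "p09"}
+
+ recall = len(lab_positive & nlp_positive) / len(lab_positive)
+ print(f"Estimated patient-level recall: {recall:.1%}")   # 3 of 4 -> 75.0%
+ ```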
136
+
137
+ Our review yielded an estimated document-level precision of 82.4%. Estimated patient-level recall was 94.2%. Appendix B shows examples and explanations of incorrectly classified texts. One common cause of false positives was template texts such as screenings or educational information which contained phrases such as "confirmed COVID-19" but did not actually signify that the patient was positive. Several errors were referring to COVID-19 practices or the pandemic more generally, such as "COVID-19 infection control protocols". Other errors were caused by incorrectly linked targets and modifiers, resulting in marking a non-positive entity as positive or failing to mark an entity as excluded.
138
+
139
+ One source of false negatives was positive modifiers which were not linked to mentions of COVID-19. The scope for linking targets and modifiers was set to be one sentence based upon observation that linguistic modifiers typically occurred in the same sentence as a target concept. This error can be propagated by text formatting such as erroneous new lines which cause incorrect sentence splitting.
140
+
141
+ ## 5 Discussion
142
+
143
+ In this work we described the development and application of a Natural Language Processing system for COVID-19 surveillance in a national healthcare system in the United States. We demonstrated that NLP combined with clinical review can be leveraged to improve surveillance for COVID-19. Within the VA surveillance system, over one third of total known cases were identified by NLP and clinical review, with the remainder being identified through structured laboratory data. This capability validated that NLP can provide significant value to such a surveillance system, which requires a timely and sensitive case count.
144
+
145
+ Our system achieved high recall while still maintaining acceptable precision. Leveraging a rule-based system allowed defining narrow and specific criteria for what is extracted. Rules were iteratively developed to filter out irrelevant documents while still identifying positive cases.
146
+
147
+ Additionally, the flexibility of a rule-based system allowed us to add new examples and adapt to new concepts as they emerged. This was critical in the COVID-19 response, as the pandemic remains a dynamic and evolving situation. For example, the terms COVID-19 and SARS-CoV-2 were not announced until weeks after the surveillance system had been deployed, but requirements dictated immediate addition to our system. Similarly, changes in the clinical documentation such as new clinical concerns and semi-structured template texts required quick response and modification.
148
+
149
+ Due to the continuously changing nature of COVID-19, we required a system which permitted rapid and flexible development. While other mature clinical NLP systems exist, such as cTAKES and CLAMP (Savova et al. 2010; Soysal et al. 2018), we elected to develop this system using the features and flexibility of the spaCy framework. Rapid iteration permitted reviewing documents for errors, directly making changes to rules, and then evaluating them without compiling or reloading. Visualizations such as Figure 1 were useful to troubleshoot rule development and understand the linguistic patterns.
150
+
151
+ One limitation of this work is the evaluation of system performance. Our primary objective in this effort was to serve Veterans and provide complete public health reporting. The goal of chart review was to identify all positive patients rather than to create a reference set. Precision and recall metrics presented here are estimates using sampling and available structured data.
152
+
153
+ In future work, we plan to evaluate machine learning methods to improve identification of positive cases. A machine learning classifier could potentially improve our current system by improving document classification accuracy and identifying high-probability cases for review. This was not feasible in early stages of the response since there were very few known cases and no existing reference set. We have now identified thousands of possible cases which could be included in a training set for a supervised classifier. However, as stated previously, our clinical review did not equate to creating a reference set. Specifically, clinical reviewers did not always assign negative labels to reviewed cases which would be needed for training a supervised model. However, we believe that with additional validation and review, a machine learning classifier has the potential to augment our system's performance.
154
+
155
+ ## 6 Conclusion
156
+
157
+ We have developed a text processing pipeline and utilized it to perform accelerated review of COVID-19 status in clinical documents. This approach was dynamic and allowed us to adapt to an evolving situation where vocabulary and clinical understanding continued to emerge with high data volume. Rapid implementation and iteration permitted reaction to shifting clinical documentation and evidence. This pipeline accelerated review of patient charts such that 36.1% of confirmed positive cases in a VA surveillance system were identified using this capability.
158
+
159
+ ## Acknowledgments
160
+
161
+ We thank Christopher Mannozzi, Gary Roselle, Joel Roos, Joseph Francis, Julia Lewis, Richard Pham, Shantini Gamage, VA Business Intelligences Services Line (BISL), VA Office of Clinical Systems Development and Evaluation (CSDE), VHA Healthcare Operations Center (HOC), VHA Office of Analytics and Performance Integration (API), and VA Informatics and Computing Infrastructure (VINCI) Applied NLP.
162
+
163
+ We also thank the members of CSDE BASIC (Biosurveillance, Antimicrobial Stewardship, and Infection Control) for their invaluable contributions to this work.
164
+
165
+ ## References
166
+
167
+ Brillman, Judith C., Tom Burr, David Forslund, Edward Joyce, Rick Picard, and Edith Umland. 2005. "Modeling Emergency Department Visit Patterns for Infectious Disease Complaints: Results and
168
+
169
+ Application to Disease Surveillance." BMC Medical Informatics and Decision Making 5(1):4.
170
+
171
+ Chapman, Wendy, John Dowling, and David Chu. 2007. "ConText: An Algorithm for Identifying Contextual Features from Clinical Text." Pp. 81-88 in Biological, translational, and clinical language processing.
172
+
173
+ Chapman, Wendy W., John N. Dowling, and Michael M. Wagner. 2004. "Fever Detection from Free-Text Clinical Records for Biosurveillance." Journal of Biomedical Informatics 37(2):120-27.
174
+
175
+ Chapman, Wendy W., Adi V Gundlapalli, Brett R. South, and John N. Dowling. 2011. "Natural Language Processing for Biosurveillance." Pp. 279-310 in Infectious Disease Informatics and Biosurveillance. Springer.
176
+
177
+ Gesteland, Per H., Reed M. Gardner, Fu-Chiang Tsui, Jeremy U. Espino, Robert T. Rolfs, Brent C. James, Wendy W. Chapman, Andrew W. Moore, and Michael M. Wagner. 2003. "Automated Syndromic Surveillance for the 2002 Winter Olympics." Journal of the American Medical Informatics Association 10(6):547-54.
178
+
179
+ Holshue, Michelle L., Chas DeBolt, Scott Lindquist, Kathy H. Lofy, John Wiesman, Hollianne Bruce, Christopher Spitters, Keith Ericson, Sara Wilkerson, and Ahmet Tural. 2020. "First Case of 2019 Novel Coronavirus in the United States." New England Journal of Medicine.
180
+
181
+ Ivanov, Oleg, Per H. Gesteland, William Hogan, Michael B. Mundorff, and Michael M. Wagner. 2003. "Detection of Pediatric Respiratory and Gastrointestinal Outbreaks from Free-Text Chief Complaints." P. 318 in AMIA Annual Symposium Proceedings. Vol. 2003. American Medical Informatics Association.
182
+
183
+ Matheny, Michael E., Fern FitzHenry, Theodore Speroff, Jennifer K. Green, Michelle L. Griffith, Eduard E. Vasilevskis, Elliot M. Fielstein, Peter L. Elkin, and Steven H. Brown. 2012. "Detection of Infectious Symptoms from VA Emergency
184
+
185
+ Department and Primary Care Clinical Documentation." International Journal of Medical Informatics 81(3):143-56.
186
+
187
+ Morse, Stephen S. 2012. "Public Health Surveillance and Infectious Disease Detection." Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science 10(1):6-16.
188
+
189
+ Pineda, Arturo López, Ye Ye, Shyam Visweswaran, Gregory F. Cooper, Michael M. Wagner, and Fuchiang Rich Tsui. 2015. "Comparison of Machine Learning Classifiers for Influenza Detection from Emergency Department Free-Text Reports." Journal of Biomedical Informatics 58:60-69.
190
+
191
+ Rajput, Nikhil Kumar, Bhavya Ahuja Grover, and Vipin Kumar Rathi. 2020. "Word Frequency and Sentiment Analysis of Twitter Messages during Coronavirus Pandemic." ArXiv Preprint ArXiv:2004.03925.
192
+
193
+ Savova, Guergana K., James J. Masanz, Philip V Ogren, Jiaping Zheng, Sunghwan Sohn, Karin C. Kipper-Schuler, and Christopher G. Chute. 2010. "Mayo Clinical Text Analysis and Knowledge Extraction System (CTAKES): Architecture, Component Evaluation and Applications." Journal of the American Medical Informatics Association 17(5):507-13.
194
+
195
+ Singh, Lisa, Shweta Bansal, Leticia Bode, Ceren Budak, Guangqing Chi, Kornraphop Kawintiranon, Colton Padden, Rebecca Vanarsdall, Emily Vraga, and Yanchen Wang. 2020. "A First Look at COVID-19 Information and Misinformation Sharing on Twitter." ArXiv Preprint ArXiv:2003.13907.
196
+
197
+ Soysal, Ergin, Jingqi Wang, Min Jiang, Yonghui Wu, Serguei Pakhomov, Hongfang Liu, and Hua Xu. 2018. "CLAMP-a Toolkit for Efficiently Building Customized Clinical Natural Language Processing Pipelines." Journal of the American Medical Informatics Association 25(3):331-36.
198
+
199
+ Wang, Lucy Lu, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Darrin Eide, Kathryn Funk, Rodney Kinney, Ziyang Liu, and William Merrill.
200
+
201
+ 2020. "CORD-19: The Covid-19 Open Research Dataset." ArXiv Preprint ArXiv:2004.10706.
202
+
203
+ World Health Organization. 2020a. "Naming the Coronavirus Disease (COVID-19) and the Virus That Causes It." Retrieved June 10, 2020
204
+
205
+ (https://www.who.int/emergencies/diseases /novel-coronavirus-2019/technical-guidance/naming-the-coronavirus-disease- (covid-2019)-and-the-virus-that-causes-it).
206
+
207
+ World Health Organization. 2020b. "WHO Director-General's Opening Remarks at the Media Briefing on COVID-19 - 25 May 2020." Retrieved June 10, 2020 (https://www.who.int/dg/speeches/detail/w ho-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19---25-may- 2020).
208
+
209
+ Xu, Bo, Bernardo Gutierrez, Sumiko Mekaru, Kara Sewalk, Lauren Goodwin, Alyssa Loskill, Emily L. Cohn, Yulin Hswen, Sarah C. Hill, and Maria M. Cobo. 2020. "Epidemiological Data from the COVID- 19 Outbreak, Real-Time Case Information." Scientific Data 7(1):1-6.
210
+
211
+ ## Appendix A: NLP Pipeline
212
+
213
+ ![01963db3-d7b7-7054-9855-b75a83eddfaf_6_225_1303_416_433_0.jpg](images/01963db3-d7b7-7054-9855-b75a83eddfaf_6_225_1303_416_433_0.jpg)
214
+
215
+ Figure 3. Diagram of components in modular text processing pipeline. Components developed in this work marked by a solid line and existing spaCy components by a dashed line.
216
+
217
+ Appendix B: Error Analysis
218
+
219
+ <table><tr><td>Template or educational text</td></tr><tr><td>"Do you have any: * Fever</td></tr><tr><td>* Diagnosed with COVID-19 in the last 14 days"</td></tr><tr><td>"The patient reports that they have _____ diagnosed with COVID-19"</td></tr><tr><td>Experiencer other than the patient</td></tr><tr><td>"Veteran's ex tested positive for COVID-19."</td></tr><tr><td>"Patient's wife is a nurse. She tested positive for coronavirus.”</td></tr><tr><td>Incorrectly linked modifiers</td></tr><tr><td>"They said he has not presented with any sxs of COVID-19." "Veteran with decreased positive lifestyle due to COVID-19.”</td></tr><tr><td>Uncertain</td></tr><tr><td>"Admitting Diagnosis: COVID CHECK"</td></tr><tr><td>Not relevant to patient diagnosis</td></tr><tr><td>"TELEHEALTH SCREENING: Called to explain program COVID-19 + Monitoring"</td></tr><tr><td>"75 yo man with telephone primary care follow-up due to COVID-19 restrictions."</td></tr></table>
220
+
221
+ Table 2. Examples and explanations of false positives.
222
+
223
+ <table><tr><td>Text formatting causes incorrect sentence splitting</td></tr><tr><td>"Employee was tested for $\mathbf{{COVID}} <$ END OF SENTENCE> XX/XX/2020 and result positive."</td></tr><tr><td>Positive modifier too far from target concept</td></tr><tr><td>"Contacted Veteran for daily follow-up for COVID-19 screening. Discussed the following: Employee tested positive."</td></tr><tr><td>Incorrectly linked modifiers</td></tr><tr><td>"Risk for respiratory insufficiency r/t COVID- 19.”</td></tr><tr><td>Variations on positive modifiers not recognized by system</td></tr><tr><td>"62 y M COVID-19" (variation of "62 year old Male with COVID-19”)</td></tr></table>
224
+
225
+ Table 3. Examples and explanations of false negatives.
226
+
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/ZQ_HvBxcdCv/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,169 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § A NATURAL LANGUAGE PROCESSING SYSTEM FOR NATIONAL COVID-19 SURVEILLANCE IN THE US DEPARTMENT OF VETERANS AFFAIRS
2
+
3
+ Alec B Chapman ${}^{1,2}$ , Kelly S Peterson ${}^{1,2}$ , Augie Turano ${}^{3}$ , Tamára L Box ${}^{4}$ ,
4
+
5
+ Katherine S Wallace ${}^{5}$ , Makoto Jones ${}^{1,2}$
6
+
7
+ ${}^{1}$ Veterans Affairs (VA) Salt Lake City Health Care System
8
+
9
+ ${}^{2}$ Division of Epidemiology, University of Utah
10
+
11
+ ${}^{3}$ VA Office of EHR Modernization
12
+
13
+ ${}^{4}$ VA Office of Clinical Systems Development and Evaluation (CSDE)
14
+
15
+ ${}^{5}$ VA Office of Biosurveillance, VA Central Office, Washington, DC
16
+
17
+ § ABSTRACT
18
+
19
+ Timely and accurate accounting of positive cases has been an important part of the response to the COVID-19 pandemic. While most positive cases within Veterans Affairs (VA) are identified through structured laboratory results, some patients are tested or diagnosed outside VA so their clinical status is documented only in free-text narratives. We developed a Natural Language Processing pipeline for identifying positively diagnosed COVID- 19 patients and deployed this system to accelerate chart review. As part of the VA national response to COVID-19, this process identified 6,360 positive cases which did not have corresponding laboratory data. These cases accounted for 36.1% of total confirmed positive cases in VA to date. With available data, performance of the system is estimated as 82.4% precision and 94.2% recall. A public-facing implementation is released as open source and available to the community.
20
+
21
+ § 1 INTRODUCTION
22
+
23
+ A robust pandemic response is contingent on timely and accurate information (Morse 2012). During the COVID-19 pandemic, public health institutions have established surveillance systems to monitor and track case counts over time.
24
+
25
+ COVID-19 is typically diagnosed using laboratory tests. The test results are frequently used as a source for surveillance systems. However, such systems typically only capture laboratory results from the same healthcare system. Patients may also be diagnosed with COVID-19 in the community, such as in external hospital networks or drive-through testing. These patients may potentially be missed by laboratory-based surveillance methods, leading to these patients not being represented in overall case counts.
26
+
27
+ Patient health information needed for biosurveillance is often recorded in free-text narratives in the Electronic Health Record (EHR) (Chapman et al. 2011), offering an alternative source of COVID-19 status when structured lab evidence is absent.
28
+
29
+ In this work we developed a Natural Language Processing (NLP) system to extract potential positive COVID-19 cases from clinical text within the Department of Veterans Affairs (VA). Following review by a clinical expert, positively identified patients are included in official VA surveillance counts. Since the VA EHR includes data from hospitals and clinics across the United States, this system enables a unique capability for collecting data for national surveillance purposes.
30
+
31
+ § 2 BACKGROUND
32
+
33
+ Manual information gathering draws effort away from patient care priorities and can impede timely and effective responses to public health threats. Automated approaches for processing clinical notes have been applied for public health purposes when data is needed as quickly as possible.
34
+
35
+ Gesteland et al (2003) developed an automated syndromic surveillance system using clinical text to identify anomalies in symptoms as rapidly as possible. Several examples in the literature have utilized clinical text including chief complaints to perform early detection of infectious disease (Brillman et al. 2005; Chapman, Dowling, and Wagner 2004; Ivanov et al. 2003; Matheny et al. 2012; Pineda et al. 2015).
36
+
37
+ Typical data sources for COVID-19 surveillance include government announcements, scientific publications, and news articles (Xu et al. 2020). Most literature to date for NLP related to COVID-19 has involved public data sources such as research publications (Wang et al. 2020). Others have analyzed social media sources, including Twitter, to examine sentiment or misinformation related to the virus (Rajput, Grover, and Rathi 2020; Singh et al. 2020). In this work, the objective was to identify the diagnosis of COVID-19 in clinical documents to report complete case counts of the disease for public health surveillance in VA.
38
+
39
+ § 3 METHODS
40
+
41
+ § 3.1 DATASET
42
+
43
+ The Veterans Health Administration (VHA) includes medical centers and clinics across the United States ${}^{1}$ . The VA Corporate Data Warehouse (CDW) includes electronic clinical data for these sites in a unified architecture. This work included clinical data from January 1 through June 15, 2020.
44
+
45
+ § 3.2 NLP PIPELINE
46
+
47
+ The primary objective of our NLP system is to classify whether a clinical document contains a positive COVID-19 case. To do this, we designed a rule-based pipeline which extracted target entities related to COVID-19, asserted certain attributes for each entity, and finally classified documents as either positive or negative based on the entities within the document. We prioritized minimizing false negatives in order to identify as many positive cases as possible. However, as the volume of data increased, it became important to reduce false positives in order to minimize manual chart review.
48
+
49
+ The pipeline was implemented in Python using the spaCy framework ${}^{2}$ . All processing steps except for tokenization, part-of-speech tagging, and dependency parsing were implemented using custom spaCy components, a feature available in version 2.0 and later. Each component may contain its own rules or knowledge base. Several components are available as part of medSpaCy ${}^{3}$ , an open source project for clinical NLP using spaCy, and a publicly available version of the pipeline is released on GitHub ${}^{4}$ .
50
+
51
+ The following describes each of the custom components in the pipeline, shown visually in Appendix A (a minimal assembly sketch follows the list):
52
+
53
+ * Preprocessor: Modifies the underlying text before text processing. This step removes semi-structured templated texts and questionnaires which can cause false positives and replaces certain abbreviations and misspellings to simplify later processing steps.
54
+
55
+ * Target Matcher: Extracts entities related to COVID-19 based on linguistic patterns. This includes terms such as "COVID-19", "novel coronavirus", "ncov", and "SARS-COV-2".
56
+
57
+ * Context: Identifies semantic modifiers and attributes such as negation, uncertainty, and experiencer. This step was performed using cycontext ${}^{5}$ , a spaCy implementation of the ConText algorithm (Chapman, Dowling, and Chu 2007). Figure 1 shows a visualization of the ConText algorithm.
58
+
59
+ * Sectionizer: Detects section boundaries in the text, such as "Visit Diagnoses" or "Past Medical History".
60
+
61
+ * Postprocessor: Modifies or removes entities based on business logic. This component allows the pipeline to handle edge cases or more complex logic using the results of previous components.
62
+
63
+ * Document Classifier: Assigns a label of "Positive" or "Negative" to each document based on the entities and attributes extracted from the text.
64
+
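+ As a concrete illustration of how such a modular pipeline fits together, the sketch below assembles a spaCy v2-style pipeline with a rule-based target matcher and a document classifier. It is a minimal, assumed example for exposition only: the component logic, custom extension names, and term list are simplified stand-ins rather than the deployed VA rules (the released pipeline is linked in footnote 4).

```python
# Minimal sketch (not the released VA code) of a modular, rule-based spaCy v2-style
# pipeline: extract COVID-19 target entities, then classify the document.
import spacy
from spacy.matcher import PhraseMatcher
from spacy.tokens import Doc, Span

nlp = spacy.blank("en")  # tokenizer only; tagging/parsing components could be added

TARGET_TERMS = ["COVID-19", "novel coronavirus", "ncov", "SARS-COV-2"]
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("COVID_19", None, *[nlp.make_doc(term) for term in TARGET_TERMS])

# Custom attributes (illustrative names) used to pass state between components.
Doc.set_extension("cov_classification", default="Negative", force=True)
Span.set_extension("is_positive", default=False, force=True)
Span.set_extension("is_excluded", default=False, force=True)

def target_matcher(doc):
    """Extract target entities related to COVID-19 from linguistic patterns."""
    doc.ents = [Span(doc, start, end, label="COVID-19") for _, start, end in matcher(doc)]
    return doc

def document_classifier(doc):
    """Label the document Positive if any positive, non-excluded entity remains."""
    if any(ent._.is_positive and not ent._.is_excluded for ent in doc.ents):
        doc._.cov_classification = "Positive"
    return doc

nlp.add_pipe(target_matcher, name="target_matcher")
# ... the ConText, sectionizer, and postprocessor components would assert the entity
# attributes here; they are omitted in this stripped-down sketch ...
nlp.add_pipe(document_classifier, name="document_classifier", last=True)

doc = nlp("Patient admitted for respiratory failure secondary to COVID-19.")
print([ent.text for ent in doc.ents], doc._.cov_classification)
```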
65
+ The following is a brief description of classification logic at both entity level and document level. Entities are excluded if any of the following attributes are present:
66
+
67
+ * Uncertain
68
+
69
+ * Negated
70
+
71
+ * Experienced by someone other than the patient
72
+
73
+ Entities are marked as "positive" when any of the following conditions are met:
74
+
75
+ * Associated with a positive modifier, such as "diagnosed with" or "is positive"
76
+
77
+ * Occurring in certain sections of a note, such as "Diagnoses:"
78
+
79
+ * Mentioned with a specific associated condition, such as "COVID-19 pneumonia"
80
+
81
+ ${}^{1}$ https://www.va.gov/health/
82
+
83
+ ${}^{2}$ https://spacy.io/
84
+
85
+ ${}^{3}$ https://github.com/medspacy
86
+
87
+ ${}^{4}$ https://github.com/abchapman93/VA_COVID-19_NLP_BSV
88
+
89
+ ${}^{5}$ https://github.com/medspacy/cycontext
90
+
91
+ [Figure 1 graphic omitted; see caption below]
92
+
93
+ Figure 1. Visualizations provided in medSpaCy allowed us to view the output of our system and inspect linguistic patterns in the text. Target and modifier concepts are highlighted in text and arrows between them show relationships indicating whether the patient experienced COVID-19.
94
+
95
+ Based on the entities and corresponding attributes, we then classify the document as "Positive" or "Negative". In our current implementation, a document is classified as "Positive" if it has at least one positive, non-excluded entity.
96
+
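+ The decision rules above can be summarized in a few lines of ordinary Python; the sketch below is illustrative only, with assumed attribute names standing in for the modifier and section information produced by the upstream components.

```python
# Illustrative summary of the entity- and document-level logic described above.
# Attribute keys (e.g., "is_negated") are assumptions, not the deployed schema.
EXCLUDE_ATTRIBUTES = ("is_uncertain", "is_negated", "is_other_experiencer")
POSITIVE_ATTRIBUTES = ("has_positive_modifier", "in_positive_section", "has_associated_condition")

def is_excluded(entity):
    return any(entity.get(attr, False) for attr in EXCLUDE_ATTRIBUTES)

def is_positive(entity):
    return any(entity.get(attr, False) for attr in POSITIVE_ATTRIBUTES)

def classify_document(entities):
    """A document is Positive if it contains at least one positive, non-excluded entity."""
    if any(is_positive(e) and not is_excluded(e) for e in entities):
        return "Positive"
    return "Negative"

# Example: one negated mention, one mention asserted positive by a "diagnosed with" modifier.
entities = [
    {"text": "COVID-19", "is_negated": True},
    {"text": "COVID-19", "has_positive_modifier": True},
]
print(classify_document(entities))  # -> Positive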
97
+ § 3.3 DEPLOYMENT
98
+
99
+ Our system was deployed to process clinical notes in VA CDW beginning January 21, 2020, the day after the first case was confirmed in the United States (Holshue et al. 2020). All documents containing keywords related to COVID-19 were included in document processing. Documents were retrieved and processed regularly to facilitate daily operations.
100
+
101
+ § 3.4 CLINICAL REVIEW
102
+
103
+ When a patient's document was classified by text processing as positive, the document was reviewed by a clinical validator. Using an internally developed web-based tool, reviewers viewed a marked-up summary of the processed clinical documents. If the patient fit a clinical definition of COVID-19, the reviewer accepted the suggestion and the patient was added to VA's COVID-19 counts.
104
+
105
+ Due to an increasing volume of data and limited resources for review, later iterations accelerated validation and improved precision by assigning documents to "High" and "Low" priority groups using other indicators such as a relevant ICD-10 code. This allowed reviewers to prioritize review of those patients who were likely to be valid cases and to minimize the review of false positives.
106
+
107
+ § 4 RESULTS
108
+
109
+ § 4.1 DOCUMENT PROCESSING
110
+
111
+ Keywords such as coronavirus, novel coronavirus, COVID-19, SARS-CoV-2, and others were found in 17 million documents in VA CDW between January 1 and June 15, 2020. The median document length of this document set was 1,383 characters. Figure 2 shows the weekly volume of documents matching these keywords.
112
+
113
+ [Figure 2 graphic omitted; see caption below]
114
+
115
+ Figure 2. Frequency of documents matching COVID-19 related keywords from January through June 15, 2020. Some key dates are marked for reference.
116
+
117
+ The phrase novel coronavirus was first observed in clinical notes the week of January 15. On February 11, 2020, World Health Organization (WHO) announced terminology of SARS-CoV-2 for the virus and COVID-19 as the disease it causes (World Health Organization 2020a). On March 11, WHO declared the COVID-19 situation as a pandemic (World Health Organization 2020b). In our dataset, the term COVID-19 occurred nearly 50,000 times the week of March 11 and increased to over 250,000 mentions the following week.
118
+
119
+ As of June 15, 2020, our system had processed documents from 3.6 million patients. Table 1 presents several illustrations of example text processed and classified by our system. After clinical review, a total of 6,360 patients without laboratory evidence were confirmed to be positive for COVID-19. This accounted for 36.1% of the total 17,624 positive cases identified in VA at the time.
120
+
121
+ <table><tr><td>Classification</td><td>Example text</td></tr><tr><td>Positive</td><td>"Patient admitted to hospital for respiratory failure secondary to COVID-19." "Diagnoses: COVID-19 B34.9" "The patient reports that they have been diagnosed with COVID-19."</td></tr><tr><td>Negative</td><td>"Requested that patient be screened for COVID-19 via telephone." "Studies have shown that some COVID-19 patients ha prolonged baseline." "Has the patient been diagnosed with COVID-19? Y/N"</td></tr></table>
132
+
133
+ Table 1. Examples of positive and negative classified text.
134
+
135
+ § 4.2 SYSTEM PERFORMANCE
136
+
137
+ To evaluate the performance of our pipeline, we estimated precision and recall. Due to constraints, we calculated precision at a document level and recall at a patient level.
138
+
139
+ For precision, we manually reviewed 500 randomly selected documents classified as positive with an entry date on or later than May 1. We considered a document a true positive if the patient was stated to have been positive for COVID-19 and thus appropriate to review for validation.
140
+
141
+ Measuring recall is more complicated as the actual number of positive cases is not known. To estimate recall, we evaluated performance of our system for patients with positive laboratory results and at least one document containing previously mentioned keywords. We considered recall to be the percentage of these patients who had at least one document classified as positive by our system. All positive COVID-19 laboratory results completed between May 1 and June 15 were included in this analysis.
142
+
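+ For clarity, the patient-level recall estimate described above amounts to the following simple computation (toy data shown; the real analysis runs over CDW laboratory results and the NLP document classifications):

```python
# Patient-level recall estimate: among lab-confirmed patients with at least one
# keyword-matched document, the fraction with at least one NLP-positive document.
def estimate_recall(lab_positive_patients, nlp_positive_doc_counts):
    """lab_positive_patients: iterable of patient IDs with a positive lab result.
    nlp_positive_doc_counts: dict mapping patient ID -> number of documents the
    NLP system classified as Positive."""
    patients = list(lab_positive_patients)
    detected = sum(1 for p in patients if nlp_positive_doc_counts.get(p, 0) > 0)
    return detected / len(patients) if patients else 0.0

print(estimate_recall(["A", "B", "C"], {"A": 2, "C": 1}))  # 2 of 3 detected -> 0.67
```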
143
+ Our review yielded an estimated document-level precision of 82.4%. Estimated patient-level recall was 94.2%. Appendix B shows examples and explanations of incorrectly classified texts. One common cause of false positives was template text, such as screenings or educational information, which contained phrases such as "confirmed COVID-19" but did not actually signify that the patient was positive. Several errors involved references to COVID-19 practices or the pandemic more generally, such as "COVID-19 infection control protocols". Other errors were caused by incorrectly linked targets and modifiers, resulting in a non-positive entity being marked as positive or an entity failing to be marked as excluded.
144
+
145
+ One source of false negatives was positive modifiers which were not linked to mentions of COVID-19. The scope for linking targets and modifiers was set to be one sentence based upon observation that linguistic modifiers typically occurred in the same sentence as a target concept. This error can be propagated by text formatting such as erroneous new lines which cause incorrect sentence splitting.
146
+
147
+ § 5 DISCUSSION
148
+
149
+ In this work we described the development and application of a Natural Language Processing system for COVID-19 surveillance in a national healthcare system in the United States. We demonstrated that NLP combined with clinical review can be leveraged to improve surveillance for COVID-19. Within the VA surveillance system, over one third of total known cases were identified by NLP and clinical review, with the remainder being identified through structured laboratory data. This capability validated that NLP can provide significant value to such a surveillance system, which requires a timely and sensitive case count.
150
+
151
+ Our system achieved high recall while still maintaining acceptable precision. Leveraging a rule-based system allowed defining narrow and specific criteria for what is extracted. Rules were iteratively developed to filter out irrelevant documents while still identifying positive cases.
152
+
153
+ Additionally, the flexibility of a rule-based system allowed us to add new examples and adapt to new concepts as they emerged. This was critical in the COVID-19 response, as the pandemic remains a dynamic and evolving situation. For example, the terms COVID-19 and SARS-CoV-2 were not announced until weeks after the surveillance system had been deployed, but requirements dictated immediate addition to our system. Similarly, changes in the clinical documentation such as new clinical concerns and semi-structured template texts required quick response and modification.
154
+
155
+ Due to the continuously changing nature of COVID-19, we required a system which permitted rapid and flexible development. While other mature clinical NLP systems exist, such as cTAKES and CLAMP (Savova et al. 2010; Soysal et al. 2018), we elected to develop this system using the features and flexibility of the spaCy framework. Rapid iteration permitted reviewing documents for errors, directly making changes to rules, and then evaluating them without compiling or reloading. Visualizations such as Figure 1 were useful to troubleshoot rule development and understand the linguistic patterns.
156
+
157
+ One limitation of this work is the evaluation of system performance. Our primary objective in this effort was to serve Veterans and provide complete public health reporting. The goal of chart review was to identify all positive patients rather than to create a reference set. Precision and recall metrics presented here are estimates using sampling and available structured data.
158
+
159
+ In future work, we plan to evaluate machine learning methods to improve identification of positive cases. A machine learning classifier could potentially improve our current system by improving document classification accuracy and identifying high-probability cases for review. This was not feasible in early stages of the response since there were very few known cases and no existing reference set. We have now identified thousands of possible cases which could be included in a training set for a supervised classifier. However, as stated previously, our clinical review did not equate to creating a reference set. Specifically, clinical reviewers did not always assign negative labels to reviewed cases which would be needed for training a supervised model. However, we believe that with additional validation and review, a machine learning classifier has the potential to augment our system's performance.
160
+
161
+ § 6 CONCLUSION
162
+
163
+ We have developed a text processing pipeline and utilized it to perform accelerated review of COVID-19 status in clinical documents. This approach was dynamic and allowed us to adapt to an evolving situation where vocabulary and clinical understanding continued to emerge with high data volume. Rapid implementation and iteration permitted reaction to shifting clinical documentation and evidence. This pipeline accelerated review of patient charts such that 36.1% of confirmed positive cases in a VA surveillance system were identified using this capability.
164
+
165
+ § ACKNOWLEDGMENTS
166
+
167
+ We thank Christopher Mannozzi, Gary Roselle, Joel Roos, Joseph Francis, Julia Lewis, Richard Pham, Shantini Gamage, VA Business Intelligences Services Line (BISL), VA Office of Clinical Systems Development and Evaluation (CSDE), VHA Healthcare Operations Center (HOC), VHA Office of Analytics and Performance Integration (API), and VA Informatics and Computing Infrastructure (VINCI) Applied NLP.
168
+
169
+ We also thank the members of CSDE BASIC (Biosurveillance, Antimicrobial Stewardship, and Infection Control) for their invaluable contributions to this work.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/dGOeF3y_Weh/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,217 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # An Analysis of BERT FAQ Retrieval Models for COVID-19 Infobot
2
+
3
+ Shuo Sun
4
+
5
+ Johns Hopkins University
6
+
7
+ ssun32@jhu.edu
8
+
9
+ João Sedoc
10
+
11
+ Johns Hopkins University
12
+
13
+ jsedoc@jhu.edu
14
+
15
+ ## Abstract
16
+
17
+ The outbreak of the COVID-19 pandemic has caused tremendous suffering and death around the world and greatly affected the lives of humanity. As the world sees more infected cases every day, the need and demand for reliable and up-to-date information on COVID-19 have never been higher. While recent pre-trained language models show success on many other NLP tasks, we did not have a COVID-19 related dataset to help us evaluate the performance of QA systems and infobots based on these models. After the creation of a COVID-19 question similarity dataset by public health experts from the Johns Hopkins Bloomberg School of Public Health (JHSPH), we create models sufficient for application. We also analyze the amount of supervised data required.
18
+
19
+ ## 1 Introduction
20
+
21
+ The COVID-19 pandemic has undeniably affected the lives of almost everyone in every part of the world. Schools are closed, companies are shutting down permanently, and people are losing jobs due to the lack of consumer demand. While doctors, nurses, and many other essential workers are at the front line battling the virus, many concerned citizens are at home, searching for the latest developments of the pandemic and keeping themselves up to date with the newest information and guidelines from organizations such as the CDC and WHO. However, misinformation is rampant on social media ( ) and has even come from public officials, e.g., regarding ingestion of disinfectants or the use of NSAIDs, such as aspirin and ibuprofen. This motivates the need to answer questions like "Should I ingest disinfectants to treat COVID-19?" and "Can I use Aspirin with COVID?" The desire for reliable and up-to-date information related to a pandemic has never been greater in this modern era. Consequently, NLP practitioners quickly ramped up QA systems that are designed to automatically answer COVID-19 related questions.
22
+
23
+ Traditionally, QA systems can be categorized into generation-based methods (Serban et al., 2016; Xing et al., 2018), which synthesize answers using natural language generation techniques, and retrieval-based methods (Wu et al., 2018; Sakata et al., 2019), which retrieve the best answers from a list of given candidate answers. Given the existence of a vast amount of publicly available question-answer pairs from FAQ webpages maintained by organizations such as WHO ${}^{1}$ and CDC ${}^{2}$ , most existing COVID-19 QA systems use retrieval-based methods. We can further classify the retrieval-based techniques into three subcategories:
24
+
25
+ Rule-based These QA systems follow a set of predefined rules (Frederking, 1981) when generating responses to human questions. The rules are usually curated manually and require constant updates as the COVID-19 situations evolve around the world. They are also prone to errors caused by the insufficiency of rules to cover different situations. For example, QA systems that look for the coexisting keywords "what" and "COVID-19" to generate responses for the question "What is COVID-19?" might also produce similar answers to "What is the incubation period of COVID-19?".
26
+
27
+ Q-A Similarities QA systems in this category compute similarity scores between input questions and candidate answers and then sort candidate answers based on the similarity scores. The question-answer pairs can be ranked with traditional Information Retrieval (IR) methods such as tf-idf (Salton and McGill, 1986) and BM25 (Robertson et al., 2009; Chen and Van Durme, 2017) or neural IR methods (Sasaki et al., 2018; McDonald et al., 2018). Recently, models based on pre-trained language models such as BERT (Devlin et al., 2019; MacAvaney et al., 2019; Reimers and Gurevych, 2019) have demonstrated strong performance on sentence similarity and retrieval tasks.
28
+
29
+ ---
30
+
31
+ ${}^{1}$ https://www.who.int/news-room/q-a-detail/q-a-coronaviruses
32
+
33
+ ${}^{2}$ https://www.cdc.gov/coronavirus/2019-ncov/faq.html
34
+
35
+ ---
36
+
37
+ Q-Q Similarities QA systems in this category are similar to systems based on Q-A similarities, except that they calculate similarity scores between input questions and candidate questions instead of candidate answers. In other words, these QA systems retrieve and return the answers of candidate questions that are most similar to the input question.
38
+
39
+ In this work, we explore the feasibility of using pre-trained language models to compute Q-A and Q-Q similarities for retrieval-based COVID-19 QA systems. To support our experiments, we created a preliminary COVID-19 question similarity dataset in collaboration with experts from the Johns Hopkins Bloomberg School of Public Health (JHSPH). Evaluation results on our preliminary dataset suggest that although fine-tuned BERT-based models perform decently in terms of IR metrics, these models do not perform at the precision levels justifiable for direct real-world applications. Further, our experiments also suggest it is challenging to find threshold similarity scores that can balance the precision and recall for these models. We argue that high-precision systems are exceptionally important at this crucial moment since we do not want to serve irrelevant information to worried users, or worse, inadvertently disseminate false information. We further show that with some supervision from our dataset, the overall performance of these models improves significantly. To support further research, we will publicly release our COVID-19 question similarity dataset soon.
40
+
41
+ ## 2 Approaches
42
+
43
+ Figure 1 presents the system architecture of a typical baseline retrieval-based QA system. We first build a database of candidate question-answer pairs by scraping COVID-19 related frequently asked questions (FAQ) web pages from a list of carefully chosen data sources. A retrieval-based QA system ingests an input question and returns top-ranked candidate question-answer pairs from the dataset based on similarities between the input question and candidates in the database. At the time of submission for this paper, our database contains 690 question-answer pairs extracted from 12 data sources. We will use this architecture for all experiments in this paper.
44
+
45
+ ![01963da9-61b2-7530-adc4-bef3972552fd_1_845_168_614_262_0.jpg](images/01963da9-61b2-7530-adc4-bef3972552fd_1_845_168_614_262_0.jpg)
46
+
47
+ Figure 1: The system architecture of a retrieval-based COVID-19 QA system. FAQ webpages are scraped from reliable sources such as CDC, FDA, and WHO and pooled together into a database of question-answer pairs. A retrieval module ingests an input question and returns top-ranked candidate question-answer pairs from the database based on computed similarity metrics.
48
+
49
+ Since we want to examine the effectiveness of existing retrieval solutions, we experiment with two commonly used retrieval techniques:
50
+
51
+ BM25 The BM25 model (Robertson et al., 2009) is a well-known ranking function commonly used in search engines. It is a bag-of-words model that calculates similarity scores between the terms in queries and the terms in documents. We adapt BM25 to the QA task by treating input questions as queries and the question-answer pairs as documents. We use Elasticsearch ${}^{3}$ , which uses BM25 by default, as our backend retrieval system.
52
+
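+ The system itself relies on Elasticsearch's built-in BM25 scoring; as a dependency-light illustration of the same idea, the sketch below ranks toy FAQ entries with the rank_bm25 package, treating the input question as the query and each question-answer pair as one document.

```python
# Illustrative BM25 ranking over FAQ question-answer pairs (toy data; rank_bm25 is
# used here only as a stand-in for the Elasticsearch backend described in the text).
import re
from rank_bm25 import BM25Okapi

faq = [
    ("What is COVID-19?", "COVID-19 is the disease caused by the SARS-CoV-2 virus."),
    ("How does COVID-19 spread?", "The virus spreads mainly through respiratory droplets."),
]

def tokenize(text):
    return re.findall(r"[a-z0-9\-]+", text.lower())

corpus = [tokenize(q + " " + a) for q, a in faq]   # each Q-A pair is one document
bm25 = BM25Okapi(corpus)

query = tokenize("How does the virus spread between people?")
scores = bm25.get_scores(query)
ranked = sorted(zip(scores, faq), key=lambda pair: pair[0], reverse=True)
print(ranked[0][1][0])   # question of the top-ranked candidate pair
```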
53
+ BERT This is a state-of-the-art pre-trained language model that performs well on many NLP tasks. BERT (Devlin et al., 2019) and its variants such as Roberta (Liu et al., 2019) have been consistently producing top results on the SQuAD2.0 (Rajpurkar et al., 2018) leaderboard. Recently, Sakata et al. (2019) showed that BERT-based FAQ retrieval systems outperform baseline retrieval systems on benchmark IR datasets. In this paper, we experiment with BERT models in both unsupervised and supervised settings:
54
+
55
+ 1. Under unsupervised setting, we use sentence transformers ${}^{4}$ (Reimers and Gurevych,2019) to encode the input questions and candidate questions (or candidate answers) into semantically meaningful BERT sentence embeddings. The sentence transformers are BERT-based models that were fine-tuned on publicly available natural language inference (NLI) and semantic text similarity (STS) datasets. The sentence embeddings from these models are also aligned, meaning that the cosine similarities between sentence embeddings reflect their degrees of similarities. We can then calculate similarity scores between the input question and candidate questions (or candidate answers) by taking the cosine similarity between their sentence embeddings.
56
+
57
+ ---
58
+
59
+ ${}^{3}$ https://www.elastic.co/
60
+
61
+ ${}^{4}$ https://github.com/UKPLab/sentence-transformers
62
+
63
+ ---
64
+
65
+ 2. Under the supervised setting, we further fine-tune the sentence transformers with examples from our COVID-19 question similarity dataset (a minimal sketch of both settings follows this list).
66
+
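+ A minimal sketch of these two settings is shown below. The model name and toy data are assumptions for illustration; in the unsupervised setting the candidates are ranked by cosine similarity of sentence embeddings, and the supervised setting simply fine-tunes the same encoder on annotated pairs before re-ranking.

```python
# Unsupervised Q-Q scoring with a sentence-transformers encoder (illustrative).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("bert-base-nli-mean-tokens")  # an NLI-tuned BERT encoder

input_question = "Can COVID-19 be spread through surface-touching?"
candidate_questions = [
    "How does COVID-19 spread?",
    "Can I take my child to the playground?",
]

q_emb = model.encode([input_question], convert_to_tensor=True)
c_emb = model.encode(candidate_questions, convert_to_tensor=True)
scores = util.pytorch_cos_sim(q_emb, c_emb)[0]   # swap in candidate answers for Q-A mode

ranking = sorted(zip(candidate_questions, scores.tolist()), key=lambda x: x[1], reverse=True)
print(ranking)

# Supervised setting (outline): wrap the annotated (input question, candidate) pairs
# as sentence_transformers.InputExample objects, train with a CosineSimilarityLoss
# via model.fit(...), and then re-rank with the fine-tuned encoder exactly as above.
```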
67
+ For every model, we run experiments in both $Q - Q$ mode where we calculate similarity scores between input questions and candidate questions, and $Q$ - $A$ mode where we compute similarity scores between input questions and candidate answers. We report results in mean average precision (MAP) (Buckley and Voorhees, 2005) and normalized discounted cumulative gain (NDCG) (Järvelin and Kekäläinen, 2002). ${}^{5}$
68
+
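+ The reported metrics can be computed with pytrec_eval (see footnote 5); for readers unfamiliar with them, the small self-contained helpers below compute average precision and NDCG@k for a single query with binary relevance labels.

```python
# Self-contained MAP/NDCG helpers for one ranked candidate list (binary relevance).
import math

def average_precision(relevances):
    """relevances: 0/1 labels of the candidates in ranked order."""
    hits, precisions = 0, []
    for rank, rel in enumerate(relevances, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else 0.0

def ndcg_at_k(relevances, k):
    def dcg(rels):
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels))
    ideal = dcg(sorted(relevances, reverse=True)[:k])
    return dcg(relevances[:k]) / ideal if ideal > 0 else 0.0

ranked = [0, 1, 0, 1, 0]   # relevance of five candidates in ranked order
print(average_precision(ranked), ndcg_at_k(ranked, 3))
```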
69
+ ## 3 Dataset
70
+
71
+ Due to the subjective nature of evaluating QA systems and the lack of in-domain data related to COVID-19, we are creating a new COVID-19 question similarity dataset in collaboration with experts from the Johns Hopkins Bloomberg School of Public Health (JHSPH). The annotation process can be summarized as follows:
72
+
73
+ 1. We use a filtered subsample of user-generated questions from Qorona ${}^{6}$ , a list of COVID-19 related questions collected using the Google autocomplete API, and from COVID-19 related data collected by DialogueMD ${}^{7}$ .
74
+
75
+ 2. For each input question, we retrieve the top five question-answer pairs from a pool of candidate question-answer pairs ${}^{8}$ with the help of a BM25-based baseline QA retrieval system.
76
+
77
+ 3. We engage public health experts to directly assess the relevance of the candidate question-answer pairs on a scale of 0-100 .
78
+
79
+ 4. For input questions with no retrieved relevant question-answer pairs, our annotators manually craft answers for those questions.
80
+
81
+ An example from our dataset is shown in Figure 2.
82
+
83
+ At the time of submission of this paper, our preliminary dataset contains 6495 input questions with 32475 candidate question-answer pairs, covering a large variety of questions such as "Can COVID-19 be spread through surface-touching?" and "Can we use fabric masks to prevent the spread?". We reserve 1497 questions for the test set and use the other 4998 annotated instances for training.
84
+
85
+ ## 4 Experimental Setup
86
+
87
+ We filter out instances with no relevant candidates and some instances with blank candidate answers. Our filtered benchmark test set contains 392 examples. We assign relevance labels of one to question-answer candidates with annotated scores $\geq {80}$ and zero otherwise.
88
+
89
+ Ideally, for a given input question $\gamma$ and a list of candidate question-answer pairs $C = \{(q_1, a_1), \ldots, (q_5, a_5)\}$ , we want to learn a function $f$ such that $f(\gamma, (q_i, a_i)) > f(\gamma, (q_j, a_j)) \Leftrightarrow g(\gamma, (q_i, a_i)) > g(\gamma, (q_j, a_j))$ for $1 \leq i, j \leq 5$ , where $g$ is a function that returns the annotated relevance label. For our BM25 baseline, $f$ is modeled by the BM25 ranking function in Elasticsearch. For BERT-based models, $f$ returns the cosine similarity between the sentence embedding of an input question and the sentence embedding of a candidate question or candidate answer.
90
+
91
+ We conducted all experiments on an AWS instance with 8 cpus, 60GB of RAM and a 16GB Nvidia Tesla V100 GPU.
92
+
93
+ ## 5 Results
94
+
95
+ Table 1 presents the results of various models on the test set of our preliminary COVID-19 question similarity dataset. We highlight some of the findings here:
96
+
97
+ First, models that were fine-tuned on our annotated dataset significantly improve performance on the COVID-19 question similarity test set. For example, NDCG@3 of a BERT retrieval model fine-tuned on NLI data improves from 0.544 to 0.626 and from 0.309 to 0.626 when we fine-tune that model on similarities between (input question, candidate question) pairs and (input question, candidate answer) pairs, respectively.
98
+
99
+ ---
100
+
101
+ ${}^{5}$ Both metrics can be calculated with the pytrec_eval tool (Van Gysel and de Rijke, 2018).
102
+
103
+ ${}^{6}$ https://github.com/allenai/Qorona
104
+
105
+ ${}^{7}$ https://github.com/dialoguemd/COVID-19
106
+
107
+ ${}^{8}$ We scraped FAQ webpages from reliable sources such as CDC, FDA, WHO and Cleveland Clinic.
108
+
109
+ ---
110
+
111
+ Question: Can I go for a run Does running exercise compromise my immune system
112
+
113
+ Candidate 1: (We are currently on lockdown... can I go outside? Can I work out outside? Can I go for a run? Can I go for a walk?,...) $\rightarrow$ 100
114
+
115
+ Candidate 2: (Should I go to work if there is an outbreak in my community?,...) $\rightarrow \mathbf{0}$
116
+
117
+ Candidate 3: (Can I take my child to the playground?,...) $\rightarrow \mathbf{0}$
118
+
119
+ Candidate 4: (Can i go to the funeral of someone who died of COVID-19?,...) $\rightarrow \mathbf{0}$
120
+
121
+ Candidate 5: (How can I and my family prepare for COVID-19?,...) $\rightarrow \mathbf{0}$

+ The surprise here is that models that were fine-tuned on only (input question, candidate question) pairs also significantly outperform unsupervised models when we evaluate those models in Q-A mode. For example, the NDCG@3 of the same BERT model improves from 0.309 to 0.493 when evaluated in Q-A mode. We hypothesize that some of the candidate questions are summaries of the candidate answers and because of that, the sentence representations of the candidate questions might be close to the sentence representations of the candidate answers. Therefore, learning to align the vectors of input questions and candidate questions would also improve the alignment between the vectors of input questions and candidate answers.
122
+
123
+ Figure 2: An example from our COVID-19 question-answering dataset. For every input question, we retrieved five candidate question-answer pairs using a baseline BM25 retrieval system. Annotators were asked to carefully assign relevance scores between 0 - 100 to the candidates.
124
+
125
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Fine-tune</td><td colspan="4">Q-Q mode</td><td colspan="4">Q-A mode</td></tr><tr><td>MAP</td><td>N@1</td><td>N@3</td><td>T/Q(s)</td><td>MAP</td><td>N@1</td><td>N@3</td><td>T/Q(s)</td></tr><tr><td>BM25*</td><td>N/A</td><td>0.569</td><td>0.523</td><td>0.572</td><td>-</td><td>0.461</td><td>0.370</td><td>0.474</td><td>-</td></tr><tr><td colspan="10">Unsupervised</td></tr><tr><td>BERT</td><td>NLI</td><td>0.537</td><td>0.477</td><td>0.544</td><td>0.018</td><td>0.334</td><td>0.194</td><td>0.309</td><td>0.030</td></tr><tr><td>Roberta</td><td>NLI</td><td>0.529</td><td>0.464</td><td>0.535</td><td>0.048</td><td>0.337</td><td>0.194</td><td>0.315</td><td>0.066</td></tr><tr><td>BERT</td><td>$\mathrm{{NLI}} \rightarrow \mathrm{{STSB}}$</td><td>0.504</td><td>0.426</td><td>0.511</td><td>0.018</td><td>0.386</td><td>0.225</td><td>0.413</td><td>0.030</td></tr><tr><td>Roberta</td><td>$\mathrm{{NLI}} \rightarrow \mathrm{{STSB}}$</td><td>0.505</td><td>0.423</td><td>0.507</td><td>0.047</td><td>0.334</td><td>0.189</td><td>0.303</td><td>0.066</td></tr><tr><td>CovidBERT</td><td>NLI</td><td>0.533</td><td>0.462</td><td>0.544</td><td>0.018</td><td>0.318</td><td>0.176</td><td>0.277</td><td>0.031</td></tr><tr><td colspan="10">Supervised - Trained on (input question, candidate question) pairs</td></tr><tr><td>BERT</td><td>None</td><td>0.614</td><td>0.587</td><td>0.619</td><td>0.018</td><td>0.460</td><td>0.304</td><td>0.493</td><td>0.030</td></tr><tr><td>BERT</td><td>NLI</td><td>0.623</td><td>0.605</td><td>0.626</td><td>0.018</td><td>0.411</td><td>0.268</td><td>0.457</td><td>0.030</td></tr><tr><td>CovidBERT</td><td>NLI</td><td>0.617</td><td>0.592</td><td>0.622</td><td>0.018</td><td>0.474</td><td>0.321</td><td>0.509</td><td>0.032</td></tr><tr><td>TwitterBERT</td><td>None</td><td>0.621</td><td>0.600</td><td>0.624</td><td>0.018</td><td>0.396</td><td>0.270</td><td>0.398</td><td>0.030</td></tr><tr><td colspan="10">Supervised - Trained on (input question, candidate answer) pairs</td></tr><tr><td>BERT</td><td>None</td><td>0.605</td><td>0.577</td><td>0.611</td><td>0.017</td><td>0.620</td><td>0.600</td><td>0.626</td><td>0.030</td></tr><tr><td>BERT</td><td>NLI</td><td>0.605</td><td>0.579</td><td>0.611</td><td>0.018</td><td>0.615</td><td>0.587</td><td>0.623</td><td>0.029</td></tr><tr><td>TwitterBERT</td><td>None</td><td>0.597</td><td>0.566</td><td>0.603</td><td>0.017</td><td>0.579</td><td>0.548</td><td>0.580</td><td>0.030</td></tr><tr><td>CovidBERT</td><td>NLI</td><td>0.609</td><td>0.584</td><td>0.614</td><td>0.017</td><td>0.618</td><td>0.597</td><td>0.624</td><td>0.030</td></tr><tr><td colspan="10">Supervised - Trained on both</td></tr><tr><td>BERT</td><td>None</td><td>0.615</td><td>0.589</td><td>0.618</td><td>0.018</td><td>0.614</td><td>0.587</td><td>0.619</td><td>0.030</td></tr><tr><td>BERT</td><td>NLI</td><td>0.624</td><td>0.607</td><td>0.627</td><td>0.018</td><td>0.617</td><td>0.594</td><td>0.621</td><td>0.030</td></tr><tr><td>TwitterBERT</td><td>None</td><td>0.614</td><td>0.587</td><td>0.621</td><td>0.018</td><td>0.611</td><td>0.579</td><td>0.619</td><td>0.030</td></tr><tr><td>CovidBERT</td><td>NLI</td><td>0.614</td><td>0.584</td><td>0.621</td><td>0.018</td><td>0.612</td><td>0.584</td><td>0.618</td><td>0.030</td></tr></table>
126
+
127
+ Table 1: MAP and NDCG (cut off at top 1 and top 3 documents) of various retrieval models. Q-Q mode ranks candidates based on similarity scores between input questions and candidate questions, while Q-A mode ranks candidates based on similarity scores between input questions and candidate answers. T/Q is the average time (in seconds) taken to calculate similarity scores for each input question. All BERT models are based on BERT-base-cased and all Roberta models are fine-tuned on Roberta-large. CovidBERT was further trained (continued pre-training) on AllenAI's CORD19 Dataset of scientific articles about coronaviruses. TwitterBERT was further trained (continued pre-training) on tweets about coronavirus.
128
+
129
+ Second, we observe that unsupervised models perform significantly better in $Q - Q$ mode than in $Q$ -A mode. For example, unsupervised models can perform at NDCG@3 of around 0.507 to 0.544 in Q-Q mode, but their performances drop significantly to around 0.266 to 0.309 in Q-A mode. This also applies to the supervised models trained on (input question, candidate question) pairs which perform at NDCG@3 of around 0.619 to 0.626 in Q-Q mode against 0.398 to 0.493 in Q-A mode. This is expected given the fact that those models were fine-tuned on short sentence pairs, which is different from the answers in our COVID-19 dataset that are significantly longer. In contrast, models that were fine-tuned on (input question, candidate answer) pairs or both (input question, candidate question) and (input question, candidate answer) pairs perform well in both Q-Q and Q-A modes.
130
+
131
+ Third, although Roberta outperforms BERT on many benchmark datasets (Rajpurkar et al., 2018), it does not seem to perform better than BERT on our benchmark COVID-19 test set. As we can see from the unsupervised section of Table 1, BERT outperforms Roberta under almost all settings. Further, because Roberta models have significantly more parameters than BERT models, they take 2-3 times longer to compute sentence embeddings and cosine similarities for every batch of data. We exclude Roberta from further experiments and focus on BERT models for the remainder of this paper.
132
+
133
+ Last but not least, the vanilla BM25 model using the default parameters from Elasticsearch outperforms all unsupervised BERT-based models in both Q-Q and Q-A modes. In contrast, it performs worse than the supervised models in both Q-Q and Q-A modes.
134
+
135
+ In general, unsupervised BERT-based models perform decently well on our benchmark test set, performing at NDCG@1 of around 0.423 to 0.477, which means that these models can rank a relevant candidate in the top position around 42.3% to 47.7% of the time.
136
+
137
+ ![01963da9-61b2-7530-adc4-bef3972552fd_4_867_173_569_433_0.jpg](images/01963da9-61b2-7530-adc4-bef3972552fd_4_867_173_569_433_0.jpg)
138
+
139
+ Figure 3: A COVID-19 QA system serving as the back-end system of a COVID-19 infobot. The QA system contains a database of question-answer pairs similar to the one seen in figure 1. As the system is not perfect, there are cases where the QA system returns incorrect results or cannot find valid answers in the database. An additional confidence estimator is needed to filter out bad results.
140
+
141
+ ## 6 Applying QA to COVID-19 infobot
142
+
143
+ Unlike typical QA retrieval systems that are designed to show users lists of top-ranked candidate answers and let the users decide which answers are best, an infobot expects the QA system to return the most confident answer. In other words, an infobot should serve answers to input questions if and only if it is confident that the answers are correct. If not, the infobot should explain to users that it does not know how to answer the question, as seen in Figure 3. We want to further emphasize the importance of precision in this setting since we do not want to provide irrelevant answers to users, or worse, give wrong advice.
144
+
145
+ Therefore, a confidence estimator is needed to filter out irrelevant or wrong answers. A commonly used approach in the NLP community is to set a threshold on the similarity scores. As seen in the example in Figure 3, any candidate answer with a similarity score of less than 0.8 will be rejected and replaced with "I am not able to answer that question".
146
+
147
+ To evaluate how well our retrieval systems do in an infobot-based environment, we measure the performance of our models in terms of precision, recall, and F1 at different threshold values. We collect results at 101 threshold values between 0.0 and 1.0, evenly spaced at an interval of 0.01. For each threshold value, a candidate is considered correct if the similarity score between the candidate and the input question is greater than the threshold value. We gather all (input question, candidate) tuples from our COVID-19 question similarity test set and then convert them into true/false labels according to the threshold. We calculate the precision, recall, and F1 values between the predicted outputs and the actual relevance labels at all threshold values.
148
+
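+ A sketch of this threshold sweep is shown below with toy scores and labels; the real evaluation runs over all (input question, candidate) tuples in the test set.

```python
# Precision/recall/F1 as a function of the similarity-score threshold (toy data).
import numpy as np

def prf_at_threshold(scores, labels, threshold):
    preds = (np.asarray(scores) > threshold).astype(int)   # "correct" if score > threshold
    labels = np.asarray(labels)
    tp = int(((preds == 1) & (labels == 1)).sum())
    fp = int(((preds == 1) & (labels == 0)).sum())
    fn = int(((preds == 0) & (labels == 1)).sum())
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

scores = [0.91, 0.42, 0.77, 0.30, 0.85]   # cosine similarities of (question, candidate) pairs
labels = [1, 0, 1, 0, 0]                  # annotated relevance (score >= 80 mapped to 1)
best_f1, best_t = max((prf_at_threshold(scores, labels, t)[2], t)
                      for t in np.linspace(0.0, 1.0, 101))   # 101 thresholds, step 0.01
print("best F1 %.2f at threshold %.2f" % (best_f1, best_t))
```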
149
+ ![01963da9-61b2-7530-adc4-bef3972552fd_5_196_172_1260_496_0.jpg](images/01963da9-61b2-7530-adc4-bef3972552fd_5_196_172_1260_496_0.jpg)
150
+
151
+ Figure 4: Precision/Recall/F1 curves of an unsupervised model versus a supervised model.
152
+
153
+ We show the precision, recall, and F1 curves of an unsupervised BERT-NLI model before and after it was fine-tuned on our annotated dataset. Both models were evaluated in Q-Q mode and we expect the trend is similar to other unsupervised and supervised models.
154
+
155
+ As seen in figure 4, the unsupervised model performs poorly at this task, achieving a maximum F1 score of less than 0.35 , and the three metrics converge at a low value of around 0.27 . In contrast, the situation is much better for the supervised model, where the best F1 score is more than 0.65 , and all three metrics also converge at around 0.65 . We hypothesize that the scales of cosine similarities from the unsupervised model are different for different sentences, therefore it is difficult to find a global threshold that works well for all sentences. In comparison, our annotated dataset optimizes those scales and makes it easier to find a reasonable threshold.
156
+
157
+ ### 6.1 How much training data is actually needed?
158
+
159
+ Our results show that it is possible to improve the F1 from around 0.35 to 0.65 by fine-tuning those models with our annotated dataset. An interesting question then arises as to what percentage of training data is needed to reach peak performance. To find out, we re-trained a vanilla BERT model on sub-samples of our training data. As seen in Figure 5, with just 10% of the training data, the model achieves a good F1 of 0.598 and NDCG@3 of 0.543. However, it only manages to hit its peak F1 when trained on 50% of the training data and hit its peak MAP and NDCG when trained on 60% of the training data. Those percentages translate to 2499 and 2999 examples respectively. This shows that BERT-based retrieval models do require a significant amount of supervision before we can deploy them in a real-world setting.
160
+
161
+ ![01963da9-61b2-7530-adc4-bef3972552fd_5_849_775_607_456_0.jpg](images/01963da9-61b2-7530-adc4-bef3972552fd_5_849_775_607_456_0.jpg)
162
+
163
+ Figure 5: MAP/NDCG@1/NDCG@3/F1 against percentage of training data used.
164
+
165
+ ## 7 Conclusion and future work
166
+
167
+ This paper presents experimental results and analyses on the effectiveness of using recent pre-trained language models to build COVID-19 related QA systems. We evaluate BM25 and unsupervised BERT-based QA models on a COVID-19 question similarity dataset carefully annotated by public health experts from JHSPH and find that although these perform decently, achieving NDCG@1 of around 0.42-0.52, they are not performing at the level necessary in the real-world environment. When further applying these QA models to an infobot environment, the unsupervised models get poor F1 scores of around 0.35 and it is difficult to find good threshold values that can balance precision and recall. To facilitate future research, we are releasing BERT-NLI ${}^{9}$ and TwitterBERT ${}^{10}$ , which were fine-tuned on (input question, candidate question) pairs from our dataset. We are also building a larger COVID-19 question similarity dataset with twenty candidates for every input question. We will publicly release our dataset in the future.
168
+
169
+ ## References
170
+
171
+ Chris Buckley and Ellen Voorhees. 2005. Retrieval system evaluation. TREC: Experiment and Evaluation in Information Retrieval, pages 53-75.
172
+
173
+ Tongfei Chen and Benjamin Van Durme. 2017. Discriminative information retrieval for question answering sentence selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 719-725.
174
+
175
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
176
+
177
+ Robert E Frederking. 1981. A rule-based conversation participant. In Proceedings of the 19th annual meeting on Association for Computational Linguistics, pages 83-87. Association for Computational Linguistics.
178
+
179
+ Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems (TOIS), 20(4):422-446.
180
+
181
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
182
+
183
+ Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
184
+
185
+ Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. 2019. Cedr: Contextualized embeddings for document ranking. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1101-1104.
186
+
187
+ Ryan McDonald, Georgios-Ioannis Brokos, and Ion Androutsopoulos. 2018. Deep relevance ranking using enhanced document-query interactions. arXiv preprint arXiv:1809.01682.
188
+
189
+ Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789.
190
+
191
+ Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3973-3983.
192
+
193
+ Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333-389.
194
+
195
+ Wataru Sakata, Tomohide Shibata, Ribeka Tanaka, and Sadao Kurohashi. 2019. Faq retrieval using query-question similarity and bert-based query-answer relevance. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1113-1116.
196
+
197
+ Gerard Salton and Michael J McGill. 1986. Introduction to modern information retrieval.
198
+
199
+ Shota Sasaki, Shuo Sun, Shigehiko Schamoni, Kevin Duh, and Kentaro Inui. 2018. Cross-lingual learning-to-rank with shared representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 458-463.
200
+
201
+ Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Thirtieth AAAI Conference on Artificial Intelligence.
202
+
203
+ Christophe Van Gysel and Maarten de Rijke. 2018. Pytrec_eval: An extremely fast python interface to trec_eval. In SIGIR. ACM.
204
+
205
+ Yu Wu, Wei Wu, Zhoujun Li, and Ming Zhou. 2018. Learning matching models with weak supervision for response selection in retrieval-based chatbots. In
206
+
207
+ ---
208
+
209
+ 9https://huggingface.co/ssun32/bert_base_nli_turkle
210
+
211
+ 10https://huggingface.co/ssun32/bert_twitter_turkle
212
+
213
+ ---
214
+
215
+ Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 420-425.
216
+
217
+ Chen Xing, Yu Wu, Wei Wu, Yalou Huang, and Ming Zhou. 2018. Hierarchical recurrent attention network for response generation. In Thirty-Second AAAI Conference on Artificial Intelligence.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/dGOeF3y_Weh/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,228 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § AN ANALYSIS OF BERT FAQ RETRIEVAL MODELS FOR COVID-19 INFOBOT
2
+
3
+ Shuo Sun
4
+
5
+ Johns Hopkins University
6
+
7
+ ssun32@jhu.edu
8
+
9
+ João Sedoc
10
+
11
+ Johns Hopkins University
12
+
13
+ jsedoc@jhu.edu
14
+
15
+ § ABSTRACT
16
+
17
+ The outbreak of the COVID-19 pandemic has caused tremendous amounts of suffering and deaths around the world and greatly affected the lives of humanity. As the world sees more infected cases every day, the need and demand for reliable and up-to-date information on COVID-19 have never been higher. While recent pre-trained language models show successes on many other NLP tasks, we did not have COVID-19 related dataset to help us evaluate the performance of QA systems and infobots based on these models. After the creation of a COVID-19 question similarity dataset by public health experts from the Johns Hopkins Bloomberg School of Public Health (JHSPH), we create models sufficient for application. We also analyze the amount of supervised data required.
18
+
19
+ § 1 INTRODUCTION
20
+
21
+ The COVID-19 pandemic has undeniably affected the lives of almost everyone in every part of the world. Schools are closed, companies are shutting down permanently, and people are losing jobs due to the lack of consumer demands. While doctors, nurses, and many other essential workers are at the front-line battling the virus, many concerned citizens are at home, searching for the latest developments of the pandemics and keeping them up to date with the newest information and guidelines from organizations such as CDC and WHO. However, misinformation is rampant in social media ( ) and even public officials e.g. ingestion of disinfectants or use of NSAIDs, such as aspirin and ibuprofen. This motivates the need to answer questions like "Should I ingest disinfectants to treat COVID-19?" and "Can I use Aspirin with COVID?" The desire for reliable and up-to-date information related to a pandemic has never been greater in this modern era. Consequently, NLP practitioners quickly ramp up QA systems that are designed to automatically answer COVID-19 related questions.
22
+
23
+ Traditional, QA systems can be categorized into generation-based methods (Serban et al., 2016; Xing et al., 2018) which synthesize answers using natural language generation techniques, and retrieval-based methods (Wu et al., 2018; Sakata et al., 2019), which retrieve the best answers from a list of given candidate answers. Given the existence of a vast amount of publicly available question-answer pairs from FAQ webpages maintained by organizations such as ${\mathrm{{WHO}}}^{1}$ and ${\mathrm{{CDC}}}^{2}$ , most existing COVID-19 QA systems use retrieval-based methods. We can further classify the retrieval-based techniques into three subcategories:
24
+
25
+ Rule-based These QA systems follow a set of predefined rules (Frederking, 1981) when generating responses to human questions. The rules are usually curated manually and require constant updates as the COVID-19 situations evolve around the world. They are also prone to errors caused by the insufficiency of rules to cover different situations. For example, QA systems that look for the coexisting keywords "what" and "COVID-19" to generate responses for the question "What is COVID-19?" might also produce similar answers to "What is the incubation period of COVID-19?".
26
+
27
+ Q-A Similarities QA systems in this category compute similarity scores between input questions and candidate answers and then sort candidate answers base on the similarity scores. The question-answer pairs can be ranked with traditional Information Retrieval (IR) methods such as tf-idf (Salton and McGill, 1986) and BM25 (Robertson et al., 2009; Chen and Van Durme, 2017) or neural IR methods (Sasaki et al., 2018; McDonald et al., 2018). Recently, models based on pre-trained language models such as BERT (Devlin et al., 2019; MacAvaney et al., 2019; Reimers and Gurevych, 2019) have demonstrated strong performance on sentence similarity and retrieval tasks.
28
+
29
+ ${}^{1}$ https://www.who.int/news-room/q-a-detail/q-a-coronaviruses
30
+
31
+ ${}^{2}$ https://www.cdc.gov/coronavirus/2019-ncov/faq.html
32
+
33
+ Q-Q Similarities QA systems in this category are similar to systems based on Q-A similarities, except that they calculate similarity scores between input questions and candidate questions instead of candidate answers. In other words, these QA systems retrieve and return the answers of candidate questions that are most similar to the input question.
34
+
35
+ In this work, we explore the feasibilities of using pre-trained language models to compute Q-A and $\mathrm{Q} - \mathrm{Q}$ similarities for retrieval-based COVID-19 QA systems. To support our experiments, we created a preliminary COVID-19 question similarity dataset in collaboration with experts from the Johns Hopkins Bloomberg School of Public Health (JH-SPH). Evaluation results on our preliminary dataset suggest that although fine-tuned BERT-based models perform decently in terms of IR metrics, these models do not perform at the precision levels justifiable for direct real-world applications. Further, our experiments also suggest it is challenging to find threshold similarity scores that can balance the precision and recall for these models. We argue that high-precision systems are exceptionally important at this crucial moment since we do not want to serve irrelevant information to worried users, or worse, inadvertently disseminate false information. We further show that with some supervision from our dataset, the overall performance of these models improves significantly. To support further researches, we will publicly release our COVID-19 question similarity dataset soon.
36
+
37
+ § 2 APPROACHES
38
+
39
+ Figure 1 presents the system architecture of a typical baseline retrieval-based QA system. We first build a database of candidate question-answer pairs by scraping COVID-19 related frequently asked questions (FAQ) web pages from a list of carefully chosen data sources. A retrieval-based QA system ingests an input question and returns top-ranked candidate question-answer pairs from the database based on similarities between the input question and the candidates in the database. At the time of submission of this paper, our database contains 690 question-answer pairs extracted from 12 data sources. We will use this architecture for all experiments in this paper.
40
+
41
+ < g r a p h i c s >
42
+
43
+ Figure 1: The system architecture of a retrieval-based COVID-19 QA system. FAQ webpages are scraped from reliable sources such as CDC, FDA, and WHO and pooled together into a database of question-answer pairs. A retrieval module ingests an input question and returns top-ranked candidate question-answer pairs from the database based on computed similarity metrics.
44
+
45
+ Since we want to examine the effectiveness of existing retrieval solutions, we experiment with two commonly used retrieval techniques:
46
+
47
+ BM25 The BM25 model (Robertson et al., 2009) is a well-known ranking function commonly used in search engines. It is a bag-of-words model that calculates similarity scores between the terms in queries and the terms in documents. We adapt BM25 to the QA task by treating input questions as queries and the question-answer pairs as documents. We use Elasticsearch ${}^{3}$ , which uses BM25 by default, as our backend retrieval system.
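+ As a concrete illustration of this baseline, the sketch below indexes a few question-answer pairs and queries them with Elasticsearch's default BM25 ranking (8.x-style Python client calls); the index name, field names, and field boost are assumptions made for the example, not details of our deployed system.
+
+ ```python
+ # Sketch (not the system's exact code): index FAQ pairs, then retrieve
+ # top-k candidates for an input question with Elasticsearch's BM25 scoring.
+ from elasticsearch import Elasticsearch
+
+ es = Elasticsearch("http://localhost:9200")
+
+ faq_pairs = [
+     {"question": "What is COVID-19?",
+      "answer": "COVID-19 is the disease caused by the SARS-CoV-2 virus.",
+      "source": "WHO"},
+     # ... remaining scraped question-answer pairs
+ ]
+
+ for i, pair in enumerate(faq_pairs):
+     es.index(index="covid_faq", id=i, document=pair)
+
+ def retrieve(question, k=5):
+     """Return the top-k question-answer pairs ranked by BM25."""
+     resp = es.search(
+         index="covid_faq",
+         query={"multi_match": {"query": question,
+                                "fields": ["question^2", "answer"]}},
+         size=k,
+     )
+     return [(hit["_score"], hit["_source"]) for hit in resp["hits"]["hits"]]
+
+ print(retrieve("What is the incubation period of COVID-19?"))
+ ```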
48
+
49
+ BERT This is a state-of-the-art pre-trained language model that performs well on many NLP tasks. BERT (Devlin et al., 2019) and its variants such as Roberta (Liu et al., 2019) have consistently produced top results on the SQuAD2.0 (Rajpurkar et al., 2018) leaderboard. Recently, Sakata et al. (2019) showed that BERT-based FAQ retrieval systems outperform baseline retrieval systems on benchmark IR datasets. In this paper, we experiment with BERT models in both unsupervised and supervised settings:
50
+
51
+ 1. Under the unsupervised setting, we use sentence transformers ${}^{4}$ (Reimers and Gurevych, 2019) to encode the input questions and candidate questions (or candidate answers) into semantically meaningful BERT sentence embeddings. The sentence transformers are BERT-based models that were fine-tuned on publicly available natural language inference (NLI) and semantic text similarity (STS) datasets. The sentence embeddings from these models are also aligned, meaning that the cosine similarities between sentence embeddings reflect their degrees of similarity. We can then calculate similarity scores between the input question and candidate questions (or candidate answers) by taking the cosine similarity between their sentence embeddings (a minimal sketch follows this list).
52
+
53
+ ${}^{3}$ https://www.elastic.co/
54
+
55
+ ${}^{4}$ https://github.com/UKPLab/sentence-transformers
56
+
57
+ 2. Under the supervised setting, we further fine-tune the sentence transformers with examples from our COVID-19 question similarity dataset.
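+ A minimal sketch of both settings with the sentence-transformers library is shown below. The model name, the toy question/candidate pairs, and the choice of CosineSimilarityLoss over relevance scores rescaled to [0, 1] are illustrative assumptions, not the exact training recipe behind the results reported in this paper.
+
+ ```python
+ # Sketch only: unsupervised ranking by cosine similarity, followed by
+ # supervised fine-tuning on annotated similarity labels.
+ from sentence_transformers import SentenceTransformer, InputExample, losses, util
+ from torch.utils.data import DataLoader
+
+ model = SentenceTransformer("bert-base-nli-mean-tokens")
+
+ # --- Unsupervised: rank candidates by embedding cosine similarity ---
+ query = "Can COVID-19 be spread through surface-touching?"
+ candidates = ["How does COVID-19 spread?",
+               "Should I wear a mask outdoors?"]
+ q_emb = model.encode(query, convert_to_tensor=True)
+ c_emb = model.encode(candidates, convert_to_tensor=True)
+ scores = util.cos_sim(q_emb, c_emb)[0]       # util.pytorch_cos_sim in older releases
+ ranking = scores.argsort(descending=True)    # candidate indices, best first
+
+ # --- Supervised: fine-tune on (input question, candidate) pairs ---
+ # Labels here are made-up relevance scores rescaled from 0-100 to [0, 1].
+ train_examples = [InputExample(texts=[query, candidates[0]], label=0.9),
+                   InputExample(texts=[query, candidates[1]], label=0.1)]
+ loader = DataLoader(train_examples, shuffle=True, batch_size=16)
+ model.fit(train_objectives=[(loader, losses.CosineSimilarityLoss(model))],
+           epochs=1, warmup_steps=10)
+ ```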
58
+
59
+ For every model, we run experiments in both Q-Q mode, where we calculate similarity scores between input questions and candidate questions, and Q-A mode, where we compute similarity scores between input questions and candidate answers. We report results in mean average precision (MAP) (Buckley and Voorhees, 2005) and normalized discounted cumulative gain (NDCG) (Järvelin and Kekäläinen, 2002). ${}^{5}$
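+ As an illustration, both metrics can be computed per query with the pytrec_eval tool mentioned in footnote 5; the query and candidate identifiers and scores below are made up, and the parameterized measure string is one way to request the NDCG cutoffs used in Table 1.
+
+ ```python
+ # Sketch: evaluating one query's ranking with pytrec_eval.
+ import pytrec_eval
+
+ qrels = {"q1": {"cand1": 1, "cand2": 0, "cand3": 0}}          # annotated relevance labels
+ run = {"q1": {"cand1": 0.82, "cand2": 0.41, "cand3": 0.17}}   # model similarity scores
+
+ evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"map", "ndcg_cut.1,3"})
+ results = evaluator.evaluate(run)
+ print(results["q1"]["map"], results["q1"]["ndcg_cut_1"], results["q1"]["ndcg_cut_3"])
+ ```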
60
+
61
+ § 3 DATASET
62
+
63
+ Due to the subjective nature of evaluating QA systems and the lack of in-domain data related to COVID-19, we are creating a new COVID-19 question similarity dataset in collaboration with experts from the Johns Hopkins Bloomberg School of Public Health (JHSPH). The annotation process can be summarized as follows:
64
+
65
+ 1. We use a filtered subsample of user-generated questions from Qorona ${}^{6}$ , a list of COVID-19 related questions collected using the Google autocomplete API, and from COVID-19 related data collected by Dialogue MD ${}^{7}$ .
66
+
67
+ 2. For each input question, we retrieve the top five question-answer pairs from a pool of candidate question-answer pairs ${}^{8}$ with the help of a BM25-based baseline QA retrieval system.
68
+
69
+ 3. We engage public health experts to directly assess the relevance of the candidate question-answer pairs on a scale of 0-100.
70
+
71
+ 4. For input questions with no retrieved relevant question-answer pairs, our annotators manually craft answers for those questions.
72
+
73
+ An example from our dataset is shown in Figure 2.
74
+
75
+ At the time of submission of this paper, our preliminary dataset contains 6495 input questions with 32475 candidate question-answer pairs, covering a large variety of questions such as "Can COVID-19 be spread through surface-touching?" and "Can we use fabric masks to prevent the spread?". We reserve 1497 questions for the test set and use the other 4998 annotated instances for training.
76
+
77
+ § 4 EXPERIMENTAL SETUP
78
+
79
+ We filter out instances with no relevant candidates and some instances with blank candidate answers. Our filtered benchmark test set contains 392 examples. We assign relevance labels of one to question-answer candidates with annotated scores $\geq 80$ and zero otherwise.
80
+
81
+ Ideally, for a given input question $\gamma$ and a list of candidate question-answer pairs $C = \{(q_1, a_1), \ldots, (q_5, a_5)\}$ , we want to learn a function $f$ such that $f(\gamma, (q_i, a_i)) > f(\gamma, (q_j, a_j)) \Leftrightarrow g(\gamma, (q_i, a_i)) > g(\gamma, (q_j, a_j))$ for $1 \leq i, j \leq 5$ , where $g$ is a function that returns the annotated relevance label. For our BM25 baseline, $f$ is modeled by the BM25 ranking function in Elasticsearch. For BERT-based models, $f$ returns the cosine similarity between the sentence embedding of an input question and the sentence embedding of a candidate question or candidate answer.
82
+
83
+ We conducted all experiments on an AWS instance with 8 CPUs, 60 GB of RAM, and a 16 GB NVIDIA Tesla V100 GPU.
84
+
85
+ § 5 RESULTS
86
+
87
+ Table 1 presents the results of various models on the test set of our preliminary COVID-19 question similarity dataset. We highlight some of the findings here:
88
+
89
+ First, models that were fine-tuned on our annotated dataset significantly improve performance on the COVID-19 question similarity test set. For example, NDCG@3 of a BERT retrieval model fine-tuned on NLI data improves from 0.544 to 0.626 and from 0.309 to 0.626 when we fine-tune that model on similarities between (input question, candidate question) pairs and (input question, candidate answer) pairs respectively. The surprise here is that models that were fine-tuned on only (input question, candidate question) pairs also significantly outperform unsupervised models when we evaluate those models in Q-A mode. For example, the NDCG@3 of the same BERT model improves from 0.309 to 0.493 when evaluated in Q-A mode. We hypothesize that some of the candidate questions are summaries of the candidate answers and, because of that, the sentence representations of the candidate questions might be close to the sentence representations of the candidate answers. Therefore, learning to align the vectors of input questions and candidate questions would also improve the alignment between the vectors of input questions and candidate answers.
90
+
91
+ ${}^{5}$ Both metrics can be calculated with the pytrec_eval tool (Van Gysel and de Rijke, 2018).
92
+
93
+ ${}^{6}$ https://github.com/allenai/Qorona
94
+
95
+ ${}^{7}$ https://github.com/dialoguemd/COVID-19
96
+
97
+ ${}^{8}$ We scraped FAQ webpages from reliable sources such as CDC, FDA, WHO and Cleveland Clinic.
98
+
99
+ Question: Can I go for a run Does running exercise compromise my immune system
100
+
101
+ Candidate 1: (We are currently on lockdown... can I go outside? Can I work out outside? Can I go for a run? Can I go for a walk?,...) $\rightarrow$ 100
102
+
103
+ Candidate 2: (Should I go to work if there is an outbreak in my community?,...) $\rightarrow \mathbf{0}$
104
+
105
+ Candidate 3: (Can I take my child to the playground?,...) $\rightarrow \mathbf{0}$
106
+
107
+ Candidate 4: (Can i go to the funeral of someone who died of COVID-19?,...) $\rightarrow \mathbf{0}$
108
+
109
+ Candidate 5: (How can I and my family prepare for COVID-19?,...) $\rightarrow \mathbf{0}$
110
+
111
+ Figure 2: An example from our COVID-19 question-answering dataset. For every input question, we retrieved five candidate question-answer pairs using a baseline BM25 retrieval system. Annotators were asked to carefully assign relevance scores between 0 - 100 to the candidates.
112
+
113
+ \begin{tabular}{ll|cccc|cccc}
+ Model & Fine-tune & \multicolumn{4}{c|}{Q-Q mode} & \multicolumn{4}{c}{Q-A mode} \\
+  &  & MAP & N@1 & N@3 & T/Q(s) & MAP & N@1 & N@3 & T/Q(s) \\
+ \hline
+ BM25* & N/A & 0.569 & 0.523 & 0.572 & - & 0.461 & 0.370 & 0.474 & - \\
+ \hline
+ \multicolumn{10}{c}{Unsupervised} \\
+ \hline
+ BERT & NLI & 0.537 & 0.477 & 0.544 & 0.018 & 0.334 & 0.194 & 0.309 & 0.030 \\
+ Roberta & NLI & 0.529 & 0.464 & 0.535 & 0.048 & 0.337 & 0.194 & 0.315 & 0.066 \\
+ BERT & NLI $\rightarrow$ STSB & 0.504 & 0.426 & 0.511 & 0.018 & 0.386 & 0.225 & 0.413 & 0.030 \\
+ Roberta & NLI $\rightarrow$ STSB & 0.505 & 0.423 & 0.507 & 0.047 & 0.334 & 0.189 & 0.303 & 0.066 \\
+ CovidBERT & NLI & 0.533 & 0.462 & 0.544 & 0.018 & 0.318 & 0.176 & 0.277 & 0.031 \\
+ \hline
+ \multicolumn{10}{c}{Supervised - Trained on (input question, candidate question) pairs} \\
+ \hline
+ BERT & None & 0.614 & 0.587 & 0.619 & 0.018 & 0.460 & 0.304 & 0.493 & 0.030 \\
+ BERT & NLI & 0.623 & 0.605 & 0.626 & 0.018 & 0.411 & 0.268 & 0.457 & 0.030 \\
+ CovidBERT & NLI & 0.617 & 0.592 & 0.622 & 0.018 & 0.474 & 0.321 & 0.509 & 0.032 \\
+ TwitterBERT & None & 0.621 & 0.600 & 0.624 & 0.018 & 0.396 & 0.270 & 0.398 & 0.030 \\
+ \hline
+ \multicolumn{10}{c}{Supervised - Trained on (input question, candidate answer) pairs} \\
+ \hline
+ BERT & None & 0.605 & 0.577 & 0.611 & 0.017 & 0.620 & 0.600 & 0.626 & 0.030 \\
+ BERT & NLI & 0.605 & 0.579 & 0.611 & 0.018 & 0.615 & 0.587 & 0.623 & 0.029 \\
+ TwitterBERT & None & 0.597 & 0.566 & 0.603 & 0.017 & 0.579 & 0.548 & 0.580 & 0.030 \\
+ CovidBERT & NLI & 0.609 & 0.584 & 0.614 & 0.017 & 0.618 & 0.597 & 0.624 & 0.030 \\
+ \hline
+ \multicolumn{10}{c}{Supervised - Trained on both} \\
+ \hline
+ BERT & None & 0.615 & 0.589 & 0.618 & 0.018 & 0.614 & 0.587 & 0.619 & 0.030 \\
+ BERT & NLI & 0.624 & 0.607 & 0.627 & 0.018 & 0.617 & 0.594 & 0.621 & 0.030 \\
+ TwitterBERT & None & 0.614 & 0.587 & 0.621 & 0.018 & 0.611 & 0.579 & 0.619 & 0.030 \\
+ CovidBERT & NLI & 0.614 & 0.584 & 0.621 & 0.018 & 0.612 & 0.584 & 0.618 & 0.030 \\
+ \end{tabular}
187
+
188
+ Table 1: MAP and NDCG (cut off at top 1 and top 3 documents) of various retrieval models. Q-Q mode ranks candidates based on similarity scores between input questions and candidate questions, while Q-A mode ranks candidates based on similarity scores between input questions and candidate answers. T/Q is the average time (in seconds) taken to calculate similarity scores for each input question. All BERT models are based on BERT-base-cased and all Roberta models are based on Roberta-large. CovidBERT was further pre-trained on AllenAI's CORD19 Dataset of scientific articles about coronaviruses. TwitterBERT was further pre-trained on tweets about coronavirus.
189
+
190
+ Second, we observe that unsupervised models perform significantly better in Q-Q mode than in Q-A mode. For example, unsupervised models can perform at NDCG@3 of around 0.507 to 0.544 in Q-Q mode, but their performance drops significantly to around 0.266 to 0.309 in Q-A mode. This also applies to the supervised models trained on (input question, candidate question) pairs, which perform at NDCG@3 of around 0.619 to 0.626 in Q-Q mode against 0.398 to 0.493 in Q-A mode. This is expected given that those models were fine-tuned on short sentence pairs, whereas the answers in our COVID-19 dataset are significantly longer. In contrast, models that were fine-tuned on (input question, candidate answer) pairs or on both (input question, candidate question) and (input question, candidate answer) pairs perform well in both Q-Q and Q-A modes.
191
+
192
+ Third, although Roberta outperforms BERT on many benchmark datasets (Rajpurkar et al., 2018), it does not seem to perform better than BERT on our benchmark COVID-19 test set. As we can see from the unsupervised section of Table 1, BERT outperforms Roberta under almost all settings. Further, because Roberta models have significantly more parameters than BERT models, they take 2-3 times longer to compute sentence embeddings and cosine similarities for every batch of data. We exclude Roberta from further experiments and focus on BERT models for the remainder of this paper.
193
+
194
+ Last but not least, the vanilla BM25 model using the default parameters from Elasticsearch outperforms all unsupervised BERT-based models in both Q-Q and Q-A modes. In contrast, it performs worse than the supervised models in both Q-Q and Q-A modes.
195
+
196
+ In general, unsupervised BERT-based models perform decently well on our benchmark test set, achieving NDCG@1 of around 0.423 to 0.477, which means that these models rank a relevant candidate in the top position around 42.3% to 47.7% of the time.
197
+
198
+ < g r a p h i c s >
199
+
200
+ Figure 3: A COVID-19 QA system serving as the back-end system of a COVID-19 infobot. The QA system contains a database of question-answer pairs similar to the one seen in figure 1. As the system is not perfect, there are cases where the QA system returns incorrect results or cannot find valid answers in the database. An additional confidence estimator is needed to filter out bad results.
201
+
202
+ § 6 APPLYING QA TO COVID-19 INFOBOT
203
+
204
+ Unlike typical QA retrieval systems that are designed to show users lists of top-ranked candidate answers and let the users decide which are the best answers, an infobot expects the QA system to return the most confident answer. In other words, an infobot should serve answers to input questions if and only if it is confident that the answers are correct. If not, the infobot should explain to users that it does not know how to answer the questions, as seen in Figure 3. We want to further emphasize the importance of precision in this setting since we do not want to provide irrelevant answers to users, or worse, give wrong advice to users.
205
+
206
+ Therefore, a confidence estimator is needed to filter out irrelevant or wrong answers. A commonly used approach in the NLP community is to set a threshold on the similarity scores. As seen in the example in Figure 3, any candidate answer with a similarity score of less than 0.8 will be rejected and replaced with "I am not able to answer that question".
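+ A minimal sketch of such a threshold-based confidence estimator is given below; the 0.8 cut-off and the fallback message simply mirror the example above.
+
+ ```python
+ # Sketch: reject low-confidence answers in the infobot setting.
+ FALLBACK = "I am not able to answer that question."
+
+ def answer_or_fallback(ranked_candidates, threshold=0.8):
+     """ranked_candidates: list of (similarity_score, answer), best first."""
+     if not ranked_candidates:
+         return FALLBACK
+     score, answer = ranked_candidates[0]
+     return answer if score >= threshold else FALLBACK
+ ```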
207
+
208
+ To evaluate how well our retrieval systems do in an infobot-based environment, we measure the performance of our models in terms of precision, recall, and F1 at different threshold values. We collect results at 101 threshold values between 0.0 and 1.0, evenly spaced at intervals of 0.01. For each threshold value, a candidate is considered correct if the similarity score between the candidate and the input question is greater than the threshold value. We gather all (input question, candidate) tuples from our COVID-19 question similarity test set and then convert them into true/false labels according to the threshold. We calculate the precision, recall, and F1 values between the predicted outputs and the actual relevance labels at all threshold values.
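+ The sweep itself can be implemented in a few lines with scikit-learn, as sketched below; variable and function names are illustrative.
+
+ ```python
+ # Sketch: precision/recall/F1 at 101 evenly spaced thresholds.
+ import numpy as np
+ from sklearn.metrics import precision_recall_fscore_support
+
+ def sweep(scores, labels, steps=101):
+     """scores: model similarities; labels: 0/1 relevance annotations."""
+     scores, labels = np.asarray(scores), np.asarray(labels)
+     rows = []
+     for t in np.linspace(0.0, 1.0, steps):          # 0.00, 0.01, ..., 1.00
+         preds = (scores > t).astype(int)
+         p, r, f1, _ = precision_recall_fscore_support(
+             labels, preds, average="binary", zero_division=0)
+         rows.append((t, p, r, f1))
+     return rows
+ ```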
209
+
210
+ < g r a p h i c s >
211
+
212
+ Figure 4: Precision/Recall/F1 curves of an unsupervised model versus a supervised model.
213
+
214
+ We show the precision, recall, and F1 curves of an unsupervised BERT-NLI model before and after it was fine-tuned on our annotated dataset. Both models were evaluated in Q-Q mode, and we expect the trend to be similar for other unsupervised and supervised models.
215
+
216
+ As seen in Figure 4, the unsupervised model performs poorly at this task, achieving a maximum F1 score of less than 0.35, and the three metrics converge at a low value of around 0.27. In contrast, the situation is much better for the supervised model, where the best F1 score is more than 0.65, and all three metrics also converge at around 0.65. We hypothesize that the scales of cosine similarities from the unsupervised model are different for different sentences; it is therefore difficult to find a global threshold that works well for all sentences. In comparison, fine-tuning on our annotated dataset calibrates those scales and makes it easier to find a reasonable threshold.
217
+
218
+ § 6.1 HOW MUCH TRAINING DATA IS ACTUALLY NEEDED?
219
+
220
+ Our results show that it is possible to improve the F1 from around 0.35 to 0.65 by fine-tuning those models with our annotated dataset. An interesting question then arises as to what percentage of training data is needed to reach peak performance. To find out, we re-trained a vanilla BERT model on sub-samples of our training data. As seen in Figure 5, with just 10% of the training data, the model achieves a good F1 of 0.598 and NDCG@3 of 0.543. However, it only reaches its peak F1 when trained on 50% of the training data and its peak MAP and NDCG when trained on 60% of the training data. Those percentages translate to 2499 and 2999 examples respectively. This shows that BERT-based retrieval models do require a significant amount of supervision before we can deploy them in a real-world setting.
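+ A sketch of this learning-curve experiment is shown below; train_and_eval is a placeholder callable standing in for the fine-tuning and evaluation pipeline described earlier.
+
+ ```python
+ # Sketch: fine-tune on growing fractions of the training set and
+ # record the resulting test metrics for each fraction.
+ import random
+
+ def learning_curve(train_examples, train_and_eval,
+                    fractions=(0.1, 0.2, 0.3, 0.4, 0.5,
+                               0.6, 0.7, 0.8, 0.9, 1.0)):
+     """train_and_eval: callable that trains on a sample and returns
+     test metrics (e.g., MAP, NDCG, best F1)."""
+     results = []
+     for frac in fractions:
+         sample = random.sample(train_examples,
+                                int(frac * len(train_examples)))
+         results.append((frac, train_and_eval(sample)))
+     return results
+ ```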
221
+
222
+ < g r a p h i c s >
223
+
224
+ Figure 5: MAP/NDCG@1/NDCG@3/F1 against percentage of training data used.
225
+
226
+ § 7 CONCLUSION AND FUTURE WORK
227
+
228
+ This paper presents experimental results and analyses on the effectiveness of using recent pre-trained language models to build COVID-19 related QA systems. We evaluate BM25 and unsupervised BERT-based QA models on a COVID-19 question similarity dataset carefully annotated by public health experts from JHSPH and find that although these models perform decently, achieving NDCG@1 of around 0.42-0.52, they are not performing at the level necessary for a real-world environment. When further applying these QA models to an infobot environment, the unsupervised models achieve poor F1 scores of around 0.35, and it is difficult to find good threshold values that can balance precision and recall. To facilitate future research, we are releasing BERT-NLI ${}^{9}$ and TwitterBERT ${}^{10}$ , which were fine-tuned on (input question, candidate question) pairs from our dataset. We are also building a larger COVID-19 question similarity dataset with twenty candidates for every input question. We will publicly release our dataset in the future.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/mlmwkAdIeK/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,153 @@
1
+ # Exploration of Gender Differences in COVID-19 Discourse on Reddit
2
+
3
+ Jai Aggarwal Ella Rabinovich Suzanne Stevenson
4
+
5
+ Department of Computer Science, University of Toronto
6
+
7
+ \{jai, ella, suzanne\}@cs.toronto.edu
8
+
9
+ ## Abstract
10
+
11
+ Decades of research on differences in the language of men and women have established postulates about the nature of lexical, topical, and emotional preferences between the two genders, along with their sociological underpinnings. Using a novel dataset of male and female linguistic productions collected from a social media platform, we further confirm existing assumptions about gender-linked affective distinctions, and demonstrate that these distinctions are amplified in the emotionally-charged discourse related to COVID-19. Our analysis also reveals considerable differences in topical preferences between male and female authors in pandemic-related discussions.
12
+
13
+ ## 1 Introduction
14
+
15
+ Research on gender differences in language has a long history spanning psychology, gender studies, sociolinguistics, and, more recently, computational linguistics. A considerable body of linguistic studies highlights the differences between the language of men and women in topical, lexical, and syntactic perspectives (Lakoff, 1973; Labov, 1990); these differences have proven to be accurately detectable by automatic classification tools (Koppel et al., 2002; Schler et al., 2006; Schwartz et al., 2013). Here, we study the differences in male (M) and female (F) language in discussions of COVID-19${}^{1}$ on the Reddit${}^{2}$ discussion platform. Responses to the virus on social media have been heavily emotionally-charged, accompanied by feelings of anxiety, grief, and concern regarding long-lasting effects, such as economic ones. We explore how established emotional and topical cross-gender distinctions are carried over into pandemic-related discourse.
16
+
17
+ Multiple studies (e.g., Mulac et al. (2001); Mulac (2006); Newman et al. (2008)) have found distinctions in topical preferences in spontaneous productions of the two genders, showing that men were more likely to discuss money- and occupation-related topics, while women preferred discussion on family and social life. The authors attributed the differences to the assumption that male authors are more likely to discuss objects and impersonal topics, while female authors are more interested in psychological and social processes.
18
+
19
+ Gender-linked linguistic distinctions across emotional dimensions have been a subject of prolific research, both from the perspective of comprehension and production (Burriss et al., 2007; Hoffman, 2008; Thelwall et al., 2010), with findings suggesting that women are more likely than men to employ positive emotions, while men exhibit higher tendency to dominance, engagement, and control (although see Park et al. (2016) for an alternative finding). A common way to study emotions in psycholinguistics uses an approach that groups affective states into a few major dimensions. The Valence-Arousal-Dominance (VAD) affect representation has been widely used to conceptualize an individual's emotional spectrum, where valence refers to the degree of positiveness of the affect, arousal to the degree of its intensity, and dominance represents the level of control (Bradley and Lang, 1994). Computational studies applying this approach to emotion analysis have been relatively scarce due to the limited availability of a comprehensive resource of VAD rankings, with (to the best of our knowledge) no large-scale study on cross-gender language. The NRC-VAD Lexicon, a large dataset of VAD human rankings, recently released by Mohammad (2018), facilitates computational analysis of gender-linked differences across the three emotional dimensions at scale.
20
+
21
+ We use the VAD dataset of Mohammad (2018) to perform a comprehensive analysis of the similarities and differences between $M$ and $F$ language collected from the Reddit discussion platform, contrasting two sub-corpora: a collection of spontaneous utterances on a wide variety of topics (the 'baseline' dataset), and a collection of COVID-related productions by the same set of authors. We first corroborate existing assumptions on differences in emotional aspects of linguistic productions of men and women, and further show that these distinctions are amplified in the emotionally-intensive setting of COVID discussions. We next take a topic modeling approach to show detectable distinctions in the range of topics discussed by the two genders in COVID-related discourse, reinforcing (to some extent) assumptions on gender-related topical preferences, in emotionally-charged discourse. ${}^{3}$
22
+
23
+ ---
24
+
25
+ ${}^{1}$ We refer to COVID-19 by ’COVID’ hereafter.
26
+
27
+ ${}^{2}$ https://www.reddit.com/
28
+
29
+ ---
30
+
31
+ ## 2 Datasets
32
+
33
+ Our main dataset comprises a large collection of spontaneous, COVID-related English utterances by male and female authors from the Reddit discussion platform. As of May 2020, Reddit was ranked as the 19th most visited website in the world, with over 430M active users, 1.2M topical threads (subreddits), and over 70% of its user base coming from English-speaking countries. Subreddits often encourage their subscribers to specify a meta-property (called a 'flair', a textual tag), projecting a small glimpse about themselves (e.g., political association, country of origin, age), thereby customizing their presence within a subreddit.
34
+
35
+ We identified a set of subreddits, such as 'r/askmen', 'r/askwomen', where authors commonly self-report their gender ${}^{4}$ , and extracted a set of unique user-ids of authors who specified their gender as a flair. Using the extracted set of ids along with their associated gender, we collected COVID-related submissions and comments ${}^{5}$ by 10,421 male and 5,630 female users from the Reddit discussion platform, from February 1st through June 1st, resulting in over 70K male and 35K female posts spanning 7,583 topical threads. COVID-related posts were identified by matching a set of predefined keywords with a post's content: 'covid', 'covid-19', 'covid19', 'corona', 'coronavirus', 'the virus', 'pandemic'. The ample size of the corpus facilitates analysis of distinctions, along emotional and topical dimensions, between the two genders in their discourse on the pandemic. Figure 1 presents the weekly amount of COVID-related posts in our main corpus. As can be seen, the discourse increased in early-to-mid March (weeks 5-6), followed by a gradual decrease in intensity until nearly flattening out during the last four weeks of our analysis.
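+ As an illustration, this keyword matching step can be implemented as a simple case-insensitive filter over a post's text; the function below is a sketch rather than the exact preprocessing code used to build the corpus.
+
+ ```python
+ # Sketch: flag COVID-related posts using the keyword list given above.
+ import re
+
+ KEYWORDS = ["covid-19", "covid19", "covid", "coronavirus",
+             "corona", "the virus", "pandemic"]
+ PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)
+
+ def is_covid_related(post_text: str) -> bool:
+     return bool(PATTERN.search(post_text))
+ ```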
36
+
37
+ ![01963dab-34aa-73b8-bf6f-f84d504052c6_1_865_532_568_366_0.jpg](images/01963dab-34aa-73b8-bf6f-f84d504052c6_1_865_532_568_366_0.jpg)
38
+
39
+ Figure 1: Weekly amount of posts by gender.
40
+
41
+ Aiming at a comparative analysis between virus-related and 'neutral' (baseline) linguistic productions by men and women, we collected an additional dataset comprising randomly sampled 10K posts per week by the same set of authors, totalling 150K posts for each gender. We use the collected data for analysis of emotional differences as well as topical preferences in spontaneous productions by male and female authors on Reddit.
42
+
43
+ ## 3 Analysis of Emotional Dimensions
44
+
45
+ ### 3.1 Methods
46
+
47
+ A large dataset of VAD human rankings for 20,000 English words has been recently released by Mohammad (2018), where each word is assigned V, A, and D values, each in the range [0-1]. For example, the word 'fabulous' is ranked high on the valence dimension, while 'deceptive' is rated with a low score. In this study we aim at estimating the affective variables of posts (typically comprising multiple sentences), rather than individual words; we do so by inferring the affective rankings of sentences using those of individual words.
48
+
49
+ Word embedding spaces have been shown to capture variability in emotional dimensions closely corresponding to valence, arousal, and dominance (Hollis and Westbury, 2016), implying that such semantic representations carry over information useful for the task of emotional affect assessment. Therefore, we exploit affective dimension ratings assigned to individual words for supervision in extracting ratings of sentences. We use the model introduced by Reimers and Gurevych (2019) for producing word- and sentence-embeddings using Siamese BERT-Networks, ${}^{6}$ thereby obtaining semantic representations for the 20,000 words in Mohammad (2018) as well as for sentences posted by Reddit authors. This model performs significantly better than alternatives (such as averaging over a sentence's individual word embeddings and using BERT encoding (Reimers and Gurevych, 2019)) on the SentEval toolkit, a popular evaluation toolkit for sentence embeddings (Conneau and Kiela, 2018).
50
+
51
+ ---
52
+
53
+ ${}^{3}$ All data and code will be available at https://github.com/ellarabi/covid19-demography.
54
+
55
+ ${}^{4}$ Although gender can be viewed as a continuum rather than binary, we limit this study to the two most prominent gender markers in our corpus: male and female.
56
+
57
+ ${}^{5}$ For convenience, we refer to both initial submissions and comments to submissions as 'posts' hereafter.
58
+
59
+ ---
60
+
61
+ ![01963dab-34aa-73b8-bf6f-f84d504052c6_2_188_162_1321_329_0.jpg](images/01963dab-34aa-73b8-bf6f-f84d504052c6_2_188_162_1321_329_0.jpg)
62
+
63
+ Figure 2: Diachronic analysis of valence (left), arousal (middle), and dominance (right) scores for Reddit data.
64
+
65
+ <table><tr><td colspan="6">COVID-related posts</td><td colspan="5">baseline posts</td></tr><tr><td/><td>mean(M)</td><td>std(M)</td><td>mean(F)</td><td>std(F)</td><td>eff. size</td><td>mean(M)</td><td>std(M)</td><td>mean(F)</td><td>std(F)</td><td>eff. size</td></tr><tr><td>V</td><td>0.375</td><td>0.12</td><td>0.388</td><td>0.11</td><td>-0.120</td><td>0.453</td><td>0.14</td><td>0.459</td><td>0.14</td><td>-0.043</td></tr><tr><td>A</td><td>0.579</td><td>0.09</td><td>0.567</td><td>0.08</td><td>0.144</td><td>0.570</td><td>0.10</td><td>0.559</td><td>0.09</td><td>0.109</td></tr><tr><td>D</td><td>0.490</td><td>0.08</td><td>0.476</td><td>0.07</td><td>0.183</td><td>0.486</td><td>0.09</td><td>0.469</td><td>0.09</td><td>0.185</td></tr></table>
66
+
67
+ Table 1: Comparison of $\mathrm{M}$ and $\mathrm{F}$ means for each affective dimension. All differences are significant at $\mathrm{p} < {0.001}$ . The highest mean score in a row (for COVID and baseline data, separately) is boldfaced.
68
+
69
+ Next, we trained beta regression models ${}^{7}$ (Zeileis et al., 2010) to predict VAD scores (dependent variables) of words from their embeddings (independent predictors), yielding Pearson's correlations of 0.85, 0.78, and 0.81 on a 1000-word held-out set for V, A, and D, respectively. The trained models were then used to infer VAD values for each sentence within a post using the sentence embeddings. ${}^{8}$ A post's final score was computed as the average of the predicted scores for each of its constituent sentences. As an example, the post 'most countries handled the covid-19 situation appropriately' was assigned a low arousal score of 0.274, whereas a high arousal score of 0.882 was assigned to 'gonna shoot the virus to death!'.
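+ The sketch below illustrates this pipeline with a simplified stand-in: a ridge regression from sentence-transformer embeddings to a single affective dimension, with predictions clipped to [0, 1]. The actual models were beta regressions (fitted with the R betareg package), and the two lexicon entries shown carry made-up scores.
+
+ ```python
+ # Sketch: learn word-level affect from embeddings, then score sentences.
+ import numpy as np
+ from sklearn.linear_model import Ridge
+ from sentence_transformers import SentenceTransformer
+
+ encoder = SentenceTransformer("bert-large-nli-mean-tokens")
+
+ # word_list / valence_scores would come from the NRC-VAD lexicon;
+ # the values below are illustrative only.
+ word_list = ["fabulous", "deceptive"]
+ valence_scores = np.array([0.96, 0.09])
+
+ X = encoder.encode(word_list)
+ reg = Ridge(alpha=1.0).fit(X, valence_scores)
+
+ def sentence_valence(sentences):
+     preds = reg.predict(encoder.encode(sentences))
+     return np.clip(preds, 0.0, 1.0)
+
+ # A post's score is the average over its sentences.
+ post_score = sentence_valence(
+     ["most countries handled the covid-19 situation appropriately"]).mean()
+ ```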
70
+
71
+ ### 3.2 Results and Discussion
72
+
73
+ We compared V, A, and D scores of M posts to those of $\mathrm{F}$ posts, in each of the COVID and baseline datasets, using Wilcoxon rank-sum tests. All differences were significant, and Cohen’s $d$ (Cohen, 2013) was used to find the effect size of these differences; see Table 1. We also compared the scores for each gender in the COVID dataset to their respective scores in the baseline dataset (discussed below). We further show, in Figure 2, the diachronic trends in VAD for $\mathrm{M}$ and $\mathrm{F}$ authors in the two sub-corpora: COVID and baseline.
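+ For any pair of score samples, the test and effect size reported in Table 1 can be computed along the lines of the sketch below (scipy's rank-sum test plus the pooled-standard-deviation form of Cohen's d).
+
+ ```python
+ # Sketch: Wilcoxon rank-sum test and Cohen's d for two samples of scores.
+ import numpy as np
+ from scipy.stats import ranksums
+
+ def compare(male_scores, female_scores):
+     m, f = np.asarray(male_scores), np.asarray(female_scores)
+     stat, p_value = ranksums(m, f)
+     pooled_sd = np.sqrt(((len(m) - 1) * m.var(ddof=1) +
+                          (len(f) - 1) * f.var(ddof=1)) /
+                         (len(m) + len(f) - 2))
+     cohens_d = (m.mean() - f.mean()) / pooled_sd
+     return p_value, cohens_d
+ ```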
74
+
75
+ First, Table 1 shows considerable differences between $M$ and $F$ authors in the baseline dataset for all three emotional dimensions (albeit a tiny effect size in valence), in line with established assumptions in this field (Burriss et al., 2007; Hoffman, 2008; Thelwall et al., 2010): women tend to use more positive language, while men score higher on arousal and dominance. Interestingly, the cross-gender differences in $\mathrm{V}$ and $\mathrm{A}$ are amplified between baseline and COVID data, with an increase in effect size from 0.043 to 0.120 for $\mathrm{V}$ and 0.109 to 0.144 for A. Men seem to use more negative language when discussing COVID than women do, presumably indicating a grimmer outlook towards the pandemic outbreak. Virtually no difference was detected in $\mathrm{D}$ between $\mathrm{M}$ and $\mathrm{F}$ authors in baseline vs. virus-related discussions.
76
+
77
+ COVID-related data trends (Figure 2) show comparatively low scores for valence and high scores for arousal in the early weeks of our analysis (February to mid-March). We attribute these findings to an increased level of alarm and uncertainty about the pandemic in its early stages, which gradually attenuated as the population learned more about the virus. Intuitively, both genders exhibit lower V scores in COVID discussions compared to baseline: Cohen's $d$ effect size resulted in -0.617 for M and -0.554 for F authors. Smaller, yet considerable, differences between the two sub-corpora exist also for A and D (0.095 and 0.047 for M, as well as 0.083 and 0.085, for F authors). Collectively, these affective divergences from baseline typify emotionally-intensive COVID-related discourse.
78
+
79
+ ---
80
+
81
+ ${}^{6}$ We used the bert-large-nli-mean-tokens model, obtaining the highest scores on the STS benchmark.
82
+
83
+ ${}^{7}$ An alternative to linear regression in cases where the dependent variable is a proportion (in 0-1 range).
84
+
85
+ ${}^{8}$ We excluded sentences shorter than 5 tokens.
86
+
87
+ ---
88
+
89
+ <table><tr><td colspan="4">topics with highest coherence scores in $\mathrm{M}$ posts</td><td colspan="4">topics with highest coherence scores in $\mathrm{F}$ posts</td></tr><tr><td>M-1</td><td>M-2</td><td>M-3</td><td>M-4</td><td>F-1</td><td>F-2</td><td>F-3</td><td>F-4</td></tr><tr><td>money</td><td>week</td><td>case</td><td>fuck</td><td>virus</td><td>feel</td><td>mask</td><td>week</td></tr><tr><td>economy</td><td>health</td><td>rate</td><td>mask</td><td>make</td><td>thing</td><td>hand</td><td>test</td></tr><tr><td>business</td><td>close</td><td>spread</td><td>claim</td><td>good</td><td>good</td><td>wear</td><td>hospital</td></tr><tr><td>market</td><td>food</td><td>hospital</td><td>news</td><td>thing</td><td>friend</td><td>woman</td><td>sick</td></tr><tr><td>crisis</td><td>open</td><td>week</td><td>post</td><td>vaccine</td><td>talk</td><td>food</td><td>patient</td></tr><tr><td>make</td><td>travel</td><td>month</td><td>comment</td><td>point</td><td>make</td><td>face</td><td>symptom</td></tr><tr><td>economic</td><td>supply</td><td>testing</td><td>call</td><td>happen</td><td>love</td><td>call</td><td>doctor</td></tr><tr><td>pandemic</td><td>store</td><td>social</td><td>article</td><td>human</td><td>parent</td><td>store</td><td>positive</td></tr><tr><td>lose</td><td>stay</td><td>lockdown</td><td>chinese</td><td>body</td><td>anxiety</td><td>close</td><td>start</td></tr><tr><td>vote</td><td>plan</td><td>measure</td><td>medium</td><td>study</td><td>read</td><td>stay</td><td>care</td></tr></table>
90
+
91
+ Table 2: Most coherent topics identified in $\mathrm{M}$ and $\mathrm{F}$ COVID-related posts.
92
+
93
+ ## 4 Analysis of Topical Distinctions
94
+
95
+ We next explored detailed topical similarities and differences in the productions by the two genders. Specifically, we compared two topic models: one created using $\mathrm{M}$ posts, and another using $\mathrm{F}$ posts, in the COVID dataset. ${}^{9}$ We identified the prevalent discussion topics in these two sub-corpora by using a publicly-available topic modeling tool (MALLET, McCallum, 2002). Each topic is represented by a probability distribution over the entire vocabulary, where terms more characteristic of a topic are assigned a higher probability. A common way to evaluate a topic learned from a set of documents is by computing its coherence score - a measure reflecting mutual semantic similarity of the topic's terms, and, therefore, its overall quality (Newman et al., 2010). The quality of a learned model is then estimated by averaging the scores of its individual topics - the model coherence score. We selected the optimal number of topics for each set of posts by maximizing its model coherence score, resulting in 8 topics for male and 7 topics for female posts (coherence scores of 0.48 and 0.46).
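+ The selection procedure can be sketched as follows, using gensim's LDA and coherence implementations as a stand-in for the MALLET toolkit used here; the candidate range of topic counts is an illustrative assumption.
+
+ ```python
+ # Sketch: pick the number of topics that maximizes the model coherence score.
+ from gensim.corpora import Dictionary
+ from gensim.models import LdaModel, CoherenceModel
+
+ def best_topic_count(tokenized_posts, candidate_counts=range(5, 13)):
+     dictionary = Dictionary(tokenized_posts)
+     corpus = [dictionary.doc2bow(doc) for doc in tokenized_posts]
+     scored = []
+     for k in candidate_counts:
+         lda = LdaModel(corpus=corpus, id2word=dictionary,
+                        num_topics=k, random_state=0, passes=5)
+         coherence = CoherenceModel(model=lda, texts=tokenized_posts,
+                                    dictionary=dictionary,
+                                    coherence="c_v").get_coherence()
+         scored.append((coherence, k))
+     return max(scored)   # (model coherence score, number of topics)
+ ```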
96
+
97
+ We examined the similarities and the differences across the two topical distributions by extracting the top-4 topics (those with the highest individual coherence scores) in each of the M and F models. Table 2 presents the 10 words with the highest likelihood for these topics in each model (on the left and right sides, respectively); topics within each are ordered by decreasing coherence score (left to right). We can see that both genders are occupied with health-related issues (topics M-3, F-1, F-4), and the implications for consumption habits (topics M-2, F-3). However, clear distinctions in topical preferences are also revealed by our analysis: men discuss economy/market and media-related topics (M-1, M-4), while women focus more on family and social aspects (F-2). Collectively, these results show that the established postulates regarding gender-linked topical preferences are evident in COVID-related discourse on Reddit.
98
+
99
+ ## 5 Conclusions
100
+
101
+ A large body of studies spanning a range of disciplines has suggested (and corroborated) assumptions regarding the differences in linguistic productions of male and female speakers. Using a large dataset of COVID-related utterances by men and women on the Reddit discussion platform, we show clear distinctions along emotional dimensions between the two genders, and demonstrate that these differences are amplified in emotionally-intensive discourse on the pandemic. Our analysis of topic modeling further highlights distinctions in topical preferences between men and women.
102
+
103
+ ---
104
+
105
+ ${}^{9}$ Prior to topic modeling we applied a preprocessing step including lemmatization of a post's text and filtering out stop-words (the 300 most frequent words in the corpus).
106
+
107
+ ---
108
+
109
+ ## References
110
+
111
+ Margaret M Bradley and Peter J Lang. 1994. Measuring emotion: the self-assessment manikin and the semantic differential. Journal of behavior therapy and experimental psychiatry, 25(1):49-59.
114
+
115
+ Louisa Burriss, DA Powell, and Jeffrey White. 2007. Psychophysiological and subjective indices of emotion as a function of age and gender. Cognition and emotion, 21(1):182-210.
116
+
117
+ Jacob Cohen. 2013. Statistical power analysis for the behavioral sciences. Academic press.
118
+
119
+ Alexis Conneau and Douwe Kiela. 2018. SentEval: An Evaluation Toolkit for Universal Sentence Representations. LREC 2018 - 11th International Conference on Language Resources and Evaluation, pages 1699-1704.
120
+
121
+ Martin L Hoffman. 2008. Empathy and prosocial behavior. Handbook of emotions, 3:440-455.
122
+
123
+ Geoff Hollis and Chris Westbury. 2016. The principals of meaning: Extracting semantic dimensions from co-occurrence models of semantics. Psychonomic Bulletin and Review, 23(6):1744-1756.
124
+
125
+ Moshe Koppel, Shlomo Argamon, and Anat Rachel Shimoni. 2002. Automatically categorizing written texts by author gender. Literary and linguistic computing, 17(4):401-412.
126
+
127
+ William Labov. 1990. The intersection of sex and social class in the course of linguistic change. Language variation and change, 2(2):205-254.
128
+
129
+ Robin Lakoff. 1973. Language and woman's place. Language in society, 2(1):45-79.
130
+
131
+ Andrew Kachites McCallum. 2002. MALLET: A machine learning for language toolkit.
132
+
133
+ Saif Mohammad. 2018. Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 english words. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 174-184.
134
+
135
+ Anthony Mulac. 2006. The gender-linked language effect: Do language differences really make a difference? Lawrence Erlbaum Associates Publishers.
136
+
137
+ Anthony Mulac, James J Bradac, and Pamela Gibbons. 2001. Empirical support for the gender-as-culture hypothesis: An intercultural analysis of male/female language differences. Human Communication Research, 27(1):121-152.
138
+
139
+ David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin. 2010. Automatic evaluation of topic coherence. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 100-108. Association for Computational Linguistics.
140
+
141
+ Matthew L Newman, Carla J Groom, Lori D Handelman, and James W Pennebaker. 2008. Gender differences in language use: An analysis of 14,000 text samples. Discourse Processes, 45(3):211-236.
142
+
143
+ Gregory Park, David Bryce Yaden, H Andrew Schwartz, Margaret L Kern, Johannes C Eichstaedt, Michael Kosinski, David Stillwell, Lyle H Ungar, and Martin EP Seligman. 2016. Women are warmer but no less assertive than men: Gender and language on facebook. PloS one, 11(5).
144
+
145
+ Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference, pages 3982-3992.
146
+
147
+ Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W Pennebaker. 2006. Effects of age and gender on blogging. In AAAI spring symposium: Computational approaches to analyzing weblogs, volume 6, pages 199-205.
148
+
149
+ H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Lukasz Dziurzynski, Stephanie M Ramones, Megha Agrawal, Achal Shah, Michal Kosinski, David Stillwell, Martin EP Seligman, et al. 2013. Personality, gender, and age in the language of social media: The open-vocabulary approach. PloS one, 8(9):e73791.
150
+
151
+ Mike Thelwall, David Wilkinson, and Sukhvinder Uppal. 2010. Data mining emotion in social network communication: Gender differences in myspace. Journal of the American Society for Information Science and Technology, 61(1):190-199.
152
+
153
+ Achim Zeileis, Francisco Cribari-Neto, Bettina Grün, and I Kosmidis. 2010. Beta regression in r. Journal of statistical software, 34(2):1-24.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/mlmwkAdIeK/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,144 @@
1
+ § EXPLORATION OF GENDER DIFFERENCES IN COVID-19 DISCOURSE ON REDDIT
2
+
3
+ Jai Aggarwal Ella Rabinovich Suzanne Stevenson
4
+
5
+ Department of Computer Science, University of Toronto
6
+
7
+ {jai, ella, suzanne}@cs.toronto.edu
8
+
9
+ § ABSTRACT
10
+
11
+ Decades of research on differences in the language of men and women have established postulates about the nature of lexical, topical, and emotional preferences between the two genders, along with their sociological underpinnings. Using a novel dataset of male and female linguistic productions collected from a social media platform, we further confirm existing assumptions about gender-linked affective distinctions, and demonstrate that these distinctions are amplified in the emotionally-charged discourse related to COVID-19. Our analysis also reveals considerable differences in topical preferences between male and female authors in pandemic-related discussions.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Research on gender differences in language has a long history spanning psychology, gender studies, sociolinguistics, and, more recently, computational linguistics. A considerable body of linguistic studies highlights the differences between the language of men and women in topical, lexical, and syntactic perspectives (Lakoff, 1973; Labov, 1990); these differences have proven to be accurately detectable by automatic classification tools (Koppel et al., 2002; Schler et al., 2006; Schwartz et al., 2013). Here, we study the differences in male (M) and female (F) language in discussions of COVID-19${}^{1}$ on the Reddit${}^{2}$ discussion platform. Responses to the virus on social media have been heavily emotionally-charged, accompanied by feelings of anxiety, grief, and concern regarding long-lasting effects, such as economic ones. We explore how established emotional and topical cross-gender distinctions are carried over into pandemic-related discourse.
16
+
17
+ Multiple studies (e.g., Mulac et al. (2001); Mulac (2006); Newman et al. (2008)) have found distinctions in topical preferences in spontaneous productions of the two genders, showing that men were more likely to discuss money- and occupation-related topics, while women preferred discussion on family and social life. The authors attributed the differences to the assumption that male authors are more likely to discuss objects and impersonal topics, while female authors are more interested in psychological and social processes.
18
+
19
+ Gender-linked linguistic distinctions across emotional dimensions have been a subject of prolific research, both from the perspective of comprehension and production (Burriss et al., 2007; Hoffman, 2008; Thelwall et al., 2010), with findings suggesting that women are more likely than men to employ positive emotions, while men exhibit higher tendency to dominance, engagement, and control (although see Park et al. (2016) for an alternative finding). A common way to study emotions in psycholinguistics uses an approach that groups affective states into a few major dimensions. The Valence-Arousal-Dominance (VAD) affect representation has been widely used to conceptualize an individual's emotional spectrum, where valence refers to the degree of positiveness of the affect, arousal to the degree of its intensity, and dominance represents the level of control (Bradley and Lang, 1994). Computational studies applying this approach to emotion analysis have been relatively scarce due to the limited availability of a comprehensive resource of VAD rankings, with (to the best of our knowledge) no large-scale study on cross-gender language. The NRC-VAD Lexicon, a large dataset of VAD human rankings, recently released by Mohammad (2018), facilitates computational analysis of gender-linked differences across the three emotional dimensions at scale.
20
+
21
+ We use the VAD dataset of Mohammad (2018) to perform a comprehensive analysis of the similarities and differences between $M$ and $F$ language collected from the Reddit discussion platform, contrasting two sub-corpora: a collection of spontaneous utterances on a wide variety of topics (the 'baseline' dataset), and a collection of COVID-related productions by the same set of authors. We first corroborate existing assumptions on differences in emotional aspects of linguistic productions of men and women, and further show that these distinctions are amplified in the emotionally-intensive setting of COVID discussions. We next take a topic modeling approach to show detectable distinctions in the range of topics discussed by the two genders in COVID-related discourse, reinforcing (to some extent) assumptions on gender-related topical preferences, in emotionally-charged discourse. ${}^{3}$
22
+
23
+ ${}^{1}$ We refer to COVID-19 by ’COVID’ hereafter.
24
+
25
+ ${}^{2}$ https://www.reddit.com/
26
+
27
+ § 2 DATASETS
28
+
29
+ Our main dataset comprises a large collection of spontaneous, COVID-related English utterances by male and female authors from the Reddit discussion platform. As of May 2020, Reddit was ranked as the 19th most visited website in the world, with over 430M active users, 1.2M topical threads (subreddits), and over 70% of its user base coming from English-speaking countries. Subreddits often encourage their subscribers to specify a meta-property (called a 'flair', a textual tag), projecting a small glimpse about themselves (e.g., political association, country of origin, age), thereby customizing their presence within a subreddit.
30
+
31
+ We identified a set of subreddits, such as 'r/askmen', 'r/askwomen', where authors commonly self-report their gender ${}^{4}$ , and extracted a set of unique user-ids of authors who specified their gender as a flair. Using the extracted set of ids along with their associated gender, we collected COVID-related submissions and comments ${}^{5}$ by 10,421 male and 5,630 female users from the Reddit discussion platform, from February 1st through June 1st, resulting in over 70K male and 35K female posts spanning 7,583 topical threads. COVID-related posts were identified by matching a set of predefined keywords with a post's content: 'covid', 'covid-19', 'covid19', 'corona', 'coronavirus', 'the virus', 'pandemic'. The ample size of the corpus facilitates analysis of distinctions, along emotional and topical dimensions, between the two genders in their discourse on the pandemic. Figure 1 presents the weekly amount of COVID-related posts in our main corpus. As can be seen, the discourse increased in early-to-mid March (weeks 5-6), followed by a gradual decrease in intensity until nearly flattening out during the last four weeks of our analysis.
32
+
33
+ < g r a p h i c s >
34
+
35
+ Figure 1: Weekly amount of posts by gender.
36
+
37
+ Aiming at a comparative analysis between virus-related and 'neutral' (baseline) linguistic productions by men and women, we collected an additional dataset comprising randomly sampled 10K posts per week by the same set of authors, totalling 150K posts for each gender. We use the collected data for analysis of emotional differences as well as topical preferences in spontaneous productions by male and female authors on Reddit.
38
+
39
+ § 3 ANALYSIS OF EMOTIONAL DIMENSIONS
40
+
41
+ § 3.1 METHODS
42
+
43
+ A large dataset of VAD human rankings for 20,000 English words has been recently released by Mohammad (2018), where each word is assigned V, A, and D values, each in the range [0-1]. For example, the word 'fabulous' is ranked high on the valence dimension, while 'deceptive' is rated with a low score. In this study we aim at estimating the affective variables of posts (typically comprising multiple sentences), rather than individual words; we do so by inferring the affective rankings of sentences using those of individual words.
44
+
45
+ Word embedding spaces have been shown to capture variability in emotional dimensions closely corresponding to valence, arousal, and dominance (Hollis and Westbury, 2016), implying that such semantic representations carry over information useful for the task of emotional affect assessment. Therefore, we exploit affective dimension ratings assigned to individual words for supervision in extracting ratings of sentences. We use the model introduced by Reimers and Gurevych (2019) for producing word- and sentence-embeddings using Siamese BERT-Networks, ${}^{6}$ thereby obtaining semantic representations for the 20,000 words in Mohammad (2018) as well as for sentences posted by Reddit authors. This model performs significantly better than alternatives (such as averaging over a sentence's individual word embeddings and using BERT encoding (Reimers and Gurevych, 2019)) on the SentEval toolkit, a popular evaluation toolkit for sentence embeddings (Conneau and Kiela, 2018).
46
+
47
+ ${}^{3}$ All data and code will be available at https://github.com/ellarabi/covid19-demography.
48
+
49
+ ${}^{4}$ Although gender can be viewed as a continuum rather than binary, we limit this study to the two most prominent gender markers in our corpus: male and female.
50
+
51
+ ${}^{5}$ For convenience, we refer to both initial submissions and comments to submissions as 'posts' hereafter.
52
+
53
+ < g r a p h i c s >
54
+
55
+ Figure 2: Diachronic analysis of valence (left), arousal (middle), and dominance (right) scores for Reddit data.
56
+
57
+ \begin{tabular}{l|ccccc|ccccc}
+  & \multicolumn{5}{c|}{COVID-related posts} & \multicolumn{5}{c}{baseline posts} \\
+  & mean(M) & std(M) & mean(F) & std(F) & eff. size & mean(M) & std(M) & mean(F) & std(F) & eff. size \\
+ \hline
+ V & 0.375 & 0.12 & 0.388 & 0.11 & -0.120 & 0.453 & 0.14 & 0.459 & 0.14 & -0.043 \\
+ A & 0.579 & 0.09 & 0.567 & 0.08 & 0.144 & 0.570 & 0.10 & 0.559 & 0.09 & 0.109 \\
+ D & 0.490 & 0.08 & 0.476 & 0.07 & 0.183 & 0.486 & 0.09 & 0.469 & 0.09 & 0.185 \\
+ \end{tabular}
74
+
75
+ Table 1: Comparison of $\mathrm{M}$ and $\mathrm{F}$ means for each affective dimension. All differences are significant at $\mathrm{p} < {0.001}$ . The highest mean score in a row (for COVID and baseline data, separately) is boldfaced.
76
+
77
+ Next, we trained beta regression models ${}^{7}$ (Zeileis et al., 2010) to predict VAD scores (dependent variables) of words from their embeddings (independent predictors), yielding Pearson's correlations of 0.85, 0.78, and 0.81 on a 1000-word held-out set for V, A, and D, respectively. The trained models were then used to infer VAD values for each sentence within a post using the sentence embeddings. ${}^{8}$ A post's final score was computed as the average of the predicted scores for each of its constituent sentences. As an example, the post 'most countries handled the covid-19 situation appropriately' was assigned a low arousal score of 0.274, whereas a high arousal score of 0.882 was assigned to 'gonna shoot the virus to death!'.
78
+
79
+ § 3.2 RESULTS AND DISCUSSION
80
+
81
+ We compared V, A, and D scores of M posts to those of $\mathrm{F}$ posts, in each of the COVID and baseline datasets, using Wilcoxon rank-sum tests. All differences were significant, and Cohen’s $d$ (Cohen, 2013) was used to find the effect size of these differences; see Table 1. We also compared the scores for each gender in the COVID dataset to their respective scores in the baseline dataset (discussed below). We further show, in Figure 2, the diachronic trends in VAD for $\mathrm{M}$ and $\mathrm{F}$ authors in the two sub-corpora: COVID and baseline.
82
+
83
+ First, Table 1 shows considerable differences between $M$ and $F$ authors in the baseline dataset for all three emotional dimensions (albeit a tiny effect size in valence), in line with established assumptions in this field (Burriss et al., 2007; Hoffman, 2008; Thelwall et al., 2010): women tend to use more positive language, while men score higher on arousal and dominance. Interestingly, the cross-gender differences in $\mathrm{V}$ and $\mathrm{A}$ are amplified between baseline and COVID data, with an increase in effect size from 0.043 to 0.120 for $\mathrm{V}$ and 0.109 to 0.144 for A. Men seem to use more negative language when discussing COVID than women do, presumably indicating a grimmer outlook towards the pandemic outbreak. Virtually no difference was detected in $\mathrm{D}$ between $\mathrm{M}$ and $\mathrm{F}$ authors in baseline vs. virus-related discussions.
84
+
85
+ COVID-related data trends (Figure 2) show comparatively low scores for valence and high scores for arousal in the early weeks of our analysis (February to mid-March). We attribute these findings to an increased level of alarm and uncertainty about the pandemic in its early stages, which gradually attenuated as the population learned more about the virus. Intuitively, both genders exhibit lower V scores in COVID discussions compared to baseline: Cohen's $d$ effect size resulted in -0.617 for M and -0.554 for F authors. Smaller, yet considerable, differences between the two sub-corpora exist also for A and D (0.095 and 0.047 for M, as well as 0.083 and 0.085, for F authors). Collectively, these affective divergences from baseline typify emotionally-intensive COVID-related discourse.
86
+
87
+ ${}^{6}$ We used the bert-large-nli-mean-tokens model, obtaining the highest scores on the STS benchmark.
88
+
89
+ ${}^{7}$ An alternative to linear regression in cases where the dependent variable is a proportion (in 0-1 range).
90
+
91
+ ${}^{8}$ We excluded sentences shorter than 5 tokens.
92
+
93
+ \begin{tabular}{cccc|cccc}
+ \multicolumn{4}{c|}{topics with highest coherence scores in M posts} & \multicolumn{4}{c}{topics with highest coherence scores in F posts} \\
+ M-1 & M-2 & M-3 & M-4 & F-1 & F-2 & F-3 & F-4 \\
+ \hline
+ money & week & case & fuck & virus & feel & mask & week \\
+ economy & health & rate & mask & make & thing & hand & test \\
+ business & close & spread & claim & good & good & wear & hospital \\
+ market & food & hospital & news & thing & friend & woman & sick \\
+ crisis & open & week & post & vaccine & talk & food & patient \\
+ make & travel & month & comment & point & make & face & symptom \\
+ economic & supply & testing & call & happen & love & call & doctor \\
+ pandemic & store & social & article & human & parent & store & positive \\
+ lose & stay & lockdown & chinese & body & anxiety & close & start \\
+ vote & plan & measure & medium & study & read & stay & care \\
+ \end{tabular}
131
+
132
+ Table 2: Most coherent topics identified in $\mathrm{M}$ and $\mathrm{F}$ COVID-related posts.
133
+
134
+ § 4 ANALYSIS OF TOPICAL DISTINCTIONS
135
+
136
+ We next explored detailed topical similarities and differences in the productions by the two genders. Specifically, we compared two topic models: one created using $\mathrm{M}$ posts, and another using $\mathrm{F}$ posts, in the COVID dataset. ${}^{9}$ We identified the prevalent discussion topics in these two sub-corpora by using a publicly-available topic modeling tool (MALLET, McCallum, 2002). Each topic is represented by a probability distribution over the entire vocabulary, where terms more characteristic of a topic are assigned a higher probability. A common way to evaluate a topic learned from a set of documents is by computing its coherence score - a measure reflecting mutual semantic similarity of the topic's terms, and, therefore, its overall quality (Newman et al., 2010). The quality of a learned model is then estimated by averaging the scores of its individual topics - the model coherence score. We selected the optimal number of topics for each set of posts by maximizing its model coherence score, resulting in 8 topics for male and 7 topics for female posts (coherence scores of 0.48 and 0.46).
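+ To make the selection procedure concrete, the sketch below picks the number of topics by maximizing model coherence. The authors used MALLET; gensim's LdaModel and CoherenceModel serve here only as a stand-in, and the candidate topic counts and toy documents are illustrative.
+
+ ```python
+ # Illustrative stand-in for the MALLET pipeline: choose the number of topics
+ # by maximizing the average (c_v) coherence of the learned model.
+ from gensim.corpora import Dictionary
+ from gensim.models import CoherenceModel, LdaModel
+
+ def best_topic_count(texts, candidate_ks):
+     """texts: list of token lists (already lemmatized, stop-words removed)."""
+     dictionary = Dictionary(texts)
+     corpus = [dictionary.doc2bow(doc) for doc in texts]
+     scored = []
+     for k in candidate_ks:
+         lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=0)
+         coherence = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
+                                    coherence="c_v").get_coherence()
+         scored.append((coherence, k, lda))
+     return max(scored)  # (model coherence, topic count, fitted model)
+
+ # Toy documents; the real input would be the preprocessed M (or F) posts.
+ docs = [["mask", "hand", "wear"], ["economy", "market", "business"],
+         ["vaccine", "virus", "study"]] * 50
+ score, k, model = best_topic_count(docs, candidate_ks=[2, 3, 4])
+ print(k, round(score, 3))
+ ```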
137
+
138
+ We examined the similarities and the differences across the two topical distributions by extracting the top-4 topics - those with the highest individual coherence scores - in each of the M and F models. Table 2 presents the 10 words with the highest likelihood for these topics in each model (on the left and right sides, respectively); topics within each model are ordered by decreasing coherence score (left to right). We can see that both genders are occupied with health-related issues (topics M-3, F-1, F-4) and the implications for consumption habits (topics M-2, F-3). However, clear distinctions in topical preferences are also revealed by our analysis: men discuss economy/market and media-related topics (M-1, M-4), while women focus more on family and social aspects (F-2). Collectively, these results show that the established postulates regarding gender-linked topical preferences are evident in COVID-related discourse on Reddit.
139
+
140
+ § 5 CONCLUSIONS
141
+
142
+ A large body of studies spanning a range of disciplines has suggested (and corroborated) assumptions regarding the differences in the linguistic productions of male and female speakers. Using a large dataset of COVID-related utterances by men and women on the Reddit discussion platform, we show clear distinctions along emotional dimensions between the two genders, and demonstrate that these differences are amplified in emotionally-intensive discourse on the pandemic. Our topic modeling analysis further highlights distinctions in topical preferences between men and women.
143
+
144
+ ${}^{9}$ Prior to topic modeling we applied a preprocessing step including lemmatization of a post's text and filtering out stop-words (the 300 most frequent words in the corpus).
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/qJYo-Bbxu07/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,236 @@
1
+ # The Twitter Social Mobility Index: Measuring Social Distancing Practices from Geolocated Tweets
2
+
3
+ Paiheng Xu, Mark Dredze
4
+
5
+ Malone Center for Engineering in Healthcare Center for Language and Speech Processing Department of Computer Science Johns Hopkins University
6
+
7
+ paiheng, mdredze@jhu.edu
8
+
9
+ David A. Broniatowski
10
+
11
+ Department of Engineering Management and Systems Engineering Institute for Data, Democracy, and Politics The George Washington University
12
+
13
+ broniatowski@gwu.edu
14
+
15
+ ## Abstract
16
+
17
+ Social distancing is an important component of the response to the novel Coronavirus (COVID-19) pandemic. Minimizing social interactions and travel reduces the rate at which the infection spreads, and "flattens the curve" such that the medical system can better treat infected individuals. However, it remains unclear how the public will respond to these policies. This paper presents the Twitter Social Mobility Index, a measure of social distancing and travel derived from Twitter data. We use public geolocated Twitter data to measure how much a user travels in a given week. We find a large reduction in travel in the United States after the implementation of social distancing policies, with larger reductions in states that were early adopters and smaller changes in states without policies. Our findings are presented on http://socialmobility.covid19dataresources.org and we will continue to update our analysis during the pandemic.
18
+
19
+ ## 1 Introduction
20
+
21
+ The outbreak of the SARS-CoV-2 virus, a Coronavirus that causes the disease COVID-19, has caused a pandemic on a scale unseen in a generation. Without an available vaccine to reduce transmission of the virus, public health and elected officials have called on the public to practice social distancing. Social distancing is a set of practices in which individuals maintain a physical distance so as to reduce the number of physical contacts they encounter (Maharaj and Kleczkowski, 2012; Kelso et al., 2009). These practices include maintaining a distance of at least six feet and avoiding large gatherings (Glass et al., 2006). At the time of this writing, in the United States nearly every state has implemented state-wide "stay-at-home" orders to enforce social distancing practices (Zeleny, 2020).
22
+
23
+ While an important tool in the fight against COVID-19, the implementation of social distancing by the general public can vary widely. While a state governor may issue an order for the practice, individuals in different states may respond in different ways. Understanding actual reductions in travel and social contacts is critical to measuring the effectiveness of the policy. These policies may remain in effect for an extended period of time. Thus, the public may begin to relax their practices, making additional policies necessary. Additionally, epidemiologists already model the impact of social distancing policies on the course of an outbreak (Prem et al., 2020; Fenichel et al., 2011; Caley et al., 2008). These models may be more effective when incorporating actual measures of social distancing, rather than assuming official policies are implemented in practice.
24
+
25
+ It can be challenging to obtain data on the efficacy of social distancing practices, especially during an ongoing pandemic. A recent Gallup poll surveyed Americans to find that many adults are taking precautions to keep their distance from others (Saad, 2020). However, while polling can provide insights, it cannot provide a solution. Polling is relatively expensive, making it a poor choice for ongoing population surveillance practices and providing data on specific geographic locales, i.e. US States and major cities (Dredze et al., 2016a). Additionally, polling around public health issues suffers from response bias, as individuals may overstate their compliance with established public health recommendations (Adams et al., 1999).
26
+
27
+ Over the past decade, analyses of social media and web data have been widely adopted to support public health objectives (Paul and Dredze, 2017). In this vein, several efforts have emerged over the past few weeks to track social distancing practices using these data sources. Google has released "COVID-19 Community Mobility Reports" which use Google data to "chart movement trends over time by geography, across different categories of places such as retail and recreation, groceries and pharmacies, parks, transit stations, workplaces, and residential" (Google, 2020). The Unacast "Social Distancing Scoreboard" uses data collected from 127 million monthly active users to measure the implementation of social distancing practices (Unacast, 2020). Researchers at the Institute for Disease Modeling have used data from Facebook's "Data for Good" program to model the decline in mobility in the Greater Seattle area and its effect on the spread of COVID-19 (Burstein et al., 2020). Using cell phone data, the New York Times completed an analysis that showed that stay-at-home orders dramatically reduced travel, but that states that have waited to enact such orders have continued to travel widely (Glanz et al., 2020). These efforts provide new and important opportunities to study social distancing in real-time.
28
+
29
+ We present the Twitter Social Mobility Index, a measure of social distancing and travel patterns derived from public Twitter data. We use public geolocated Twitter data to measure how much a user travels in a given week. We compute a metric based on the standard deviation of a user's geolocated tweets each week, and aggregate these data over an entire population to produce a metric for the United States as a whole, for individual states and for some US cities. We find that, taking the US as a whole, there has been a dramatic drop in travel in recent weeks, with the period between March 16 and April 27, 2020 showing the lowest amount of travel since January 1, 2019, the start of our dataset. Additionally, we find that travel reductions are not uniform across the United States, but vary from state to state. However, there is no clear correlation between the social mobility index and confirmed COVID-19 cases at the state level. A key advantage of our approach is that, unlike other travel and social distancing analyses referenced above, we rely on entirely public data, enabling others to replicate our findings and explore different aspects of these data. Additionally, since Twitter contains user-generated content in addition to location information, future analyses can correlate attitudes, beliefs, and behaviors with changes in social mobility.
30
+
31
+ Our findings are presented on http://socialmobility.covid19dataresources.org and we will continue to update our analysis during the pandemic.
32
+
33
+ ## 2 Data
34
+
35
+ Twitter offers several ways in which a user can indicate their location. If a user is tweeting from a GPS enabled device, they can attach their exact coordinate to that tweet. Twitter may then display to the user, and provide in their API, the specific place that corresponds to these coordinates. Alternatively, a user can explicitly select a location, which can be a point of interest (coffee shop), a neighborhood, a city, state, or country. If the tweet is public, this geolocation information is supplied with the tweet.
36
+
37
+ We used the Twitter streaming ${\mathrm{API}}^{1}$ to download tweets based on location. We used a bounding box that covered the entire United States, including territories. We used data from this collection starting on January 1, 2019 and ending on April 27, 2020. In total, this included 3,768,959 Twitter users and 469,669,925 tweets in the United States.
38
+
39
+ ## 3 Location Data
40
+
41
+ We process the two types of geolocation information described in the previous section.
42
+
43
+ **Coordinates** The exact coordinates (latitude/longitude) provided by the user (the "coordinates" field in the Twitter JSON object). About 8% of our data included "coordinates".
44
+
45
+ **Place** The "place" field in the Twitter JSON object indicates a known location in which the tweet was authored. A place can be a point of interest (a specific hotel), a neighborhood ("Downtown Jacksonville"), a city ("Kokomo, IN"), a state ("Arizona") or a country ("United States"). The place object contains a unique ID, a bounding box, the country and a name. More information about the location is available from the Twitter GEO API. A place is attached to a tweet under either of two conditions: first, when Twitter identifies the coordinates provided by the user as falling within a known place; second, when the user manually selects the place while authoring the tweet.
46
+
47
+ Since coordinates give a more precise location, we use them instead of place when available. If we only have a place, we assume that the user is in the center of the place, as given by the place's bounding box.
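+ A minimal sketch of this resolution rule is shown below. It assumes the Twitter v1.1 tweet JSON layout (a GeoJSON "coordinates" point in [longitude, latitude] order and a place "bounding_box" polygon); the helper name and example payload are illustrative.
+
+ ```python
+ # Prefer exact coordinates; otherwise fall back to the center of the place's bounding box.
+ def tweet_lat_lon(tweet):
+     coords = tweet.get("coordinates")
+     if coords:  # GeoJSON point stored as [longitude, latitude]
+         lon, lat = coords["coordinates"]
+         return lat, lon
+     place = tweet.get("place")
+     if place and place.get("bounding_box"):
+         ring = place["bounding_box"]["coordinates"][0]  # list of [lon, lat] corners
+         lon = sum(pt[0] for pt in ring) / len(ring)
+         lat = sum(pt[1] for pt in ring) / len(ring)
+         return lat, lon
+     return None  # no usable geolocation
+
+ example = {"coordinates": None,
+            "place": {"bounding_box": {"coordinates": [[[-76.7, 39.2], [-76.5, 39.2],
+                                                        [-76.5, 39.4], [-76.7, 39.4]]]}}}
+ print(tweet_lat_lon(example))  # approximately (39.3, -76.6)
+ ```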
48
+
49
+ For points of interest and neighborhoods, Twitter only provides the country in the associated metadata. While in some cases the city can be parsed from the name, and the state inferred, we opted to exclude these places from our analysis for states. The full location details can be obtained from querying the Twitter API, but the magnitude of data in our analysis made this too time consuming. This excluded about ${1.8}\%$ of our data.
50
+
51
+ ---
52
+
53
+ ${}^{1}$ https://developer.twitter.com/en/docs/tweets/filter-realtime/overview/statuses-filter
54
+
55
+ ---
56
+
57
+ We include an analysis of the 50 most populous United States cities. For this analysis, we included points of interest that had the city name in their names, e.g., "New York City Center". Specifically for New York City, we include places that corresponded to each of the five New York City boroughs (Brooklyn, Manhattan, Queens, Staten Island, The Bronx).
58
+
59
+ In summary, for each geolocated tweet we have an associated latitude and longitude.
60
+
61
+ ## 4 Computing Mobility
62
+
63
+ We define the Twitter Social Mobility Index as follows. For each user, we collect all locations (coordinates) in a one-week period, where a week starts on Monday and ends the following Sunday. We compute the centroid of all of the coordinates and consider this the "home" location for the user for that week. We then measure the distance between each location and the centroid for that week. For distance, we measure the geodesic distance in kilometers using geopy ${}^{2}$ . After collecting the distances, we measure the standard deviation of these distances. In summary, this measure reflects the area and regularity of travel for a user, rather than the raw distance traveled. Therefore, a user who takes a long trip with a small number of checkins would have a larger social mobility measure than a user with many checkins who traveled in a small area. As the measure is sensitive to the number of checkins, it also reflects when people have fewer checkins during the pandemic.
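+ A minimal sketch of this per-user weekly computation, using geopy's geodesic distance, is shown below; the function name and sample coordinates are illustrative.
+
+ ```python
+ # Weekly index for one user: centroid of the week's checkins, geodesic distance (km)
+ # from each checkin to that centroid, then the standard deviation of those distances.
+ import numpy as np
+ from geopy.distance import geodesic
+
+ def weekly_mobility_index(checkins):
+     """checkins: list of (lat, lon) pairs produced by one user within one week."""
+     if len(checkins) < 2:  # weeks with fewer than 2 geolocated tweets are dropped
+         return None
+     pts = np.asarray(checkins, dtype=float)
+     centroid = tuple(pts.mean(axis=0))  # the user's "home" location for that week
+     dists = [geodesic(centroid, tuple(p)).km for p in pts]
+     return float(np.std(dists))
+
+ week = [(39.29, -76.61), (39.33, -76.62), (38.90, -77.04)]  # e.g., Baltimore plus a DC trip
+ print(round(weekly_mobility_index(week), 2))
+ ```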
64
+
65
+ We aggregate the results by week by taking the mean measure of all users in a given geographic area. We also present results for a 7-day moving average aggregation as a measure of daily movement. We record the variance of these measures to study the travel variance in the population, which will indicate if travel is reduced overall but not for some users.
66
+
67
+ We produce aggregate scores by geographic area for the United States as a whole, for each US state and territory, and for the 50 most populous cities in the US. We determine the geographic area of a user based on their centroid location over all time in our collection.
68
+
69
+ We compute the social mobility index for each day and week between January 1, 2019 and April 27, 2020. We select the date of March 16, 2020 as the start of social distancing on the national level, though individual states have implemented practices at different times. Therefore, we divide the data into two time periods: before social distancing (January 1, 2019 - March 15, 2020) and after social distancing (March 16th, 2020 - April 27, 2020).
70
+
71
+ We then compute the group level reduction in social mobility by considering average values as follows:
72
+
73
+ $$
74
+ \text{ Mobility Reduction } = 1 - \frac{\text{ mobility after social distancing }}{\text{ mobility before social distancing }}.
75
+ $$
76
+
77
+ (1)
78
+
79
+ We also compute the reduction for each user and then track the median value, number of users active in both periods, and proportion of active users that completely reduce their mobility. We also conduct a similar analysis for seasonal effects by comparing mobility after social distancing and mobility during same period in 2019.
80
+
81
+ To handle sparse data issues in our dataset, we exclude (1) users with fewer than 3 geolocated tweets overall, and (2) a weekly record for a user if that user has fewer than 2 geolocated tweets in that week. Additionally, due to data loss in our data collection process, we remove two weeks with far less data than other time periods, applying a ${99.75}\%$ confidence limit on the number of users and records.
82
+
83
+ ## 5 Results
84
+
85
+ Social Mobility Index Table 2 shows the Twitter Social Mobility Index measured in kilometers for every state and territory in United States, and United States as a whole. City results appear in Table 3. We also include the rank of location by the group level reduction.
86
+
87
+ A few observations. First, the overall drop in mobility across the United States was large: ${61.83}\%$. Figure 1 shows the weekly social mobility index for the United States over the entire time period of our dataset. The figure reflects a massive drop in mobility starting in March, with the four most recent weeks the lowest on record in our dataset. Second, every US state and territory saw a drop in mobility, with reductions ranging from 38.54% to 76.80% relative to travel before March 16, 2020. However, the variance by state was high. States that were early adopters of social distancing practices rank highly on the reduction in travel: e.g., Washington (3) and Maryland (9). In contrast, the states that did not have statewide orders as of the start of April (Zeleny, 2020) rank poorly: Arkansas (45), Iowa (37), Nebraska (35), North Dakota (22), South Carolina (38), South Dakota (46), Oklahoma (50), Utah (14), Wyoming (53). We observe similar trends in the city analysis, but the median users in these cities show a larger mobility reduction than those in the states.
88
+
89
+ ---
90
+
91
+ ${}^{2}$ https://github.com/geopy/geopy
92
+
93
+ ---
94
+
95
+ ![01963da8-1c30-7392-84d3-1a35a1d78040_3_195_172_1264_627_0.jpg](images/01963da8-1c30-7392-84d3-1a35a1d78040_3_195_172_1264_627_0.jpg)
96
+
97
+ Figure 1: Mean social mobility index (KM) in United States from January 1, 2019 to April 27, 2020. Weeks with missing data are excluded from the figure.
98
+
99
+ Besides the group level mobility reduction (Eq. 1), we also examine the distribution of user level reduction. We only consider users that have at least two checkins in both periods, so the reduction distribution covers a subgroup of all the users in the dataset. The median values of the reduction distribution are close to ${100}\%$ for most states. The median values for seasonal reduction are all smaller, but still suggest that people substantially reduced their mobility during the pandemic. Moreover, in the United States, 40% of the 818,213 active users completely reduced their mobility, i.e., a mobility reduction of ${100}\%$. In contrast, the same period in 2019 saw a ${31}\%$ reduction among 286,217 active users.
100
+
101
+ On March 16, 2020, the White House announced the "Slow the Spread" guidelines calling on individuals to take action to reduce the spread of COVID-19. 49.06% of the states had their largest mobility drop in the week of March 16 - 22, 2020, and 22.64% in the following week. We compute a moving average of the daily mobility data, and apply an offline change point detection method (Truong et al., 2020) to this trend. ${62.26}\%$ of the change points in 2020 fall after the national announcement date but before the dates when individual state policies were enacted. This suggests that the national announcement had a larger effect than state policies, a finding similar to the cell-phone-based mobility analysis of four large cities (Lasry et al., 2020). We also observe that, among the 40 states that announced stay-at-home policies, ${92.5}\%$ have a more stationary daily mobility time series before the policy announcement date than over the full period, suggesting a rapid mobility change during the pandemic.
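+ The cited reference (Truong et al., 2020) accompanies the ruptures library; the sketch below shows one way such an offline detector could be run on a smoothed daily series. The synthetic series, the choice of PELT with an RBF cost, and the penalty value are illustrative assumptions, not necessarily the paper's exact configuration.
+
+ ```python
+ # Offline change point detection on a 7-day moving average of a daily mobility series.
+ import numpy as np
+ import ruptures as rpt
+
+ rng = np.random.default_rng(0)
+ daily_mobility = np.concatenate([rng.normal(60, 5, 75),    # pre-distancing level
+                                  rng.normal(25, 5, 40)])   # post-distancing level
+ smoothed = np.convolve(daily_mobility, np.ones(7) / 7, mode="valid")  # 7-day moving average
+
+ algo = rpt.Pelt(model="rbf").fit(smoothed.reshape(-1, 1))
+ change_points = algo.predict(pen=5)  # indices where the series' behavior shifts
+ print(change_points)
+ ```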
102
+
103
+ Finally, Figure 2 shows a box-plot of the mobility variance across all users in a given time period. The distribution is long-tailed with many zeros, so we take the log of 1 plus each mobility index. While mobility is reduced in general, some users still show substantial movement, suggesting that social distancing is not being uniformly practiced. These results clearly demonstrate that our metric can track drops in travel, suggesting that it can be used as part of ongoing pandemic response planning.
104
+
105
+ Correlation What are some of the factors that may help explain our Twitter Social Mobility Index? How well does the index track COVID-19 cases compared to other relevant factors? We analyze our data using a correlation analysis. We compute daily infection rate by dividing the number of new confirmed COVID-19 cases in each US state ${}^{3}$ by the population of the state. We compare the daily infection rate with social mobility index and the following trends (Raifman et al., 2020).
106
+
107
+ ![01963da8-1c30-7392-84d3-1a35a1d78040_4_204_178_590_443_0.jpg](images/01963da8-1c30-7392-84d3-1a35a1d78040_4_204_178_590_443_0.jpg)
108
+
109
+ Figure 2: User distribution of mean social mobility index (KM) before/after social distancing in United States.
110
+
111
+ - The size of the state in square miles.
112
+
113
+ - The number of homeless individuals (2019).
114
+
115
+ - The unemployment rate (2018).
116
+
117
+ - The percentage of the population at risk for serious illness due to COVID-19.
118
+
119
+ For each day we compute the correlation between the daily infection rate and the above data by state.
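+ A minimal sketch of one such daily state-level correlation (here against the social mobility index) is shown below; the numbers and column names are made up for illustration.
+
+ ```python
+ # One day's correlation between state-level infection rate and a candidate factor.
+ import pandas as pd
+ from scipy.stats import pearsonr
+
+ states = pd.DataFrame({
+     "new_cases":  [1200, 300, 90, 45],          # new confirmed cases on the given day
+     "population": [19.5e6, 8.9e6, 3.1e6, 0.6e6],
+     "mobility":   [24.6, 14.6, 23.2, 44.0],     # Twitter Social Mobility Index (km)
+ })
+ states["infection_rate"] = states["new_cases"] / states["population"]
+ r, p = pearsonr(states["infection_rate"], states["mobility"])
+ print(round(r, 3), round(p, 3))
+ ```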
120
+
121
+ Figure 3 shows the correlation by day. We adopt the infection rate rather than raw confirmed case counts because raw counts are dominated by population, which has the highest correlation with them. Even so, the factor most correlated with infection rate in the early stage is still population-related, namely the number of homeless individuals. We do not see significant correlations with the other factors, including the social mobility index. Starting from mid-March, we observe that the unemployment rate, the size of the state, and the social mobility index show increasing correlations, though these remain modest (absolute correlation values $< 0.5$). The fluctuation in the middle of the period coincides with when states started to report confirmed cases.
122
+
123
+ <table><tr><td>Policy</td><td>Correlation</td></tr><tr><td>State of emergency</td><td>0.2587</td></tr><tr><td>Date banned visitors to nursing homes</td><td>0.1510</td></tr><tr><td>Stay at home/ shelter in place</td><td>0.1507</td></tr><tr><td>Froze evictions</td><td>0.1411</td></tr><tr><td>Closed non-essential businesses</td><td>0.1359</td></tr><tr><td>Closed gyms</td><td>0.0765</td></tr><tr><td>Closed movie theaters</td><td>0.0737</td></tr><tr><td>Closed day cares</td><td>0.0563</td></tr><tr><td>Closed restaurants except take out</td><td>0.0341</td></tr><tr><td>Date closed K-12 schools</td><td>-0.0821</td></tr></table>
124
+
125
+ Table 1: Pearson correlation between cumulative confirmed COVID-19 cases as of May 10, 2020 and the policy release date in each state.
126
+
127
+ We conduct a similar correlation analysis between each data source and the social mobility index, shown in Figure 4. As expected, geographical state size has the highest positive correlation. We also observe that the number of people at risk for serious illness due to COVID-19 has a negative correlation in the early stage of the pandemic.
128
+
129
+ Table 1 investigates the effect of various restriction policies on confirmed cases by running a similar correlation analysis on cumulative confirmed cases for each state on May 10, 2020. The policy types follow the data from (Raifman et al., 2020). We use the time difference (in days) between May 10, 2020 and the policy release date as the input for the analysis, and assign a negative value (-1000) to states that have not announced the policy. The policy with the highest correlation to cumulative confirmed cases is the declaration of a state of emergency, which is the broadest type of policy.
130
+
131
+ ## 6 Related Work
132
+
133
+ There is a long line of work on geolocation prediction for Twitter, which requires inferring a location for a specific tweet or user (Dredze et al., 2013; Zheng et al., 2018; Han et al., 2014; Pavalanathan and Eisenstein, 2015). This includes work on patterns and trends in Twitter geotagged data (Dredze et al., 2016c). While most of this work focused on a user, and thus is not suitable for tracking a user's movements, there may be opportunities to combine these methods with our approach.
134
+
135
+ There have been many studies that have analyzed Twitter geolocation data to study population movements. Hawelka et al. (2014) demonstrated a method for computing global travel patterns from Twitter, and Dredze et al. (2016b) adapted this method to support efforts in combating the Zika epidemic.
136
+
137
+ ---
138
+
139
+ ${}^{3}$ https://github.com/CSSEGISandData/COVID-19
140
+
141
+ ---
142
+
143
+ ![01963da8-1c30-7392-84d3-1a35a1d78040_5_206_177_1250_622_0.jpg](images/01963da8-1c30-7392-84d3-1a35a1d78040_5_206_177_1250_622_0.jpg)
144
+
145
+ Figure 3: Pearson correlation between daily COVID-19 infection rates and various factors at state level.
146
+
147
+ ![01963da8-1c30-7392-84d3-1a35a1d78040_5_206_902_1247_620_0.jpg](images/01963da8-1c30-7392-84d3-1a35a1d78040_5_206_902_1247_620_0.jpg)
148
+
149
+ Figure 4: Pearson correlation between social mobility index and various factors at state level.
150
+
151
+ Several studies have used human mobility patterns from Twitter data (Jurdak et al., 2015; Huang and Wong, 2015; Birkin et al., 2014; Hasan et al., 2013). These studies have included analyses of urban mobility patterns (Luo et al., 2016; Soliman et al., 2017; Kurkcu et al., 2016). Finally, some of these analyses have considered mobility patterns around mass events (Steiger et al., 2015).
152
+
153
+ ## 7 Conclusion
154
+
155
+ We presented the Twitter Social Mobility Index, a measure of social mobility based on public Twitter geolocated tweets. Our analysis shows that overall in the United States there has been a large drop in mobility. However, the drop is inconsistent and varies significantly by state. It appears that states that were early adopters of social distancing practices have more significant drops than states that have not yet implemented these practices.
156
+
157
+ Our work on this data is ongoing, and there are several directions that warrant further study. First, as states begin to reopen, and some states maintain restrictions, tracking changes in population behaviors will be helpful in making policy decisions. Second, we focused on the United States, but Twitter data provides sufficient coverage for many countries to replicate our analysis. Third, for each user in the dataset there exists tweet content that can reflect a user's attitudes, beliefs, and behaviors. Studying these together with their mobility reduction could yield further insights. Our findings are presented on http://socialmobility.covid19dataresources.org and we will continue to update our analysis during the pandemic.
158
+
159
+ ## References
160
+
161
+ Alyce S Adams, Stephen B Soumerai, Jonathan Lomas, and Dennis Ross-Degnan. 1999. Evidence of self-report bias in assessing adherence to guidelines. International Journal for Quality in Health Care, 11(3):187-192.
162
+
163
+ Mark Birkin, Kirk Harland, Nicolas Malleson, Philip Cross, and Martin Clarke. 2014. An examination of personal mobility patterns in space and time using twitter. International Journal of Agricultural and Environmental Information Systems (IJAEIS), 5(3):55-72.
164
+
165
+ Roy Burstein, Hao Hu, Niket Thakkar, Andrew Schroeder, Mike Famulare, and Daniel Klein. 2020. Understanding the impact of covid-19 policy change in the greater seattle area using mobility data. https://covid.idmod.org/data/Understanding_impact_of_COVID_policy_change_Seattle.pdf.
166
+
167
+ Peter Caley, David J Philp, and Kevin McCracken. 2008. Quantifying social distancing arising from pandemic influenza. Journal of the Royal Society Interface, 5(23):631-639.
168
+
169
+ Mark Dredze, David A Broniatowski, Michael C Smith, and Karen M Hilyard. 2016a. Understanding vaccine refusal: why we need social media now. American journal of preventive medicine, 50(4):550-552.
170
+
171
+ Mark Dredze, Manuel García-Herranz, Alex Rutherford, and Gideon Mann. 2016b. Twitter as a source of global mobility patterns for social good. In ICML Workshop on #Data4Good: Machine Learning in Social Good Applications.
172
+
173
+ Mark Dredze, Miles Osborne, and Prabhanjan Kambadur. 2016c. Geolocation for twitter: Timing matters. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1064-1069.
174
+
175
+ Mark Dredze, Michael J Paul, Shane Bergsma, and Hieu Tran. 2013. Carmen: A twitter geolocation system with applications to public health. In Workshops at the Twenty-Seventh AAAI Conference on Artificial Intelligence.
176
+
177
+ Eli P Fenichel, Carlos Castillo-Chavez, M Graziano Ceddia, Gerardo Chowell, Paula A Gonzalez Parra, Graham J Hickling, Garth Holloway, Richard Horan, Benjamin Morin, Charles Perrings, et al. 2011. Adaptive human behavior in epidemiological models. Proceedings of the National Academy of Sciences, 108(15):6306-6311.
178
+
179
+ James Glanz, Benedict Carey, Josh Holder, Derek Watkins, Jennifer Valentino-DeVries, Rick Rojas, and Lauren Leatherby. 2020. Where America Didn't Stay Home Even as the Virus Spread. https://www.nytimes.com/interactive/2020/04/02/us/coronavirus-social-distancing.html.
180
+
181
+ Robert J Glass, Laura M Glass, Walter E Beyeler, and H Jason Min. 2006. Targeted social distancing designs for pandemic influenza. Emerging infectious diseases, 12(11):1671.
182
+
183
+ Google. 2020. COVID-19 community mobility reports. https://www.google.com/covid19/mobility/.
184
+
185
+ Bo Han, Paul Cook, and Timothy Baldwin. 2014. Text-based twitter user geolocation prediction. Journal of Artificial Intelligence Research, 49:451-500.
186
+
187
+ Samiul Hasan, Xianyuan Zhan, and Satish V Ukkusuri. 2013. Understanding urban human activity and mobility patterns using large-scale location-based data from online social media. In Proceedings of the 2nd ACM SIGKDD international workshop on urban computing, pages 1-8.
188
+
189
+ Bartosz Hawelka, Izabela Sitko, Euro Beinat, Stanislav Sobolevsky, Pavlos Kazakopoulos, and Carlo Ratti. 2014. Geo-located twitter as proxy for global mobility patterns. Cartography and Geographic Information Science, 41(3):260-271.
190
+
191
+ Qunying Huang and David WS Wong. 2015. Modeling and visualizing regular human mobility patterns with uncertainty: An example using twitter data. Annals of the Association of American Geographers, 105(6):1179-1197.
192
+
193
+ Raja Jurdak, Kun Zhao, Jiajun Liu, Maurice Abou-Jaoude, Mark Cameron, and David Newth. 2015. Understanding human mobility from twitter. PloS one, 10(7).
194
+
195
+ Joel K Kelso, George J Milne, and Heath Kelly. 2009. Simulation suggests that rapid activation of social distancing can arrest epidemic development due to a novel strain of influenza. BMC public health, 9(1):117.
196
+
197
+ Abdullah Kurkcu, K Ozbay, and EF Morgul. 2016. Evaluating the usability of geo-located twitter as a tool for human activity and mobility patterns: A case study for nyc. In Transportation Research Board's 95th Annual Meeting, pages 1-20.
200
+
201
+ Arielle Lasry, Daniel Kidder, Marisa Hast, Jason Poovey, Gregory Sunshine, Nicole Zviedrite, Faruque Ahmed, and Kathleen A Ethier. 2020. Timing of community mitigation and changes in reported covid-19 and community mobility - four us metropolitan areas, february 26 - april 1, 2020.
202
+
203
+ Feixiong Luo, Guofeng Cao, Kevin Mulligan, and Xiang Li. 2016. Explore spatiotemporal and demographic characteristics of human mobility via twitter: A case study of chicago. Applied Geography, 70:11-25.
204
+
205
+ Savi Maharaj and Adam Kleczkowski. 2012. Controlling epidemic spread by social distancing: Do it well or not at all. BMC Public Health, 12(1):679.
206
+
207
+ Michael J Paul and Mark Dredze. 2017. Social monitoring for public health. Synthesis Lectures on Information Concepts, Retrieval, and Services, 9(5):1-183.
208
+
209
+ Umashanthi Pavalanathan and Jacob Eisenstein. 2015. Confounds and consequences in geotagged twitter data. arXiv preprint arXiv:1506.02275.
210
+
211
+ Kiesha Prem, Yang Liu, Timothy W Russell, Adam J Kucharski, Rosalind M Eggo, Nicholas Davies, Stefan Flasche, Samuel Clifford, Carl AB Pearson, James D Munday, et al. 2020. The effect of control strategies to reduce social mixing on outcomes of the covid-19 epidemic in wuhan, china: a modelling study. The Lancet Public Health.
212
+
213
+ J Raifman, K Nocka, D Jones, J Bor, S Lipson, J Jay, and P Chan. 2020. Covid-19 us state policy database. Boston, MA: Boston University.
214
+
215
+ Lydia Saad. 2020. Americans step up their social distancing even further. https://news.gallup.com/opinion/gallup/298310/americans-step-social-distancing-even-further.aspx.
216
+
217
+ Aiman Soliman, Kiumars Soltani, Junjun Yin, Anand Padmanabhan, and Shaowen Wang. 2017. Social sensing of urban land use based on analysis of twitter users' mobility patterns. PloS one, 12(7):e0181657.
218
+
219
+ Enrico Steiger, Timothy Ellersiek, Bernd Resch, and Alexander Zipf. 2015. Uncovering latent mobility patterns from twitter during mass events. GI_Forum, 1:525-534.
220
+
221
+ Charles Truong, Laurent Oudre, and Nicolas Vayatis. 2020. Selective review of offline change point detection methods. Signal Processing, 167:107299.
222
+
223
+ Unacast. 2020. Social distancing scoreboard. https://www.unacast.com/covid19/social-distancing-scoreboard.
224
+
225
+ Jeff Zeleny. 2020. Why these 8 Republican governors are holding out on statewide stay-at-home orders. https://www.cnn.com/2020/04/04/politics/republican-governors-stay-at-home-orders-coronavirus/index.html.
226
+
227
+ Xin Zheng, Jialong Han, and Aixin Sun. 2018. A survey of location prediction on twitter. IEEE Transactions on Knowledge and Data Engineering, 30(9):1652-1671.
228
+
229
+ <table><tr><td/><td colspan="2">Mobility (KM)</td><td/><td colspan="3">User level reduction</td></tr><tr><td>location</td><td>Before distancing</td><td>After distancing</td><td>Group level reduction</td><td>Median reduction</td><td>Median seasonal reduction</td><td>Rank</td></tr><tr><td>AK</td><td>109.76</td><td>25.47</td><td>76.80%</td><td>99.84%</td><td>63.73%</td><td>1</td></tr><tr><td>AL</td><td>48.04</td><td>22.57</td><td>53.03%</td><td>84.47%</td><td>72.94%</td><td>47</td></tr><tr><td>AR</td><td>50.54</td><td>23.15</td><td>54.19%</td><td>91.87%</td><td>76.81%</td><td>45</td></tr><tr><td>AZ</td><td>62.85</td><td>23.47</td><td>62.66%</td><td>93.69%</td><td>85.55%</td><td>26</td></tr><tr><td>CA</td><td>78.58</td><td>29.60</td><td>62.33%</td><td>96.65%</td><td>91.35%</td><td>29</td></tr><tr><td>CO</td><td>72.23</td><td>24.47</td><td>66.12%</td><td>98.23%</td><td>93.37%</td><td>12</td></tr><tr><td>CT</td><td>45.51</td><td>14.89</td><td>67.28%</td><td>96.29%</td><td>89.25%</td><td>8</td></tr><tr><td>DC</td><td>77.67</td><td>19.74</td><td>74.58%</td><td>100.00%</td><td>97.75%</td><td>2</td></tr><tr><td>DE</td><td>43.63</td><td>13.61</td><td>68.81%</td><td>93.44%</td><td>85.08%</td><td>7</td></tr><tr><td>FL</td><td>76.99</td><td>32.24</td><td>58.13%</td><td>92.38%</td><td>82.92%</td><td>42</td></tr><tr><td>GA</td><td>65.64</td><td>27.11</td><td>58.70%</td><td>85.26%</td><td>78.00%</td><td>39</td></tr><tr><td>HI</td><td>147.61</td><td>70.75</td><td>52.07%</td><td>97.69%</td><td>89.21%</td><td>51</td></tr><tr><td>IA</td><td>50.42</td><td>20.59</td><td>59.17%</td><td>95.91%</td><td>89.82%</td><td>37</td></tr><tr><td>ID</td><td>70.77</td><td>33.36</td><td>52.86%</td><td>94.12%</td><td>78.19%</td><td>49</td></tr><tr><td>IL</td><td>55.59</td><td>19.38</td><td>65.15%</td><td>98.71%</td><td>93.01%</td><td>16</td></tr><tr><td>IN</td><td>45.86</td><td>17.15</td><td>62.60%</td><td>97.19%</td><td>89.61%</td><td>27</td></tr><tr><td>KS</td><td>65.50</td><td>23.19</td><td>64.60%</td><td>97.03%</td><td>81.57%</td><td>19</td></tr><tr><td>KY</td><td>44.67</td><td>15.31</td><td>65.74%</td><td>93.93%</td><td>83.42%</td><td>13</td></tr><tr><td>LA</td><td>45.98</td><td>19.39</td><td>57.83%</td><td>86.13%</td><td>77.76%</td><td>43</td></tr><tr><td>MA</td><td>58.69</td><td>17.64</td><td>69.95%</td><td>98.83%</td><td>93.93%</td><td>5</td></tr><tr><td>MD</td><td>46.10</td><td>15.19</td><td>67.04%</td><td>94.80%</td><td>88.67%</td><td>9</td></tr><tr><td>ME</td><td>59.68</td><td>22.45</td><td>62.38%</td><td>93.77%</td><td>78.53%</td><td>28</td></tr><tr><td>MI</td><td>56.24</td><td>20.96</td><td>62.72%</td><td>96.84%</td><td>90.42%</td><td>25</td></tr><tr><td>MN</td><td>64.01</td><td>21.68</td><td>66.13%</td><td>98.36%</td><td>91.34%</td><td>11</td></tr><tr><td>MO</td><td>52.27</td><td>20.08</td><td>61.59%</td><td>95.89%</td><td>88.65%</td><td>31</td></tr><tr><td>MS</td><td>50.24</td><td>24.36</td><td>51.51%</td><td>79.09%</td><td>69.11%</td><td>52</td></tr><tr><td>MT</td><td>69.93</td><td>32.96</td><td>52.86%</td><td>90.17%</td><td>65.58%</td><td>48</td></tr><tr><td>NC</td><td>52.11</td><td>19.73</td><td>62.14%</td><td>94.27%</td><td>85.26%</td><td>30</td></tr><tr><td>ND</td><td>65.77</td><td>23.65</td><td>64.04%</td><td>99.71%</td><td>97.21%</td><td>22</td></tr><tr><td>NE</td><td>55.11</td><td>21.88</td><td>60.29%</td><td>99.95%</td><td>91.40%</td><td>35</td></tr><tr><td>NH</td><td>55.09</td><td>19.48</td><td>64.64%</td><td>96.26%</td><td>85.35%</td><td>18</td></tr><tr><td>NJ</td><td>49.27</td><td>14.62</td><td>70.33
%</td><td>97.28%</td><td>93.41%</td><td>4</td></tr><tr><td>NM</td><td>58.20</td><td>24.23</td><td>58.37%</td><td>95.66%</td><td>73.14%</td><td>41</td></tr><tr><td>NV</td><td>80.25</td><td>33.19</td><td>58.64%</td><td>93.42%</td><td>85.00%</td><td>40</td></tr><tr><td>NY</td><td>71.17</td><td>24.57</td><td>65.48%</td><td>98.94%</td><td>94.20%</td><td>15</td></tr><tr><td>OH</td><td>44.88</td><td>15.73</td><td>64.95%</td><td>94.81%</td><td>88.68%</td><td>17</td></tr><tr><td>OK</td><td>52.34</td><td>24.69</td><td>52.83%</td><td>88.38%</td><td>76.99%</td><td>50</td></tr><tr><td>OR</td><td>71.12</td><td>25.97</td><td>63.49%</td><td>97.51%</td><td>92.68%</td><td>24</td></tr><tr><td>PA</td><td>54.40</td><td>19.45</td><td>64.24%</td><td>97.59%</td><td>89.85%</td><td>20</td></tr><tr><td>PR</td><td>44.96</td><td>14.94</td><td>66.77%</td><td>97.26%</td><td>90.38%</td><td>10</td></tr><tr><td>RI</td><td>46.80</td><td>14.50</td><td>69.01%</td><td>96.74%</td><td>90.55%</td><td>6</td></tr><tr><td>SC</td><td>48.28</td><td>19.85</td><td>58.88%</td><td>86.03%</td><td>77.92%</td><td>38</td></tr><tr><td>SD</td><td>68.41</td><td>31.52</td><td>53.92%</td><td>95.91%</td><td>86.66%</td><td>46</td></tr><tr><td>TN</td><td>56.77</td><td>21.83</td><td>61.55%</td><td>94.89%</td><td>85.89%</td><td>32</td></tr><tr><td>TX</td><td>73.24</td><td>28.60</td><td>60.95%</td><td>93.81%</td><td>84.18%</td><td>34</td></tr><tr><td>UT</td><td>68.43</td><td>23.62</td><td>65.49%</td><td>93.56%</td><td>91.50%</td><td>14</td></tr><tr><td>VA</td><td>57.37</td><td>22.33</td><td>61.07%</td><td>95.62%</td><td>87.51%</td><td>33</td></tr><tr><td>VI</td><td>132.16</td><td>47.57</td><td>64.00%</td><td>98.66%</td><td>87.72%</td><td>23</td></tr><tr><td>VT</td><td>56.84</td><td>20.33</td><td>64.23%</td><td>96.35%</td><td>86.70%</td><td>21</td></tr><tr><td>WA</td><td>75.34</td><td>21.31</td><td>71.71%</td><td>98.43%</td><td>95.72%</td><td>3</td></tr><tr><td>WI</td><td>56.32</td><td>22.68</td><td>59.74%</td><td>96.88%</td><td>91.75%</td><td>36</td></tr><tr><td>WV</td><td>46.59</td><td>20.02</td><td>57.02%</td><td>88.95%</td><td>82.40%</td><td>44</td></tr><tr><td>WY</td><td>71.64</td><td>44.03</td><td>38.54%</td><td>84.95%</td><td>50.90%</td><td>53</td></tr><tr><td>United States</td><td>65.59</td><td>25.04</td><td>61.83%</td><td>95.86%</td><td>88.36%</td><td>-</td></tr></table>
230
+
231
+ Table 2: Reduction of mobility for all states and territories in the United States, and for the United States as a whole. Ranks are based on group level reduction.
232
+
233
+ <table><tr><td/><td colspan="2">Mobility (KM)</td><td/><td colspan="3">$\mathbf{{Userlevelreduction}}$</td></tr><tr><td>location</td><td>Before distancing</td><td>After distancing</td><td>Group level reduction</td><td>Median reduction</td><td>Median seasonal reduction</td><td>Rank</td></tr><tr><td>New York City</td><td>86.37</td><td>29.91</td><td>65.38%</td><td>99.70%</td><td>96.69%</td><td>27</td></tr><tr><td>Los Angeles</td><td>103.16</td><td>40.86</td><td>60.39%</td><td>98.69%</td><td>93.87%</td><td>40</td></tr><tr><td>Chicago</td><td>64.09</td><td>19.87</td><td>69.00%</td><td>99.96%</td><td>94.58%</td><td>14</td></tr><tr><td>Houston</td><td>53.70</td><td>21.50</td><td>59.96%</td><td>97.04%</td><td>88.00%</td><td>41</td></tr><tr><td>Phoenix</td><td>60.07</td><td>19.12</td><td>68.17%</td><td>96.32%</td><td>91.08%</td><td>18</td></tr><tr><td>Philadelphia</td><td>54.80</td><td>17.70</td><td>67.71%</td><td>99.16%</td><td>93.70%</td><td>19</td></tr><tr><td>San Antonio</td><td>45.43</td><td>15.93</td><td>64.93%</td><td>99.00%</td><td>91.33%</td><td>28</td></tr><tr><td>San Diego</td><td>79.21</td><td>28.19</td><td>64.41%</td><td>98.67%</td><td>92.77%</td><td>30</td></tr><tr><td>Dallas</td><td>63.92</td><td>21.85</td><td>65.81%</td><td>95.48%</td><td>89.32%</td><td>25</td></tr><tr><td>San Jose</td><td>60.63</td><td>14.82</td><td>75.55%</td><td>99.88%</td><td>97.34%</td><td>2</td></tr><tr><td>Austin</td><td>72.50</td><td>22.84</td><td>68.50%</td><td>99.66%</td><td>94.66%</td><td>17</td></tr><tr><td>Jacksonville</td><td>47.06</td><td>26.87</td><td>42.90%</td><td>96.60%</td><td>92.92%</td><td>50</td></tr><tr><td>Fort Worth</td><td>51.67</td><td>19.68</td><td>61.92%</td><td>95.33%</td><td>85.72%</td><td>37</td></tr><tr><td>Columbus</td><td>44.67</td><td>14.73</td><td>67.02%</td><td>96.91%</td><td>93.15%</td><td>22</td></tr><tr><td>San Francisco</td><td>113.77</td><td>31.99</td><td>71.89%</td><td>99.93%</td><td>98.94%</td><td>8</td></tr><tr><td>Charlotte</td><td>58.13</td><td>20.90</td><td>64.04%</td><td>96.26%</td><td>89.83%</td><td>31</td></tr><tr><td>Indianapolis</td><td>46.50</td><td>14.53</td><td>68.76%</td><td>99.26%</td><td>91.85%</td><td>15</td></tr><tr><td>Seattle</td><td>98.92</td><td>21.64</td><td>78.12%</td><td>99.98%</td><td>99.06%</td><td>1</td></tr><tr><td>Denver</td><td>81.11</td><td>23.08</td><td>71.55%</td><td>99.05%</td><td>96.30%</td><td>9</td></tr><tr><td>Washington</td><td>80.26</td><td>22.12</td><td>72.43%</td><td>99.93%</td><td>97.27%</td><td>7</td></tr><tr><td>Boston</td><td>77.58</td><td>27.47</td><td>64.59%</td><td>99.42%</td><td>96.40%</td><td>29</td></tr><tr><td>El Paso</td><td>51.10</td><td>21.50</td><td>57.92%</td><td>100.00%</td><td>95.97%</td><td>44</td></tr><tr><td>Detroit</td><td>53.94</td><td>22.38</td><td>58.50%</td><td>94.89%</td><td>83.68%</td><td>43</td></tr><tr><td>Nashville</td><td>72.83</td><td>23.94</td><td>67.13%</td><td>98.45%</td><td>94.88%</td><td>21</td></tr><tr><td>Portland</td><td>78.91</td><td>24.81</td><td>68.56%</td><td>99.45%</td><td>96.81%</td><td>16</td></tr><tr><td>Memphis</td><td>48.64</td><td>18.41</td><td>62.15%</td><td>98.65%</td><td>86.75%</td><td>35</td></tr><tr><td>Oklahoma City</td><td>46.07</td><td>16.78</td><td>63.57%</td><td>91.34%</td><td>75.19%</td><td>33</td></tr><tr><td>Las 
Vegas</td><td>80.21</td><td>35.69</td><td>55.50%</td><td>94.87%</td><td>83.90%</td><td>47</td></tr><tr><td>Louisville</td><td>45.52</td><td>12.97</td><td>71.51%</td><td>94.31%</td><td>77.68%</td><td>10</td></tr><tr><td>Baltimore</td><td>45.61</td><td>11.66</td><td>74.43%</td><td>96.10%</td><td>89.37%</td><td>4</td></tr><tr><td>Milwaukee</td><td>52.01</td><td>22.78</td><td>56.19%</td><td>97.01%</td><td>91.86%</td><td>46</td></tr><tr><td>Albuquerque</td><td>51.04</td><td>16.88</td><td>66.93%</td><td>98.95%</td><td>75.81%</td><td>23</td></tr><tr><td>Tucson</td><td>53.58</td><td>23.10</td><td>56.89%</td><td>95.73%</td><td>84.48%</td><td>45</td></tr><tr><td>Fresno</td><td>37.39</td><td>10.84</td><td>71.02%</td><td>96.06%</td><td>89.20%</td><td>11</td></tr><tr><td>Mesa</td><td>48.77</td><td>21.72</td><td>55.47%</td><td>92.40%</td><td>71.33%</td><td>48</td></tr><tr><td>Sacramento</td><td>62.14</td><td>25.45</td><td>59.05%</td><td>94.82%</td><td>94.47%</td><td>42</td></tr><tr><td>Atlanta</td><td>87.90</td><td>33.39</td><td>62.02%</td><td>93.50%</td><td>86.36%</td><td>36</td></tr><tr><td>Kansas City</td><td>62.93</td><td>17.23</td><td>72.61%</td><td>98.30%</td><td>96.54%</td><td>6</td></tr><tr><td>Colorado Springs</td><td>64.82</td><td>23.55</td><td>63.67%</td><td>99.47%</td><td>95.66%</td><td>32</td></tr><tr><td>Miami</td><td>114.33</td><td>55.77</td><td>51.22%</td><td>97.55%</td><td>88.56%</td><td>49</td></tr><tr><td>Raleigh</td><td>51.62</td><td>15.24</td><td>70.47%</td><td>97.79%</td><td>89.51%</td><td>12</td></tr><tr><td>Omaha</td><td>49.99</td><td>15.38</td><td>69.24%</td><td>100.00%</td><td>93.72%</td><td>13</td></tr><tr><td>Long Beach</td><td>54.97</td><td>20.51</td><td>62.70%</td><td>93.33%</td><td>89.75%</td><td>34</td></tr><tr><td>Virginia Beach</td><td>48.91</td><td>18.92</td><td>61.33%</td><td>96.35%</td><td>88.38%</td><td>39</td></tr><tr><td>Oakland</td><td>87.36</td><td>22.26</td><td>74.52%</td><td>98.41%</td><td>96.26%</td><td>3</td></tr><tr><td>Minneapolis</td><td>69.67</td><td>18.72</td><td>73.14%</td><td>99.14%</td><td>94.21%</td><td>5</td></tr><tr><td>Tulsa</td><td>48.54</td><td>18.51</td><td>61.85%</td><td>99.89%</td><td>93.20%</td><td>38</td></tr><tr><td>Arlington</td><td>56.42</td><td>18.27</td><td>67.62%</td><td>97.58%</td><td>93.25%</td><td>20</td></tr><tr><td>Tampa</td><td>70.50</td><td>23.55</td><td>66.59%</td><td>94.48%</td><td>83.23%</td><td>24</td></tr><tr><td>New Orleans</td><td>55.96</td><td>19.18</td><td>65.73%</td><td>97.00%</td><td>88.75%</td><td>26</td></tr></table>
234
+
235
+ Table 3: Reduction of mobility for the top 50 United States cities by population. Ranks are based on group level reduction.
236
+
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/qJYo-Bbxu07/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,179 @@
1
+ § THE TWITTER SOCIAL MOBILITY INDEX: MEASURING SOCIAL DISTANCING PRACTICES FROM GEOLOCATED TWEETS
2
+
3
+ Paiheng Xu, Mark Dredze
4
+
5
+ Malone Center for Engineering in Healthcare Center for Language and Speech Processing Department of Computer Science Johns Hopkins University
6
+
7
+ paiheng, mdredze@jhu.edu
8
+
9
+ David A. Broniatowski
10
+
11
+ Department of Engineering Management and Systems Engineering Institute for Data, Democracy, and Politics The George Washington University
12
+
13
+ broniatowski@gwu.edu
14
+
15
+ § ABSTRACT
16
+
17
+ Social distancing is an important component of the response to the novel Coronavirus (COVID-19) pandemic. Minimizing social interactions and travel reduces the rate at which the infection spreads, and "flattens the curve" such that the medical system can better treat infected individuals. However, it remains unclear how the public will respond to these policies. This paper presents the Twitter Social Mobility Index, a measure of social distancing and travel derived from Twitter data. We use public geolocated Twitter data to measure how much a user travels in a given week. We find a large reduction in travel in the United States after the implementation of social distancing policies, with larger reductions in states that were early adopters and smaller changes in states without policies. Our findings are presented on http://socialmobility.covid19dataresources.org and we will continue to update our analysis during the pandemic.
18
+
19
+ § 1 INTRODUCTION
20
+
21
+ The outbreak of the SARS-CoV-2 virus, a Coronavirus that causes the disease COVID-19, has caused a pandemic on a scale unseen in a generation. Without an available vaccine to reduce transmission of the virus, public health and elected officials have called on the public to practice social distancing. Social distancing is a set of practices in which individuals maintain a physical distance so as to reduce the number of physical contacts they encounter (Maharaj and Kleczkowski, 2012; Kelso et al., 2009). These practices include maintaining a distance of at least six feet and avoiding large gatherings (Glass et al., 2006). At the time of this writing, in the United States nearly every state has implemented state-wide "stay-at-home" orders to enforce social distancing practices (Zeleny, 2020).
22
+
23
+ While an important tool in the fight against COVID-19, the implementation of social distancing by the general public can vary widely. While a state governor may issue an order for the practice, individuals in different states may respond in different ways. Understanding actual reductions in travel and social contacts is critical to measuring the effectiveness of the policy. These policies may remain in effect for an extended period of time. Thus, the public may begin to relax their practices, making additional policies necessary. Additionally, epidemiologists already model the impact of social distancing policies on the course of an outbreak (Prem et al., 2020; Fenichel et al., 2011; Caley et al., 2008). These models may be more effective when incorporating actual measures of social distancing, rather than assuming official policies are implemented in practice.
24
+
25
+ It can be challenging to obtain data on the efficacy of social distancing practices, especially during an ongoing pandemic. A recent Gallup poll surveyed Americans to find that many adults are taking precautions to keep their distance from others (Saad, 2020). However, while polling can provide insights, it cannot provide a solution. Polling is relatively expensive, making it a poor choice for ongoing population surveillance practices and providing data on specific geographic locales, i.e. US States and major cities (Dredze et al., 2016a). Additionally, polling around public health issues suffers from response bias, as individuals may overstate their compliance with established public health recommendations (Adams et al., 1999).
26
+
27
+ Over the past decade, analyses of social media and web data have been widely adopted to support public health objectives (Paul and Dredze, 2017). In this vein, several efforts have emerged over the past few weeks to track social distancing practices using these data sources. Google has released "COVID-19 Community Mobility Reports" which use Google data to "chart movement trends over time by geography, across different categories of places such as retail and recreation, groceries and pharmacies, parks, transit stations, workplaces, and residential" (Google, 2020). The Unacast "Social Distancing Scoreboard" uses data collected from 127 million monthly active users to measure the implementation of social distancing practices (Unacast, 2020). Researchers at the Institute for Disease Modeling have used data from Facebook's "Data for Good" program to model the decline in mobility in the Greater Seattle area and its effect on the spread of COVID-19 (Burstein et al., 2020). Using cell phone data, the New York Times completed an analysis that showed that stay-at-home orders dramatically reduced travel, but that states that have waited to enact such orders have continued to travel widely (Glanz et al., 2020). These efforts provide new and important opportunities to study social distancing in real-time.
28
+
29
+ We present the Twitter Social Mobility Index, a measure of social distancing and travel patterns derived from public Twitter data. We use public geolocated Twitter data to measure how much a user travels in a given week. We compute a metric based on the standard deviation of a user's geolocated tweets each week, and aggregate these data over an entire population to produce a metric for the United States as a whole, for individual states and for some US cities. We find that, taking the US as a whole, there has been a dramatic drop in travel in recent weeks, with the period between March 16 and April 27, 2020 showing the lowest amount of travel since January 1, 2019, the start of our dataset. Additionally, we find that travel reductions are not uniform across the United States, but vary from state to state. However, there is no clear correlation between the social mobility index and confirmed COVID-19 cases at the state level. A key advantage of our approach is that, unlike other travel and social distancing analyses referenced above, we rely on entirely public data, enabling others to replicate our findings and explore different aspects of these data. Additionally, since Twitter contains user-generated content in addition to location information, future analyses can correlate attitudes, beliefs, and behaviors with changes in social mobility.
30
+
31
+ Our findings are presented on http://socialmobility.covid19dataresources.org and we will continue to update our analysis during the pandemic.
32
+
33
+ § 2 DATA
34
+
35
+ Twitter offers several ways in which a user can indicate their location. If a user is tweeting from a GPS enabled device, they can attach their exact coordinate to that tweet. Twitter may then display to the user, and provide in their API, the specific place that corresponds to these coordinates. Alternatively, a user can explicitly select a location, which can be a point of interest (coffee shop), a neighborhood, a city, state, or country. If the tweet is public, this geolocation information is supplied with the tweet.
36
+
37
+ We used the Twitter streaming ${\mathrm{API}}^{1}$ to download tweets based on location. We used a bounding box that covered the entire United States, including territories. We used data from this collection starting on January 1, 2019 and ending on April 27, 2020. In total, this included 3,768,959 Twitter users and 469,669,925 tweets in the United States.
38
+
39
+ § 3 LOCATION DATA
40
+
41
+ We process the two types of geolocation information described in the previous section.
42
+
43
+ Coordinates The exact coordinates (latitude/longitude) provided by the user ("coordinates" field in the Twitter JSON object). About 8% of our data included "coordinates".
44
+
45
+ Place The "place" field in the Twitter JSON object indicates a known location in which the tweet was authored. A place can be a point of interest (a specific hotel), a neighborhood ("Downtown Jacksonville"), a city ("Kokomo, IN"), a state ("Arizona") or a country ("United States"). The place object contains a unique ID, a bounding box, the country and a name. More information about the location is available from the Twitter GEO API. A place is attached to a tweet under either of two conditions: first, when Twitter identifies the coordinates provided by the user as falling within a known place; second, when the user manually selects the place while authoring the tweet.
46
+
47
+ Since coordinates give a more precise location, we use them instead of place when available. If we only have a place, we assume that the user is in the center of the place, as given by the place's bounding box.
48
+
49
+ For points of interest and neighborhoods, Twitter only provides the country in the associated metadata. While in some cases the city can be parsed from the name and the state inferred, we opted to exclude these places from our state-level analysis. The full location details can be obtained by querying the Twitter API, but the volume of data in our analysis made this too time-consuming. This excluded about 1.8% of our data.
50
+
51
+ ${}^{1}$ https://developer.twitter.com/en/docs/tweets/filter-realtime/overview/statuses-filter
52
+
53
+ We include an analysis of the 50 most populous United States cities. For this analysis, we included points of interest that had the city name in their names, e.g., "New York City Center". Specifically for New York City, we include places that corresponded to each of the five New York City boroughs (Brooklyn, Manhattan, Queens, Staten Island, The Bronx).
54
+
55
+ In summary, for each geolocated tweet we have an associated latitude and longitude.
56
+
57
+ § 4 COMPUTING MOBILITY
58
+
59
+ We define the Twitter Social Mobility Index as follows. For each user, we collect all locations (coordinates) in a one week period, where a week starts on Monday and ends the following Sunday. We compute the centroid of all of the coordinates and consider this the "home" location for the user for that week. We then measure the distance between each location and the centroid for that week, using the geodesic distance in kilometers computed with geopy ${}^{2}$ . After collecting the distances we measure their standard deviation. In summary, this measure reflects the area and regularity of travel for a user, rather than the raw distance traveled. Therefore, a user who takes a long trip with a small number of checkins would have a larger social mobility measure than a user with many checkins who traveled in a small area. As the measure is sensitive to the number of checkins, it also reflects periods when people have fewer checkins during the pandemic.
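+
+ As a rough illustration, the sketch below computes this weekly index for a single user from a list of (latitude, longitude) check-ins. The function name and example coordinates are ours and are not taken from the paper's released code.
+
+ ```python
+ # Illustrative sketch of the weekly Twitter Social Mobility Index:
+ # standard deviation (in km) of distances from a user's weekly centroid.
+ import numpy as np
+ from geopy.distance import geodesic
+
+ def weekly_mobility_index(checkins):
+     """checkins: list of (lat, lon) pairs for one user in one week."""
+     coords = np.asarray(checkins, dtype=float)
+     centroid = coords.mean(axis=0)  # the user's "home" location for the week
+     dists = [geodesic(tuple(centroid), tuple(c)).km for c in coords]
+     return float(np.std(dists))
+
+ # Example: three check-ins around the Baltimore area in one week.
+ print(weekly_mobility_index([(39.29, -76.61), (39.33, -76.62), (38.90, -77.04)]))
+ ```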
60
+
61
+ We aggregate the results by week by taking the mean measure of all users in a given geographic area. We also present results for a 7-day moving average aggregation as a measure of daily movement. We record the variance of these measures to study the variation in travel across the population, which indicates whether travel is reduced overall while remaining high for some users.
62
+
63
+ We produce aggregate scores by geographic area for the United States as a whole, for each US state and territory, and for the 50 most populous cities in the US. We assign each user to a geographic area based on the centroid of their locations over the entire collection period.
64
+
65
+ We compute the social mobility index for each day and week between January 1, 2019 and April 27, 2020. We select the date of March 16, 2020 as the start of social distancing on the national level, though individual states have implemented practices at different times. Therefore, we divide the data into two time periods: before social distancing (January 1, 2019 - March 15, 2020) and after social distancing (March 16th, 2020 - April 27, 2020).
66
+
67
+ We then compute the group level reduction in social mobility by considering average values as follows:
68
+
69
+ $$
+ \text{Mobility Reduction} = 1 - \frac{\text{mobility after social distancing}}{\text{mobility before social distancing}} \tag{1}
+ $$
74
+
75
+ We also compute the reduction for each user and then track the median value, the number of users active in both periods, and the proportion of active users that completely reduced their mobility. We conduct a similar analysis for seasonal effects by comparing mobility after social distancing with mobility during the same period in 2019.
76
+
77
+ To handle sparse data issues in our dataset, we exclude (1) users with fewer than 3 geolocated tweets overall, and (2) a weekly record for a user if that user has fewer than 2 geolocated tweets in that week. Additionally, due to data loss in our data collection process, we remove two weeks with far less data than other time periods by taking a 99.75% confidence limit on the number of users and records.
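+
+ A minimal sketch of these two filters, assuming a pandas DataFrame with one row per geolocated tweet and columns named user_id and week (the column names are ours, not the paper's):
+
+ ```python
+ # Sketch of the sparsity filters described above (column names are assumed).
+ import pandas as pd
+
+ def apply_sparsity_filters(df: pd.DataFrame) -> pd.DataFrame:
+     # (1) drop users with fewer than 3 geolocated tweets overall
+     df = df.groupby("user_id").filter(lambda g: len(g) >= 3)
+     # (2) drop a user's weekly record if it has fewer than 2 geolocated tweets
+     return df.groupby(["user_id", "week"]).filter(lambda g: len(g) >= 2)
+ ```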
78
+
79
+ § 5 RESULTS
80
+
81
+ Social Mobility Index Table 2 shows the Twitter Social Mobility Index measured in kilometers for every state and territory in the United States, and for the United States as a whole. City results appear in Table 3. We also include the rank of each location by the group-level reduction.
82
+
83
+ We make a few observations. First, the overall drop in mobility across the United States was large: 61.83%. Figure 1 shows the weekly social mobility index for the United States for the entire time period of our dataset. The figure reflects a massive drop in mobility starting in March, with the four most recent weeks the lowest on record in our dataset. Second, every US state and territory saw a drop in mobility, with reductions ranging from 38.54% to 76.80% relative to travel before March 16, 2020. However, the variance by state was high. States that were early adopters of social distancing practices rank highly on the reduction in travel, e.g., Washington (3) and Maryland (9). In contrast, the states that did not have statewide orders as of the start of April (Zeleny, 2020) rank poorly: Arkansas (45), Iowa (37), Nebraska (35), North Dakota (22), South Carolina (38), South Dakota (46), Oklahoma (50), Utah (14), Wyoming (53). We observe similar trends in the city analysis, but the median user in these cities shows a larger mobility reduction than the median user at the state level.
84
+
85
+ ${}^{2}$ https://github.com/geopy/geopy
86
+
87
+ < g r a p h i c s >
88
+
89
+ Figure 1: Mean social mobility index (KM) in United States from January 1, 2019 to April 27, 2020. Weeks with missing data are excluded from the figure.
90
+
91
+ Besides the group-level mobility reduction (Eq. 1), we also examine the distribution of user-level reduction. We only consider users that have at least two checkins in both periods, so the reduction distribution covers a subgroup of all the users in the dataset. The median value of the reduction distribution is close to 100% for most states. The median values for the seasonal reduction are all smaller, but still suggest that people substantially reduced their mobility during the pandemic. Moreover, in the United States, 40% of the 818,213 active users completely reduced their mobility, i.e., a mobility reduction of 100%. In contrast, the same period in 2019 saw a 31% reduction among 286,217 active users.
92
+
93
+ On March 16, 2020, the White House announced the "Slow the Spread" guidelines calling on individuals to take action to reduce the spread of COVID-19. 49.06% of the states had their largest mobility drop in the week of March 16-22, 2020, and 22.64% in the following week. We compute a moving average of the daily mobility data and apply an offline change point detection method (Truong et al., 2020) to this trend. 62.26% of the change points in 2020 fall after the national announcement date but before the dates when individual state policies were enacted. This suggests that the national announcement had a larger effect than state policies, a finding similar to the cell-phone-based mobility analysis of four large cities (Lasry et al., 2020). We also observe that, among the 40 states that announced a Stay at Home policy, 92.5% have a more stationary daily mobility time series before the policy announcement date than over the full time period, suggesting a rapid mobility change during the pandemic.
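+
+ The change point step can be sketched with the ruptures library of Truong et al. (2020). The paper does not specify the search method or penalty, so the choices below (PELT with an RBF cost, pen=5) and the placeholder series are assumptions for illustration.
+
+ ```python
+ # Sketch: offline change point detection on the 7-day moving average of
+ # daily mobility, using the ruptures library (Truong et al., 2020).
+ import numpy as np
+ import ruptures as rpt
+
+ daily_mobility = np.random.rand(120)                      # placeholder daily series
+ smoothed = np.convolve(daily_mobility, np.ones(7) / 7, mode="valid")  # 7-day average
+
+ algo = rpt.Pelt(model="rbf").fit(smoothed.reshape(-1, 1))
+ change_points = algo.predict(pen=5)   # indices of detected regime ends (last = series length)
+ print(change_points)
+ ```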
94
+
95
+ Finally, Figure 2 shows a box plot of the mobility variance across all users in a given time period. The distribution is long-tailed with many zeros, so we take the log of 1 plus each mobility index. While mobility is reduced in general, some users still show substantial movement, suggesting that social distancing is not being uniformly practiced. These results demonstrate that our metric can track drops in travel, suggesting that it can be used as part of ongoing pandemic response planning.
96
+
97
+ Correlation What are some of the factors that may help explain our Twitter Social Mobility Index? How well does the index track COVID-19 cases compared to other relevant factors? We analyze our data using a correlation analysis. We compute the daily infection rate by dividing the number of new confirmed COVID-19 cases in each US state ${}^{3}$ by the population of the state. We compare the daily infection rate with the social mobility index and the following factors (Raifman et al., 2020).
98
+
99
+ < g r a p h i c s >
100
+
101
+ Figure 2: User distribution of mean social mobility index (KM) before/after social distancing in United States.
102
+
103
+ * The size of the state in square miles.
104
+
105
+ * The number of homeless individuals (2019).
106
+
107
+ * The unemployment rate (2018)
108
+
109
+ * The percentage of the population at risk for serious illness due to COVID-19.
110
+
111
+ For each day we compute the correlation between the daily infection rate and the above data by state.
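+
+ A minimal sketch of this per-day computation, assuming a DataFrame of daily infection rates (rows = dates, columns = states) and a state-indexed Series holding one factor such as the social mobility index; all names and the data layout are ours:
+
+ ```python
+ # Sketch: per-day Pearson correlation across states between the daily
+ # infection rate and one state-level factor. Layout and names are assumed.
+ import pandas as pd
+ from scipy.stats import pearsonr
+
+ def daily_correlation(infection: pd.DataFrame, factor: pd.Series) -> pd.Series:
+     """infection: rows are dates, columns are states; factor: indexed by state."""
+     out = {}
+     for day, rates in infection.iterrows():
+         aligned = pd.concat([rates, factor], axis=1, join="inner").dropna()
+         out[day] = pearsonr(aligned.iloc[:, 0], aligned.iloc[:, 1])[0]
+     return pd.Series(out)
+ ```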
112
+
113
+ Figure 3 shows the correlation by day. We use the infection rate rather than the raw number of confirmed cases because raw counts largely track population, which has the highest correlation with them. However, the most strongly correlated factors in the early stage are still population-related, e.g., the number of homeless individuals. We do not see significant correlations with other factors, including the social mobility index. Starting in mid-March, the unemployment rate, the size of the state, and the social mobility index show increasing correlations, but these remain modest (absolute correlation values < 0.5). The fluctuation in the middle of the period corresponds to when states started to report confirmed cases.
114
+
115
+ Policy                                   Correlation
+ State of emergency                            0.2587
+ Date banned visitors to nursing homes         0.1510
+ Stay at home / shelter in place               0.1507
+ Froze evictions                               0.1411
+ Closed non-essential businesses               0.1359
+ Closed gyms                                   0.0765
+ Closed movie theaters                         0.0737
+ Closed day cares                              0.0563
+ Closed restaurants except take out            0.0341
+ Date closed K-12 schools                     -0.0821
151
+ Table 1: Pearson correlation between cumulative confirmed COVID-19 cases as of May 10, 2020 and the policy release date in each state.
152
+
153
+ We conduct a similar correlation analysis between each data source and the social mobility index, shown in Figure 4. As expected, geographical state size has the highest positive correlation. We also observe that the number of people at risk for serious illness due to COVID-19 has a negative correlation in the early stage of the pandemic.
154
+
155
+ Table 1 investigates the effect of various restriction policies on confirmed cases by running a similar correlation analysis on cumulative confirmed cases for each state on May 10, 2020. The policy types follow the data from Raifman et al. (2020). We use the time difference (in days) between May 10, 2020 and the policy release date as the input for the analysis, and assign a negative value (-1000) to states that have not announced the policy. The policy with the highest correlation is the declaration of a state of emergency, which is the broadest type of policy.
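+
+ A sketch of this encoding and correlation step, assuming a state-indexed Series of policy announcement dates and another of cumulative case counts; the names and layout are ours:
+
+ ```python
+ # Sketch of the Table 1 analysis: encode each policy date as days elapsed
+ # before May 10, 2020 (-1000 if the state never announced it), then take the
+ # Pearson correlation with cumulative confirmed cases. Names are assumed.
+ import pandas as pd
+ from scipy.stats import pearsonr
+
+ def policy_correlation(policy_dates: pd.Series, cum_cases: pd.Series) -> float:
+     days = (pd.Timestamp("2020-05-10") - pd.to_datetime(policy_dates)).dt.days
+     days = days.fillna(-1000)  # states that have not announced the policy
+     aligned = pd.concat([days, cum_cases], axis=1, join="inner").dropna()
+     return pearsonr(aligned.iloc[:, 0], aligned.iloc[:, 1])[0]
+ ```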
156
+
157
+ § 6 RELATED WORK
158
+
159
+ There is a long line of work on geolocation prediction for Twitter, which requires inferring a location for a specific tweet or user (Dredze et al., 2013; Zheng et al., 2018; Han et al., 2014; Pavalanathan and Eisenstein, 2015). This includes work on patterns and trends in Twitter geotagged data (Dredze et al., 2016c). While most of this work focused on a user, and thus is not suitable for tracking a user's movements, there may be opportunities to combine these methods with our approach.
160
+
161
+ There have been many studies that have analyzed Twitter geolocation data to study population movements. Hawelka et al. (2014) demonstrated a method for computing global travel patterns from Twitter, and Dredze et al. (2016b) adapted this method to support efforts in combating the Zika epidemic.
162
+
163
+ ${}^{3}$ https://github.com/CSSEGISandData/COVID-19
164
+
165
+ < g r a p h i c s >
166
+
167
+ Figure 3: Pearson correlation between daily COVID-19 infection rates and various factors at state level.
168
+
169
+ < g r a p h i c s >
170
+
171
+ Figure 4: Pearson correlation between social mobility index and various factors at state level.
172
+
173
+ Several studies have used human mobility patterns from Twitter data (Jurdak et al., 2015; Huang and Wong, 2015; Birkin et al., 2014; Hasan et al., 2013). These studies have included analyses of urban mobility patterns (Luo et al., 2016; Soliman et al., 2017; Kurkcu et al., 2016). Finally, some of these analyses have considered mobility patterns around mass events (Steiger et al., 2015).
174
+
175
+ § 7 CONCLUSION
176
+
177
+ We presented the Twitter Social Mobility Index, a measure of social mobility based on public Twitter geolocated tweets. Our analysis shows that overall in the United States there has been a large drop in mobility. However, the drop is inconsistent and varies significantly by state. It appears that states that were early adopters of social distancing practices have more significant drops than states that have not yet implemented these practices.
178
+
179
+ Our work on this data is ongoing, and there are several directions that warrant further study. First, as states begin to reopen, and some states maintain restrictions, tracking changes in population behaviors will be helpful in making policy decisions. Second, we focused on the United States, but Twitter data provides sufficient coverage for many countries to replicate our analysis. Third, for each user in the dataset there exists tweet content that can reflect a user's attitudes, beliefs, and behaviors. Studying these together with their mobility reduction could yield further insights. Our findings are presented on http://socialmobility.covid19dataresources.org and we will continue to update our analysis during the pandemic.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/qd51R0JNLl/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,172 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # What Are People Asking About COVID-19? A Question Classification Dataset
2
+
3
+ Jerry Wei ${}^{ \star }$ Chengyu Huang ${}^{ \star }$ Soroush Vosoughi ${}^{ \star }$ Jason Wei ${}^{ \star }$
4
+
5
+ Dartmouth College
6
+
7
+ jerry.weng.wei@protagolabs.com
8
+
9
+ huangchengyu24@gmail.com
10
+
11
+ \{soroush, jason.20\}@dartmouth.edu
12
+
13
+ ## Abstract
14
+
15
+ We present COVID-Q, a set of 1,690 questions about COVID-19 from 13 sources, which we annotate into 15 question categories and 207 question clusters. The most common questions in our dataset asked about transmission, prevention, and societal effects of COVID, and we found that many questions that appeared in multiple sources were not answered by any FAQ websites of reputable organizations such as the CDC and FDA. We post our dataset publicly at https://github.com/JerryWei03/COVID-Q.
16
+
17
+ For classifying questions into 15 categories, a BERT baseline scored 58.1% accuracy when trained on 20 examples per category, and for classifying questions into 89 question clusters, the baseline achieved ${54.6}\%$ accuracy. We hope COVID-Q can help either for direct use in developing applied systems or as a domain-specific resource for model evaluation.
18
+
19
+ ## 1 Introduction
20
+
21
+ A major challenge during fast-developing pandemics such as COVID-19 is keeping people updated with the latest and most relevant information. Since the beginning of COVID, several websites have created frequently asked questions (FAQ) pages that they regularly update. But even so, users might struggle to find their questions on FAQ pages, and many questions remain unanswered. In this paper, we ask: what are people really asking about COVID, and how can we use NLP to better understand questions and retrieve relevant content?
22
+
23
+ We present COVID-Q, a dataset of 1,690 questions about COVID from 13 online sources. We annotate COVID-Q by classifying questions into 15 general question categories ${}^{1}$ (see Figure 1) and by grouping questions into question clusters, for which all questions in a cluster ask the same thing and can be answered by the same answer, for a total of 207 clusters. Throughout §2, we analyze the distribution of COVID-Q in terms of question category, cluster, and source.
24
+
25
+ [Figure 1 graphic. Categories and question-cluster counts: Transmission (27), Societal Effects (23), Prevention (20), Societal Response (22), Reporting (16), Origin (10), Treatment (12), Speculation (9), Economic Effects (11), Individual Response (12), Comparison (10), Testing (9), Nomenclature (5), Having COVID (9), Symptoms (7), Other (6).]
26
+
27
+ Figure 1: Question categories in COVID-Q, with number of question clusters per category in parentheses.
28
+
29
+ COVID-Q facilitates several question understanding tasks. First, the question categories can be used for a vanilla text classification task to determine the general category of information a question is asking about. Second, the question clusters can be used for retrieval question answering (since the cluster annotations indicate questions of the same intent), where given a new question, a system aims to find a question in an existing database that asks the same thing and returns the corresponding answer (Romeo et al., 2016; Sakata et al., 2019). We provide baselines for these two tasks in §3.1 and §3.2. In addition to directly aiding applied systems development, COVID-Q could also serve as a domain-specific resource for evaluating NLP models trained on COVID data.
30
+
31
+ ---
32
+
33
+ ${}^{1}$ We do not count the "other" category.
34
+
35
+ ---
36
+
37
+ <table><tr><td rowspan="2">Source</td><td colspan="3">Questions</td><td rowspan="2">Answers</td><td rowspan="2">Questions Removed</td></tr><tr><td>Total</td><td>Multi-q-cluster</td><td>Single-q-cluster</td></tr><tr><td>Quora</td><td>675</td><td>501 (74.2%)</td><td>174 (25.8%)</td><td>0</td><td>374</td></tr><tr><td>Google Search</td><td>173</td><td>161 (93.1%)</td><td>12 (6.9%)</td><td>0</td><td>174</td></tr><tr><td>github.com/deepset-ai/COVID-QA</td><td>124</td><td>55 (44.4%)</td><td>69 (55.6%)</td><td>124</td><td>71</td></tr><tr><td>Yahoo Search</td><td>94</td><td>87 (92.6%)</td><td>7 (7.4%)</td><td>0</td><td>34</td></tr><tr><td>${}^{ * }$ Center for Disease Control</td><td>92</td><td>51 (55.4%)</td><td>41 (44.6%)</td><td>92</td><td>1</td></tr><tr><td>Bing Search</td><td>68</td><td>65 (95.6%)</td><td>3 (4.4%)</td><td>0</td><td>29</td></tr><tr><td>*Cable News Network</td><td>64</td><td>48 (75.0%)</td><td>16 (25.0%)</td><td>64</td><td>1</td></tr><tr><td>${}^{ * }$ Food and Drug Administration</td><td>57</td><td>33 (57.9%)</td><td>24 (42.1%)</td><td>57</td><td>3</td></tr><tr><td>Yahoo Answers</td><td>28</td><td>13 (46.4%)</td><td>15 (53.6%)</td><td>0</td><td>23</td></tr><tr><td>*Illinois Department of Public Health</td><td>20</td><td>18 (90.0%)</td><td>2 (10.0%)</td><td>20</td><td>0</td></tr><tr><td>*United Nations</td><td>19</td><td>18 (94.7%)</td><td>1 (5.3%)</td><td>19</td><td>6</td></tr><tr><td>*Washington DC Area Television Station</td><td>16</td><td>15 (93.8%)</td><td>1 (6.2%)</td><td>16</td><td>0</td></tr><tr><td>${}^{ * }$ Johns Hopkins University</td><td>11</td><td>10 (90.9%)</td><td>1 (9.1%)</td><td>11</td><td>1</td></tr><tr><td>Author Generated</td><td>249</td><td>249 (100.0%)</td><td>0 (0.0%)</td><td>0</td><td>0</td></tr><tr><td>Total</td><td>1,690</td><td>1,324 (78.3%)</td><td>366 (21.7%)</td><td>403</td><td>717</td></tr></table>
38
+
39
+ Table 1: Distribution of questions in COVID-Q by source. The reported number of questions excludes vague and nonsensical questions that were removed. Multi-q-cluster: number of questions that belonged to a question cluster with at least two questions; Single-q-cluster: number of questions that belonged to a question cluster with only a single question (no other question in the dataset asked the same thing). ${}^{ * }$ denotes FAQ page sources.
40
+
41
+ ## 2 Dataset Collection and Annotation
42
+
43
+ Data collection. In May 2020, we scraped questions about COVID from thirteen sources: seven official FAQ websites from recognized organizations such as the Center for Disease Control (CDC) and the Food and Drug Administration (FDA), and six crowd-based sources such as Quora and Yahoo Answers. Table 1 shows the distribution of collected questions from each source. We also post the original scraped websites for each source.
44
+
45
+ Data cleaning. We performed several preprocessing steps to remove unrelated, low-quality, and nonsensical questions. First, we deleted questions unrelated to COVID and vague questions with too many interpretations (e.g., "Why COVID?"). Second, we removed location-specific and time-specific versions of questions (e.g., "COVID deaths in New York"), since these questions do not contribute linguistic novelty (you could replace "New York" with any state, for example). Questions that only targeted one location or time, however, were not removed-for instance, "Was China responsible for COVID?" was not removed because no questions asked about any other country being responsible for the pandemic. Finally, to minimize occurrences of questions that trivially differ, we removed all punctuation and replaced synonymous ways of saying COVID, such as "coronavirus," and "COVID-19" with "covid." Table 1 also shows the number of removed questions for each source.
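+
+ As a concrete illustration of the final normalization step, a minimal routine might look like the sketch below; the lowercasing and the synonym list shown here are illustrative assumptions, not the paper's exact procedure.
+
+ ```python
+ # Sketch of the normalization step: strip punctuation and map synonymous
+ # mentions of the virus to "covid". The synonym list is partial and assumed.
+ import re
+ import string
+
+ SYNONYMS = ["covid-19", "covid 19", "coronavirus", "corona virus", "sars-cov-2"]
+
+ def normalize_question(q: str) -> str:
+     q = q.lower()
+     for s in SYNONYMS:
+         q = q.replace(s, "covid")
+     q = q.translate(str.maketrans("", "", string.punctuation))
+     return re.sub(r"\s+", " ", q).strip()
+
+ print(normalize_question("Can COVID-19 spread through food?"))  # "can covid spread through food"
+ ```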
46
+
47
+ <table><tr><td>Question Cluster [#Questions] (Category)</td><td>Example Questions</td></tr><tr><td>Pandemic Duration</td><td>"Will COVID ever go away?"</td></tr><tr><td>[28]</td><td>"Will COVID end soon?"</td></tr><tr><td>(Speculation)</td><td>"When COVID will end?"</td></tr><tr><td>Demographics: General</td><td>"Who is at higher risk?"</td></tr><tr><td>[26]</td><td>"Are kids more at risk?"</td></tr><tr><td>(Transmission)</td><td>"Who is COVID killing?"</td></tr><tr><td>Survivability: Surfaces</td><td>"Does COVID live on surfaces?"</td></tr><tr><td>[24]</td><td>"Can COVID live on paper?"</td></tr><tr><td>(Transmission)</td><td>"Can COVID live on objects?"</td></tr></table>
48
+
49
+ Table 2: Most common question clusters in COVID-Q.
50
+
51
+ Data annotation. We first annotated our dataset by grouping questions that asked the same thing together into question clusters. The first author manually compared each question with existing clusters and questions, using the definition that two questions belong in the same cluster if they have the same answer. In other words, two questions matched to the same question cluster if and only if they could be answered with a common answer. As every new example in our dataset is checked against all existing question clusters, including clusters with only one question, the time complexity for annotating our dataset is $O\left( {n}^{2}\right)$ , where $n$ is the number of questions.
52
+
53
+ After all questions were grouped into question clusters, the first author gave each question cluster with at least two questions a name summarizing the questions in that cluster, and each question cluster was assigned to one of 15 question categories (as shown in Figure 1), which were conceived during a thorough discussion with the last author. In Table 2, we show the question clusters with the most questions, along with their assigned question categories and some example questions. Figure 2 shows the distribution of question clusters.
54
+
55
+ [Figure 2 graphic: histogram with x-axis "Questions per question cluster" and y-axis "Question clusters".]
56
+
57
+ Figure 2: Number of questions per question cluster for clusters with at least two questions. All questions in a question cluster asked roughly the same thing. 120 question clusters had at least 3 questions per cluster, 66 clusters had at least 5 questions per cluster, and 22 clusters had at least 10 questions per cluster.
58
+
59
+ Annotation quality. We ran the dataset through multiple annotators to improve the quality of our annotations. First, the last author confirmed all clusters in the dataset, highlighting any questions that might need to be relabeled and discussing them with the first author. Of the 1,245 questions belonging to question clusters with at least two questions, 131 questions were highlighted and 67 labels were modified. For a second pass, an external annotator similarly read through the question cluster labels, for which 31 questions were highlighted and 15 labels were modified. Most modifications involved separating a single question cluster that was too broad into several more specific clusters.
60
+
61
+ For another round of validation, we showed three questions from each of the 89 question clusters with ${N}_{\text{cluster}} \geq 4$ to three Mechanical Turk workers, who were asked to select the correct question cluster from five choices. The majority vote from the three workers agreed with our ground-truth question-cluster labels 93.3% of the time. The three workers unanimously agreed on 58.1% of the questions, and 99.4% of these unanimous labels agreed with our ground-truth label. Workers were paid \$0.07 per question.
62
+
63
+ Finally, it is possible that some questions could fit in several categories: of 207 clusters, 40 arguably mapped to two or more categories, most frequently the transmission and prevention categories. As this annotation involves some degree of subjectivity, we post formal definitions of each question category with our dataset to make these distinctions more transparent.
64
+
65
+ Single-question clusters. Interestingly, we observe that for the CDC and FDA frequently asked questions websites, a sizable fraction of questions (44.6% for CDC and 42.1% for FDA) did not ask the same thing as questions from any other source (and therefore formed single-question clusters), suggesting that these sources might want to adjust the questions on their websites toward question clusters that appear frequently in search engines such as Google or Bing. Moreover, 54.2% of question clusters that had questions from at least two non-official sources went unanswered by an official source. In the Supplementary Materials, Table 7 shows examples of these questions, and conversely, Table 8 shows CDC and FDA questions that did not belong to the same cluster as any other question.
66
+
67
+ ## 3 Question Classification
68
+
69
+ We provide baselines for question-category classification, where each question belongs to one of 15 categories, and question-cluster classification, where questions asking the same thing belong to the same cluster (of 89 question-clusters).
70
+
71
+ As our dataset is small when split into training and test sets, we manually generate an additional author-generated evaluation set of 249 questions. For these questions, the first author wrote new questions for question clusters with 4 or 5 questions per cluster until those clusters had 6 questions. These questions were checked in the same fashion as the real questions. For clarity, we only refer to them in this section (§3) unless explicitly stated.
72
+
73
+ ### 3.1 Question-Category Classification
74
+
75
+ The question-category classification task assigns each question to one of 15 categories shown in Figure 1. For the train-test split, we randomly choose 20 questions per category for training (as the smallest category has 26 questions), with the remaining questions going into the test set (see Table 3).
76
+
77
+ ---
78
+
79
+ <table><tr><td>Question Categories</td><td>15</td></tr><tr><td>Training Questions per Category</td><td>20</td></tr><tr><td>Training Questions</td><td>300</td></tr><tr><td>Test Questions (Real)</td><td>668</td></tr><tr><td>Test Questions (Generated)</td><td>238</td></tr></table>
80
+
81
+ ---
82
+
83
+ Table 3: Data split for question-category classification.
84
+
85
+ We run simple BERT (Devlin et al., 2019) feature-extraction baselines with question representations obtained by average-pooling. For this task, we use two models: (1) an SVM and (2) cosine-similarity-based $k$-nearest neighbor classification ($k$-NN) with $k = 1$. As shown in Table 4, the SVM marginally outperforms $k$-NN on both the real and generated evaluation sets. Since our dataset is small, we also include results from using simple data augmentation techniques (Wei and Zou, 2019). Figure 3 shows the confusion matrix for BERT-feat: SVM + augmentation for this task.
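+
+ A minimal sketch of this feature-extraction pipeline is shown below; bert-base-uncased matches the model named in §4, while the toy example data, hyperparameters, and variable names are our own assumptions.
+
+ ```python
+ # Sketch of the BERT feature-extraction baseline: mean-pooled token embeddings
+ # fed to an SVM and to a cosine-distance 1-nearest-neighbor classifier.
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+ from sklearn.svm import SVC
+ from sklearn.neighbors import KNeighborsClassifier
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+ bert = AutoModel.from_pretrained("bert-base-uncased")
+
+ def embed(questions):
+     batch = tokenizer(questions, padding=True, truncation=True, return_tensors="pt")
+     with torch.no_grad():
+         hidden = bert(**batch).last_hidden_state   # (batch, tokens, 768)
+     return hidden.mean(dim=1).numpy()              # average-pool over tokens
+
+ # Toy stand-ins for the real training questions and category labels.
+ train_questions = ["Can covid spread through food?", "Should I wear a facemask?"]
+ train_labels = ["Transmission", "Prevention"]
+
+ X_train = embed(train_questions)
+ svm = SVC().fit(X_train, train_labels)
+ knn = KNeighborsClassifier(n_neighbors=1, metric="cosine").fit(X_train, train_labels)
+ print(svm.predict(embed(["Does covid live on surfaces?"])))
+ ```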
86
+
87
+ <table><tr><td>Model</td><td>Real Q</td><td>Generated Q</td></tr><tr><td>BERT-feat: $k$ -NN</td><td>47.8</td><td>52.1</td></tr><tr><td>+ augmentation</td><td>47.3</td><td>52.5</td></tr><tr><td>BERT-feat: SVM</td><td>52.2</td><td>53.4</td></tr><tr><td>+ augmentation</td><td>58.1</td><td>58.8</td></tr></table>
88
+
89
+ Table 4: Performance of BERT baselines (accuracy in %) on question-category classification with 15 categories and 20 training examples per category.
90
+
91
+ ### 3.2 Question-Cluster Classification
92
+
93
+ Of a more granular nature, the question-cluster classification task requires a new test question to be grouped into a question cluster that asks the same thing, similar to retrieval QA contexts. For this task, we only consider question clusters with at least 4 questions per cluster, and we split 3 questions from each cluster into the training set and the remaining questions into the test set, as shown in Table 5.
94
+
95
+ ---
96
+
97
+ <table><tr><td>Question Clusters with ${N}_{cluster} \geq 4$</td><td>89</td></tr><tr><td>Training Questions per Cluster</td><td>3</td></tr><tr><td>Training Questions</td><td>267</td></tr><tr><td>Test Questions (Real)</td><td>460</td></tr><tr><td>Test Questions (Generated)</td><td>131</td></tr></table>
98
+
99
+ ---
100
+
101
+ Table 5: Data split for question-cluster classification.
102
+
103
+ As this dataset has fewer questions per cluster, we use the $k$-NN baseline from §3.1. We also evaluate a simple model that uses a triplet loss function to train a two-layer neural net on BERT features, a method introduced for facial recognition (Schroff et al., 2015) and now used in NLP for few-shot learning (Yu et al., 2018) and answer selection (Kumar et al., 2019). In Table 6, we show top-1 and top-5 prediction accuracies for these two models.
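+
+ The triplet-loss baseline can be sketched as follows: a two-layer network over frozen BERT features is trained so that questions from the same cluster embed closer together than questions from different clusters. The layer sizes, margin, optimizer settings, and random stand-in features below are illustrative assumptions, not the paper's exact configuration.
+
+ ```python
+ # Sketch of the triplet-loss baseline over frozen BERT features.
+ import torch
+ import torch.nn as nn
+
+ class TripletEncoder(nn.Module):
+     def __init__(self, in_dim=768, hidden=256, out_dim=128):
+         super().__init__()
+         self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
+                                  nn.Linear(hidden, out_dim))
+
+     def forward(self, x):
+         return self.net(x)
+
+ encoder = TripletEncoder()
+ loss_fn = nn.TripletMarginLoss(margin=1.0)
+ optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
+
+ # anchor/positive share a question cluster; negative comes from another cluster.
+ anchor, positive, negative = (torch.randn(8, 768) for _ in range(3))  # stand-ins for BERT features
+ optimizer.zero_grad()
+ loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
+ loss.backward()
+ optimizer.step()
+ ```
+
+ At test time, a new question would be embedded the same way and assigned to the cluster of its nearest training question in the learned space.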
104
+
105
+ We find that data augmentation improves performance for most baselines, possibly due to the small size and restricted scope of our dataset increasing the utility of augmented data (e.g., there are fewer ways to ask how long COVID will last than ways to write a positive movie review). One drawback of data augmentation, however, is that for $n$ augmented questions per original question, evaluation time for $k$ -NN classification increases by up to $O\left( n\right)$ , and training time for triplet loss classification increases by up to $O\left( {n}^{2}\right)$ .
106
+
107
+ <table><tr><td rowspan="2">Model</td><td colspan="2">Real Q</td><td colspan="2">Generated Q</td></tr><tr><td>Top-1</td><td>Top-5</td><td>Top-1</td><td>Top-5</td></tr><tr><td>BERT-feat: $k$ -NN</td><td>29.6</td><td>50.3</td><td>20.8</td><td>38.5</td></tr><tr><td>+ augmentation</td><td>30.5</td><td>52.3</td><td>20.8</td><td>39.2</td></tr><tr><td>BERT-feat: triplet loss</td><td>42.4</td><td>71.5</td><td>57.3</td><td>80.9</td></tr><tr><td>+ augmentation</td><td>54.6</td><td>78.9</td><td>60.3</td><td>83.2</td></tr></table>
108
+
109
+ Table 6: Performance of BERT baselines (accuracy in %) on question-cluster classification with 89 clusters and 3 examples per cluster in the training set.
110
+
111
+ ## 4 Discussion
112
+
113
+ Use cases. We imagine several use cases for COVID-Q. Our question clusters could help train and evaluate retrieval-QA systems, such as covid.deepset.ai or covid19.dialogue.co, which, given a new question, aim to retrieve the corresponding QA pair in an existing database. Another relevant context is query understanding, as clusters identify queries of the same intent, and categories identify queries asking about the same topic. Finally, COVID-Q could be used broadly to evaluate COVID-specific models: our baseline (Hugging Face's bert-base-uncased) does not even have COVID in the vocabulary, and so we suspect that models pre-trained on scientific or COVID-specific data will outperform our baseline. More related areas include COVID-related query expansion, suggestion, and rewriting.
114
+
115
+ Limitations. Our dataset was collected in May 2020, and we see it as a snapshot in time of questions asked up until then. As the COVID situation further develops, a host of new questions will arise, and the content of these new questions will potentially not be covered by any existing clusters in our dataset. The question categories, on the other hand, are more likely to remain static (i.e., new questions would likely map to an existing category), but the current way that we came up with the categories might be considered subjective; we leave that determination to the reader (refer to Table 9 or the raw dataset on Github). Finally, although the distribution of questions per cluster is highly skewed (Figure 2), we still provide them at least as a reference for applied scenarios where it would be useful to know the number of queries asking the same thing (and perhaps how many answers are needed to answer the majority of questions asked).
116
+
117
+ ## References
118
+
119
+ Muhammad Abdul-Mageed, AbdelRahim Elmadany, Dinesh Pabbi, Kunal Verma, and Rannie Lin. 2020. Mega-cov: A billion-scale dataset of 65 languages for covid-19. ArXiv, abs/2005.06012. https://arxiv.org/pdf/2005.06012.pdf.
120
+
121
+ Emily Chen, Kristina Lerman, and Emilio Ferrara. 2020. Covid-19: The first public coronavirus twitter dataset. ArXiv, abs/2003.07372. https://arxiv.org/pdf/2003.07372.pdf.
122
+
123
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. https://www.aclweb.org/anthology/N19-1423.pdf.
124
+
125
+ Zhiwei Gao, Shuntaro Yada, Shoko Wakamiya, and Eiji Aramaki. 2020. Naist covid: Multilingual covid-19 twitter and weibo dataset. ArXiv, abs/2004.08145. https://arxiv.org/pdf/2004.08145.pdf.
126
+
127
+ Ting-Hao Huang, Chieh-Yang Huang, Chien-Kuang Cornelia Ding, Yen-Chia Hsu, and C. Lee Giles. 2020. Coda-19: Reliably annotating research aspects on 10,000+ CORD-19 abstracts using non-expert crowd. ArXiv, abs/2005.02367. https://arxiv.org/pdf/2005.02367.pdf.
128
+
129
+ Bennett Kleinberg, Isabelle van der Vegt, and Maximilian Mozes. 2020. Measuring emotions in the covid- 19 real world worry dataset. ArXiv, abs/2004.04225. https://arxiv.org/pdf/2004.04225.pdf.
130
+
131
+ Sawan Kumar, Shweta Garg, Kartik Mehta, and Nikhil Rasiwasia. 2019. Improving answer selection and answer triggering using hard negatives. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5911-5917. Association for Computational Linguistics. https://www.aclweb.org/anthology/D19-1604.
132
+
133
+ Salvatore Romeo, Giovanni Da San Martino, Alberto Barrón-Cedeño, Alessandro Moschitti, Yonatan Belinkov, Wei-Ning Hsu, Yu Zhang, Mitra Mohtarami, and James Glass. 2016. Neural attention for learning to rank questions in community question answering. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1734-1745, Osaka, Japan. The COLING 2016 Organizing Committee. https://www.aclweb.org/anthology/C16-1163.pdf.
134
+
135
+ Wataru Sakata, Tomohide Shibata, Ribeka Tanaka, and Sadao Kurohashi. 2019. FAQ retrieval using query-question similarity and bert-based query-answer relevance. CoRR, abs/1905.02851. https://arxiv.org/pdf/1905.02851.pdf.
136
+
137
+ A. Sarker, S. Lakamana, William E. Hogg, Allen Xie, Mohammed Ali Al-garadi, and Yc Yang. 2020. Self-reported covid-19 symptoms on twitter: An analysis and a research resource. In medRxiv. https://www.medrxiv.org/content/10.1101/2020.04.16.20067421v3.full.pdf.
138
+
139
+ Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. CoRR, abs/1503.03832. http://arxiv.org/abs/1503.03832.
140
+
141
+ Jason Wei and Kai Zou. 2019. EDA: easy data augmentation techniques for boosting performance on text classification tasks. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). http://dx.doi.org/10.18653/v1/D19-1670.
142
+
143
+ Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, and Bowen Zhou. 2018. Diverse few-shot text classification with multiple metrics. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1206-1215. Association for Computational Linguistics. https://www.aclweb.org/anthology/N18-1109.
144
+
145
+ Koosha Zarei, Reza Farahbakhsh, Noel Crespi, and Gareth Tyson. 2020. A first instagram dataset on covid-19. ArXiv, abs/2004.12226. https://arxiv.org/pdf/2004.12226.pdf.
146
+
147
+ ## Supplementary Materials
148
+
149
+ Question mismatches. Table 7 shows example questions that appeared in at least two non-official sources but went unanswered by any official source. Table 8 shows example questions from the FDA and CDC FAQ websites that did not ask the same thing as any other questions in our dataset.
150
+
151
+ <table><tr><td>Question Cluster</td><td>${N}_{cluster}$</td><td>Example Questions</td></tr><tr><td>Number of Cases</td><td>21</td><td>"Are COVID cases dropping?" "Have COVID cases peaked?" "Are COVID cases decreasing?"</td></tr><tr><td>Mutation</td><td>19</td><td>"Has COVID mutated?" "Did COVID mutate?" "Will COVID mutate?"</td></tr><tr><td>Lab Theory</td><td>18</td><td>"Was COVID made in a lab?" "Was COVID manufactured?" "Did COVID start in a lab?"</td></tr></table>
152
+
153
+ Table 7: Questions appearing in multiple sources that were unanswered by official FAQ websites.
154
+
155
+ Question-category classification error analysis. In Figure 3, we show the confusion matrix for our SVM classifier on the question-category classification task on the test set of real questions. Some categories that were challenging to distinguish were Transmission and Having COVID (34% error rate), and Having COVID and Symptoms (33% error rate). Table 9 shows sample questions from each of the 15 question categories.
156
+
157
+ Corresponding Answers. The FAQ websites from reputable sources (denoted with ${}^{ * }$ in Table 1) provide answers to their questions, and so we also provide them as an auxiliary resource. Using these answers, 23.8% of question clusters have at least one corresponding answer. We caution against using these answers in applied settings, however, because information on COVID changes rapidly.
158
+
159
+ Other COVID-19 datasets. We encourage researchers to also explore other COVID-19 datasets: tweets streamed since January 22 (Chen et al., 2020), location-tagged tweets in 65 languages (Abdul-Mageed et al., 2020), tweets of COVID symptoms (Sarker et al., 2020), a multi-lingual Twitter and Weibo dataset (Gao et al., 2020), an Instagram dataset (Zarei et al., 2020), emotional responses to COVID (Kleinberg et al., 2020), and annotated research abstracts (Huang et al., 2020).
160
+
161
+ <table><tr><td colspan="2">Food and Drug Administration</td></tr><tr><td>Question</td><td>Closest Matches from BERT</td></tr><tr><td>"Can I donate convalescent plasma?"</td><td>"Why is convalescent plasma being investigated to treat COVID?" "Can I make my own hand sanitizer?" "What are suggestions for things to do in the COVID quarantine?"</td></tr><tr><td>"Where can I report websites selling fraudulent medical products?"</td><td>"What kind of masks are recommended to protect healthcare workers from COVID exposure?" "Where can I get tested for COVID?" "How do testing kits for COVID detect the virus?"</td></tr><tr><td colspan="2">Center for Disease Control</td></tr><tr><td>Question</td><td>Closest Matches from BERT</td></tr><tr><td>"What is the difference between cleaning and disinfecting?"</td><td>"How effective are alternative disinfection methods?" "Why has Trump stated that injecting disinfectant will kill COVID in a minute?" "Should I spray myself or my kids with disinfectant?"</td></tr><tr><td>"How frequently should facilities be cleaned to reduce the potential spread of COVID?"</td><td>"What is the survival rate of those infected by COVID who are put on a ventilator?" "What kind of masks are recommended to protect healthcare workers from COVID exposure?" "Will warm weather stop the outbreak of COVID?"</td></tr></table>
162
+
163
+ Table 8: Questions from the Food and Drug Administration (FDA) and Center for Disease Control (CDC) FAQ websites that did not ask the same thing as any questions from other sources.
164
+
165
+ [Figure 3 graphic: 15 x 15 confusion matrix with ground-truth question categories as rows and predicted categories as columns.]
166
+
167
+ Figure 3: Confusion matrix for BERT-feat: SVM predictions on the question-category classification task.
168
+
169
+ <table><tr><td>Category</td><td>Example Questions</td></tr><tr><td>Transmission</td><td>"Can COVID spread through food?" "Can COVID spread through water?" "Is COVID airborne?"</td></tr><tr><td>Societal Effects</td><td>"In what way have people been affected by COVID?" "How will COVID change the world?" "Do you think there will be more racism during COVID?"</td></tr><tr><td>Prevention</td><td>"Should I wear a facemask?" "How can I prevent COVID?" "What disinfectants kill the COVID virus?"</td></tr><tr><td>Societal Response</td><td>"Have COVID checks been issued?" "What are the steps that a hospital should take after COVID outbreak?" "Are we blowing COVID out of proportion?"</td></tr><tr><td>Reporting</td><td>"Is COVID worse than we are being told?" "What is the COVID fatality rate?" "What is the most reliable COVID model right now?"</td></tr><tr><td>Origin</td><td>"Where did COVID originate?" "Did COVID start in a lab?" "Was COVID a bioweapon?"</td></tr><tr><td>Treatment</td><td>"What treatments are available for COVID?" "Should COVID patients be ventilated?" "Should I spray myself or my kids with disinfectant?"</td></tr><tr><td>Speculation</td><td>"Was COVID predicted?" "Will COVID return next year?" "How long will we be on lockdown for COVID?"</td></tr><tr><td>Economic Effects</td><td>"What is the impact of COVID on the global economy?" "What industries will never be the same because of COVID?" "Why are stock markets dipping in response to COVID?"</td></tr><tr><td>Individual Response</td><td>"How do I stay positive with COVID?" "What are suggestions for things to do in the COVID quarantine?" "Can I still travel?"</td></tr><tr><td>Comparison</td><td>"How are COVID and SARS-COV similar?" "How can I tell if I have the flu or COVID?" "How does COVID compare to other viruses?"</td></tr><tr><td>Testing</td><td>"How COVID test is done?" "Are COVID tests accurate?" "Should I be tested for COVID?"</td></tr><tr><td>Nomenclature</td><td>"Should COVID be capitalized?" "What COVID stands for?" "What is the genus of the SARS-COVID?"</td></tr><tr><td>Having COVID</td><td>"How long does it take to recover?" "How COVID attacks the body?" "How long is the incubation period for COVID?"</td></tr><tr><td>Symptoms</td><td>"What are the symptoms of COVID?" "Which COVID symptoms come first?" "Do COVID symptoms come on quickly?"</td></tr></table>
170
+
171
+ Table 9: Sample questions from each of the 15 question categories.
172
+
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/qd51R0JNLl/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,221 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § WHAT ARE PEOPLE ASKING ABOUT COVID-19? A QUESTION CLASSIFICATION DATASET
2
+
3
+ Jerry Wei ${}^{ \star }$ Chengyu Huang ${}^{ \star }$ Soroush Vosoughi ${}^{ \star }$ Jason Wei ${}^{ \star }$
4
+
5
+ Dartmouth College
6
+
7
+ jerry.weng.wei@protagolabs.com
8
+
9
+ huangchengyu24@gmail.com
10
+
11
+ {soroush, jason.20}@dartmouth.edu
12
+
13
+ § ABSTRACT
14
+
15
+ We present COVID-Q, a set of 1,690 questions about COVID-19 from 13 sources, which we annotate into 15 question categories and 207 question clusters. The most common questions in our dataset asked about transmission, prevention, and societal effects of COVID, and we found that many questions that appeared in multiple sources were not answered by any FAQ websites of reputable organizations such as the CDC and FDA. We post our dataset publicly at https://github.com/JerryWei03/COVID-Q.
16
+
17
+ For classifying questions into 15 categories, a BERT baseline scored 58.1% accuracy when trained on 20 examples per category, and for classifying questions into 89 question clusters, the baseline achieved ${54.6}\%$ accuracy. We hope COVID-Q can help either for direct use in developing applied systems or as a domain-specific resource for model evaluation.
18
+
19
+ § 1 INTRODUCTION
20
+
21
+ A major challenge during fast-developing pandemics such as COVID-19 is keeping people updated with the latest and most relevant information. Since the beginning of COVID, several web-sites have created frequently asked questions (FAQ) pages that they regularly update. But even so, users might struggle to find their questions on FAQ pages, and many questions remain unanswered. In this paper, we ask-what are people really asking about COVID, and how can we use NLP to better understand questions and retrieve relevant content?
22
+
23
+ We present COVID-Q, a dataset of 1,690 questions about COVID from 13 online sources. We annotate COVID-Q by classifying questions into 15 general question categories ${}^{1}$ (see Figure 1) and by grouping questions into question clusters, for which all questions in a cluster ask the same thing and can be answered by the same answer, for a total of 207 clusters. Throughout $\$ 2$ , we analyze the distribution of COVID-Q in terms of question category, cluster, and source.
24
+
25
+ < g r a p h i c s >
26
+
27
+ Figure 1: Question categories in COVID-Q, with number of question clusters per category in parentheses.
28
+
29
+ COVID-Q facilitates several question understanding tasks. First, the question categories can be used for a vanilla text classification task to determine the general category of information a question is asking about. Second, the question clusters can be used for retrieval question answering (since the cluster annotations indicate questions of same intent), where given a new question, a system aims to find a question in an existing database that asks the same thing and returns the corresponding answer (Romeo et al., 2016; Sakata et al., 2019). We provide baselines for these two tasks in $§{3.1}$ and §3.2. In addition to directly aiding applied systems development, COVID-Q could also serve as a domain-specific resource for evaluating NLP models trained on COVID data.
30
+
31
+ ${}^{1}$ We do not count the "other" category.
32
+
33
+ max width=
34
+
35
+ 2*Source 3|c|Questions 2*Answers 2*Questions Removed
36
+
37
+ 2-4
38
+ Total Multi-q-cluster Single-q-cluster
39
+
40
+ 1-6
41
+ Quora 675 501 (74.2%) 174 (25.8%) 0 374
42
+
43
+ 1-6
44
+ Google Search 173 161 (93.1%) 12 (6.9%) 0 174
45
+
46
+ 1-6
47
+ github.com/deepset-ai/COVID-QA 124 55 (44.4%) 69 (55.6%) 124 71
48
+
49
+ 1-6
50
+ Yahoo Search 94 87 (92.6%) 7 (7.4%) 0 34
51
+
52
+ 1-6
53
+ ${}^{ * }$ Center for Disease Control 92 51 (55.4%) 41 (44.6%) 92 1
54
+
55
+ 1-6
56
+ Bing Search 68 65 (95.6%) 3 (4.4%) 0 29
57
+
58
+ 1-6
59
+ *Cable News Network 64 48 (75.0%) 16 (25.0%) 64 1
60
+
61
+ 1-6
62
+ ${}^{ * }$ Food and Drug Administration 57 33 (57.9%) 24 (42.1%) 57 3
63
+
64
+ 1-6
65
+ Yahoo Answers 28 13 (46.4%) 15 (53.6%) 0 23
66
+
67
+ 1-6
68
+ *Illinois Department of Public Health 20 18 (90.0%) 2 (10.0%) 20 0
69
+
70
+ 1-6
71
+ *United Nations 19 18 (94.7%) 1 (5.3%) 19 6
72
+
73
+ 1-6
74
+ *Washington DC Area Television Station 16 15 (93.8%) 1 (6.2%) 16 0
75
+
76
+ 1-6
77
+ ${}^{ * }$ Johns Hopkins University 11 10 (90.9%) 1 (9.1%) 11 1
78
+
79
+ 1-6
80
+ Author Generated 249 249 (100.0%) 0 (0.0%) 0 0
81
+
82
+ 1-6
83
+ Total 1,690 1,324 (78.3%) 366 (21.7%) 403 717
84
+
85
+ 1-6
86
+
87
+ Table 1: Distribution of questions in COVID-Q by source. The reported number of questions excludes vague and nonsensical questions that were removed. Multi-q-cluster: number of questions that belonged to a question cluster with at least two questions; Single-q-cluster: number of questions that belonged to a question cluster with only a single question (no other question in the dataset asked the same thing). ${}^{ * }$ denotes FAQ page sources.
88
+
89
+ § 2 DATASET COLLECTION AND ANNOTATION
90
+
91
+ Data collection. In May 2020, we scraped questions about COVID from thirteen sources: seven official FAQ websites from recognized organizations such as the Center for Disease Control (CDC) and the Food and Drug Administration (FDA), and six crowd-based sources such as Quora and Yahoo Answers. Table 1 shows the distribution of collected questions from each source. We also post the original scraped websites for each source.
92
+
93
+ Data cleaning. We performed several preprocessing steps to remove unrelated, low-quality, and nonsensical questions. First, we deleted questions unrelated to COVID and vague questions with too many interpretations (e.g., "Why COVID?"). Second, we removed location-specific and time-specific versions of questions (e.g., "COVID deaths in New York"), since these questions do not contribute linguistic novelty (you could replace "New York" with any state, for example). Questions that only targeted one location or time, however, were not removed-for instance, "Was China responsible for COVID?" was not removed because no questions asked about any other country being responsible for the pandemic. Finally, to minimize occurrences of questions that trivially differ, we removed all punctuation and replaced synonymous ways of saying COVID, such as "coronavirus," and "COVID-19" with "covid." Table 1 also shows the number of removed questions for each source.
94
+
95
+ max width=
96
+
97
+ Question Cluster [#Questions] (Category) Example Questions
98
+
99
+ 1-2
100
+ Pandemic Duration "Will COVID ever go away?"
101
+
102
+ 1-2
103
+ [28] "Will COVID end soon?"
104
+
105
+ 1-2
106
+ (Speculation) "When COVID will end?"
107
+
108
+ 1-2
109
+ Demographics: General "Who is at higher risk?"
110
+
111
+ 1-2
112
+ [26] "Are kids more at risk?"
113
+
114
+ 1-2
115
+ (Transmission) "Who is COVID killing?"
116
+
117
+ 1-2
118
+ Survivability: Surfaces "Does COVID live on surfaces?"
119
+
120
+ 1-2
121
+ [24] "Can COVID live on paper?"
122
+
123
+ 1-2
124
+ (Transmission) "Can COVID live on objects?"
125
+
126
+ 1-2
127
+
128
+ Table 2: Most common question clusters in COVID-Q.
129
+
130
+ Data annotation. We first annotated our dataset by grouping questions that asked the same thing together into question clusters. The first author manually compared each question with existing clusters and questions, using the definition that two questions belong in the same cluster if they have the same answer. In other words, two questions matched to the same question cluster if and only if they could be answered with a common answer. As every new example in our dataset is checked against all existing question clusters, including clusters with only one question, the time complexity for annotating our dataset is $O\left( {n}^{2}\right)$ , where $n$ is the number of questions.
131
+
132
+ After all questions were grouped into question clusters, the first author gave each question cluster with at least two questions a name summarizing the questions in that cluster, and each question cluster was assigned to one of 15 question categories (as shown in Figure 1), which were conceived during a thorough discussion with the last author. In Table 2, we show the question clusters with the most questions, along with their assigned question categories and some example questions. Figure 2 shows the distribution of question clusters.
133
+
134
+ < g r a p h i c s >
135
+
136
+ Figure 2: Number of questions per question cluster for clusters with at least two questions. All questions in a question cluster asked roughly the same thing. 120 question clusters had at least 3 questions per cluster, 66 clusters had at least 5 questions per cluster, and 22 clusters had at least 10 questions per cluster.
137
+
138
+ Annotation quality. We ran the dataset through multiple annotators to improve the quality of our annotations. First, the last author confirmed all clusters in the dataset, highlighting any questions that might need to be relabeled and discussing them with the first author. Of the 1,245 questions belonging to question clusters with at least two questions, 131 questions were highlighted and 67 labels were modified. For a second pass, an external annotator similarly read through the question cluster labels, for which 31 questions were highlighted and 15 labels were modified. Most modifications involved separating a single question cluster that was too broad into several more specific clusters.
139
+
140
+ For another round of validation, we showed three questions from each of the 89 question clusters with ${N}_{\text{ cluster }} \geq 4$ to three Mechanical Turk workers, who were asked to select the correct question cluster from five choices. The majority vote from the three workers agreed with our ground-truth question-cluster labels 93.3% of the time. The three workers unanimously agreed on ${58.1}\%$ of the questions, for which 99.4% of these unanimous labels agreed with our ground-truth label. Workers were paid $\$ {0.07}$ per question.
141
+
142
+ Finally, it is possible that some questions could fit in several categories-of 207 clusters, 40 arguably mapped to two or more categories, most frequently the transmission and prevention categories. As this annotation involves some degree of subjectivity, we post with our dataset formal definitions of each question category to make these distinctions more transparent.
143
+
144
+ Single-question clusters. Interestingly, we observe that for the CDC and FDA frequently asked questions websites, a sizable fraction of questions (44.6% for CDC and 42.1% for FDA) did not ask the same thing as questions from any other source (and therefore formed single-question clusters), suggesting that these sources might want adjust the questions on their websites to question clusters that were seen frequently in search engines such as Google or Bing. Moreover, 54.2% of question clusters that had questions from at least two non-official sources went unanswered by an official source. In the Supplementary Materials, Table 7 shows examples of these questions, and conversely, Table 8 shows CDC and FDA questions that did not belong to the same cluster as any other question.
145
+
146
+ § 3 QUESTION CLASSIFICATION
147
+
148
+ We provide baselines for question-category classification, where each question belongs to one of 15 categories, and question-cluster classification, where questions asking the same thing belong to the same cluster (of 89 question-clusters).
149
+
150
+ As our dataset is small when split into training and test sets, we additionally create an author-generated evaluation set of 249 questions. For these questions, the first author wrote new questions for question clusters with 4 or 5 questions per cluster until those clusters had 6 questions. These questions were checked in the same fashion as the real questions, and unless explicitly stated otherwise, they are only used in this section (§3).
151
+
152
+ § 3.1 QUESTION-CATEGORY CLASSIFICATION
153
+
154
+ The question-category classification task assigns each question to one of 15 categories shown in Figure 1. For the train-test split, we randomly choose 20 questions per category for training (as the smallest category has 26 questions), with the remaining questions going into the test set (see Table 3).
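+
+ A minimal sketch of this per-category split is below, assuming the data is available as (question, category) pairs; the field layout and seed are illustrative rather than the dataset's actual schema.
+
+ ```python
+ import random
+ from collections import defaultdict
+
+ def split_by_category(examples, n_train=20, seed=0):
+     """examples: list of (question, category) pairs."""
+     random.seed(seed)
+     by_cat = defaultdict(list)
+     for question, category in examples:
+         by_cat[category].append(question)
+     train, test = [], []
+     for category, questions in by_cat.items():
+         random.shuffle(questions)
+         train += [(q, category) for q in questions[:n_train]]
+         test += [(q, category) for q in questions[n_train:]]
+     return train, test
+ ```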
155
+
156
+ <table><tr><td>Question Categories</td><td>15</td></tr><tr><td>Training Questions per Category</td><td>20</td></tr><tr><td>Training Questions</td><td>300</td></tr><tr><td>Test Questions (Real)</td><td>668</td></tr><tr><td>Test Questions (Generated)</td><td>238</td></tr></table>
157
+
158
+ Table 3: Data split for question-category classification.
159
+
160
+ We run simple BERT (Devlin et al., 2019) feature-extraction baselines with question representations obtained by average-pooling. For this task, we use two models: (1) SVM and (2) cosine-similarity-based $k$-nearest neighbor classification ($k$-NN) with $k = 1$. As shown in Table 4, the SVM marginally outperforms $k$-NN on both the real and generated evaluation sets. Since our dataset is small, we also include results from using simple data augmentation techniques (Wei and Zou, 2019). Figure 3 shows the confusion matrix for BERT-feat: SVM + augmentation for this task.
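+
+ As a rough illustration of this kind of feature-extraction baseline, the snippet below mean-pools BERT token embeddings and feeds them to an SVM and a cosine-similarity 1-NN classifier; the model name, kernel, and toy data are assumptions, not the exact configuration used here.
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModel
+ from sklearn.svm import SVC
+ from sklearn.neighbors import KNeighborsClassifier
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+ bert = AutoModel.from_pretrained("bert-base-uncased")
+
+ def embed(questions):
+     """Mean-pool the last hidden layer over non-padding tokens."""
+     enc = tokenizer(questions, padding=True, truncation=True, return_tensors="pt")
+     with torch.no_grad():
+         hidden = bert(**enc).last_hidden_state            # (batch, seq_len, 768)
+     mask = enc["attention_mask"].unsqueeze(-1)             # (batch, seq_len, 1)
+     return ((hidden * mask).sum(1) / mask.sum(1)).numpy()
+
+ # Toy stand-ins for the real training questions and category labels.
+ X_train = ["Can masks prevent COVID?", "How long does COVID illness last?"]
+ y_train = ["Prevention", "Having COVID"]
+ X_test = ["Do face masks help against the virus?"]
+
+ train_feats, test_feats = embed(X_train), embed(X_test)
+ svm = SVC(kernel="linear").fit(train_feats, y_train)
+ knn = KNeighborsClassifier(n_neighbors=1, metric="cosine").fit(train_feats, y_train)
+ print(svm.predict(test_feats), knn.predict(test_feats))
+ ```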
161
+
162
+ Model                     Real Q   Generated Q
+ BERT-feat: $k$-NN         47.8     52.1
+ + augmentation            47.3     52.5
+ BERT-feat: SVM            52.2     53.4
+ + augmentation            58.1     58.8
+
180
+ Table 4: Performance of BERT baselines (accuracy in %) on question-category classification with 15 categories and 20 training examples per category.
181
+
182
+ § 3.2 QUESTION-CLUSTER CLASSIFICATION
183
+
184
+ The question-cluster classification task is more granular: a new test question must be grouped into the question cluster that asks the same thing, similar to retrieval-QA settings. For this task, we only consider question clusters with at least 4 questions per cluster, and we split 3 questions from each cluster into the training set and the remaining questions into the test set, as shown in Table 5.
185
+
186
+ <table><tr><td>Question Clusters with ${N}_{cluster} \geq 4$ </td><td>89</td></tr><tr><td>Training Questions per Cluster</td><td>3</td></tr><tr><td>Training Questions</td><td>267</td></tr><tr><td>Test Questions (Real)</td><td>460</td></tr><tr><td>Test Questions (Generated)</td><td>131</td></tr></table>
187
+
188
+ Table 5: Data split for question-cluster classification.
189
+
190
+ As this dataset has fewer questions per cluster, we use the $k$ -NN baseline from $§{3.1}$ . We also evaluate a simple model that uses a triplet loss function to train a two-layer neural net on BERT features, a method introduced for facial recognition (Schroff et al., 2015) and now used in NLP for few-shot learning (Yu et al., 2018) and answer selection (Kumar et al., 2019). In Table 6, we show top-1 and top-5 prediction accuracies for these two models.
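+
+ The sketch below shows one way to set up such a model: a two-layer projection head trained on pre-computed BERT features with PyTorch's built-in triplet margin loss. The dimensions, margin, and random toy triplets are assumptions rather than the paper's exact setup.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class ProjectionHead(nn.Module):
+     """Two-layer net mapping BERT features into a metric space."""
+     def __init__(self, in_dim=768, hidden=256, out_dim=128):
+         super().__init__()
+         self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
+                                  nn.Linear(hidden, out_dim))
+
+     def forward(self, x):
+         return self.net(x)
+
+ model = ProjectionHead()
+ criterion = nn.TripletMarginLoss(margin=1.0)
+ optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
+
+ # Toy triplets: anchor/positive from the same question cluster, negative from another.
+ anchor, positive, negative = (torch.randn(32, 768) for _ in range(3))
+
+ for step in range(100):
+     optimizer.zero_grad()
+     loss = criterion(model(anchor), model(positive), model(negative))
+     loss.backward()
+     optimizer.step()
+
+ # At test time, a new question is assigned to the cluster whose training
+ # embeddings are nearest in the learned space.
+ ```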
191
+
192
+ We find that data augmentation improves performance for most baselines, possibly due to the small size and restricted scope of our dataset increasing the utility of augmented data (e.g., there are fewer ways to ask how long COVID will last than ways to write a positive movie review). One drawback of data augmentation, however, is that for $n$ augmented questions per original question, evaluation time for $k$ -NN classification increases by up to $O\left( n\right)$ , and training time for triplet loss classification increases by up to $O\left( {n}^{2}\right)$ .
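+
+ For reference, two of the four EDA operations (random swap and random deletion) can be written in a few lines; synonym replacement and random insertion additionally need a synonym resource such as WordNet. This is a generic sketch, not the exact augmentation code used for these experiments.
+
+ ```python
+ import random
+
+ def random_swap(words, n=1):
+     words = words.copy()
+     for _ in range(n):
+         if len(words) < 2:
+             break
+         i, j = random.sample(range(len(words)), 2)
+         words[i], words[j] = words[j], words[i]
+     return words
+
+ def random_deletion(words, p=0.1):
+     kept = [w for w in words if random.random() > p]
+     return kept if kept else [random.choice(words)]
+
+ def augment(question, n_aug=4):
+     words = question.split()
+     ops = [random_swap, random_deletion]
+     return [" ".join(random.choice(ops)(words)) for _ in range(n_aug)]
+
+ print(augment("how long can the coronavirus survive on surfaces"))
+ ```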
193
+
194
+ Model                      Real Q           Generated Q
+                            Top-1   Top-5    Top-1   Top-5
+ BERT-feat: $k$-NN          29.6    50.3     20.8    38.5
+ + augmentation             30.5    52.3     20.8    39.2
+ BERT-feat: triplet loss    42.4    71.5     57.3    80.9
+ + augmentation             54.6    78.9     60.3    83.2
+
215
+ Table 6: Performance of BERT baselines (accuracy in %) on question-cluster classification with 89 clusters and 3 examples per cluster in the training set.
216
+
217
+ § 4 DISCUSSION
218
+
219
+ Use cases. We imagine several use cases for COVID-Q. Our question clusters could help train and evaluate retrieval-QA systems, such as covid.deepset.ai or covid19.dialogue.co, which, given a new question, aim to retrieve the corresponding QA pair in an existing database. Another relevant context is query understanding, as clusters identify queries of the same intent, and categories identify queries asking about the same topic. Finally, COVID-Q could be used broadly to evaluate COVID-specific models: our baseline (Hugging Face's bert-base-uncased) does not even have COVID in the vocabulary, and so we suspect that models pre-trained on scientific or COVID-specific data will outperform our baseline. Other related areas include COVID-related query expansion, suggestion, and rewriting.
220
+
221
+ Limitations. Our dataset was collected in May 2020, and we see it as a snapshot in time of questions asked up until then. As the COVID situation further develops, a host of new questions will arise, and the content of these new questions will potentially not be covered by any existing clusters in our dataset. The question categories, on the other hand, are more likely to remain static (i.e., new questions would likely map to an existing category), but the way that we came up with the categories might be considered subjective; we leave that determination to the reader (refer to Table 9 or the raw dataset on Github). Finally, although the distribution of questions per cluster is highly skewed (Figure 2), we still provide these counts as a reference for applied scenarios where it is useful to know how many queries ask the same thing (and perhaps how many answers are needed to cover the majority of questions asked).
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/ub9_2iAo3D/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,143 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Cross-lingual Transfer Learning for COVID-19 Outbreak Alignment
2
+
3
+ Sharon Levy and William Yang Wang
4
+
5
+ University of California, Santa Barbara
6
+
7
+ Santa Barbara, CA 93106
8
+
9
+ \{sharonlevy, william\}@cs.ucsb.edu
10
+
11
+ ## Abstract
12
+
13
+ The spread of COVID-19 has become a significant and troubling aspect of society in 2020. With millions of cases reported across countries, new outbreaks have occurred and followed patterns of previously affected areas. Many disease detection models do not incorporate the wealth of social media data that can be utilized for modeling and predicting its spread. It is useful to ask, can we utilize this knowledge in one country to model the outbreak in another? To answer this, we propose the task of cross-lingual transfer learning for epidemiological alignment. Utilizing both macro and micro text features, we train on Italy's early COVID-19 outbreak through Twitter and transfer to several other countries. Our experiments show strong results with up to 0.85 Spearman correlation in cross-country predictions.
14
+
15
+ ## 1 Introduction
16
+
17
+ During the COVID-19 pandemic, society was brought to a standstill, affecting many aspects of our daily lives. With increased travel due to globalization, it is intuitive that countries have followed earlier affected regions in their outbreaks and in the measures taken to contain them (Cuffe and Jeavans, 2020).
18
+
19
+ A unique form of information that can be used for modeling disease propagation comes from social media. This can provide researchers with access to unfiltered data with clues as to how the pandemic evolves. Current research on the COVID-19 outbreak concerning social media includes word frequency and sentiment analysis of tweets (Rajput et al., 2020) and studies on the spread of misinformation (Kouzy et al., 2020; Singh et al., 2020). Social media has also been utilized for other disease predictions. Several papers propose models to identify tweets in which the author or a nearby person has the attributed disease (Kanouchi et al., 2015; Aramaki et al., 2011; Lamb et al., 2013; Kitagawa et al., 2015). Iso et al. (2016) and Huang et al. (2016) utilize word frequencies to align tweets to disease rates.
20
+
21
+ A shortcoming of the above models is that they do not consider how one region's outbreak may relate to another. Many of the proposed models also rely on lengthy keyword lists or syntactic features that may not generalize across languages. Text embeddings from models such as multilingual BERT (mBERT) (Devlin et al., 2019) and LASER (Artetxe and Schwenk, 2019) can allow us to combine features and make connections across languages for semantic alignment.
22
+
23
+ We present an analysis of Twitter usage for cross-lingual COVID-19 outbreak alignment. We utilize millions of tweets in several languages to evaluate how social media can help detect epidemiological outbreaks across countries. In particular, we aim to analyze how one country's tweets align with its own outbreak and if those same tweets can be used to predict the state of another country. This can allow us to determine how actions taken to contain the outbreak can transfer across countries with similar measures. We show that we can achieve strong results with cross-lingual transfer learning.
24
+
25
+ Our contributions include:
26
+
27
+ - We formulate the task of cross-lingual transfer learning for epidemiological outbreak alignment across countries.
28
+
29
+ - We are the first to investigate state-of-the-art cross-lingual sentence embeddings for cross-country epidemiological outbreak alignment. We propose joint macro and micro reading for multilingual prediction.
30
+
31
+ - We obtain strong correlations in domestic and cross-country predictions, providing us with evidence that social media patterns in relation to COVID-19 transcend countries.
32
+
33
+ ![01963db4-a0cb-7c85-93dd-c07bffbf7718_1_253_178_1144_435_0.jpg](images/01963db4-a0cb-7c85-93dd-c07bffbf7718_1_253_178_1144_435_0.jpg)
34
+
35
+ Figure 1: Timeline of COVID-19-related tweets, from COVID-19 dataset (Chen et al., 2020), in various languages. The peaks are marked by events relating to each language's main country's initial outbreak.
36
+
37
+ ## 2 Twitter and COVID-19
38
+
39
+ ### 2.1 Problem Formulation
40
+
41
+ An intriguing question in the scope of epidemiological research is: can atypical data such as social media help us model an outbreak? To study this, we utilize Twitter as our source, since users primarily post textual data in real time. Furthermore, Twitter users transcend several countries, which is beneficial as COVID-19 is analyzed by researchers and policymakers on a country-by-country basis (Kaplan et al., 2020). Our motivation in this paper is the intuition that social media users can provide us with indicators of an outbreak during the COVID-19 pandemic. In that case, we reformulate our original question: can we align Twitter with a country's COVID-19 outbreak and apply the learned information to other countries?
42
+
43
+ ### 2.2 Data
44
+
45
+ We utilize the COVID-19 Twitter dataset (Chen et al., 2020), comprised of millions of tweets in several languages. These were collected through Twitter’s streaming API and Tweepy ${}^{1}$ by filtering for 22 specific keywords and hashtags related to COVID-19 such as Coronavirus, Wuhanlockdown, stayathome, and Pandemic. We consider tweets starting from February 1st, 2020 to April 30th, 2020, and filter for tweets written in Italian, Indonesian, Turkish, Japanese, and Thai. Specifically, we filter for languages that are primarily spoken in only one country, as opposed to languages such as English and Spanish that are spoken in several countries. In Table 1, we show dataset statistics describing total tweet counts for each country along with counts after our filtering process described later in Section 2.5. When aligning tweets with each country's outbreak, we utilize the COVID-19 Dashboard by the CSSE at Johns Hopkins University (Dong et al., 2020) for daily confirmed cases from each country. Since the COVID-19 pandemic is still in its early stages at the time of writing this paper, sample sizes are limited. Therefore, our experiments have the following time cut settings: train in February and March and test in April (I), train in February and test in March and April (II), train in February and test in March (III), and train in March and test in April (IV).
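+
+ As a small illustration of this filtering step, the snippet below selects tweets by language code and date window from a hydrated tweet dump; the file name and layout are hypothetical, while the `lang`, `created_at`, and `text` fields follow Twitter's standard tweet JSON.
+
+ ```python
+ import pandas as pd
+
+ LANGS = {"it", "id", "tr", "ja", "th"}  # Italian, Indonesian, Turkish, Japanese, Thai
+
+ # Hypothetical file of hydrated tweets, one JSON object per line.
+ tweets = pd.read_json("covid_tweets.jsonl", lines=True)
+ tweets["created_at"] = pd.to_datetime(tweets["created_at"], utc=True)
+
+ start = pd.Timestamp("2020-02-01", tz="UTC")
+ end = pd.Timestamp("2020-04-30 23:59:59", tz="UTC")
+ mask = tweets["lang"].isin(LANGS) & tweets["created_at"].between(start, end)
+
+ filtered = tweets.loc[mask, ["created_at", "lang", "text"]]
+ print(filtered.groupby("lang").size())  # per-language tweet counts (cf. Table 1)
+ ```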
46
+
47
+ <table><tr><td/><td>Italy</td><td>Thailand</td><td>Japan</td><td>Turkey</td><td>Indonesia</td></tr><tr><td>Pre</td><td>1.3M</td><td>2.2M</td><td>2.2M</td><td>960K</td><td>3.2M</td></tr><tr><td>Post</td><td>103K</td><td>6.9K</td><td>61K</td><td>96K</td><td>309K</td></tr></table>
48
+
49
+ Table 1: Dataset statistics in each country before (Pre) and after (Post) the tweet filter process described in Section 2.5.
50
+
51
+ ### 2.3 Can Twitter detect the start of a country's outbreak?
52
+
53
+ We start by investigating a basic feature in our dataset: tweet frequency. We plot each country's tweet frequency in Figure 1. There is a distinct peak within each country, corresponding to events within each country signaling initial outbreaks, denoted by the vertical lines. These correlations indicate that even a standard characteristic such as tweet frequency can align with each country's outbreak and occurs across several countries. Given this result, we further explore other tweet features for epidemiological alignment.
54
+
55
+ ---
56
+
57
+ ${}^{1}$ https://www.tweepy.org/
58
+
59
+ ---
60
+
61
+ ### 2.4 Cross-Lingual Transfer Learning
62
+
63
+ We determine that it is most helpful for researchers to first study regions with earlier outbreaks to make assumptions on later occurrences in other locations. Within the five countries we examine, Italy has the earliest peak in cases. As a result, we analyze various textual features in Italy. When aligning outbreaks from two different countries, we experiment with the transfer learning setting. We train on Italy's data and test on the remaining countries.
64
+
65
+ We present this as a regression problem in which we map our input text features $\mathbf{x} \in {\mathbb{R}}^{n}$ to the output $\mathbf{y} \in \mathbb{R}$ . Our ground-truth output $\mathbf{y}$ is presented in two scenarios in our experiments: total cases and daily new cases. The former considers all past and current reported cases while the latter consists of only cases reported on a specific day. The predicted output $\widehat{\mathbf{y}}$ is compared against the ground truth $\mathbf{y}$ . During training and test time, we utilize support vector regression. For each day, we concatenate the chosen features as input to our regression model. Due to different testing resources, criteria, and procedures, there are some offsets in each country's official numbers. Therefore, we follow related disease prediction work and evaluate predictions with Spearman's correlation (Hogg et al., 2005) to align our features with officially reported cases.
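+
+ Concretely, the regression-and-evaluation loop can be set up as follows; the feature matrices are random placeholders for the daily Twitter features described later, and the kernel is only one of the options considered.
+
+ ```python
+ import numpy as np
+ from sklearn.svm import SVR
+ from scipy.stats import spearmanr
+
+ rng = np.random.default_rng(0)
+
+ # Placeholder daily feature vectors (keyword counts + pooled tweet embeddings)
+ # and case counts; in the paper these come from Twitter and the JHU dashboard.
+ X_train, y_train = rng.normal(size=(60, 10)), rng.random(60).cumsum()
+ X_test, y_test = rng.normal(size=(30, 10)), rng.random(30).cumsum()
+
+ model = SVR(kernel="poly").fit(X_train, y_train)
+ pred = model.predict(X_test)
+
+ rho, _ = spearmanr(pred, y_test)  # rank correlation against reported cases
+ print(f"Spearman correlation: {rho:.3f}")
+ ```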
66
+
67
+ ### 2.5 Creating a Base Model
68
+
69
+ In the wake of the COVID-19 crisis, society has adopted a new vocabulary to discuss the pandemic (Katella, 2020). Quarantine and lockdown have become standard words in our daily conversations. Therefore, we ask: are there specific features that indicate the state of an outbreak?
70
+
71
+ Which features can we utilize for alignment? We create a small COVID-19-related keyword list consisting of lockdown, quarantine, social distancing, epidemic, and outbreak and translate these words into Italian. We also include the English word "lockdown" as it has been used in other countries' vocabularies as well. We aim to observe which, if any, of these words align with Italy's outbreak. In addition to word frequencies, we also utilize mBERT and LASER to extract tweet representations for semantic alignment. We further filter Italy's tweets for a balanced representation of tweet embeddings. We remove duplicate tweets, retweets, tweets with hyperlinks, and tweets discussing countries other than Italy. Using the sentence encoding service bert-as-a-service (Xiao, 2018), we extract
72
+
73
+ <table><tr><td/><td/><td colspan="4">Time Setting</td></tr><tr><td>Cases</td><td>Embed</td><td>I</td><td>II</td><td>III</td><td>IV</td></tr><tr><td>Total</td><td>mBERT</td><td>0.880</td><td>0.947</td><td>0.769</td><td>0.880</td></tr><tr><td/><td>LASER</td><td>0.879</td><td>0.946</td><td>0.766</td><td>0.879</td></tr><tr><td>New</td><td>mBERT</td><td>0.805</td><td>0.416</td><td>0.718</td><td>0.794</td></tr><tr><td/><td>LASER</td><td>0.800</td><td>0.490</td><td>0.723</td><td>0.800</td></tr></table>
74
+
75
+ Table 2: Italy's Spearman correlation results with total and daily case count prediction for mBERT and LASER (Embed). Time settings are defined in 2.2. We bold the highest correlations within each case setting.
76
+
77
+ ![01963db4-a0cb-7c85-93dd-c07bffbf7718_2_902_550_500_372_0.jpg](images/01963db4-a0cb-7c85-93dd-c07bffbf7718_2_902_550_500_372_0.jpg)
78
+
79
+ Figure 2: Distribution of new daily COVID-19 cases in Italy, Turkey, Thailand, Japan, and Indonesia. Daily case counts come from COVID-19 Dashboard by CSSE at Johns Hopkins University (Dong et al., 2020).
80
+
81
+ fixed-length representations for each tweet. We explore two options for our tweet representations: average-pooling and max-pooling. Our final feature consists of daily tweet frequency after filtering.
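+
+ A rough sketch of this feature-construction step is shown below, using multilingual BERT through the transformers library rather than the bert-as-service wrapper; the pooling, keyword counting, and day-level aggregation are illustrative assumptions.
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModel
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
+ mbert = AutoModel.from_pretrained("bert-base-multilingual-cased")
+
+ def daily_features(tweets, keyword="lockdown"):
+     """Aggregate one day's filtered tweets into a single feature vector."""
+     enc = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")
+     with torch.no_grad():
+         hidden = mbert(**enc).last_hidden_state           # (n_tweets, seq_len, 768)
+     mask = enc["attention_mask"].unsqueeze(-1)
+     mean_pool = ((hidden * mask).sum(1) / mask.sum(1)).mean(0)           # day average
+     max_pool = hidden.masked_fill(mask == 0, -1e9).max(1).values.mean(0)
+     kw_count = float(sum(keyword in t.lower() for t in tweets))
+     return torch.cat([mean_pool, max_pool, torch.tensor([kw_count, float(len(tweets))])])
+
+ feats = daily_features(["Oggi inizia il lockdown in Italia", "Restate a casa"])
+ print(feats.shape)  # one fixed-length vector per day, fed to the regressor
+ ```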
82
+
83
+ Can tweet text align with confirmed cases? We combine our frequency features with our tweet embeddings and show results in Table 2. Through manual tuning, we find our strongest model (polynomial kernel) contained the keyword lockdown (in English) and averaged tweet representations from mBERT for the total case scenario. When aligning to new cases, the best model (sigmoid kernel) contained the keyword lockdown (in English) and max-pooled LASER embeddings. While mBERT and LASER provide very little difference in alignment to total cases, LASER is noticeably stronger in the new case setting, particularly in time II. For the total case setting, our predictions show strong alignment with ground truth, which is monotonically increasing, in all time settings. When measuring new daily cases, the correlations are weaker in time II. We find that Italy's new cases form a peak in late March, as shown in Figure 2. As a result, there is a distribution shift when training on February data only (tail of the distribution) and testing in March and April.
84
+
85
+ <table><tr><td>Setting</td><td>Thailand</td><td>Japan</td><td>Turkey</td><td>Indonesia</td></tr><tr><td>I</td><td>0.200</td><td>-.300</td><td>.188</td><td>-.316</td></tr><tr><td>II</td><td>0.696</td><td>0.543</td><td>0.715</td><td>0.285</td></tr><tr><td>III</td><td>0.823</td><td>0.856</td><td>0.679</td><td>0.925</td></tr><tr><td>IV</td><td>0.196</td><td>-.300</td><td>0.188</td><td>-.316</td></tr><tr><td>V</td><td>0.859</td><td>0.649</td><td>0.817</td><td>0.722</td></tr></table>
86
+
87
+ Table 3: Cross-lingual transfer learning Spearman correlation with total case counts. Italy is used to train and the listed countries are used for testing. Time settings are defined in 2.2 .
88
+
89
+ <table><tr><td>Setting</td><td>Thailand</td><td>Japan</td><td>Turkey</td><td>Indonesia</td></tr><tr><td>I</td><td>-.022</td><td>0.130</td><td>-.368</td><td>0.416</td></tr><tr><td>II</td><td>0.277</td><td>0.273</td><td>0.426</td><td>0.332</td></tr><tr><td>III</td><td>0.661</td><td>0.262</td><td>0.255</td><td>0.407</td></tr><tr><td>IV</td><td>-.043</td><td>0.127</td><td>-.375</td><td>0.416</td></tr><tr><td>V</td><td>0.755</td><td>0.515</td><td>0.745</td><td>0.742</td></tr></table>
90
+
91
+ Table 4: Cross-lingual transfer learning Spearman correlation with new daily case counts. Italy is used to train and the listed countries are used for testing. Time settings are defined in 2.2 .
92
+
93
+ ### 2.6 Cross-Lingual Prediction
94
+
95
+ While we can align historical data to future cases within Italy, researchers may not have enough data to train models for each country. Therefore we ask, can we use Italy's outbreak to predict the outbreak of another country? In particular, we determine whether users from two different countries follow similar patterns of tweeting during their respective pandemics and how well we can align the two. We follow the same tweet preprocessing methodology described in Section 2.5 and the timeline cuts for training and testing defined in Section 2.2. We also add another time setting (V): training in February, March, and April and testing all three months. This serves as an upper bound for our correlations, indicating how well the general feature trends align between the two countries and their outbreaks.
96
+
97
+ Can we transfer knowledge to other countries? We show our results for the total and new daily case settings in Tables 3 and 4. All of the test countries have strong correlations in time setting $\mathrm{V}$ for both case settings. Since this is used as an upper bound, we can deduce that tweets across countries follow the same general trend in relation to reported cases. When examining the other time settings, it is clear that Italy transfers well in times II and III for the total case setting. As these train in February only, this shows us that transferring knowledge works better in times of more linear case increases, rather than during peaks, which becomes unstable. Times I through IV generally do not perform as well in the new case setting, though II and III primarily have higher correlations.
98
+
99
+ Why does Indonesia differ? It is noticeable that Indonesia aligns better with new daily cases in times I through IV, as opposed to the other countries. When examining Figure 2, we find that Indonesia is the only country that had not yet reached a peak in new daily cases by the end of April, and is steadily increasing. Meanwhile, the other countries follow normal distributions like Italy. However, given that we train our model on February and March data, it does not learn information on post-peak trends and cannot generalize well to these scenarios that occur in April in the other countries.
100
+
101
+ What can we learn from our results? Overall, transfer learning in the total case setting leads to stronger correlations with case counts. While results show that training in February and testing in March and/or April works best, our results for setting V's upper bound correlation show that weaker correlations can be due to the limited sample sizes we have from the start of the pandemic. Additionally, training in February, March, and April in Italy allows us to model a larger variety of scenarios during the pandemic, with samples during pre-peak, mid-peak, and post-peak. Therefore, as we obtain more data every day, we can build stronger models that can generalize better to varying distributions of cases and align outbreaks across countries that can fully reach their upper bound correlations and beyond. Doing so is especially important for analyzing Twitter trends and enabling researchers to potentially predict future case surges in other countries.
102
+
103
+ ## 3 Conclusion
104
+
105
+ In this paper, we performed an analysis of cross-lingual transfer learning with Twitter data for COVID-19 outbreak alignment using cross-lingual sentence embeddings and keyword frequencies. We showed that even with our limited sample sizes, we can utilize knowledge of countries with earlier outbreaks to correlate with cases in other countries. With larger sample sizes and when training on a variety of points during the outbreak, we can obtain stronger correlations to other countries. We hope our analysis can lead to future integration of social media in epidemiological prediction across countries, enhancing outbreak detection systems.
106
+
107
+ ## References
108
+
109
+ Eiji Aramaki, Sachiko Maskawa, and Mizuki Morita. 2011. Twitter catches the flu: Detecting influenza epidemics using twitter. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1568-1576, Edinburgh, Scotland, UK. Association for Computational Linguistics.
110
+
111
+ Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610.
112
+
113
+ Emily Chen, Kristina Lerman, and Emilio Ferrara. 2020. Covid-19: The first public coronavirus twitter dataset. arXiv preprint arXiv:2003.07372.
114
+
115
+ Robert Cuffe and Christine Jeavans. 2020. How the uk's coronavirus epidemic compares to other countries. BBC News.
116
+
117
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
118
+
119
+ Ensheng Dong, Hongru Du, and Lauren Gardner. 2020. An interactive web-based dashboard to track covid- 19 in real time. The Lancet infectious diseases, 20(5):533-534.
120
+
121
+ Robert V Hogg, Joseph McKean, and Allen T Craig. 2005. Introduction to mathematical statistics. Pearson Education.
122
+
123
+ Pin Huang, Andrew MacKinlay, and Antonio Jimeno Yepes. 2016. Syndromic surveillance using generic medical entities on twitter. In Proceedings of the Australasian Language Technology Association Workshop 2016, pages 35-44, Melbourne, Australia.
124
+
125
+ Hayate Iso, Shoko Wakamiya, and Eiji Aramaki. 2016. Forecasting word model: Twitter-based influenza surveillance and prediction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 76-86, Osaka, Japan. The COLING 2016 Organizing Committee.
126
+
127
+ Shin Kanouchi, Mamoru Komachi, Naoaki Okazaki, Eiji Aramaki, and Hiroshi Ishikawa. 2015. Who caught a cold ? - identifying the subject of a symptom. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1660-1670, Beijing, China. Association for Computational Linguistics.
128
+
129
+ Juliana Kaplan, Lauren Frias, and Morgan McFall-Johnsen. 2020. Countries around the world are reopening - here's our constantly updated list of how they're doing it and who remains under lockdown. Business Insider.
130
+
131
+ Kathy Katella. 2020. Our new covid-19 vocabulary-what does it all mean? Yale Medicine.
132
+
133
+ Yoshiaki Kitagawa, Mamoru Komachi, Eiji Aramaki, Naoaki Okazaki, and Hiroshi Ishikawa. 2015. Disease event detection based on deep modality analysis. In Proceedings of the ACL-IJCNLP 2015 Student Research Workshop, pages 28-34, Beijing, China. Association for Computational Linguistics.
134
+
135
+ Ramez Kouzy, Joseph Abi Jaoude, Afif Kraitem, Molly B El Alam, Basil Karam, Elio Adib, Jabra Zarka, Cindy Traboulsi, Elie W Akl, and Khalil Baddour. 2020. Coronavirus goes viral: quantifying the covid-19 misinformation epidemic on twitter. Cureus, 12(3).
136
+
137
+ Alex Lamb, Michael J. Paul, and Mark Dredze. 2013. Separating fact from fear: Tracking flu infections on twitter. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 789-795, Atlanta, Georgia. Association for Computational Linguistics.
138
+
139
+ Nikhil Kumar Rajput, Bhavya Ahuja Grover, and Vipin Kumar Rathi. 2020. Word frequency and sentiment analysis of twitter messages during coronavirus pandemic. arXiv preprint arXiv:2004.03925.
140
+
141
+ Lisa Singh, Shweta Bansal, Leticia Bode, Ceren Budak, Guangqing Chi, Kornraphop Kawintiranon, Colton Padden, Rebecca Vanarsdall, Emily Vraga, and Yanchen Wang. 2020. A first look at covid-19 information and misinformation sharing on twitter. arXiv preprint arXiv:2003.13907.
142
+
143
+ Han Xiao. 2018. bert-as-service. https://github.com/hanxiao/bert-as-service.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/ub9_2iAo3D/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,168 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § CROSS-LINGUAL TRANSFER LEARNING FOR COVID-19 OUTBREAK ALIGNMENT
2
+
3
+ Sharon Levy and William Yang Wang
4
+
5
+ University of California, Santa Barbara
6
+
7
+ Santa Barbara, CA 93106
8
+
9
+ {sharonlevy, william}@cs.ucsb.edu
10
+
11
+ § ABSTRACT
12
+
13
+ The spread of COVID-19 has become a significant and troubling aspect of society in 2020. With millions of cases reported across countries, new outbreaks have occurred and followed patterns of previously affected areas. Many disease detection models do not incorporate the wealth of social media data that can be utilized for modeling and predicting its spread. It is useful to ask, can we utilize this knowledge in one country to model the outbreak in another? To answer this, we propose the task of cross-lingual transfer learning for epidemiological alignment. Utilizing both macro and micro text features, we train on Italy's early COVID-19 outbreak through Twitter and transfer to several other countries. Our experiments show strong results with up to 0.85 Spearman correlation in cross-country predictions.
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ During the COVID-19 pandemic, society was brought to a standstill, affecting many aspects of our daily lives. With increased travel due to globalization, it is intuitive that countries have followed earlier affected regions in their outbreaks and in the measures taken to contain them (Cuffe and Jeavans, 2020).
18
+
19
+ A unique form of information that can be used for modeling disease propagation comes from social media. This can provide researchers with access to unfiltered data with clues as to how the pandemic evolves. Current research on the COVID-19 outbreak concerning social media includes word frequency and sentiment analysis of tweets (Rajput et al., 2020) and studies on the spread of misinformation (Kouzy et al., 2020; Singh et al., 2020). Social media has also been utilized for other disease predictions. Several papers propose models to identify tweets in which the author or a nearby person has the attributed disease (Kanouchi et al., 2015; Aramaki et al., 2011; Lamb et al., 2013; Kitagawa et al., 2015). Iso et al. (2016) and Huang et al. (2016) utilize word frequencies to align tweets to disease rates.
20
+
21
+ A shortcoming of the above models is that they do not consider how one region's outbreak may relate to another. Many of the proposed models also rely on lengthy keyword lists or syntactic features that may not generalize across languages. Text embeddings from models such as multilingual BERT (mBERT) (Devlin et al., 2019) and LASER (Artetxe and Schwenk, 2019) can allow us to combine features and make connections across languages for semantic alignment.
22
+
23
+ We present an analysis of Twitter usage for cross-lingual COVID-19 outbreak alignment. We utilize millions of tweets in several languages to evaluate how social media can help detect epidemiological outbreaks across countries. In particular, we aim to analyze how one country's tweets align with its own outbreak and if those same tweets can be used to predict the state of another country. This can allow us to determine how actions taken to contain the outbreak can transfer across countries with similar measures. We show that we can achieve strong results with cross-lingual transfer learning.
24
+
25
+ Our contributions include:
26
+
27
+ * We formulate the task of cross-lingual transfer learning for epidemiological outbreak alignment across countries.
28
+
29
+ * We are the first to investigate state-of-the-art cross-lingual sentence embeddings for cross-country epidemiological outbreak alignment. We propose joint macro and micro reading for multilingual prediction.
30
+
31
+ * We obtain strong correlations in domestic and cross-country predictions, providing us with evidence that social media patterns in relation to COVID-19 transcend countries.
32
+
33
+ <graphics>
34
+
35
+ Figure 1: Timeline of COVID-19-related tweets, from COVID-19 dataset (Chen et al., 2020), in various languages. The peaks are marked by events relating to each language's main country's initial outbreak.
36
+
37
+ § 2 TWITTER AND COVID-19
38
+
39
+ § 2.1 PROBLEM FORMULATION
40
+
41
+ An intriguing question in the scope of epidemiological research is: can atypical data such as social media help us model an outbreak? To study this, we utilize Twitter as our source, since users primarily post textual data in real time. Furthermore, Twitter users transcend several countries, which is beneficial as COVID-19 is analyzed by researchers and policymakers on a country-by-country basis (Kaplan et al., 2020). Our motivation in this paper is the intuition that social media users can provide us with indicators of an outbreak during the COVID-19 pandemic. In that case, we reformulate our original question: can we align Twitter with a country's COVID-19 outbreak and apply the learned information to other countries?
42
+
43
+ § 2.2 DATA
44
+
45
+ We utilize the COVID-19 Twitter dataset (Chen et al., 2020), comprised of millions of tweets in several languages. These were collected through Twitter’s streaming API and Tweepy ${}^{1}$ by filtering for 22 specific keywords and hashtags related to COVID-19 such as Coronavirus, Wuhanlockdown, stayathome, and Pandemic. We consider tweets starting from February 1st, 2020 to April 30th, 2020, and filter for tweets written in Italian, Indonesian, Turkish, Japanese, and Thai. Specifically, we filter for languages that are primarily spoken in only one country, as opposed to languages such as English and Spanish that are spoken in several countries. In Table 1, we show dataset statistics describing total tweet counts for each country along with counts after our filtering process described later in Section 2.5. When aligning tweets with each country's outbreak, we utilize the COVID-19 Dashboard by the CSSE at Johns Hopkins University (Dong et al., 2020) for daily confirmed cases from each country. Since the COVID-19 pandemic is still in its early stages at the time of writing this paper, sample sizes are limited. Therefore, our experiments have the following time cut settings: train in February and March and test in April (I), train in February and test in March and April (II), train in February and test in March (III), and train in March and test in April (IV).
46
+
47
+         Italy   Thailand   Japan   Turkey   Indonesia
+ Pre     1.3M    2.2M       2.2M    960K     3.2M
+ Post    103K    6.9K       61K     96K      309K
58
+
59
+ Table 1: Dataset statistics in each country before (Pre) and after (Post) the tweet filter process described in Section 2.5.
60
+
61
+ § 2.3 CAN TWITTER DETECT THE START OF A COUNTRY'S OUTBREAK?
62
+
63
+ We start by investigating a basic feature in our dataset: tweet frequency. We plot each country's tweet frequency in Figure 1. There is a distinct peak within each country, corresponding to events within each country signaling initial outbreaks, denoted by the vertical lines. These correlations indicate that even a standard characteristic such as tweet frequency can align with each country's outbreak and occurs across several countries. Given this result, we further explore other tweet features for epidemiological alignment.
64
+
65
+ ${}^{1}$ https://www.tweepy.org/
66
+
67
+ § 2.4 CROSS-LINGUAL TRANSFER LEARNING
68
+
69
+ We determine that it is most helpful for researchers to first study regions with earlier outbreaks to make assumptions on later occurrences in other locations. Within the five countries we examine, Italy has the earliest peak in cases. As a result, we analyze various textual features in Italy. When aligning outbreaks from two different countries, we experiment with the transfer learning setting. We train on Italy's data and test on the remaining countries.
70
+
71
+ We present this as a regression problem in which we map our input text features $\mathbf{x} \in {\mathbb{R}}^{n}$ to the output $\mathbf{y} \in \mathbb{R}$ . Our ground-truth output $\mathbf{y}$ is presented in two scenarios in our experiments: total cases and daily new cases. The former considers all past and current reported cases while the latter consists of only cases reported on a specific day. The predicted output $\widehat{\mathbf{y}}$ is compared against the ground truth $\mathbf{y}$ . During training and test time, we utilize support vector regression. For each day, we concatenate the chosen features as input to our regression model. Due to different testing resources, criteria, and procedures, there are some offsets in each country's official numbers. Therefore, we follow related disease prediction work and evaluate predictions with Spearman's correlation (Hogg et al., 2005) to align our features with officially reported cases.
72
+
73
+ § 2.5 CREATING A BASE MODEL
74
+
75
+ In the wake of the COVID-19 crisis, society has adopted a new vocabulary to discuss the pandemic (Katella, 2020). Quarantine and lockdown have become standard words in our daily conversations. Therefore, we ask: are there specific features that indicate the state of an outbreak?
76
+
77
+ Which features can we utilize for alignment? We create a small COVID-19-related keyword list consisting of lockdown, quarantine, social distancing, epidemic, and outbreak and translate these words into Italian. We also include the English word "lockdown" as it has been used in other countries' vocabularies as well. We aim to observe which, if any, of these words align with Italy's outbreak. In addition to word frequencies, we also utilize mBERT and LASER to extract tweet representations for semantic alignment. We further filter Italy's tweets for a balanced representation of tweet embeddings. We remove duplicate tweets, retweets, tweets with hyperlinks, and tweets discussing countries other than Italy. Using the sentence encoding service bert-as-a-service (Xiao, 2018), we extract
78
+
79
+                  Time Setting
+ Cases   Embed    I       II      III     IV
+ Total   mBERT    0.880   0.947   0.769   0.880
+         LASER    0.879   0.946   0.766   0.879
+ New     mBERT    0.805   0.416   0.718   0.794
+         LASER    0.800   0.490   0.723   0.800
99
+
100
+ Table 2: Italy's Spearman correlation results with total and daily case count prediction for mBERT and LASER (Embed). Time settings are defined in 2.2. We bold the highest correlations within each case setting.
101
+
102
+ <graphics>
103
+
104
+ Figure 2: Distribution of new daily COVID-19 cases in Italy, Turkey, Thailand, Japan, and Indonesia. Daily case counts come from COVID-19 Dashboard by CSSE at Johns Hopkins University (Dong et al., 2020).
105
+
106
+ fixed-length representations for each tweet. We explore two options for our tweet representations: average-pooling and max-pooling. Our final feature consists of daily tweet frequency after filtering.
107
+
108
+ Can tweet text align with confirmed cases? We combine our frequency features with our tweet embeddings and show results in Table 2. Through manual tuning, we find our strongest model (polynomial kernel) contained the keyword lockdown (in English) and averaged tweet representations from mBERT for the total case scenario. When aligning to new cases, the best model (sigmoid kernel) contained the keyword lockdown (in English) and max-pooled LASER embeddings. While mBERT and LASER provide very little difference in alignment to total cases, LASER is noticeably stronger in the new case setting, particularly in time II. For the total case setting, our predictions show strong alignment with ground truth, which is monotonically increasing, in all time settings. When measuring new daily cases, the correlations are weaker in time II. We find that Italy's new cases form a peak in late March, as shown in Figure 2. As a result, there is a distribution shift when training on February data only (tail of the distribution) and testing in March and April.
109
+
110
+ Setting   Thailand   Japan    Turkey   Indonesia
+ I         0.200      -0.300   0.188    -0.316
+ II        0.696      0.543    0.715    0.285
+ III       0.823      0.856    0.679    0.925
+ IV        0.196      -0.300   0.188    -0.316
+ V         0.859      0.649    0.817    0.722
130
+
131
+ Table 3: Cross-lingual transfer learning Spearman correlation with total case counts. Italy is used to train and the listed countries are used for testing. Time settings are defined in 2.2 .
132
+
133
+ Setting   Thailand   Japan    Turkey   Indonesia
+ I         -0.022     0.130    -0.368   0.416
+ II        0.277      0.273    0.426    0.332
+ III       0.661      0.262    0.255    0.407
+ IV        -0.043     0.127    -0.375   0.416
+ V         0.755      0.515    0.745    0.742
153
+
154
+ Table 4: Cross-lingual transfer learning Spearman correlation with new daily case counts. Italy is used to train and the listed countries are used for testing. Time settings are defined in 2.2 .
155
+
156
+ § 2.6 CROSS-LINGUAL PREDICTION
157
+
158
+ While we can align historical data to future cases within Italy, researchers may not have enough data to train models for each country. Therefore we ask, can we use Italy's outbreak to predict the outbreak of another country? In particular, we determine whether users from two different countries follow similar patterns of tweeting during their respective pandemics and how well we can align the two. We follow the same tweet preprocessing methodology described in Section 2.5 and the timeline cuts for training and testing defined in Section 2.2. We also add another time setting (V): training in February, March, and April and testing all three months. This serves as an upper bound for our correlations, indicating how well the general feature trends align between the two countries and their outbreaks.
159
+
160
+ Can we transfer knowledge to other countries? We show our results for the total and new daily case settings in Tables 3 and 4. All of the test countries have strong correlations in time setting $\mathrm{V}$ for both case settings. Since this is used as an upper bound, we can deduce that tweets across countries follow the same general trend in relation to reported cases. When examining the other time settings, it is clear that Italy transfers well in times II and III for the total case setting. As these train in February only, this shows us that transferring knowledge works better in times of more linear case increases, rather than during peaks, which becomes unstable. Times I through IV generally do not perform as well in the new case setting, though II and III primarily have higher correlations.
161
+
162
+ Why does Indonesia differ? It is noticeable that Indonesia aligns better with new daily cases in times I through IV, as opposed to the other countries. When examining Figure 2, we find that Indonesia is the only country that had not yet reached a peak in new daily cases by the end of April, and is steadily increasing. Meanwhile, the other countries follow normal distributions like Italy. However, given that we train our model on February and March data, it does not learn information on post-peak trends and cannot generalize well to these scenarios that occur in April in the other countries.
163
+
164
+ What can we learn from our results? Overall, transfer learning in the total case setting leads to stronger correlations with case counts. While results show that training in February and testing in March and/or April works best, our results for setting V's upper bound correlation show that weaker correlations can be due to the limited sample sizes we have from the start of the pandemic. Additionally, training in February, March, and April in Italy allows us to model a larger variety of scenarios during the pandemic, with samples during pre-peak, mid-peak, and post-peak. Therefore, as we obtain more data every day, we can build stronger models that can generalize better to varying distributions of cases and align outbreaks across countries that can fully reach their upper bound correlations and beyond. Doing so is especially important for analyzing Twitter trends and enabling researchers to potentially predict future case surges in other countries.
165
+
166
+ § 3 CONCLUSION
167
+
168
+ In this paper, we performed an analysis of cross-lingual transfer learning with Twitter data for COVID-19 outbreak alignment using cross-lingual sentence embeddings and keyword frequencies. We showed that even with our limited sample sizes, we can utilize knowledge of countries with earlier outbreaks to correlate with cases in other countries. With larger sample sizes and when training on a variety of points during the outbreak, we can obtain stronger correlations to other countries. We hope our analysis can lead to future integration of social media in epidemiological prediction across countries, enhancing outbreak detection systems.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/yx-k0ukHzDR/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,259 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # COVID-19 and Arabic Twitter: How can Arab World Governments and Public Health Organizations Learn from Social Media?
2
+
3
+ Lama Alsudias
4
+
5
+ King Saud University / Saudi Arabia
6
+
7
+ Lancaster University / UK
8
+
9
+ lalsudias@ksu.edu.sa
10
+
11
+ Paul Rayson
12
+
13
+ Lancaster University / UK
14
+
15
+ p.rayson@lancaster.ac.uk
16
+
17
+ ## Abstract
18
+
19
+ In March 2020, the World Health Organization announced the COVID-19 outbreak as a pandemic. Most previous social media related research has been on English tweets and COVID-19. In this study, we collect approximately 1 million Arabic tweets from the Twitter streaming API related to COVID-19. Focussing on outcomes that we believe will be useful for Public Health Organizations, we analyse them in three different ways: identifying the topics discussed during the period, detecting rumours, and predicting the source of the tweets. We use the k-means algorithm for the first goal with $k = 5$ . The topics discussed can be grouped as follows: COVID-19 statistics, prayers for God, COVID-19 locations, advice and education for prevention, and advertising. We sample 2000 tweets and label them manually for false information, correct information, and unrelated. Then, we apply three different machine learning algorithms (Logistic Regression, Support Vector Classification, and Naïve Bayes) with two sets of features: a word frequency approach and word embeddings. We find that machine learning classifiers are able to correctly identify the rumour-related tweets with 84% accuracy. We also try to predict the source of the rumour-related tweets based on our previous model, which classifies tweets into five categories: academic, media, government, health professional, and public. Around 60% of the rumour-related tweets are classified as written by health professionals and academics.
20
+
21
+ ## 1 Introduction
22
+
23
+ The current coronavirus disease (COVID-19) outbreak is of major global concern and is classified by the World Health Organization as an international health emergency. Governments around the world have taken different decisions in order to stop the spread of the disease. Many academic researchers in various fields including Natural Language Processing (NLP) have carried out studies targeting this subject. For example, COVID-19 and AI${}^{1}$ is one of the conferences that has been convened virtually to show how Artificial Intelligence (AI) can contribute to helping Public Health Organizations during pandemics.
24
+
25
+ People use social media applications such as Twitter to find the news related to COVID-19 and/or express their opinions and feelings about it. As a result, a vast amount of information could be exploited by NLP researchers for a myriad of analyses despite the informal nature of social media writing style.
26
+
27
+ We hypothesise that Public Health Organizations (PHOs) may benefit from mining the topics discussed between people during the pandemic. This may help in understanding a population's current and changing concerns related to the disease and help to find the best solutions to protect people. In addition, during an outbreak people themselves will search online for reliable and trusted information related to the disease such as prevention and transmission pathways. COVID-19 Twitter conversations may not correlate with the actual disease epidemiology. Therefore, Public Health Organizations have a vested interest in ensuring that information spread in the population is accurate (Vorovchenko et al., 2017). For instance, the Ministry of Health in Saudi Arabia ${}^{2}$ presents a daily press conference with the aim of quickly stopping the spread of false rumours. However, there is currently a prolonged period of time until warnings are issued. For example, the first tweet that included false information about hot weather killing the virus was on 10 February 2020, while the press conference that responded to this rumour was not held until 14 April 2020. There is clearly a need to find false information as quickly as possible. In addition, effort needs to be made in relation to tracking the user accounts that promote rumours. This can be undertaken using a variety of techniques, for example using social network features, geolocation, bot detection, or content-based approaches such as language style. Public Health Organizations would benefit from speeding up the process of tracking in order to stop rumours and remove the bot networks.
28
+
29
+ ---
30
+
31
+ ${}^{1}$ https://hai.stanford.edu/events/covid-19-and-ai-virtual-conference
32
+
33
+ ${}^{2}$ https://www.moh.gov.sa
34
+
35
+ ---
36
+
37
+ The vast majority of the previous research in this area has been on English Twitter content, but this will not directly assist PHOs in Arabic-speaking countries. The Arabic language is spoken by 467 million people in the world and has more than 26 dialects ${}^{3}$ . Of particular importance for NLP is coping with dialectal and/or meaning differences in less formal settings such as social media. As an example from the health field, a single Arabic word may be understood as vaccination ${}^{4}$ in Modern Standard Arabic or as reading supplications in Najdi dialect ${}^{5}$ . There has been much recent progress in Arabic NLP research, yet there is still an urgent need to develop fake news detection for Arabic tweets (Mouty and Gazdar, 2018).
38
+
39
+ In this paper, we have combined qualitative and quantitative studies to analyse Arabic tweets aiming to support Public Health Organizations who can learn from social media data along various lines:
40
+
41
+ - Analyzing the topics discussed between people during the peak of COVID-19
42
+
43
+ - Identifying and detecting the rumours related to COVID-19.
44
+
45
+ - Predicting the type of sources of tweets about COVID-19.
46
+
47
+ ## 2 Related Work
48
+
49
+ There is a vast quantity of research over recent years that analyses social media data related to different pandemics such as H1N1 (2009), Ebola (2014), Zika Fever (2015), and Yellow Fever (2016). These studies followed a variety of directions for analysis with multiple different goals (Joshi et al., 2019). The study of Ahmed et al. (2019) used a thematic analysis of tweets related to the H1N1 pandemic. Eight key themes emerged from the analysis: emotion, health related information, general commentary and resources, media and health organisations, politics, country of origin, food, and humour and/or sarcasm.
50
+
51
+ A survey study (Fung et al., 2016b) reviewed the research relevant to the Ebola virus and social media. It compared research questions, study designs, data collection methods, and analytic methods. Ahmed et al. (2017b) used content analysis to identify the topics discussed on Twitter at the beginning of the 2014 Ebola epidemic in the United States. In (Vorovchenko et al., 2017), they determined the geolocation of the Ebola tweets and named the accounts that interacted more on Twitter related to the 2014 West African Ebola outbreak. The main goal of the study by Kalyanam et al. (2015) was to distinguish between credible and fake tweets. It highlighted the problems of manual labeling process with verification needs. The study in (Fung et al., 2016a) highlighted how the problem of misinformation changed during the disease outbreak and recommended a longitudinal study of information published on social media. Moreover, it pointed out the importance of understanding the source of this information and the process of spreading rumours in order to reduce their impact in the future.
52
+
53
+ Ghenai and Mejova (2017) tracked Zika Fever misinformation on social media by comparing them with rumours identified by the World Health Organization. Also, they pointed out the importance of credible information sources and encouraged collaboration between researchers and health organizations to rapidly process the misinformation related to health on social media. The study in (Ortiz-Martínez and Jiménez-Arcia, 2017) reviewed the quality of available yellow fever information on Twitter. It also showed the significance of the awareness of misleading information during pandemic spread. The study of Zubiaga et al. (2018a) summarised other studies related to social media rumours. It illustrated techniques for developing rumour detection, rumour tracking, rumour stance classification, and rumour veracity classification. Vorovchenko et al. (2017) mentioned the importance of Twitter information during the epidemic and how Public Health Organisations can benefit from this. They showed the requirement to monitor false information posted by some accounts and recommended that this was performed in real time to reduce the danger of this information. Also, they discussed the lack of available datasets which help in the development of rumour classification systems.
54
+
55
+ ---
56
+
57
+ ${}^{3}$ https://en.wikipedia.org/wiki/Arabic
58
+
59
+ ${}^{4}$ https://en.wikipedia.org/wiki/Modern_Standard_Arabic
60
+
61
+ ${}^{5}$ https://en.wikipedia.org/wiki/Najdi_Arabic
62
+
63
+ ---
64
+
65
+ Researchers have been doing studies on building and analysing COVID-19 Twitter datasets since the disease appeared in December 2019. So far, two different datasets related to Arabic and COVID-19 have been published recently (Alqurashi et al., 2020; Haouari et al., 2020). The former collected 3,934,610 tweets up to April 15, 2020, and the latter included around 748k tweets up to March 31, 2020. These papers contain an initial analysis and statistical results for the collected tweets and some suggestions for future work, which include pandemic response, behavior analysis, emergency management, misinformation detection, and social analytics.
66
+
67
+ On the other hand, there are some datasets in English such as (Chen et al., 2020) and (Lopez et al., 2020). Also, there is a multilingual COVID-19 dataset containing location information (Qazi et al., 2020). This contains more than 524 million tweets, with 5.5 million Arabic tweets, posted over a period of three months since February 1, 2020. It focuses on determining the geolocation of a tweet which can help research with various different challenges, including identifying rumours.
68
+
69
+ Although the above studies have produced datasets related to COVID-19, they do not analyse them deeply using NLP methods. Previous studies representing earlier epidemics present good techniques and results, however none of them are related to Arabic tweets. Therefore, to assist PHOs in Arabic speaking countries there is an urgent need to analyse tweets related to COVID-19 using multiple Arabic NLP techniques.
70
+
71
+ ## 3 Update Arabic Infectious Disease Ontology
72
+
73
+ With the recent appearance of COVID-19 as a new disease, there is need to update our Arabic Infectious Disease Ontology (Alsudias and Rayson, 2020), which integrates the scientific and medical vocabularies of infectious diseases with their informal equivalents used in general discourse. We collated COVID-19 information from the World Health Organization ${}^{6}$ and Ministry of health in Saudi Arabia. This included symptom, cause, prevention, infection, organ, treatment, diagnosis, place of the disease spread, and slang terms for COVID-19 and extended our ontology ${}^{7}$ . These terms were then used in our collection process.
74
+
75
+ ## 4 Data Collection
76
+
77
+ We began collecting Arabic tweets about a number of infectious diseases from September 2019. In this paper, we analysed only the tweets related to COVID-19 from December 2019 to April 2020 (the few tweets between September and November are related to Middle East respiratory syndrome coronavirus, MERS-CoV ${}^{8}$ ). We collected approximately six million tweets in Arabic during this period. We obtained the tweets using three Arabic keywords meaning Coronavirus, a common misspelling of Coronavirus, and COVID-19, respectively. We collected the tweets weekly using the Twitter API.
78
+
79
+ Next, we pre-processed the tweets through a pipeline of different steps:
80
+
81
+ - Manually remove retweets, advertisements, and spam.
82
+
83
+ - Filter out URLs, mentions, hashtags, numbers, emojis, repeating characters, and non-Arabic words using Python scripts ${}^{9}$ .
84
+
85
+ - Normalize and tokenize tweets.
86
+
87
+ - Remove Arabic stopwords (Alrefaie, 2017).
88
+
89
+ After pre-processing, the resulting dataset contained 1,048,575 unique tweets from the original 6,578,982 collected. Figure 1 shows the number of Arabic tweets about Coronavirus each week, with specific dates highlighted to show government decisions on protecting the population from COVID-19 and other key dates for context.
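+ The exact cleaning scripts are the ones referenced in footnote 9; the snippet below is only a minimal sketch, using the Python standard library, of the kind of regex-based filtering and normalisation listed above (the function name and patterns are illustrative rather than taken from the original code).
+
+ ```python
+ # A minimal sketch of the tweet cleaning steps described above (not the authors' scripts).
+ import re
+
+ ARABIC_WORD = re.compile(r'[\u0621-\u064A]+')                         # basic Arabic letter block
+ NOISE = re.compile(r'(https?://\S+|@\w+|#\w+|[0-9\u0660-\u0669]+)')   # URLs, mentions, hashtags, digits
+
+ def clean_tweet(text, stopwords=frozenset()):
+     text = NOISE.sub(' ', text)
+     text = re.sub(r'(.)\1{2,}', r'\1', text)                # collapse repeated characters
+     text = re.sub(r'[\u0622\u0623\u0625]', '\u0627', text)  # normalise alef variants
+     tokens = ARABIC_WORD.findall(text)                      # keeps Arabic words; drops emojis and Latin text
+     return [t for t in tokens if t not in stopwords]
+ ```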
90
+
91
+ ---
92
+
93
+ ${}^{6}$ http://www.emro.who.int
94
+
95
+ ${}^{7}$ https://github.com/alsudias/Arabic-Infectious-Disease-Ontology
96
+
97
+ ${}^{8}$ https://www.who.int/news-room/fact-sheets/detail/middle-east-respiratory-syndrome-coronavirus-(mers-cov)
98
+
99
+
100
+
101
+ ---
102
+
103
+ ![01963db1-6402-7722-b886-5a779557568a_3_236_167_1113_675_0.jpg](images/01963db1-6402-7722-b886-5a779557568a_3_236_167_1113_675_0.jpg)
104
+
105
+ Figure 1: Number of Arabic tweets about Coronavirus
106
+
107
+ ## 5 Methods
108
+
109
+ We performed three different types of analysis on the collected data. Firstly, in order to better understand the topics discussed in the corpus, we carried out a cluster analysis. Secondly, taking a sample of the corpus, we performed rumour detection. Finally, we extended our previous work to classify the source of tweets into five types of Twitter users which aims at helping to determine their veracity.
110
+
111
+ ### 5.1 Cluster Analysis
112
+
113
+ To explore the topics discussed on Twitter during the COVID-19 epidemic in Saudi Arabia and other countries in the Arab World, we subjected the text of the tweets to cluster analysis. After pre-processing the tweets as described above, we extracted N-gram features (unigrams, bigrams, and trigrams) from the Twitter corpus and clustered the tweets using the K-means algorithm in Python Scikit-learn 0.20.2 (Pedregosa et al., 2011), setting the value of k, the number of clusters, to five.
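+ A minimal sketch of this clustering step is shown below. The exact vectoriser settings are not stated, so the TF-IDF weighting and the fixed random seed here are assumptions; only the n-gram range (1-3) and k=5 follow the description above.
+
+ ```python
+ # Sketch of n-gram K-means clustering with scikit-learn (vectoriser settings are assumed).
+ from sklearn.feature_extraction.text import TfidfVectorizer
+ from sklearn.cluster import KMeans
+
+ def cluster_tweets(tweets, k=5, seed=0):
+     vectoriser = TfidfVectorizer(ngram_range=(1, 3))    # unigrams, bigrams, trigrams
+     X = vectoriser.fit_transform(tweets)                # sparse tweet-by-ngram matrix
+     km = KMeans(n_clusters=k, random_state=seed).fit(X)
+     terms = vectoriser.get_feature_names_out()          # get_feature_names() in scikit-learn 0.20.x
+     top_terms = {c: [terms[i] for i in km.cluster_centers_[c].argsort()[::-1][:3]]
+                  for c in range(k)}
+     return km.labels_, top_terms
+ ```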
114
+
115
+ ### 5.2 Rumour Detection
116
+
117
+ Following previous work (Zubiaga et al., 2018b), we applied a top-down strategy, where the set of rumours is identified in advance and the data is then sampled to extract the posts associated with the previously identified rumours. In our dataset, out of the one million tweets, we sampled 2,000 tweets to classify them for rumour detection. We manually labelled the tweets to create a gold standard dataset and then applied different machine learning algorithms in this part of our study.
118
+
119
+ ${}^{9}$ https://github.com/alsudias
120
+
121
+ #### 5.2.1 Labelling Guidelines
122
+
123
+ We manually labelled the tweets with 1, -1, and 0 to represent false information, correct information, and unrelated content, respectively. Our reference point for deciding whether the content contained true or false information was the list issued by the Ministry of Health in Saudi Arabia ${}^{10}$ , which is regularly updated (the last update applied for this study dates from 14 April 2020). Table 1 presents the list in English and Table 2 shows some example tweets for each label.
124
+
125
+ #### 5.2.2 Machine Learning Models
126
+
127
+ We applied three different machine learning algorithms: Logistic Regression (LR), Support Vector Classification (SVC), and Naïve Bayes (NB). To help the classifiers distinguish between the classes more accurately, we extracted further linguistic features. The selected features fall into two groups: word frequency based (count vector and TF-IDF) and word embedding based (Word2Vec and FastText). We used 10-fold cross validation to determine the accuracy of the classifiers for this dataset, splitting the entire sample into 90% training and 10% testing for each fold.
128
+
129
+ ${}^{10}$ https://www.moh.gov.sa
130
+
131
+
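+ A minimal sketch of this evaluation setup is given below, with scikit-learn defaults standing in for the unreported hyperparameters; only the choice of classifiers, the word-frequency feature types, and the 10-fold cross validation follow the description above.
+
+ ```python
+ # Sketch of the word-frequency configurations with 10-fold cross validation (hyperparameters assumed).
+ from sklearn.pipeline import make_pipeline
+ from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.svm import SVC
+ from sklearn.naive_bayes import MultinomialNB
+ from sklearn.model_selection import cross_val_score
+
+ def evaluate(texts, labels):          # labels in {1, -1, 0}
+     models = {
+         "LR + counts":  make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000)),
+         "SVC + tf-idf": make_pipeline(TfidfVectorizer(), SVC()),
+         "NB + counts":  make_pipeline(CountVectorizer(), MultinomialNB()),
+     }
+     return {name: cross_val_score(m, texts, labels, cv=10, scoring="accuracy").mean()
+             for name, m in models.items()}
+ ```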
132
+
133
+ <table><tr><td>Rumour in English</td></tr><tr><td>Pets are transporters of Coronavirus.</td></tr><tr><td>Mosquitoes are transporters of Coronavirus.</td></tr><tr><td>Children are not infected by Coronavirus.</td></tr><tr><td>Only old people may have a high risk of Coronavirus.</td></tr><tr><td>Hot or cold weather can kill the virus.</td></tr><tr><td>Gargling with water and salt eliminates the virus.</td></tr><tr><td>There are some herbs that protect against Coronavirus.</td></tr><tr><td>The virus does not survive on surfaces.</td></tr></table>
134
+
135
+ Table 1: List of rumours that appear during COVID-19 (source: Saudi Arabia Ministry of Health)
136
+
137
+ <table><tr><td>Tweet in English</td><td>Label</td></tr><tr><td>There will be a decrease in the spread of the Corona virus at the beginning of the summer, especially in the Arab world, due to the high temperatures.</td><td>1 (false)</td></tr><tr><td>The Ministry of Health: A virus lives and is mainly concentrated in the respiratory system, so it is not likely to be transmitted by insects or by mosquito bites.</td><td>-1 (true)</td></tr><tr><td>Oh God, in this blessed hour, we ask you to have mercy on us and keep away from us all disease and calamity, and protect us from the evil of diseases and sicknesses. Preserve our country and other Muslim countries.</td><td>0 (unrelated)</td></tr></table>
138
+
139
+ Table 2: Example tweets and our labelling system
140
+
141
+ ### 5.3 Source Type Prediction
142
+
143
+ We replicated a Logistic Regression model from our previous study, which classifies tweets into five categories: academic, media, government, health professional, and public (Alsudias and Rayson, 2019). We used this LR model because it previously achieved the best accuracy (77%), and employed it here to predict the source of the COVID-19 tweets that we had already labelled in Section 5.2.
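+ A small sketch of this reuse step is shown below; the file name and the use of joblib are illustrative assumptions, since the original study does not describe how the fitted model was stored.
+
+ ```python
+ # Sketch of reusing a previously fitted source-type classifier (storage details assumed).
+ import joblib
+
+ SOURCE_TYPES = ["academic", "media", "government", "health professional", "public"]
+
+ def predict_sources(tweets, model_path="source_type_lr_pipeline.joblib"):
+     pipeline = joblib.load(model_path)   # fitted vectoriser + Logistic Regression
+     return pipeline.predict(tweets)      # one of the five source types per tweet
+ ```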
144
+
145
+ ## 6 Results and Discussion
146
+
147
+ ### 6.1 Cluster Analysis
148
+
149
+ Our cluster analysis identified five main topics discussed in the tweets: (1) disease statistics: the number of infected, deceased, and recovered people; (2) prayers: prayers asking God to stop the virus; (3) disease locations: spread and location (i.e., names of locations, information about spread); (4) advice for prevention education: health information (i.e., prevention methods, signs, symptoms); and (5) advertising: adverts for any product, whether related or not related to the virus. Figure 2 illustrates the five topics of COVID-19 tweets with examples.
150
+
151
+ For each cluster, the top terms by frequency are as follows: (1) disease statistics: case and infection; (2) prayers: Allah, Oh God, and Muslims; (3) disease locations: Dammam, Riyadh, and Makkah; (4) advice for prevention education: crisis, spread, and pandemic; (5) advertising: discount, coupon, and code.
152
+
153
+ We found that four of our categories (disease statistics, prayers, disease locations, and advice for prevention education) are similar to those found by Odlum and Yoon (2015), namely risk factors, prevention education, disease trends, and compassion. The marketing category is one of the topics identified in (Ahmed et al., 2017a), which discussed the topics on Twitter during the Ebola epidemic in the United States. Jokes and/or sarcasm is a category that did not appear in our study but can be found in (Ahmed et al., 2017a) and (Ahmed et al., 2019), a thematic analysis study of Twitter data during the H1N1 pandemic. This may be a result of more concern and panic about COVID-19 than about other diseases during this period of time.
154
+
155
+ ### 6.2 Rumour Detection
156
+
157
+ The result of our manual labelling process is 316 tweets labelled 1 (false), 895 tweets labelled -1 (true), and 789 tweets labelled 0 (unrelated). Therefore, false information represents about 15.8% (of 2,000 tweets) and around 26% (of the 1,211 tweets remaining after removing the unrelated ones). In the study by Ortiz-Martínez and Jiménez-Arcia (2017), 61.3% (of 377 tweets) was classified as misinformation about Yellow Fever, and 32% (of 26,728 tweets) was considered rumours related to Zika Fever in Ghenai and Mejova (2017).
158
+
159
+
160
+
161
+ Figures 3, 4, and 5 show the accuracy, F1-score, recall, and precision on our corpus using the LR, SVC, and NB algorithms with the various feature sets. The highest accuracy (84.03%) was achieved by the LR classifier with count vector features and by SVC with TF-IDF. Overall, count vector features give the best results for LR and NB on all metrics except precision, where TF-IDF achieves better results (83.71% with LR and 81.28% with NB), while for SVC TF-IDF gives the best results on all metrics except recall, where count vector features achieve 75.55%.
162
+
163
+ We also applied several word embedding based approaches but did not obtain good results. The accuracy ranges from 50% to 60% and the F1-score is around 40% on average. FastText models achieve better accuracy than Word2Vec with SVC (54.89%) and NB (59.49%), by approximately 5%, while Word2Vec shows the best results with LR: 60.68% accuracy, 49% F1-score and recall, and 65.97% precision.
164
+
165
+ The word frequency based approaches achieve around 20% better results than the word embedding based ones. We expect the reason for this to be the dataset size and the specific domain of the content (Ma, 2018). We had assumed that the word embedding methods might achieve good results due to the importance of the relevant information around each word; for example, FastText can deal with the misspellings that are common in social media language and improves word vectors with subword information (Bojanowski et al., 2017).
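+ For reference, a minimal sketch of how such embedding-based tweet representations can be built is given below; the choice of gensim, the averaging of word vectors, and all hyperparameters are assumptions, as the embedding setup is not reported in detail.
+
+ ```python
+ # Sketch of FastText-based tweet features (library choice and hyperparameters assumed).
+ import numpy as np
+ from gensim.models import FastText
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.model_selection import cross_val_score
+
+ def embedding_accuracy(tokenised_tweets, labels, dim=100):
+     ft = FastText(sentences=tokenised_tweets, vector_size=dim, min_count=2, epochs=10)
+     X = np.array([np.mean([ft.wv[w] for w in toks], axis=0) if toks else np.zeros(dim)
+                   for toks in tokenised_tweets])        # one averaged vector per tweet
+     return cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=10).mean()
+ ```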
166
+
167
+ ### 6.3 Source Type Prediction
168
+
169
+ The model predicts the source type for each of the tweets. Table 3 shows some examples of tweets with the labels predicted by the model. We focus on the results for the false-information content since these are of highest importance for Public Health Organizations. 30% (95 of 316) and 28% (91 of 316) of the rumour tweets are classified as written by a health professional and an academic, respectively, while only 12% (39 of 316) of them are predicted as written by the public. From this result, we find that the tweets containing false information quite often use the language style of academics and health professionals.
170
+
171
+ ![01963db1-6402-7722-b886-5a779557568a_6_216_188_1238_804_0.jpg](images/01963db1-6402-7722-b886-5a779557568a_6_216_188_1238_804_0.jpg)
172
+
173
+ Figure 2: Examples of tweets in each cluster
174
+
175
+ <table><tr><td>Tweet in English</td><td>Predicted Label</td></tr><tr><td>In scientific reading ... the virus is expected to erode in April due to heat.</td><td>Academic</td></tr><tr><td>Health spokesman has confirmed cases so far infected with coronavirus, mostly for adults.</td><td>Media</td></tr><tr><td>Ministry of health please spray mosquitoes, as they are carriers of the Coronavirus, increased infections as mosquitoes spread.</td><td>Government</td></tr><tr><td>A Chinese expert confirms that inhaling water vapor kills coronavirus.</td><td>Health professional</td></tr><tr><td>Corona treatment with lemon and garlic "YouTube link".</td><td>Public</td></tr></table>
176
+
177
+ Table 3: Some examples of false tweets from different source predicted labels
178
+
179
+ ## 7 Conclusion and Future work
180
+
181
+ In this paper, we identified and analysed one million tweets related to the COVID-19 pandemic in the Arabic language. We performed three experiments which we expect can help to develop methods of analysis suitable for helping Arab World governments and Public Health Organisations. Our analysis first identifies the topics discussed on social media during the epidemic, then detects the tweets that contain false information, and finally predicts the source of the rumour related tweets based on our previous model for other diseases. The clustered topics are COVID-19 statistics, prayers to God, COVID-19 locations, advice for prevention education, and advertising.
182
+
183
+ ![01963db1-6402-7722-b886-5a779557568a_7_193_167_599_407_0.jpg](images/01963db1-6402-7722-b886-5a779557568a_7_193_167_599_407_0.jpg)
184
+
185
+ Figure 3: Results using Logistic Regression
186
+
187
+ ![01963db1-6402-7722-b886-5a779557568a_7_193_659_601_406_0.jpg](images/01963db1-6402-7722-b886-5a779557568a_7_193_659_601_406_0.jpg)
188
+
189
+ Figure 4: Results using Support Vector Classification
190
+
191
+ ![01963db1-6402-7722-b886-5a779557568a_7_193_1151_599_406_0.jpg](images/01963db1-6402-7722-b886-5a779557568a_7_193_1151_599_406_0.jpg)
192
+
193
+ Figure 5: Results using Naïve Bayes
194
+
195
+ Our second contribution is a labelled sample of tweets (2,000 out of 1 million) annotated for false information, correct information, and unrelated content. To investigate the replicability and scalability of this annotation, we applied multiple machine learning algorithms with different sets of features. The highest accuracy was 84%, achieved by the LR classifier with count vector features and by SVC with TF-IDF.
196
+
197
+ Finally, we also used our previous model to predict the source types of the sampled tweets. Around 60% of the rumour related tweets are classified as written by health professionals and academics, which shows the urgent need to respond to such fake news. The dataset, including tweet IDs, manually assigned labels for the sampled tweets, and other resources used in this paper, is made freely available for academic research purposes ${}^{11}$ .
198
+
199
+ There are clearly many potential future directions related to analysing social media data on the topic of pandemics. Since false information can play a dangerous role in topics related to health, there is a need to enhance and automate the detection process, supporting languages beyond just English. Future directions include monitoring the spread of the disease by finding infected individuals, identifying infected locations, or observing people who do not follow self-isolation rules. Moreover, the analysis could proceed in an exploratory and thematic way, such as discovering further topics discussed during the epidemic, as well as assisting governments and public health organisations in measuring people's concerns resulting from the disease.
200
+
201
+ ## References
202
+
203
+ Wasim Ahmed, Peter A Bath, Laura Sbaffi, and Gianluca Demartini. 2019. Novel insights into views towards H1N1 during the 2009 Pandemic: a thematic analysis of Twitter data. Health Information & Libraries Journal, 36(1):60-72.
204
+
205
+ Wasim Ahmed, Gianluca Demartini, and Peter A Bath. 2017a. Topics discussed on Twitter at the beginning of the 2014 Ebola epidemic in United States. iConference 2017 Proceedings.
206
+
207
+ Wasim Ahmed, G. Demartini, and P. Bath. 2017b. Topics Discussed on Twitter at the Beginning of the 2014 Ebola Epidemic in United States.
208
+
209
+ Sarah Alqurashi, Ahmad Alhindi, and Eisa Alanazi. 2020. Large arabic twitter dataset on covid-19. arXiv preprint arXiv:2004.04315.
210
+
211
+ ---
212
+
213
+ "https://doi.org/10.17635/lancaster/ researchdata/375
214
+
215
+ ---
216
+
217
+ Mohamed Taher Alrefaie. 2017. Arabic stop words. https://github.com/mohataher/arabic-stop-words.
218
+
219
+
220
+
221
+ Lama Alsudias and Paul Rayson. 2019. Classifying information sources in Arabic twitter to support online monitoring of infectious diseases. In Proceedings of the 3rd Workshop on Arabic Corpus Linguistics, pages 22-30, Cardiff, United Kingdom. Association for Computational Linguistics.
222
+
223
+ Lama Alsudias and Paul Rayson. 2020. Developing an Arabic Infectious Disease Ontology to Include Non-Standard Terminology. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 4844-4852, Marseille, France. European Language Resources Association.
224
+
225
+ Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
226
+
227
+ Emily Chen, Kristina Lerman, and Emilio Ferrara. 2020. Covid-19: The first public coronavirus twitter dataset. arXiv preprint arXiv:2003.07372.
228
+
229
+ I.C.-H Fung, King-wa Fu, C.-H Chan, Benedict Chan, Chi-Ngai Cheung, Thomas Abraham, and Zion Tse. 2016a. Social Media's Initial Reaction to Information and Misinformation on Ebola, August 2014: Facts and Rumors. Public Health Reports, 131:461- 473.
230
+
231
+ Isaac Chun-Hai Fung, Carmen Hope Duke, Kathryn Cameron Finch, Kassandra Renee Snook, Pei-Ling Tseng, Ana Cristina Hernandez, Manoj Gambhir, King-Wa Fu, and Zion Tsz Ho Tse. 2016b. Ebola virus disease and social media: A systematic review. American journal of infection control, 44(12):1660-1671.
232
+
233
+ Amira Ghenai and Yelena Mejova. 2017. Catching Zika fever: Application of crowdsourcing and machine learning for tracking health misinformation on Twitter. arXiv preprint arXiv:1707.03778.
234
+
235
+ Fatima Haouari, Maram Hasanain, Reem Suwaileh, and Tamer Elsayed. 2020. The First Arabic COVID-19 Twitter Dataset with Propagation Networks. arXiv preprint arXiv:2004.05861.
236
+
237
+ Aditya Joshi, Sarvnaz Karimi, Ross Sparks, Cécile Paris, and C Raina Macintyre. 2019. Survey of Text-based Epidemic Intelligence: A Computational Linguistics Perspective. ACM Computing Surveys (CSUR), 52(6):1-19.
238
+
239
+ Janani Kalyanam, Sumithra Velupillai, Son Doan, Mike Conway, and Gert R. G. Lanckriet. 2015. Facts and Fabrications about Ebola: A Twitter Based Study. ArXiv, abs/1508.02079.
240
+
241
+ Christian E Lopez, Malolan Vasu, and Caleb Gallemore. 2020. Understanding the perception of COVID-19 policies by mining a multilanguage Twitter dataset. arXiv preprint arXiv:2003.10359.
242
+
243
+ Edward Ma. 2018. 3 basic approaches in Bag of Words which are better than Word Embeddings.
244
+
245
+ Rabeaa Mouty and Achraf Gazdar. 2018. Survey on Steps of Truth Detection on Arabic Tweets. In 2018 21st Saudi Computer Society National Computer Conference (NCC), pages 1-6. IEEE.
246
+
247
+ Michelle Odlum and Sunmoo Yoon. 2015. What can we learn about the Ebola outbreak from tweets? American journal of infection control, 43(6):563- 571.
248
+
249
+ Yeimer Ortiz-Martínez and Luisa F Jiménez-Arcia. 2017. Yellow fever outbreaks and Twitter: Rumors and misinformation. American journal of infection control, 45(7):816-817.
250
+
251
+ Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825-2830.
252
+
253
+ Umair Qazi, Muhammad Imran, and Ferda Ofli. 2020. GeoCoV19: A Dataset of Hundreds of Millions of Multilingual COVID-19 Tweets with Location Information. arXiv preprint arXiv:2005.11177.
254
+
255
+ Tatiana Vorovchenko, Proochista Ariana, Francois van Loggerenberg, and Amirian Pouria. 2017. Ebola and Twitter. What Insights Can Global Health Draw from Social Media?, pages 85-98.
256
+
257
+ Arkaitz Zubiaga, Ahmet Aker, Kalina Bontcheva, Maria Liakata, and Rob Procter. 2018a. Detection and Resolution of Rumours in Social Media: A Survey. ACM Comput. Surv., 51(2).
258
+
259
+ Arkaitz Zubiaga, Ahmet Aker, Kalina Bontcheva, Maria Liakata, and Rob Procter. 2018b. Detection and resolution of rumours in social media: A survey. ACM Computing Surveys (CSUR), 51(2):1-36.
papers/ACL/ACL 2020/ACL 2020 Workshop/ACL 2020 Workshop NLP-COVID/yx-k0ukHzDR/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,247 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § COVID-19 AND ARABIC TWITTER: HOW CAN ARAB WORLD GOVERNMENTS AND PUBLIC HEALTH ORGANIZATIONS LEARN FROM SOCIAL MEDIA?
2
+
3
+ Lama Alsudias
4
+
5
+ King Saud University / Saudi Arabia
6
+
7
+ Lancaster University / UK
8
+
9
+ lalsudias@ksu.edu.sa
10
+
11
+ Paul Rayson
12
+
13
+ Lancaster University / UK
14
+
15
+ p.rayson@lancaster.ac.uk
16
+
17
+ § ABSTRACT
18
+
19
+ In March 2020, the World Health Organization announced the COVID-19 outbreak as a pandemic. Most previous social media related research has been on English tweets and COVID-19. In this study, we collect approximately 1 million Arabic tweets from the Twitter streaming API related to COVID-19. Focussing on outcomes that we believe will be useful for Public Health Organizations, we analyse them in three different ways: identifying the topics discussed during the period, detecting rumours, and predicting the source of the tweets. We use the k-means algorithm for the first goal with $\mathrm{k} = 5$ . The topics discussed can be grouped as follows: COVID- 19 statistics, prayers for God, COVID-19 locations, advise and education for prevention, and advertising. We sample 2000 tweets and label them manually for false information, correct information, and unrelated. Then, we apply three different machine learning algorithms, Logistic Regression, Support Vector Classification, and Naïve Bayes with two sets of features, word frequency approach and word em-beddings. We find that Machine Learning classifiers are able to correctly identify the rumour related tweets with ${84}\%$ accuracy. We also try to predict the source of the rumour related tweets depending on our previous model which is about classifying tweets into five categories: academic, media, government, health professional, and public. Around (60%) of the rumour related tweets are classified as written by health professionals and academics.
20
+
21
+ § 1 INTRODUCTION
22
+
23
+ The current coronavirus disease (COVID-19) outbreak is of major global concern and is classified by the World Health Organization as an international health emergency. Governments around the world have taken different decisions in order to stop the spread of the disease. Many academic researchers in various fields including Natural Language Processing (NLP) have carried out studies targetting this subject. For example, COVID-19 and ${\mathrm{{AI}}}^{1}$ is one of the conferences that has been convened virtually to show how Artificial Intelligence (AI) can contribute in helping the Public Health Organizations during pandemics.
24
+
25
+ People use social media applications such as Twitter to find the news related to COVID-19 and/or express their opinions and feelings about it. As a result, a vast amount of information could be exploited by NLP researchers for a myriad of analyses despite the informal nature of social media writing style.
26
+
27
+ We hypothesise that Public Health Organizations (PHOs) may benefit from mining the topics discussed between people during the pandemic. This may help in understanding a population's current and changing concerns related to the disease and help to find the best solutions to protect people. In addition, during an outbreak people themselves will search online for reliable and trusted information related to the disease such as prevention and transmission pathways. COVID-19 Twitter conversations may not correlate with the actual disease epidemiology. Therefore, Public Health Organizations have a vested interest in ensuring that information spread in the population is accurate (Vorovchenko et al., 2017). For instance, the Ministry of Health in Saudi Arabia ${}^{2}$ presents a daily press conference incorporating the aim of quickly stopping the spread of false rumours. However, there is currently a prolonged period of time until warnings are issued. For example, the first tweet that included false information about hot weather killing the virus was on 10 February 2020, while the press conference which responded to this rumour on 14 April 2020. There is a clearly a need to find false information as quickly as possible. In addition, effort needs to be made in relation to tracking the user accounts that promote rumours. This can be undertaken using a variety of techniques, for example using social network features, geolocation, bot detection or content based approaches such as language style. Public Health Organizations would benefit from speeding up the process of tracking in order to stop rumours and remove the bot networks.
28
+
29
+ 'https://hai.stanford.edu/events/ covid-19-and-ai-virtual-conference
30
+
31
+ ${}^{2}$ https://www.moh.gov.sa
32
+
33
+ The vast majority of the previous research in this area has been on English Twitter content but this will not directly assist PHOs in Arabic speaking countries. The Arabic language is spoken by 467 million people in the world and has more than 26 dialects ${}^{3}$ . Of particular importance for NLP is coping with dialectal and/or meaning differences in less formal settings such as social media. As an example in health field, the word ( $\downarrow \downarrow \downarrow \downarrow$ ) may be understood as vaccination ${}^{4}$ in Modern Standard Arabic or reading supplications in Najdi dialect ${}^{5}$ . There has been much recent progress in Arabic NLP research yet there is still an urgent need to develop fake news detection for Arabic tweets (Mouty and Gazdar, 2018).
34
+
35
+ In this paper, we have combined qualitative and quantitative studies to analyse Arabic tweets aiming to support Public Health Organizations who can learn from social media data along various lines:
36
+
37
+ * Analyzing the topics discussed between people during the peak of COVID-19
38
+
39
+ * Identifying and detecting the rumours related to COVID-19.
40
+
41
+ * Predicting the type of sources of tweets about COVID-19.
42
+
43
+ § 2 RELATED WORK
44
+
45
+ There is a vast quantity of research over recent years that analyses social media data related to different pandemics such as H1N1 (2009), Ebola (2014), Zika Fever (2015), and Yellow Fever (2016). These studies followed a variety of directions for analysis with multiple different goals (Joshi et al., 2019). The study of Ahmed et al. (2019) used a thematic analysis of tweets related to the H1N1 pandemic. Eight key themes emerged from the analysis: emotion, health related information, general commentary and resources, media and health organisations, politics, country of origin, food, and humour and/or sarcasm.
46
+
47
+ A survey study (Fung et al., 2016b) reviewed the research relevant to the Ebola virus and social media. It compared research questions, study designs, data collection methods, and analytic methods. Ahmed et al. (2017b) used content analysis to identify the topics discussed on Twitter at the beginning of the 2014 Ebola epidemic in the United States. In (Vorovchenko et al., 2017), they determined the geolocation of the Ebola tweets and named the accounts that interacted more on Twitter related to the 2014 West African Ebola outbreak. The main goal of the study by Kalyanam et al. (2015) was to distinguish between credible and fake tweets. It highlighted the problems of manual labeling process with verification needs. The study in (Fung et al., 2016a) highlighted how the problem of misinformation changed during the disease outbreak and recommended a longitudinal study of information published on social media. Moreover, it pointed out the importance of understanding the source of this information and the process of spreading rumours in order to reduce their impact in the future.
48
+
49
+ Ghenai and Mejova (2017) tracked Zika Fever misinformation on social media by comparing them with rumours identified by the World Health Organization. Also, they pointed out the importance of credible information sources and encouraged collaboration between researchers and health organizations to rapidly process the misinformation related to health on social media. The study in (Ortiz-Martínez and Jiménez-Arcia, 2017) reviewed the quality of available yellow fever information on Twitter. It also showed the significance of the awareness of misleading information during pandemic spread. The study of Zubiaga et al. (2018a) summarised other studies related to social media rumours. It illustrated techniques for developing rumour detection, rumour tracking, rumour stance classification, and rumour veracity classification. Vorovchenko et al. (2017) mentioned the importance of Twitter information during the epidemic and how Public Health Organisations can benefit from this. They showed the requirement to monitor false information posted by some accounts and recommended that this was performed in real time to reduce the danger of this information. Also, they discussed the lack of available datasets which help in the development of rumour classification systems.
50
+
51
+ ${}^{3}$ https://en.wikipedia.org/wiki/Arabic
52
+
53
+ ${}^{4}$ https://en.wikipedia.org/wiki/Modern_Standard_Arabic
54
+
55
+ ${}^{5}$ https://en.wikipedia.org/wiki/Najdi_Arabic
56
+
57
+ Researchers have been doing studies on building and analysing COVID-19 Twitter datasets since the disease appeared in December 2019. So far, there are two different datasets which have been published recently related to Arabic and COVID-19 (Alqurashi et al., 2020) and (Haouari et al., 2020). The former collected 3,934,610 tweets until April 15, 2020, and the latter included around 748k tweets until March 31, 2020. These papers contain an initial analysis and statistical results for the collected tweets and some suggestions for future work, which include pandemic response, behavior analysis, emergency management, misinformation detection, and social analytics.
58
+
59
+ On the other hand, there are some datasets in English such as (Chen et al., 2020) and (Lopez et al., 2020). Also, there is a multilingual COVID-19 dataset containing location information (Qazi et al., 2020). This contains more than 524 million tweets, with 5.5 million Arabic tweets, posted over a period of three months since February 1, 2020. It focuses on determining the geolocation of a tweet which can help research with various different challenges, including identifying rumours.
60
+
61
+ Although the above studies have produced datasets related to COVID-19, they do not analyse them deeply using NLP methods. Previous studies representing earlier epidemics present good techniques and results, however none of them are related to Arabic tweets. Therefore, to assist PHOs in Arabic speaking countries there is an urgent need to analyse tweets related to COVID-19 using multiple Arabic NLP techniques.
62
+
63
+ § 3 UPDATE ARABIC INFECTIOUS DISEASE ONTOLOGY
64
+
65
+ With the recent appearance of COVID-19 as a new disease, there is need to update our Arabic Infectious Disease Ontology (Alsudias and Rayson, 2020), which integrates the scientific and medical vocabularies of infectious diseases with their informal equivalents used in general discourse. We collated COVID-19 information from the World Health Organization ${}^{6}$ and Ministry of health in Saudi Arabia. This included symptom, cause, prevention, infection, organ, treatment, diagnosis, place of the disease spread, and slang terms for COVID-19 and extended our ontology ${}^{7}$ . These terms were then used in our collection process.
66
+
67
+ § 4 DATA COLLECTION
68
+
69
+ We began collecting Arabic tweets about a number of infectious diseases from September 2019. In this paper, we analysed only the tweets related to COVID-19 from December 2019 to April 2020 (the few tweets between September and November are related to Middle East respiratory syndrome coronavirus, MERS-CoV ${}^{8}$ ). We collected approximately six million tweets in Arabic during this period. We obtained the tweets using three Arabic keywords meaning Coronavirus, a common misspelling of Coronavirus, and COVID-19, respectively. We collected the tweets weekly using the Twitter API.
70
+
71
+ Next, we pre-processed the tweets through a pipeline of different steps:
72
+
73
+ * Manually remove retweets, advertisements, and spam.
74
+
75
+ * Filter out URLs, mentions, hashtags, numbers, emojis, repeating characters, and non-Arabic words using Python scripts ${}^{9}$ .
76
+
77
+ * Normalize and tokenize tweets.
78
+
79
+ * Remove Arabic stopwords (Alrefaie, 2017).
80
+
81
+ After pre-processing, the resulting dataset contained 1,048,575 unique tweets from the original 6,578,982 collected. Figure 1 shows the number of Arabic tweets about Coronavirus each week, with specific dates highlighted to show government decisions on protecting the population from COVID-19 and other key dates for context.
82
+
83
+ ${}^{6}$ http://www.emro.who.int
84
+
85
+ ${}^{7}$ https://github.com/alsudias/Arabic-Infectious-Disease-Ontology
86
+
87
+ ${}^{8}$ https://www.who.int/news-room/fact-sheets/detail/middle-east-respiratory-syndrome-coronavirus-(mers-cov)
88
+
89
+
90
+
91
+ < g r a p h i c s >
92
+
93
+ Figure 1: Number of Arabic tweets about Coronavirus
94
+
95
+ § 5 METHODS
96
+
97
+ We performed three different types of analysis on the collected data. Firstly, in order to better understand the topics discussed in the corpus, we carried out a cluster analysis. Secondly, taking a sample of the corpus, we performed rumour detection. Finally, we extended our previous work to classify the source of tweets into five types of Twitter users which aims at helping to determine their veracity.
98
+
99
+ § 5.1 CLUSTER ANALYSIS
100
+
101
+ To explore the topics discussed on Twitter during the COVID-19 epidemic in Saudi Arabia and other countries in the Arab World, we subjected the text of the tweets to cluster analysis. After pre-processing the tweets as described above, we used the N-gram forms (unigram, bigram, and trigram) of twitter corpus and clustered them using the K-means algorithm with the Python Scikit-learn 0.20.2 (Pedregosa et al., 2011) software and set the value of $\mathrm{k}$ , the number of clusters, to be five.
102
+
103
+ § 5.2 RUMOUR DETECTION
104
+
105
+ Following previous work (Zubiaga et al., 2018b), we applied a top-down strategy, where the set of rumours is identified in advance and the data is then sampled to extract the posts associated with the previously identified rumours. In our dataset, out of the one million tweets, we sampled 2,000 tweets to classify them for rumour detection. We manually labelled the tweets to create a gold standard dataset and then applied different machine learning algorithms in this part of our study.
106
+
107
+ ${}^{9}$ https://github.com/alsudias
108
+
109
+ § 5.2.1 LABELLING GUIDELINES
110
+
111
+ We manually labelled the tweets with 1,-1, and 0 to represent false information, correct information, and unrelated content, respectively. Our reference point for deciding whether the content contained true or false information was based on the list issued by the Ministry of Health in Saudi Arabia ${}^{10}$ and is regularly updated (the last update applied for this study dates from 14 April 2020). Table 1 presents the list in both Arabic and English and Table 2 shows some example tweets for each label.
112
+
113
+ § 5.2.2 MACHINE LEARNING MODELS
114
+
115
+ We applied three different machine learning algorithms: Logistic Regression (LR), Support Vector Classification (SVC), and Naïve Bayes (NB). To help the classifiers distinguish between the classes more accurately, we extracted further linguistic features. The selected features fall into two groups: word frequency based (count vector and TF-IDF) and word embedding based (Word2Vec and FastText). We used 10-fold cross validation to determine the accuracy of the classifiers for this dataset, splitting the entire sample into 90% training and 10% testing for each fold.
116
+
117
+ ${}^{10}$ https://www.moh.gov.sa
118
+
119
+
120
+
121
+ max width=
122
+
123
+ Rumour in Arabic Rumour in English
124
+
125
+ 1-2
126
+ .lignes for any list Pets are transporters of Coronavirus.
127
+
128
+ 1-2
129
+ Bugs I J J J J J J J Mosquitoes are transporters of Coron- avirus.
130
+
131
+ 1-2
132
+ Eggeration of the Hubble Children are not infected by Coron- avirus.
133
+
134
+ 1-2
135
+ Jule is the region of the Bugs regions are in the Only old people may have a high risk of Coronavirus.
136
+
137
+ 1-2
138
+ Type III Is Japan System System Sys Hot or cold weather can kill the virus.
139
+
140
+ 1-2
141
+ The results will get the best Gargling with water and salt eliminates the virus.
142
+
143
+ 1-2
144
+ Eggs: 11 is a list of the list of There are some herbs that protect against from Coronavirus.
145
+
146
+ 1-2
147
+ The Hubble is a signal The virus does not survive on surfaces.
148
+
149
+ 1-2
150
+
151
+ Table 1: List of rumours that appear during COVID-19 (source: Saudi Arabia Ministry of Health)
152
+
153
+ max width=
154
+
155
+ Tweet in Arabic Tweet in English Label
156
+
157
+ 1-3
158
+ Egger. Lacy Lacy LLS is a 3-1 Progress Let us use ② $1 - \frac{1}{2}1 = 1$ ② $1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1 - \frac{1}{2}1$ There will be a decrease in the spread of the Corona virus at the beginning of the summer, espe- cially in the Arab world, due to the high temperatures. 1 (false)
159
+
160
+ 1-3
161
+ Burth states are still provided in all Bob is closed by its Add with the state of the state of the state. Presentatively the probability of the detail. The Ministry of Health: A virus lives and is mainly concentrated in the respiratory system, so it is not likely to be transmitted by insects or by mosquito bites. - 1 (true)
162
+
163
+ 1-3
164
+ Exp. 31 (1) is still asked as a full Japan Mat. Let us will get be done in a install SSL as B as to be less than 31 g Oh God, in this blessed hour, We ask you to have mercy on us and keep away from us all dis- ease and calamity, and protect us from the evil of diseases and sick- nesses. Preserve our country and other Muslim countries. 0 (unre- lated)
165
+
166
+ 1-3
167
+
168
+ Table 2: Example tweets and our labelling system
169
+
170
+ § 5.3 SOURCE TYPE PREDICTION
171
+
172
+ We replicated a Logistic Regression model from our previous study, which classifies tweets into five categories: academic, media, government, health professional, and public (Alsudias and Rayson, 2019). We used this LR model because it previously achieved the best accuracy (77%), and employed it here to predict the source of the COVID-19 tweets that we had already labelled in Section 5.2.
173
+
174
+ § 6 RESULTS AND DISCUSSION
175
+
176
+ § 6.1 CLUSTER ANALYSIS
177
+
178
+ Our cluster analysis of the five main public topics discussed in tweets content is as follows: (1) disease statistics: the number of infected, died, and recovered people; (2) prayers: prayer asking God to stop virus; (3) disease locations: spread and location (i.e., name of locations, information about spread); (4) Advise for prevention education: health information (i.e., prevention methods, signs, symptoms); and (5) advertising: adverts for any product either related or not related to the virus. Figure 2 illustrates the five topics of COVID-19 tweets with examples.
179
+
180
+ For each cluster, the top terms by frequency are as follows: (1) disease statistics: case (3) be w $\left( {\delta \downarrow ,{\lambda }_{ - }}\right)$ , and infection $\left( {\delta \downarrow ,\left| \delta \right| }\right)$ ; (2) prayers: Allah (或1), Oh God (201), and Muslims (Multi); (3) disease locations: Dammam ( $\cdot$ LAI), Riyadh (city 11), and Makkah (cit.); (4) Advise for prevention education: crisis (%_j'), spread (_láxil), and pandemic (%L); (5) advertising: discount $\left( { \sim \rightarrow }\right)$ , coupon $\left( { \downarrow \rightarrow }\right)$ , and code (39).
181
+
182
+ We found that four of our categories (disease statistics, prayers, disease locations, and advise for prevention education) are similar to those found by Odlum and Yoon (2015) which are risk factors, prevention education, disease trends, and compassion. The marketing category is one of the topics in (Ahmed et al., 2017a) which discussed the topics in Twitter during the Ebola epidemic in the United States. Jokes and/or sarcasm is one of the categories that did not appear in our study but can be found in (Ahmed et al., 2017a) and (Ahmed et al., 2019), a thematic analysis study of Twitter data during $\mathrm{H}1\mathrm{\;N}1$ pandemic. This may be a result of more concern and panic from COVID-19 than other diseases during this period of time.
183
+
184
+ § 6.2 RUMOUR DETECTION
185
+
186
+ The result of our manual labelling process is 316 tweets labelled 1 (false), 895 tweets labelled -1 (true), and 789 tweets labelled 0 (unrelated). Therefore, false information represents about 15.8% (of 2,000 tweets) and around 26% (of the 1,211 tweets remaining after removing the unrelated ones). In the study by Ortiz-Martínez and Jiménez-Arcia (2017), 61.3% (of 377 tweets) was classified as misinformation about Yellow Fever, and 32% (of 26,728 tweets) was considered rumours related to Zika Fever in Ghenai and Mejova (2017).
187
+
188
+
189
+
190
+ Figures 3, 4, and 5 show the accuracy, F1-score, recall, and precision on our corpus using LR, SVC, and NB algorithms with various feature selection approaches. The highest accuracy (84.03%) was achieved by the LR classifier with a count vector set of features and SVC with TF-IDF. Therefore, the count vector gives best result in LG and NB in all metrics results whereas with precision which achieves better results with TF-IDF 83.71% in LG and 81.28% in NB. while TF-IDF in SVC has the best results except recall which achieved 75.55% in count vector set features.
191
+
192
+ We also applied several word embedding based approaches but without obtaining good results. The accuracy ranges from ${50}\%$ to ${60}\%$ and the F1 score is around ${40}\%$ on average. FastText models achieve better accuracy in SVC (54.89%) and NB (59.49%) than Word2Vec by approximately (5%). While Word2Vec shows the best result with LG 60.68% for accuracy, 49% for F1 and recall, and 65.97% in precision.
193
+
194
+ The word frequency based approaches have around a ${20}\%$ better result than the word embedding ones. The reason for this is expected to be the dataset size and the specific domain of context (Ma, 2018). We assumed that the word embedding methods may achieve good results due to the importance of the relevant information around the word. For example, FastText can deal with the misspelling problem which is common in social media language style and improves word vectors with subword information (Bojanowski et al., 2017).
195
+
196
+ § 6.3 SOURCE TYPE PREDICTION
197
+
198
+ The model predicts the source type for each of the tweets. Table 3 shows some examples of the tweets with predicted labels by the model. We focus on the result of the fake news content since they are of highest importance for the Public Health Organization. ${30}\%$ (95 of 316) and 28% (91 of 316) of the rumour tweets are classified as written by a health professional and academic consequently. While only ${12}\% \left( {{39}\text{ of 316 }}\right)$ of them are predicted as written by the public. With this result, we find that the tweets containing false information quite often used the language style of academics and health professionals.
199
+
200
+ < g r a p h i c s >
201
+
202
+ Figure 2: Examples of tweets in each cluster
203
+
204
+ max width=
205
+
206
+ Tweet in Arabic Tweet in English Predicted Label
207
+
208
+ 1-3
209
+ Type III JUI Engin ... @link "Self" J BJILL JUNK LIG In scientific reading ... the virus is expected to erode in April due to heat. Academic
210
+
211
+ 1-3
212
+ ON In Sec. 3.1 C. VELI in all data. in all times of types or and it Health spokesman has confirmed cases so far infected with coronavirus, mostly for adults. Media
213
+
214
+ 1-3
215
+ orgale deletions. Then, the June 11, Läville Leitings Ministry of health please spray mosquitoes, as they are carriers of the Coronavirus, increased infections as mosquitoes spread. Government
216
+
217
+ 1-3
218
+ .LII LI BLARI JI WII LISI LISI lies for some line. A Chinese expert confirms that inhal- ing water vapor kills coronavirus. Health professional
219
+
220
+ 1-3
221
+ part is input to be $\because g = g \cdot h = 1$ Corona treatment with lemon and gar- lic "YouTube link". Public
222
+
223
+ 1-3
224
+
225
+ Table 3: Some examples of false tweets from different source predicted labels
226
+
227
+ § 7 CONCLUSION AND FUTURE WORK
228
+
229
+ In this paper, we identified and analysed one million tweets related to the COVID-19 pandemic in the Arabic language. We performed three experiments which we expect can help to develop methods of analysis suitable for helping Arab World Governments and Public Health Organi-sations. Our analysis first identifies the topics discussed on social media during the epidemic, detects the tweets that contain false information, and predicts the source of the rumour related tweets based on our previous model for other diseases. The clustered topics are COVID-19 statistics, prayers for God, COVID-19 locations, advise for preventing education, and advertising.
230
+
231
+ < g r a p h i c s >
232
+
233
+ Figure 3: Results using Logistic Regression
234
+
235
+ < g r a p h i c s >
236
+
237
+ Figure 4: Results using Support Vector Classification
238
+
239
+ < g r a p h i c s >
240
+
241
+ Figure 5: Results using Naïve Bayes
242
+
243
+ Our second contribution is a labeled sample of tweets (2,000 out of 1 million) annotated for false information, correct information, and unrelated. To investigate the replicability and scalability of this annotation, we applied multiple Machine Learning Algorithms with different sets of features. The highest accuracy result was ${84}\%$ achieved by the LR classifier with count vector set of features and SVC with TF-IDF.
244
+
245
+ Finally, we also used our previous model to predict the source types of the sampled tweets. Around ${60}\%$ of the rumour related tweets are classified as written by health professional and academics which shows the urgent need to respond to such fake news. The dataset, including tweet IDs, manually assigned labels for the sampled tweets, and other resources used in this paper are made freely available for academic research purposes ${}^{11}$ .
246
+
247
+ There are clearly many potential future directions related to analysing social media data on the topics of pandemics. Since false information has the potential to play a dangerous role in topics related to health, there is a need to enhance and automate the automatic detection process supporting different languages beyond just English. Future potential directions include monitoring the spread of the disease by finding the infected individuals, defining the infected locations, or observing people that do not apply self isolation rules. Moreover, the analysis could proceed in an exploratory and thematic way such as discovering further topics discussed during the epidemic, as well as assisting governments and public health organisations in measuring people's concerns resulting from the disease.
papers/ACL/ACL 2022/ACL 2022 Workshop/ACL 2022 Workshop CMCL/B6PlLQtl8Zq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,318 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Less Descriptive yet Discriminative: Quantifying the Properties of Multimodal Referring Utterances via CLIP
2
+
3
+ Anonymous ACL submission
4
+
5
+ ## Abstract
6
+
7
+ In this work, we use a transformer-based pre-trained multimodal model, CLIP, to shed light on the mechanisms employed by human speakers when referring to visual entities. In particular, we use CLIP to quantify the degree of descriptiveness (how well an utterance describes an image in isolation) and discriminativeness (to what extent an utterance is effective in picking out a single image among similar images) of human referring utterances within multimodal dialogues. Overall, our results show that utterances become less descriptive over time while their discriminativeness remains unchanged. Through analysis, we propose that this trend could be due to participants relying on the previous mentions in the dialogue history, as well as being able to distill the most discriminative information from the visual context. In general, our study opens up the possibility of using this and similar models to quantify patterns in human data and shed light on the underlying cognitive mechanisms.
8
+
9
+
10
+
11
+ ## 1 Introduction
12
+
13
+ During a conversation, speakers can refer to an entity (e.g., the girl in Fig. 1) multiple times within different contexts. This has been shown to lead to subsequent referring expressions that are usually shorter and that show lexical entrainment with previous mentions (Krauss and Weinheimer, 1967; Brennan and Clark, 1996). This trend has been confirmed in recent vision-and-language (V&L) datasets (Shore and Skantze, 2018; Haber et al., 2019; Hawkins et al., 2020): referring utterances become more compact (i.e., less descriptive), and yet participants are able to identify the intended referent (i.e., they remain pragmatically informative).
14
+
15
+ Several approaches (Mao et al., 2016; Cohn-Gordon et al., 2018; Schüz et al., 2021; Luo et al., 2018, i.a.) have tackled the generation of image captions from the perspective of pragmatic informativity; Coppock et al. (2020) have compared the informativity of image captions and of referring expressions; and Haber et al. (2019) and Hawkins et al. (2020) have explored how dialogue history contributes to discriminativeness. However, no work to date has investigated how these two dimensions, descriptiveness and discriminativeness (or pragmatic informativity), interact in referring expressions uttered in dialogue.
+
+ ![01963d9a-c0de-76f0-88c6-5b4febc5b65e_0_843_586_618_246_0.jpg](images/01963d9a-c0de-76f0-88c6-5b4febc5b65e_0_843_586_618_246_0.jpg)
+
+ Figure 1: Referring utterance chain from PhotoBook (Haber et al., 2019). The chain has 4 ranks (4 references to the target image, in red outline). For simplicity, only the 5 distractor images from rank 1 are shown.
32
+
33
+ In this work, we use a transformer-based pre-trained multimodal model to study the interplay between descriptiveness and discriminativeness in human referring utterances produced in dialogue. Due to their unprecedented success in numerous tasks, pretrained V&L models—such as LXMERT (Tan and Bansal, 2019), VisualBERT (Li et al., 2019), UNITER (Chen et al., 2020) and ALIGN (Jia et al., 2021)—have recently attracted a lot of interest aimed at understanding the properties and potential of their learned representations as well as the effect their architectures and training setups have (Bugliarello et al., 2021). These include probing such models in a zero-shot manner, i.e., without any specific fine-tuning (Hendricks and Nematzadeh, 2021; Parcalabescu et al., 2021); quantifying the roles of each modality (Frank et al., 2021); and inspecting attention patterns (Cao et al., 2020).
42
+
43
+ We focus on one model: Contrastive Language-Image Pre-training (CLIP, Radford et al., 2021), which learns via contrasting images and texts that can be aligned or unaligned with each other. This contrastive objective makes CLIP particularly suitable for modelling referential tasks that inherently include such comparisons. Here, we use CLIP to gain insight into the strategies used by humans in sequential reference settings, finding that although the descriptiveness of referring utterances decreases significantly, the utterances remain discriminative over the course of multimodal dialogue.
44
+
45
+ ## 2 Data
46
+
47
+ We focus on PhotoBook (PB; Haber et al., 2019), a dataset of multimodal task-oriented dialogues where players aim to pick the images they have in common without seeing each other's visual contexts (which consist of 6 images coming from the same domain). The game is played over several rounds in which the previously seen images reappear in different visual contexts, giving the players an opportunity to refer to such images again. As a result, chains of utterances referring to a single image are formed over the rounds as the players build common ground. See Fig. 1 for a simplified representation of a chain. ${}^{1}$ In total, PB consists of 2,500 games, 165K utterances, and 360 unique images from COCO (Lin et al., 2014).
48
+
49
+ All our experiments are conducted on a subset of 50 PB games with manually annotated referring utterances, which contains 364 referential chains about 205 unique target images. We refer to this subset as PB-GOLD. ${}^{2}$ Although a dataset of automatically-extracted chains using all PB data is also available (Takmaz et al., 2020), as reported by the authors these chains may contain errors. We therefore opt for using the smaller but higher-quality PB-GOLD subset since we are interested in analysing human strategies. Given that we use a pretrained model without fine-tuning, experimenting with large amounts of data is not a requisite.
50
+
51
+ PB-GOLD's chains contain 1,078 utterances, i.e., 2.96 utterances per chain on average (min 1, max 4). We henceforth use the term 'rank' to refer to the position of an utterance in a chain. The average token length of utterances is 13.34, 11.03, 9.23, and 7.82, respectively, for ranks 1, 2, 3, and 4. ${}^{3}$ This decreasing trend, which is statistically significant at $p < 0.01$ with respect to independent samples t-tests between the ranks, is in line with the trend observed in the whole dataset (Haber et al., 2019). PB-GOLD's vocabulary consists of 926 tokens.
56
+
57
+ ## 3 Model
58
+
59
+ We use CLIP (Radford et al., 2021), a model pre-trained on a dataset of 400 million image-text pairs collected from the internet using a contrastive objective to learn strong transferable vision representations with natural language supervision. ${}^{4}$ In particular, we employ the ViT-B/32 version of CLIP, which utilizes separate transformers to encode vision and language (Vaswani et al., 2017; Dosovitskiy et al., 2021; Radford et al., 2019, 2021).
64
+
65
+ As the model learns to align images and texts, this enables zero-shot transfer to various V&L tasks such as image-text retrieval and image classification and even certain non-traditional tasks in a simple and efficient manner (Radford et al., 2019; Agarwal et al., 2021; Shen et al., 2021; Cafagna et al., 2021; Hessel et al., 2021). This makes it an intriguing tool to investigate the properties of visually grounded referring utterances. In this work, we freeze CLIP's weights and do not fine-tune the model or perform prompt engineering, since we aim to exploit the model's pretrained knowledge for the analysis of human referring strategies.
68
+
69
+ ## 4 Descriptiveness
70
+
71
+ In our first experiment, we investigate the degree of descriptiveness exhibited by referring utterances in the PhotoBook game, i.e., the amount of information they provide about the image out of context. We consider each target image and corresponding referential utterance at a given rank in isolation, i.e., without taking into account the other competing images nor the dialogue history. We quantify descriptiveness as the alignment between an utterance and its image referent using CLIPScore (Hessel et al., 2021), assuming that a more descriptive utterance will attain a higher score. For all the target image-utterance pairs in the chains of PB-GOLD, we use CLIP to obtain a vector $t$ representing the utterance and a vector $v$ representing the image. CLIPScore is then computed as the scaled cosine similarity between these two vectors, with range $[0, 2.5]$: ${}^{5}$ $\mathrm{CLIPScore}(t, v) = 2.5 \cdot \max(\cos(t, v), 0)$.
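+
+ The following is a minimal sketch of this computation, assuming the openai/CLIP package referenced in footnote 4; the image path and the utterance string are illustrative placeholders.
+
+ ```python
+ import torch
+ import clip
+ from PIL import Image
+
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ model, preprocess = clip.load("ViT-B/32", device=device)
+
+ image = preprocess(Image.open("target.jpg")).unsqueeze(0).to(device)
+ text = clip.tokenize(["a girl in a blue shirt walking on the beach"]).to(device)
+
+ with torch.no_grad():
+     v = model.encode_image(image).float()   # image vector v
+     t = model.encode_text(text).float()     # utterance vector t
+ v = v / v.norm(dim=-1, keepdim=True)
+ t = t / t.norm(dim=-1, keepdim=True)
+ cos_sim = (v * t).sum(dim=-1)               # cosine similarity
+ clipscore = 2.5 * torch.clamp(cos_sim, min=0)
+ print(clipscore.item())
+ ```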
72
+
73
+ ---
74
+
75
+ ${}^{1}$ Only 1 player’s perspective for 1 context is represented.
76
+
77
+ ${}^{2}$ We use the gold set of the utterance-based chains v2 available at https://dmg-photobook.github.io/.
78
+
79
+ ${}^{3}$ We use TweetTokenizer: https://www.nltk.org/ api/nltk.tokenize.html
80
+
81
+ ${}^{4}$ https://github.com/openai/CLIP
82
+
83
+ ${}^{5}$ The scaled factor was introduced by Hessel et al. (2021) to account for the relatively low observed cosine values.
84
+
85
+ ---
86
+
87
+ ![01963d9a-c0de-76f0-88c6-5b4febc5b65e_2_248_187_501_355_0.jpg](images/01963d9a-c0de-76f0-88c6-5b4febc5b65e_2_248_187_501_355_0.jpg)
88
+
89
+ Figure 2: Descriptiveness (CLIPScore) for PB-GOLD, COCO and IDS. We only plot the first 4 'ranks' (x-axis) for COCO and IDS for comparability with PB-GOLD. The error bars illustrate the standard error.
90
+
91
+ We compute the average CLIPScore per rank over the whole PB-GOLD dataset.
+
+ Results. We find that earlier utterances are better aligned with the target image features and that there is a monotonically decreasing trend over the 4 ranks (Fig. 2, blue bars). The differences between all pairs of ranks are statistically significant (according to independent samples t-tests, $p < 0.01$), except for the comparison between the last 2 ranks ($p > 0.05$). Since earlier referring utterances tend to be longer (see Sec. 2), we check to what extent length may be a confounding factor. We find that there is only a weak correlation between token length and CLIPScore (Spearman's $\rho = 0.29$, $p < 0.001$).
94
+
95
+ We compare these results on PhotoBook with text-to-image alignment computed with the same method on two other datasets: (1) COCO (Lin et al., 2014), ${}^{6}$ which includes 5 captions per image provided independently by different annotators; here we do not expect to find significant differences in the level of descriptiveness across the captions, and (2) Image Description Sequences (IDS, Ilinykh et al., 2019), ${}^{7}$ where one participant describes an image incrementally, by progressively adding sentences with further details; here we do expect a similar pattern to PhotoBook, albeit for different reasons (because participants add less salient information; Ilinykh et al., 2019). See Appendix A.
96
+
97
+ Fig. 2 shows that these expectations are confirmed. According to CLIP, COCO captions (green bars) are more descriptive than IDS descriptions and PB referring utterances, and are equally aligned with the image across 'ranks' (the order is arbitrary in this case). In contrast, IDS incremental descriptions (yellow bars) are intrinsically ordered and show a significant decreasing trend similar to PB.
+
+ ![01963d9a-c0de-76f0-88c6-5b4febc5b65e_2_913_185_475_355_0.jpg](images/01963d9a-c0de-76f0-88c6-5b4febc5b65e_2_913_185_475_355_0.jpg)
+
+ Figure 3: Discriminativeness (reference resolution accuracy, ACC) per rank with PB-GOLD utterances (Utterance) and utterances with history (w/Prev. Utt), along with their respective entropies (ENT).
106
+
107
+ ## 5 Discriminativeness
108
+
109
+ In order for a listener to select the target image among distractor images, a referring utterance should be discriminative in its visual context. Our results in the previous section show that descriptiveness decreases over time; what is the trend regarding discriminativeness? To address this question, in our second experiment we use CLIP from the perspective of reference resolution.
114
+
115
+ We focus on local text-to-image alignment, initially ignoring the previous dialogue history. To this end, we feed CLIP a single referring utterance together with the visual context of the speaker who produced that utterance. CLIP yields softmax probabilities for each image contrasted with the single text. As a metric, we use accuracy: 1 if the target image gets the highest probability; 0 otherwise.
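+
+ A minimal sketch of this reference resolution setup, again assuming the openai/CLIP package; file names are placeholders and the target is assumed to be at index 0.
+
+ ```python
+ import torch
+ import clip
+ from PIL import Image
+
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ model, preprocess = clip.load("ViT-B/32", device=device)
+
+ # The speaker's visual context: 6 candidate images, one of which is the target.
+ images = torch.stack([preprocess(Image.open(f"img_{i}.jpg")) for i in range(6)]).to(device)
+ text = clip.tokenize(["girl in blue walking on the beach"]).to(device)
+
+ with torch.no_grad():
+     logits_per_image, logits_per_text = model(images, text)
+     probs = logits_per_text.softmax(dim=-1).squeeze(0)   # distribution over the 6 images
+
+ accuracy = int(probs.argmax().item() == 0)                # 1 if the target is ranked first
+ entropy = -(probs * probs.log()).sum().item()             # uncertainty of the resolution
+ ```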
118
+
119
+ Results. The overall accuracy is 80.15%, which is well above the random baseline of 16.67%. In Fig. 3, we break down the results per rank (blue bars). A $4 \times 2$ chi-square test (4 ranks vs. correct/incorrect) did not yield significant differences in accuracy between the ranks, $p > 0.05$. Thus, although descriptiveness decreases over time, discriminativeness is not significantly affected. An analysis of the entropy of the softmax distributions reveals that entropy increases monotonically over the ranks (this difference is statistically significant according to an independent samples t-test between ranks 1 and 4; $H_1 = 0.62$, $H_4 = 0.79$, $p < 0.01$). That is, the model is more uncertain when trying to resolve less descriptive utterances. There is indeed a negative correlation between entropy and CLIPScore computed between the target image and the corresponding utterance (Spearman's $\rho = -0.5$, $p < 0.001$).
120
+
121
+ ---
122
+
123
+ ${}^{6}$ We use the set of COCO images in PB-GOLD (N=205).
124
+
125
+ ${}^{7}$ The images are from the ADE20K corpus (Zhou et al., 2017).
126
+
127
+ ---
128
+
129
+ ## 6 Analysis
130
+
131
+ How do participants manage to maintain discriminativeness while decreasing descriptiveness? Do they rely on the previous mentions present in the dialogue history? Do they refine their referring strategy by distilling the most discriminative information in a given context?
132
+
133
+ Dialogue history The results of our experiment in the previous section show that the utterances in isolation are effective at referring; yet, uncertainty increases when the less descriptive utterances are considered out of context. To reduce such uncertainty, participants may rely on the dialogue history (Brennan and Clark, 1996; Shore and Skantze, 2018; Takmaz et al., 2020). We consider a scenario where participants keep in memory the previous mention when processing the current referring utterance. We model this scenario by prepending the previous referring utterance in the chain to the current utterance and feeding this into the reference resolution model described in Section 5. As shown in Fig. 3, the resulting discriminativeness is similar to the one obtained earlier (the differences are not significant; chi-square test, $p > 0.05$) and, as before, remains stable across ranks (chi-square test, $p > 0.05$). However, taking into account the previous mentions leads to a significant reduction of the entropy in general: e.g., at the last rank $H_4 = 0.79$ vs. $H_4' = 0.62$ (t-test, $p < 0.05$). This suggests that relying on the dialogue history allows speakers to use less descriptive utterances by reducing discriminative uncertainty.
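+
+ In terms of the resolution sketch above, this history condition only changes the text that is encoded; the utterance strings below are placeholders.
+
+ ```python
+ import clip
+
+ prev_utt = "the girl in blue walking on the beach"   # previous mention in the chain
+ curr_utt = "girl in blue"                            # current referring utterance
+ text = clip.tokenize([prev_utt + " " + curr_utt])    # prepend history before encoding
+ ```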
134
+
135
+ Most discriminative information Besides exploiting the dialogue history, participants may refine their referring strategy by distilling the most discriminative information in a given context. To gain insight into this hypothesis, we explore what is discriminative in the images: we compute the discriminative features $v_d$ of a target image by taking the average of the visual representations of distractor images to obtain the mean context vector and then subtracting this vector from the visual representation of the target image. We encode all 926 words in the vocabulary of PB-GOLD using CLIP, and retrieve the top-10 words whose representations are the closest to $v_d$ in terms of cosine similarity (amounting to 1% of the vocabulary). We take these words to convey the most discriminative properties of an image in context. We analyse whether at least one of these retrieved words is mentioned exactly in the referring utterance, finding that this is indeed the case for a remarkable 60% of utterances. ${}^{8}$ As an illustration, for the example in Fig. 1, the words walking (mentioned at rank 1) and blue (used at ranks 1, 2, 3, 4) are among the top-10 most discriminative words, while the word water (mentioned at ranks 1, 2, 3, 4) is close to the word beach, which is also retrieved as one of the most discriminative words in this case.
+
+ The most discriminative words are likely to be reused in later utterances, even though the visual context changes from rank to rank. For instance, the most discriminative words mentioned at rank 1 constitute 60% of the discriminative words at rank 2, indicating that entrainment is likely for words that have high utility across contexts. We also find a significant increase in the proportion of discriminative content words to all the content words per utterance (only between ranks 1 and 4, 14% vs. 19%, $p < 0.01$).
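+
+ A sketch of this retrieval procedure under the same assumptions as the earlier snippets; image paths and the vocabulary below are placeholders standing in for the actual context images and the 926-word PB-GOLD vocabulary.
+
+ ```python
+ import torch
+ import clip
+ from PIL import Image
+
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ model, preprocess = clip.load("ViT-B/32", device=device)
+
+ def encode_images(paths):
+     batch = torch.stack([preprocess(Image.open(p)) for p in paths]).to(device)
+     with torch.no_grad():
+         return model.encode_image(batch).float()
+
+ target = encode_images(["target.jpg"])                                   # (1, d)
+ distractors = encode_images([f"distractor_{i}.jpg" for i in range(5)])   # (5, d)
+
+ # Discriminative features: target representation minus the mean context vector.
+ v_d = target - distractors.mean(dim=0, keepdim=True)
+ v_d = v_d / v_d.norm(dim=-1, keepdim=True)
+
+ vocab = ["girl", "blue", "walking", "beach", "water", "dog"]
+ with torch.no_grad():
+     word_feats = model.encode_text(clip.tokenize(vocab).to(device)).float()
+ word_feats = word_feats / word_feats.norm(dim=-1, keepdim=True)
+
+ sims = (word_feats @ v_d.T).squeeze(-1)                                  # cosine similarity to v_d
+ top_words = [vocab[i] for i in sims.topk(min(10, len(vocab))).indices.tolist()]
+ ```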
150
+
151
+ ## 7 Conclusion
152
+
153
+ We used a pre-trained multimodal model claimed to be a reference-free caption evaluator, CLIP (Radford et al., 2021), to quantify descriptiveness and discriminativeness of human referring utterances within multimodal dialogues. We showed that (i) later utterances in a dialogue become less descriptive in isolation while (ii) remaining similarly discriminative against a visual context.
+
+ We found that the addition of dialogue history helps decrease and control the entropy of resolution accuracy even when the speakers produce less descriptive referring utterances. In addition, we found that the proportion of discriminative words increases over the ranks. These suggest that participants playing the PhotoBook game (Haber et al., 2019) show a tendency towards distilling discriminative words and utilize the dialogue history to keep task performance stable over the dialogue.
+
+ Interestingly, future work could explore novel ways of incorporating the CLIP model or its representations into a reference resolution or generation model embedding dialogue history and visual context to obtain human-like outcomes.
164
+
165
+ ---
166
+
167
+ ${}^{8}$ Randomly sampling 10 words from the vocabulary for each utterance yields 11% (average of 5 random runs).
168
+
169
+ ---
170
+
171
+ ## References
172
+
173
+ Sandhini Agarwal, Gretchen Krueger, Jack Clark, Alec Radford, Jong Wook Kim, and Miles Brundage. 2021. Evaluating CLIP: Towards characterization of broader capabilities and downstream implications.
+
+ Susan E. Brennan and Herbert H. Clark. 1996. Conceptual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22:1482-1493.
+
+ Emanuele Bugliarello, Ryan Cotterell, Naoaki Okazaki, and Desmond Elliott. 2021. Multimodal pretraining unmasked: A meta-analysis and a unified framework of vision-and-language BERTs. Transactions of the Association for Computational Linguistics.
+
+ Michele Cafagna, Kees van Deemter, and Albert Gatt. 2021. What vision-language models 'see' when they see scenes. ArXiv, abs/2109.07301.
180
+
181
+ Jize Cao, Zhe Gan, Yu Cheng, Licheng Yu, Yen-Chun Chen, and Jingjing Liu. 2020. Behind the scene: Revealing the secrets of pre-trained vision-and-language models. ECCV Spotlight.
182
+
183
+ Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. UNITER: Universal image-text representation learning. In European Conference on Computer Vision, pages 104-120. Springer.
184
+
185
+ Reuben Cohn-Gordon, Noah Goodman, and Christopher Potts. 2018. Pragmatically informative image captioning with character-level inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 439-443, New Orleans, Louisiana. Association for Computational Linguistics.
+
+ Elizabeth Coppock, Danielle Dionne, Nathanial Graham, Elias Ganem, Shijie Zhao, Shawn Lin, Wenxing Liu, and Derry Wijaya. 2020. Informativity in image captions vs. referring expressions. In Proceedings of the Probability and Meaning Conference (PaM 2020), pages 104-108, Gothenburg. Association for Computational Linguistics.
198
+
199
+ Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale.
200
+
201
+ Stella Frank, Emanuele Bugliarello, and Desmond Elliott. 2021. Vision-and-language or vision-for-language? On cross-modal influence in multimodal transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021). Association for Computational Linguistics.
202
+
203
+ Janosch Haber, Tim Baumgärtner, Ece Takmaz, Lieke Gelderloos, Elia Bruni, and Raquel Fernández. 2019. The PhotoBook dataset: Building common ground through visually-grounded dialogue. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1895-1910, Florence, Italy. Association for Computational Linguistics.
204
+
205
+ Robert Hawkins, Minae Kwon, Dorsa Sadigh, and Noah Goodman. 2020. Continual adaptation for efficient machine communication. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 408-419, Online. Association for Computational Linguistics.
206
+
207
+ Lisa Anne Hendricks and Aida Nematzadeh. 2021. Probing image-language transformers for verb understanding. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3635-3644, Online. Association for Computational Linguistics.
208
+
209
+ Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. 2021. CLIPScore: A reference-free evaluation metric for image captioning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7514-7528, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
210
+
211
+ Nikolai Ilinykh, Sina Zarrieß, and David Schlangen. 2019. Tell me more: A dataset of visual scene description sequences. In Proceedings of the 12th International Conference on Natural Language Generation, pages 152-157, Tokyo, Japan. Association for Computational Linguistics.
212
+
213
+ Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In ICML.
214
+
215
+ Robert M. Krauss and Sidney Weinheimer. 1967. Effect of referent similarity and communication mode on verbal encoding. Journal of Verbal Learning & Verbal Behavior, 6(3):359-363.
216
+
217
+ Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. VisualBERT: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557.
218
+
219
+ Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Computer Vision - ECCV 2014, pages 740-755, Cham. Springer International Publishing.
220
+
221
+ Ruotian Luo, Brian L. Price, Scott D. Cohen, and Gregory Shakhnarovich. 2018. Discriminability objective for training descriptive captions. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6964-6974.
232
+
233
+ Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 11-20.
240
+
241
+ Letitia Parcalabescu, Albert Gatt, Anette Frank, and Iacer Calixto. 2021. Seeing Past Words: Testing the Cross-Modal Capabilities of Pretrained V&L Models. In Proceedings of the First Workshop on Multimodal Semantic Representations (MMSR), Groningen. To appear.
242
+
243
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In ICML.
+
+ Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI.
248
+
249
+ Simeon Schüz, Ting Han, and Sina Zarrieß. 2021. Diversity as a by-product: Goal-oriented language generation leads to linguistic variation. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 411-422, Singapore and Online. Association for Computational Linguistics.
+
+ Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, and Kurt Keutzer. 2021. How Much Can CLIP Benefit Vision-and-Language Tasks? arXiv, abs/2107.06383.
256
+
257
+ Todd Shore and Gabriel Skantze. 2018. Using lexical alignment and referring ability to address data sparsity in situated dialog reference resolution. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2288-2297, Brussels, Belgium. Association for Computational Linguistics.
+
+ Ece Takmaz, Mario Giulianelli, Sandro Pezzelle, Arabella Sinclair, and Raquel Fernández. 2020. Refer, Reuse, Reduce: Generating Subsequent References in Visual and Conversational Contexts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4350-4368, Online. Association for Computational Linguistics.
270
+
271
+ Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100-5111, Hong Kong, China. Association for Computational Linguistics.
+
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
276
+
277
+ Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. 2017. Scene parsing through ADE20K dataset. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5122-5130.
286
+
287
+ ## Appendix
288
+
289
+ ## A Data Examples
+
+ To illustrate the differences between the datasets used in our experiment in Section 4, we provide an additional example of a reference chain in PhotoBook (Haber et al., 2019) in Figure 4, an example of a set of image captions from COCO (Lin et al., 2014) in Figure 5, and an example of a sequential description from Image Description Sequences (Ilinykh et al., 2019) in Figure 6.
306
+
307
+ ![01963d9a-c0de-76f0-88c6-5b4febc5b65e_5_844_1695_616_258_0.jpg](images/01963d9a-c0de-76f0-88c6-5b4febc5b65e_5_844_1695_616_258_0.jpg)
308
+
309
+ Figure 4: Referring utterance chain from Photo-Book (Haber et al., 2019). The chain has 4 ranks (4 references to the target image, in red outline). For simplicity, only the 5 distractor images from rank 1 are shown.
310
+
311
+ ![01963d9a-c0de-76f0-88c6-5b4febc5b65e_6_191_357_613_511_0.jpg](images/01963d9a-c0de-76f0-88c6-5b4febc5b65e_6_191_357_613_511_0.jpg)
312
+
313
+ Figure 5: Set of captions from COCO (Lin et al., 2014), consisting of 5 captions provided independently by different annotators. The order is arbitrary.
314
+
315
+ ![01963d9a-c0de-76f0-88c6-5b4febc5b65e_6_192_1333_616_490_0.jpg](images/01963d9a-c0de-76f0-88c6-5b4febc5b65e_6_192_1333_616_490_0.jpg)
316
+
317
+ Figure 6: Sequential description from Image Description Sequences (Ilinykh et al., 2019). The description includes 5 installments, which incrementally add more information about the image.
318
+
papers/ACL/ACL 2022/ACL 2022 Workshop/ACL 2022 Workshop CMCL/B6PlLQtl8Zq/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,157 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § LESS DESCRIPTIVE YET DISCRIMINATIVE: QUANTIFYING THE PROPERTIES OF MULTIMODAL REFERRING UTTERANCES VIA CLIP
2
+
3
+ Anonymous ACL submission
4
+
5
+ § ABSTRACT
6
+
7
+ 001 In this work, we use a transformer-based pre-trained multimodal model, CLIP, to shed light on the mechanisms employed by human speak-
8
+
9
+ 004 ers when referring to visual entities. In particular, we use CLIP to quantify the degree of descriptiveness (how well an utterance describes an image in isolation) and discriminativeness (to what extent an utterance is effective in picking out a single image among similar images) of human referring utterances within multimodal dialogues. Overall, our results show that utterances become less descriptive over time while their discriminativeness remains unchanged. Through analysis, we propose that this trend could be due to participants relying on the previous mentions in the dialogue history, as well as being able to distill the most discriminative information from the visual context. In general, our study opens up the possibility of using this and similar models to quantify patterns in human data and shed light on the underlying cognitive mechanisms.
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ During a conversation, speakers can refer to an entity (e.g., the girl in Fig. 1) multiple times within different contexts. This has been shown to lead to subsequent referring expressions that are usually shorter and that show lexical entrainment with previous mentions (Krauss and Weinheimer, 1967; Brennan and Clark, 1996). This trend has been confirmed in recent vision-and-language (V&L) datasets (Shore and Skantze, 2018; Haber et al., 2019; Hawkins et al., 2020): referring utterances become more compact (i.e., less descriptive), and yet participants are able to identify the intended referent (i.e., they remain pragmatically informative).
14
+
15
+ Several approaches (Mao et al., 2016; Cohn-Gordon et al., 2018; Schüz et al., 2021; Luo et al., 2018, i.a.) have tackled the generation of image captions from the perspective of pragmatic infor-mativity; Coppock et al. (2020) have compared the
16
+
17
+ < g r a p h i c s >
18
+
19
+ Figure 1: Referring utterance chain from PhotoBook (Haber et al., 2019). The chain has 4 ranks ( 4 references to the target image, in red outline). For simplicity, only the 5 distractor images from rank 1 are shown.
20
+
21
+ informativity of image captions and of referring 042
22
+
23
+ expressions; and Haber et al. (2019); Hawkins et al. 043
24
+
25
+ (2020) have explored how dialogue history con- 044
26
+
27
+ tributes to discriminativeness. However, no work to 045
28
+
29
+ date has investigated how these two dimensions, de- 046 scriptiveness and discriminativeness or pragmatic
30
+
31
+ informativity, interact in referring expressions ut- 048 tered in dialogue.
32
+
33
+ In this work, we use a transformer-based pre-trained multimodal model to study the interplay between descriptiveness and discriminativeness in hu-
34
+
35
+ man referring utterances produced in dialogue. Due 053 to their unprecedented success in numerous tasks,
36
+
37
+ pretrained V&L models—such as LXMERT (Tan 055 and Bansal, 2019), VisualBERT (Li et al., 2019), UNITER (Chen et al., 2020) and ALIGN (Jia et al., 2021)-have recently attracted a lot of interest aimed at understanding the properties and
38
+
39
+ potential of their learned representations as well 060 as the effect their architectures and training setups have (Bugliarello et al., 2021). These include probing such models in a zero-shot manner, i.e., without any specific fine-tuning (Hendricks and Ne-matzadeh, 2021; Parcalabescu et al., 2021); quantifying the roles of each modality (Frank et al., 2021);
40
+
41
+ and inspecting attention patterns (Cao et al., 2020). 067
42
+
43
+ We focus on one model: Contrastive Language-Image Pre-training (CLIP, Radford et al., 2021), which learns via contrasting images and texts that can be aligned or unaligned with each other. This contrastive objective makes CLIP particularly suitable for modelling referential tasks that inherently include such comparisons. Here, we use CLIP to gain insight into the strategies used by humans in sequential reference settings, finding that although the descriptiveness of referring utterances decreases significantly, the utterances remain discriminative over the course of multimodal dialogue.
44
+
45
+ § 2 DATA
46
+
47
+ We focus on PhotoBook (PB; Haber et al., 2019), a dataset of multimodal task-oriented dialogues where players aim to pick the images they have in common without seeing each other's visual contexts (which consist of 6 images coming from the same domain). The game is played over several rounds in which the previously seen images reappear in different visual contexts, giving the players an opportunity to refer to such images again. As a result, chains of utterances referring to a single image are formed over the rounds as the players build common ground. See Fig. 1 for a simplified representation of a chain. ${}^{1}$ In total, PB consists of 2,500 games, 165K utterances, and 360 unique images from COCO (Lin et al., 2014).
48
+
49
+ All our experiments are conducted on a subset of 50 PB games with manually annotated referring utterances, which contains 364 referential chains about 205 unique target images. We refer to this subset as PB-GOLD. ${}^{2}$ Although a dataset of automatically-extracted chains using all PB data is also available (Takmaz et al., 2020), as reported by the authors these chains may contain errors. We therefore opt for using the smaller but higher-quality PB-GOLD subset since we are interested in analysing human strategies. Given that we use a pretrained model without fine-tuning, experimenting with large amounts of data is not a requisite.
50
+
51
+ PB-GOLD's chains contain 1,078 utterances, i.e., 2.96 utterances per chain on average (min 1, max 4). We henceforth use the term 'rank' to refer to the position of an utterance in a chain. The average token length of utterances is 13.34, 11.03, 9.23, and 7.82, respectively, for ranks 1, 2, 3, and 4. ${}^{3}$ This decreasing trend, which is statistically significant at $p < 0.01$ with respect to independent samples t-tests between the ranks, is in line with the trend observed in the whole dataset (Haber et al., 2019). PB-GOLD's vocabulary consists of 926 tokens.
56
+
57
+ § 3 MODEL
58
+
59
+ 120
60
+
61
+ We use CLIP (Radford et al., 2021), a model pre-trained on a dataset of 400 million image-text pairs
62
+
63
+ collected from the internet using a contrastive ob- 123 jective to learn strong transferable vision representations with natural language supervision. ${}^{4}$ In particular, we employ the ViT-B/32 version of CLIP, which utilizes separate transformers to encode vision and language (Vaswani et al., 2017; Dosovit-skiy et al., 2021; Radford et al., 2019, 2021).
64
+
65
+ As the model learns to align images and texts, 130 this enables zero-shot transfer to various V&L tasks such as image-text retrieval and image classification and even certain non-traditional tasks in a simple and efficient manner (Radford et al., 2019; Agarwal et al., 2021; Shen et al., 2021; Cafagna et al., 2021; Hessel et al., 2021). This makes it an intriguing tool to investigate the properties of visually grounded referring utterances. In this work, we freeze CLIP's weights and do not fine-tune the model or perform prompt engineering, since we aim to exploit the model's pretrained knowledge
66
+
67
+ for the analysis of human referring strategies. 142
68
+
69
+ § 4 DESCRIPTIVENESS
70
+
71
+ In our first experiment, we investigate the degree of descriptiveness exhibited by referring utterances in the PhotoBook game, i.e., the amount of information they provide about the image out of context. We consider each target image and corresponding referential utterance at a given rank in isolation, i.e., without taking into account the other competing images nor the dialogue history. We quantify descriptiveness as the alignment between an utterance and its image referent using CLIPScore (Hessel et al., 2021), assuming that a more descriptive utterance will attain a higher score. For all the target image-utterance pairs in the chains of PB-GOLD, we use CLIP to obtain a vector $t$ representing the utterance and a vector $v$ representing the image. CLIPScore is then computed as the scaled cosine similarity between these two vectors, with range $[0, 2.5]$: ${}^{5}$ $\mathrm{CLIPScore}(t, v) = 2.5 \cdot \max(\cos(t, v), 0)$.
72
+
73
+ ${}^{1}$ Only 1 player’s perspective for 1 context is represented.
74
+
75
+ ${}^{2}$ We use the gold set of the utterance-based chains v2 available at https://dmg-photobook.github.io/.
76
+
77
+ ${}^{3}$ We use TweetTokenizer: https://www.nltk.org/ api/nltk.tokenize.html
78
+
79
+ ${}^{4}$ https://github.com/openai/CLIP
80
+
81
+ ${}^{5}$ The scaled factor was introduced by Hessel et al. (2021) to account for the relatively low observed cosine values.
82
+
83
+ < g r a p h i c s >
84
+
85
+ Figure 2: Descriptiveness (CLIPScore) for PB-GOLD, COCO and IDS. We only plot the first 4 'ranks' (x-axis) for COCO and IDS for comparability with PB-GOLD. The error bars illustrate the standard error.
86
+
87
+ We compute the average CLIPScore per rank over the whole PB-GOLD dataset.
+
+ Results. We find that earlier utterances are better aligned with the target image features and that there is a monotonically decreasing trend over the 4 ranks (Fig. 2, blue bars). The differences between all pairs of ranks are statistically significant (according to independent samples t-tests, $p < 0.01$), except for the comparison between the last 2 ranks ($p > 0.05$). Since earlier referring utterances tend to be longer (see Sec. 2), we check to what extent length may be a confounding factor. We find that there is only a weak correlation between token length and CLIPScore (Spearman's $\rho = 0.29$, $p < 0.001$).
90
+
91
+ We compare these results on PhotoBook with text-to-image alignment computed with the same method on two other datasets: (1) COCO (Lin et al., 2014), ${}^{6}$ which includes 5 captions per image provided independently by different annotators; here we do not expect to find significant differences in the level of descriptiveness across the captions, and (2) Image Description Sequences (IDS, Ilinykh et al., 2019), ${}^{7}$ where one participant describes an image incrementally, by progressively adding sentences with further details; here we do expect a similar pattern to PhotoBook, albeit for different reasons (because participants add less salient information; Ilinykh et al., 2019). See Appendix A.
92
+
93
+ Fig. 2 shows that these expectations are confirmed. According to CLIP, COCO captions (green bars) are more descriptive than IDS descriptions and PB referring utterances, and are equally aligned with the image across 'ranks' (the order is arbitrary in this case). In contrast, IDS incremental descrip-
94
+
95
+ < g r a p h i c s >
96
+
97
+ Figure 3: Discriminativeness (reference resolution accuracy, ACC) per rank with PB-GOLD utterances (Utterance) and utterances with history (w/Prev. Utt), along with their respective entropies (ENT).
98
+
99
+ tions (yellow bars) are intrinsically ordered and 197
100
+
101
+ show a significant decreasing trend similar to PB. 198
102
+
103
+ § 5 DISCRIMINATIVENESS
104
+
105
+ 199
106
+
107
+ In order for a listener to select the target image 200 among distractor images, a referring utterance should be discriminative in its visual context. Our results in the previous section show that descriptiveness decreases over time-what is the trend regarding discriminativeness? To address this question, in our second experiment we use CLIP from
108
+
109
+ the perspective of reference resolution. 207
110
+
111
+ We focus on local text-to-image alignment, initially ignoring the previous dialogue history. To this end, we feed CLIP a single referring utterance together with the visual context of the speaker who
112
+
113
+ produced that utterance. CLIP yields softmax prob- 212 abilities for each image contrasted with the single text. As a metric, we use accuracy: 1 if the target image gets the highest probability; 0 otherwise.
114
+
115
+ Results. The overall accuracy is 80.15%, which is well above the random baseline of 16.67%. In Fig. 3, we break down the results per rank (blue bars). A $4 \times 2$ chi-square test (4 ranks vs. correct/incorrect) did not yield significant differences in accuracy between the ranks, $p > 0.05$. Thus, although descriptiveness decreases over time, discriminativeness is not significantly affected. An analysis of the entropy of the softmax distributions reveals that entropy increases monotonically over the ranks (this difference is statistically significant according to an independent samples t-test between ranks 1 and 4; $H_1 = 0.62$, $H_4 = 0.79$, $p < 0.01$). That is, the model is more uncertain when trying to resolve less descriptive utterances. There is indeed a negative correlation between entropy and CLIPScore computed between the target image and the corresponding utterance (Spearman's $\rho = -0.5$, $p < 0.001$).
116
+
117
+ ${}^{6}$ We use the set of COCO images in PB-GOLD (N=205).
118
+
119
+ ${}^{7}$ The images are from ADE20k corpus (Zhou et al.,2017)
120
+
121
+ § 6 ANALYSIS
122
+
123
+ How do participants manage to maintain discriminativeness while decreasing descriptiveness? Do they rely on the previous mentions present in the dialogue history? Do they refine their referring strategy by distilling the most discriminative information in a given context?
124
+
125
+ Dialogue history The results of our experiment in the previous section show that the utterances in isolation are effective at referring; yet, uncertainty increases when the less descriptive utterances are considered out of context. To reduce such uncertainty, participants may rely on the dialogue history (Brennan and Clark, 1996; Shore and Skantze, 2018; Takmaz et al., 2020). We consider a scenario where participants keep in memory the previous mention when processing the current referring utterance. We model this scenario by prepending the previous referring utterance in the chain to the current utterance and feeding this into the reference resolution model described in Section 5. As shown in Fig. 3, the resulting discriminativeness is similar to the one obtained earlier (the differences are not significant; chi-square test, $p < {0.05}$ ) and, as before, remains stable across ranks (chi-square test, $p > {0.05})$ . However, taking into account the previous mentions leads to a significant reduction of the entropy in general: e.g., at the last rank ${H}_{4} = {0.79}$ vs. ${H}_{4}^{\prime } = {0.62}$ (t-test, $p < {0.05}$ ). This suggests that relying on the dialogue history allows speakers to use less descriptive utterances by reducing discriminative uncertainty.
126
+
127
+ Most discriminative information Besides exploiting the dialogue history, participants may refine their referring strategy by distilling the most discriminative information in a given context. To gain insight into this hypothesis, we explore what is discriminative in the images: we compute the discriminative features ${v}_{d}$ of a target image by taking the average of the visual representations of distractor images to obtain the mean context vector and then subtracting this vector from the visual representation of the target image. We encode all 926 words in the vocabulary of PB-GOLD using CLIP, and retrieve the top-10 words whose representations are the closest to ${v}_{d}$ in terms of cosine
128
+
129
+ similarity (amounting to 1% of the vocabulary). 281
130
+
131
+ We take these words to convey the most discrimina- 282
132
+
133
+ tive properties of an image in context. We analyse 283 whether at least one of these retrieved words is mentioned exactly in the referring utterance, find-
134
+
135
+ ing that this is indeed the case for a remarkable ${60}\%$ 286 of utterances. ${}^{8}$ As an illustration, for the example
136
+
137
+ in Fig. 1, the words walking (mentioned at rank 1) 288 and blue (used at ranks1,2,3,4) are among the top-10 most discriminative words, while the word water (mentioned at ranks1,2,3,4) is close to the word beach, which is also retrieved as one of most discriminative words in this case.
138
+
139
+ The most discriminative words are likely to be reused in later utterances, even though the visual context changes from rank to rank. For instance, the most discriminative words mentioned at rank 1 constitute ${60}\%$ of the discriminative words at rank 2, indicating that entrainment is likely for
140
+
141
+ words that have high utility across contexts. We 300 also find a significant increase in the proportion of discriminative content words to all the content words per utterance (only between ranks 1 and 4 , 14% vs. 19%, $p < {0.01}$ ).
142
+
143
+ § 7 CONCLUSION
144
+
145
+ We used a pre-trained multimodal model claimed 306 to be a reference-free caption evaluator, CLIP (Rad-
146
+
147
+ ford et al., 2021), to quantify descriptiveness and 308 discriminativeness of human referring utterances within multimodal dialogues. We showed that (i) later utterances in a dialogue become less descriptive in isolation while (ii) remaining similarly dis-
148
+
149
+ criminative against a visual context. 313
150
+
151
+ We found that the addition of dialogue history helps decrease and control the entropy of resolution accuracy even when the speakers produce less descriptive referring utterances. In addition, we found that the proportion of discriminative words increases over the ranks. These suggest that participants playing the PhotoBook game (Haber et al., 2019) show a tendency towards distilling discriminative words and utilize the dialogue history to keep task performance stable over the dialogue.
152
+
153
+ Interestingly, future work could explore novel ways of incorporating the CLIP model or its representations into a reference resolution or generation
154
+
155
+ model embedding dialogue history and visual con- 327 text to obtain human-like outcomes. 329
156
+
157
+ ${}^{8}$ Randomly sampling 10 words from the vocabulary for each utterance yields 11% (average of 5 random runs).
papers/ACL/ACL 2022/ACL 2022 Workshop/ACL 2022 Workshop CMCL/BGMfS7tgIWq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,475 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Seeing the advantage: visually grounding word embeddings to better capture human semantic knowledge
2
+
3
+ Anonymous ACL submission
4
+
5
+ ## Abstract
6
+
7
+ Distributional semantic models capture word-level meaning that is useful in many natural language processing tasks and have even been shown to capture cognitive aspects of word meaning. The majority of these models are purely text based, even though the human sensory experience is much richer. In this paper we create visually grounded word embeddings by combining English text and images and compare them to popular text-based methods, to see if visual information allows our model to better capture cognitive aspects of word meaning. Our analysis shows that visually grounded embedding similarities are more predictive of the human reaction times in a large priming experiment than the purely text-based embeddings. The visually grounded embeddings also correlate well with human word similarity ratings. Importantly, in both experiments we show that the grounded embeddings account for a unique portion of explained variance, even when we include text-based embeddings trained on huge corpora. This shows that visual grounding allows our model to capture information that cannot be extracted using text as the only source of information.
12
+
13
+ ## 1 Introduction
14
+
15
+ Distributional semantic models create word representations that quantify word meaning based on the idea that a word's meaning depends on the contexts in which the word appears. Such representations (also called embeddings) are widely used as the linguistic input for computational linguistic models, with research showing that they can account for response times in lexical decision tasks (Mandera et al., 2017; Rotaru et al., 2018; Petilli et al., 2021), decode brain data (Xu et al., 2016; Abnar et al., 2018), account for brain activity during text comprehension (Frank and Willems, 2017), and correlate with human judgements of word similarity (Kiela et al., 2018; Derby et al., 2018, 2020).
16
+
17
+ While such embeddings have proven useful, they are not cognitively plausible as creating high quality embeddings requires billions of word tokens. For instance, the GloVe embeddings developed by Pennington et al. (2014) are trained on 840 billion words. It would require a human 80 years of constant reading at about 330 words per second to digest that much information. Obviously, humans are able to understand language after much less exposure, and furthermore, their sensory experience is much richer than solely reading texts.
+
+ Embodied cognition theory poses that our conceptual knowledge is based on the entirety of our sensory experience (Barsalou, 2008; Foglia and Wilson, 2013). For instance, reading the word *dog* elicits sensory experiences we have with dogs, such as their sound and how they look. Embodied cognition theory thus assumes that all our sensory experiences contribute to our conceptual knowledge and processing, which should be reflected in human behaviour. Early priming studies have indeed found that visual similarities can elicit priming effects (D'Arcais et al., 1985; Schreuder et al., 1998).
+
+ If visual features are part of our conceptual knowledge, word embeddings incorporating visual features should be able to explain human behavioural data to a degree unattainable by purely text-based methods (that is, if we assume visual sensory experiences can never be fully captured by textual descriptions). That is why recent research has taken an interest in multimodal word embeddings, combining text with a second source of information, resulting in visually grounded embeddings (VGEs) in the case of visual information.
36
+
37
+ ### 1.1 Related work
38
+
39
+ Using image tags as a source of visual context, Bruni et al. (2013) create visual distributional semantic embeddings and use dimensionality reduction to map visual and text-based embeddings to the common VGE space. Derby et al. (2018) combine text-based embeddings with the network activations of an object recognition model and show that these visual features improve the embeddings' performance in downstream tasks. Petilli et al. (2021) use visual embeddings created by an object recognition network, and show that the embedding similarities are predictive of priming effects over and above text-based similarities.
48
+
49
+ The studies described above involve separately trained word and visual embeddings. An end-to-end approach to combining visual and linguistic information is through deep neural network based caption-to-image retrieval (C2I) models (e.g., Karpathy and Fei-Fei 2015; Kamper et al. 2017). While these models are trained to encode images and corresponding written or spoken captions in a common embedding space such that relevant captions can be retrieved given an image and vice versa, the resulting embeddings have been shown to capture sentence-level semantics (Chrupala et al., 2017; Merkx and Frank, 2019; Merkx et al., 2021). Kiela et al. (2018) showed that pretrained embeddings correlated better with human intuition about word meaning after being fine-tuned as learnable parameters in their C2I model.
50
+
51
+ ### 1.2 Current study
52
+
53
+ In this study we investigate whether VGEs created by a C2I model explain human behavioural data. Our research question is: can VGEs capture aspects of word meaning that (current) text-based approaches cannot? To answer this question we investigate novel end-to-end trained VGEs and test them on two types of human behavioural data thought to rely on conceptual/semantic knowledge. Secondly, we take care to separate the contribution of the image modality from that of the linguistic information to see whether visual grounding captures word properties that cannot be learned by purely text-based methods. We do this by comparing our VGEs to three well-known text-based methods.
54
+
55
+ Throughout our experiments we will use two versions of the text-based methods: custom trained on the same data as our VGEs and pretrained on large corpora. From a cognitive modelling perspective, the former of these is more interesting. While the use of large corpora may not be problematic for natural language processing applications where performance comes first, we aim to create cognitively plausible embeddings, that is, from a realistic amount of linguistic exposure. However, the inclusion of pretrained embeddings serves to answer our main research question.
60
+
61
+ #### 1.2.1 Semantic similarity judgements
62
+
63
+ In our first experiment we test whether the VGEs correlate better with a measure of human intuition about word meaning than text-based embeddings. A well-known method to capture human intuition about word meaning is simply by asking subjects how similar two words are in meaning. To evaluate word embeddings, one can then see if embedding similarities for those word pairs correlate with the human judgements (e.g., Bruni et al., 2013; Baroni et al., 2014; Speer and Chin, 2016; Kiela et al., 2018; Derby et al., 2020).
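+
+ As a toy illustration of this evaluation (all vectors, word pairs and ratings below are made-up placeholders), one can correlate model cosine similarities with the human scores:
+
+ ```python
+ import numpy as np
+ from scipy.stats import spearmanr
+
+ emb = {"dog": np.array([0.8, 0.1]), "cat": np.array([0.7, 0.2]),
+        "leash": np.array([0.2, 0.9]), "car": np.array([0.1, 0.1])}   # toy word vectors
+ word_pairs = [("dog", "cat"), ("dog", "leash"), ("cat", "car")]
+ human_ratings = [9.2, 6.1, 1.3]                                      # placeholder similarity ratings
+
+ def cosine(a, b):
+     return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+ model_sims = [cosine(emb[w1], emb[w2]) for w1, w2 in word_pairs]
+ rho, p = spearmanr(model_sims, human_ratings)                        # rank correlation with humans
+ ```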
68
+
69
+ While the study by Kiela et al. (2018) performed a similar investigation on pretrained word embeddings fine-tuned through their C2I model, they did not take into account the fact that text might also contain visual knowledge. It is not unreasonable to assume that some visual knowledge can be gained from a large corpus of sentences solely describing visual scenes. We account for this visual knowledge from text by incorporating word embeddings trained on the image descriptions in order to investigate the contribution of the image modality included in the VGEs.
72
+
73
+ Collecting word similarity ratings typically involves showing participants two words and asking them to rate how similar or related their meanings are, or picking the most related out of several pairs.
74
+
75
+ Semantic relatedness refers to the strength of the association between two word meanings. For instance, 'dog' and 'leash' have a strong relationship but are not similar in meaning. Semantic similarity refers to two words sharing semantic properties, for instance 'dogs' and 'cats', which are both animals that people keep as pets (Hill et al., 2015).
78
+
79
+ #### 1.2.2 Semantic priming
80
+
81
+ In the second experiment, we test whether our VGEs are predictive of semantic priming effects from a large priming experiment (Hutchison et al., 2013). Semantic priming effects occur when activation of a semantically related prime word facilitates the processing of the target word, resulting in shorter reaction times. If all our sensory experiences contribute to word meaning, we would expect visual perceptual properties of the prime-target pair to influence the response times.
88
+
89
+ Petilli et al. (2021) performed a similar experiment using visual embeddings derived from activation features from an object recognition network and text-based word embeddings. Their results show that after accounting for the text-based similarity, the visual embedding similarities contribute to explaining the human reaction times only for lexical decision trials with a short stimulus onset asynchrony (SOA), and not for the naming task or long SOA trials. They attribute this to: 1) the lexical decision task being more sensitive to semantic effects than the naming task (Lucas, 2000), and 2) visual information being activated in early linguistic processing and rapidly decaying (Pecher et al., 1984; Schreuder et al., 1998). We will further test these interactions in our own experiment.
90
+
91
+ ## 2 Methods
+
+ In our experiments, we compare the VGEs from our own model with three well known text-based distributional semantic models: FastText (Bojanowski et al., 2017), Word2Vec (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014). For the purpose of this study, we take two approaches: 1) we train our own text-based distributional models to allow for a fair comparison to the VGEs, and 2) we use the pretrained models to investigate whether our VGEs capture semantic information that even models trained on large text corpora do not.
+
+ ### 2.1 Training data
+
+ MSCOCO is a database intended for training image recognition, segmentation and captioning models (Chen et al., 2015). It has 123,287 images and 605,495 written English captions, that is, five captions paired to each image. Captions were collected by asking annotators to describe what they saw in the picture. Five thousand images (25,000 captions) are reserved as a development set.
+
+ The captions are provided in tokenised format. In order to use them in our models we only de-capitalised all words and removed the punctuation at the end of each sentence. This results in a total of 6,184,656 word tokens and 28,415 unique word types, to which we add start- and end-of-sentence tokens for training our visually grounded model.
+
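+ As a minimal illustration of this normalisation step, the sketch below lower-cases the tokens, strips sentence-final punctuation and adds sentence boundary markers; the marker strings `<s>` and `</s>` are our own choice for the example, not details taken from the paper:
+
+ ```python
+ import string
+
+ def normalise_caption(caption: str) -> list:
+     # Captions come pre-tokenised, so splitting on whitespace is enough here.
+     tokens = caption.lower().split()
+     # Remove punctuation tokens at the end of the sentence.
+     while tokens and all(ch in string.punctuation for ch in tokens[-1]):
+         tokens.pop()
+     # Add start- and end-of-sentence tokens for the visually grounded model.
+     return ['<s>'] + tokens + ['</s>']
+
+ print(normalise_caption("A dog catches a frisbee in the park ."))
+ # ['<s>', 'a', 'dog', 'catches', 'a', 'frisbee', 'in', 'the', 'park', '</s>']
+ ```
+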
+ The images are pre-processed by resizing them such that the shortest side is 256 pixels, while keeping the original aspect ratio. We take ten 224 by 224 crops of the image: one from each corner, one from the middle, and the same five crops for the mirrored image. We use ResNet-152 (He et al., 2016) pretrained on ImageNet to extract visual features from these ten crops and then average the features of the ten crops into a single vector with 2,048 features. These features are extracted by removing ResNet's classification layer and taking the activations of the penultimate layer.
+
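+ A sketch of this feature extraction pipeline using PyTorch/torchvision is shown below; the ImageNet normalisation constants and the `pretrained=True` argument are standard torchvision conventions assumed for the example (newer torchvision versions use the `weights=` argument instead), not settings reported in the paper:
+
+ ```python
+ import torch
+ from PIL import Image
+ from torchvision import models, transforms
+
+ # ResNet-152 pretrained on ImageNet; replacing the classification layer with an
+ # identity makes the forward pass return the 2,048-dimensional penultimate activations.
+ resnet = models.resnet152(pretrained=True)
+ resnet.fc = torch.nn.Identity()
+ resnet.eval()
+
+ normalise = transforms.Normalize(mean=[0.485, 0.456, 0.406],
+                                  std=[0.229, 0.224, 0.225])
+ to_tensor = transforms.ToTensor()
+ preprocess = transforms.Compose([
+     transforms.Resize(256),    # shortest side to 256 pixels, aspect ratio preserved
+     transforms.TenCrop(224),   # four corners + centre, and the same five crops mirrored
+     transforms.Lambda(lambda crops: torch.stack(
+         [normalise(to_tensor(c)) for c in crops])),
+ ])
+
+ def image_features(path: str) -> torch.Tensor:
+     crops = preprocess(Image.open(path).convert('RGB'))  # (10, 3, 224, 224)
+     with torch.no_grad():
+         feats = resnet(crops)                             # (10, 2048)
+     return feats.mean(dim=0)                              # average over the ten crops
+ ```
+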
+ ### 2.2 Models
+
+ #### 2.2.1 Visually grounded model
+
+ Our visually grounded model is based on the implementation by Merkx and Frank (2019), and we refer to that paper for the details. Here we provide a brief overview of the model, any differences with Merkx and Frank (2019), and the parameter settings tested in this study.
+
+ The VGE model maps images and their corresponding captions to a common embedding space. It is trained to make the embeddings for matching images and captions as similar as possible, and those for mismatched images and captions dissimilar. The model consists of two parts: an image embedder and a caption embedder. The image embedder is a single-layer linear projection on top of the image features extracted with ResNet-152. We train only the linear projection and do not further fine-tune ResNet.
+
+ The caption embedder consists of a word embedding layer, followed by a two-layer bi-directional Long Short-Term Memory (LSTM) layer and finally a self-attention layer. The embedding layer has 300 dimensions and is used to represent the input words as learnable embeddings. The purpose of the LSTM is to create a contextualised hidden state for each time-step (input word). Its first layer has 1028 hidden units, while its second layer acts as a bottleneck with 300 hidden units. Finally, the purpose of the attention layer is to weigh each time-step in order to create a single fixed-length embedding for the entire caption. The attention layer has 128 hidden units.
+
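+ The following PyTorch sketch shows the two encoders as we understand them from this description. It is a simplified re-implementation for illustration only: the exact attention formulation and other details follow Merkx and Frank (2019) and may differ from what is shown here.
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class CaptionEncoder(nn.Module):
+     # Word embeddings -> two-layer bidirectional LSTM (1028 then 300 units per
+     # direction) -> self-attention pooling into one fixed-length caption embedding.
+     def __init__(self, vocab_size: int, att_hidden: int = 128):
+         super().__init__()
+         self.embed = nn.Embedding(vocab_size, 300)
+         self.lstm1 = nn.LSTM(300, 1028, bidirectional=True, batch_first=True)
+         self.lstm2 = nn.LSTM(2 * 1028, 300, bidirectional=True, batch_first=True)
+         self.att = nn.Sequential(nn.Linear(2 * 300, att_hidden), nn.Tanh(),
+                                  nn.Linear(att_hidden, 1))
+
+     def forward(self, tokens):                      # tokens: (batch, time)
+         h1, _ = self.lstm1(self.embed(tokens))      # (batch, time, 2 * 1028)
+         h2, _ = self.lstm2(h1)                      # (batch, time, 2 * 300) bottleneck states
+         alpha = torch.softmax(self.att(h2), dim=1)  # one attention weight per time step
+         caption_emb = (alpha * h2).sum(dim=1)       # (batch, 600)
+         # h2 is returned as well because the per-time-step bottleneck activations
+         # are later used to build the word embeddings.
+         return F.normalize(caption_emb, dim=-1), h2
+
+ class ImageEncoder(nn.Module):
+     # A single linear projection of the frozen 2,048-dimensional ResNet features
+     # into the same 2 x 300 = 600-dimensional space as the caption embeddings.
+     def __init__(self):
+         super().__init__()
+         self.proj = nn.Linear(2048, 600)
+
+     def forward(self, feats):
+         return F.normalize(self.proj(feats), dim=-1)
+ ```
+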
+ The image embedder has $2 \times 300$ output dimensions so that its output matches the size of the caption embeddings. Both image and caption embeddings are L2 normalised, and we take their distance as the loss signal for the batch hinge loss function (see Merkx and Frank, 2019). The networks are trained for 32 epochs using Adam with a cyclic learning rate schedule based on Smith (2017), which varies the learning rate smoothly between $10^{-3}$ and $10^{-6}$.
+
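+ Continuing the sketch above, a common form of the batch hinge loss takes all mismatched image-caption pairs within a batch as negative examples; the margin value below is illustrative, and `CyclicLR` follows the cyclical schedule proposed by Smith (2017):
+
+ ```python
+ import torch
+
+ def batch_hinge_loss(img_emb, cap_emb, margin=0.2):
+     # img_emb, cap_emb: L2-normalised embeddings of shape (batch, 600); matching
+     # pairs sit on the diagonal of the similarity matrix, all other cells are negatives.
+     sims = img_emb @ cap_emb.t()
+     pos = sims.diag().view(-1, 1)
+     cost_cap = (margin + sims - pos).clamp(min=0)      # image vs. mismatched captions
+     cost_img = (margin + sims - pos.t()).clamp(min=0)  # caption vs. mismatched images
+     mask = torch.eye(sims.size(0), dtype=torch.bool, device=sims.device)
+     return cost_cap.masked_fill(mask, 0).sum() + cost_img.masked_fill(mask, 0).sum()
+
+ # cap_enc and img_enc are instances of the encoders sketched above.
+ params = list(cap_enc.parameters()) + list(img_enc.parameters())
+ optimiser = torch.optim.Adam(params, lr=1e-3)
+ scheduler = torch.optim.lr_scheduler.CyclicLR(
+     optimiser, base_lr=1e-6, max_lr=1e-3, cycle_momentum=False)
+ ```
+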
+ The obvious way to extract word embeddings from the trained model would be to use the trained weights of the embedding layer. Unlike in, for instance, GloVe, where each word's embedding is based on its full co-occurrence distribution, these embeddings are not trained specifically to capture word context or meaning, and they are not necessarily the best word embeddings. Indeed, our initial tests showed that they performed very poorly as semantic embeddings when trained from a random initialisation$^{1}$. Rather than taking the input embeddings, we create our own embeddings from the hidden representations of the model.
+
+ We create our VGEs from the hidden activations of the bottleneck LSTM layer. We use the trained caption encoder to encode all training sentences in MSCOCO. However, we remove the attention layer that creates the sentence embedding and we retain the individual activations of the LSTM at each time step. As the word representations in this layer can be used to create semantic sentence embeddings that capture human intuition about sentence meaning (as shown for instance by Merkx and Frank, 2019; Merkx et al., 2021), we expect these representations to better capture word meaning than the input embeddings.
134
+
135
+ The embedding for each word is then created by summing and normalising its LSTM layer activations from all its occurrences in the dataset. As opposed to Merkx and Frank (2019), who used a single recurrent layer and found no further benefit of additional layers in terms of sentence embedding quality, we found that the quality of our VGEs improves when we use a two-layer LSTM, with the second layer acting as a bottleneck from which we derive the embeddings.
+
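+ In code, this extraction step amounts to the sketch below, assuming a helper `encode_states(tokens)` that runs the trained caption encoder without the attention layer and returns the bottleneck-LSTM activations for one caption:
+
+ ```python
+ from collections import defaultdict
+ import numpy as np
+
+ def extract_vges(captions, encode_states, dim=600):
+     # captions: iterable of token lists; encode_states(tokens) is assumed to return
+     # the bottleneck-LSTM activations as an array of shape (len(tokens), dim).
+     sums = defaultdict(lambda: np.zeros(dim))
+     for tokens in captions:
+         for token, state in zip(tokens, encode_states(tokens)):
+             sums[token] += state        # sum activations over all occurrences
+     # L2-normalise the summed activations to obtain one embedding per word type.
+     return {word: vec / np.linalg.norm(vec) for word, vec in sums.items()}
+ ```
+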
+ #### 2.2.2 Text-based models
+
+ The text-based distributional models are trained on the MSCOCO captions. We train Word2Vec and FastText using the Gensim package (Řehůřek and Sojka, 2010). We train GloVe using the code that Pennington et al. (2014) made publicly available$^{2}$.
+
+ Word2Vec and FastText were trained as the Skip-gram variant with embedding size 300, a context window of 10 and 10 negative samples. GloVe was trained with embedding size 300 and a context window of 10. All resulting word embeddings are then L2 normalised.
+
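+ With Gensim (4.x API) the two skip-gram models can be trained roughly as follows; `min_count=1` is our own choice for the example, since the paper does not state a frequency cut-off:
+
+ ```python
+ import numpy as np
+ from gensim.models import Word2Vec, FastText
+
+ # captions: list of token lists, e.g. the normalised MSCOCO captions.
+ w2v = Word2Vec(sentences=captions, vector_size=300, window=10,
+                sg=1, negative=10, min_count=1)
+ ft = FastText(sentences=captions, vector_size=300, window=10,
+               sg=1, negative=10, min_count=1)
+
+ def normed_vector(model, word):
+     # L2-normalise a vector before computing cosine similarities.
+     vec = model.wv[word]
+     return vec / np.linalg.norm(vec)
+ ```
+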
+ In addition, we use the following pretrained vectors (all 300-dimensional): Word2Vec trained on 100 billion tokens of the Google News corpus (Mikolov et al., 2013b), FastText trained on 600 billion tokens of Common Crawl (Mikolov et al., 2018) and GloVe trained on 840 billion tokens of Common Crawl (Pennington et al., 2014).
+
+ Table 1: Description of the word similarity/relatedness evaluation datasets. #available is the number of word pairs included in the evaluation. Type indicates whether the dataset captures similarity or relatedness. NA indicates subjects were not specifically instructed on the difference.
+
+ <table><tr><td>Dataset</td><td>#word-pairs</td><td>#available</td><td>type</td></tr><tr><td>WordSim353</td><td>353</td><td>240</td><td>NA</td></tr><tr><td>WordSim-S</td><td>203</td><td>147</td><td>Similarity</td></tr><tr><td>WordSim-R</td><td>252</td><td>166</td><td>Relatedness</td></tr><tr><td>SimLex999</td><td>999</td><td>793</td><td>Similarity</td></tr><tr><td>-SimLex999 Q1</td><td>249</td><td>141</td><td>Similarity</td></tr><tr><td>-SimLex999 Q4</td><td>250</td><td>249</td><td>Similarity</td></tr><tr><td>MEN</td><td>3000</td><td>2889</td><td>Relatedness</td></tr><tr><td>RareWords</td><td>2034</td><td>204</td><td>NA</td></tr></table>
+
+ ### 2.3 Evaluation data
+
+ #### 2.3.1 Semantic similarity judgements
+
+ We include both semantic relatedness and similarity datasets in our analysis. It has been argued that subjects' intuitive understanding of similarity is not necessarily in line with the 'scientific' notions of similarity and relatedness explained in the introduction (Hill et al., 2015). Thus, if subjects are not clearly instructed on these notions of similarity or relatedness, we consider the nature of the dataset undefined.
+
+ The WordSim353 dataset by Finkelstein et al. (2002) contains 353 word pairs annotated with similarity ratings. While the name suggests it is a similarity rating dataset, more recent studies consider it a hybrid dataset, as subjects were not specifically instructed to judge relatedness or similarity. In a later study by Agirre et al. (2009), the WordSim353 data was split into similar and related pairs by annotating the word pairs. WordSim-S (similar) contains word pairs annotated as being synonyms, antonyms, identical, or hyponym-hyperonym. WordSim-R (related) contains word pairs annotated as being meronym-holonym, and pairs with none of the above relationships but with a similarity score greater than 5 (out of 10). Both sets contain all unrelated words (words not annotated with any of the above relationships and with a similarity lower than 5).
+
+ SimLex999 was created with the caveats of the original WordSim353 in mind, in order to create a dataset of 999 word pairs annotated for similarity rather than relatedness (Hill et al., 2015). SimLex999 furthermore contains concreteness ratings for the word pairs. Hill et al. (2015) divided the dataset into concreteness quartiles based on the sum of the concreteness ratings for each pair. Using these quartiles we also look at the $25\%$ most concrete word pairs versus the $25\%$ most abstract pairs in the dataset, of course expecting our grounded model to perform best on the concrete words.
+
+ ---
+
+ $^{1}$ Kiela et al. (2018) were able to use the input embeddings because they were initialised using pretrained embeddings.
+
+ $^{2}$ https://nlp.stanford.edu/projects/glove/
+
+ ---
+
+ MEN contains 3000 word pairs annotated for semantic relatedness (Bruni et al., 2013). Ratings were collected by showing subjects two word pairs and asking them to select the most related one. MEN was specifically collected to test multi-modal models, by selecting only words that have a visual referent that appeared in a large image database.
+
+ The RareWords dataset contains 2034 word pairs, where at least one word of each pair has a low frequency in Wikipedia (Luong et al., 2013). Modelling low-frequency words is a challenge for many models of distributional semantics.
+
+ Not all of the words in these databases are available in our training data and thus some will not have a word embedding. Table 1 contains an overview of the datasets described here and the number of word pairs that could be entered in our evaluations.
+
+ #### 2.3.2 Semantic priming
+
+ The Semantic Priming Project (SPP) dataset (Hutchison et al., 2013) contains lexical decision times and naming times from a large priming experiment. The database is large for its kind, with 1,661 target words (and 1,661 non-words for the lexical decision task), each paired with a strong and a weak prime and two unrelated primes. Furthermore, each prime-target pair was presented with a short (200 ms) and a long (1200 ms) SOA. Every combination of prime-target pair and SOA received responses from 32 subjects.
+
+ This gives us 26,576 trials (1,661 target words $\times$ 4 priming conditions $\times$ 2 SOAs $\times$ 2 tasks), disregarding the non-word trials. We preprocessed the data by removing target words that mistakenly had more or fewer than the required four primes, trials with erroneous responses, and trials with missing data. We also lower-cased the prime and target words, averaged the response times over the 32 subjects, and removed any prime-target pair that did not occur in our training data, resulting in 18,326 data points.
+
+ ### 2.4 Analysis
+
+ #### 2.4.1 Semantic similarity judgements
+
+ To test whether the word embedding models capture human intuitions about word similarity, we use the models to calculate embedding cosine similarities for each word pair and correlate them with the human annotations. This allows us to evaluate our custom-trained word embeddings to see which method best extracts word-level semantics from the MSCOCO dataset. Next, we also compute partial correlations between the human annotations and our VGE model using each of the text-based models as a control. Given that all models are trained on the same textual data, with only the VGEs having access to the visual modality, this allows us to see whether visual grounding captures information that the text-based methods do not.
+
+ Finally, we also test the partial correlations using the pretrained embeddings as a control. For each pretrained model we additionally add its custom MSCOCO-trained equivalent as a control, to take into account the information that text-based models can extract from the MSCOCO captions.
+
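+ The evaluation itself reduces to correlating cosine similarities with the human ratings; one way the partial correlation could be computed is by regressing the control model's similarities out of both variables, as in this sketch:
+
+ ```python
+ import numpy as np
+ from scipy import stats
+
+ def cosine(u, v):
+     return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
+
+ def correlation(pairs, human_ratings, emb):
+     # pairs: list of (word1, word2); emb: dict mapping a word to its vector.
+     sims = [cosine(emb[a], emb[b]) for a, b in pairs]
+     return stats.pearsonr(sims, human_ratings)
+
+ def partial_correlation(model_sims, human_ratings, control_sims):
+     # Pearson correlation between the model similarities and the human ratings
+     # after regressing the control similarities out of both variables.
+     x, y, z = map(np.asarray, (model_sims, human_ratings, control_sims))
+     Z = np.column_stack([np.ones_like(z), z])
+     res_x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
+     res_y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
+     return stats.pearsonr(res_x, res_y)
+ ```
+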
+ #### 2.4.2 Semantic priming
+
+ Using linear regression models, we analyse how well embedding similarities predict human (log-transformed) reaction times in the SPP data using the Statsmodels package in Python (Seabold and Perktold, 2010). We code SOA and Task as factor variables. The reaction times are not on the same scale due to differences in the required response for the lexical decision and naming tasks so we standardise the log-transformed reaction time data separately for each combination of SOA and Task. This removes the main effects of SOA and Task but we include them in the regression as we are interested in their interactions with the similarity measures.
+
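+ The standardisation step can be expressed as a short pandas transformation; the column names below are hypothetical placeholders rather than the actual SPP field names:
+
+ ```python
+ import numpy as np
+ import pandas as pd
+
+ def standardise_rts(spp: pd.DataFrame) -> pd.DataFrame:
+     # 'rt', 'soa' and 'task' are placeholder column names for the subject-averaged
+     # reaction times and the two factor variables.
+     spp = spp.copy()
+     spp['log_rt'] = np.log(spp['rt'])
+     # z-score the log reaction times separately for each SOA x Task combination,
+     # removing the main effects of SOA and Task.
+     spp['z_rt'] = spp.groupby(['soa', 'task'])['log_rt'].transform(
+         lambda x: (x - x.mean()) / x.std())
+     return spp
+ ```
+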
+ We fit a baseline regression including the target length (number of characters), Task and SOA as regressors. We furthermore include several regressors based on SUBTLEX-US (Brysbaert and New, 2009): log-transformed word-frequency counts, contextual diversity (the number of SUBTLEX-US documents a word appears in) and the orthographic neighbourhood density (the number of SUBTLEX-US words that are one character edit away) for the target words.
+
+ Next, for each of our embedding models, we add the prime-target embedding similarities as a regressor to the baseline model. We also add two two-way interactions to test the claims made in Petilli et al. (2021): 1) the interaction between the embedding similarities and Task, to test the difference between lexical decision and naming in terms of sensitivity to semantic effects, and 2) the interaction between the embedding similarities and SOA, to test their claim about the time-frame in which visual information plays a role. These regression models allow us to compare the word embedding models to each other and to the baseline using the Akaike Information Criterion (AIC), where a lower AIC indicates a better model fit.
+
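+ In Statsmodels, the baseline and embedding regressions can be specified with formulas along these lines; the variable names are our own shorthand for the regressors described above, not the names used in the original analysis:
+
+ ```python
+ import statsmodels.formula.api as smf
+
+ # spp is the preprocessed data frame from the sketch above; 'sim' holds the
+ # prime-target embedding similarity of the model under evaluation.
+ baseline = smf.ols(
+     'z_rt ~ target_length + log_freq + context_div + orth_density'
+     ' + C(task) + C(soa)', data=spp).fit()
+
+ embedding = smf.ols(
+     'z_rt ~ target_length + log_freq + context_div + orth_density'
+     ' + C(task) + C(soa) + sim + sim:C(task) + sim:C(soa)', data=spp).fit()
+
+ print(baseline.aic, embedding.aic)  # a lower AIC indicates a better model fit
+ ```
+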
+ We also test if our VGEs can explain variance in the human reaction times that the text-based methods do not. We do this by refitting the regression models for each of the text-based similarity measures and adding the VGE similarity measures and their interactions with Task and SOA as extra regressors. For each of these regressions we then calculate the log-likelihood ratio (LLR) with the corresponding regression without the VGEs, indicating the decrease in model deviance due to adding the VGE similarity measures. Higher LLRs indicate a larger contribution of the VGEs to explaining variance in the human response times beyond what the text-based embedding similarities explain. Because the LLR follows a $\chi^2$ distribution, we can test whether including the VGEs significantly improves the regression model.
+
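+ Because the two regressions are nested, one common way to carry out this test is with twice the log-likelihood difference against a chi-squared distribution, for example:
+
+ ```python
+ from scipy.stats import chi2
+
+ def llr_test(reduced, full):
+     # reduced/full: fitted statsmodels OLS results, where 'full' adds the VGE
+     # similarities and their interactions on top of 'reduced'.
+     llr = 2 * (full.llf - reduced.llf)
+     dof = int(full.df_model - reduced.df_model)
+     return llr, chi2.sf(llr, dof)   # test statistic and p-value
+ ```
+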
+ We apply a similar approach to the pretrained text-based embeddings, but here we also want to account for the information that text-based embedding models can extract from the MSCOCO captions. We do this by fitting a regression model as in the previous step, except that we include both the pretrained and the MSCOCO-trained embeddings and their interactions with SOA and Task. We then follow the same procedure as described above by adding the VGE similarities and calculating LLRs to see if adding the VGEs improves the regression fit.
+
+ ## 3 Results
+
+ ### 3.1 Semantic similarity judgements
+
+ Figure 1 shows the $R^2$ (explained variance) based on the Pearson correlation coefficients$^{3}$ between the human similarity annotations and the embedding similarities. All Pearson correlations were positive, as expected, except for two non-significant partial correlations, which are therefore not included in the figure.
+
+ Table 2: AIC comparison of regression models (lower is better). $\Delta$ indicates the difference in AIC compared to the VGE model or the Baseline model. $\beta$ indicates the coefficient of the embedding similarity main effect (lower is better) and its significance.
+
+ <table><tr><td>Model</td><td>AIC</td><td>$\Delta$ VGE</td><td>$\Delta$ Baseline</td><td>$\beta$</td></tr><tr><td>VGE</td><td>46997.55</td><td>-</td><td>-211.04</td><td>-.67***</td></tr><tr><td>FastText</td><td>47101.90</td><td>104.35</td><td>-106.86</td><td>-.54***</td></tr><tr><td>GloVe</td><td>47163.70</td><td>166.15</td><td>-44.88</td><td>-.20**</td></tr><tr><td>Word2Vec</td><td>47184.45</td><td>186.90</td><td>-24.13</td><td>-.22**</td></tr><tr><td>Baseline</td><td>47208.58</td><td>211.03</td><td>-</td><td>-</td></tr></table>
+
+ For the MSCOCO models (left panel) we see that while GloVe has the worst performance on each dataset, there is no single best model. Furthermore, while the VGEs are outperformed by FastText and Word2Vec on SimLex999, the VGEs perform best on the most concrete words (Q4) in SimLex999. Somewhat surprising, then, is that the VGEs are outperformed by FastText and Word2Vec on MEN, which contains solely picturable nouns.
+
+ Looking at the partial $R^2$, that is, the extra variance explained by the VGEs after controlling for one of the other embedding models, we see that for nearly every dataset and every model, the VGEs explain a significant portion of variance that is not explained by the text-based models. This is not very surprising on WordSim, where the VGEs were the best performing embeddings by quite a margin. However, we also see that although the VGEs are outperformed by FastText and Word2Vec on MEN, they still explain a large extra portion of variance, even though the $R^2$ for these models was already quite high.
+
+ Lastly, the pretrained models (right panel) outperform the MSCOCO models. This was expected, as their training data are several orders of magnitude larger than MSCOCO. However, even here we see that the VGEs explain a significant portion of extra variance on SimLex999 Q4 and MEN.
+
+ ### 3.2 Semantic priming
+
+ The $\Delta$AIC scores in Table 2 show that all word embedding models trained on MSCOCO improve the regression fit above the baseline. The embedding similarity effects were all negative, that is, a higher similarity correctly predicts a lower reaction time. We furthermore see that the VGE-derived similarity measures result in the best model fit by quite a margin, as evidenced by the AIC scores and effect size.
+
+ ---
+
+ $^{3}$ As the total explained variance is the partial $R^2$ plus the $R^2$ of the control, this visualises the amount of extra variance explained by the VGEs more clearly than Pearson's $r$.
+
+ ---
+
+ ![01963da4-abd3-7526-8372-ea125aa595e4_6_189_192_1270_672_0.jpg](images/01963da4-abd3-7526-8372-ea125aa595e4_6_189_192_1270_672_0.jpg)
+
+ Figure 1: The coloured bars indicate the $R^2$ scores of the four word embedding models. The grey-scale bars on top of the $R^2$ scores of the text-based models indicate the partial $R^2$ scores of the VGEs after controlling for the variance explained by that text-based model, and their significance (*$p < .05$, **$p < .01$, ***$p < .001$; corrected using the Benjamini and Hochberg (1995) procedure with a false discovery rate of 0.05). Left panel: models trained on MSCOCO. Right panel: pretrained text-based models.
+
+ Table 3: LLRs between regression models with the indicated text-based similarity measures and the same model with the VGE similarities as extra regressors. $\beta$ VGE are the regression coefficients for the VGE similarities in each model. Higher LLRs indicate a larger improvement in model quality due to adding the VGEs.
+
+ <table><tr><td rowspan="2"/><td colspan="2">MSCOCO</td><td colspan="2">+ Pretrained</td></tr><tr><td>LLR</td><td>$\beta$ VGE</td><td>LLR</td><td>$\beta$ VGE</td></tr><tr><td>Word2Vec</td><td>193.72***</td><td>-.77***</td><td>69.72***</td><td>-.49***</td></tr><tr><td>FastText</td><td>111.46***</td><td>-.63***</td><td>47.32***</td><td>-.42***</td></tr><tr><td>GloVe</td><td>168.34***</td><td>-.72***</td><td>49.80***</td><td>-.36***</td></tr></table>
+
+ We also find significant interactions between Task and the embedding similarities for the VGE ($\beta = 0.201$, $P = 0.009$) and FastText ($\beta = 0.197$, $P = 0.027$) regression models, meaning that the effect of embedding similarity is stronger for the lexical decision task. We find no significant interactions between the embedding similarities and SOA.
+
+ Table 3 shows the LLRs between regression models including the (pretrained) text-based and our VGE word similarity measures and the corresponding models including only the text-based measures. We see that our VGEs significantly improve the regression fit for every type of text-based method, even when we include both the pretrained and MSCOCO text-based measures. The coefficients of the VGE effects in these models are all negative, meaning that a higher VGE similarity predicts a lower reaction time.
+
+ In the regression models including the VGEs and the MSCOCO text-based embeddings, we found significant interactions between the VGE similarities and Task in the regression models that also include Word2Vec ($\beta = 0.239$, $P = 0.007$) or GloVe ($\beta = 0.234$, $P = 0.01$), and no other interactions with Task or SOA.
+
+ Lastly, in the regression models including the VGEs and both the pretrained and MSCOCO text-based embeddings, we find significant interactions with Task for the Word2Vec ($\beta = 0.312$, $P < 0.001$), FastText ($\beta = 0.297$, $P = 0.001$) and GloVe ($\beta = 0.443$, $P < 0.001$) vectors, and none for the VGEs.
+
+ ## 4 Discussion
+
+ We created Visually Grounded Embeddings using a caption-image retrieval model in order to test if these embeddings can capture information about word meaning that text-based approaches cannot. Importantly, by testing our VGEs on human behavioural measures typically thought to rely on conceptual/semantic knowledge, we test a central idea of embodied cognition theory, namely that our visual experiences contribute to our conceptual knowledge.
+
+ ### 4.1 Semantic similarity judgements
+
+ Our first experiment showed that, when trained on the same corpus, our VGEs are on par with text-based methods. While there is no clear overall best method, the VGEs perform well on WordSim and, as might be expected, on the datasets with concrete picturable nouns. Even though the text-based methods outperform the VGEs on one of these (MEN), the VGEs still explain a significant amount of extra variance over and above what is explained by the text-based methods. This indicates that the text-based embeddings and VGEs capture non-overlapping conceptual knowledge, which we attribute to the visual grounding of the VGEs, given that the training materials were otherwise equal.
+
+ The only database where the VGEs performed notably worse than the text-based methods was RareWords. This is perhaps because during training, the VGEs are grounded in the image corresponding to the text input, even if not all words in the sentence are visible in the picture. As the words in RareWords are generally not picturable nouns, any visual information incorporated into the word-embedding is unlikely to be helpful, or, as evidenced by the results, counterproductive.
+
+ We furthermore found that our VGEs explain additional variance in the human similarity ratings even after accounting for both the MSCOCO text-based models and pretrained models trained on massive text corpora. The fact that the VGEs explain a significant amount of extra variance even after the text-based models have seen billions of tokens of text suggests that some aspects of word meaning cannot be captured solely from text, and that visual similarity plays a role in human intuition about word meaning.
+
+ ### 4.2 Semantic priming
+
+ In our second experiment, the VGEs outperformed the text-based methods on explaining human reaction times from the Semantic Priming Project. Even after we account for both the MSCOCO text-based models and pretrained models in our regression, the VGEs still explain a significant amount of variance in the reaction times.
+
+ In previous work, Petilli et al. (2021) only found a significant contribution of visual information in the short SOA lexical decision task. We found no further evidence for their hypothesis that visual information is activated in early linguistic processing and thereafter rapidly decays. Rather, we find that our VGEs improve the model quality for both short and long SOA trials.
+
+ We did find a significant positive interaction with Task, meaning that the word embeddings explain less variance in the naming task than in the lexical decision task. This interaction was not specific to the VGEs, but also occurred in the models including FastText and for all the pretrained embeddings. As claimed by Petilli et al. (2021) and Lucas (2000), this suggests that naming tasks are in general less sensitive to semantic effects.
+
+ ## 5 Conclusion
+
+ We set out to test an end-to-end approach to combining visual and textual input in a single embedding, trained on a cognitively plausible amount of data. The results from our two experiments suggest that VGEs capture aspects of word meaning that text-based approaches cannot. Even though we include word embeddings trained on corpora several orders of magnitude greater than any human's exposure to language, our VGEs still explain a unique portion of variance in both human behavioural measures.
+
+ While our results indicate that visual grounding can provide complementary information for certain words, it may not play a role in our conceptual knowledge of rare, abstract words, as shown by our results on the RareWords dataset. Similar to Petilli et al. (2021), this then does not support the strongest formulations of embodied cognition theory, which suggest total equivalence between conceptual and sensorimotor processing (Glenberg, 2015).
+
+ Of course, one could always claim that it is just that current word-embedding models do not yet fully capture word meaning. However, given that VGEs trained on a relatively small amount of visual data can complement text-based embeddings, we do not think that even larger text corpora or more complex embedding models can ever fully capture human semantic knowledge. The human experience is rich and varied, and our computational models can never fully capture human word knowledge while ignoring the visual aspects of this experience.
+
+ ## References
+
+ Samira Abnar, Rasyan Ahmed, Max Mijnheer, and Willem Zuidema. 2018. Word Embeddings have Complementary Roles in Decoding Brain Activity. In Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018), pages 57-66, Salt Lake City, Utah, USA.
+
+ Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pasca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and WordNet-based approaches. In Proceedings of the 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2009), pages 19-27, Boulder, Colorado.
+
+ Marco Baroni, Georgiana Dinu, and Germán Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 238-247.
+
+ Lawrence W. Barsalou. 2008. Grounded cognition. Annual Review of Psychology, 59(1):617-645.
+
+ Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society B, 57:289-300.
+
+ Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
+
+ Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2013. Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49:1-47.
+
+ Marc Brysbaert and Boris New. 2009. Moving beyond Kučera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behavior Research Methods, 41(4):977-990.
+
+ Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C. Lawrence Zitnick. 2015. Microsoft COCO Captions: Data Collection and Evaluation Server. arXiv preprint arXiv:1504.00325.
+
+ Grzegorz Chrupala, Lieke Gelderloos, and Afra Alishahi. 2017. Representations of language in a model of visually grounded speech signal. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 613-622.
+
+ Steven Derby, Paul Miller, and Barry Devereux. 2020. Analysing word representation from the input and output embeddings in neural network language models. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 442-454.
+
+ Steven Derby, Paul Miller, Brian Murphy, and Barry Devereux. 2018. Using sparse semantic embeddings learned from multimodal text and image data to model human conceptual knowledge. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 260-270, Brussels, Belgium. Association for Computational Linguistics.
+
+ Giovanni B. Flores D'Arcais, Robert Schreuder, and Ge Glazenborg. 1985. Semantic activation during recognition of referential words. Psychological Research, 45(1):39-49.
+
+ Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Information Systems, 20(1):116-131.
+
+ Lucia Foglia and Robert A. Wilson. 2013. Embodied cognition. Wiley Interdisciplinary Reviews: Cognitive Science, 4(3):319-325.
+
+ Stefan L. Frank and Roel M. Willems. 2017. Word predictability and semantic similarity show distinct patterns of brain activity during language comprehension. Language, Cognition and Neuroscience, 32(9):1192-1203.
+
+ Arthur M. Glenberg. 2015. Few believe the world is flat: How embodiment is changing the scientific understanding of cognition. Journal of Experimental Psychology, 69(2):165-171.
+
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
+
+ Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with genuine similarity estimation. Computational Linguistics, 41(4):665-695.
+
+ Keith A. Hutchison, David A. Balota, James H. Neely, Michael J. Cortese, Emily R. Cohen-Shikora, Chi-Shing Tse, Melvin J. Yap, Jesse J. Bengson, Dale Niemeyer, and Erin Buchanan. 2013. The semantic priming project. Behaviour Research Methods, 45:1099-1114.
+
+ Herman Kamper, Shane Settle, Gregory Shakhnarovich, and Karen Livescu. 2017. Visually grounded learning of keyword prediction from untranscribed speech. In INTERSPEECH 2017 - 18th Annual Conference of the International Speech Communication Association, pages 3677-3681.
+
+ Andrej Karpathy and Li Fei-Fei. 2015. Deep visual-semantic alignments for generating image descriptions. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3128-3137.
+
+ Douwe Kiela, Alexis Conneau, Allan Jabri, and Maximilian Nickel. 2018. Learning visually grounded sentence representations. In Proceedings of NAACL-HLT 2018, pages 408-418. Association for Computational Linguistics.
+
+ Margery Lucas. 2000. Semantic priming without association: A meta-analytic review. Psychonomic Bulletin & Review, 7(4):618-630.
+
+ Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 104-113, Sofia, Bulgaria. Association for Computational Linguistics.
+
+ Pawel Mandera, Emmanuel Keuleers, and Marc Brysbaert. 2017. Explaining human performance in psycholinguistic tasks with models of semantic similarity based on prediction and counting: A review and empirical validation. Journal of Memory and Language, 92:57-78.
+
+ Danny Merkx and Stefan L. Frank. 2019. Learning semantic sentence representations from visually grounded language without lexical knowledge. Natural Language Engineering, 25(4):451-466.
+
+ Danny Merkx, Stefan L. Frank, and Mirjam Ernestus. 2021. Semantic Sentence Similarity: Size does not Always Matter. In INTERSPEECH 2021 - 22nd Annual Conference of the International Speech Communication Association, pages 4393-4397.
+
+ Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781.
+
+ Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).
+
+ Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In NIPS.
+
+ Diane Pecher, René Zeelenberg, and Jeroen G. W. Raaijmakers. 1984. Does pizza prime coin? Perceptual priming in lexical decision and pronunciation. Psychological Research, 45(4):339-354.
+
+ Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
+
+ Marco A. Petilli, Fritz Günther, Alessandra Vergallito, Marco Ciapparelli, and Marco Marelli. 2021. Data-driven computational models reveal perceptual simulation in word processing. Journal of Memory and Language, 117.
+
+ Radim Řehůřek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta. ELRA.
+
+ Armand S. Rotaru, Gabriella Vigliocco, and Stefan L. Frank. 2018. Modeling the structure and dynamics of semantic processing. Cognitive Science, pages 1-28.
+
+ Robert Schreuder, Giovanni B. Flores D'Arcais, and Ge Glazenborg. 1998. Effects of perceptual and conceptual similarity in semantic priming. Journal of Memory and Language, 38(4):401-418.
+
+ Skipper Seabold and Josef Perktold. 2010. statsmodels: Econometric and statistical modeling with Python. In 9th Python in Science Conference.
+
+ Leslie N. Smith. 2017. Cyclical learning rates for training neural networks. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 464-472.
+
+ Robert Speer and Joshua Chin. 2016. An Ensemble Method to Produce High-Quality Word Embeddings. arXiv preprint arXiv:1604.01692.
+
+ Haoyan Xu, Brian Murphy, and Alona Fyshe. 2016. BrainBench: A brain-image test suite for distributional semantic models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2017-2021.
papers/ACL/ACL 2022/ACL 2022 Workshop/ACL 2022 Workshop CMCL/BGMfS7tgIWq/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,414 @@
1
+ § SEEING THE ADVANTAGE: VISUALLY GROUNDING WORD EMBEDDINGS TO BETTER CAPTURE HUMAN SEMANTIC KNOWLEDGE
2
+
3
+ Anonymous ACL submission
4
+
5
+ § ABSTRACT
6
+
7
+ 001 Distributional semantic models capture word-level meaning that is useful in many natural language processing tasks and have even been shown to capture cognitive aspects of word meaning. The majority of these models are
8
+
9
+ 006 purely text based, even though the human sensory experience is much richer. In this paper we create visually grounded word embeddings by
10
+
11
+ 009 combining English text and images and compare them to popular text-based methods, to see if visual information allows our model to better capture cognitive aspects of word meaning. Our analysis shows that visually grounded embedding similarities are more predictive of the human reaction times in a large priming experiment than the purely text-based embeddings. 017 The visually grounded embeddings also correlate well with human word similarity ratings. Importantly, in both experiments we show that the grounded embeddings account for a unique portion of explained variance, even when we include text-based embeddings trained on huge corpora. This shows that visual grounding allows our model to capture information that cannot be extracted using text as the only source of information.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Distributional semantic models create word representations that quantify word meaning based on the idea that a word's meaning depends on the contexts in which the word appears. Such representations (also called embeddings) are widely used as the linguistic input for computational linguistic models, with research showing that they can account for response times in lexical decision tasks (Man-dera et al., 2017; Rotaru et al., 2018; Petilli et al., 2021), decode brain data (Xu et al., 2016; Abnar et al., 2018), account for brain activity during text comprehension (Frank and Willems, 2017), and correlate with human judgements of word similarity (Kiela et al., 2018; Derby et al., 2018, 2020).
16
+
17
+ While such embeddings have proven useful, they 042 are not cognitively plausible as creating high qual- 043
18
+
19
+ ity embeddings requires billions of word tokens. 044
20
+
21
+ For instance, the GloVe embeddings developed by 045 Pennington et al. (2014) are trained on 840 bil- 046 lion words. It would require a human 80 years of constant reading at about 330 words per second to digest that much information. Obviously, humans are able to understand language after much less ex-
22
+
23
+ posure, and furthermore, their sensory experience 051 is much richer than solely reading texts.
24
+
25
+ Embodied cognition theory poses that our con- 053 ceptual knowledge is based on the entirety of our sensory experience (Barsalou, 2008; Foglia and Wilson, 2013). For instance, reading the word ${dog}$ elicits sensory experiences we have with dogs, such
26
+
27
+ as their sound and how they look. Embodied cogni- 058 tion theory thus assumes that all our sensory experi-
28
+
29
+ ences contribute to our conceptual knowledge and 060 processing, which should be reflected in human behaviour. Early priming studies have indeed found that visual similarities can elicit priming effects 063 (D'Arcais et al., 1985; Schreuder et al., 1998).
30
+
31
+ If visual features are part of our conceptual 065 knowledge, word embeddings incorporating visual features should be able to explain human behavioural data to a degree unattainable by purely text-based methods (that is, if we assume visual
32
+
33
+ sensory experiences can never be fully captured 070 by textual descriptions). That is why recent re-
34
+
35
+ search has taken an interest in multimodal word 072 embeddings, combining text with a second source of information, resulting in visually grounded em-beddings (VGEs) in the case of visual information.
36
+
37
+ § 1.1 RELATED WORK
38
+
39
+ 076
40
+
41
+ Using image tags as a source of visual context, 077
42
+
43
+ Bruni et al. (2013) create visual distributional se- 078
44
+
45
+ mantic embeddings and use dimensionality reduc- 079
46
+
47
+ tion to map visual and text-based embeddings to the 080 common VGE space. Derby et al. (2018) combine 082 text-based embeddings with the network activations of an object recognition model and show that these visual features improve the embeddings' performance in downstream tasks. Petilli et al. (2021) use visual embeddings created by an object recog- 087 nition network, and show that the embedding similarities are predictive of priming effects over and 089 above text-based similarities.
48
+
49
+ The studies described above involve separately trained word and visual embeddings. An end-to-end approach to combine visual and linguistic information is through a deep neural network 094 based caption-to-image retrieval (C2I) models (e.g., Karpathy and Fei-Fei 2015; Kamper et al. 2017). 096 While these models are trained to encode images and corresponding written or spoken captions in a common embedding space such that relevant captions can be retrieved given an image and vice versa, the resulting embeddings have been shown to capture sentence-level semantics (Chrupala et al., 2017; Merkx and Frank, 2019; Merkx et al., 2021). Kiela et al. (2018) showed that pretrained embed-dings correlated better with human intuition about word meaning after being fine-tuned as learnable parameters in their C2I model.
50
+
51
+ § 1.2 CURRENT STUDY
52
+
53
+ In this study we investigate whether VGEs created by a C2I model explain human behavioural data. Our research question is: can VGEs capture aspects of word meaning that (current) text-based approaches cannot? To answer this question we investigate novel end-to-end trained VGEs and test them on two types of human behavioural data thought to rely on conceptual/semantic knowledge. Secondly, we take care to separate the contribution of the image modality from that of the linguistic information to see whether visual grounding captures word properties that cannot be learned by purely text-based methods. We do this by comparing our VGEs to three well-known text-based methods.
54
+
55
+ Throughout our experiments we will use two versions of the text-based methods: custom trained on the same data as our VGEs and pretrained on large corpora. From a cognitive modelling perspective, the former of these is more interesting. While the use of large corpora may not be problematic for natural language processing applications where performance comes first, we aim to create cognitively plausible embeddings, that is, from a realistic amount of linguistic exposure. However, the inclu-
56
+
57
+ sion of pretrained embeddings serves to answer our 132
58
+
59
+ main research question. 133
60
+
61
+ § 1.2.1 SEMANTIC SIMILARITY JUDGEMENTS
62
+
63
+ 134
64
+
65
+ In our first experiment we test whether the VGEs 135
66
+
67
+ correlate better with a measure of human intuition 136 about word meaning than text-based embeddings. A well-known method to capture human intuition about word meaning is simply by asking subjects how similar two words are in meaning. To evaluate word embeddings, one can then see if embedding similarities for those word pairs correlate with the human judgements (e.g., Bruni et al., 2013; Baroni et al., 2014; Speer and Chin, 2016; Kiela et al., 2018; Derby et al., 2020).
68
+
69
+ While the study by Kiela et al. (2018) performed a similar investigation on pretrained word embed-
70
+
71
+ dings fine-tuned through their C2I model, they did 148 not take into account the fact that text might also contain visual knowledge. It is not unreasonable to assume that some visual knowledge can be gained from a large corpus of sentences solely describing visual scenes. We account for this visual knowledge from text by incorporating word embeddings trained on the image descriptions in order to investigate the contribution of the image modality included in the VGEs.
72
+
73
+ Collecting word similarity ratings typically involves showing participants two words and asking them to rate how similar or related their meanings are, or picking the most related out of several pairs.
74
+
75
+ Semantic relatedness refers to the strength of the 162 association between two word meanings. For instance, 'dog' and 'leash' have a strong relationship but are not similar in meaning. Semantic similarity refers to two words sharing semantic properties, for
76
+
77
+ instance 'dogs' and 'cats' which are both animals 167 that people keep as pets (Hill et al., 2015).
78
+
79
+ § 1.2.2 SEMANTIC PRIMING
80
+
81
+ 169
82
+
83
+ In the second experiment, we test whether our
84
+
85
+ VGEs are predictive of semantic priming effects 171 from a large priming experiment (Hutchison et al., 2013). Semantic priming effects occur when activation of a semantically related prime word facilitates the processing of the target word, resulting in shorter reaction times. If all our sensory experiences contribute to word meaning, we would expect
86
+
87
+ visual perceptual properties of the prime-target pair 178 to influence the response times.
88
+
89
+ Petilli et al. (2021) performed a similar experiment using visual embeddings derived from activation features from an object recognition network and text-based word embeddings. Their results show that after accounting for the text-based similarity, the visual embedding similarities contribute to explaining the human reaction times only for lexical decision trials with a short stimulus onset asynchrony (SOA), and not for the naming task or long SOA trials. They attribute this to: 1) the lexical decision task being more sensitive to semantic effects than the naming task (Lucas, 2000), and 2) visual information being activated in early linguistic processing and rapidly decaying (Pecher et al., 1984; Schreuder et al., 1998). We will further test these interactions in our own experiment.
90
+
91
+ § 2 METHODS
92
+
93
+ In our experiments, we compare the VGEs from our own model with three well known text-based distributional semantic models: FastText (Bojanowski et al., 2017), Word2Vec (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014). For the purpose of this study, we take two approaches: 1) we train our own text-based distributional models to allow for a fair comparison to the VGEs, and 2) we use the pretrained models to investigate whether our VGEs capture semantic information that even models trained on large text corpora do not.
94
+
95
+ § 2.1 TRAINING DATA
96
+
97
+ MSCOCO is a database intended for training image recognition, segmentation and captioning models (Chen et al., 2015). It has 123,287 images and 605,495 written English captions, that is, five captions paired to each image. Captions were collected by asking annotators to describe what they saw in the picture. Five thousand images (25,000 captions) are reserved as a development set.
98
+
99
+ The captions are provided in tokenised format. In order to use them in our models we only de-capitalised all words and removed the punctuation at the end of each sentence. This results in a total of 6,184,656 word tokens and 28,415 unique word types, to which we add start- and end-of-sentence tokens for training our visually grounded model.
100
+
101
+ The images are pre-processed by resizing the images such that the shortest side is 256 pixels, while keeping the original aspect ratio. We take ten 224 by 224 crops of the image: one from each corner, one from the middle and the same five crops for the mirrored image. We use ResNet-152 (He et al., 2016) pretrained on ImageNet to extract visual features from these ten crops and then average the 231 features of the ten crops into a single vector with 232 2,048 features. These features are extracted by re- 233
102
+
103
+ moving ResNet's classification layer and taking the 234 activations of the penultimate layer.
104
+
105
+ § 2.2 MODELS
106
+
107
+ 236
108
+
109
+ § 2.2.1 VISUALLY GROUNDED MODEL
110
+
111
+ 237
112
+
113
+ Our visually grounded model is based on the im- 238
114
+
115
+ plementation by Merkx and Frank (2019), and we 239 refer to that paper for the details. Here we will provide a brief overview of the model, any differences with Merkx and Frank (2019) and the parameter settings tested in this study.
116
+
117
+ The VGE model maps images and their corresponding captions to a common embedding space.
118
+
119
+ It is trained to make the embeddings for matching 246 images and captions as similar as possible, and those for mismatched images and captions dissimilar. The model consists of two parts; an image embedder and a caption embedder. The image em-
120
+
121
+ bedder is a single-layer linear projection on top of 251 the image features extracted with ResNet-152. We train only the linear projection and do not further fine-tune ResNet.
122
+
123
+ The caption embedder consists of a word embedding layer followed by a two-layer bi-directional recurrent Long Short Term Memory (LSTM) layer and finally a self-attention layer. The embedding layer has 300 dimensions and is used to represent the input words as learnable embeddings. The purpose of the LSTM is to create a contextualised hidden state for each time-step (input word). Its first layer has 1028 hidden units, while its second layer acts as a bottleneck with 300 hidden units. Finally, the purpose of the attention layer is to weigh each time-step in order to create a single fixed-length embedding for the entire caption. The attention layer has 128 hidden units.
124
+
125
+ The image embedder has $2 \times {300}$ dimensions
126
+
127
+ so that the output matches the size of the caption 270 embeddings. Both image and caption embedding are L2 normalised and we take their distance as the loss signal for the batch hinge loss function (see Merkx and Frank, 2019). The networks are trained for 32 epochs using Adam with a cyclic learning rate schedule based on Smith (2017), which varies the learning rate smoothly between ${10}^{-3}$ and ${10}^{-6}$ .
128
+
129
+ The obvious way to extract word embeddings from the trained model would be to use the trained weights of the embedding layer. Unlike for instance
130
+
131
+ 281 in GloVe, where each word's embedding is based on its full co-occurrence distribution, these embed-dings are not trained specifically to capture word context or meaning and they are not necessarily the best word embeddings. However, our initial tests showed that they performed very poorly as semantic embeddings when trained from a random initialisation ${}^{1}$ . Rather than taking the input em-beddings we create our own embeddings from the hidden representations of the model.
132
+
133
+ We create our VGEs from the hidden activations of the bottleneck LSTM layer. We use the trained caption encoder to encode all training sentences in MSCOCO. However, we remove the attention layer that creates the sentence embedding and we retain the individual activations of the LSTM at each time step. As the word representations in this layer can be used to create semantic sentence embeddings that capture human intuition about sentence meaning (as shown for instance by Merkx and Frank, 2019; Merkx et al., 2021), we expect these representations to better capture word meaning than the input embeddings.
134
+
135
+ The embedding for each word is then created by summing and normalising its LSTM layer activations from all its occurrences in the dataset. As opposed to Merkx and Frank (2019), who used a single recurrent layer and found no further benefit of additional layers in terms of sentence embedding quality, we found that the quality of our VGEs improves when we use a two-layer LSTM, with the second layer acting as a bottleneck from which we derive the embeddings.
136
+
137
+ § 2.2.2 TEXT-BASED MODELS
138
+
139
+ The text-based distributional models are trained on the MSCOCO captions. We train Word2Vec and FastText using the Gensim package (Rehürek and Sojka, 2010). We train GloVe using the code that Pennington et al. (2014) made publicly available ${}^{2}$ .
140
+
141
+ Word2Vec and FastText were trained as the Skip-gram variant with embedding size 300, a context window of 10 and 10 negative samples. GloVe was trained with embedding size 300 and a context window of 10 . All resulting word embeddings are then L2 normalised.
142
+
143
+ In addition, we use the following pretrained vectors (all 300 dimensional): Word2Vec trained
144
+
145
+ Table 1: Description of the word similarity/relatedness evaluation datasets. #available is the number of word pairs included in the evaluation. Type indicates whether the dataset captures similarity or relatedness. NA indicates subjects were not specifically instructed on the difference.
146
+
147
148
+
149
+ Dataset #word-pairs #available type
150
+
151
+ 1-4
152
+ WordSim353 353 240 NA
153
+
154
+ 1-4
155
+ WordSim-S 203 147 Similarity
156
+
157
+ 1-4
158
+ WordSim-R 252 166 Relatedness
159
+
160
+ 1-4
161
+ SimLex999 999 793 Similarity
162
+
163
+ 1-4
164
+ -SimLex999 Q1 249 141 Similarity
165
+
166
+ 1-4
167
+ -SimLex999 Q4 250 249 Similarity
168
+
169
+ 1-4
170
+ MEN 3000 2889 Relatedness
171
+
172
+ 1-4
173
+ RareWords 2034 204 NA
174
+
175
+ 1-4
176
+
177
+ on 100 billion tokens of the Google News corpus (Mikolov et al., 2013b), FastText trained on 600 billion tokens of Common Crawl (Mikolov et al., 2018) and GloVe trained on 840 billion tokens of Common Crawl (Pennington et al., 2014).
186
+
187
+ § 2.3 EVALUATION DATA
188
+
189
+
190
+
191
+ § 2.3.1 SEMANTIC SIMILARITY JUDGEMENTS
192
+
193
+
194
+
195
+ We include both semantic relatedness and similarity datasets in our analysis. It has been argued that subjects' intuitive understanding of similarity is not necessarily in line with the 'scientific' notions of similarity and relatedness explained in the introduction (Hill et al., 2015). Thus, if subjects are not clearly instructed on these notions of similarity or relatedness, we consider the nature of the dataset undefined.
200
+
201
+ The WordSim353 dataset by Finkelstein et al. (2002) contains 353 word pairs annotated with similarity ratings. While the name suggests it is a similarity rating dataset, more recent studies consider it a hybrid dataset, as subjects were not specifically instructed to judge relatedness or similarity. In a later study by Agirre et al. (2009), the WordSim353 data was split into similar and related pairs by annotating the word pairs. WordSim-S (similar) contains word pairs annotated as being synonyms, antonyms, identical, or hyponym-hyperonym. WordSim-R (related) contains word pairs annotated as being meronym-holonym, and pairs with none of the above relationships but with a similarity score greater than 5 (out of 10). Both sets contain all unrelated words (words not annotated with any of the above relationships and a similarity lower than 5).
204
+
205
+ SimLex999 was created with the caveats of the original WordSim353 in mind in order to create a dataset of 999 word pairs annotated for similarity rather than relatedness (Hill et al., 2015). SimLex999 furthermore contains concreteness ratings for the word pairs. Hill et al. (2015) divided the dataset into concreteness quartiles based on the sum of the concreteness ratings for each pair. Using these quartiles we also look at the 25% most concrete word pairs versus the 25% most abstract pairs in the dataset, of course expecting our grounded model to perform best on the concrete words.
206
+
207
+ ${}^{1}$ Kiela et al. (2018) were able to use the input embeddings because they were initialised using pretrained embeddings.
208
+
209
+ ${}^{2}$ https://nlp.stanford.edu/projects/glove/
210
+
211
+ MEN contains 3000 word pairs annotated for semantic relatedness (Bruni et al., 2013). Ratings were collected by showing subjects two word pairs and asking them to select the most related one. MEN was specifically collected to test multi-modal models, by selecting only words that have a visual referent that appeared in a large image database.
212
+
213
+ The RareWords dataset contains 2034 word pairs, where at least one word of each pair has a low frequency in Wikipedia (Luong et al., 2013). Modelling low-frequency words is a challenge for many models of distributional semantics.
214
+
215
+ Not all of the words in these databases are available in our training data and thus some will not have a word embedding. Table 1 contains an overview of the datasets described here and the number of word pairs that could be entered in our evaluations.
216
+
217
+ § 2.3.2 SEMANTIC PRIMING
218
+
219
+ The Semantic Priming Project (SPP) dataset (Hutchison et al., 2013) contains lexical decision times and naming times from a large priming experiment. The database is large for its kind, with 1,661 target words (and 1,661 non-words for the lexical decision task), each paired with a strong and a weak prime and two unrelated primes. Furthermore, each prime-target pair was presented with a short (200 ms) and a long (1200 ms) SOA. Every combination of prime-target and SOA received responses from 32 subjects.
220
+
221
+ This gives us 26,576 (1,661 target words $\times$ 4 priming conditions $\times$ 2 SOAs $\times$ 2 tasks) trials (disregarding the non-word trials). We preprocessed the data by removing target words that mistakenly had more or fewer than the required four primes, trials with erroneous responses, and missing data. We also lower-cased the prime and target words, averaged the response times over the 32 subjects, and removed any prime-target pair that did not occur in our training data, resulting in 18,326 datapoints.
222
+
223
+ § 2.4 ANALYSIS
224
+
225
+
226
+
227
+ § 2.4.1 SEMANTIC SIMILARITY JUDGEMENTS
228
+
229
+
230
+
231
+ To test whether the word embedding models capture human intuitions on word similarity, we use the models to calculate embedding cosine similarities for each word pair and correlate them with the human annotations. This allows us to evaluate our custom trained word embeddings to see which method best extracts word-level semantics from the MSCOCO dataset. Next, we also compute partial correlations between the human annotations and our VGE model using each of the text-based models as a control. Given that all models are trained on the same textual data, with only the VGEs having access to the visual modality, this allows us to see whether visual grounding captures information that the text-based methods do not.
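The sketch below illustrates this evaluation logic (not the authors' code): embedding similarities are correlated with the ratings, and a partial correlation is obtained by residualising both variables on the similarities of a control model.

```python
import numpy as np
from scipy import stats

def residualise(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return np.asarray(y) - (slope * np.asarray(x) + intercept)

def evaluate(human, target_sims, control_sims=None):
    if control_sims is None:
        return stats.pearsonr(human, target_sims)[0]           # plain correlation
    return stats.pearsonr(residualise(human, control_sims),    # partial correlation,
                          residualise(target_sims, control_sims))[0]  # controlling for the other model
```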
240
+
241
+ Finally, we also test the partial correlations using the pretrained embeddings as a control. For each pretrained model we also add in its custom MSCOCO-trained equivalent as a control, to take into account the information that text-based models can extract from the MSCOCO captions.
244
+
245
+ § 2.4.2 SEMANTIC PRIMING
246
+
247
+
248
+
249
+ Using linear regression models, we analyse how well embedding similarities predict human (log-transformed) reaction times in the SPP data using the Statsmodels package in Python (Seabold and Perktold, 2010). We code SOA and Task as factor variables. The reaction times are not on the same scale due to differences in the required response for the lexical decision and naming tasks so we standardise the log-transformed reaction time data separately for each combination of SOA and Task. This removes the main effects of SOA and Task but we include them in the regression as we are interested in their interactions with the similarity measures.
250
+
251
+ We fit a baseline regression including the target length (number of characters), Task and SOA as regressors. We furthermore include several regressors based on SUBTLEX-US (Brysbaert and New, 2009): log-transformed word-frequency counts, contextual diversity (the number of SUBTLEX-US documents a word appears in) and the orthographic neighbourhood density (the number of SUBTLEX-US words that are one character edit away) for the target words.
252
+
253
+ Next, for each of our embedding models, we include the prime-target embedding similarities as a regressor to the baseline model. We also add two two-way interactions to test the claims made in Petilli et al. (2021): 1) the interaction between the embedding similarities and Task to test the difference between lexical decision and naming in terms of sensitivity to semantic effects, and 2) the interaction between the embedding similarities and SOA to test their claim about the time-frame in which visual information plays a role. These regression models allow us to compare the word embedding models to each other and to the baseline using the Akaike Information Criterion (AIC), where a lower AIC indicates a better model fit.
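The snippet below sketches this regression setup with statsmodels formulas; the synthetic data frame and all column names are placeholders for the preprocessed SPP data, not the authors' variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "rt": rng.uniform(400, 900, n), "target_length": rng.integers(3, 10, n),
    "word_freq": rng.normal(size=n), "context_div": rng.normal(size=n),
    "neighbourhood": rng.integers(0, 10, n), "vge_sim": rng.uniform(0, 1, n),
    "task": rng.choice(["lexdec", "naming"], n), "soa": rng.choice([200, 1200], n)})
df["z_rt"] = df.groupby(["soa", "task"])["rt"].transform(
    lambda x: (np.log(x) - np.log(x).mean()) / np.log(x).std())  # standardise per SOA x Task cell

baseline = smf.ols("z_rt ~ target_length + word_freq + context_div + neighbourhood"
                   " + C(task) + C(soa)", data=df).fit()
with_sim = smf.ols("z_rt ~ target_length + word_freq + context_div + neighbourhood"
                   " + C(task) + C(soa) + vge_sim + vge_sim:C(task) + vge_sim:C(soa)",
                   data=df).fit()
print(baseline.aic, with_sim.aic)   # lower AIC indicates a better fit
```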
254
+
255
+ We also test if our VGEs can explain variance in the human reaction times that the text-based methods do not. We do this by refitting the regression models for each of the text-based similarity measures and adding the VGE similarity measures and their interactions with Task and SOA as extra regressors. For each of these regressions we then calculate the log-likelihood ratio (LLR) with the corresponding regression without the VGEs, indicating the decrease in model deviance due to adding the VGE similarity measures. Higher LLRs indicate a larger contribution of the VGEs to explaining variance in the human response times beyond what the text-based embedding similarities explain. Because the LLR follows a ${\chi }^{2}$ distribution, we can test whether including the VGEs significantly improves the regression model.
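A likelihood-ratio comparison of two such nested fits can be sketched as follows, reusing statsmodels results like those in the previous snippet.

```python
from scipy import stats

def llr_test(reduced, full):
    """reduced, full: fitted statsmodels OLS results, where `full` adds regressors to `reduced`."""
    llr = 2 * (full.llf - reduced.llf)          # decrease in deviance from the extra regressors
    dof = full.df_model - reduced.df_model      # number of added parameters
    return llr, stats.chi2.sf(llr, dof)         # p-value from the chi-squared distribution
```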
256
+
257
+ We apply a similar approach to the pretrained text-based embeddings, but we also want to account for the information that text-based embedding models can extract from the MSCOCO captions. We do this by fitting a regression model as in the previous step except that we include both the pretrained and MSCOCO trained embeddings and their interactions with SOA and Task. We then follow the same procedure as described above by adding the VGE similarities and calculate LLRs to see if adding VGEs improves the regression fit.
258
+
259
+ § 3 RESULTS
260
+
261
+ § 3.1 SEMANTIC SIMILARITY JUDGEMENTS
262
+
263
+ Figure 1 shows the ${R}^{2}$ (explained variance) based on the Pearson correlation coefficients${}^{3}$ between the human similarity annotations and the embedding similarities. All Pearson correlations were
264
+
265
+ Table 2: AIC comparison of regression models (lower is better). $\Delta$ indicates the difference in AIC compared to the VGE model or the Baseline model. $\beta$ indicates the coefficient of the embedding similarity main effect (lower is better) and its significance.
266
+
267
+ Model      AIC        $\Delta$ VGE   $\Delta$ Baseline   $\beta$
+ VGE        46997.55   -              -211.04             -.67***
+ FastText   47101.90   104.35         -106.86             -.54***
+ GloVe      47163.70   166.15         -44.88              -.20**
+ Word2Vec   47184.45   186.90         -24.13              -.22**
+ Baseline   47208.58   211.03         -                   -
287
+
288
+ positive, as expected, except for two non-significant partial correlations, which are therefore not included in the figure.
289
+
290
+ For the MSCOCO models (left panel) we see that while GloVe has the worst performance on each dataset, there is no single best model. Furthermore, while the VGEs are outperformed by FastText and Word2Vec on SimLex999, we see that VGE performs best on the most concrete words (Q4) in SimLex999. Somewhat surprising, then, is that VGE is outperformed by FastText and Word2Vec on MEN, which contains solely picturable nouns.
291
+
292
+ Looking at the partial ${R}^{2}$ , that is, the extra variance explained by the VGEs after controlling for one of the other embedding models, we see that for nearly every dataset and every model, the VGEs explain a significant portion of variance that is not explained by the text-based models. This is not very surprising on WordSim, where the VGEs were the best performing embeddings by quite a margin. However, we also see that even though the VGEs are outperformed by FastText and Word2Vec on MEN, they still explain a large extra portion of variance even though the ${R}^{2}$ for these models was already quite high.
293
+
294
+ Lastly, the pretrained models (right panel) outperform the MSCOCO models. This was expected, as their training data is several orders of magnitude larger than MSCOCO. However, here we still see that the VGEs explain a significant portion of extra variance on SimLex999 Q4 and MEN.
297
+
298
+ § 3.2 SEMANTIC PRIMING
299
+
300
+ The $\Delta$ AIC scores in Table 2 show that all word embedding models trained on MSCOCO improve the regression fit above the baseline. The embedding similarity effects were all negative, that is, a higher similarity correctly predicts a lower reaction time. We furthermore see that the VGE-derived similarity measures result in the best model fit by quite a margin, as evidenced by the AIC scores and effect size.
301
+
302
+ ${}^{3}$ As total explained variance is the partial ${R}^{2}$ plus ${R}^{2}$ of the control, this more clearly visualises the amount of extra variance explained by the VGEs than Pearson’s $r$ .
303
+
304
+ <graphics>
305
+
306
+ Figure 1: The coloured bars indicate the ${R}^{2}$ scores of the four word embedding models. The grey-scale bars on top of the ${R}^{2}$ scores of the text-based models indicate the partial ${R}^{2}$ scores of the VGEs after controlling for the variance explained by that text-based model, and their significance ($*$ $p < .05$, $**$ $p < .01$, $***$ $p < .001$, corrected using the Benjamini and Hochberg (1995) procedure with a false discovery rate of 0.05). Left panel: models trained on MSCOCO. Right panel: pretrained text-based models.
307
+
308
+ Table 3: LLRs between regression models with the indicated text-based similarity measures and the same model with the VGE similarities as extra regressors. $\beta$ VGE are the regression coefficients for the VGE similarities in each model. Higher LLRs indicate a larger improvement in model quality due to adding the VGEs.
309
+
310
+              MSCOCO                     + Pretrained
+              LLR        $\beta$ VGE     LLR       $\beta$ VGE
+ Word2Vec     193.72***  -.77***         69.72***  -.49***
+ FastText     111.46***  -.63***         47.32***  -.42***
+ GloVe        168.34***  -.72***         49.80***  -.36***
327
+
328
+ We also find significant interactions between Task and the embedding similarities for the VGE $\left( {\beta = {0.201},P = {0.009}}\right)$ and FastText regression models $\left( {\beta = {0.197},P = {0.027}}\right)$ , meaning that the effect of embedding similarity is stronger for the lexical decision task. We find no significant interactions between the embedding similarities and SOA.
329
+
330
+ Table 3 shows the LLRs between regression models including the (pretrained) text-based and our VGE word similarity measures and the corresponding model including only the text-based measures. We see that our VGEs significantly improve the regression fit for every type of text-based method, even when we include both the pretrained and MSCOCO text-based measures. The coefficients of the VGE effects in these models are all negative, meaning a higher VGE similarity predicts a lower reaction time.
341
+
342
+ In the regression models including the VGEs and the MSCOCO text-based embeddings we found significant interactions between the VGE similarities and Task in the regression models that also include Word2Vec ($\beta = 0.239$, $P = 0.007$) or GloVe ($\beta = 0.234$, $P = 0.01$), and no other interactions with Task or SOA.
347
+
348
+ Lastly, in the regression models including the VGEs and both pretrained and MSCOCO text-based embeddings, we find significant interactions with Task for Word2Vec ($\beta = 0.312$, $P < 0.001$), FastText ($\beta = 0.297$, $P = 0.001$) and GloVe ($\beta = 0.443$, $P < 0.001$) vectors, and none for the VGEs.
353
+
354
+ § 4 DISCUSSION
355
+
356
+ We created Visually Grounded Embeddings using a caption-image retrieval model in order to test if these embeddings can capture information about word meaning that text-based approaches cannot. Importantly, by testing our VGEs on human behavioural measures typically thought to rely on conceptual/semantic knowledge, we test a central idea of embodied cognition theory, namely that our visual experiences contribute to our conceptual knowledge.
359
+
360
+ § 4.1 SEMANTIC SIMILARITY JUDGEMENTS
361
+
362
+ Our first experiment showed that, when trained on the same corpus, our VGEs are on par with text-based methods. While there is no clear overall best method, the VGEs perform well on WordSim and, as might be expected, on the datasets with concrete picturable nouns. Even though the text-based methods outperform the VGEs on one of these (MEN), the VGEs still explain a significant amount of extra variance over and above what is explained by the text-based methods. This indicates that the text-based embeddings and VGEs capture non-overlapping conceptual knowledge, which we attribute to the visual grounding of the VGEs, given that the training materials were otherwise equal.
363
+
364
+ The only database where the VGEs performed notably worse than the text-based methods was RareWords. This is perhaps because during training, the VGEs are grounded in the image corresponding to the text input, even if not all words in the sentence are visible in the picture. As the words in RareWords are generally not picturable nouns, any visual information incorporated into the word-embedding is unlikely to be helpful, or, as evidenced by the results, counterproductive.
365
+
366
+ We furthermore found that our VGEs explain additional variance in the human similarity ratings even after accounting for both the MSCOCO text-based models and pretrained models trained on massive text corpora. The fact that the VGEs explain a significant amount of extra variance even after the text-based models have seen billions of tokens of text suggests that some aspects of word meaning cannot be captured solely from text, and that visual similarity plays a role in human intuition about word meaning.
367
+
368
+ § 4.2 SEMANTIC PRIMING
369
+
370
+ In our second experiment, the VGEs outperformed the text-based methods on explaining human reaction times from the Semantic Priming Project. Even after we account for both the MSCOCO text-based models and pretrained models in our regression, the VGEs still explain a significant amount of variance in the reaction times.
371
+
372
+ In previous work, Petilli et al. (2021) only found a significant contribution of visual information in the short SOA lexical decision task. We found no further proof for their hypothesis that visual information is activated in early linguistic processing and thereafter rapidly decays. Rather, we find that our VGEs improve the model quality for both short and long SOA trials.
385
+
386
+ We did find a significant positive interaction with Task, meaning that the word embeddings explain less variance in the naming task than in the lexical decision task. This interaction was not specific to the VGEs but also occurred in the models including FastText and for all the pretrained embeddings. As claimed in Petilli et al. (2021) and Lucas (2000), this suggests that naming tasks are in general less sensitive to semantic effects.
395
+
396
+ § 5 CONCLUSION
397
+
398
+
399
+
400
+ We set out to test an end-to-end approach to combining visual and textual input in a single embedding, trained on a cognitively plausible amount of data. The results from our two experiments suggest that VGEs capture aspects of word meaning that text-based approaches cannot. Even though we include word embeddings trained on corpora several orders of magnitude greater than any human's exposure to language, our VGEs still explain a unique portion of variance in both human behavioural measures.
407
+
408
+ While our results indicate that visual grounding can provide complementary information for certain words, it may not play a role in our conceptual knowledge of rare, abstract words, as shown by our results on the RareWords corpus. Similar to Petilli et al. (2021), this then does not support the strongest formulations of embodied cognition theory, which suggest total equivalence between conceptual and sensorimotor processing (Glenberg, 2015).
411
+
412
+ Of course, one could always claim that it is just current word-embedding models that do not fully capture word meaning yet. However, given that VGEs trained on a relatively small amount of visual data can complement text-based embeddings, we do not think even larger text corpora or more complex embedding models can ever fully capture human semantic knowledge. The human experience is rich and varied, and our computational models can never fully capture human word knowledge while ignoring visual aspects of this experience.
papers/ACL/ACL 2022/ACL 2022 Workshop/ACL 2022 Workshop CMCL/B_OII7tlIZ5/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,259 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Neural Language Models Evaluate Human Performance: The Role of Language and Prosody in Predicting Job Interview Scores
2
+
3
+ Anonymous ACL submission
4
+
5
+ ## Abstract
6
+
7
+ In this work we test the use of state-of-the-art neural language model representations to predict behavioral traits that cannot be easily extracted from the textual input alone. We take the task of automated job interview scoring and make predictions on behavioral traits such as hirability, engagement, or friendliness. We find that representing text using neural models trained only on text already leads to better overall prediction results compared to a feature engineering approach that uses a combination of linguistic and extra-linguistic materials. Moreover, we show that combining word embeddings and prosodic features improves the results even further, highlighting the value of adding information from modalities other than text when evaluating human performance.
12
+
13
+ ## 1 Introduction
14
+
15
+ Recent advances in neural networks have enabled machines to perform tasks such as natural language inference or textual similarity to a very high level of accuracy (Devlin et al., 2019; Wang et al., 2018). A relatively new and not yet fully explored point of investigation is the ability of neural language models (NLMs) to also capture latent information that is not directly detectable at the lexical level. These are tasks requiring more than the simple inspection of the meaning of words in context because they aim at evaluating humans' performance based both on the communicative form and intent: e.g., the evaluation of communication skills (Rasipuram and Jayagopi, 2016), proficiency levels (Oh et al., 2017), quality of reviews or posts (Danescu-Niculescu-Mizil et al., 2013; Cheong et al., 2019), and engagement level of public speeches (Acharyya et al., 2020). Another clear example is automatic job interview scoring; this task requires the system to identify and analyze linguistic and stylistic information that is critical to successfully evaluate the competence, and consequent hirability, of the candidates (Rasipuram and Jayagopi, 2018; Nguyen et al., 2014; DeGroot and Gooty, 2009). The evaluation of such systems generally requires paragraph-long responses rather than single words or sentences because it is very difficult to judge someone's skills based on responses that are too short (Batrinca et al., 2011).
22
+
23
+ In this paper, we use job interview scoring as a testbed for our analyses. We investigate the ability of NLMs to process paragraph-long texts and successfully predict behavioral variables such as hirability or friendliness, by simply using latent information expressed in the language. Our contributions are threefold. First, we use word embeddings generated by state-of-the-art neural language models instead of manually created features as predictors in our task. Second, we show how the combination of word embeddings to capture paragraph-level information significantly outperforms existing feature engineering approaches. Third, we perform predictions using regression models instead of classification models, which allows for a more precise comparison between the performance of humans and neural models.
40
+
41
+ ## 2 Background and Related Work
42
+
43
+
44
+
45
+ The automatic evaluation of human performance is often carried out by using feature engineering approaches, in which manually extracted features from the lexical, acoustic or visual modalities are selected and fed into the prediction models. Rasipuram and Jayagopi (2016) designed a model that predicts the communication skills of people based on prosody and visual cues from their interview videos. Oh et al. (2017) designed a DNN-based language proficiency assessment classifier that assigns the speech responses of non-native speakers into acceptance or rejection by extracting meaning features and grammar features. Cheong et al. (2019) performed automatic detection of the thoughtfulness of a post in an online discussion forum, using both structural and syntactic features. Zong et al. (2020) performed a forecasting skill prediction combining textual and cognitive factors and concluded that the textual materials are sufficient for the task. Agrawal et al. (2020), Naim et al. (2018), and Nguyen et al. (2014) performed the task of predicting performance scores of job interviews using several manually crafted features.
64
+
65
+ ## 3 Experiments
66
+
67
+ ### 3.1 Data
68
+
69
+ We use the job interview dataset provided by Naim et al. (2018), which consists of transcripts, videos, and scores of 138 paragraph-long responses (747 tokens on average) by MIT students in a mock job interview. Below is an example of a potential response taken from Naim et al. (2018).
70
+
71
+ "I led the team by showing how to program the robot. The students did a wonderful job! In ten weeks, we made the robot play soccer. It was a lot of fun."
72
+
73
+ Each response was evaluated by 9 human raters on a scale from 1 to 7 for diverse behavioral traits such as friendliness, engagement, or hirability, many of which require access to information that is not directly available in the linguistic input (see Figure 1 for the full list of traits and Naim et al. (2018) for their description).
74
+
75
+ ### 3.2 Language Modeling and Paragraph Representation
76
+
77
+ As shown in Table 1, we build the linguistic representations for our experiments by using the output of: a) four static neural language models (word2vec (Mikolov et al., 2014), fastText-wiki, fastText-crawl (Mikolov et al., 2018), gloVe (Pennington et al., 2014)), and b) four different combinations of BERT embeddings (Devlin et al., 2019). We follow Devlin et al. (2019) to select and combine the best four output layers of a BERT-base model.
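For illustration, the snippet below shows one way to obtain a "BERT-4sum"-style token representation with the HuggingFace transformers library; the model name and the pooling choice mirror the description in Table 1, but the code itself is only a sketch, not the authors' implementation.

```python
import torch
from transformers import BertModel, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

enc = tok("I led the team by showing how to program the robot.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).hidden_states          # tuple: embedding layer + 12 Transformer layers
bert_4sum = torch.stack(hidden[-4:]).sum(dim=0)  # (1, seq_len, 768): sum of the last four layers
```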
78
+
79
+ To represent the entire paragraph, we take the average and the sum of the embeddings of each word in it and produce one vector representation for each paragraph. This approach was inspired by Arora et al. (2017), who reported that this method outperformed more sophisticated approaches for sentence representations by about ${10} - {30}\%$ . We expand this approach to paragraph level and create one numerical representation for each paragraph.
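A minimal sketch of this paragraph pooling is given below; `emb` stands for any of the word embedding models (a dictionary-like lookup), and the function is illustrative rather than the exact code used.

```python
import numpy as np

def paragraph_vector(tokens, emb, mode="sum"):
    vecs = [emb[t] for t in tokens if t in emb]   # skip out-of-vocabulary tokens
    if not vecs:
        return None
    stacked = np.vstack(vecs)
    return stacked.sum(axis=0) if mode == "sum" else stacked.mean(axis=0)
```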
80
+
81
+ ### 3.3 Experimental Setup
82
+
83
+ We treat our task as a regression problem, with the target scores being continuous numbers averaged among the nine human raters. In this work, we perform two experiments and compare the use of information embedded in different neural language model representations alone and in combination with prosodic information against manually crafted features (baseline).
88
+
89
+ Baseline As baseline we take the best performance from Naim et al. (2018), who use handpicked features from three modalities (linguistic, prosodic, and facial) as input parameters of two regression models (Lasso Regression and Support Vector Regression). Their linguistic features, 23 of which are obtained from the software LIWC (Tausczik and Pennebaker, 2010), include features like content word categories, positive/negative emotion words, or function word categories. Their prosodic features include information such as fundamental frequency (F0) and intensity. Their facial features include information regarding movements of eyebrows and lips, nods and head-shakes. In all the subsequent comparisons, the baseline results are the ones obtained by combining features from all three modalities.
96
+
97
+ Our experiments We perform two experiments. In Experiment 1, only linguistic information - in the form of paragraph embeddings - is used as features for the regression models. In Experiment 2, the linguistic information is combined with the prosodic information provided by Naim et al. (2018) to probe for potential improvement in the model performance while providing multimodal information to the system. We exclude facial features from our experiments because, in the original study, the improvement in prediction results obtained by adding facial features was minimal and limited to certain traits such as friendliness or excitement (see Figure 5 in Naim et al. (2018)). This is probably due to the fact that visual features were automatically extracted from the video recordings and, consequently, are extremely noisy. For both experiments, we use Lasso Regression and Support Vector Regression for comparisons of the prediction results against Naim et al. (2018). Due to the small size of the dataset provided, we consider these models the only valid options to obtain reliable results.
106
+
107
+ Setup The prediction experiments are performed using the sklearn library in Python. A five-fold cross-validation is performed to avoid overfitting. Pearson's correlation coefficient between human-generated and machine-generated scores is used as our evaluation metric as in Naim et al. (2018). The grid search algorithm is used to tune hyperparame-ters that elicit better results for the majority of the traits.
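The sketch below illustrates this setup for one trait with scikit-learn; the placeholder data and the alpha grid are assumptions, since the exact hyperparameter ranges are not reported here.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, cross_val_predict

X = np.random.rand(138, 300)        # paragraph representations (placeholder values)
y = 1 + 6 * np.random.rand(138)     # averaged human scores for one trait (placeholder values)

search = GridSearchCV(Lasso(max_iter=10000), {"alpha": [0.001, 0.01, 0.1, 1.0]}, cv=5)
pred = cross_val_predict(search, X, y, cv=5)     # five-fold cross-validated predictions
print(pearsonr(y, pred)[0])                      # evaluation metric: Pearson's r
```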
108
+
109
+ <table><tr><td>Name</td><td>Type</td><td>Description and Trained Corpora</td></tr><tr><td>word2vec</td><td>S</td><td>word embeddings produced by word2vec (Google News)</td></tr><tr><td>gloVe</td><td>S</td><td>word embeddings produced by gloVe (Wikipedia and Gigaword)</td></tr><tr><td>wiki</td><td>S</td><td>word embeddings produced by fastText (Wikipedia)</td></tr><tr><td>crawl</td><td>S</td><td>word embeddings produced by fastText (Common Crawl)</td></tr><tr><td>BERT-all</td><td>C</td><td>row-wise sum of the weights from all 12 Transformer output layers</td></tr><tr><td>BERT-s21</td><td>C</td><td>the weights from the second-to-last Transformer output layer</td></tr><tr><td>BERT-4sum</td><td>C</td><td>row-wise sum of the weights from the last four Transformer output layers</td></tr><tr><td>BERT-4cat</td><td>C</td><td>row-wise concatenation of the weights from the last four Transformer output layers</td></tr></table>
110
+
111
+ Table 1: Overview of the 8 word embedding types used in our experiments. The top four embeddings are static embeddings (S) and the bottom four embeddings are contextualized embeddings (C). Finally, each word embedding is either summed or averaged across paragraphs resulting in a total of 16 representations.
112
+
113
+ ## 4 Results
114
+
115
+ ### 4.1 Experiment 1
116
+
117
+ After comparing the performance of the 16 possible paragraph representations as predictors of the two regression methods (Lasso and SVR), we find very similar and consistent results. ${}^{1}$ Because of limited space and for clarity, in the following sections we only report the results from our best combination of models and regression methods: Lasso regression on word2vec (summed) from the static models and BERT-all (summed) from the contextualized models.
118
+
119
+ As shown in Figure 1, with the exception of Excited, recommendHire, and noFillerWords, our paragraph-based language models outperform the baseline approach with a varying degree per trait. We perform a pairwise t-test to statistically compare the average performance improvement (in the form of correlation coefficients) of our models compared to the baseline. The t-test analysis shows that word2vec (M = 0.71 ± 0.07) significantly outperforms the baseline approach (M = 0.57 ± 0.16, p-value < 0.007). Moreover, as indicated by the big reduction in the standard deviation values, the neural models obtain more even performances across the predicted individual traits compared to the baseline (SD baseline = 0.16 vs. word2vec = 0.07, BERT-all = 0.10). This indicates that our models are robust on a wider range of traits compared to the feature engineering approach. Compared to the baseline model, which especially struggles for traits like notStressed or eyeContact even though the information from the prosodic and facial modalities was leveraged, our static models show significantly better results. Also BERT-all leads to a slight yet non-significant improvement (M = 0.66 ± 0.10, p = 0.07) compared to the baseline, although it does not outperform word2vec (p = 0.10). It is worth noting that neural models entirely based on lexical cues significantly outperform a model that combines features extracted from three different modalities. Particularly interesting traits are notStressed, eyeContact, calm, authentic, smiled, focused, and paused, which show highly improved results compared to the baseline even though, intuitively, making judgments on such traits should require more than just textual information.
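One way such a comparison could be run is sketched below as a paired t-test over the per-trait correlation coefficients; the listed values are placeholders, not the reported results.

```python
from scipy import stats

baseline_r = [0.57, 0.43, 0.61, 0.52]   # per-trait correlations of the baseline (placeholders)
word2vec_r = [0.71, 0.66, 0.74, 0.69]   # per-trait correlations of word2vec (placeholders)

t, p = stats.ttest_rel(word2vec_r, baseline_r)
print(t, p, p < 0.007)                  # compared against the alpha-corrected threshold
```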
130
+
131
+ ### 4.2 Experiment 2
132
+
133
+ In Experiment 2 we test how adding prosodic information affects the overall model performance. We combine linguistic and prosodic inputs by concatenating the corresponding two vector representations. Adding prosodic features to word2vec (word2vec+pro; $\mathrm{M} = {0.76} \pm {0.07}$ ) leads to a slight but non-significant improvement $\left( {\mathrm{p} = {0.10}}\right)$ ; whereas the addition of prosodic features to BERT-all (BERT-all+pro; M = 0.77 ± 0.08) leads to a significant improvement $\left( {\mathrm{p} < {0.007}}\right)$ . Furthermore, BERT-all+pro does not outperform the language-only representation by word2vec $\left( {\mathrm{p} = {0.06}}\right)$ . This last result indicates that the use of textual representations obtained by pre-trained word2vec alone is comparable to the use of BERT representations together with the prosodic features (see Figure 2 for statistical significance between different groups). A possible interpretation of these results is that, even though prosody in general plays a contributing role in predicting behavioral traits, its effect becomes more relevant when the linguistic representation alone is not sufficient to perform a specific task and requires external supporting materials.
134
+
135
+ ---
136
+
137
+ ${}^{1}$ The full results for the 8 word embedding types summed and averaged can be found at: http://osf.io/6gzyq/?view_only=700678d4764e4feba545c0dddb0df6f5
138
+
139
+ ${}^{2}$ To attenuate the problem of multiple comparisons, all p-values have been alpha-corrected: $**$ = p-value $\leq 0.007$, $***$ = p-value $\leq 0.0007$.
140
+
141
+ ---
142
+
143
+ ![01963d99-5e8a-7aed-b45f-1639fbf88aa2_3_193_198_623_550_0.jpg](images/01963d99-5e8a-7aed-b45f-1639fbf88aa2_3_193_198_623_550_0.jpg)
144
+
145
+ Figure 1: Experiment 1 - Pearson's correlation coefficients for each trait predicted by a Lasso Regression model using: manual features from Naim et al. (2018) (baseline; black circle), word2vec (red triangle), and BERT-all (blue square).
146
+
147
+ ## 5 Conclusion
148
+
149
+ Our study shows that neural embeddings generally outperform manually elicited features from multiple modalities in a task that evaluates human performance and on traits that are not easily measurable via shallow access to text. Compared to the feature engineering approach previously adopted, the use of pre-trained embeddings clearly constitutes a step forward in guaranteeing replicability and in reducing implementation issues. Moreover, we show that we can successfully build paragraph-level representations by combining the embedding of each word and still obtain mid-high correlations with human judgments for all the 16 traits (especially with word2vec). Also, our approach performs well on a relatively small dataset, which is valuable given that for many tasks a high amount of data is simply not available or difficult to collect. Finally, we observe that the addition of prosodic features
150
+
151
+ ![01963d99-5e8a-7aed-b45f-1639fbf88aa2_3_844_195_630_554_0.jpg](images/01963d99-5e8a-7aed-b45f-1639fbf88aa2_3_844_195_630_554_0.jpg)
152
+
153
+ Figure 2: Boxplots representing average Pearson's correlation coefficients across traits for different language modeling types (baseline from Naim et al. (2018) (black), word2vec (red), and BERT-all (blue)). word2vec+pro and BERT-all+pro show the average results after combining linguistic and prosodic information. The lines indicate the medians and the dots the mean values across all predicted traits. We provide the significant results from the pairwise t-tests (ns = non-significant, $**$ = p $\leq 0.007$; $***$ = p $\leq 0.0007$).
154
+
155
+ improves the prediction performance even further, especially for models with a lower performance in the language-only setup. This model behavior has an interesting similarity with the way humans process and understand language: when not enough linguistic cues are available at the lexical-semantic level, additional extra-linguistic materials are required to successfully process the information provided (Zhang et al., 2021).
170
+
171
+ ## References
172
+
173
+ Rupam Acharyya, Shouman Das, Ankani Chattoraj, and Md. Iftekhar Tanveer. 2020. FairyTED: A Fair Rating Predictor for TED Talk Data. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01):338-345.
+ Anumeha Agrawal, Rosa Anil George, Selvan Sunitha Ravi, Sowmya Kamath S, and Anand Kumar. 2020. Leveraging Multimodal Behavioral Analytics for Automated Job Interview Performance Assessment and Feedback. In Second Grand-Challenge and Workshop on Multimodal Language (Challenge-HML), pages 46-54. Association for Computational Linguistics, Seattle, USA.
+ Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In 5th International Conference on Learning Representations, ICLR 2017.
+ Ligia Maria Batrinca, Nadia Mana, Bruno Lepri, Fabio Pianesi, and Nicu Sebe. 2011. Please, tell me about yourself: automatic personality assessment using short self-presentations. In Proceedings of the 13th International Conference on Multimodal Interfaces - ICMI '11, page 255, Alicante, Spain. ACM Press.
+ Michelle L. F. Cheong, Jean Y.-C. Chen, and Bing Tian Dai. 2019. An Intelligent Platform with Automatic Assessment and Engagement Features for Active Online Discussions. In Franz Wotawa, Gerhard Friedrich, Ingo Pill, Roxane Koitz-Hristov, and Moonis Ali, editors, Advances and Trends in Artificial Intelligence. From Theory to Practice, volume 11606, pages 730-743. Springer International Publishing, Cham. Series Title: Lecture Notes in Computer Science.
+ Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. A Computational Approach to Politeness with Application to Social Factors. arXiv:1306.6078.
+ Timothy DeGroot and Janaki Gooty. 2009. Can Nonverbal Cues be Used to Make Meaningful Personality Attributions in Employment Interviews? Journal of Business and Psychology, 24(2):179-192.
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL-HLT 2019, pages 4171-4186. Association for Computational Linguistics.
+ Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in Pre-Training Distributed Word Representations. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).
+ Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2014. Linguistic Regularities in Continuous Space Word Representations. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
+ Iftekhar Naim, Md. Iftekhar Tanveer, Daniel Gildea, and Mohammed Ehsan Hoque. 2018. Automated Analysis and Prediction of Job Interview Performance. IEEE Transactions on Affective Computing, 9(2):191-204.
+ Laurent Son Nguyen, Denise Frauendorfer, Marianne Schmid Mast, and Daniel Gatica-Perez. 2014. Hire me: Computational Inference of Hirability in Employment Interviews Based on Nonverbal Behavior. IEEE Transactions on Multimedia, 16(4):1018-1031.
+ Yoo Rhee Oh, Hyung-Bae Jeon, Hwa Jeon Song, Byung Ok Kang, Yun-Kyung Lee, Jeon-Gue Park, and Yun-Keun Lee. 2017. Deep-Learning Based Automatic Spontaneous Speech Assessment in a Data-Driven Approach for the 2017 SLaTE CALL Shared Challenge. In 7th ISCA Workshop on Speech and Language Technology in Education (SLaTE 2017), pages 103-108. ISCA.
+ Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.
+ Sowmya Rasipuram and Dinesh Babu Jayagopi. 2016. Automatic assessment of communication skill in interface-based employment interviews using audio-visual cues. In 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), pages 1-6, Seattle, WA, USA. IEEE.
+ Sowmya Rasipuram and Dinesh Babu Jayagopi. 2018. Automatic assessment of communication skill in interview-based interactions. Multimedia Tools and Applications, 77(14):18709-18739.
+ Yla R. Tausczik and James W. Pennebaker. 2010. The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods. Journal of Language and Social Psychology, 29(1):24-54.
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355. Association for Computational Linguistics.
+ Ye Zhang, Diego Frassinelli, Jyrki Tuomainen, Jeremy I. Skipper, and Gabriella Vigliocco. 2021. More than words: Word predictability, prosody, gesture and mouth movements in natural language comprehension. Proceedings of the Royal Society B, 288(1955):20210500.
+ Shi Zong, Alan Ritter, and Eduard Hovy. 2020. Measuring Forecasting Skill from Text. arXiv:2006.07425.
papers/ACL/ACL 2022/ACL 2022 Workshop/ACL 2022 Workshop CMCL/B_OII7tlIZ5/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,193 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § NEURAL LANGUAGE MODELS EVALUATE HUMAN PERFORMANCE: THE ROLE OF LANGUAGE AND PROSODY IN PREDICTING JOB INTERVIEW SCORES
2
+
3
+ Anonymous ACL submission
4
+
5
+ § ABSTRACT
6
+
7
+ In this work we test the use of state-of-the-art neural language model representations to predict behavioral traits that cannot be easily extracted from the textual input alone. We take the task of automated job interview scoring and make predictions on behavioral traits such as hirability, engagement, or friendliness. We find that representing text using neural models trained only on text already leads to better overall prediction results compared to a feature engineering approach that uses a combination of linguistic and extra-linguistic materials. Moreover, we show that combining word embeddings and prosodic features improves the results even further, highlighting the value of adding information from modalities other than text when evaluating human performance.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Recent advances in neural networks have enabled machines to perform tasks such as natural language inference or textual similarity to a very high level of accuracy (Devlin et al., 2019; Wang et al., 2018). A relatively new and not yet fully explored point of investigation is the ability of neural language models (NLMs) to also capture latent information that is not directly detectable at the lexical level. These are tasks requiring more than the simple inspection of the meaning of words in context because they aim at evaluating humans' performance based both on the communicative form and intent: e.g., the evaluation of communication skills (Rasipuram and Jayagopi, 2016), proficiency levels (Oh et al., 2017), quality of reviews or posts (Danescu-Niculescu-Mizil et al., 2013; Cheong et al., 2019), and engagement level of public speeches (Acharyya et al., 2020). Another clear example is automatic job interview scoring; this task requires the system to identify and analyze linguistic and stylistic information that is critical to successfully evaluate the competence, and consequent hirability, of the candidates (Rasipuram and Jayagopi, 2018; Nguyen et al., 2014; DeGroot and Gooty, 2009). The evaluation of such systems generally requires paragraph-long responses rather than single words or sentences because it is very difficult to judge someone's skills based on responses that are too short (Batrinca et al., 2011).
22
+
23
+ In this paper, we use job interview scoring as a testbed for our analyses. We investigate the ability of NLMs to process paragraph-long texts and successfully predict behavioral variables such as hirability or friendliness, by simply using latent information expressed in the language. Our contributions are threefold. First, we use word embeddings generated by state-of-the-art neural language models instead of manually created features as predictors in our task. Second, we show how the combination of word embeddings to capture paragraph-level information significantly outperforms existing feature engineering approaches. Third, we perform predictions using regression models instead of classification models, which allows for a more precise comparison between the performance of humans and neural models.
40
+
41
+ § 2 BACKGROUND AND RELATED WORK
42
+
43
+
44
+
45
+ The automatic evaluation of human performance is often carried out by using feature engineering approaches, in which manually extracted features from the lexical, acoustic or visual modalities are selected and fed into the prediction models. Rasipuram and Jayagopi (2016) designed a model that predicts the communication skills of people based on prosody and visual cues from their interview videos. Oh et al. (2017) designed a DNN-based language proficiency assessment classifier that assigns the speech responses of non-native speakers into acceptance or rejection by extracting meaning features and grammar features. Cheong et al. (2019) performed automatic detection of the thoughtfulness of a post in an online discussion forum, using both structural and syntactic features. Zong et al. (2020) performed a forecasting skill prediction combining textual and cognitive factors and concluded that the textual materials are sufficient for the task. Agrawal et al. (2020), Naim et al. (2018), and Nguyen et al. (2014) performed the task of predicting performance scores of job interviews using several manually crafted features.
64
+
65
+ § 3 EXPERIMENTS
66
+
67
+ § 3.1 DATA
68
+
69
+ We use the job interview dataset provided by Naim et al. (2018), which consists of transcripts, videos, and scores of 138 paragraph-long responses (747 tokens on average) by MIT students in a mock job interview. Below is an example of a potential response taken from Naim et al. (2018).
70
+
71
+ "I led the team by showing how to program the robot. The students did a wonderful job! In ten weeks, we made the robot play soccer. It was a lot of fun."
72
+
73
+ Each response was evaluated by 9 human raters on a scale from 1 to 7 for diverse behavioral traits such as friendliness, engagement, or hirability, many of which require access to information that is not directly available in the linguistic input (see Figure 1 for the full list of traits and Naim et al. (2018) for their description).
74
+
75
+ § 3.2 LANGUAGE MODELING AND PARAGRAPH REPRESENTATION
76
+
77
+ As shown in Table 1, we build the linguistic representations for our experiments by using the output of: a) four static neural language models (word2vec (Mikolov et al., 2014), fastText-wiki, fastText-crawl (Mikolov et al., 2018), gloVe (Pennington et al., 2014)), and b) four different combinations of BERT embeddings (Devlin et al., 2019). We follow Devlin et al. (2019) to select and combine the best four output layers of a BERT-base model.
78
+
79
+ To represent the entire paragraph, we take the average and the sum of the embeddings of each word in it and produce one vector representation for each paragraph. This approach was inspired by Arora et al. (2017), who reported that this method outperformed more sophisticated approaches for sentence representations by about ${10} - {30}\%$ . We expand this approach to paragraph level and create one numerical representation for each paragraph.
80
+
81
+ § 3.3 EXPERIMENTAL SETUP
82
+
83
+ We treat our task as a regression problem, with the 129 target scores being continuous numbers averaged 130
84
+
85
+ among the nine human raters. In this work, we 131 perform two experiments and compare the use of information embedded in different neural language model representations alone and in combination
86
+
87
+ with prosodic information against manually crafted 135 features (baseline).
88
+
89
+ Baseline As baseline we take the best perfor- 137 mance from Naim et al. (2018), who uses handpicked features from three modalities (linguistic, prosodic, and facial) as input parameters of two regression models (Lasso Regression and Support
90
+
91
+ Vector Regression). Their linguistic features, 23 142 of which are obtained from the software LIWC
92
+
93
+ (Tausczik and Pennebaker, 2010), include features 144 like content word categories, positive/negative emotion words, or function word categories. Their prosodic features include information such as fundamental frequency (F0) and intensity. Their facial
94
+
95
+ features include information regarding movements 149 of eyebrows and lips, nods and head-shakes. In all the subsequent comparisons, the baseline results are the ones obtained by combining features from all three modalities.
96
+
97
+ Our experiments We perform two experiments. In Experiment 1, only linguistic information - in the
98
+
99
+ form of paragraph embeddings- is used as features 156 for the regression models. In Experiment 2, the linguistic information is combined with the prosodic information provided by Naim et al. (2018) to probe for potential improvement in the model performance while providing multimodal information to the system. We exclude facial features from our
100
+
101
+ experiments because, in the original study, the im- 163 provement in prediction results obtained by adding facial features was minimal and limited to certain features such as friendliness or excitement (See Figure 5 in Naim et al. (2018)). This is probably due
102
+
103
+ to the fact that visual features were automatically 168 extracted from the video recordings and, consequently, are extremely noisy. For both experiments, we use Lasso Regression and Support Vector Regression for comparisons of the prediction results against Naim et al. (2018). Due to the small size of the dataset provided, we consider these models the
104
+
105
+ only valid options to obtain reliable results. 175
106
+
107
+ Setup The prediction experiments are performed using the sklearn library in Python. A five-fold cross-validation is performed to avoid overfitting. Pearson's correlation coefficient between human-generated and machine-generated scores is used as our evaluation metric as in Naim et al. (2018). The grid search algorithm is used to tune hyperparame-ters that elicit better results for the majority of the traits.
108
+
109
+ Name        Type  Description and Trained Corpora
+ word2vec    S     word embeddings produced by word2vec (Google News)
+ gloVe       S     word embeddings produced by gloVe (Wikipedia and Gigaword)
+ wiki        S     word embeddings produced by fastText (Wikipedia)
+ crawl       S     word embeddings produced by fastText (Common Crawl)
+ BERT-all    C     row-wise sum of the weights from all 12 Transformer output layers
+ BERT-s21    C     the weights from the second-to-last Transformer output layer
+ BERT-4sum   C     row-wise sum of the weights from the last four Transformer output layers
+ BERT-4cat   C     row-wise concatenation of the weights from the last four Transformer output layers
138
+
139
+ Table 1: Overview of the 8 word embedding types used in our experiments. The top four embeddings are static embeddings (S) and the bottom four embeddings are contextualized embeddings (C). Finally, each word embedding is either summed or averaged across paragraphs resulting in a total of 16 representations.
140
+
141
+ § 4 RESULTS
142
+
143
+ § 4.1 EXPERIMENT 1
144
+
145
+ After comparing the performance of the 16 possible paragraph representations as predictors of the two regression methods (Lasso and SVR), we find very similar and consistent results. ${}^{1}$ Because of limited space and for clarity, in the following sections we only report the results from our best combination of models and regression methods: Lasso regression on word2vec (summed) from the static models and BERT-all (summed) from the contextualized models.
146
+
147
+ As shown in Figure 1, with the exception of Excited, recommendHire, and noFillerWords, our paragraph-based language models outperform the baseline approach, with the margin varying per trait. We perform a pairwise t-test to statistically compare the average performance (in the form of correlation coefficients) of our models against the baseline. The t-test analysis shows that word2vec ($\mathrm{M} = 0.71 \pm 0.07$) significantly outperforms the baseline approach ($\mathrm{M} = 0.57 \pm 0.16$, p-value $< 0.007$). Moreover, as indicated by the large reduction in the standard deviation, the neural models obtain more even performance across the predicted individual traits than the baseline (SD baseline $= 0.16$ vs. word2vec $= 0.07$, BERT-all $= 0.10$). This indicates that our models are robust on a wider range of traits than the feature engineering approach. Our static models show significantly better results than the baseline model, which especially struggles with traits like notStressed or eyeContact even though it leverages information from the prosodic and facial modalities. BERT-all also leads to a slight yet non-significant improvement ($\mathrm{M} = 0.66 \pm 0.10$, $\mathrm{p} = 0.07$) over the baseline, although it does not outperform word2vec ($\mathrm{p} = 0.10$). It is worth noting that neural models based entirely on lexical cues significantly outperform a model that combines features extracted from three different modalities. Particularly interesting traits are notStressed, eyeContact, calm, authentic, smiled, focused, and paused, which show highly improved results compared to the baseline even though, intuitively, making judgments on such traits should require more than just textual information.
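For concreteness, the comparison described above can be sketched as a paired t-test over the per-trait correlation coefficients (our reading of the "pairwise t-test"). The arrays below are placeholders, and the 0.007 threshold is the corrected alpha reported in the footnote, not a value computed here.

```python
# Sketch of the model-vs-baseline comparison over per-trait Pearson's r values.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
# Placeholder per-trait correlations (16 traits); the real values come from Experiment 1.
baseline_r = rng.uniform(0.3, 0.8, size=16)
word2vec_r = baseline_r + rng.normal(0.1, 0.05, size=16)

t_stat, p_value = ttest_rel(word2vec_r, baseline_r)  # paired across the same traits
alpha = 0.007  # alpha-corrected significance threshold used in the paper
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value <= alpha}")
```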
+
+ § 4.2 EXPERIMENT 2
+
+ In Experiment 2 we test how adding prosodic information affects the overall model performance. We combine linguistic and prosodic inputs by concatenating the corresponding two vector representations. Adding prosodic features to word2vec (word2vec+pro; $\mathrm{M} = 0.76 \pm 0.07$) leads to a slight but non-significant improvement ($\mathrm{p} = 0.10$), whereas adding prosodic features to BERT-all (BERT-all+pro; $\mathrm{M} = 0.77 \pm 0.08$) leads to a significant improvement ($\mathrm{p} < 0.007$). Furthermore, BERT-all+pro does not outperform the language-only representation from word2vec ($\mathrm{p} = 0.06$). This last result indicates that using textual representations obtained by pre-trained word2vec alone is comparable to using BERT representations together with the prosodic features (see Figure 2 for statistical significance between the different groups). A possible interpretation of these results is that, even though prosody in general plays a contributing role in predicting behavioral traits, its effect becomes more relevant when the linguistic representation alone is not sufficient to perform a specific task and requires external supporting material.
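The multimodal combination amounts to a simple feature-level concatenation before regression. The sketch below illustrates this under placeholder data; the array names, dimensionalities, and the Lasso alpha are assumptions rather than the authors' settings.

```python
# Sketch: combining linguistic and prosodic inputs by concatenating their vectors.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
n = 60
text_emb = rng.normal(size=(n, 300))       # placeholder paragraph embeddings
prosodic_feats = rng.normal(size=(n, 24))  # placeholder prosodic features per interview
y = text_emb[:, 0] + prosodic_feats[:, 0] + 0.5 * rng.normal(size=n)  # placeholder ratings

# Multimodal input: concatenate the two vector representations column-wise.
X_multimodal = np.concatenate([text_emb, prosodic_feats], axis=1)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
preds = cross_val_predict(Lasso(alpha=0.01, max_iter=10000), X_multimodal, y, cv=cv)
print(round(pearsonr(y, preds)[0], 3))
```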
+
+ ${}^{1}$ The full results for the 8 word embedding types, summed and averaged, can be found at: http://osf.io/6gzyq/?view_only=700678d4764e4feba545c0dddb0df6f5
+
+ ${}^{2}$ To attenuate the problem of multiple comparisons, all p-values have been alpha-corrected: ** = p-value $\leq 0.007$, *** = p-value $\leq 0.0007$.
+
+ Figure 1: Experiment 1 - Pearson's correlation coefficients for each trait predicted by a Lasso Regression model using: manual features from Naim et al. (2018) (baseline; black circle), word2vec (red triangle), and BERT-all (blue square).
+
+ § 5 CONCLUSION
+
+ Our study shows that neural embeddings generally outperform manually elicited features from multiple modalities in a task that evaluates human performance, including traits that are not easily measurable via shallow access to text. Compared to the feature engineering approach previously adopted, the use of pre-trained embeddings clearly constitutes a step forward in guaranteeing replicability and in reducing implementation issues. Moreover, we show that we can successfully build paragraph-level representations by combining the embeddings of individual words and still obtain mid-to-high correlations with human judgments for all 16 traits (especially with word2vec). Our approach also performs well on a relatively small dataset, which is valuable given that for many tasks a large amount of data is simply not available or difficult to collect. Finally, we observe that the addition of prosodic features improves the prediction performance even further, especially for models with a lower performance in the language-only setup. This model behavior has an interesting similarity with the way humans process and understand language: when not enough linguistic cues are available at the lexical-semantic level, additional extra-linguistic material is required to successfully process the information provided (Zhang et al., 2021).
+
+ Figure 2: Boxplots representing average Pearson's correlation coefficients across traits for different language modeling types (baseline from Naim et al. (2018) (black), word2vec (red), and BERT-all (blue)). word2vec+pro and BERT-all+pro show the average results after combining linguistic and prosodic information. The lines indicate the medians and the dots the mean values across all predicted traits. We provide the significant results from the pairwise t-tests (ns = non-significant, ** = p $\leq 0.007$; *** = p $\leq 0.0007$).