Students' Acceptance of Technology-Mediated Teaching - How It Was Influenced During the COVID-19 Pandemic in 2020: A Study From Germany
Digital technologies have provided support in diverse policy, business, and societal application areas during the COVID-19 outbreak, such as pandemic management (Radanliev et al., 2020b), corporate communications (Camilleri, 2020), analysis of research data (Radanliev et al., 2020a), and education (Crawford et al., 2020). COVID-19 started as a global infectious disease in the spring of 2020, but the necessary measures to control the virus went beyond treatment and were also directed against its spread. Thus, for months, all interpersonal relationships were characterized by social distancing, and the pandemic raised not only medical but also social, economic and technological issues, among others. Higher education was one domain that the pandemic affected radically (Nuere and de Miguel, 2020; Watermeyer et al., 2020). During the worldwide lockdown, higher educational institutions had to immediately switch their activities from the classroom and the campus to a virtual space, which was the only alternative to a complete incapacity to act (Crawford et al., 2020; Kamarianos et al., 2020; Karalis and Raikou, 2020; Owusu-Fordjour et al., 2020; Shah et al., 2020).
University students represent a generation of digital natives for whom this steady switch from the real to the virtual world should not pose any operational challenge (Carlson, 2005; Berk, 2009; Jones et al., 2010). However, research indicates that students show differences according to discipline, such as subject matter (Biglan, 1973; Neumann, 2001) or facets of digital literacy and competency (Nelson et al., 2011), which should be taken into consideration when developing digital learning environments and approaches. The issue of whether and how teaching and learning differ across disciplines has, however, long been neglected in academic discourse (Neumann, 2001). Furthermore, as in any field, the successful introduction of technology into existing processes - such as occurred in the COVID-19 pandemic during the spring-summer 2020 semester (the so-called COVID-19 semester) - can only be guaranteed if teachers and students show or develop appropriate attitudes, beliefs, behaviors and habits (Al-alak and Alnawas, 2011; Al-Harbi, 2011).
Starting from the circumstances of the pandemic - a rapid transition to fully technology-mediated teaching for students taking different subjects, with no alternative, accompanied by several months of social isolation - in this paper, we ask:
RQ1: Does students' acceptance of completely technology-mediated teaching differ depending on the discipline of study?
RQ2: Did students' acceptance of completely technology-mediated teaching change over time during the COVID-19 semester?
To address these research questions, we empirically examine the acceptance of technology-mediated teaching by students during the COVID-19 semester in the spring-summer of 2020. We follow the suggestion of Neumann (2001) that "the strong influence of disciplines on [...] students' learning" creates the need for "disciplines to be subjected to greater systematic study, especially regarding their effect on the quality of teaching and learning in higher education," and present, analyze and discuss the collected data from 875 responses gathered from students of two disciplines (information systems [IS] and music and arts [M&A]) at four points in time.
For our empirical investigation, we apply an extended version of the Technology Acceptance Model (TAM).
Technology acceptance is a main topic in information systems (IS) research, and the TAM is a widely used approach to investigating a subject's attitude and adoption behavior, inter alia in the university context (Venkatesh and Davis, 2000; Lee et al., 2005; Pituch and Lee, 2006; Al-Azawei et al., 2017). For the purpose of this study, the model allows us to investigate the acceptance of technology-mediated teaching, especially regarding certain aspects (usefulness, ease of use and enjoyment) that are relevant for students. Our goal is to understand not only whether students accept technology-mediated teaching but also which key aspects are decisive for the future design of technology-mediated teaching environments. For this reason, we apply the TAM, look beyond the model at the research on the advantages and disadvantages of technology-mediated teaching, and extend the TAM with three new variables in order to analyze the construct of perceived usefulness for students during the COVID-19 semester in more detail and depth.
This paper is organized as follows: In Section "Theoretical Framework," we discuss the theoretical foundations of our investigation. The design and the procedure of the study, as well as the measures and data analysis, are presented in Section "Materials and Methods." The presentation of the results is the focus of Section "Results." We discuss the results of the analysis in Section "Discussion of the Results" and provide implications for teaching practice and organization, educational technology, and research in Section "Implications for Teaching Practice, Educational Technology and Further Research." We conclude this paper in Section "Conclusion" with a short summary, limitations of the study, and remarks on future studies.
The TAM is one of the most widely investigated and applied models of technology acceptance. Perceived usefulness (PU) and perceived ease of use (PEOU) are the two decisive variables for a person's attitude (ATT) toward a used technology, which in turn affects actual system use. PU depicts a person's subjective sensation that the application of a certain technology improves individual work performance, while PEOU measures a person's perception of how much effort the usage of the new technology requires. Both variables are influenced by diverse external variables, such as job relevance, subjective norm or output quality (Venkatesh and Davis, 2000). Davis et al. (1989) adjusted the model by adding a person's behavioral intention (BI) as a mediator between ATT and actual system use. Table 1 shows an overview of the research on the TAM in the e-learning context. For the e-learning context, Lee et al. (2005) added perceived enjoyment (PE) as an intrinsic motivator, in addition to PU and PEOU, to the TAM constructs. Šumak et al. (2011) conducted a meta-analysis and found that the TAM was the most applied model in e-learning and that the size of the causal effects between individual TAM-related factors depends on the type of user and the type of e-learning technology.
For our study, we adapt the research model of Lee et al. (2005), as presented in Figure 1. Consistent with the findings of prior studies (cf. Davis et al., 1989; Lee et al., 2005), we expect the relations among the constructs to exhibit significant strength (for the list of the hypotheses, cf. Attachment 1).
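The hypothesized relations among the constructs can be read as a small set of regression paths. The following Python sketch is illustrative only (the authors report their analysis in R, and the full hypothesis list is given in Attachment 1 of the paper); the construct column names and the exact mapping of hypotheses to formulas are assumptions.

```python
# Illustrative sketch (not the authors' code) of the core paths of the extended TAM
# adapted from Lee et al. (2005): PEOU -> PU, {PU, PEOU, PE} -> ATT, {ATT, PU} -> BI.
# Column names are hypothetical construct scores per respondent.
import pandas as pd
import statsmodels.formula.api as smf

TAM_PATHS = {
    "PU":  "PU ~ PEOU",             # H1: perceived ease of use -> perceived usefulness
    "ATT": "ATT ~ PU + PEOU + PE",  # attitude driven by usefulness, ease of use, enjoyment
    "BI":  "BI ~ ATT + PU",         # behavioral intention
}

def fit_tam_paths(df: pd.DataFrame) -> dict:
    """Fit each hypothesized path with OLS and return the fitted results by outcome."""
    return {name: smf.ols(formula, data=df).fit() for name, formula in TAM_PATHS.items()}

# Example use: results = fit_tam_paths(construct_scores); print(results["ATT"].summary())
```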
Table 1 (continued). Mohammadi (2015): quality features, perceived ease of use, and perceived usefulness in relation to users' intentions, satisfaction, and usability toward the use of e-learning; SEM, path analysis. "Intention" and "user satisfaction" both had positive effects on actual use of e-learning; "system quality" and "information quality" were found to be the primary factors driving users' intentions and satisfaction toward the use of e-learning; "perceived usefulness" mediated the relationship between ease of use and users' intentions. Al-Azawei et al. (2017): e-learning self-efficacy, perceived satisfaction, learning styles, perceived usefulness, perceived ease of use, and intention to use; PLS SEM. Highlights the integration of perceived satisfaction and technology acceptance in accordance with psychological traits and learner beliefs; the model achieved an acceptable fit and successfully integrated intention to use (ITU) and perceived satisfaction.
However, in our discussion, we take into account that the TAM in e-learning is usually researched in cases where blended learning or e-learning is an additional part of face-to-face teaching, whereas in the COVID-19 semester, virtual teaching and learning was the only channel used to convey content. We examine the measurement model and the structural model and then compare the results over time and for the two student populations (IS and M&A).
Acceptance Over Time
Venkatesh and Davis (2000) tested an extended TAM (TAM2) in four longitudinal studies and introduced experience as a relevant influencing factor that is important for understanding the changes in PU over time, whereby experience in general reflects an opportunity to use a technology and is typically operationalized as the passage of time from an individual's initial use of a technology. Based on the TAM, Venkatesh et al. (2003) developed UTAUT and tested it in a longitudinal field study. Venkatesh et al. (2012) introduced three new constructs to UTAUT, measured users' experience and investigated its influence on the users' acceptance and habits. Davis and Wong (2007) applied TAM2 in an educational context and measured users' experience in relation to the actual student usage (system use) of an e-learning system. They pointed out the complex underlying interactions during e-learning adoption processes and recommended a longitudinal design as appropriate for future studies. Pynoo et al. (2011) applied UTAUT in the educational context to investigate the acceptance of digital learning environments and found differences over time; they also pointed out that the usefulness of digital learning environments should be demonstrated to maximize their use.
In contrast to other studies, our study does not focus on a specific technology but on the experience with technology-mediated teaching in the COVID-19 context. We expect and show within this context that students gain experience during the semester, which leads to measurable changes in their acceptance.
Hu et al. (1999) define a set of user characteristics as one factor that can be used to explain, predict and effectively manage technology acceptance. Biglan (1973) points out the characteristics of academic subject matter, according to which the strongest differences can be identified between the "hard" sciences (e.g., engineering sciences) and the "soft" sciences (e.g., social sciences, educational sciences, and humanities). Vo et al. (2020) investigated the effects of blended learning on student learning performance and compared the output of students in hard and soft disciplines.
According to their study, students in soft disciplines perform better than their peers in hard disciplines when courses are designed in the blended learning modality. Cameron (2017) identified differences in student engagement between 'humanities' students (e.g., M&A) and 'professional fields' students (such as IS). Additionally, teaching experiences are more highly regarded by humanities students than by those in the hard sciences (Cashin and Downey, 1995). In the context of our study, this is expected to lead to differentiated results when lecturers had to quickly change toward virtual formats based on their diverse levels of experience with technologies. Pike et al. (2012) found that students' academic majors are significantly related to levels of engagement, which is influenced by their acceptance and learning outcomes. Students of enterprising disciplines are more engaged than students of artistic disciplines. Students of soft applied knowledge (e.g., M&A) need more intensive practical training than those from disciplines of hard applied knowledge (e.g., IS) (Neumann et al., 2002). This might be a major disadvantage for specific groups when virtual teaching is applied for learning.
According to the research on the learning characteristics and the learning styles of the Net generation (born after 1980) and the Z generation (millennials), university students at the time of the COVID-19 crisis are digital natives, who can be described as tending toward independence and autonomy in their learning styles, technology savvy, interested in communicating visually and in multimedia, and able to move seamlessly between real and virtual worlds (Carlson, 2005; Berk, 2009; Jones et al., 2010). Despite this, it is also characteristic of this generation to view class as a social opportunity and to crave face-to-face social interaction, whereby relationships, in-person conversation, interaction and collaboration are high priorities (Howe, 2000; Carlson, 2005; Ramaley and Zia, 2005). Zheng et al. (2017) investigated low- and high-performing students in an e-learning environment and identified a significant difference in the students' perceived usefulness. Xu and Jaggars (2014) found that the typical student had more difficulty succeeding in online courses than in face-to-face courses (compare also Nelson et al., 2011); they also noted a variation across subject areas in terms of online course effectiveness.
To the best of our knowledge, to date, no research has put the TAM in the context of the specific characteristics of a study's subjects. This is where our study can make a contribution, as we have examined two different subject groups: M&A students and IS students. In summary, we expect differences in the students' attitudes toward virtual learning according to their academic subject.
The benefits and the disadvantages of technology-mediated teaching and learning became a focal point for university research in the context of the COVID-19 crisis (Kamarianos et al., 2020; Karalis and Raikou, 2020; Owusu-Fordjour et al., 2020; Shah et al., 2020). However, this topic is not new but one of the central research focuses in the context of learning in digital learning environments. Davis and Wong (2007) define e-learning as a global phenomenon for organizations and educational institutions, aiming to enhance students' learning experience and effectiveness in terms of the learning outcome.
The benefits of e-learning have been discussed in recent research, but so far, there is no consensus on whether the outputs of e-learning are more effective than those of traditional learning formats (Derouin et al., 2005). The most frequently stated benefits are cost efficiency, flexibility (in terms of time and place), saving the time needed to travel to the learning location, easy access to learning materials, the usefulness of learning materials over a longer period (Welsh et al., 2003; Brown et al., 2006; Hameed et al., 2008; Jefferson and Arnold, 2009; Hill and Wouters, 2010; Al-Qahtani and Higgins, 2013; Becker et al., 2013), and the potential to offer personalized learning according to the learner's specific needs (Berge and Giles, 2006).
On the negative side, technology-mediated learning lacks direct social interaction and a personal touch and has the potential to socially isolate the learner or at least to negatively influence social aspects of learning processes (Gimson and Bell, 2007; Hameed et al., 2008; Al-Qahtani and Higgins, 2013; Becker et al., 2013). Socially isolated learning can negatively influence the development of learners' communication skills and change communication conditions, including the lack of support and feedback through non-verbal cues or by observing the interactions of others, as well as the lack of social and cognitive presence and teacher involvement (Al-Qahtani and Higgins, 2013). Furthermore, learners are insecure about their learning in the absence of regular contact with their teachers (Al-Qahtani and Higgins, 2013). Technology-mediated teaching and learning requires self-motivation, time management, a focused approach, and self-directed learning and organization skills from learners (Hameed et al., 2008; Jefferson and Arnold, 2009). According to Al-Qahtani and Higgins (2013), these requirements arise partly from the conditions of social isolation and lack of direct social interaction, which means that the learner must have a relatively strong motivation to mitigate this effect.
During the lockdown of the universities, the expectation was that most young students would not have any difficulty in switching to online teaching, which has indeed been confirmed by recent findings (e.g., Kamarianos et al., 2020). Shah et al. (2020) point out the numerous and immediately apparent benefits of transferring learning to the virtual world: free exchange of information, access to lectures and presentations at conferences that used to involve considerable travel costs, webinars and online discussions, reduction of the time inefficiency associated with travel, and increased commitment. Owusu-Fordjour et al. (2020) identify negative effects; for example, learning at home can be ineffective because of many distractions, the lack of an adequate learning environment, or the lack of contact with the teacher. Fewer problems than expected were found in switching to online teaching; however, on the negative side, technical obstacles, a lack of communication and cooperation, difficulty concentrating, too much screen time, a lack of logistical infrastructure, the absence of physical presence, a higher workload, the loss of lab courses, and the general restriction of social contact were pointed out as important during the crisis.
Positive characteristics include easy participation in class, time savings, the comfort of home, the possibility to learn new competences, and flexibility in attendance and learning.
We conducted a longitudinal study in four German universities using an online survey to capture students' perceptions of technology-mediated teaching throughout the COVID-19 semester in 2020. Participants in the study were students from selected courses and programs who were invited to voluntarily take part in the survey. To identify potential differences between disciplines, we gathered responses from different subjects being taught. From the beginning, we used defined e-mail distribution lists, and the group of potential respondents remained the same throughout the study. Students were asked to indicate their agreement with the respective statements in a survey administered via LimeSurvey. One survey was administered at the beginning of the semester in Germany (April), two surveys during the semester (May and June), and a final survey at the end of the semester (July 2020).
The study focused on two main theoretical constructs: (1) (technology) acceptance of e-learning (see Section "Technology Acceptance Model") and (2) the benefits and disadvantages of e-learning compared with face-to-face or blended learning (see Section "Benefits and Disadvantages of E-Learning"). We relied on pre-tested scales when possible; however, we had to adapt these scales for our study. Furthermore, we collected demographic data and asked open-ended questions to gain deeper insights into students' perceptions over the semester.
Concerning the first group of acceptance measurements, we used related items from former studies in a comparable context. We adopted the measurement scales for PU, PEOU, PE, ATT, and BI from Lee et al. (2005), as the authors had already pre-tested these constructs for e-learning activities and proven their applicability. As in the original constructs, the items were measured using a 7-point Likert scale. Slight modifications were made to fit the items to the investigated e-learning context.
To address the benefits and disadvantages of e-learning, the identified factors (see Section "Benefits and Disadvantages of E-Learning") were operationalized through a combination of previous studies and the authors' assessment. As highlighted in the previous chapter, for time flexibility (TF), learning flexibility (LF), and social isolation (SI), the theoretical literature provides several important insights into the factors behind the advantages and disadvantages of technology-mediated teaching environments. Table 2 provides an overview of the survey constructs and related measurement items, as well as their sources of adoption.
To identify differences in students' perceptions over time, we surveyed the same student populations four times during the semester. At University 1, we gathered responses from master's students in IS, while at Universities 2, 3, and 4, we surveyed participants involved in courses that are part of the music and arts curriculum (bachelor, M&A).
We sent a link to the questionnaire throughout the semester and gathered 875 responses, of which 246 (28%) came from IS students and 629 (72%) from M&A students. We gathered 147 responses in April, 319 in May, 269 in June, and 128 in July. Of the responses, 59% (513) were received from women, 35% (310) came from men, and the remaining 62 (6%) specified another sex or provided no information. Data preparation and analysis were conducted in R with the Stats package, version 3.6.1.
Incorrect encodings and values were filtered manually. Throughout the survey, no questions were designated as mandatory. For model testing, only constructs for which all related items were answered were used. Regression model analysis was used to test the individual models. Regression models were estimated using the ordinary least squares (OLS) method. The survey constructs were calculated based on the mean values of the respective items. Given the focus of our study, we employed the students' subject as a control variable in all model constructs (see the section on the differences between student groups). A binary dummy variable indicating the M&A group was used.
Table 3 provides descriptive details for the model constructs. The constructs' average values varied. The respondents assessed the ease of using technology-mediated teaching and related technologies as relatively high (avg. 5.2, SD = 1.19) and simultaneously stated that learning with digital technologies did not necessarily lead to completely socially isolated work (avg. 3.47, SD = 1.57). The students were almost in agreement regarding the benefits of learning flexibility (avg. 4.92) and time flexibility (avg. 5.01).
A comparison of the student groups revealed that uniformity within the information systems group was greater for almost all of the respective constructs (the standard deviation was lower). Moreover, we observed that agreement with the central model constructs was higher for the group of IS students. Details are discussed in the following sections.
To ensure the validity of the measurement constructs, two approaches were used. For the new items regarding the benefits and disadvantages of technology-mediated teaching and learning, we first employed an exploratory factor analysis (EFA) to assess their suitability to measure the related aspects. Apart from the developed items, we assessed the internal validity for all constructs in the model.
Exploratory factor analysis was applied to assess the validity and reliability of the constructs related to technology-mediated teaching and learning. A principal component factor analysis with maximum likelihood estimation and rotation was performed on the collected items. The related nine items were employed in the factor analysis, resulting in three constructs. Factor 1 (time flexibility) comprised two items reported on a 7-point Likert scale that explained 30% of the variance, with factor loadings from 0.652 to 0.997. Factor 2 (learning flexibility) comprised two items (instead of the three expected; compare Table 4 for the item deleted after the EFA) reported on a 7-point Likert scale that explained 12% of the variance, with factor loadings from 0.573 to 0.678. Factor 3 (social isolation) comprised three items reported on a 7-point Likert scale that explained 26% of the variance, with factor loadings from 0.758 to 0.929. Following the results of the EFA, the factors social isolation and time flexibility matched our developed items for each construct. Concerning learning flexibility, the item related to the video lectures (cf. Table 4) did not load to a significant extent (0.351) and was dropped accordingly. Lastly, the internal validity was assessed for all constructs. The established group of technology acceptance constructs was tested only for internal validity through Cronbach's alphas. Table 4 provides an overview of the survey constructs' internal validity and the survey items used. Apart from the PU and [...]
The overall results of the structural model test are shown in Figure 2.
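As a concrete illustration of the analysis steps described above (construct scores as item means, use of only fully answered constructs, Cronbach's alpha for internal validity, and OLS models with a binary M&A dummy), the following Python sketch shows one possible implementation. The authors conducted their analysis in R; the item column names and item counts per construct are hypothetical.

```python
# Illustrative sketch, not the authors' analysis code (which was written in R 3.6.1).
# Item column names such as "PU1".."PU3" are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

CONSTRUCT_ITEMS = {
    "PU": ["PU1", "PU2", "PU3"], "PEOU": ["PEOU1", "PEOU2", "PEOU3"],
    "PE": ["PE1", "PE2", "PE3"], "ATT": ["ATT1", "ATT2", "ATT3"],
    "BI": ["BI1", "BI2"], "TF": ["TF1", "TF2"], "LF": ["LF1", "LF2"],
    "SI": ["SI1", "SI2", "SI3"],
}

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = items.dropna()
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def score_constructs(responses: pd.DataFrame) -> pd.DataFrame:
    """Average the items of each construct; score only respondents who answered all items."""
    scores = pd.DataFrame(index=responses.index)
    for construct, cols in CONSTRUCT_ITEMS.items():
        complete = responses[cols].notna().all(axis=1)
        scores.loc[complete, construct] = responses.loc[complete, cols].mean(axis=1)
    return scores

# OLS with the subject of study as a binary dummy (1 = M&A, 0 = IS), as described above:
# scores["is_ma"] = (responses["group"] == "M&A").astype(int)
# model = smf.ols("ATT ~ PU + PEOU + PE + is_ma", data=scores).fit()
```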
FIGURE 2 | TAM test results, including LF, TF, and SI as influencing variables. **Significant at the 0.01 level and ***significant at the 0.001 level.
The model accounts for 65% of the variance in ATT and 54% of the variance in BI. For all the model constructs, significant factors were identified with the survey data. Table 5 provides an overview of the hypotheses and the related results. With the exception of H1 (PEOU -> PU), all TAM hypotheses could be verified in our sample.
Two of the items in PU directly address the perception of the benefits or advantages of technology-mediated teaching. The third deals with the direct output of learning, which is related to its perceived effectiveness (cf. Table 4). Thus, we analyzed the data in view of the potential relations between the perceived benefits and disadvantages of technology-mediated teaching and PU. Based on our empirical results, we were interested in identifying the sentiments underlying students' perceptions of the usefulness of technology-mediated teaching. We therefore extended the TAM core model with the three new factors influencing PU, as presented in Figure 3. Furthermore, we conducted a regression analysis of PU over time, as illustrated in Table 6. The effects of TF, LF, and SI explained 34% of the variance in PU in the model test (Figure 3), as well as up to 35% of the variance in PU over time (Table 6), with a very low explanation rate in May, which was also the only month when SI had a significant effect on PU.
To identify the differences between the two student groups, a Kruskal-Wallis test was performed. As a non-parametric test, this approach allowed us to identify differences among our subsamples of different sizes. Overall, all central model constructs vary with the students' subject. Moreover, compared with IS students, M&A students generally have more negative perceptions of almost all model constructs.
The differences in the central model constructs were analyzed in terms of variations over time and between subject groups. Figure 3 shows the results for BI over time and between subject groups as generally higher for IS students and indicates further differences over time. For IS students, the analysis results reveal increased BI over time toward the end of the semester. For the M&A group, we found a similar increase in BI over time; a slight decline was identified at the end of the semester. The same tendency in development over time and in significant differences between the subject groups (see Table 7) was observed with regard to PU, PEOU, and PE (visualized in Figures 4-6, respectively).
As shown in Table 8, the model explains up to 59% of the variance in BI for IS students and up to 52% for M&A students. The effect of PU was not significant for either student group at the beginning and at the end of the semester and remained consistently non-significant for IS students. For M&A students, PU was highly significant in the middle of the semester. The effect of ATT was significant at the beginning and in the middle of the semester (June) for IS students but weakened over time. For M&A students, the effect of ATT also varied during the semester, being significant in the second month and at the end of the semester. The strongest and most consistently significant effect was found for PE in both groups.
In this study, we identified differences in the perceptions of the investigated subject groups and over time.
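To make the group and time comparisons above concrete, the following Python sketch shows how the per-wave regression of PU on TF, LF, and SI and the Kruskal-Wallis comparison of the unequal-sized IS and M&A subsamples could be implemented. It is illustrative only (the authors used R); the column names "wave" and "group" are assumptions.

```python
# Illustrative sketch of the reported comparisons: PU ~ TF + LF + SI per survey wave,
# and a Kruskal-Wallis H-test of each construct between the IS and M&A groups.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import kruskal

def pu_model_by_wave(scores: pd.DataFrame) -> dict:
    """Fit PU ~ TF + LF + SI separately for each survey wave (April .. July)."""
    return {
        wave: smf.ols("PU ~ TF + LF + SI", data=sub).fit()
        for wave, sub in scores.groupby("wave")
    }

def group_differences(scores: pd.DataFrame, constructs=("PU", "PEOU", "PE", "ATT", "BI")):
    """Kruskal-Wallis test per construct for the IS versus M&A subsamples."""
    results = {}
    for construct in constructs:
        is_vals = scores.loc[scores["group"] == "IS", construct].dropna()
        ma_vals = scores.loc[scores["group"] == "M&A", construct].dropna()
        results[construct] = kruskal(is_vals, ma_vals)
    return results

# Example use: pu_model_by_wave(scores)["June"].rsquared; group_differences(scores)["BI"].pvalue
```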
The first research question (RQ1) could be answered positively: for all constructs of our model, the results show significant differences in the acceptance of completely technology-mediated teaching depending on the discipline of study. In general, for all constructs, M&A students answered more negatively than IS students, which leads to the conclusion that they will not accept (completely) technology-mediated teaching to the same extent as IS students. This supports, inter alia, our theoretical findings, especially those of Pike et al. (2012), which emphasized that students' academic majors are significantly related to levels of engagement, which is influenced by their acceptance and learning outcomes. IS students in our study furthermore enjoyed technology-mediated teaching more, although social isolation was rated most negatively by both groups. Neumann (2001) emphasizes a strong influence of disciplines on students' learning and behavior, and Nelson et al. (2011) point out differences in facets of digital literacy and competency. Our findings empirically support disciplinary differences in the acceptance of technology-mediated teaching between M&A and IS students. We assume the higher acceptance among IS students to be a result of the appropriateness of the medium for the subject's content and the confidence that the content of their lectures can be conveyed technologically, as well as of a general openness toward technology-mediated teaching. The higher acceptance of the virtual classroom format might also be a result of the general tendency of people to adopt familiar formats more easily (cf. Janssen et al., 2009); that is, in the present case, IS students were more familiar with technologies and virtual environments than M&A students, which possibly influenced the corresponding acceptance.
To answer the second research question (RQ2), we analyzed differences over time and between the two groups. The results show that this research question was also answered positively: the students' attitudes toward completely technology-mediated teaching changed over time during the COVID-19 semester. Especially in the last month of the semester, a decline in all constructs was apparent for the M&A group. One reason for this finding may be that at this time, the loosening of social isolation had begun and face-to-face teaching was possible again, clearly demonstrating its advantages for this group compared with completely technology-mediated teaching. A reason for M&A students' perception of technology-mediated teaching as much less suitable for conveying their learning content could be the lack of opportunities for laboratories and studios, audience response (in music and theater), and practical work in technology-mediated teaching, which are a main focus of their curriculum. We assume that the type of knowledge imparted in the curriculum is also responsible for these differences.
Further, in the context of RQ2, we measured the effect of PE, ATT, and PU on BI separately for both groups using a regression analysis. Even though neither group rated PU particularly high, the usefulness of technology-mediated teaching did not significantly affect behavioral intention in most months of the survey; PU seemed to be important only for M&A students in the middle of the semester. This could be explained by the fact that, at this time, M&A students had gained enough experience to recognize that the contents of their studies cannot be transferred properly enough in a technology-mediated teaching environment.
The intrinsic factor, enjoyment, however, has decisive importance, with a strong influencing effect remaining for both groups. Further, the results show that PE was much lower for M&A students. This should be an object of further investigation focusing on the variables that influence the enjoyment of technology-mediated teaching. To address the special situation during this empirical study, the attitude toward technology-mediated teaching was placed in close relation to the COVID-19 crisis. The effects on BI over time should therefore also be discussed in the context of the crisis. In the middle of the semester, the BI of both groups was significantly affected by the students' perception of the influence of the crisis on the future digitalization of learning processes as well as on the current semester. At the end of the semester, however, IS and M&A students' responses developed in different directions: the experiences of IS students reduced this significant effect, whereas for the second group, the effect remained significant at the end. For this group, the lessons learned during the crisis are also more negative.
Besides the results related to RQ1 and RQ2, we provided results with regard to the TAM, which are in alignment with the results of our main underlying theoretical basis, Lee et al. (2005), as well as Venkatesh and Davis (2000). Significant effects could be confirmed for the TAM core constructs. In contrast to prior research, we identified an inverse effect of PEOU on PU. This might be explained by the fact that the study was conducted during the COVID-19 situation, participation in technology-mediated teaching was not voluntary, and we did not ask about the use of a specific technology but rather about technology-mediated teaching in general.
We identified a change of acceptance over time. Shehzadi et al. (2020) investigated the influence of e-learning on the satisfaction of Pakistani public and private university students in the context of the pandemic. Therein, a positive dependence of students' satisfaction on information and communication technologies, e-service quality, and e-information quality as influencing factors of students' e-learning experience was identified. This implies that specific technologies, the service quality (for example, technological smoothness and a high degree of usability), as well as teacher and teaching characteristics might also be decisive factors for consideration when (re)designing technology-mediated teaching and learning and, thus, addressing student acceptance.
We extended prior research by introducing the new variables LF, TF, and SI to the TAM and showed how they influence the PU of technology-mediated teaching and learning. The possibilities to learn from home, save travel time, and access (video-recorded) lectures independently of time and place are universal benefits of technology-mediated teaching and learning that have gained importance under the special conditions of the lockdown period. Thus, it is not surprising that LF and TF were positively related to PU. The effect of LF was identified as significant in June (Table 6). TF had a strong effect during the whole lockdown period. Regarding the perceived disadvantages of technology-mediated teaching, our results show that SI had a surprisingly positive effect on PU. This could be explained by the situation of the complete lockdown, without alternatives for learning and direct exchange.
There is evidence that the willingness to perceive technology-mediated teaching and learning as equivalent to face-to-face teaching and learning is greatest when it is offered without alternatives (Mehra and Omidian, 2011). The impact of SI on PU was strongest in the second month of the lockdown. This may be the result of the overall phase characteristics: during the second month, it became clear that the crisis would last longer, but the frustration about the social isolation was not yet too great by comparison.
The results of this study on technology acceptance during the virtual COVID-19 semester in Germany are important in both the short and the long term. We point out three areas of implications: teaching practice and organization, educational technology, and research.
Our study was conducted in a situation of an immediate switch from physical presence to technology-mediated teaching. The extreme circumstances were a big challenge; however, they provided important evidence about technology-mediated teaching at universities. In the current course of the pandemic, the fall-winter 20/21 semester is equally or at least partially technology-mediated. In this respect, the findings can help improve teaching directly, especially regarding the differences in the perceptions of the subjects of study.
The differences between the student groups need to be taken into consideration by teachers when designing virtual teaching and learning environments and conducting teaching. For example, different formats, such as breakout sessions in smaller groups, could be used. Furthermore, specific sensitization to the advantages or necessity of the formats can be applied, or the degree of interactivity within the sessions adjusted. To this end, teachers should develop competencies not only regarding the use of technical tools but also new didactic and methodological skills. Further, the overlap of technological, pedagogical and content knowledge leads to new kinds of interrelated knowledge (Mishra and Koehler, 2006; Archambault and Crippen, 2009; Schmid et al., 2020), which are gaining importance in the context of teachers' education and professional development. The transfer of knowledge through teaching must not occur in such a way that a single technique implies innovation. It is much more challenging for lecturers to demonstrate their methodological and professional competencies through the use of media in the same way as in face-to-face teaching. The initial experiences during the COVID-19 lockdown have shown both possibilities and limitations. The students' direct feedback is all the more important to better exploit the potential of technology-mediated teaching in the future.
In the long term, not only direct teaching practices but also the organization of the teaching processes at the universities as a whole should be taken into consideration. Customized approaches, which differ in their respective shares of online and offline teaching and learning formats, should be considered for students of different subjects. Because, for example, IS students are more familiar with virtual environments, it is assumed that they are more likely to accept and manage the switch to fully virtual learning formats. By contrast, M&A students, who are generally assumed to be less familiar with virtual environments, may show less acceptance of related formats. Moreover, the appropriateness of virtual teaching and learning may also generally vary among subjects.
The acceptance of virtual learning formats should not be considered similar for all students simply based on their age or generation. We argue for a consideration of their familiarity and competence with related technologies as well as their technological affinity, which varies among subjects. Moreover, we surmise that personal interaction may not be fully substituted by virtual formats. Hybrid teaching forms seem to be the most promising for the future of learning and teaching at universities (Vladova et al., 2020). Therefore, administrative and organizational changes and a reorganization of (well-established) practices become necessary. These will involve adjustment and further development of the curriculum, a stable and trustworthy technological infrastructure, organization of the assessment of learning results, as well as the development of a new culture of technology-mediated teaching, including netiquette, behavioral norms, and standards.
The differences between the student groups clearly show that the use of technologies and the design of technology-mediated teaching offerings should address the specific needs of different study subjects. At the time of the study, communication platforms such as Zoom, Cisco Webex, or Big Blue Button were mainly used for teaching, as well as Moodle as a learning platform for the organization of the teaching process. Against this background, the direct user feedback in our study includes important hints for educational technology (EdTech) companies. Currently, these companies mostly focus on the development of learning courses for individual use, pointing out the role of artificial intelligence (AI) and learning analytics. However, the results show the immense importance of the differences in the field of study during the transfer of knowledge in an academic environment. This can be addressed by short- and long-term solutions and may lead to innovative concepts and products, whereby the role of the teacher remains central for the transfer of specific study content. However, students can acquire different content in a completely self-directed and self-organized way. The curricula of the two groups in our study can be used, among other things, to identify the subject-specific needs of the students.
In the long term, the effects of LF, TF, and SI should be empirically tested and investigated by further research in a COVID-19-neutral situation. Furthermore, the changes in the TAM constructs over time point to the influence of experience within the acceptance model in education. Thus, future research should investigate whether this experience can influence students' habits and, through this, their acceptance of face-to-face teaching. This is relevant for the phase of returning to direct face-to-face teaching after the crisis, but even more so in the long term as university teaching becomes increasingly technology-mediated.
We also identified implications for further research in the context of knowledge management. The results of the study indicate a relationship between the nature of the knowledge transferred during the teaching process and the acceptance of technology-mediated teaching. When the shift to a technology-mediated learning environment is considered, the nature of knowledge and how it is transferred comes to the forefront (Vladova et al., 2020). The knowledge management literature points out the critical distinction between tacit knowledge (person-bound) and explicit knowledge (not person-bound) (Polanyi, 1966).
Whereas explicit knowledge can be transferred in the context of communication processes with the help of numbers, pictures, or language, tacit knowledge is personal and context-specific (Nonaka and Takeuchi, 1995). Therefore, tacit knowledge is difficult to communicate (Nonaka and Takeuchi, 1995) and can be transferred only partly and through common application and practice. For example, Polanyi (1958, p. 92) posits: "Although the expert (...) can indicate their clues and formulate their maxims, they know many more things than they can tell, knowing them only in practice, as instrumental particulars, and not explicitly, as objects."
Next, during our data analysis, we found some implications for research on the topic of innovation diffusion (Rogers, 2010), as IS students can probably be described as early adopters and M&A students as the late majority. IS students can thus be used as a test audience as well as ambassadors for a new learning technology solution. Thus, they would have a trendsetting role within universities. Thanks to their high acceptance, new technologies can be tried out without fear of resistance, and their advantages can be recognized.
We also believe that our study could be of interest in the interdisciplinary research field, especially in the context of digitally mediated team, net, and project work. At this point, the experiences and needs of M&A students are especially important to explore. Experiments as well as surveys on these types of teamwork in the university context can provide necessary information on how technology-mediated teaching should be appropriately designed for this user group. This necessitates scientific collaboration between work psychologists, computer scientists and educators.
Although a study of this scale cannot be wholly representative of the entire higher education sector, it has provided views from two different disciplines, that is, M&A as well as IS, on the acceptance of technology-mediated teaching and learning at four universities in Germany. Motivated by the need to understand the underlying drivers of student adoption of digitally mediated learning during the COVID-19 semester, we applied the TAM in a longitudinal study and incorporated three new variables (LF, TF, and SI) influencing PU into the TAM. Furthermore, we identified differences between the subject groups regarding their perceived acceptance of digitally mediated teaching and showed the changes in BI over time for both student groups. We used a validated construct for acceptance. However, as we were aware of the specifics of the situation - social isolation and no alternative to the use of technology - we first tested the hypotheses using our sample.
Our study also has some limitations. First, it was conducted under the special circumstances of complete social isolation in every area of life, which has an influence on the results. Furthermore, we summarized the M&A group in the evaluation without consideration of the differences within it (e.g., music, theater, architecture, visual communication). Given the urgency and the circumstances of our empirical research context, we furthermore did not have the opportunity to directly examine the organizational situation at the participating universities. However, we included questions about digital platforms and tools, as well as open-ended questions about students' perceptions of the performance of their teachers. Thus, we addressed organizational and technical issues and their impact on student acceptance.
The answers to these questions are not the focus of this paper; however, they will help us to place the model in connection with the specific framework conditions at the universities and to analyze the answers in more depth.
In following up on our data analysis, our future research will especially address the changes on the individual level over time, further data collection in the current semester (fall-winter 20/21), and the analysis of the gathered qualitative data from the answers to the open-ended questions. These efforts will allow us to gain further information on students' perceptions of technology-mediated teaching during the semester.
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethical approval was not required for this study on human participants because it was an anonymous survey, and we used our own server for data administration. The participants provided their informed consent to participate in this study.
GV developed the idea for this empirical research and was involved in all steps of the study and the manuscript preparation. GV and AU prepared the instrument applied in the empirical study. Both prepared a large part of the introduction, theory, and discussion of the results. GV was mainly responsible for the implications, AU for the conclusion. BB carried out the data analysis and described its procedure and results. NG was actively involved in the implementation of the survey and was responsible for the internal review process. All authors contributed to the article and approved the submitted version.
This work has been partly funded by the Federal Ministry of Education and Research of Germany (BMBF) under grant no. 16DII127 ("Deutsches Internet-Institut").
Direct-from-specimen microbial growth inhibition spectrums under antibiotic exposure and comparison to conventional antimicrobial susceptibility testing
Introduction
Direct-from-specimen microbial growth inhibition assessment can assist in emergency preparedness and pre-hospital interventions by providing timely, patient-specific antimicrobial efficacy profiling information. The continued reliance on empirical therapy shows that the methods currently used are inadequate for informing initial treatment decisions in a timely manner [1]. Phenotypic antimicrobial efficacy profiling, in which clinical specimens are directly exposed to different antibiotic conditions, could provide critical information for the prescription of antibiotics within hours. The results of a phenotypic antimicrobial efficacy profile test, taken in conjunction with local antibiogram data, could guide the course of therapy to improve patient outcomes and slow the spread of antimicrobial resistance. We present a molecular test based on the transcriptional responses of causative bacteria to antibiotic exposure that can be performed directly from urine specimens. Quantification of group-specific or species-specific 16S rRNA growth sequences was used to provide rapid antimicrobial efficacy profiling results. Categorical agreement was assessed against reference AST methods according to CLSI guidelines.
Even though antibiotics do not directly affect the SARS-CoV-2 respiratory virus responsible for the COVID-19 pandemic, physicians are administering many more antibiotics than normal when treating COVID-19 patients [2]. The apparent surge in antibiotic use is reflected in the higher percentages of COVID-19 patients with severe conditions and pediatric patients (85% in a multicenter pediatric COVID-19 study [3]) receiving antibiotic therapies. The World Health Organization warned that this use of antibiotic therapy may lead to higher bacterial resistance rates and increase the burden of the pandemic [4]. A recent study by Zhou et al. [5] found that 15% of 191 hospitalized COVID-19 patients, as well as 50% of the 54 non-survivors, acquired bacterial infections. Therefore, a shorter time to rule out certain antibiotic options by detecting microbial growth under such conditions may provide physicians with valuable information before the availability of conventional AST results.
Generating curves that illustrate the microbial growth inhibition response to antibiotic exposure conditions across a range of microbial loads may provide a dynamic method for estimating antimicrobial efficacy that is much more rapid than the endpoint minimum inhibitory concentration (MIC) method used in conventional AST. Here, we present a method to quantify the 16S rRNA content of viable target pathogens in unprocessed specimens, such as urine, following exposure to various antibiotic concentrations in vitro (Fig 1). This method allows for interpretation of the antimicrobial effect by analyzing the differential microbial responses at two inoculum dilutions. The hypothesis is that the growth inhibition concentration (GIC) is the lowest antimicrobial concentration necessary to inhibit the growth of target strains in a given sample after adjusting for pathogen concentration effects. The combined GIC in a polymicrobial sample is not evaluated in this pilot study.
We compare the GIC reported from this direct-from-specimen antimicrobial efficacy profiling method to the MIC and susceptibility reported from CLSI reference methods to assess the categorical agreement, after which we establish a correlation between the microbiological susceptibility (i.e., MIC) and the antimicrobial efficacy (i.e., GIC).
Prior to the presented study, we developed a PCR-less RNA quantification method that performs enzymatic signal amplification with a proprietary electrochemical sensor array. We applied this quantification method to streamlined pathogen identification and AST using species-specific probe pairs, then validated and published studies with our clinical collaborators using contrived and remnant clinical specimens. The detection strategy of our universal electrochemical sensors is based on sandwich hybridization of capture and detector oligonucleotide probes that target 16S rRNA, as described in S1 Protocol. The capture probe is anchored to the gold sensor surface, and the detector probe is linked to horseradish peroxidase (HRP). When a substrate such as 3,3',5,5'-tetramethylbenzidine (TMB) is added to an electrode with capture-target-detector complexes bound to its surface, the substrate is oxidized by HRP and reduced by the bias potential applied to the working electrode. Oligonucleotide probe sequences for both capture and detector probes are detailed in Fig 2 of S1 Protocol. This redox cycle results in the shuttling of electrons by the substrate from the electrode to the HRP, producing enzymatic signal amplification of the current flow in the electrode. The concentration of the RNA target captured on the sensor surface can be quantified by the reduction current measured through the redox reaction between the TMB and HRP with a multi-channel potentiostat built into our system, as demonstrated in Fig 4 of S1 Protocol. Quantifying changes in RNA transcription appears to be a more suitable approach for timely reporting due to its rapid changes upon exposure to antibiotics [37, 38]. Measuring the RNA response of pathogens to antibiotic exposure directly in clinical specimens would provide a rapid susceptibility assessment that can be performed in clinical settings.
Strains were obtained from various sources, including the CDC AR Bank and New York-Presbyterian Queens (NYPQ). The number of strains of each species is listed in S1 Table. All clinical isolates were obtained anonymously from remnant patient samples collected for routine culture and were de-identified prior to testing under the approved NYP/Queens Institutional Review Board and joint master agreement. We aimed to test an even distribution of species with MIC values on or near the susceptible and resistant breakpoints of each antibiotic. We included three representative antibiotics of three different classes (fluoroquinolones, aminoglycosides, and carbapenems): ciprofloxacin (CIP; Cayman Chemical Company, Ann Arbor, MI), gentamicin (GEN; Sigma-Aldrich, St. Louis, MO), and meropenem (MEM; Cayman Chemical Company). CDC AR Bank isolates were used to include representative bacterial susceptibility profiles that were not covered by those from NYPQ. CDC AR Bank isolates were stored as glycerol stocks at -80°C and were grown from these stocks at 35°C on tryptic soy agar plates with 5% sheep's blood (Hardy Diagnostics) for 18-24 hours before testing.
Fig 1. Graphical abstract of the presented direct-from-specimen antimicrobial efficacy profiling method. Unprocessed urine is inoculated into two antibiotic stripwells at the original concentration and a 10-fold dilution. After antibiotic exposure, viable 16S rRNA is quantified using an electrochemical sensor assay and reported as categorical susceptibility. The presented method is able to be fully automated. https://doi.org/10.1371/journal.pone.0263868.g001
Suspensions of each isolate to be used for contriving urine samples were prepared using cation-adjusted Mueller-Hinton II (MH) broth (Teknova; Hollister, CA) and a Grant DEN-1B densitometer (Grant Instruments, Cambridge, UK). Negative urine specimens to be used for contrived samples were stored in Falcon tubes at 4°C. Clinical urine samples from NYPQ were stored in BD 364954 Vacutainer Plus C&S tubes containing boric acid at 4°C prior to overnight shipment for testing. Consumables consisted of stripwells with dried antibiotics, electrochemical-based sensor chips functionalized with oligonucleotide probe pairs complementary to Enterobacterales and Pseudomonas aeruginosa for RNA quantification (probe sequences in Fig 2A of S1 Protocol), and a reagent kit for lysing and viability culture. Stripwells were prepared as previously described by drying antibiotics in DI water with 0.1% Tween onto EIA/RIA 8-well strips (Corning, Corning, NY) at the following concentrations: CIP 0.0625, 0.125, 0.25, 0.5, 1, 2, 4 μg/mL; GEN 1, 2, 4, 8, 16, 32, 64 μg/mL; MEM 0.5, 1, 2, 4, 8, 16, 32 μg/mL [39]. The first well of each stripwell was left without antibiotic to be used as a growth control (GC) during the assay. Electrochemical sensor chips were produced in-house by deposition of gold onto a plastic substrate and functionalized with probes as previously described [35].
Urine samples were spun down to remove the majority of matrix components in the supernatant. Specifically, urine samples with a 4-mL starting volume were spun in a centrifuge at 5,000 RPM for 5 minutes, after which the supernatant was removed and replaced with 4 mL of cation-adjusted MH broth to create the 1x inoculum. A ten-fold dilution of this sample was prepared by adding 100 μL of this sample to 900 μL of MH broth, resulting in the 0.1x inoculum.
The direct-from-specimen antimicrobial efficacy profiling approach presented in this study aims to demonstrate a significant correlation with conventional AST results. The electrochemical-based biosensor measures the reduction current from cyclic enzymatic amplification of an HRP label with TMB and H2O2. The resulting reduction current signal can be estimated with the Cottrell equation (Equation 1 of S1 Protocol) [40]. Signal levels (in nanoamperes) from each microbial exposure well were normalized to that of the GC well (no antibiotics) to form GC ratios. These ratios were then plotted against the spectrum of antimicrobial concentrations tested for statistical analysis. Two antibiotic stripwells containing the same range of seven antibiotic concentrations separated by twofold dilutions, as well as one GC, were used for each specimen at 1x (undiluted pellet) and 0.1x (diluted pellet) concentrations to generate two microbial response curves. Each dual-response curve signature was generated by overlaying the two GC ratio curves over the antibiotic range, establishing a signature library that corresponded to each antimicrobial efficacy and microbial susceptibility combination.
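The stripwell layout and the growth-control normalization described above can be summarized in a short sketch. This is illustrative only and not the authors' software: the antibiotic concentration ladders are taken from the text, while the helper names and the example signal values are assumptions.

```python
# Illustrative sketch of the assay layout and GC-ratio normalization described above.
# Each 8-well strip holds one antibiotic-free growth control (GC) well plus seven
# two-fold antibiotic dilutions; each well's amperometric signal (nA) is divided by
# the GC signal to form a GC ratio.
STRIPWELL_UG_PER_ML = {
    "CIP": [0.0625, 0.125, 0.25, 0.5, 1, 2, 4],
    "GEN": [1, 2, 4, 8, 16, 32, 64],
    "MEM": [0.5, 1, 2, 4, 8, 16, 32],
}

def gc_ratios(signals_na):
    """signals_na: [GC well signal, then wells at increasing antibiotic concentration]."""
    gc = signals_na[0]
    return [s / gc for s in signals_na[1:]]

# Example (hypothetical signals, nA) for a meropenem strip at the 1x inoculum:
# ratios_1x = gc_ratios([310.0, 295.0, 280.0, 120.0, 45.0, 30.0, 22.0, 20.0])
# Pairing ratios_1x with the 0.1x ratios over STRIPWELL_UG_PER_ML["MEM"] yields the
# dual-response curve signature described in the text.
```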
Changes in the response signature and the inflection point of the GC curve were analyzed by three algorithms to assign a categorical classification (susceptible, intermediate, or resistant).

One hundred microliters of reconstituted specimen pellets at the 1x and 0.1x concentrations were inoculated into each well of their corresponding antibiotic stripwells. All stripwells were incubated at 35°C for the exposure time indicated in each study. Thirty-six microliters of 1 M NaOH were then added to each well to lyse target gram-negative pathogens after antibiotic exposure, followed by a 3-minute incubation at room temperature. Twenty-four microliters of 1 M HCl were then added to each well to neutralize the pH of the lysed sample, or lysate, and prevent the degradation of free RNA. Ten microliters of lysate from each well were pipetted onto the corresponding sensors on two electrochemical sensor chips, for a total of 4 sensors per well. No sample was delivered to the negative control sensor. All chips were incubated for 30 minutes at 43°C, and the RNA content was quantified using the method described above and in S1 Protocol to obtain the microbial growth response.

For the blind testing study, we used remnant clinical specimens collected at NYPQ under the current IRB. These urine specimens were prospectively collected for urine culture as part of routine care. All samples shipped overnight to GeneFluidics for testing were confirmed positive for either Enterobacterales or Pseudomonas aeruginosa. De-identification and data analysis were performed by administrative staff. We included species belonging to the order Enterobacterales and Pseudomonas aeruginosa because of their prevalence in urinary tract infections, bloodstream infections, and healthcare-associated pneumonia, as well as their increasing resistance to commonly used antimicrobial agents [41-43].

Signals generated from each sensor by the enzymatic reaction with the TMB substrate were analyzed with three different algorithms for comparison. Before reporting GC ratios, the algorithms first assessed the signal levels of the negative and growth controls on each sensor chip. If either control was out of the acceptable range (i.e., greater than 50 nA for the negative control, less than 50 nA for the growth control), the algorithm reported "NC fail" or "GC fail", respectively, indicating substandard quality of a sensor chip or no bacterial growth. If all controls passed the acceptance criteria, the algorithm proceeded to determine the inflection point in the plot of GC ratios against the antibiotic spectrum. The antibiotic concentration corresponding to the inflection point was estimated by two algorithms (Inhibited Growth Cutoff and Maximum Inhibition) and reported as the growth inhibition concentration (GIC). The Inhibited Growth Cutoff method reported the lowest antibiotic concentration with a GC ratio below a predetermined cutoff value; the GIC was determined solely by the GC ratios. Initial assessment of the Inhibited Growth Cutoff method used both 0.4 and 0.5 as cutoff values, and the final cutoff value was determined using on-scale strains with an MIC on, or one 2-fold dilution above or below, the CLSI breakpoints. The Maximum Inhibition method reported the GIC as the lowest antibiotic concentration observed after the maximum GC ratio reduction in the plot. Unlike in the Inhibited Growth Cutoff method, the GIC corresponded to the greatest change in the slope of the response curve as a whole rather than to individual GC ratios.
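A compact sketch of the control gate and the two GIC-calling rules described above follows. The 50 nA control limits come from the text; the 0.45 cutoff (consistent with the edge rules quoted later), the function names, and the example data are our own assumptions.

```python
def check_controls(nc_signal_na, gc_signal_na):
    """QC gate: negative control must stay below 50 nA, growth control above it."""
    if nc_signal_na > 50:
        return "NC fail"   # substandard sensor chip
    if gc_signal_na < 50:
        return "GC fail"   # no detectable bacterial growth
    return "pass"

def gic_inhibited_growth_cutoff(concs, ratios, cutoff=0.45):
    """Lowest antibiotic concentration whose GC ratio falls below the cutoff."""
    for c, r in zip(concs, ratios):
        if r < cutoff:
            return c
    return None  # no inhibition observed across the tested spectrum

def gic_maximum_inhibition(concs, ratios):
    """Lowest concentration immediately after the largest drop between
    neighbouring GC ratios (the steepest point of the response curve)."""
    drops = [ratios[i] - ratios[i + 1] for i in range(len(ratios) - 1)]
    return concs[drops.index(max(drops)) + 1]

concs = [0.0625, 0.125, 0.25, 0.5, 1, 2, 4]          # ug/mL
ratios = [0.99, 0.96, 0.40, 0.12, 0.08, 0.07, 0.06]  # hypothetical GC ratios
if check_controls(nc_signal_na=12, gc_signal_na=8900) == "pass":
    print(gic_inhibited_growth_cutoff(concs, ratios))  # 0.25
    print(gic_maximum_inhibition(concs, ratios))       # 0.25 (drop 0.96 -> 0.40)
```

On this well-behaved example both rules agree; the text notes they diverge when the curve's largest slope change does not coincide with the first sub-cutoff ratio.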
For both algorithms, if the GC ratio at the lowest antibiotic concentration tested was less than 0.45, indicating significant growth inhibition, the GIC was reported as less than or equal to that antibiotic concentration. If the GC ratio at the highest antibiotic concentration tested was greater than 0.9, indicating limited growth inhibition, the GIC was reported as greater than that antibiotic concentration.

The first level of analysis was qualitative: the antimicrobial efficacy profiles (significant growth, moderate growth, and inhibited growth) derived from the GIC were compared to the corresponding antibiotic susceptibility results (R for resistant, I for intermediate, or S for susceptible) determined by the clinical microbiology lab or CLSI reference methods. The concordance between susceptibilities reported from the GIC and reference susceptibilities was determined by essential and categorical agreement. Essential agreement means that a reported MIC value falls within one log2 dilution of the reference MIC from the CDC AR Bank or CLSI broth microdilution. Categorical agreement means that a reported S, I, or R interpretation agrees with the reference category from the CDC AR Bank or the CLSI disk diffusion method. Discrepancy rates for the detection of antimicrobial susceptibility were analyzed as very major (vmj), major (maj), and minor (min) errors, defined respectively as false susceptible reporting (a resistant strain reported as susceptible), false resistant reporting (a susceptible strain reported as resistant), and misclassification of an intermediate strain (an intermediate strain reported as susceptible or resistant). Any direct-from-specimen antimicrobial efficacy profiles found to be misclassified (e.g., a GIC higher than the susceptible breakpoint for a susceptible strain) were retested with both the presented method and the microdilution reference method.

The negative control is a direct indicator of the electrochemical reaction between HRP and TMB taking place on the sensor, and the growth control is the quantification of microbial loads in diagnostic specimens without any interference from antimicrobials. We therefore assessed the normality of these controls by generating box-and-whisker plots in S1 and S2 Figs. In S1A and S2 Figs, the distribution of negative controls is positively skewed because of the signal cutoff of our potentiostat reader, which detects only the amperometric signal from the reduction of oxidized TMB. Although a few data points fall in the upper quartile, they remain below the negative control maximum of 50 nA.

There is a valid concern about detection sensitivity and matrix interference when developing a direct-from-specimen microbial growth inhibition method that tests directly from clinical specimens rather than from an overnight-cultured isolate. Starting directly from unprocessed specimens introduces the challenge of unknown pathogen concentrations ranging from 0 to >10⁸ CFU/mL. To address this concern, we established the correlation between the limit of detection (LOD) of the current molecular analysis platform and the assay turnaround time (TAT). As shown in Fig 2C, the analyte incubation time for higher target LODs may be significantly reduced, resulting in a TAT of 16 to 36 minutes. Target pathogen enrichment and matrix component removal by centrifugation may be included to achieve mid-level target LODs, resulting in a TAT of 42 to 110 minutes.
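These agreement and error definitions translate directly into code. The sketch below is a minimal rendering under stated assumptions: breakpoints are supplied by the caller, MICs sit on a two-fold dilution scale, and an intermediate result reported for a susceptible or resistant strain is also counted as a minor error, a common convention the text implies but does not spell out.

```python
import math

def essential_agreement(reported_mic, reference_mic):
    """Within +/- one log2 (two-fold) dilution of the reference MIC."""
    return abs(math.log2(reported_mic) - math.log2(reference_mic)) <= 1

def categorize(mic, s_breakpoint, r_breakpoint):
    """CLSI-style interpretation: S at or below the susceptible breakpoint,
    R at or above the resistant breakpoint, I in between."""
    if mic <= s_breakpoint:
        return "S"
    if mic >= r_breakpoint:
        return "R"
    return "I"

def error_type(reported_cat, reference_cat):
    """Classify a discrepancy as very major, major, or minor."""
    if reported_cat == reference_cat:
        return None
    if reference_cat == "R" and reported_cat == "S":
        return "vmj"   # false susceptible
    if reference_cat == "S" and reported_cat == "R":
        return "maj"   # false resistant
    return "min"       # any misclassification involving an intermediate call

# Breakpoints implied by the text for ciprofloxacin/Enterobacterales
# (S <= 0.25, R >= 1 ug/mL); an intermediate strain called susceptible:
print(error_type(categorize(0.125, 0.25, 1), categorize(0.5, 0.25, 1)))  # min
print(essential_agreement(0.125, 0.25))                                   # True
```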
For low-abundance pathogens and early infection diagnostics, additional viability culture steps may be included to achieve an LOD of <10 CFU/mL with a TAT of 4 to 5.5 hours. The direct-from-specimen antimicrobial efficacy profiling protocol was based on the assay parameters summarized in Fig 2D.

To evaluate the potential impact of urine matrix components on microbial growth inhibition, we tested two sets of contrived samples, one prepared in culture media and the other prepared in negative urine. This initial evaluation was conducted with a highly susceptible E. coli isolate (EC69, MIC ≤ 0.06 μg/mL for ciprofloxacin) and a highly resistant K. pneumoniae isolate (KP79, MIC > 8 μg/mL for ciprofloxacin) from the CDC AR Bank (Fig 3). The goal of the pilot study was to investigate potential interference in urine; therefore, a higher and more clinically relevant concentration of 1.0×10⁷ CFU/mL was used to contrive the samples. Three antibiotic exposure times of 30, 60, and 90 minutes were tested as the primary parameters for optimization. Microbial growth inhibition was analyzed by plotting GC ratios against the ciprofloxacin concentrations tested, which ranged from 0.0625 μg/mL (two 2-fold dilutions below the Enterobacterales CLSI susceptible breakpoint) to 4 μg/mL (two 2-fold dilutions above the Enterobacterales CLSI resistant breakpoint). As shown in Fig 3A and 3B, all microbial response curves of the resistant K. pneumoniae CDC 79 strain overlapped at GC ratios near 1.0 (S2 Table), indicating little to no growth inhibition regardless of the exposure time. In contrast, there was a clear trend of inhibited growth, exhibited by lower GC ratios, for the susceptible E. coli CDC 69 strain; this trend became more apparent with increasing exposure time or ciprofloxacin concentration. The GIC value reported by the Maximum Inhibition algorithm is listed to the right of each response curve. The bolded GIC values (S strain in MH at 30 min, S strain in urine at 30 min, S strain in urine at 60 min) indicate incorrect categorical susceptibility reporting, which occurred when the exposure time was insufficient. The microbial growth inhibition curves from the contrived urine samples in Fig 3B exhibit characteristics nearly identical to those of the culture media samples in Fig 3A. This similarity suggests that the additional pelleting step performed on the urine samples is sufficient to mitigate the effects of the urine matrix but not harsh enough to push the pathogen into the stationary phase. Additionally, in S1A Fig, the growth controls are clearly separated for each exposure time. Although longer exposure times show a wider distribution, this range may have been caused by the different growth rates of the two included strains, resulting in different signal levels; we also expect a natural dispersion of growth rates within the same strain population. These data points are nonetheless still clearly separated from those of shorter exposure times.

As illustrated in Fig 3A and 3B, a shorter antimicrobial exposure time may lead to insignificant growth inhibition of susceptible strains, reducing the separation between susceptible and resistant responses. This phenomenon could potentially lead to more errors in categorical susceptibility reporting without the use of a more sophisticated algorithm.
We suspected that a similar reduction in the separation between susceptible and resistant strains would occur if the microbial load were much higher than the standard inoculum density of 5×10⁵ CFU/mL. To evaluate the effects of higher microbial loads and to explore the biological, chemical, and molecular analytical limitations of our assay, we tested contrived urine samples prepared at three different microbial loads against a different class of antibiotics (Fig 3C). A shorter antibiotic exposure time of 2 hours was used to assess the separation between resistant and susceptible response curves. Antimicrobial efficacy profiling tests run directly on these contrived urine samples were evaluated. Based on the trend of GC ratios along the increasing meropenem concentrations (0.5 to 32 μg/mL), the GIC would be reported as "susceptible" (≤ S-breakpoint of 1 μg/mL for meropenem) for E. coli CDC 77 (MIC ≤ 0.12 μg/mL) and "resistant" (≥ R-breakpoint of 4 μg/mL for meropenem) for E. coli CDC 55 (MIC > 8 μg/mL), which agrees with the categorical susceptibilities listed by the CDC AR Bank. The GIC reported from only 2 hours of antimicrobial exposure did not match the MIC value reported by the broth microdilution method, which involves a 16-to-24-hour exposure using clinical isolates from an overnight subculture. This disagreement is most likely due to the antimicrobial exposure of the causative pathogen taking place in a different matrix environment (urine vs. agar plate) under different antimicrobial conditions (short vs. long exposure). This study was an initial assessment of the effects of different matrices and testing conditions on categorical susceptibility reporting.

To establish a higher correlation between the MIC and GIC values, it would be necessary to incorporate the impact of the microbial load into GIC reporting, which is not within the scope of this initial study. With higher contrived concentrations, we expect the inflection point to shift to a higher antimicrobial concentration due to the higher bug-to-drug ratio. Even for susceptible strains, microbial growth can be observed at antibiotic exposure concentrations on or below the susceptible breakpoint if the microbial load is higher than the standard inoculum concentration of 5×10⁵ CFU/mL. S1B Fig displays the distribution of growth controls for each inoculum concentration. As the inoculum concentration increases from 10⁶ to 10⁸ CFU/mL, more data points become saturated, supporting our hypothesis of a shifting inflection point and an inoculum effect on the MIC.

Fig 3C only demonstrated the feasibility of differentiating highly susceptible from highly resistant strains, which do not represent all clinical strains; therefore, we wanted to evaluate the growth inhibition curves of on-scale strains with an MIC on or near the CLSI breakpoints. These strains included E. coli CDC 1 with an MIC of 4 μg/mL for gentamicin (on the susceptible breakpoint), E. coli CDC 85 with an MIC of 1 μg/mL for meropenem (on the susceptible breakpoint), and K. pneumoniae CDC 80 with an MIC of 0.5 μg/mL for ciprofloxacin (on the intermediate breakpoint). To determine whether on-scale strains required a longer exposure time than highly susceptible and resistant strains, we tested exposure times of 2, 3, and 4 hours. General trends of inhibited growth were observed at 2, 3, and 4 hours for all susceptible-breakpoint strains, as shown in Fig 4.
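The categorical calls in this passage can be reproduced mechanically. The sketch below maps a GC-ratio curve onto a (comparator, value) GIC, including the open-ended edge rules described earlier, and then onto S/I/R using the meropenem breakpoints quoted above (S-breakpoint 1 μg/mL, R-breakpoint 4 μg/mL). The data are invented, and the open-ended comparator handling is our simplification.

```python
def report_gic(concs, ratios, low_cut=0.45, high_cut=0.9):
    """Report a (comparator, value) GIC from one GC-ratio curve.

    Edge rules from the text: fully inhibited at the lowest concentration
    -> '<=' lowest; barely inhibited at the highest -> '>' highest.
    Otherwise fall back to the Maximum Inhibition call sketched earlier.
    """
    if ratios[0] < low_cut:
        return ("<=", concs[0])
    if ratios[-1] > high_cut:
        return (">", concs[-1])
    drops = [ratios[i] - ratios[i + 1] for i in range(len(ratios) - 1)]
    return ("=", concs[drops.index(max(drops)) + 1])

def interpret(gic, s_bp, r_bp):
    """Map a (comparator, value) GIC onto S/I/R via CLSI-style breakpoints."""
    comparator, value = gic
    if comparator == "<=" or value <= s_bp:
        return "S"
    if comparator == ">" or value >= r_bp:
        return "R"
    return "I"

concs = [0.5, 1, 2, 4, 8, 16, 32]                    # meropenem, ug/mL
ratios = [0.98, 0.97, 0.95, 0.93, 0.95, 0.92, 0.94]  # hypothetical resistant profile
print(interpret(report_gic(concs, ratios), s_bp=1, r_bp=4))  # 'R'
```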
The GIC values reported for E. coli CDC 1 were 2 μg/mL for all exposure times, one two-fold dilution below the reference MIC (S3 Table). In addition, the categorical susceptibility listed in parentheses was correctly reported as susceptible. The GIC values for E. coli CDC 85 increased from ≤0.5 to 2 μg/mL as the meropenem exposure time increased from 2 to 4 hours. Although the GIC values at all three exposure times were within one 2-fold dilution of the reference MIC for E. coli CDC 85, the GIC values at longer exposure times aligned more closely with the MIC value. The goal of this study was to report susceptibility within a much shorter time frame; however, GIC reporting based on only one response curve with a shorter exposure time of 2 hours was insufficient to differentiate borderline susceptible strains, so a slightly extended exposure time of 3 hours proved necessary in the case of the CDC 85 strain. We then tested the reproducibility of GIC reporting using two different batches of ciprofloxacin stripwells in Fig 4C and 4D; the GIC reporting was consistent between batches. The GIC reported from just two hours of ciprofloxacin exposure was 0.5 μg/mL, in agreement with the MIC value from the CDC AR Bank database. However, the GIC value transitioned to 0.125 μg/mL with longer exposure times, further signaling the risk of changes in susceptibility reporting when relying on only one response curve. The MIC from microdilution of K. pneumoniae CDC 80 was 0.25 μg/mL, which is within one two-fold dilution of the GIC reported from all response curves in Fig 4C and 4D.

In S2 Fig, the GC signals are clearly separated for each exposure time. The wider distribution observed for each time may be due to the inclusion of different strains in each dataset, as well as the natural dispersion of growth rates within a single strain population, resulting in different signal levels. Despite this distribution, each exposure time was distinguishable from the others, and 3 hours proved adequate, unlike 2 and 4 hours, which generated many data points that were either too low or too high (saturated) to show a clear susceptibility trend.

After demonstrating that 3 hours of antimicrobial exposure may be sufficient for testing on-scale strains, we explored the ability to differentiate bacterial strains with a range of on-scale MIC values on or near the susceptible and resistant breakpoints using a 3-hour exposure time (Fig 5). Fig 5A shows the growth inhibition responses to ciprofloxacin of E. coli (EC69: MIC ≤ 0.0625 μg/mL; EC85: MIC > 8 μg/mL) and K. pneumoniae (KP126: MIC = 0.125 μg/mL; KP80: MIC = 0.5 μg/mL; KP76: MIC = 1 μg/mL). There is a clear trend of increasing GIC values (≤0.06 μg/mL to >4 μg/mL) that matches the reference MIC values (from ≤0.06 μg/mL to >8 μg/mL), indicating successful distinction of strains with on-scale MICs. Detailed GIC reporting from all three algorithms is displayed in S4 Table. In addition, we evaluated the growth inhibition responses to gentamicin of K. pneumoniae (KP126: MIC ≤ 0.25 μg/mL; KP79: MIC > 16 μg/mL) and E. coli (EC1: MIC = 4 μg/mL; EC451: MIC = 8 μg/mL; EC543: MIC = 16 μg/mL) in Fig 5B. Similar to Fig 5A, there is a clear trend of increasing GIC values (from ≤1 μg/mL to >32 μg/mL) that matches the reference MIC values (from ≤0.25 μg/mL to >16 μg/mL). Among all susceptible and resistant strains tested in Fig 5, the categorical susceptibility was reported 100% correctly based on the reported GIC value.
The two intermediate strains (KP80 for ciprofloxacin and EC451 for gentamicin) were both reported as susceptible, given that both of their GIC values were one two-fold dilution below the reference MIC. We suspected that this incorrect reporting was due to the use of only one response curve. Results up to this point in the study suggested the need for a dual-kinetic response curve approach to provide more information on borderline susceptibility, such as strains with an MIC on the intermediate breakpoint. Using only one curve resulted in essential agreement between MIC and GIC values, which is acceptable according to CLSI M100 classifications; however, both intermediate strains produced minor errors in categorical susceptibility based on the GIC from one curve [44, 45].

As revealed by the bolded GIC values, categorical susceptibility reporting (susceptible, intermediate, or resistant) may be incorrect if the antimicrobial exposure time is too short (Figs 3A, 3B and 4B), the microbial load is too high (Fig 3C), or the MIC is on one of the susceptibility breakpoints (Figs 4C, 4D and 5A, 5B). In addition to extending the antimicrobial exposure time, especially for time-dependent antibiotics such as meropenem, we explored the feasibility of a dual-kinetic response approach that would allow us to observe a broader range of microbiological responses. In this approach, we inoculated two stripwells containing the same spectrum of seven antimicrobial concentrations with clinical urine specimens at the original concentration (1x) and at a 10-fold dilution (0.1x). Additionally, to evaluate the correlation between the current GIC reporting algorithm and the reference categorical susceptibilities and MIC values throughout the physiological range, we tested a scale of clinically relevant microbial loads for urine (10⁵ to 10⁸ CFU/mL) in Fig 6. The GIC was calculated from the dual kinetic curves, and the inflection point shifted toward higher antibiotic concentrations in samples with higher microbial loads (S5 Table). In Fig 6B, the growth inhibition curves of the 1x and 0.1x dilutions of 10⁶ CFU/mL overlapped with each other in the inset graph, despite the signal levels of these two sets of curves being significantly different. It is likely that the similarity in microbial load and the symmetry between 10⁵ to 5×10⁵ CFU/mL and 5×10⁵ to 10⁶ CFU/mL resulted in overlapping GC ratio curves. Fig 6A-6D show the transition of GIC reporting from ≤0.0625 μg/mL (susceptible) to 1 μg/mL (resistant). The categorical susceptibility reporting of "Susceptible" was correct over a range from 10⁴ CFU/mL (0.1x of 10⁵ CFU/mL) to 10⁷ CFU/mL (0.1x of 10⁸ CFU/mL). The GIC value jumped from 0.125 μg/mL (0.1x of 10⁸ CFU/mL) to 1 μg/mL (1x of 10⁸ CFU/mL), as shown in Fig 6D. Similar microbial responses were observed in the rapid ciprofloxacin exposure study with the same E. coli CDC 69 strain in Fig 3B; there, the GIC value jumped from 0.0625 μg/mL (90-min exposure) to 1 μg/mL (30-min and 60-min exposures). The GC signal levels listed in S5 Table were saturated at 10,000 nA for 10⁷ and 10⁸ CFU/mL; therefore, the reported GIC value is expected to be higher than the MIC value due to the inoculum effect. The combined categorical susceptibility reporting for the dual-kinetic response approach is summarized in Table 1.
We used the Maximum Inhibition algorithm, in which the combined categorical susceptibility is determined by the maximum GC reduction across both microbiological response curves. Specifically, the combined GIC corresponds to the greatest change in the slope of either response curve. Table 1 also includes the individual and combined GIC reporting from all contrived concentrations in Fig 6. Given that the combined categorical susceptibility is determined by the greatest GC ratio reduction over the extended antimicrobial spectrum (all 1x and 0.1x bug-to-drug ratios), it represents the most significant growth inhibition caused by antimicrobial exposure throughout the entire spectrum. Although there was one categorical susceptibility reporting error in the 1x curve in Fig 6D, the reported combined categorical susceptibility was correct for all microbial load conditions. The purpose of the combined GIC reporting in the dual-kinetic-curve approach is to report only the maximum growth inhibition and to discard GIC reporting errors caused by high or low microbial loads.

To evaluate the correlation between the GIC reporting algorithm and the reference microbial susceptibility and MIC values throughout the physiological range with other antimicrobial classes, we tested the same set of microbial loads in urine against gentamicin in Fig 7. The GIC value reported by the Maximum Inhibition algorithm is displayed next to each response curve; GIC reporting from all three algorithms can be found in S6 Table. There is a clear transition in GIC reporting for gentamicin across the range of microbial loads, from ≤1 μg/mL (susceptible) to 16 μg/mL (resistant). The categorical susceptibility reporting of "Susceptible" was correct over a range of 10⁴ CFU/mL (0.1x of 10⁵ CFU/mL) to 10⁶ CFU/mL (0.1x of 10⁷ CFU/mL). A GIC of 8 μg/mL was reported for 10⁷ CFU/mL (1x of 10⁷ CFU/mL and 0.1x of 10⁸ CFU/mL), which is one dilution above the reference MIC of 4 μg/mL. This disagreement between GIC and MIC is acceptable for essential agreement but is a minor error for categorical agreement. The GIC reporting of 16 μg/mL from 10⁸ CFU/mL was a major error. Similar to the ciprofloxacin results, the GC signal levels listed in S6 Table were saturated at 10,000 nA for 10⁷ and 10⁸ CFU/mL; therefore, the reported GIC value is expected to be higher than the reference MIC value due to the inoculum effect.

Table 2 summarizes the individual and combined GIC reporting from all contrived concentrations in Fig 7. The MIC of E. coli CDC 451 is listed by the CDC as 8 μg/mL, indicating intermediate susceptibility, but our microdilution indicated an MIC of 4 μg/mL, which would be categorically classified as susceptible. We therefore used the MIC of 4 μg/mL as the reference, as it was obtained with the reference microdilution method. Without adjusting for the inoculum effect, the maximum growth inhibition would indicate a combined GIC of 8 μg/mL for both Fig 7C and 7D. However, because of the signal-level saturation observed at the growth control and at low antibiotic concentrations (1 and 2 μg/mL in the 1x curve in Fig 7C; 1-8 μg/mL in the 1x curve in Fig 7D; 1-4 μg/mL in the 0.1x curve in Fig 7D), we adjusted the combined GIC to account for the inoculum effect. The electrochemical current reading is set to saturate at 10,000 nA to maximize the resolution at lower current readings around the limit of detection, so the reading saturates if the starting microbial load is too high.
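A sketch of the combined call under one literal reading of this rule: take the GIC from whichever of the two curves contains the single greatest GC-ratio drop. Helper names and data are hypothetical.

```python
def max_drop(ratios):
    """Size of the largest single drop between neighbouring GC ratios."""
    return max(ratios[i] - ratios[i + 1] for i in range(len(ratios) - 1))

def combined_gic(concs, ratios_1x, ratios_01x):
    """Dual-kinetic call: report the GIC from the curve whose maximum
    GC-ratio reduction is greatest, per the Maximum Inhibition rule."""
    best = max((ratios_1x, ratios_01x), key=max_drop)
    drops = [best[i] - best[i + 1] for i in range(len(best) - 1)]
    return concs[drops.index(max(drops)) + 1]

concs = [1, 2, 4, 8, 16, 32, 64]                    # gentamicin, ug/mL
r_1x = [0.99, 0.97, 0.96, 0.60, 0.55, 0.50, 0.45]   # high load: blunted response
r_01x = [0.95, 0.90, 0.30, 0.10, 0.08, 0.07, 0.06]  # diluted: sharp inflection
print(combined_gic(concs, r_1x, r_01x))             # 4 (taken from the 0.1x curve)
```

Here the diluted curve carries the sharper inflection, so the combined call discards the blunted, high-load response, which is exactly the error mode the dual-curve approach is meant to suppress.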
The reported GIC was adjusted one dilution down for every antibiotic concentration reported at a saturated signal level. As a result, the combined GIC reporting from Fig 7C and 7D was adjusted from 8 μg/mL to 4 μg/mL. In comparison to the microdilution MIC, there were three categorical susceptibility reporting errors in the single response curves in Fig 7C and 7D. After adjusting for the saturated signal level, the combined categorical susceptibility of both response curves was correct for all microbial load conditions.

Similar results were observed for the same study using meropenem in Fig 8. The reported GIC transitioned from ≤0.5 μg/mL (susceptible) to 32 μg/mL (resistant). The categorical susceptibility reporting of "Resistant" was correct over a range of 10⁵ CFU/mL to 10⁸ CFU/mL. A GIC of ≤0.5 μg/mL was reported for 10⁴ CFU/mL (0.1x of 10⁵ CFU/mL in Fig 8A) and resulted in a very major error for categorical agreement. However, the GC signal level listed in S7 Table was 39 nA, which indicated insufficient microbial growth and was reported as "GC fail"; no GIC value was reported in the case of GC failures (<50 nA). Table 3 summarizes the individual and combined GIC reporting for Fig 8. Similar to the CDC 451 strain, the MIC value of K. pneumoniae CDC 79 is listed by the CDC as 8 μg/mL, but our microdilution indicated an MIC of 4 μg/mL; we used the microdilution result as the reference. Without adjusting for the inoculum effect, the maximum growth inhibition would result in combined GICs of 0.5 and 32 μg/mL for Fig 8A and 8D, respectively. However, the combined GIC was adjusted because of the growth control failure in Fig 8A and the saturated signal level at the growth control and five antibiotic concentrations in Fig 8D. There was initially one categorical susceptibility reporting error in Fig 8A, but it was not reported because of the GC failure. The combined categorical susceptibility was correct for all microbial load conditions.

After initial validation of the presented antimicrobial efficacy profiling method using CDC clinical strains, we conducted a pilot feasibility study on blinded urine specimens from NYPQ. De-identified remnant clinical specimens were shipped overnight to GeneFluidics for testing as described above, and the combined categorical susceptibility results are summarized in Table 4. Sample #7 was positive for P. aeruginosa but produced a GC failure when tested with the assay. Subculture of NYPQ sample #7 on a CHROMagar plate exhibited two separate strains, indicating that the original specimen may have contained a polymicrobial infection or was contaminated during sample collection or testing. For specimens containing multiple organisms, species-specific susceptibility reporting would require a pathogen identification sensor chip with complementary oligonucleotide probes for each target pathogen, which is outside the scope of this study. The categorical susceptibilities of the remaining nine specimens were reported correctly, resulting in 100% categorical agreement with the susceptibilities reported by NYPQ. All individual and combined GIC reports are listed in S8 Table. NYPQ's AST panel tests levofloxacin (LEV) instead of ciprofloxacin (CIP) for the fluoroquinolone class; therefore, the GIC reporting of CIP susceptibility for Samples 1, 4, and 6 was compared to the categorical susceptibility interpreted from the reference broth microdilution result.
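The saturation correction is mechanical enough to state in code: shift the called GIC down one two-fold dilution for each antibiotic concentration whose signal hit the 10,000 nA ceiling. The text does not fully pin down the bookkeeping (for example, whether saturated wells are counted per curve or across both curves), so the sketch below takes the stated rule literally for a single curve; data layout and values are assumed.

```python
SATURATION_NA = 10_000  # potentiostat ceiling noted in the text

def adjust_for_saturation(concs, gic, antibiotic_signals_na):
    """Shift the GIC down one two-fold dilution per saturated antibiotic well.

    `antibiotic_signals_na` holds the per-concentration raw signals for one
    curve (GC well excluded); `concs` must be sorted ascending.
    """
    n_saturated = sum(1 for s in antibiotic_signals_na if s >= SATURATION_NA)
    idx = concs.index(gic)
    return concs[max(0, idx - n_saturated)]

concs = [1, 2, 4, 8, 16, 32, 64]                      # gentamicin, ug/mL
signals = [10_000, 7800, 3500, 900, 400, 300, 250]    # lowest well saturated
print(adjust_for_saturation(concs, gic=8, antibiotic_signals_na=signals))  # 4
```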
Levofloxacin is generally less effective than ciprofloxacin against Gram-negative pathogens, as explained in the literature [46, 47]; a pathogen that is susceptible to levofloxacin is therefore unlikely to be resistant to ciprofloxacin.

Although recent technologies have allowed PCR-based pathogen identification to be performed in fewer than 30 minutes, there is currently no phenotypic AST that can be performed within a reasonable time frame (specifically, in hours) directly from clinical samples in clinical microbiology laboratory settings. Schoepp et al. demonstrated a benchtop digital LAMP quantification method that measured the phenotypic response of E. coli in clinical urine samples and presented AST results after a 15-minute antibiotic exposure; however, only highly resistant or highly susceptible strains with rapid doubling times were selected for testing [48]. For pathogens with an on-scale MIC or a longer doubling time, an extended antibiotic-exposure incubation is necessary. Khazaei et al. demonstrated that quantifying changes in RNA signatures instead of DNA replication resulted in significant shifts (>4-fold change) in transcription levels within 5 minutes of antibiotic exposure [37, 49]. However, there was a wide range of control:treated (C:T) ratio dispersion among highly susceptible strains with MICs at least seven 2-fold dilutions below the resistant breakpoint. With 8 strains of the same MIC (0.015 μg/mL) and one strain with an MIC only 2-fold higher (0.03 μg/mL), the C:T ratio can range from 2 to 6, while the C:T ratio separation between resistant and susceptible populations is only roughly 0.4. That study illustrates a limitation relevant to clinical settings, where not all susceptible strains have an extremely low MIC.

Doern noted that although the concept of using unprocessed clinical specimens as the inoculum for direct-from-specimen AST or antimicrobial efficacy profiling is appealing, there are significant challenges to this approach [1]. The first challenge he mentioned was accommodating clinical specimens with unknown organism concentrations that may be significantly higher or lower than the standardized inoculum concentration used in most growth-based susceptibility tests. In a proof-of-concept study by Mezger et al., urine was used as the inoculum for rapid AST, in which a 120-minute antimicrobial exposure was performed, followed by quantitative PCR [50]. Although pilot experiments demonstrated E. coli susceptibility to ciprofloxacin and trimethoprim within 3.5 hours, the susceptibility profiling algorithm was not correlated to CLSI M100 categorical reporting. In our method, we attempted to address this second challenge, providing susceptibility profiling equivalent to AST performed in a clinical microbiology lab (>95% categorical agreement), by assessing susceptibility response dynamic trends at three different bug-to-drug ratios; this was done by inoculating the raw specimens at two dilutions as detailed above. The third challenge is the need to ensure pathogens are isolated from clinical samples to allow for retesting, confirmation of phenotypic testing (e.g., AST), polymicrobial testing, or epidemiological studies. This challenge will be addressed by setting aside the remainder of each specimen for QC or archiving purposes.

Despite being recognized as the standard quantitative index of antimicrobial potency, the MIC is subject to several limitations, the first of which is the long antimicrobial exposure time of 16 to 20 hours.
Furthermore, it requires a standard inoculum concentration of 5×10⁵ CFU/mL, rendering it insufficient for testing a low initial bacterial inoculum (i.e., 3 to 5 colonies, usually in the absence of resistant populations). Lastly, it utilizes constant, or static, antibiotic concentrations [51]. The MIC therefore provides no information on the time course of bacterial killing or the emergence of resistance [52-56]. Several static and dynamic in vitro and in vivo infection model studies have been performed to analyze and interpret the in vitro efficacy of antimicrobial drugs as an alternative to MIC reporting [56-61]. These experimental models provide a wealth of time-course data on bacterial growth and killing but have not been adopted into a diagnostic test run directly from clinical specimens [62].

An ideal growth inhibition spectrum fits the concentration-response data with a sigmoidal curve that is symmetrical about its inflection point and flattened at both ends, subject to statistical fluctuations, as shown in Figs 6-8. The left plateau represents insignificant growth inhibition under antibiotic exposure below the MIC, and the right plateau represents significant growth inhibition above the MIC. The inflection point indicates the concentration at which antimicrobial potency lies midway between uninhibited growth (left plateau) and completely inhibited growth (right plateau); the slope of the tangent to the curve at the inflection point is a measure of the antimicrobial intensity.

With increasing antibiotic concentration in each well, the effectiveness of the antibiotic increases and the rate of pathogen viability falls. This behavior is reflected in the growth control ratio, which is negatively correlated with the instantaneous mortality rate. The antimicrobial concentration at the inflection point, or GIC, will therefore likely increase when the microbial load in the clinical specimen is higher, a concept exemplified in the literature [63-66]. Based on this hypothesis, we developed a direct-from-specimen microbial growth inhibition test that uses two dilutions of unprocessed clinical specimens (1x and 0.1x) as inocula for two antibiotic exposure stripwells, each containing one GC well and the same range of seven antibiotic concentrations. The resulting response curves are used to visualize the microbial growth inhibition spectrum. As the drug concentration increases, the probability of drug molecules reaching a lethal concentration increases as a function modeled by a smooth sigmoidal curve. Considering the unknown microbial load in clinical specimens, the coverage of this spectrum is designed to capture the inflection point across the entire range of physiological conditions. The GC well of each stripwell serves two purposes: to assist in GIC adjustment based on the microbial load in the absence of antibiotics, and to provide quality control for eliminating the data set if there is no growth because the microbial load is below the limit of detection. In this study, we developed a tentative algorithm that aims to identify the antibiotic concentration at the inflection point and to adjust this inflection point based on the microbial load determined by the GC signal level; the reported GICs were compared to MICs obtained from reference methods or FDA-cleared systems.

Supporting information: S1 Fig. Signal distribution of negative and growth controls for Fig 3.
Incidence of clinical malaria, acute respiratory illness, and diarrhoea in children in southern Malawi: a prospective cohort study
Background

Worldwide, the three most common causes of death in under-five children are pneumonia, diarrhoea and malaria [1]. In sub-Saharan Africa, an estimated 2.5 million children under five years old die each year from one of these three preventable and treatable diseases [2]. High population density, poor housing and sanitation infrastructure, inadequate knowledge, malnutrition, and poor access to quality health care, education, and employment all contribute to the risk of pneumonia, diarrhoea and malaria in this region [3]. Addressing these contributing factors may reduce the occurrence of, and therefore the morbidity and mortality associated with, these diseases in rural communities. Reducing morbidity and mortality from infectious diseases has potential economic benefits: income is not lost to treatment and death costs, and productivity increases because healthy people are better able to participate in economic activities that improve their lives [4, 5]. In addition, scaling up interventions such as vaccines (pneumococcal and rotavirus), insecticide-treated mosquito nets (ITNs), structural improvements of houses, and nutritional supplementation may further reduce the incidence of these diseases [6-9].

Uptake and effectiveness of existing efficacious health interventions, in all communities, requires community buy-in. Several strategies have been used in the past to improve the uptake and effectiveness of health interventions and to promote positive health behaviour in communities. These include active community engagement and participation; information, education, and communication (IEC); communication for behavioural impact (COMBI); behaviour change communication (BCC); and school-based health education [10-15]. In any setting, health promotion and community engagement strategies should consider factors such as culture, pre-existing beliefs, community structures, literacy levels and education, communication language, transportation options, and the availability of resources for personnel support and effective programme operation [16].

A community participation strategy was used to increase awareness of and participation in malaria prevention and control in Chikwawa, southern Malawi [17]. This strategy included activities such as leadership training for local volunteers, known as health animators, who were tasked with conducting IEC on malaria and involving the local community in the implementation of malaria control strategies using existing community structures. The goal of this community-based approach was to influence a change in mind-set of the entire community, promoting self-reliance through an understanding of malaria and its control [18, 19]. The approach was led by and integrated into the community through capacity building in malaria prevention and control, using regular community malaria workshops and sensitization [17]. Complementary malaria control strategies, such as house improvement (HI) and larval source management (LSM), were introduced in the community in addition to the government-recommended ITNs to achieve a significant reduction in malaria burden [20].

This study aimed to determine the trend in incidence of clinical malaria, acute respiratory infections (ARIs), and acute diarrhoea in under-five children residing in rural communities where a community-based malaria prevention strategy, using HI and LSM, was implemented in addition to government-recommended ITNs.
The study also identified risk factors associated with clinical malaria, ARI and diarrhoea. Investigating these conditions within the under-five population in a rural setting could support a more holistic approach to the treatment and control of all three conditions, especially in sites that are remote and hard to reach.

This was a prospective cohort study of children aged 6-48 months. Study participants were recruited from households sampled in a rolling malaria indicator survey (rMIS) [21, 22]. The study was implemented in a rural community surrounding the Majete Wildlife Reserve (MWR) in Chikwawa, southern Malawi, from September 2017 to May 2019. The study area was within the catchment of the Majete Malaria Project (MMP), a community-based malaria control project (Additional file 1: Fig. S1). The surveys were carried out in 65 villages with a total population of about 25,000 people and 6,600 households [21]. The health facilities serving the area consist of one district referral hospital and 26 primary healthcare facilities (village clinics and health centres). Services offered at these facilities include immunization, under-five clinics, maternity care, and an outpatient department; the district hospital offers secondary healthcare requiring hospital admission. The study was conducted in three main sites, referred to as focal areas A, B and C, which represent hubs for community development.

A sample size of 285 children was calculated using the formula N = (z_{1-α/2}/ε)^2, where z_{1-α/2} is the standard normal deviate for the chosen confidence level and ε is the relative precision [23]. The confidence level was set at 90% and the relative precision at 10%; 5% of the total was added to account for attrition.

All children aged 6-48 months in six rMIS [21] household sampling rounds were eligible. Every two months, a sample of 270 households (90 per focal area) was drawn using inhibitory spatial random sampling [24]. These households were selected as described in [19, 25] from a demographic database covering the research area, and surveys were conducted in these households over a 6-to-8-week period in each round, with short breaks (5 to 15 days) between rounds [19, 25]. Surveys were conducted simultaneously in each focal area. In brief, the rMIS involved small teams of study staff, each comprising a nurse and 2-4 research assistants, visiting sampled households [22]. The study team used an electronic structured questionnaire in a mobile-based application, the Open Data Kit (ODK).

The study team consisted of 12 research assistants and three study nurses and underwent a two-day protocol training covering the identification of sampled households, the administration of consent, and the completion of the questionnaire using sick-visit cards. Research assistants booked guardians for recruitment and follow-up, and nurses administered the questionnaire, recorded sick-visit card details, and conducted clinical assessments. Health workers from the health facilities within the study site who were involved in the clinical management of under-five children underwent a one-day orientation on the study procedures. These health workers were trained to record the malaria diagnosis and malaria rapid diagnostic test (RDT) results on the sick-visit cards provided to participants.
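As a check on the stated numbers, the sketch below reproduces the sample-size calculation under the usual reading of the formula: z for 90% confidence is about 1.645, ε = 0.10, and 5% is added for attrition. This is our arithmetic, not the authors' code.

```python
from math import ceil
from statistics import NormalDist

confidence = 0.90
epsilon = 0.10    # relative precision
attrition = 0.05  # inflation for loss to follow-up

z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # z_{1-alpha/2} ~ 1.645
n_core = (z / epsilon) ** 2                          # ~ 270.6
n_total = ceil(n_core * (1 + attrition))             # 285

print(round(z, 3), ceil(n_core), n_total)  # 1.645 271 285
```

The result, 285 children, matches the sample size reported in the text.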
A research nurse recorded details from the health passport onto the sick-visit card if the health worker had not recorded the visit on the sick-visit card; health workers are required to record the details of each clinical consultation in a health passport as part of their routine work.

At the recruitment visit, a questionnaire was administered to a consenting head of the household or the child's guardian. The age and gender of the study participant were recorded. Data on the features of the house, including the main materials of the walls, roof and floor and the presence of eaves, were obtained and validated. In addition, ownership of ITNs and household items was recorded. A clinical assessment, including temperature and anthropometric measurements, was conducted. Children with signs and symptoms suggestive of malaria were tested using an RDT (SD Bioline Malaria Ag Pf HRP-2; Standard Diagnostics Inc., Gyeonggi-do, Republic of Korea), and haemoglobin (Hb) levels were measured using HemoCue 301 machines (HemoCue, Ängelholm, Sweden). Children with uncomplicated malaria were prescribed the first-line treatment of artemether-lumefantrine (AL), while those with Hb levels below 11 g/dl were referred to the nearest health facility for care. Recruited participants were given a sick-visit card, which was pinned to their health passports.

Study personnel visited the participants' households every two months for 12 months to collect and replace sick-visit cards. The follow-ups at months 2, 4, 8, and 10 involved the collection and replacement of sick-visit cards; children who had malaria symptoms during these visits were tested for malaria by RDT. All children were screened for malaria symptoms at the 6- and 12-month follow-up household visits, and only symptomatic children were tested for malaria by RDT. Malaria treatment was given to participants who tested positive, and the information was recorded on the sick-visit card as a clinical malaria case. Clinical events documented by health workers at health facilities as a result of acute diarrhoea and ARIs were transcribed from the sick-visit card by research nurses. After the data on the sick-visit card were cross-referenced with the data in the health passport, they were entered electronically into a tablet and sent to a remote server via an internet connection.

Clinical malaria was defined according to the national malaria control programme's revised guidelines for the treatment of malaria in Malawi [26] as signs and symptoms suggestive of malaria together with a positive RDT result at a sick visit or at the 6- or 12-month study visit. Symptoms of malaria listed in the revised guidelines include fever, vomiting, headache, and malaise [26]. For children diagnosed with clinical malaria more than once within 14 days, only the first diagnosis was counted. Acute diarrhoea was defined according to the World Health Organization (WHO) as a clinical syndrome with acute onset of three or more loose or liquid stools in 24 h, lasting several hours or days [27]. ARIs, classified as upper respiratory tract infections (URTIs) and lower respiratory tract infections (LRIs), are clinical syndromes involving the upper and lower airways; they were also defined according to WHO.
URTIs were defined as a clinical syndrome characterized by the sudden onset of fever and cough or sore throat. Among LRIs, pneumonia was the condition of interest, defined as a clinical syndrome with at least one of the following: fever or cough, shortness of breath, and chest pains, with appropriate antibacterial therapy initiated or recommended [28].

Three HOBO weather stations (Onset Computer Corporation, MA, USA), one in each focal area, measured hourly rainfall in millimetres (mm), temperature in degrees Celsius (°C), and relative humidity as a percentage. Monthly average temperature and relative humidity, together with total rainfall, were calculated from the weather data. Monthly averages of the weather components from the three stations were combined to characterize the overall weather for each month, which was compared against the monthly incidence data.

R version 4.0.2 was used to analyse the data. For the overall incidence rate of clinical malaria, the total follow-up time was calculated as the total time in years between recruitment and study exit, whether at the end of the one year, loss to follow-up, relocation, or withdrawal. To account for the time children were not at risk of clinical malaria, 14 days were subtracted from the child-years of follow-up for each case of clinical malaria that occurred. Univariate analyses were conducted for continuous variables such as age; medians with inter-quartile ranges (IQR) were reported for continuous variables, and proportions for categorical variables. Principal component analysis (PCA) was used to create a wealth index from ownership of livestock (cattle, goats, sheep, chickens, etc.) and other assets (mobile phones, radios, televisions, beds, bicycles, toilet type, sofas, etc.). Each asset was assigned a score factor depending on its standard deviation (an asset owned by all households, or by none, would have a minimal score). The incidence rate was calculated by dividing the total number of clinical outcomes by the time at risk. Incidence rates by a priori variables, such as age, focal area and intervention arm, are reported and compared. Comparisons of rates are reported as incidence rate ratios (IRR), calculated as the incidence rate in a particular group divided by the incidence rate in a comparison or reference group.

The College of Medicine Research and Ethics Committee (COMREC) reviewed and approved the protocol (certificate number P.11/14/1658). The study conformed to the principles of human subjects' protection. Before data collection, communities were sensitized and informed of the purpose of the study. The district health office and all health facilities were engaged and supported the implementation of the study. Consent was obtained from the parents and legal guardians of the children.

A total of 281 children were recruited out of 325 children from sampled households: 29 children were aged over 48 months and 15 guardians refused consent. Seven of the 281 children had incomplete data and were excluded from the analysis; the remaining 274 children from 254 households had complete data and were included (Fig. 1). Of these, 250 (91.2%) children completed all 12 months of follow-up. Overall, 24 participants (8.8%) were lost to follow-up, some of whom had withdrawn consent while others had relocated out of the study catchment area.
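The person-time bookkeeping described above (count only the first episode within 14 days, subtract 14 not-at-risk days per malaria case, then divide cases by child-years at risk) is easy to misread, so a small sketch may help. Episode records and helper names are hypothetical; the published totals (110 cases over 235.7 child-years of follow-up) are used only as a cross-check.

```python
from datetime import date

def dedup_episodes(dates, window_days=14):
    """Count only the first clinical malaria diagnosis within any 14-day window."""
    kept = []
    for d in sorted(dates):
        if not kept or (d - kept[-1]).days > window_days:
            kept.append(d)
    return kept

def child_years_at_risk(follow_up_years, n_cases, not_at_risk_days=14):
    """Subtract 14 not-at-risk days from total follow-up time per case."""
    return follow_up_years - n_cases * not_at_risk_days / 365.25

def incidence_rate(cases, person_years):
    return cases / person_years

# Hypothetical child: two diagnoses 7 days apart count as one episode.
eps = dedup_episodes([date(2018, 1, 3), date(2018, 1, 10), date(2018, 2, 1)])
print(len(eps))  # 2

# Cross-check against the published totals.
pyar = child_years_at_risk(235.7, 110)
print(round(pyar, 1))                        # 231.5, matching the reported value
print(round(incidence_rate(110, pyar), 2))   # 0.48, i.e. ~0.5 cases/child-year

# IRR with made-up group totals: rate in group A over rate in reference group.
print(round(incidence_rate(60, 80.0) / incidence_rate(30, 90.0), 2))  # 2.25
```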
The median age of the children was 25 months (IQR 16.0-35.0), with 50.7% of the children aged between 24 and 48 months; 52.6% were female (Table 1). The mean Hb was 11.0 g/dl, and 47.7% of the children had Hb above 11.0 g/dl. There was a high prevalence of stunting (33.5%). A total of 12 children tested positive for malaria by RDT at recruitment, and 58.7% of households reported owning at least one ITN for sleeping.

A total of 110 malaria cases were recorded from the 274 participants through passive case detection. The total duration of follow-up was 235.7 child-years, of which 231.5 were child-years at risk (Additional file 1: S1). The overall incidence of clinical malaria was 0.5 cases per child-year at risk (Table 2). The incidence of clinical malaria was highest in focal area B and in the HI and LSM intervention arm. The incidence of clinical malaria was highest in children aged 24.0-59.9 months, at 0.6 cases per child-year at risk. Of the 274 children, 35.0% had at least one clinical malaria case.

Malawi has three annual seasons: hot-wet (November to April), cold-dry (May to July), and hot-dry (August to October). In this study, there was no significant difference in the weather pattern across the focal areas. Figure 2 shows the pattern of malaria within the study period from September 2017 to April 2019. As shown in Fig. 2, there was an initial rise in malaria incidence in December 2017 (0.18 clinical malaria cases per child-year at risk), with the first peak in January 2018 (1.4 cases per child-year at risk). This was followed by a decline in February and a small rise in March (0.8 cases per child-year at risk). The highest peak was reached in May 2018 (1.4 cases per child-year at risk), after which incidence dropped, reaching its lowest value in November 2018 (0 cases per child-year at risk). A minor peak occurred a year later, in January 2019 (0.3 cases per child-year at risk). The malaria peaks immediately followed the peaks in rainfall and relative humidity (RH). The mean monthly temperature varied between 20 and 30 °C. There were two hospital admissions due to severe malaria over the entire study period, representing an incidence rate of 0.01 cases per child-year at risk. No hospitalizations occurred due to other conditions, and no deaths were recorded during this period.

A total of 47 acute diarrhoea cases were recorded over the 235.7 child-years of follow-up (Table 3). The overall incidence of diarrhoea was 0.2 cases per child-year at risk and was highest among those aged 12.0-23.9 months. There were 66 cases of URTI, with an overall incidence of 0.3 cases per child-year. The incidence of pneumonia was 0.3 cases per child-year, with 65 cases observed; the 12.0-23.9 and 24.0-59.0 months age groups were at highest risk. Figure 3 shows the monthly variation in the incidence of ARIs and diarrhoea. The first rise in the incidence of URTI and diarrhoea was observed in January 2018, which subsided and then rose again from March 2018. The second and highest peak in incidence for URTI, pneumonia and diarrhoea was in April 2018. A partial rise in pneumonia was also observed in September 2018, which then declined markedly.
Peaks in pneumonia and URTI immediately followed drops in temperature.

In this prospective cohort study in a rural area with a community engagement initiative on malaria prevention, the overall incidence rates of clinical malaria, ARIs and diarrhoea were each found to be below one case per child-year at risk, with significant temporal variation in the incidence of all three diseases. The temporal measurements show when cases of malaria, ARIs and diarrhoea occurred. Previous estimates of malaria incidence within the same area are available; however, no previous reliable estimates of ARI and diarrhoea incidence from cohort studies are available for this area, or for Malawi in general. Estimates of incidence based on data from low- and middle-income countries (LMICs) suggest an impending epidemic of malaria and pneumonia in these countries [28, 30], and reports of deficiencies in care raise the concern that young lives are being unnecessarily lost.

Findings in this study suggest that the incidence of clinical malaria was less than half that of a previous study conducted between 2015 and 2016 in the same area (0.5 vs 1.2 cases per child-year at risk) [31]. Before 2016, household ownership of ITNs in the study site was 29%, with the majority having damaged ITNs or none at all [25]. During the current study, a mass ITN distribution campaign, community participation and mobilization, and community-led malaria interventions had been implemented [19, 32]. Studies from Kenya, Rwanda and Uganda have shown the significance of community participation in the formulation of appropriate measures for malaria control [33, 34]. These interventions may have contributed to the decrease in malaria incidence. However, it should be noted that household ownership of at least one ITN in this community during the study (58.7%) was lower than the 2017 national aggregated estimate (82%) [35]. Despite this disparity, these figures demonstrate a remarkable improvement in ITN ownership at both the community and national levels compared with the previous study [31]. Between 2015 and 2018, the incidence of malaria in Malawi stagnated between 214 and 217 per 1,000 population at risk despite an increase in national ITN coverage [30].

[Table 2. Number of clinical malaria cases, incidence rates and incidence rate ratios (IRR) in the follow-up study. Calculations, including the IRR, follow Rothman et al. [29]; comparison groups (area, intervention arm, age group) are labelled "Reference". Child-years at risk were calculated by subtracting 14 days from total child-years for each malaria case and include the follow-up of children who did not complete 12 months.]

Despite this intervention's enormous success, residual malaria transmission cannot be addressed by ITNs or indoor residual spraying (IRS) alone, even at very high coverage [36, 37]. Their long-term viability is jeopardized by a widespread increase in insecticide resistance in the target species [38, 39], which could be the case in the Malawian scenario, where clinical malaria incidence has stagnated despite an increase in ITN coverage. The combined effect of existing interventions with novel strategies involving environmental management, such as LSM, and socio-economic development through house improvement provides a non-insecticidal, complementary approach to increasing protection against mosquito bites [40, 41].
These supplementary interventions could help to halt malaria transmission by reducing and preventing human-vector contact within residential areas. District-level malaria incidence estimates are, however, unavailable; they would have provided a useful comparison.

Malnutrition is one of the possible risk factors for malaria infection among the under-five population in this rural area. At recruitment, some children in the area were found to be stunted, underweight or wasted, conditions that increase children's susceptibility to infection. The presence of anaemia could be another contributing factor. The common causes of anaemia in Malawian under-five children include malaria; deficiencies in iron, folate, and other micronutrients; intestinal worms; and sickle cell disease. Though not specific to malaria, trends in anaemia prevalence can reflect malaria morbidity and have been shown to respond to changes in the coverage of malaria interventions [42]. Fewer cases of anaemia were recorded in this study than in the previous study [31], the majority being mild, suggesting that the intervention coverage that began in 2016 may have had an impact on malaria, resulting in fewer anaemia cases.

Clinical malaria was unevenly distributed among children across age groups, focal areas and intervention arms. There were 194 (70.8%) children who did not experience any clinical malaria; 80 children had a total of 110 clinical malaria infections, with some experiencing repeated infections. The majority of these cases occurred in focal area B. There are several reasons why focal area B had more malaria cases and repeated infections than the other focal areas. Firstly, the Shire River, the largest river in Malawi, flows through the district, including focal area B, promoting Anopheles mosquito proliferation [43]. Furthermore, the agricultural and economic activities conducted in the area create multiple mosquito larval habitats, including cattle hoof prints, rice paddies, brick-pits, and wells [43]. The high number of repeated malaria infections in focal area B suggests the presence of a hotspot with higher malaria transmission than the surrounding areas [44]. A malaria transmission hotspot is defined as a geographical area within a malaria transmission focus where transmission intensity surpasses the average level [44]. Hotspots serve as foci for malaria transmission in most areas, particularly those undergoing malaria elimination, highlighting the importance of establishing targeted control within these areas. In this area, this could be accomplished through targeted interventions to reduce the human infectious reservoir, such as reactive screening and treatment of individuals diagnosed with malaria at health facilities [45]; proactive case detection, which involves screening people in hotspots at regular intervals [45]; and mass drug administration, in which a full therapeutic dose of drugs is administered to a population without prior screening. Targeted vector control activities, such as increasing ITN and IRS coverage [46, 47] and larviciding [48] of mosquito breeding sites, can also be carried out; however, these interventions are laborious and costly in this context. In this study, 29% of households reported having open eaves, a decline of 10% from the previous study.
In this study, 29% of households reported having open eaves, a decline of 10% from the previous study. Closing open eaves in houses has been shown in previous studies to reduce mosquito entry and anaemia in children [9, 49, 50]. The general reduction of incidence in this area could be attributed to the fact that most houses had closed eaves during the community-led implementation, limiting infectious bites from malaria mosquitoes. In this study, malaria incidence in southern Malawi was observed to peak during and immediately after the hot-wet season. An increase in the number of mosquito breeding sites during and after the rains is likely the main contributor to the rise in incidence in this area. A similar pattern of malaria incidence was observed in a previous study in the same area [31]. Various studies support the significance of weather factors in malaria: temperature and humidity affect mosquito breeding, larval development, and malaria parasite development [51].

The incidence of diarrhoea was low, with the highest incidence in children aged 12.0-23.9 months. Although 87% of children had no episodes of acute diarrhoea, 36 children had a total of 47 episodes, with only a few having two or more. In contrast, previous reviews have reported that the highest burden of disease has remained in the same age group for the past 30 years (i.e., 6.0-11.0-month-olds, both globally and in the region) [52], with incidence declining as age increases. For instance, in 2010, in the WHO Africa region, the incidence rate among children aged 12.0-23.9 months was 4.2 episodes per child-year, compared to 2.7 episodes per child-year in the 24.0-59.0 months age-group. The rise in the incidence of diarrhoea at 12.0-23.9 months of age can be explained by the likely introduction of contaminated weaning foods by most mothers during this period [53]. There is normally a sequential decrease in risk after the 6-11 months age category, attributed to the development of immunity following repeated exposure to pathogens [53]. This generally accords with the findings of this study, which show a decrease in the incidence of diarrhoea with advancing age.

An important finding from this study is that ARIs displayed seasonal trends in incidence, peaking during the cold dry season, suggestive of increased susceptibility to respiratory infections at that time of year. ARIs may occur more often among under-five children because of their anatomical structure [54, 55]: children in this age category are still developing organs such as the lungs and have relative immune immaturity, which makes them more vulnerable to infection [54, 55].

This study was not without limitations. Some cases of malaria may occasionally have been missed if health care workers did not record a sick visit on the sick-visit card provided or in the health passport, for example during emergencies or when a guardian had forgotten the child's health passport and sick-visit card. A larger sample size and a much longer follow-up period would have been preferable; a period of one year was not adequate for the follow-up of these conditions. Another limitation was that the trial was not powered to interpret incidence by trial arm. Furthermore, a generalized linear mixed model accounting for within- and between-cluster effects was not fitted.
As a result, the precision of the IRR estimates may be slightly affected (Additional files 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14).

Considering that the current study area is rural, the findings of this study show that the incidence of clinical malaria, ARIs and diarrhoea was low. Compared with a pre-intervention malaria incidence study, malaria incidence in under-five children had significantly declined. Integrating malaria control strategies as a community participatory initiative has the potential to reduce malaria incidence, as it allows a better understanding of the disease and of the ways it can be controlled, as shown in this community. However, community engagement needs to be as comprehensive as possible, incorporating other major causes of childhood morbidity, such as ARIs and diarrhoea, to improve understanding of all these diseases. This study highlights the need for more comprehensive studies on malaria, ARIs and diarrhoea to develop more effective interventional strategies to prevent and treat these conditions, which impose a significant public health and socio-economic burden in resource-limited countries. Monitoring the epidemiology of these conditions in children may aid in the planning of local service expansion, educational programmes, and preventive measures.
|
Russo-Ukrainian war: An unexpected event during the COVID-19 pandemic
|
Ukraine's significance stems from the fact that it is located between Central Europe and Russia and plays a vital role in regional stability (Figure 1A). Besides oil, gas, and minerals,
|
Egyptian Rousette IFN-ω Subtypes Elicit Distinct Antiviral Effects and Transcriptional Responses in Conspecific Cells
|
Bats comprise about 20% of all classified mammal species, with over 1,200 species, and host a number of viruses known to cause severe disease in humans. While humans develop severe and life-threatening illnesses from many of these viruses (e.g., henipaviruses, SARS and MERS coronaviruses, and filoviruses), bats show no symptoms of disease in natural or experimental infections (1, 2). The adaptations (in host or virus) that allow bats to host emerging viruses without developing symptoms of disease are not yet known.

Type I interferons (IFNs) are an important component of the early antiviral immune response and make up a diversified multi-gene family that includes subtypes such as α, β, δ, ω, and ε (3). Type I IFNs are induced by the recognition of viral pathogen-associated molecular patterns (PAMPs) and act by inducing interferon-stimulated genes (ISGs) that collectively contribute to an antiviral response (4, 5). All type I IFNs bind to and signal through the same heterodimeric receptor complex IFNAR1/2, but both evolutionary analyses and functional studies suggest that multiple IFN subtypes make non-redundant contributions to immunity (6-10). Although the exact functional contribution of each IFN is not completely understood, differences in the interaction of various IFN subtypes with IFNAR1/2 are known to differentially induce downstream ISGs (10-12). As a result, differences in pathogen-specific antiviral effect are possible, depending on the amount and profile of ISGs induced by a particular IFN.

The importance of type I IFNs in innate antiviral responses and in bridging innate and adaptive immune responses has sparked interest in exploring this pathway in several bat species. Due to the lack of bat-derived IFNs, much of the work to analyze IFN responses in bats thus far has been done using universal interferon (UIFN; a pan-species type I IFN derived from two human IFN-α subtypes) or cell culture medium from stimulated bat cells as a surrogate for authentic bat IFN (13-15). More recently, bat IFN responses have been explored using recombinant bat IFN-α or -β, but additional bat IFN subtypes remain poorly characterized (16-18).

We have previously shown that the type I IFN locus is expanded in the Egyptian rousette (R. aegyptiacus), an asymptomatic host of Marburg virus (MARV) (19). Whereas humans have a single IFNW gene, almost half of the Egyptian rousette IFN genes belong to the IFN-ω subtype. The functional relevance of this expansion is not known. In humans and other species, IFN-ω is induced by viral infection and has potent antiviral activity against various RNA viruses, including vesicular stomatitis virus (VSV), bovine viral diarrhea virus, yellow fever virus, West Nile virus, and influenza A virus (20-24). Multiple subtypes of porcine IFN-ω are expressed after viral infection and show dramatic differences in activity despite very few single nucleotide polymorphisms (23). The ISGs induced specifically by these IFN-ω proteins, however, are not known. To begin to understand the role of these genes in the immune response to viruses in bats, we synthesized and purified recombinant Egyptian rousette IFN-ω proteins, characterized their antiviral potency and efficacy against VSV and MARV, and examined the downstream ISGs they induce.

Although the Egyptian rousette IFN-ω subfamily has 22 members, many of these genes fall into clusters of highly similar genes.
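Similarity grouping of this kind comes down to pairwise amino acid identity computed over an alignment. A minimal sketch follows; the short dummy sequences stand in for the 22 aligned rousette IFN-ω proteins.

```python
# Sketch: pairwise amino acid identity over aligned sequences, the quantity
# behind clade thresholds such as those quoted below. Sequences are dummies;
# real input would be the aligned rousette IFN-omega proteins.
from itertools import combinations

aligned = {
    "IFNW_a": "MALLFP-LLVALVVLSC",
    "IFNW_b": "MALLFPQLLVALVVLSC",
    "IFNW_c": "MSLVYPQILGA-VVMSC",
}

def pairwise_identity(s1: str, s2: str) -> float:
    """Fraction of identical residues over columns where neither sequence is gapped."""
    pairs = [(a, b) for a, b in zip(s1, s2) if a != "-" and b != "-"]
    return sum(a == b for a, b in pairs) / len(pairs)

for (n1, s1), (n2, s2) in combinations(aligned.items(), 2):
    print(f"{n1} vs {n2}: {pairwise_identity(s1, s2):.3f}")
```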
Phylogenetic analysis shows that the 22 IFNs are divided into five distinct groups: two large clades, each containing IFNs with >0.847 amino acid pairwise identity; one pair of IFNs sharing 0.989 identity with each other and <0.681 with any other; and two single IFNs with a maximum homology of 0.832 to any other (Figure 1A). The IFNAR1 and IFNAR2 binding sites in all human type I IFNs, including IFN-ω, have been well-characterized (12). We annotated the broad collection of residues that participate in receptor binding on all 22 proteins using the NCBI conserved domain database search. While proteins within a clade have high identity at these receptor-binding sites, the sites are much less conserved between clades (Table 1).

Although all human type I IFNs bind to the same receptor complex, their downstream signaling upon binding differs among types and subtypes. Crystal structure analysis of two human type I IFNs bound to their receptor has led to a model that connects differences in receptor recognition and conformation to differences in downstream signaling (12). According to this model, ligand residues that interact with the receptor fall into three groups. First, there are conserved "anchor" residues, which are identical among all or most human IFN subtypes. Second, there are conserved "modulating" residues, identical among all or most human IFN subtypes, which when mutated change the energetics of the ligand-receptor interaction and lead to functional changes. Third, there are "ligand-specific" residues that vary greatly among human IFN subtypes (12). We examined several rousette IFN-ω subtypes using this classification and compared residues identified as functionally important. The "anchor," "modulating," and "ligand-specific" residues are overall fairly conserved between the human protein and at least one of the bat IFN-ω proteins (Figure 1B). However, among the bat proteins there is noticeable diversity in the "ligand-specific" residues. Additionally, by definition, conservation of the "modulating" residues does not guarantee an energetically equal reaction with the receptor subunits. Together, this suggests that the five rousette IFN-ω groups are likely to react with the receptor with different kinetics and affinities, potentially resulting in different downstream effects.

To examine whether Egyptian rousette IFN-ω proteins retain the canonical function of type I IFNs as antiviral proteins, we expressed two recombinant IFN-ω proteins (rIFN-ω4, rIFN-ω9) containing a C-terminal histidine tag (6x-His) in 293F cells and purified the proteins from cell supernatants as previously described (19). As a negative control, we included an unrelated 6x-His-tagged protein (rD1) of similar size. We tested the antiviral efficacy of these recombinant proteins against VSV, using UIFN as a positive control. As expected, VSV replication was not inhibited in untreated cells or cells treated with rD1, whereas there was significant inhibition of VSV replication in cells pretreated with UIFN (Figure 2). Although both rIFN-ω4 and rIFN-ω9 showed antiviral activity against VSV, this effect was more pronounced for rIFN-ω9, which was effective at concentrations 100-fold lower than rIFN-ω4 after 4 h of treatment, and at even lower concentrations after 8 h of treatment (Figure 2).
Figure 1 (caption): Predicted signal sequences were cleaved for each protein. Annotations and putative receptor-binding sites are based on the structure of the human IFN-ω-IFNAR1/2 complex (12). Residues important for interacting with IFNAR1 are highlighted in green, and residues that interact with IFNAR2 are highlighted in orange. Black stars indicate conserved residues that help anchor human IFNs to receptor subunits, and blue stars indicate conserved residues that influence the energetics of receptor binding. All residues highlighted as interacting with IFNAR1 or IFNAR2 but without stars are considered "ligand-specific" according to the model in Thomas et al. (12).

We treated RoNi/7.1 cells with UIFN, or with three different concentrations of rIFN-ω4, rIFN-ω9, or rD1, for 4 or 8 h and collected RNA for mRNA sequencing (RNA-Seq) and differential gene expression analysis. We first compared the effect of treatment on the mean expression of each gene via an ANOVA-like test for differential expression. For each gene rejecting the null in the ANOVA analysis (ANOVA FDR < 0.05) (Figure 3A), every IFN treatment condition was contrasted with the appropriate control treatment: rIFN-ω-treated samples were compared to rD1-treated samples at the same concentration and time point, and UIFN-treated samples were compared to untreated samples at the same time point in a pairwise analysis. The total numbers of genes that passed our pairwise reporting criteria (2-fold expression change or greater and Bonferroni-corrected p < 0.05) are shown in Figure 3B and Table S1. Most of the differentially expressed genes were upregulated relative to the corresponding control, and there were very few downregulated genes across all conditions. This predominance of upregulation has also been seen with UIFN treatment of cells from the black flying fox (Pteropus alecto) (14). IFN concentrations that were not observed to be antiviral in our VSV-eGFP assay induced very few genes (Figures 2, 3B). For example, 0.01 ng/mL of rIFN-ω9 induced only six genes after 4 h and 14 genes after 8 h (Table S1). In contrast, IFN concentrations that blocked VSV-eGFP replication induced many more genes. For a given concentration, many genes were induced at both time points (Figure 3C). At a given time point, almost every gene induced at a low IFN concentration was also induced at a high concentration, the main exception being the 4 h rIFN-ω9 treatment, where 61 genes were induced by 1 ng/mL but not by 100 ng/mL (Figure 3D and Table S1).

Given the observed difference in antiviral activity between IFN-ω4 and IFN-ω9, we next compared the ISG expression profile induced by each IFN (Figure 4 and Table S1). At both time points, 1 ng/mL of IFN-ω9 induced many more genes than 1 ng/mL of IFN-ω4, and there were very few genes induced only by IFN-ω4 (Figure 4A). A higher concentration of IFN-ω4 induced additional genes, though only two of these genes were unique to IFN-ω4 treatment. When all time points and concentrations were combined, IFN-ω4 treatment induced only five unique genes, while IFN-ω9 treatment induced 54 unique genes.

We next compared the expression levels of the genes induced by both IFNs by performing a pairwise comparison between expression in IFN-ω4- and IFN-ω9-treated samples at a given time point and concentration (Figure 4B). At a low concentration of 1 ng/mL, IFN-ω9 treatment resulted in greater expression of all the genes that were induced by both IFN-ωs at both 4 and 8 h of treatment. In contrast, at a high concentration of 100 ng/mL, the change in the expression ratio was similar between IFN-ω4 and IFN-ω9 treatments (Figure 4B). This suggests that at high concentrations, the ISG responses to the two IFNs may be interchangeable.
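The concentration and time-point comparisons just described reduce to set algebra over per-condition lists of induced genes. A toy sketch with placeholder gene names, not the study's actual lists:

```python
# Sketch: the concentration and time-point comparisons above are set operations
# over per-condition lists of induced genes. Gene names are placeholders.
induced = {
    ("IFNw9", "1ng", "4h"):   {"IFIT1", "ISG15", "MX1", "OAS1", "GBP2"},
    ("IFNw9", "100ng", "4h"): {"IFIT1", "ISG15", "MX1", "OAS1", "IRF7"},
    ("IFNw9", "1ng", "8h"):   {"IFIT1", "ISG15", "MX1", "IRF1"},
}

low, high = induced[("IFNw9", "1ng", "4h")], induced[("IFNw9", "100ng", "4h")]
print("induced at low but not high concentration:", sorted(low - high))
print("shared between concentrations:           ", sorted(low & high))

h4, h8 = induced[("IFNw9", "1ng", "4h")], induced[("IFNw9", "1ng", "8h")]
print("induced at both time points:             ", sorted(h4 & h8))
```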
We also examined the change in expression over time at a given concentration of IFN (Figure S1). At a low concentration, genes induced by IFN-ω4 appeared to increase in expression over time. In contrast, genes induced by IFN-ω9 began at a higher expression level at 4 h, and many had reduced expression at 8 h, suggesting that a peak response may already have been achieved. At a high concentration, the kinetic profiles of both IFN-ωs were very similar, and many genes had lower expression at 8 h than at 4 h. This is consistent with an early peak response followed by subsequent downregulation, though additional time points and concentrations would be needed to examine the kinetics in detail.

To explore whether additional IFNs may provide a host advantage by inducing unusual ISGs, we compared the genes induced by the IFN-ωs and by UIFN (Figure 5). Both IFN-ωs and UIFN induced a familiar panel of ISGs, including pathogen sensors (DDX58, IFIH1, CGAS, ZBP1) and antiviral ISGs like IFIT1, IFIT2, Mx genes, ISG15, and OAS genes. As part of a positive-feedback loop, IFN treatment upregulates the expression of interferon regulatory factor (IRF) genes, which are transcription factors for further IFN induction. Both IFN-ωs induced several IRFs, including IRF1, 2, 4, 7, 8, and 9, while UIFN induced only IRF4, 7, and 9.

We compared the genes induced by each IFN-ω and by UIFN with those in multiple ISG databases to determine how many of them are known to be type I IFN-inducible (Table 2 and Table S2). Since these databases are composed of data from other species (27), we excluded any MHC genes from the analysis, as these genes evolve in complex ways and homology is inherently uncertain among species. We cross-referenced the remaining genes with data from (1) the Interferome, a database of ISGs from a wide variety of human and mouse studies (27); (2) a recent analysis of the IFN response in ten different species (25); and (3) ImmGen, a database of ISGs from a variety of immune cells (26). In our hands, more than 95% of the genes induced by UIFN were found in at least one of these three databases. In contrast, of the 358 genes upregulated by either IFN-ω at any time point or concentration, 87.4% (313 genes) were found in at least one database, and 12.6% (52 genes) were not. These percentages are slightly lower than those found in similar studies with UIFN and recombinant IFN-α3 in the black flying fox (14, 17), partly because we used multiple databases that capture ISGs across a number of species and cell types, and partly because of differences in the total number of upregulated genes in those studies.

Of the 52 genes not previously known to be IFN-inducible, 22 were completely uncharacterized by the NCBI annotation pipeline. However, given that the Egyptian rousette genome was itself annotated by this pipeline, which uses all available genomes in GenBank and RefSeq to produce annotations, a more comprehensive examination will be required to determine whether these previously uncharacterized ISGs are genes thus far unique to the Egyptian rousette.
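Computationally, the database cross-referencing just described is a membership test against the union of several reference sets. A toy sketch; the gene lists and database contents here are placeholders, not the actual Interferome, Shaw et al., or ImmGen data:

```python
# Sketch: cross-referencing induced genes against known-ISG databases and
# reporting the fraction found in at least one. All contents are placeholders.
upregulated = {"IFIT1", "ISG15", "MX1", "GBP2", "GIMAP4", "NOVEL1", "NOVEL2"}
databases = {
    "Interferome": {"IFIT1", "ISG15", "MX1", "OAS1"},
    "Shaw2017":    {"IFIT1", "MX1", "GBP2"},
    "ImmGen":      {"ISG15", "GIMAP4"},
}

known = set().union(*databases.values())
found = upregulated & known
print(f"found in >=1 database: {len(found)}/{len(upregulated)} "
      f"({100 * len(found) / len(upregulated):.1f}%)")
print("not previously known to be IFN-inducible:", sorted(upregulated - known))
```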
Many of the 313 genes known to be IFN-inducible were paralogs of canonical ISGs, especially GTPase-related families, among them Mx genes, guanylate binding proteins (GBPs), GTPase IMAP family members (GIMAPs), and interferon-induced very large GTPases (GVINs). Humans have two functional Mx genes and seven functional genes each in the GIMAP and GBP subfamilies. GVIN1 is an interferon-inducible gene in other species but only a pseudogene in the human genome. IFN-ω4 or -ω9 treatment of RoNi/7.1 cells led to the induction of nine different GIMAPs, five GBPs, and three GVIN genes (Figure 4C and Table S1). In contrast, UIFN induced only three interferon-inducible GTPases other than Mx genes, suggesting that IFN-ωs may be able to induce a more diverse ISG response by tapping into expanded families of GTPase-related genes. Consistent with this hypothesis, the bat IFN-ωs induced all but one of the ISGs that were induced by UIFN, as well as additional ISGs that were not induced by UIFN (Figure 5). Fifty-four genes were expressed only with IFN-ω9 treatment, while five were induced only with IFN-ω4 treatment. These results indicate that UIFN does not appropriately represent the ISG response of bat-specific IFNs.

It has been reported that IFN-α treatment of black flying fox cells led to high expression of RNASEL, an RNase that degrades cellular and viral RNA after activation via the 2′,5′-oligoadenylate synthetase (OAS) family of nucleic acid sensors (14, 17, 28). Surprisingly, neither UIFN nor the IFN-ωs led to RNASEL induction in our experiments. This may be due to differences in species, cell type, or treatment time and concentration. A higher basal (rather than inducible) expression of RNASEL was observed in Egyptian rousette cells (29), which could further support the hypothesis of a species-specific difference among bats. A number of ISGs have been shown to have antiviral activity against filoviruses, including tetherin, zinc finger antiviral protein (ZAP), interferon-induced transmembrane (IFITM) proteins, and ISG15 [reviewed in (30)]. While ZAP was not induced under any condition, ISG15 expression was induced by both IFN-ωs and UIFN, and tetherin and IFITM3 were induced only by IFN-ω treatment.

To determine whether IFN-ω proteins could protect against MARV infection, RoNi/7.1 cells were treated with rIFN-ω4, rIFN-ω9, rD1, or UIFN for 18 h and infected with MARV Angola or MARV Musoke at an MOI of 3. One day post-infection, the cells were fixed and the infection rate was quantified by immunofluorescence. Consistent with the results of the VSV bioassay, both IFN-ω4 and IFN-ω9 significantly inhibited MARV replication, with IFN-ω9 exhibiting greater antiviral activity. Surprisingly, treatment with UIFN had much less of an antiviral effect than either IFN-ω (Figure 6).

Although the number of type I IFN genes appears to vary substantially among bat species, many bat genomes encode multiple IFN subtypes beyond IFN-α and -β. These additional genes include single copies of IFN-ε and IFN-κ, which have orthologs in humans, as well as multiple copies of subtypes that exist only in one copy (IFN-ω) or not at all (IFN-δ) in humans (31, 32). In humans, the production and secretion of IFN-ω is induced by viral infection, and human IFN-ω is associated with more potent anti-proliferative and antiviral capabilities than other type I IFNs (21, 24, 33, 34). Based on previous observations of the large expansion of the type I IFN-ω subfamily in the Egyptian rousette (19), we sought to examine members of this family to gain insight into the contributions of these proteins to Egyptian rousette antiviral immunity and to ask whether the considerable duplications in the IFN locus may have functional relevance for antiviral responses.
In species with multiple IFN-ω genes, like Sus scrofa (pig) and Bos taurus (cow), differences in induction and antiviral potency among IFN-ω paralogs have been observed (23, 35-37). Consistent with this work, we found functional differences between two rousette IFN-ω subtypes, supporting the hypothesis that the IFN-ω subtypes are not interchangeable. Differences in antiviral potency among the tested IFNs were dramatic, with 100-fold differences between IFN-ω4 and IFN-ω9. These differences are not explained by a difference in sample purity, since the technique used for isolating recombinant protein yielded preparations that were remarkably pure (Figure S2).

Table 2 (caption): Only upregulated genes were searched against the Interferome database, data from Shaw et al. (25), and data from Mostafavi et al. (26). "Both IFN-ωs" refers to genes upregulated after treatment with either IFN-ω at any concentration and time point. Genes annotated as uncharacterized by the NCBI annotation pipeline were labeled as uncharacterized genes and were included in the total count. (a) MHC class I-like or class II-like genes were excluded since the naming structure of these genes can be species-specific (UIFN: 1 gene, IFN-ω4: 2 genes, IFN-ω9: 7 genes, "Both IFN-ωs": 7 genes).

In general, longer exposure to IFN prior to infection resulted in greater antiviral efficacy, even at low IFN concentrations. This could be explained by a second wave of IFN induction due to positive feedback or by higher concentrations of antiviral ISGs. At low concentrations, IFN-ω9 induced more genes, and higher expression levels of the same genes, than IFN-ω4, which may explain their difference in potency. However, at high concentrations these proteins induced very similar, though not identical, transcriptional responses. The high overlap in differentially expressed genes and the similarity in levels of gene expression between IFN-ω4 and -ω9 suggest that these proteins could be redundant at high concentrations. Nevertheless, both proteins induced a number of unique genes, and these were furthermore distinct from those induced by UIFN. These data reinforce the notion of IFN subtype-specific differences and highlight the importance of using bat-specific IFNs for understanding bat ISG responses.

Among the genes uniquely induced by IFN-ω proteins were multiple paralogs of known ISGs, especially interferon-stimulated GTPases, including Mx, GVIN, GBP, and GIMAP genes. Similar to their counterparts in other species, bat Mx genes limited viral replication in in vitro studies (38). The GVIN family is reduced to a single pseudogene in humans, but GVIN genes are highly expressed in mice after IFN stimulation, though their antiviral function remains uncharacterized (39, 40). The presence and IFN-dependent upregulation of multiple distinct GVINs in the Egyptian rousette and in other bats suggest that these genes do play an antiviral role in bats (14, 17).

GBPs are induced by type I, type II, and type III IFNs, and are mainly known for their GTPase-dependent role as cell-autonomous defenders against bacterial and protozoal infection (41). However, several members of the GBP family in humans and/or mice have been shown to have antiviral activity against VSV, influenza A virus, encephalomyocarditis virus, and retroviruses, including HIV-1 (40-42). GBPs are recruited to pathogen-containing compartments, including viral replication sites, by autophagy-related proteins (43).
Once recruited, they exert a variety of antiviral activities that interfere with various steps of the viral replication cycle within these compartments, coordinate lysis or lysosomal fusion, and activate the inflammasome (44, 45). For example, GBP1 inhibits the delivery of Kaposi's sarcoma-associated herpesvirus virions to the nucleus by interfering with actin filament organization (46). This blocking mechanism could be significant for MARV infection, given that MARV also relies on actin filaments to transport nucleocapsids to the budding sites (47). Given their close association with autophagy-related proteins, GBPs can direct autophagy for pathogen clearance (45), which has recently been shown to be one mechanism by which bats can limit viral infection (48).

There is also compelling evidence that GBPs act in concert, as hetero- and homodimers, and non-redundantly against different viral and bacterial pathogens (49, 50). If this phenomenon is also present in bats, the induction of five different GBPs by IFN-ω subtypes could provide a second tier of antiviral flexibility. In addition to these varied functions, GBPs play a role in inflammasome activation, either by helping to create PAMPs by lysing pathogen-containing compartments, by promoting caspase-11 activation, or both (40). Fewer inflammasome-related genes and a diminished NLRP3-related inflammasome response have been observed in bats compared to other mammals (51, 52); whether the expression of multiple GBPs could compensate for this diminished response with less inflammation remains to be explored.

GIMAPs are involved in the development, maintenance, and homeostasis of lymphocytes, especially CD4+ T cells and B cells, but also T regulatory cells (Tregs) (53-55). In the absence of individual GIMAP members, there is progressive lymphocyte loss leading to lymphopenia, poor cell proliferation, and paradoxical autoimmune states because of impaired Treg function (54-58). A previous gene family analysis showed that the GIMAP family is expanded in the Egyptian rousette compared to a bat ancestor (19), with 14 putative members. It is striking that nine of the 14 GIMAP genes are induced by IFN-ω treatment, with only a single GIMAP induced by UIFN treatment. GIMAPs are also expressed in black flying fox cells after UIFN or bat IFN-α stimulation (14, 17), although not as many individual GIMAP genes were observed to be upregulated in those studies. Given the classical role of these genes in B and T cells, it will be important to examine their induction in bat lymphocytes once reagents are available for this work.

We show that recombinant IFN-ω proteins qualitatively inhibit MARV infection in vitro, with noticeable differences in their antiviral potency. Kuzmin et al. have previously shown that two Egyptian rousette cell lines transfected with either Egyptian rousette IFN-β or a consensus IFN-α are able to resist both EBOV and MARV infection (59), but whether filovirus infection itself induces IFNs in bat cells is unclear. In general, MARV infection seems to suppress immune gene expression in immortalized Egyptian rousette cells, yet studies conflict regarding the extent of this suppression. The EBOV protein VP35, an IFN antagonist, is known to efficiently inhibit IFN induction in vitro in human cells; in contrast, MARV VP35 is a much weaker inhibitor, and MARV infection of human THP-1 cells does lead to an IFN response (60, 61).
Infection of RoNi/7.1 cells with a MARV isolate from a wild-caught bat did not lead to any IFN induction (29). A MARV mutant with an impaired VP35 IFN-inhibiting domain induced only IFN-α subtypes; in contrast, in the same study, Sendai virus induced multiple IFNs, including low levels of IFN-ωs. MARV infection of R06E-J cells (an Egyptian rousette embryonic cell line) led to modest induction of a few ISGs and no detectable IFN induction (62). However, a MARV isolate from a patient in Uganda did lead to significant ISG expression in RoNi/7.1 cells, though IFN-ω genes were not examined (59). These differences may be attributable to different viral strains, cell lines, and time points. Of note, all these studies were performed with immortalized cells. Even if MARV infection of these cell lines does not lead to IFN-ω induction, it is still possible that various primary cell types could serve as a source of IFN-ω in vivo. It has recently been shown that Egyptian rousette dendritic cells infected with a bat isolate of MARV upregulate IFNs and ISGs, though the few IFN-ω genes included in the Nanostring-based study were not reported to be significantly upregulated (63).

In conclusion, we propose that the expansion of the IFN-ω subfamily may contribute to a more flexible antiviral response that could be useful to the host by avoiding excess pathology. Ideally, this hypothesis would be tested with multiple viruses, including other viruses that naturally infect Egyptian rousettes. We provide evidence that recombinant IFN-ωs are effective against MARV infection in vitro. However, it remains to be determined whether our findings pertain to filovirus infection in vivo.

RoNi/7.1 cells were maintained in RoNi cell medium [Dulbecco's Modified Eagle's Medium (DMEM) containing 1% MEM non-essential amino acids solution (100x concentrate), 100 units/mL penicillin, 100 µg/mL streptomycin, and 1 mM sodium pyruvate] supplemented with 10% fetal bovine serum (FBS). Culture conditions for 293F suspension cells (human embryonic kidney cells, ThermoFisher Scientific) have been described previously (19). Vero E6 cells (BEI Resources, Cat NR-596) were maintained in DMEM supplemented with 10% FBS, 100 units/mL penicillin, and 100 µg/mL streptomycin. VSV-eGFP was a gift from Dr. John H. Connor (Boston University School of Medicine) and was propagated as previously described (19). Virus stocks for MARV isolates Musoke (GenBank: NC_001608; BEI Resources) and Angola (GenBank: KR867677.1; BEI Resources) were propagated in Vero E6 cells as described previously (64). The identity of virus-extracted nucleic acids was confirmed by deep sequencing. Virus titers were determined in the same cells by plaque assay as described elsewhere (65). All work with infectious MARV was performed in the BSL-4 facility of the Texas Biomedical Research Foundation, San Antonio, TX.

IFN-ω genes annotated in Pavlovich et al. (19) were examined in BioEdit v7.0.0 (66) and translated into protein sequences within BioEdit. Sequences were aligned with Mafft v7.305b (67) (--auto parameter), and the resulting alignment was trimmed with trimAl v1.3 (-automated1 parameter) (68). The trimmed alignments were used to generate maximum likelihood phylogenetic trees with RAxML v8.2.9 under a JTT + Γ substitution model with empirical base frequencies (69). One hundred bootstrap replicates were used to assess branch reliability. The best-scoring maximum likelihood tree was analyzed in MEGA v7.0.26 (70).
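For readers who want to reproduce a pipeline of this shape, the steps above can be scripted; the sketch below drives the same tools from Python. File names are placeholders, and the exact flags (in particular the RAxML model string) are our reading of the parameters quoted in the text and should be verified against the installed tool versions.

```python
# Sketch of the alignment -> trimming -> tree pipeline described above.
# Flags mirror the quoted parameters (mafft --auto, trimAl -automated1,
# JTT + Gamma with empirical frequencies, 100 rapid bootstraps); verify
# against the installed versions before use.
import subprocess

def run(cmd: list[str]) -> None:
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# mafft writes the alignment to stdout, so capture it into a file
with open("ifnw_aln.faa", "w") as out:
    subprocess.run(["mafft", "--auto", "ifnw_proteins.faa"], stdout=out, check=True)

run(["trimal", "-in", "ifnw_aln.faa", "-out", "ifnw_trimmed.faa", "-automated1"])

# PROTGAMMAJTTF = JTT model + Gamma rate heterogeneity + empirical ("F") frequencies;
# -f a runs rapid bootstrapping (-x seed, -N replicates) plus a best-ML tree search.
run(["raxmlHPC", "-s", "ifnw_trimmed.faa", "-n", "ifnw_tree",
     "-m", "PROTGAMMAJTTF", "-f", "a", "-p", "12345", "-x", "12345", "-N", "100"])
```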
To capture as many receptor-binding sites as possible, each protein was used as input to the NCBI conserved domain database search (https://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi) (71, 72), which relies on previously published work to identify possible IFNAR1 and IFNAR2 binding sites (73-75). The residues at each site (20 sites for IFNAR1 and 27 for IFNAR2) were compared across all Egyptian rousette IFN-ω proteins (every protein compared to every other protein), and the total number of conserved sites divided by the total number of sites (50) is reported in Table 1 for each clade. Protein sequences were also used as input for signal sequence prediction via the SignalP v4.1 server (http://www.cbs.dtu.dk/services/SignalP/) (76). Proteins were then aligned to the human IFN-ω with ClustalW within BioEdit, and the predicted signal peptides (amino acids 1-21) were cleaved. The proteins were then compared to human IFN-ω, and important binding sites as described in (12) were labeled.

Recombinant 6x-His-tagged IFN-ω proteins and D1 were produced and characterized as previously described (19). Briefly, 293F suspension cells were transfected with plasmids encoding pCAGGS/6x-His-IFN-ω4 or -IFN-ω9 (plasmids synthesized by Blue Heron Biotech, Bothell, WA) according to the manufacturer's protocol using FreeStyle MAX reagent (Thermo Fisher). D1 is domain 1 of Bacillus anthracis protective antigen (PA) and was expressed in E. coli BL21 (DE3) cells transformed with pET22b/6x-His-PA-D1. Recombinant IFNs were purified from clarified media using the Capturem His-tagged purification maxiprep kit (Clontech, Takara Bio) and buffer-exchanged into sterile PBS with a Vivaspin 2 protein concentration column (MWCO 10 kDa; GE Life Sciences). Proteins were characterized by Western blot for the 6x-His tag (anti-6x-His tag mouse mAb, Thermo Fisher) and by silver staining for purity (Pierce Silver Stain assay kit; Thermo Fisher), and quantified by Bradford assay (Bio-Rad, Hercules, CA) and by 280 nm absorbance measured on a NanoDrop spectrophotometer.

The VSV antiviral assay was performed as previously described (19). Briefly, RoNi/7.1 cells were seeded at a density of 3 × 10⁴ cells per well in 96-well plates and, 1 day after seeding, were mock-treated or treated with dilutions of rIFN-ω4, rIFN-ω9, rD1, or UIFN in RoNi cell medium supplemented with 10% FBS for 4 or 8 h as indicated. Cells were infected with VSV-eGFP at an MOI of 0.1 or 0.05 (in RoNi cell medium supplemented with 2% FBS) and examined for GFP expression one day post-infection.

For the MARV infection experiments, RoNi/7.1 cells were seeded at a density of 3 × 10⁴ cells per well in 96-well plates ∼18 h prior to treatment. Duplicate wells were mock-treated or treated with 1,000 U of UIFN, 100 ng/mL of rD1, or various dilutions of rIFN-ω4 or rIFN-ω9 in RoNi cell medium supplemented with 10% FBS for 18 h. Cell supernatants were removed, and cells were mock-infected or infected with MARV Musoke or MARV Angola at an MOI of 3. After an attachment period of 1 h at 37 °C, the inoculum was removed and replaced with RoNi cell medium supplemented with 2% FBS, and cells were incubated for 24 h at 37 °C. Cells were then inactivated and fixed with 10% formalin for 16 h, washed, and stored in PBS at 4 °C until use.
For immunofluorescence analysis, fixed cells were permeabilized with 0.1% Triton X-100 for 5 min at room temperature, treated with 0.1 M glycine for 5 min, and incubated in blocking buffer (2% bovine serum albumin (BSA), 0.2% Tween 20, 3% glycerol, and 0.05% sodium azide in PBS) for 20 min. Cells were then incubated with an anti-MARV nucleocapsid rabbit antiserum (1:500 dilution in blocking buffer) overnight at 4 °C. As a secondary antibody, a goat anti-rabbit antibody conjugated to Alexa Fluor 488 (Invitrogen) was used. Cells were imaged by fluorescence microscopy, and the fluorescent signal of the images (two per sample per experiment) from two independent experiments was quantified in ImageJ.

For RNA-Seq experiments, RoNi/7.1 cells were seeded at a density of 2.5 × 10⁵ cells per well in 12-well plates and, 1 day later, were mock-treated or treated with 1,000 U of UIFN or dilutions of rIFN-ω4, rIFN-ω9, or rD1 (0.01, 1, or 100 ng/mL) for 4 or 8 h at 37 °C. Cell culture medium was removed and 600 µL of RNAzol RT (Molecular Research Center Inc., Cincinnati, OH) was added to each well. RNA in RNAzol was then transferred to an RNase-free tube, vortexed for 20 s, and immediately stored at −80 °C. Experiments were performed in triplicate, resulting in a total of 66 samples.

RNA was extracted using a previously established protocol for RNAzol. Briefly, 240 µL of nuclease-free water (0.4x RNAzol volume, Ambion #AM9937) was added to each sample, followed by vigorous vortexing and pelleting of DNA and protein components. RNA was precipitated with an equal volume of isopropanol and 20 µg of glycogen (stock of 20 µg/µL, Invitrogen #10814-010), washed twice with 75% ethanol, and resuspended in nuclease-free water. RNA was quantified by NanoDrop, and a subset of samples was assessed for quality by BioAnalyzer (Agilent) evaluation on an RNA chip. All samples tested had RIN scores of 10.

1.25 µg of RNA was used as input for the TruSeq Stranded mRNA Library Prep kit (Illumina). Briefly, mRNA was purified by polyA capture and fragmented to an average length of ∼410 bases. The first strand of cDNA was synthesized with SuperScript II Reverse Transcriptase (Thermo Fisher #18064014), followed by second-strand synthesis and cleanup with AMPure XP beads (Beckman Coulter #A63880). The resulting double-stranded cDNA was stored at −80 °C until 3′ adenylation and end repair the following day. Samples were barcoded with adapters from the TruSeq RNA CD Index Plate (Illumina), cleaned with AMPure XP beads, and libraries were enriched by PCR for 12 cycles. Final libraries were washed twice with AMPure XP beads (final bead wash ratio: 0.85x to remove adapter dimers) and quantified by Qubit 3.0 assay. A subset of samples was examined by DNA Bioanalyzer (Agilent) for quality purposes before pooling samples within each of three replicates. Pooled libraries were sent to the Tufts Genomics Center for size selection (Pippin size selection, 180-1,100 bp) and sequencing. Each library was sequenced on a separate lane of an eight-lane flow cell in high-output mode on an Illumina HiSeq 2500 using single-end 100 bp chemistry.

Raw reads were demultiplexed by the Tufts Genomics Center and evaluated for quality with FastQC v0.11.3. Remaining 5′ adapter sequences were trimmed using cutadapt v1.5, and all reads shorter than 50 bp were discarded. Trimmed reads were mapped to the Raegyp2.0 genome (RefSeq accession: GCF_001466805.2) with hisat2 v2.1.0 (77, 78), with an average mapping rate of 97.1%.
Count tables of uniquely mapped reads were tabulated with HTSeq v0.6.1p1 (79) with the parameters --stranded=reverse and --mode=union for htseq-count, using a GTF annotation file from RefSeq (with in-house modifications to use GeneID as the ID attribute). Count tables were used for pairwise differential expression analysis (and multiple hypothesis testing correction) with edgeR (80, 81) within the R environment (R version 3.4.3). First, an ANOVA-like test was performed in which each treatment (rD1, rIFN-ω4, rIFN-ω9) was compared to all other treatments for a given treatment concentration and time. For all genes rejecting the null in the ANOVA-like test with an FDR ≤ 0.05 (Benjamini-Hochberg procedure), each IFN-treated sample was compared to the corresponding rD1-treated sample at the same concentration and time point (e.g., treatment with 1 ng/mL of rIFN-ω4 for 4 h compared to treatment with 1 ng/mL of rD1 for 4 h), and IFN-ω4 was compared to IFN-ω9 under the same conditions. Genes were considered differentially expressed if the p-value of the pairwise comparison was <0.05/3 and the absolute value of the log2 fold change in the pairwise comparison was >1. UIFN-treated samples were compared to untreated samples at the same time point, with an FDR < 0.05 and an absolute log2 fold change >1. Gene symbols were mapped back onto Gene IDs with the rentrez package in R. Plots were generated in R with the pheatmap (82) and ggplot2 packages, except for Venn diagrams, which were produced in Venn Diagram Plotter (v1.5.5228.29250).

Genes that were upregulated at any time point and concentration for each treatment (post-ANOVA, post-pairwise analysis) were searched against the Interferome (v2.01) database (27), data from Shaw et al. (25) (accessible at http://isg.data.cvr.ac.uk/), and data from Mostafavi et al. (26). Genes without canonical gene symbols were cross-referenced with the NCBI Gene database to identify probable gene names based on the given gene description. Alternate gene names were identified using UniProt and GeneCards. Genes were considered uncharacterized if they were annotated as uncharacterized by the NCBI annotation pipeline based on insufficient homology to any other gene in GenBank.
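The pairwise reporting criteria above reduce to a simple filter over an edgeR-style results table. A minimal sketch with illustrative numbers (the analysis itself was done in R; this Python version only mirrors the thresholds):

```python
# Sketch of the pairwise reporting criteria: keep genes with |log2FC| > 1 and a
# pairwise p-value below 0.05/3 (three contrasts per condition). Data are invented.
import pandas as pd

results = pd.DataFrame({
    "gene":   ["IFIT1", "ISG15", "MX1", "ACTB"],
    "log2FC": [4.2, 3.1, 1.4, 0.1],
    "pvalue": [1e-12, 4e-9, 0.004, 0.62],
})

ALPHA = 0.05 / 3  # Bonferroni-style correction over the three contrasts
de = results[(results["pvalue"] < ALPHA) & (results["log2FC"].abs() > 1)]
print(de)
```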
The data discussed in this publication have been deposited in NCBI's Gene Expression Omnibus (83) and are accessible through GEO Series accession number GSE145761 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE145761). The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fimmu.2020.00435/full#supplementary-material

Figure S1 | Change in ISG expression over time. (A) Genes that were significantly differentially expressed in cells treated with 1 or 100 ng/mL rIFN-ω4 or rIFN-ω9. Only upregulated genes with p ≤ 0.05/3 in the pairwise test were included, if expressed after both rIFN-ω4 and rIFN-ω9 treatment, regardless of treatment time. (B) Pattern of differential expression over time. Genes that passed the significance criteria for the ANOVA tests for a given concentration of IFN at any time point are shown.

Figure S2 | Purity of rIFN-ω4, rIFN-ω9, and rD1 preparations containing 6x-His-tagged recombinant proteins. Recombinant proteins from clarified 293F cell supernatants were purified using affinity chromatography, dialyzed into PBS, quantified by NanoDrop and Bradford assay, and evaluated by (A) silver stain for purity (100 ng/well) and (B) Western blot for His-tag specificity (250 ng/well).

Table S1 | Differentially expressed genes in post-ANOVA and pairwise tests.
|
Comparative Risk: Dread and Unknown Characteristics of the COVID-19 Pandemic Versus COVID-19 Vaccines
|
Nearly all events in everyday life carry some risk. Riding a bicycle in a park could result in an accident, taking a prescription drug might cause unpleasant side effects, and living near a factory increases the odds of contracting serious illnesses. Discrete events differ in their probability of causing death or injury, as well as in the benefits they afford to society. Further, different event characteristics may carry more or less weight in people's judgments about how risky these events appear or feel. The psychometric paradigm offers a theoretical framework to explain why risk events stir different risk perceptions (Slovic, 1987).

The psychometric paradigm holds that members of the public are intolerant of risks perceived as dreadful and unknown (Slovic, 1987). Against the backdrop of the ongoing COVID-19 pandemic, the current research centers on two risk events: the COVID-19 pandemic and the COVID-19 vaccines. Applying the psychometric paradigm, we examine whether vaccinated and unvaccinated Americans weigh the COVID-19 pandemic and the COVID-19 vaccines differently along the dread and unknown dimensions. We additionally examine which specific risk characteristics contribute to the overall risk perception of the COVID-19 pandemic and the COVID-19 vaccines. Lastly, we investigate whether this mental risk comparison influences people's (a) vaccination intention, (b) vaccine acceptance, (c) maintenance of preventive behaviors, and (d) emotional responses.

To our knowledge, this study is the first to comparatively analyze perceptions of dread and unknown risks of the COVID-19 pandemic and the COVID-19 vaccines. Currently, over half of the American population (53.9%) have been fully vaccinated (Centers for Disease Control and Prevention [CDC], 2021b), but some divide over vaccination still persists (CDC, 2021e). For example, slightly over half (57%) of unvaccinated adults are White; Black and Hispanic Americans are less likely to have received the vaccines. In addition, Republicans, rural populations, and younger adults remain hesitant about getting vaccinated (Kaiser Family Foundation [KFF], 2021b). Integrating the psychometric paradigm, this research directly compares two related risk events to better understand public perceptions and behaviors. Increased understanding of risk perceptions of the COVID-19 pandemic and the COVID-19 vaccines will help risk communication scholars better understand vaccine hesitancy. Further, it will provide an avenue to encourage the American public to continue adhering to preventive behaviors until the pandemic is over.

Risk is the probability of something bad occurring to an individual, a group of individuals, or society at large (Sjöberg, Moen, & Rundmo, 2004). Construed not only through technical parameters and probability numbers, risk also involves psychological, social, and cultural contexts (Slovic, 2000). Risk perception is accordingly shaped by individual and social characteristics that affect how individuals react to certain risks (e.g., Barke, Jenkins-Smith, & Slovic, 1997; Braman, Kahan, Peters, Wittlin, & Slovic, 2012; Dake, 1992; Flynn, Slovic, & Mertz, 1994; Wachinger, Renn, Begg, & Kuhlicke, 2013). The well-documented attenuation or amplification of risk perception is explicated in the psychometric paradigm (Slovic, 1987).
A prominent model in risk research, the psychometric paradigm constructs a taxonomy of hazards illustrating how the lay public perceive varying risks (Fischhoff, Slovic, Lichtenstein, Read, & Combs, 1978; Slovic, Fischhoff, & Lichtenstein, 1986). Notably, it predicts public risk behavior using a quantifiable approach (Slovic, 1987).

In this paradigm, risk perception is viewed as an impediment to rational decision making, owing to a difference in how experts and the lay public perceive risk (Slovic, 1987; Starr, 1969). Experts most commonly define risk in terms of annual fatalities, whereas the lay public more often interpret risk by considering other factors such as catastrophic potential, equity, effects on future generations, controllability, and involuntariness (Slovic, 1987; Slovic et al., 1986). The lay public therefore allot comparatively less weight to risk assessments conducted by experts (Lichtenstein, Slovic, Fischhoff, & Layman, 1978; Slovic, Fischhoff, & Lichtenstein, 1979, 1985).

Slovic's seminal piece and his ensuing research on the psychometric paradigm demonstrate a significant relationship between the perceived need for regulation of a risk event or hazard and two primary dimensions (Slovic, 1987): the dread factor and the unknown factor. Characteristics of the dread factor include perceived uncontrollability, a catastrophic and fatal outcome, and an inequitable distribution of risks and benefits. Characteristics of the unknown factor include delayed manifestation of probable harm and novelty (Slovic, Fischhoff, & Lichtenstein, 1985). The public respond not only to the scientific assessments of a risk event or hazard, but also to the subjective features of risk, in ways that heighten or abate their concern (e.g., Slovic et al., 1979; Slovic et al., 1985). Put another way, the higher a risk event or hazard scores on these two factors, the higher its perceived risk. Concomitantly, individuals want to see the risk attenuate, motivating demands for stricter regulation (e.g., Clahsen et al., 2018; de Vries et al., 2019; Siegrist, Earle, Gutscher, & Keller, 2005).

To explain further, Fig. 1 represents four hazards (DNA technology, microwave ovens, bicycles, commercial aviation), one in each of the four quadrants. The dread dimension (Factor 1) lies on the x-axis; the unknown dimension (Factor 2) lies on the y-axis. Less dreaded hazards appear on the left side of the plot, and vice versa. For example, DNA technology (upper right quadrant) scores high on both axes. Bicycles (lower left quadrant), by contrast, score low on both axes. The circumference of the circle denotes the level of risk perception: the larger the circle, the higher the level of risk perception. In our example, DNA technology is the hazard to which the public respond most negatively. In comparison, they seem more tolerant of microwave ovens and commercial aviation, and perceive very little risk in bicycles. As a result, the public demand more stringent regulations for the development and commercialization of DNA technology (e.g., Priest, 2017; Savadori et al., 2004), but embrace microwave ovens and bicycles in their everyday life. The current plot is based on the full model shown in Slovic (1987).
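The factor-space layout just described is straightforward to reproduce; the sketch below places the four example hazards in the dread/unknown plane with marker size proportional to perceived risk. The coordinates and sizes are illustrative, not Slovic's published factor scores.

```python
# Sketch: hazards in the dread (x) / unknown (y) factor space, marker size
# proportional to overall risk perception. Values are illustrative only.
import matplotlib.pyplot as plt

hazards = {
    # name: (dread score, unknown score, perceived risk)
    "DNA technology":      ( 0.8,  1.2, 90),
    "Microwave ovens":     (-0.6,  0.5, 35),
    "Commercial aviation": ( 0.5, -0.7, 45),
    "Bicycles":            (-1.0, -1.0, 15),
}

fig, ax = plt.subplots()
for name, (dread, unknown, risk) in hazards.items():
    ax.scatter(dread, unknown, s=risk * 10, alpha=0.5)
    ax.annotate(name, (dread, unknown))
ax.axhline(0, lw=0.5)
ax.axvline(0, lw=0.5)
ax.set_xlabel("Factor 1: dread")
ax.set_ylabel("Factor 2: unknown")
plt.show()
```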
Applying the psychometric paradigm to this context, we first unearth how the American public perceive the risk characteristics of the COVID-19 pandemic and the COVID-19 vaccines. Specifically, the risk characteristics of the dread dimension comprise catastrophic potential, controllability, dread, severity, and involuntariness; the risk characteristics of the unknown dimension comprise immediacy, unknown to the public, unknown to scientists, and novelty. While existing research on pandemics (e.g., H1N1: Fung, Namkoong, & Brossard, 2011; Oh, Paek, & Hove, 2015) and vaccinations (e.g., MMR: Marta et al., 2017; Raithatha, Holland, Gerrard, & Harvey, 2003) has employed this theoretical framework (partially or completely), few studies have compared two risk objects side by side to gauge how interrelated risk perceptions influence people's subsequent decisions about risk mitigation behaviors.

Comparing the COVID-19 pandemic and the COVID-19 vaccines, we speculate that these two risk objects may occupy different positions on the psychometric paradigm in people's perceptions. On the one hand, the COVID-19 vaccines became available only in December 2020 (U.S. Department of Health and Human Services, 2020), and some Americans may view them as an unfamiliar risk. Compounded by the swift development of the vaccines, concerns about their safety and efficacy remain high (see more on vaccine confidence, KFF, 2021a). Thus, the COVID-19 vaccines may score high on the unknown dimension. For certain segments of the American public, the vaccines may even score high on the dread dimension. Conversely, after more than a year of living with the pandemic, its risk is more familiar, or even mundane, to many. Thus, as opposed to the COVID-19 vaccines, the pandemic may score lower on the unknown dimension. Yet emerging COVID-19 variants of concern (e.g., the United Kingdom B.1.1.7 strain; the delta variant strain) (Centers for Disease Control and Prevention [CDC], 2021c) could still elicit dread. Nevertheless, a longitudinal study based on a nationally representative sample shows that both younger and older U.S. adults had a higher tendency to participate in risky behaviors (e.g., close contact within 6 feet with people who do not live with them; social gatherings with more than 10 people) just two months into the pandemic (Kim & Crimmins, 2020). It is possible that some Americans would score the pandemic lower on the dread dimension at the time of data collection. Concluding from the theorization above, our first hypothesis is:

H1: Americans will rate the COVID-19 pandemic and the COVID-19 vaccines differently on the dread and unknown dimensions.

To probe further into how each risk characteristic contributes to differences in risk perception of the COVID-19 pandemic and the COVID-19 vaccines, we inquire:

RQ1: How do risk characteristics in the dread and unknown dimensions (i.e., pertaining to each event) influence Americans' risk perception of the COVID-19 pandemic and the COVID-19 vaccines?

The present study additionally examines how dread and unknown risks of the COVID-19 pandemic and the COVID-19 vaccines influence the American public's COVID-19 vaccine uptake. Research on a wide range of hazards (e.g., air pollution: Pu et al., 2019; floods: Kellens, Terpstra, & De Maeyer, 2012; food safety: You & Ju, 2016; nuclear accidents: Kim & Kim, 2017; river pollution: Aragonés, Tapia-Fonllem, Poggio, & Fraijo-Sing, 2017) has evaluated how the interplay between the dread dimension and the unknown dimension shapes risk perception about a single risk object.
However, to our knowledge, no research to date has evaluated people's mental comparison of two interrelated risk objects and its impact on risk mitigation behaviors. Three specific outcomes are examined here. First, vaccination intention refers to individuals' likelihood of getting a COVID-19 vaccine. Vaccine acceptance is individuals' overall confidence in the COVID-19 vaccines. Next, maintenance of preventive behaviors refers to the degree to which individuals will continue to practice preventive personal behaviors (e.g., washing hands with soap or using hand sanitizers several times a day) or engage in risky social behaviors (e.g., having visitors such as friends, neighbors, or relatives at their residence). The opportune timeline of both the COVID-19 pandemic and the COVID-19 vaccines allows us to analyze this psychometrically, which leads to the following hypotheses:

H2: Compared to the COVID-19 vaccines, Americans who rate the COVID-19 pandemic as higher on the dread dimension will be more likely to get the COVID-19 vaccine (H2a), report higher vaccine acceptance (H2b), engage more in preventive personal behaviors (H2c), and partake less in risky social behaviors (H2d).

H3: Compared to the COVID-19 vaccines, Americans who rate the COVID-19 pandemic as higher on the unknown dimension will be more likely to get the COVID-19 vaccine (H3a), report higher vaccine acceptance (H3b), engage more in preventive personal behaviors (H3c), and partake less in risky social behaviors (H3d).

Extending these theoretical arguments, we assess how the American public's dread and unknown risk comparisons of the COVID-19 pandemic and the COVID-19 vaccines influence their emotional reactions. In this research, we evaluate both general affect and discrete emotions. Early scholarship on risk was largely focused on individuals' cognitive evaluation of risk, such as probability (e.g., Kahneman, Slovic, & Tversky, 1982; Tversky, 1972; Tversky & Kahneman, 1974), but more recent research increasingly recognizes the imperative role that affect and emotion play in risk appraisals (e.g., Finucane, Alhakami, Slovic, & Johnson, 2000; Slovic, Finucane, Peters, & MacGregor, 2002; Slovic, Finucane, Peters, & MacGregor, 2007).

The affect heuristic thesis depicts how affect provides a mental shortcut, influencing people's risk perception and risk-related decision making (Finucane et al., 2000; Slovic et al., 2007). In other words, individuals rely on affect and emotion to make judgments about risks. In a similar vein, the risk-as-feelings hypothesis postulates that emotional reactions to risks are frequently independent of cognitive evaluations, and they are often stronger predictors of individuals' behaviors (Loewenstein, Weber, Hsee, & Welch, 2001). Taken together, this work attempts to uncover how the risk comparisons of the COVID-19 pandemic and the COVID-19 vaccines elicit differing emotional reactions. Our next set of hypotheses is therefore:

H4: Compared to the COVID-19 vaccines, Americans who rate the COVID-19 pandemic as higher on the dread dimension will experience decreased positive affect (H4a), increased negative emotion (H4b), and decreased positive emotion (H4c) toward the pandemic.
H5: Compared to the COVID-19 vaccines, Americans who rate the COVID-19 pandemic as higher on the unknown dimension will experience decreased positive affect (H5a), increased negative emotion (H5b), and decreased positive emotion (H5c) toward the pandemic.

Further, vaccine status likely reveals people's perceptions of these two risk events. For instance, compared to unvaccinated people, vaccinated people may perceive the pandemic as a higher risk than the vaccines:

H6: Vaccinated Americans and unvaccinated Americans will differ significantly in their ratings of the COVID-19 pandemic and the COVID-19 vaccines on the dread and unknown dimensions.

H7: Vaccinated Americans and unvaccinated Americans will differ significantly in their risk perception and emotional responses toward the COVID-19 pandemic and the COVID-19 vaccines.

As above, risk mitigation behaviors may also be associated with individuals' vaccine status. At present, the U.S. Food and Drug Administration (FDA) has authorized the use of a third dose of the Pfizer-BioNTech and Moderna vaccines for immunocompromised individuals. The Biden administration also announced that after September 20, 2021, fully vaccinated Americans would be eligible for a third dose eight months after their second shot (CDC, 2021a). Therefore, it is crucial to continue monitoring vaccinated individuals' future intention for vaccination. Thus, our final set of hypotheses is:

H8: Vaccinated Americans and unvaccinated Americans will differ significantly in their likelihood to get the COVID-19 vaccine (H8a), preventive personal behaviors (H8b), and risky social behaviors (H8c).

H9: Vaccinated Americans and unvaccinated Americans will differ significantly in their affect (H9a), negative emotion (H9b), and positive emotion (H9c).

To test our hypotheses and address the research question, we contracted Qualtrics in May 2021 to recruit a sample (N = 1,532) matched to the U.S. adult population on age, gender, race/ethnicity (i.e., based on the latest United States Census Bureau data, 2019), and political affiliation (i.e., based on the American National Election Studies [ANES], 2020). All participants who completed the survey were compensated based on the established agreement between Qualtrics and its opt-in panelists. The median survey completion time was 16 minutes. All research procedures were approved by the Institutional Review Board (IRB) at the authors' institution.

Only fully completed responses were included in the final sample for analysis (N = 1,532), and all participants in this final sample passed the attention check. Participants' ages ranged from 18 to 100 (M = 46.89, SD = 16.80). There were 864 (56.4%) females and 668 (43.6%) males. The sample was predominantly White (n = 800, 52.2%), followed by Hispanic/Latino (n = 321, 21.0%), Black/African American (n = 233, 15.2%), Asian/Pacific Islander (n = 106, 6.9%), Biracial (n = 49, 3.2%), and American Indian/Alaskan Native (n = 23, 1.5%). In terms of political affiliation, 512 participants (33.4%) self-identified as Democrat, 509 (33.2%) as Independent, and 511 (33.4%) as Republican. We purposely screened participants to ensure a roughly even split between unvaccinated (n = 827, 54.0%) and vaccinated (n = 705, 46.0%) individuals. In this sample, 755 (49.3%) had some college education or less, 605 (39.5%) held a two-year associate or four-year bachelor's degree, and 172 (11.2%) held a master's degree or above. The median household income was in the bracket of $40,000 to $49,999.
Most participants indicated that they did not have any K-12 school-age children (28.9% did).

At the beginning of the survey, all participants were presented with the informed consent form and a set of instructions detailing the research procedure. They first responded to questions measuring demographics and vaccine status. Then, they answered a series of questions as described in the measures section. All measures were randomized to reduce survey order effects (Day et al., 2012). An attention check appeared midway to ensure data quality. Upon completion of the survey, all participants were debriefed and compensated.

Participants were asked, "Have you received at least one dose of the COVID-19 vaccine?" They selected one of the following options: (1) yes, I'm fully vaccinated against COVID-19 already; (2) yes, and I will get the second dose soon; (3) yes, but I skipped the second dose; (4) yes, but I intend to skip the second dose; (5) no, but I plan to get vaccinated soon; (6) no, and I do not plan on getting vaccinated. Those who selected a "yes, …" option were coded as vaccinated participants, and those who selected a "no, …" option were coded as unvaccinated participants.

Risk characteristics of the pandemic and the vaccines were measured with nine semantic differential items (e.g., novelty was anchored by [old-new]). These items were adapted from Fischhoff et al. (1978) and Siegrist, Keller, and Kiers (2006) and designed to capture the two dimensions of the psychometric paradigm. A differential-score approach was used to gauge risk comparison between the pandemic and the vaccines. Specifically, differential scores for each item were first computed (e.g., severity of the pandemic minus severity of the vaccines); the differential scores were then aggregated into dread risk (i.e., five items) and unknown risk (i.e., four items). A positive value indicates higher risk perception toward the pandemic. Overall, comparative dread risk was higher for the pandemic than the vaccines (M = 0.19, SD = 0.89), but comparative unknown risk was higher for the vaccines than the pandemic (M = −0.27, SD = 0.88).
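To make the differential-score construction concrete, the following Python sketch reproduces the scoring logic described above. It is a minimal illustration, not the authors' code: the column names (pan_*, vax_*, vaccine_status_option), the file path, and the recoding threshold are hypothetical stand-ins, since the original variable names live in the OSF materials rather than in this text.

```python
import pandas as pd

# Item stems for the two psychometric dimensions, per the paper's description
# (five dread items, four unknown items). All column names below are assumed.
DREAD_ITEMS = ["catastrophic", "controllability", "dread", "severity", "involuntariness"]
UNKNOWN_ITEMS = ["immediacy", "unknown_public", "unknown_scientists", "novelty"]

def comparative_score(df: pd.DataFrame, items: list) -> pd.Series:
    """Differential-score approach: for each item, subtract the vaccine rating
    from the pandemic rating, then average the per-item differences into one
    dimension score. Positive values mean the pandemic is perceived as riskier."""
    diffs = pd.DataFrame({i: df[f"pan_{i}"] - df[f"vax_{i}"] for i in items})
    return diffs.mean(axis=1)

df = pd.read_csv("survey.csv")  # placeholder path

# Vaccine status: response options 1-4 began with "yes, ..." (vaccinated),
# options 5-6 with "no, ..." (unvaccinated).
df["vaccinated"] = (df["vaccine_status_option"] <= 4).astype(int)

df["dread_diff"] = comparative_score(df, DREAD_ITEMS)
df["unknown_diff"] = comparative_score(df, UNKNOWN_ITEMS)

# Reported in the text: dread_diff M = 0.19, SD = 0.89;
# unknown_diff M = -0.27, SD = 0.88.
print(df[["dread_diff", "unknown_diff"]].agg(["mean", "std"]))
```

Because each item difference is signed (pandemic minus vaccines), the aggregated dimension scores carry direction as well as magnitude: positive values flag respondents who see the pandemic as the riskier object, which is exactly what H2-H5 condition on.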
A four-item measure rated on a five-point scale from 1 = not at all concerned/serious/likely to 5 = extremely concerned/serious/likely evaluated participants' risk perception (Leiserowitz, 2006) of the COVID-19 pandemic (M = 3.46, SD = 1.27, α = 0.90) and the COVID-19 vaccines (M = 3.21, SD = 1.23, α = 0.85). Comparative risk perception was higher for the pandemic than the vaccines (M = 0.25, SD = 1.83).

To assess participants' vaccination intention, a four-item measure rated on a six-point scale from 1 = very unlikely to 6 = very likely (Gerend & Shepard, 2012) was employed (M = 3.75, SD = 1.90, α = 0.98). An example item is, "How likely is it that you will actually get the COVID-19 vaccine when it becomes available to you?"

Vaccine acceptance was measured on five key facets of acceptance (Sarathchandra, Navin, Largent, & McCright, 2018) with eight items rated on a five-point Likert scale from 1 = strongly disagree to 5 = strongly agree (M = 3.36, SD = 1.20, α = 0.93). First, perceived safety of vaccines was evaluated with one item ("COVID-19 vaccines are safe."), and perceived efficacy of vaccines was evaluated with two items (e.g., "COVID-19 vaccines are effective at preventing infection from the virus."). Next, acceptance of the selection and scheduling of vaccines was assessed with one item ("The speed at which the current COVID-19 vaccines were approved was appropriate."), positive valuation of vaccines was assessed with three items (e.g., "COVID-19 vaccines are a major advancement for humanity."), and perceived legitimacy of authorities to require vaccinations was assessed with one item ("It is legitimate for the government to mandate the COVID-19 vaccinations.").

Maintenance of preventive behaviors was evaluated as the likelihood of engaging in preventive personal behaviors and risky social behaviors (Kim & Crimmins, 2020). All items were rated on a five-point scale from 1 = very unlikely to 5 = very likely. First, seven items measured preventive personal behaviors (M = 3.73, SD = 1.22, α = 0.90); an example item is, "Wash hands with soap or use hand sanitizers several times a day." Second, three items measured risky social behaviors (M = 3.13, SD = 1.20, α = 0.71); an example item is, "Go to a friend, neighbor, or relative's residence (not your own)."

Two items anchored by "bad-good" and "negative-positive" assessed participants' affective response (Leiserowitz, 2006) toward the COVID-19 pandemic (M = 2.31, SD = 1.29, α = 0.87) and the COVID-19 vaccines (M = 3.24, SD = 1.49, α = 0.93). Comparative affect was more positive for the vaccines than for the pandemic (M = −0.93, SD = 1.64). Apart from affect, participants' negative emotions (anger, disgust, fear, sadness) and positive emotions (encouraged, hope, joy, pride) were also measured (Nabi, Gustafson, & Jensen, 2018). Negative emotion toward the COVID-19 pandemic (M = 3.23, SD = 1.14, α = 0.78) was higher than negative emotion toward the COVID-19 vaccines (M = 2.66, SD = 1.24, α = 0.85); comparatively, more negative emotion was expressed toward the pandemic than the vaccines (M = 0.57, SD = 1.27). Positive emotion toward the COVID-19 pandemic (M = 2.60, SD = 1.18, α = 0.87) was lower than positive emotion toward the COVID-19 vaccines (M = 2.99, SD = 1.30, α = 0.91); comparatively, more positive emotion was expressed toward the vaccines than the pandemic (M = −0.39, SD = 1.15).

Participants also reported demographic information such as age, gender, and political affiliation. A complete document with all measures in their original wording is available on the Open Science Framework (OSF): https://bit.ly/3pAxSF4

All analyses were performed in SPSS 26. Prior to performing more advanced statistical analyses, zero-order correlations were computed for the dread dimension and the unknown dimension (Table I), as well as for all key variables in the study (Table II). H1 was tested with a paired-sample t-test, and H2-H5 were tested through a series of ordinary least squares (OLS) regressions. The remaining hypotheses related to vaccine status were tested using independent-samples t-tests. For these analyses, differential scores were used to compare the two risk events. Lastly, to evaluate RQ1, we performed hierarchical regression analyses with demographics as control variables.
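For readers who want to trace the analysis plan end to end, the sketch below translates it into Python. The authors ran everything in SPSS 26, so this is an approximate re-expression under assumed variable names (continuing from the scoring sketch above), not the original analysis syntax.

```python
from scipy import stats
import statsmodels.formula.api as smf

# df comes from the scoring sketch above; every variable name here is an
# illustrative stand-in for the authors' SPSS variables.

# H1: paired-sample t-test for each risk characteristic, pandemic vs. vaccines
# (one test per item; severity shown as an example).
t_h1, p_h1 = stats.ttest_rel(df["pan_severity"], df["vax_severity"])

# H2-H5: OLS regressions of each outcome on the comparative dread and unknown
# scores (vaccination intention shown; the same form applies to acceptance,
# preventive behaviors, risky behaviors, affect, and emotions).
ols_h2 = smf.ols("vaccination_intention ~ dread_diff + unknown_diff", data=df).fit()

# H6-H9: independent-samples t-tests comparing vaccinated and unvaccinated groups.
grp1 = df.loc[df["vaccinated"] == 1, "dread_diff"]
grp0 = df.loc[df["vaccinated"] == 0, "dread_diff"]
t_h6, p_h6 = stats.ttest_ind(grp1, grp0)

# RQ1: hierarchical regression - demographics entered first, then the nine
# risk characteristics for one risk object (the pandemic model shown).
base = smf.ols(
    "risk_perception_pandemic ~ age + C(gender) + C(race) + C(party) + income",
    data=df,
).fit()
full = smf.ols(
    "risk_perception_pandemic ~ age + C(gender) + C(race) + C(party) + income"
    " + pan_catastrophic + pan_controllability + pan_dread + pan_severity"
    " + pan_involuntariness + pan_immediacy + pan_unknown_public"
    " + pan_unknown_scientists + pan_novelty",
    data=df,
).fit()
print(f"R-squared change after adding risk characteristics: {full.rsquared - base.rsquared:.3f}")
```

A convenient property of this design is that each OLS coefficient on a comparative score directly estimates how rating the pandemic as riskier than the vaccines shifts the outcome, so one model per outcome suffices instead of separate models for each risk object.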
H1 examined whether Americans rated the COVID-19 pandemic and the COVID-19 vaccines differently on the dread and unknown dimensions. All risk characteristics on the dread and unknown dimensions significantly differed between the two risk events (see Table III for paired-sample t-test results). Therefore, H1 was supported.

H2 focused on the dread dimension and its association with vaccination intention, vaccine acceptance, and maintenance of preventive behaviors. Relative to the vaccines, higher dread risk toward the pandemic was related to higher vaccination intention, greater vaccine acceptance, increased likelihood to practice preventive behaviors, and decreased likelihood to participate in risky social behaviors, lending support to H2.

H3 focused on the unknown dimension and its association with the aforementioned intention and behaviors. Relative to the vaccines, higher unknown risk toward the pandemic was related to higher vaccination intention, greater vaccine acceptance, and fewer risky social behaviors. No significant relationship emerged between unknown risk and preventive personal behaviors. H3 was thus partially supported.

H4 and H5 investigated the relationships between emotional responses and dread and unknown risks, respectively. Relative to the vaccines, as participants perceived higher dread risk toward the pandemic, they reported lower positive affect, more negative emotion, and less positive emotion, lending support to H4. Concerning unknown risk, higher unknown risk toward the pandemic, relative to the vaccines, was related to less positive affect and less positive emotion. No significant relationship was found between unknown risk and negative emotion. As such, H5 was partially supported. See Table IV for a complete summary of the regression results.

H6 posited that vaccinated and unvaccinated Americans would differ in their dread and unknown risk perceptions. A significant difference emerged between the two groups for dread risk, t(1530) = 11.89, p < 0.001: vaccinated individuals perceived more dread risk toward the pandemic.

Finally, we queried how risk characteristics along the dread and unknown dimensions influenced Americans' risk perception of the COVID-19 pandemic and the COVID-19 vaccines. All models included demographics as control variables. First, catastrophic potential, uncontrollability, dread, and severity were positively associated with risk perception of the pandemic. Second, whereas immediacy and unknown to the public were negatively associated with risk perception of the pandemic, novelty was positively associated with it. In both models, participants who were female, non-White, Democrat, and with K-12 school-age children reported higher risk perception of the pandemic. In comparison, dread, severity, immediacy, unknown to the public, unknown to scientists, and novelty were positively associated with risk perception of the vaccines. These models revealed that participants who were female, White, Republican, with lower income, and with K-12 school-age children reported higher risk perception of the vaccines. Table V presents these regression results.

Applying the psychometric paradigm to an ongoing public health crisis, the present research theoretically establishes the characterization of dread and unknown risks of the COVID-19 pandemic and the COVID-19 vaccines. We first tested whether Americans perceive different risk characteristics for the two risk events. Indeed, our participants rated each characteristic of dread risk (catastrophic potential; uncontrollability; dread; severity; involuntariness) and unknown risk (immediacy; unknown to the public; unknown to scientists; novelty) as distinct between the pandemic and the vaccines.
Overall, they reported more dread risk toward the pandemic and more unknown risk toward the vaccines. The specific risk characteristics associated with the pandemic and the vaccines are also interesting. On the one hand, it makes sense that evaluations of the pandemic as an event high in catastrophic potential (i.e., killing a large number of people all at once), dreadful, severe, and out of control increase participants' risk perception of the pandemic. On the other hand, delayed manifestation of harm (i.e., low immediacy) and greater familiarity to the public decrease their risk perception of the pandemic. This seems conceivable because most Americans had lived alongside the pandemic for over a year. Nevertheless, the belief that the pandemic still involves novelty, perhaps due to the new variants, is positively associated with risk perception of the pandemic. As expected, when participants appraised the vaccines as dreadful, with fatal consequences, as well as unknown and novel, they were more likely to report higher risk perception of the vaccines.

To some degree, these findings parallel the volatile nature of this crisis. In May 2021, when our data were collected, more COVID-19 variants had emerged globally (Berger, 2021; Kottasová & McKenzie, 2021). In the United States, the pandemic had already claimed a death toll higher than many recent wars combined (Waxman & Wilson, 2021). Together, these facts could elicit strong visceral reactions of dread among the participants. Conversely, news of blood clots associated with the Johnson & Johnson vaccine (Ledford, 2021; World Health Organization [WHO], 2021) probably increased the perceived unknownness of the vaccines. We also note some demographic differences in risk perception that are consistent with empirical research (e.g., Rana, Bhatti, Aslam, Ahmad, & Shah, 2021) and public opinion polls (e.g., CDC, 2021c; KFF, 2021b). In particular, females, minorities, and Democrats tended to report higher risk perception toward the pandemic. Those who earn lower incomes frequently cite concerns about the fair distribution of health services, which in turn may increase their risk perception of the vaccines. Moreover, as vaccines for younger children had not been approved by the FDA, it is natural for parents to report higher risk perception toward both risk events.

Another prime contribution of this work is the direct comparison of the COVID-19 pandemic and the COVID-19 vaccines to effectively gauge risk perception and decision making related to vaccination. For vaccination intention and maintenance of preventive behaviors, the results highlight that dread risk is more salient than unknown risk in determining risk perception: perception of higher dread risk toward the pandemic consistently influences risk mitigation behaviors. By contrast, the results for unknown risk are mixed; higher unknown risk of the pandemic is not correlated with preventive personal behaviors. This result, while unexpected, may reflect the fluctuating guidelines from the CDC for vaccinated individuals (CDC, 2021d). Further, the emotional responses stimulated by the two dimensions are also telling. Again, dread risk appears to be the stronger predictor: as participants experienced more dread risk toward the pandemic, they felt less positive affect, less positive emotion, and more negative emotion toward the pandemic, whereas unknown risk showed weaker associations.
In addition, higher unknown risk toward the pandemic was not associated with participants' negative emotion toward the pandemic, which could be attributed to the positive trajectory of vaccination rates in the United States in May (CDC, 2021b).

Theoretically speaking, the findings of the current research add to the extensive literature on the psychometric paradigm (Clahsen et al., 2018; de Vries et al., 2019; Priest, 2017; Savadori et al., 2004; Siegrist et al., 2005). Corroborating other scholarship on pandemics (Fung et al., 2011; Oh et al., 2015) and vaccinations (Marta et al., 2017; Raithatha et al., 2003), the current study extends the paradigm by comparing two interrelated risk objects and appraising them in parallel. The results on emotional responses toward the pandemic and the vaccines additionally support the importance of affect, emotion, and risk perception in these complex relationships (Finucane et al., 2000; Loewenstein et al., 2001; Slovic et al., 2002; Slovic et al., 2007).

In this research, we also queried whether vaccine status correlates with risk perception and risk mitigation behaviors. Indeed, there were significant differences between vaccinated and unvaccinated individuals: the two groups were distinct on all key variables. Most noteworthy are the mean differences between the two groups. For instance, regarding unknown risk, while both vaccinated and unvaccinated individuals held higher risk perception toward the vaccines, this effect was stronger among unvaccinated individuals. Whereas both groups experienced more positive affect and positive emotion toward the vaccines, the vaccinated group indicated a stronger emotional response.

Taken together, this study bears important practical implications. First, it is apparent that dread risk reprises its role as the stronger predictor of risk perception and risk mitigation behavior. To this end, risk communication practitioners need to be aware of the unique risk attributes that influence people's risk perception to better craft public health messaging in times of crisis. In the COVID-19 context, severity of consequences, catastrophic potential, controllability, and voluntariness are all important attributes that shape the public's view of the pandemic as highly dreadful. Therefore, communication practitioners should monitor the extent to which target audiences see the pandemic as bearing severe, uncontrollable future consequences. These perceptions are likely to determine whether people engage in adaptive or maladaptive behaviors (Witte & Donohue, 2000); that is, excessive dread and fear responses may activate defensive motivation and reduce people's efficacy to engage in danger-control behaviors.

More importantly, there are differences in how vaccinated and unvaccinated individuals view the two risk events. For one, unvaccinated individuals perceive the vaccines as more "unknown"; communication messaging could therefore leverage this insight (e.g., by highlighting that mRNA technology has existed for a decade and is not completely novel) (e.g., Fanlund, 2021). Conversely, since vaccinated individuals perceive the pandemic as more "unknown," they might be more sensitive to new variants of the COVID-19 virus or daily developments related to the pandemic.
For instance, despite the CDC's current recommendation that fully vaccinated individuals can resume the same activities as before the pandemic, such as not needing to wear a mask or stay six feet apart from others (CDC, 2021d), it is highly likely that this group will continue to adhere to preventive guidelines (e.g., Green, 2021; Sanchez & Vargas, 2021). Moreover, in an increasingly hyperpartisan society like the United States (Slovic, 2021), emerging research shows that differences in risk perception and preventive behaviors in the context of COVID-19 are often motivated by political ideology (Gallup, 2021; see also Nowlan & Zane, 2020). It is therefore critical to first understand the audience prior to any dissemination of health and risk information to the public.

Leveraging insights from our research, we offer two communication strategies to promote COVID-19 vaccine uptake. The first is to identify a common adversary. Uniting two divergent groups (e.g., vaccinated versus unvaccinated; Republicans versus Democrats) may depend on locating a third, more loathed common adversary. The clear adversary here is the virus. Nevertheless, portraying the pandemic as threatening will only work if both groups recognize it as real and dangerous. Currently, the vaccinated group perceives the pandemic as more dreadful, and the unvaccinated group perceives the vaccines as more unknown. Therefore, the most applicable adversaries might be downstream effects: for example, framing vaccines as a solution to restart the economy by getting Americans back to work, or as a communal effort to rival other countries in returning to normalcy.

The second strategy is to avoid delivering fragmented risk information. Information about vaccine development is typically disseminated to the public incrementally, in a good-faith effort to promote transparency for the scientific community (e.g., Petersen, Bor, Jørgensen, & Lindholt, 2021). Nonetheless, new research (Wood & Schulman, 2021) shows that relaying such information in a piecemeal fashion may harm the adoption of biotechnological innovations: the lay public is less likely to embrace a new technology when risk information arrives fragmented, because people are highly reactive to probable side effects. Though discussion of the efficacy and safety of the COVID-19 vaccines is essential, policymakers and practitioners should recognize that feeding information bit by bit can unintentionally harm public acceptance. Further, many participants in this study viewed the vaccines as a highly unknown intervention, perhaps due to the immediate media frenzy praising the novel application of mRNA technology when the vaccines were first introduced. Altogether, heeding the unintended effects of risk communication may help aid COVID-19 vaccination uptake.

As with all studies, this research has its limitations. First, the constantly evolving nature of the COVID-19 pandemic and the COVID-19 vaccines, including frequent updates to health recommendations from the CDC (2021d), limits the generalizability of our findings. Second, we acknowledge the cross-sectional nature of our survey; future research should consider longitudinal panel designs or experimental work to establish causality. Utilizing thought-listing measures could also substantiate quantitative results related to the psychometric paradigm.
Lastly, we used age, gender, race, and political affiliation as quota variables, but the household income of the final sample was lower than census figures (United States Census Bureau, 2019); our sample therefore overrepresented Americans with lower household income. Readers should use caution when interpreting our findings.

In conclusion, this study characterized risk perceptions of the COVID-19 pandemic and the COVID-19 vaccines along the dread and unknown dimensions of the psychometric paradigm. We examined whether mental risk comparisons of the two risk events would spur various vaccine-related decisions among unvaccinated and vaccinated Americans. Our results reveal critical differences in the types of risk characteristics that determine risk perception. In particular, dread risk appears to be more prominent in influencing risk perception and risk mitigation behaviors. As COVID-19 vaccination continues to roll out in the United States, we must understand why a segment of the population remains reluctant to get vaccinated. Results from this study indicate that differences in the way people perceive the pandemic versus the vaccines may contribute to this vaccine hesitancy. Beyond the common reasons cited for not getting vaccinated (e.g., side effects; lack of trust; KFF, 2021a), the current research presents an alternative angle from which to understand vaccine hesitancy during an ongoing crisis.