id
stringlengths
3
9
source
stringclasses
1 value
version
stringclasses
1 value
text
stringlengths
1.54k
298k
added
stringdate
1993-11-25 05:05:38
2024-09-20 15:30:25
created
stringdate
1-01-01 00:00:00
2024-07-31 00:00:00
metadata
dict
266836605
pes2o/s2orc
v3-fos-license
Doing mutual understanding in child and family therapy sessions: How three interlocutors calibrate new information This paper presents an analysis of how three interlocutors sequentially organize and accomplish mutual understanding in naturally occurring audiovisual recordings of therapy sessions. The analysis is in keeping with microanalysis of face-to-face dialog (MFD) and follows operational definitions of three-step micro-processes that interlocutors use when they calibrate new information; that is, how they agree that they have understood each other’s words and actions well enough for current practical purposes. Pointing to some of the complexities that characterize triadic interactions, the analysis contributes with new documentations of ‘suspended’, ‘nested’, ‘branched’, ‘multi-paced’, and ‘mixed interpretations’ calibrations. The analysis also demonstrates how interlocutors may calibrate the ‘tone’ of an utterance before the topical content is mutually understood. The results and their implications may be relevant to practitioners of institutional talks at large, where the quality and outcome of, for instance, assessments and interventions largely rely on accomplishing mutual understanding. The quality and outcome of child and family therapies largely depend on therapists and clients accomplishing mutual understanding.In child and family therapies, it is common for three or more interlocutors to be involved.However, research on how interlocutors accomplish mutual understanding (as outlined by Bavelas et al., 2017) has focused on dialogs between only two people.This gap in research highlights the need for studies that explore how three or more people organize this process. Doing mutual understanding Therapists and clients do mutual understanding in incremental interactive processes.It is essentially a social event (Clark, 1996;Garfinkel, 1967;Lindgren et al., 2007;Linell, 2009;Roberts and Bavelas, 1996;Schegloff, 1992;Svennevig, 2009).Put differently, mutual understanding is not only a thing that interlocutors may have or share, it is also an interactive activity they do together (Bavelas et al., 2017).In this study, 'doing mutual understanding' refers to the interactive microprocess that interlocutors accomplish together as they make inferences, respond to each other, and calibrate on their meaning (Bavelas et al., 2017).A calibration sequence consists of the following three steps: A. The speaker initiates the sequence by presenting new information (the A-initiation). B. The addressee responds in a way that implies or demonstrates understanding (the B-response). C. The speaker follows up in a way that implies or demonstrates that the addressee's response was sufficient for current purposes (the C-follow up). (p. 96, italics original) That is, according to Bavelas et al. 
(2017), interlocutors use three-step calibration sequences to change the status of individual presentations to mutually understood contributions (cf.Lindgren et al., 2007;Tsui, 1989).In contrast to Clark's two-phase grounding process (e.g.Clark, 1996;Clark and Brennan, 1991), calibrating sequences are micro-units consisting of three observable steps (cf.Bavelas et al., 2017;Linell, 1998), which can include both visible and audible minimal responses, such as a quick shrug or a brief 'mhm'.This is similar to Deppermann's (2015) and Enfield and Sidnell's (2021) reasoning and analyses of achieving intersubjectivity; for understanding to be mutual, interlocutors must make their understanding of each other's utterances publicly available and ensure that these understandings are 'compatible enough for practical purposes' (Deppermann, 2015: 78;cf. Clark and Brennan, 1991;Clark and Schaefer, 1987;Garfinkel, 1967;Linell, 2009;Linell and Lindström, 2016;Schuetz, 1953;Svennevig, 2009).In Arundale's terminology, interpretings are 'operative' in that interlocutors may use them to design their subsequent utterances (Arundale, 2022).That is, calibrations do not necessarily bring about cognitive intersubjectivity, but rather a practical one.Hence, doing mutual understanding does not require that interlocutors have, or share, a final or identical interpreting of an utterance (cf.Arundale, 2020).Instead, calibrating is an interactive activity in which interlocutors presuppose more and more information, allowing their dialog to progress. The following example, abstracted from one of the audiovisual recordings of triadic child and family therapies (De Jong and Berg, 2013) that I return to in the results section, illustrates the nature and scale of doing mutual understanding: Alex (ch, child), his mom (pt, parent), and their therapist (th) are talking about issues that are going on at home.Alex is unhappy with his amount of household chores, and the therapist asks him what he is going to do about it: 1. Th: So (turns her hands up and down) what are you going to do? 2. Ch: (looks down) Eh, just try to keep, keep (looks at the therapist) talking about it, keep bringing it up.3. Th: (nods and smiles) Yeah (keeps nodding), wow. When the therapist says, 'So what are you going to do' (utterance 1), she cannot assess whether her client, Alex, has understood her until he displays his understanding in a response (utterance 2).However, Alex cannot know whether he has understood his therapist until the third utterance, when the therapist displays her evaluation of Alex's response.Once the therapist answers '(nods and smiles) Yeah (keeps nodding), wow', Alex can infer that he has understood the therapist's first utterance well enough for current practical purposes.According to Bavelas et al. (2017), these three steps are the minimal unit for interlocutors to infer that they have accomplished mutual understanding, in this case on the therapist's question.Doing mutual understanding does not require that the utterances are dedicated to displaying understanding (cf.Deppermann, 2015).If Alex and his therapist had to explicitly state how they understand previous utterances, their conversation would become 'endless' (Heritage and Atkinson, 1984), 'uneconomical' (Deppermann, 2015), and 'less on track' (Enfield and Sidnell, 2021). 
One could argue that the example is made up of two overlapping two-part sequences (Jovanovic et al., 2006).However, the 'wow' in the third utterance does not make sense except in relation to the first utterance; that is, utterances 2 and 3 are not a canonical adjacency pair. Contextualizing three-step sequences in previous research Using audio-recorded telephone calls between two interlocutors and analyzing meticulous transcripts of these, pioneering approaches established that the smallest units of conversational organizations are adjacency pairs (Schegloff and Sacks, 1973).These pairs consist of two parts, where the second utterance is functionally dependent on the first.Examples of adjacency pairs include, 'greeting-greeting', 'question-answer', and 'offer-accept/ decline' (Schegloff, 2007;Schegloff and Sacks, 1973).According to Bavelas et al. (2017), the possibility of a third step has been mainly treated as an exception that expands the two-turn standard (e.g.Peräkylä, 2011;Seedhouse, 1996).That is, three-step sequences have mainly been treated as specific to the analyzed settings, not as evidence of a generic structure (Bavelas et al., 2017;Heritage, 2022).Despite strong evidence, it is not until recently that the centrality of the three steps has been embraced (Heritage, 2022). In contrast, other scholars have built on or are in line with Mead (1934) and Goffman (1976Goffman ( , 1981)).They argue that 'the bulk of conversation is not constructed from adjacency pairs' (Levinson, 1981: 482-483) and propose that mutual understanding is accomplished in three-step units (e.g.Arundale, 2010;Bavelas et al., 2012Bavelas et al., , 2017;;De Jong et al., 2013;Heritage, 1984Heritage, , 2022;;Linell and Markovä, 1993;Severinson Eklundh and Linell, 1983;Svennevig, 2009;Tsui, 1989).Severinson Eklundh and Linell (1983) illustrated that it takes a minimum of three steps for interlocutors in a dialog to display shared agreement on what has been said.Indeed, in their words, a systematic holding back of third steps generates a sense of dissatisfaction.As in the opening example, without the third step, the therapist and Axel would not have been able to accomplish mutual understanding (cf.Deppermann, 2015;Heritage, 2022;Severinson Eklundh and Linell, 1983); instead, understanding would have remained implicit or inferred.In the therapist-client example provided, only the therapist would have been able to assess whether Alex had understood the initiation after two steps.Alex would not have known whether his response was sufficient for the current purpose until the therapist had followed up on it. The present study This study is part of a larger research project on children's involvement in child and family therapy sessions (e.g.(Edman et al., 2022;Edman et al., 2023)).It combines two main themes: First, it focuses on the interactive micro-process interlocutor use when they do mutual understanding, drawing on a recent conceptual framework, methodology, and terminology (Bavelas et al., 2017).Second, it extends this framework from dyadic to triadic dialogs. By exploring how therapists and clients organize and accomplish mutual understanding in both staged and naturally occurring audiovisual recordings of triadic child and family therapy sessions, I aim to delineate complexities associated with multiparty therapies.Such knowledge has the potential to improve the quality of child and family therapies and dialogs in allied fields and to advance research on dialogs at large. 
The results are based on a detailed analysis informed by Microanalysis of Face-toface Dialogue (Bavelas et al., 2016).In keeping with this method, I have attended to the pragmatics in the audiovisual recordings: only behaviors that are recognizable through observation (including audition) are analyzed.Observable behaviors involve both speech, co-speech gestures (Bavelas et al., 2014), andembodied non-verbal behaviors (cf. Goodwin, 2000), including quick shrugs, an exclamatory oh, raised brows, and so forth.I interpreted the interactive function of speech, co-speech gestures, and embodied behaviors (Bavelas et al., 2017). I utilized the ELAN software for qualitative data analysis of the recordings.ELAN kept the annotated utterances and sequences synchronized with the recordings and allowed me to watch the recording frame by frame (e.g.Lausberg and Sloetjes, 2009), which was helpful as the analysis demands a very focused attention to brief micro units (Bavelas et al., 2017). Dataset The data consist of four audiovisual recordings of triadic child and family therapy sessions (Table 1).The first recording is published as supplementary material in Interviewing for Solutions (De Jong and Berg, 2013).Recordings 2-4 are extant, that is, products of every-day therapeutic practices that took place from September 2019 to January 2022.They are part of a larger dataset and belong to an overarching research project on children's involvement (e.g.Edman et al., 2022;Edman et al., 2023).At the time of recording, none of the interlocutors were aware of this particular analysis on doing mutual understanding. Three main reasons determined the dataset: (1) Because the first recording is not protected by confidentiality, parts of the study can be replicated.Such transparency can provide a solid base for further research.(2) Recordings 2-4 are naturally occurring audiovisual recordings of practice.They enhance the results' applicability to real-world settings.(3) The interlocutors are visible in the frame at the same time, which enables detailed analyses of their interactions. Analysis To identify calibrating sequences, the procedure consists of three stages (see below), which are based on the microanalysis developed by Gerwing and Healing (2017).In addition to decision trees and guidelines, the manual includes operational definitions of utterances (Table 2, with examples from the first recording), minimal responses, A-initiations, B-responses, and C-follow ups (Table 3, with examples from the first recording).The different operational definitions have inter-analyst agreements ranging from 89% to 98% (Gerwing and Healing, 2017).The manual is available upon request from any of its authors.Because the first recording is not protected by confidentiality, I have not anonymized the examples in the tables.A speaker's utterance ended either when the speaker paused and looked at the addressee, creating a gaze window to elicit a response (Bavelas et al., 2002) or when the addressee said or did something that could be construed as communicative (e.g. a formulation, "M-hm," or a nod)'.(Bavelas et al., 2017 (Gerwing and Healing, 2017: 18). th: Is he a good student? The therapist asks for the mom's take on whether she thinks her son is a good student.The request introduces a topic and the requester's curiosity about that topic. 
Proposing something that managed the conversation at a meta-level above topical content (Gerwing and Healing, 2017) An utterance that functions to manage the conversation at a meta-level, above the level of topical content (Gerwing and Healing, 2017:21). th: Are you ready? The therapist proposes to continue the conversation, using a question that manages the conversation rather than introduces new topical content.It offers an opportunity for the mom to confirm or disconfirm that the inference is correct. Alerting Stage 1.I identified, categorized, and selected utterances by which the child, parent, or therapist presented new information (A-initiation, Table 3).These utterances were the initial steps in potential three step sequences. Stage 2. For each potential A-initiation, I evaluated whether the next utterance fulfilled the criteria for a B-response (Table 4). Stage 3.For each identified B-response, I evaluated whether the original speaker's next utterance fulfilled the criteria for a C-follow up (Table 4). Transferring a framework and methodology developed in one context (getting acquainted conversations among undergraduates) to a new context is not without challenges.In child and family therapies, for example, when therapists request new topical content, it may be part of a treatment model and does not necessarily introduce the therapist's personal curiosity in the topic.Further, in triadic dialogs one person's utterance does not always build on a just prior utterance.During or after a B-response, for instance, a third interlocutor may insert an utterance before the person who uttered the A-initiation provides a C-follow up.The meaning of 'after' (cf.Table 4) is interpreted in line with the works of Severinson Eklundh and Linell (1983), who argued that it takes a minimum of three steps for interlocutors in a dialog to display shared agreement on what has been said.The overarching definitions acted as a guide when a detail in an operational definition did not explicitly correspond to utterances in the recordings.To ensure consistency with the microanalysis of face-to-face dialog, an independent analyst and I applied the manual to 10% stratified random samples from the first dataset.As the recording is not protected by confidentiality, it was thus possible to share with an external analyst.We determined an inter-analyst agreement by dividing our number of agreements with our total amount of agreements and disagreements: A-initiations 84%, B-responses 100%, and C-follow ups 90%.I also presented a selection of three-step sequences at data sessions, and I reviewed my findings several times.After this final stage of identifying three-step sequences, I looked at how the child, parent, and therapist in each recording organized and accomplished mutual understanding. 
Ethical considerations The research project was approved by the Swedish Ethical Review Authority in August 2019.In addition, local boards of ethics approved of the research collection at the mental health clinics and the social service department.In the study, the 15-year-old provided hir own consent.According to Swedish ethical guidelines, children under the age of 15 are protected from being burdened with consent issues.Therefore, the legal guardians of the 13-year-old and 14-year-old approved of their participation, which the children did not oppose.The confidentiality of the social services departments and mental health clinics was transferred to the research project.The children and parents were informed about the study after the sessions had taken place to avoid affecting the children's engagement in their treatments.The procedure also ensured that children whom the social workers considered likely to be burdened by the study were not subjected to it (cf.Westlake, 2016;Winter et al., 2017).To safeguard the anonymity of the participants, I have pseudonymized the excerpts from the protected recordings (recordings 2-4).I use ze/hir pronouns to safeguard these participants' anonymity. Results Most of the identified calibrations had the same structure as in the initial example.That is, the participants uttered B-responses and C-follow ups immediately after the A-initiations.However, while the children, parents, and therapists calibrated 95.3% of the identified 669 A-initiations using a minimum of three-steps, I observed complexities linked to the three-person setting in 14.4% of the calibrations.I identified six patterns of interaction that point to complexities in three-person dialogs in child and family therapies.I termed the patterns as follows: (1) suspended calibrations, (2) nested calibrations, (3) branched calibrations, (4) multi-paced calibrations, (5) calibrations of different interpretations, and (6) calibrations of the tone.The sixth pattern is, however, not necessarily specific to triadic or multiparty therapies, nor to triadic or multiparty dialogues at large.Please note that the calibrating sequences are inevitably more visible in the recordings, which I analyzed directly, than in the following transcribed excerpts (cf.Bucholtz, 2007;Hammersley, 2010;Ochs, 1979).In addition, while there are various examples of calibrating sequences and other conversational activities in the excerpts, they are beyond the scope of the analysis.I concentrate on patterns that highlight complexities in how three interlocutors accomplish mutual understanding, and I elaborate on these patterns and their possible implications on both practice and research in the concluding discussion. Suspended and nested calibrations Suspended and nested calibrations occur when a competing A-initiation puts ongoing calibrating sequences on hold.They delay initiations that could otherwise have been calibrated in straightforward three-step processes. This first excerpt (from the publicly available recording) starts with the therapist (th) asking Alex (ch, child) what his mom (pt, parent) is like when he is like 'that' (first A-initiation at line 814-815 and 817), to which Alex alerts that he is not following (second A-initiation at line 818). 
Excerpt 1, first recording, 00:32:34-00:32:50 At lines 819 and 821, the therapist provides a B-response to Alex's A-initiation (line 818), to which Alex follows up much later, at line 827.In between the therapist's B-response and Alex's C-follow up lies a nested calibration initiated by the mom: A-initiation at line 822 ('maniac' formulates the therapist's 'jovial good humor' and refers to a discussion they had earlier in the session), a B-response at line 823, and a C-follow up at line 825.This nested calibration sequence means that the first and second A-initiations are suspended for eight and seven utterances, respectively. The excerpt and introductory paragraph illustrate some of the complexities that characterize calibrations in triadic child and family therapies.When the mom inserts a joke (initiated at line 822), she puts the two ongoing calibrations on hold while the inserted joke is calibrated.That is, what could have been calibrated in two straightforward sequences was suspended over several utterances (cf. Severinson Eklundh and Linell, 1983).interlocutors will inevitably accomplish mutual understandings of the same A-initiation at different stages of the dialog.That is, their temporal processes will differ. Branched and multi-paced calibrations In the next excerpt, the therapist's A-initiation (at lines 440-441) generates two different calibration sequences, indicated by two identifiable B-responses -one from the child (at line 442) and one from the parent (at line 443).The excerpt starts with the therapist formulating why the child is disappointed over hir parent's plan to stay at home. Excerpt 2, fourth recording, 00:08:26-00:08:30 In this excerpt, we see that the calibration branches out, and the child and the therapist, as well as the parent and the therapist, accomplish mutual understanding at different stages.For the therapist's A-initiation at lines 440-441 to be fully calibrated, the separate B-responses demand their own C-follow ups from the therapist, each directed at the relevant addressee (at lines 444 and 446-448).The therapist and the child accomplish mutual understanding at line 444, when the therapist looks at the child while saying 'and stuff'.However, the parent and the therapist's mutual understanding remains implicit until lines 446-448, where the therapist, by directing hir gesture and speech toward the parent, follows up in a way that demonstrates that the parent's response was also sufficient for current purposes (C-follow up, Bavelas et al., 2017). 
Accomplishing mutual understanding with different clients who both respond requires that therapists engage in divergent calibrations simultaneously.The next excerpt provides an additional example of this pattern.When the excerpt begins, the child is looking down, and the therapist and the parent have exchanged eye contact.The therapist extrapolates what ze has understood so far (that both the parent and the child are happy with a conversation they had earlier that morning) and invites the parent and the child to confirm or disconfirm hir inference.'That' at line 782 refers to that conversation.As can be seen, the parent and the therapist accomplish mutual understanding of the therapist's A-initiation before the child and the therapist: the therapist makes an A-initiation at line 782-784, to which the parent responds to at line 785 (B-response) and the therapist follows up on at line 786 (C-follow up).The child, who is looking down, cannot see that the therapist is rounding up hir utterance by looking at the parent, hirself and then at the parent again (at lines 772-773, cf.utterance operationalization, Table 2).As a result, the child does not provide hir B-response until later (at line 787).The therapist follows up on the child's B-response at lines 789-890 (C-follow up).The therapist's C-follow up also follows up on the parent's second B-response (at line 788).Put differently, the therapist and the parent accomplish mutual understanding at a faster pace than the therapist and the child.They manage to complete two calibrations at the same time as the therapist and the child complete one. Calibrations of different interpretations Calibrations of different interpretations occur when addresses provide different B-responses that imply different understandings of the same A-initiation, which the initial speaker accepts in hir C-follow up/s.Calibrating different interpretations is possible in three-person interaction and exemplifies one of many complexities in multiparty therapies. The excerpt below shows where the three interlocutors engage in overlapping calibrations of different interpretations of the same A-initiation.The excerpt begins with the child proposing that 'you' may also bring up issues (A-initiation at lines 157-158), which gets calibrated twice: once between the child and the therapist (B response at line 161 and C-follow up at line 162) and once between the child and the parent (B-response at line 159) and C-follow up at line 160. Excerpt 4, second recording, 00:03:13-00:03:19 156 th: (looks down at their shared notepad) 157 ch: (looks at hir mom) you can (looks down) bring up 782 th: tha:t you both seem to be (looks at the parent, at 783 the child and then at the parent again) happy with 784 [with] 785 pt: [yes] 786 th: if you just restrict [it] to that 787 ch: [mm] (keeps looking down) 788 pt: (nods) mm 789 th: em ( .)what was in it (.) eh the dialogue that that 790 made it good (.) 
As can be seen, the child/therapist and the child/parent calibrate different understandings of the A-initiation: While the child/therapist calibrate that the child invites the therapist to bring up what the therapist finds important, the child/parent calibrate that the invite ('you') is directed to the parent.Because the therapist is looking down at the notepad while the child is talking, ze is unable to see that the child is turned to hir parent when ze utters 'you' (at line 157).As the therapist continues to look down, ze is also unable to register the parent's B-response (at line 159), as well as the child's C-follow up to that response (at line 160).Instead, the therapist provides a B-response (at line 161) to a longer A-initiation (at lines 157-158, 160), which the child follows up on (supposedly unintentionally as the child is still focusing on hir parent) at line 162. Because the two calibrations are paced differently, the excerpt also illustrates multipaced calibrations. Calibrations of the tone Below, the interlocuters calibrate the 'tone', or the 'quality', of an A-initiation before the 'topical content' of the utterance is established as mutually understood.It starts with the therapist asking the mom what it is like when her son is in a good mood and then smiles widely (A-initiation starting at line 787).At this stage, both Alex and his mom are looking at the therapist.The mom and Alex provide separate B-response at line 791 and at line 792, to which the therapist follows up on at lines 793-794.By looking and smiling at both Alex and his mom, the therapist indicates that both of their responses are acceptable.However, it is only at line 795 that the mom provides a B-response regarding the topical content of the therapist A-initiation, to which the therapist follows up on at lines 797-798 (C-follow up). Concluding discussion The analysis identifies how therapists, children, and parents organize and accomplish mutual understanding in triadic child and family therapies.It contributes with new documentations of how three interlocutors do mutual understanding in 'suspended' (excerpt 1), 'nested' (excerpt 1), 'branched' (excerpt 2-3), 'multi-paced' (excerpt 2-4), and 'mixed interpretations' (excerpt 4) calibrations.The analysis also demonstrates how interlocutors may calibrate the 'tone' of an A-initiation (e.g. that what is put forward is said with humor) before the topical content of the same A-initiation is mutually understood (excerpt 5). 
Implications on practice Calibrations coexist with elements that are specific to their setting.Their implications therefore depend on context-specific parameters.For instance, when interlocutors in a family therapy session calibrate different interpretations of the same A-initiation (excerpt 4) -sometimes at different times and paces (excerpts 2-4) -it may contribute to generating or enhancing an imbalance among them.If a therapist and a parent, for instance, calibrate at a faster pace than the therapist and the child, the child may fall behind (or even further behind), which may lead to the therapist-parent-child relationship evolving unevenly (cf.'working alliance ', McLeod et al., 2014).As a result, the child may be less willing to share hir concerns with the therapist and thus less inclined to exercise hir right to express hir view in matters that affect hir (United Nations, 1989).Such imbalances may also pertain to other triadic and multiparty dialogs.Similar implications may arise when two or more clients provide separate B-responses to different fragments of the same A-initiation.If one client is looking in another direction, that person may respond to a longer utterance than the client who is looking at the therapist and uses minimal responses (cf.excerpt 4).Further, when a therapist asks several questions in a single utterance, the clients' B-responses may imply or demonstrate understanding of different sections of that utterance.Another possible implication may arise when a therapist and a client calibrate the tone of an utterance differently than the therapist and another client, or perhaps not at all.When the tone of an utterance is calibrated differently among the interlocutors, the utterance may come to serve different functions, which may steer a therapy session in conflicting directions.Furthermore, there is likely a great frequency of nested and suspended calibrations in triadic child and family therapies, which therapists supposedly need to both anticipate and address to stay on track.This knowledge about complexities related to how three interlocutors sequentially organize and accomplish mutual understanding in child and family therapies has the potential to prevent misunderstandings and to modify imbalances, including uneven working alliances.For instance, if the therapist in excerpt 3 had taken control of the calibration pace, the therapist and the child could have calibrated their understanding quicker than the therapist and the parent.By calibrating both the tone and the topical content with every interlocutor, the therapist could also prevent possible misunderstandings. The results could be useful to professionals working with different types of institutional talks and be invaluable to, for example, therapists who work with families or health care providers who talk with patients and next of kin.There may also be implications for institutional talks involving assessments or evaluations, including service-users who depend on accurate social work assessments or asylum seekers at citizenship and immigration services. 
Corroborating existing research results on doing mutual understanding The interlocutors calibrated 95.3% of the identified 669 A-initiations using a minimum of three-steps.This suggests that the interactive micro-process two interlocutors use when they do mutual understanding (as defined by Bavelas et al., 2017) is likely applicable to triadic dialogs.This would be in line with what both Mead (1934) and Goffman (1976Goffman ( , 1981) ) have suggested and what succeeding researchers have proposed -threestep sequences are a fundamental unit of conversational organization (e.g.Bavelas et al., 2012Bavelas et al., , 2017;;Deppermann, 2015;Gerwing and Indseth, 2016;Heritage, 2022;Linell and Markovä, 1993;Severinson Eklundh and Linell, 1983;Svennevig, 2009;Tsui, 1989).As mentioned in the introduction, the ubiquity of such three-step sequences has been identified in classroom settings (Mehan, 1979;Sinclair and Coulthard, 1975), doctors' appointments (Tsui, 1989), rescue operations (Lindgren et al., 2007), customer calls (Kevoe-Feldman and Robinson, 2012) psychotherapies (Bavelas et al., 2012;De Jong et al., 2020), emergency calls with a language barrier (Gerwing and Indseth, 2016), and getting acquainted conversations (Bavelas et al., 2017).The present article adds child and family therapy and audiovisual recordings of three person interactions to the list of activity types in which three-step sequences are common.That they are common in such a diverse array of contexts suggests that when it comes to organizing and accomplishing mutual understanding of new information, adjacency pairs are probably insufficient.In Heritage's words, 'Although a great deal has been written about the significance of adjacency pairs, it may not do to exaggerate their empirical frequency.Conversations are far from being constructed exclusively from them' (Heritage, 2022: 318).Demonstrating understanding of the just-prior utterance is especially difficult to apply to multiparty interactions where progression appears less straightforward.Instead of anticipating what will come next based on the just-prior utterance, each interlocutor must subprehend (Enfield and Sidnell, 2021) the unexpected. Methodological issues and future research directions Besides replicating previous studies with similar findings (e.g.Bavelas et al., 2012Bavelas et al., , 2017;;De Jong et al., 2020;Tsui, 1989), this study's results are supported by an analysis that used a manual with operational definitions, numerous presentations and discussions of three-step micro sequences at data sessions, and a high inter analyst agreement on stratified random samples of three-step micro sequences, as well as on separate A-initiations, B-responses and C-follow ups.However, applying a framework and methodology developed on getting-acquainted conversations between two persons to triadic child and family therapy sessions has not been a straightforward process.This could explain why I, for instance, achieved a lower inter-analyst agreement on A-initiations than the original study's authors (84% vs 94%). 
The number of calibrated sequences may be underestimated, and they were possibly both greater and the pace even more rapid than what I account for here.As several of the interlocutors in the recordings were mostly visible in profile, some utterances may have been unavailable for analysis.An addressee's crooked smile or one-eyed wink (utterances that may have been unavailable for analysis) could, for instance, have split an initial speaker's utterance in two (Table 2), thus possibly resulting in additional calibrations and faster rates than what were observable.To address this limitation and further detach research on face-to-face dialog from written -or 'spoken' -word bias (cf.Linell, 1982Linell, , 2005)), future research could analyze recordings in which the interlocutor's faces (and preferably bodies) are fully visible. How children, parents, and therapists organize and accomplish mutual understanding in triadic child and family therapies may inform future research on interactions with more than two interlocutors.Additional microanalyses on organizing and accomplishing mutual understanding in triadic and multiparty interactions could, for instance, advance knowledge on how to modify imbalances and prevent and repair misunderstandings. Table 4 . B-response and C-follow up. Branched calibrations occur when A-initiations branch out in two calibration sequences involving different addressees.When calibrations branch out, the
2024-01-08T16:10:43.845Z
2024-01-05T00:00:00.000
{ "year": 2024, "sha1": "c84a89cf6b7aab94c4a0d79cb028b7ef7fef11e6", "oa_license": "CCBY", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/14614456231207519", "oa_status": "HYBRID", "pdf_src": "Sage", "pdf_hash": "f1878150e0115331929c1d73e378d732f26e0756", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
55987697
pes2o/s2orc
v3-fos-license
Aboveground Biomass Stockpile and Carbon Sequestration Potential of Albizia saman in Chennai Metropolitan City, India Albizia saman (Jacquin) F. Mueller belongs to the family Fabaceae (sub family: Mimosoideae) is a native to Northern South America. Commonly known as rain tree and locally known as Thoongu-moonchi maram (Tamil). The species’ introduced during Colonial period as an ornamental tree in Chennai metropolitan city (CMC). Though A. saman represent as a dominant tree species’ in CMC, there are voids in baseline data such as density, biomass stockpile, and annual C sequestration potential hence this study was conducted to fill these voids. A total of 2522 individuals which cover 1672.14 m basal area (mean = 9.61 ± 4.95 m ha; range = 0-24.96 m ha) was recorded from study plots. During study period A. saman stocked a sum of 6403.51 Mg aboveground biomass (AGB) (mean = 36.8 ± 18.9 Mg ha; range = 0-95.4 Mg ha) and 3201.76 Mg C (mean = 18.9 ± 9.45 Mg ha; range = 0-47.7 Mg ha). C storage of individual tree ranged from 3.74 to 4598.18 kg with a mean value of 1269.53 ± 1082.25 kg. On an average, each tree achieved 1.04 ± 0.27 cm horizontal growth yr. In a year A. saman population sequestered 111.23 Mg biomass in aboveground (in 174 ha). The mean C sequestration of study area was 319.62 ± 184.0 kg ha year. In total, the study area sequestered 55.62 Mg C year. Overall, in a year A. saman absorbed 204.13 Mg CO2 for C sequestration in study area. CO2 absorption ranged from 385.46 to 3009.29 kg ha -1 yr. The monetary value of C storage and annual sequestration of A. saman is also investigated. Though introduced from tropical Northern South America A. saman provides a considerable ecosystem services to CMC through C storage and sequestration. This study estimated monetary values of just two ecosystem services of A. saman, study that concentrates on all ecosystem services is essential to assess total actual ecosystem service values. Introduction Urban areas are home for about half of the global human population [1]. An estimation shows that urban human population will increase up to 5 billion by 2030. Approximately, 1.2 million km 2 area (three times larger compared to the year 2000) would come under cities in 2030, ultimately, this would lead to loss of biodiversity and forest cover around the world including India [2]. Thus, in-depth scientific studies are essential to understand the importance of urban forests and ecosystem services they provide. Tireless efforts and decades of continuous research work has advanced our understanding of urban forests and green spaces [3]. Urban forests and their biotic components play vital roles in reducing energy budgets of building and urban heat islands [4,5], augmenting water and air quality [6], decreasing the impacts of flooding [7], improving human health and reducing sound pollution [8]. Among lifeforms, trees are important constituent of urban ecosystems. Besides, urban trees do array of ecosystem services including biomass and carbon storage [9]. Urban forests are either rich in native species [10] or introduced species [11]. McKinney [12] named the introduced species as urban exploiters, found extensively from urban areas around the world. Introduced trees can also provide considerable quantum of ecosystem services [13]. Albizia saman (Jacquin) F. Mueller belongs to the family Fabaceae (sub family: Mimosoideae) is a native to Northern South America. Commonly known as rain tree and monkey pod, locally known as Thoongu-moonchi maram (Tamil). 
Now extensively grows throughout the tropics. It reaches up to 25 m tall and 30 m crown diameter, highly suitable for large homesteads, parks, roadsides and school play grounds [14]. The tree has good qualities, grows well at sea level to 300 m amsl, adapts to a broad array of soil types and pH ranges, growth rate is relatively high (2.5-5 ft yr -1 ), produces fodder and timber, generates 1700-4200 kg biomass in 5 years [14]. Besides that the tree also has economic importance as fuel wood [15], food and fodder [16], timber [17], gum and resin [18], nitrogen fixer and green manure [19], and medicine [20,21]. The species was introduced during Colonial period as an ornamental tree in Chennai metropolitan city (CMC) [22]. Now it grows extensively in parks, roadsides, playgrounds of academic institutions, and avenues in CMC [23]. The urban forest division of Chennai district prefers this tree for its fastgrowing nature, handsome dome-shaped crown and shade. Though A. saman represent as a dominant species in CMC there are voids in baseline data such as density, sequestered biomass, C stockpile and sequestration potential hence this study was conducted to fill these voids. Study Area Chennai Metropolitan city is 34th largest city in the world with the human population ~ 5 million [24]. CMC is one among the four mega-cities of the Indian subcontinent, and the capital city of Tamil Nadu state. The city is experiencing a tropical dissymmetric climate and receiving bulk of the rainfall during north-east monsoon (September-December). Mean temperature and rainfall were 30°C and 1300 mm [25]. East-side of the city is bounded with the Bay of Bengal and remaining three sides are bordered with Thiruvallur and Kanchipuram districts. CMC is endowed with rich plant diversity (1039 species) [24] which include both native as well as introduced species. Field Survey The entire geographical area of CMC (174 km 2 ) was divided in to the regular rectangular grids (1 km 2 × 1 km 2 ) by fishnet tool of ArcGIS software (version 9.3). The sample sites were selected randomly inside of the each grid. A total of 174 one-hectare sample plots were laid to record density and diameter at breast height (dbh) of Albizia saman (> 5cm dbh). Diameter of all trees >5cm dbh was measured at the height of 137 cm above the ground and recorded in field data sheet. In order to record DBH value for consecutive years, trees were tagged with consecutively numbered aluminium tags. Field survey was conducted during January-March on 2011 and 2012. Data on trees was recorded with the help of students of Botany departments across the Chennai city. Rainfall and temperature recorded in the year 2011 and 2012 were more or less equal to the mean rainfall and temperature of the study area, hence the study period represented Chennai's usual climatic and environmental conditions. Estimation of Aboveground Biomass A region-cum-species specific allometric formula developed by destructive sampling method was employed to estimate AGB of Albizia saman in study area [26,27]. AGB dry = exp (1.9724*LN (DBH) -1.0717); where, AGB dry is aboveground dry biomass of tree (kg); DBH is stem diameter at breast height (cm); LN is natural logarithm; 1.9724 and 1.0717 are constants. The allometric formula developed with the destructively sampled healthy individuals of A. saman (DBH range 4.45 to 178.7 cm). Due to hetero-scedasticity nature of field data, the error variance was not constant. The problem was dealt with the transformation of variables. 
But the de-transformed predicted values are biased [28]. To overcome those bias, the back transformed results from logarithmic unit was multiplied by a conversion factor (CF = 1.016) [29]. DBH of trees ranged from 5 to 176 cm in the present study. The coefficient of determination of allometric equation is high (r 2 ) i. e. 0.98. Standard error of the estimate is 0.76. Assessment of Carbon Storage and Sequestration To get carbon storage values of trees aboveground biomass multiplied by 0.50 [30]. The annual increase of stem diameter and biomass sequestration of trees were calculated by the difference in estimates of dbh and biomass stockpile between year x and x+1. Carbon storage and sequestration values were converted to CO 2 equivalent by multiplying with 3.67, the ratio of molecular weights of CO 2 to C [31]. Monetary Value of Ecosystem Services The money value of ecosystem services provided by A. saman, namely C storage and sequestration was calculated based on international C price. International price for one tonne C is 41 US$ [32]. Tree Density and Basal Area A total of 2522 individuals (>5 cm dbh) was recorded in 174 ha. Density of trees ranged from 0-30 ha -1 . The mean tree density of A. saman was 14.49 ± 8.52 ha -1 . Likewise, the basal area of trees varied from 0-24.96 m 2 ha -1 . The average basal area of A. saman was 9.61 ± 4.95 m 2 ha -1 . Few sample plots completely fell on water bodies where density, basal area and AGB were recorded as '0'. DBH of trees differed from 5-176 cm, while the mean dbh in study area recorded as 80.95 ± 43.53 cm (Table 1). Aboveground Biomass As on March 2011, A. saman stores a sum of 6403.51 Mg AGB in 174 ha study plots. The mean AGB of study area was 36.8 ± 18.9 Mg ha -1 (range, 0 to 95.4 Mg ha -1 ). The mean AGB of an individual tree was recorded as 2539.1 ± 2164.5 kg (range, 7.48 to 9196.35 kg) ( Table 1). DBH of tree is positively linked with AGB (r 2 =0.94, p < 0.01). The larger is the tree the more is the sequestered biomass. The largest tree holds nearly 1200 times more AGB than the smallest one in study area. AGB storage of diameter classes varied considerably in the study area, DBH class 135. Annual Horizontal Stem Growth On an average, each tree achieved 1.04 ± 0.27 cm horizontal growth yr -1 . There is a negative relationship exists between tree dbh and tree horizontal growth (r 2 =0.87, p < 0.01). The smaller is the tree the larger is the annual stem horizontal growth (Table 3). Sequestration of Biomass and Carbon In a year A. saman population sequestered 111.23 Mg biomass in aboveground parts (174 ha). AGB sequestration varied from 210.05 to 1639.94 kg ha -1 year -1 . The mean AGB sequestration of study area was estimated as 639.24 ± 367.99 kg ha -1 year -1 . Carbon sequestration of A. saman differed from 105.03 to 819.97 kg ha -1 year -1 among study plots. The mean C sequestration of study area was 319.62 ± 184.0 kg ha -1 year -1 . In total, the study area sequestered 55.62 Mg C year -1 . Absorption of CO 2 for C Sequestration Overall, in a year A. saman absorbed 204.13 Mg CO 2 for C sequestration in study area. CO 2 absorption ranged from 385.46 to 3009.29 kg ha -1 yr -1 . On an average, each onehectare plot absorbed 1173.0 ± 675.28 kg CO 2 to sequester C. Monetary Values of Carbon Storage and Sequestration The monetary value of C storage and sequestration of A. saman in study area (174 ha) is 131,272 and 2,280 US$, respectively. The money value of these kind of ecosystem services of A. 
saman for entire CMC (17400 ha) could be estimated as 13.12 and 0.23 million US$, respectively. On an average, the monetary values of C storage and sequestration of each hectare could be valued as 754 and 13.11 US$, correspondingly. Tree Density and Basal Area In an earlier tree diversity study conducted across different land uses of CMC A. saman constituted 6% tree community and topped in the list of important value index (IVI) among 45 species [10]. Likewise, A. saman constituted a considerable proportion of tree communities in urban forests of Bangalore and West Bengal, India [33,34]; Bangkok, Thailand [35,36]; Chittagong, Bangladesh [15] and USA [37]. The mean basal area recorded for A. saman (9.61 ± 4.95 m 2 ha) is not in agreement with previous study [23]. Earlier a study recorded 22.33 m 2 BA ha -1 for A. saman. However, the previous study concentrated only on a hectare area of CMC, present study concentrated on large area i.e. 174 ha. The mean basal area recorded for species' under study is larger than the mean stand basal area (of all species) of urban forests in Ohio, USA (4.8 m 2 BA ha -1 ) [38], and more or less equal to urban forests of Miami-Dade County, USA (10.0 m 2 BA ha -1 ) [39]. It is apparent that the occurrence of good proportion of well-grown large trees in the CMC (>60 DBH, 61.14%) contributed to larger tree stand basal' areas. Population Structure Albizia saman is showing a non-expanding population structure in CMC (Figure 2). Among 11 diameter classes, nine (except smallest and largest DBH classes) had more or less similar number (230-249) of individuals. Ongoing developmental activities such as construction of buildings and bridges, widening of roads etc. are contributing to the destruction of trees. Non-expanding population structure of A. saman indicates that the individuals of all girth classes are under disturbances. Aboveground Biomass and Carbon Storage The results obtained on mean AGB and C storage of A. saman (AGB=36.8 ± 18.9 Mg ha -1 ; C=18.4 ± 9.45 Mg ha -1 ) is higher than in urban forest of Tripura university campus, Northeast India (AGB=11.81 Mg ha -1 ; C=5.91 Mg ha -1 [40] [30]; and, Oakland, USA (22 Mg ha -1 ; 11 Mg ha -1 ) [46]. The population of A. saman composed of relatively larger trees (mean DBH=80.95 cm) hence stored good amount of biomass and C in its aboveground parts. Quantitative studies should be conducted to estimate biomass and C storage of all tree species in Chennai city. The absence of region-specific multi-species tree allometric models is the primary lacking for these studies hence research on these lines could be valuable. Each tree stored 1269.53 ± 1082.25 kg C (range, 3.74-4598.18 kg) in study area. This value is higher than in Tshwane, South Africa (474.22 kg C) [47]; Beijing, China (98.87 kg C) [41]; Shenyang, China (58.51 kg C) [32]; and cities of USA (mean = 227.01 kg C; range = 91.81-638.95 kg C) [30]. On the other hand, the present study concentrated only on single species' thus studies that consider all tree species in CMC are essential to confirm the dominance. Stem Horizontal Growth The findings pertaining to mean stem horizontal growth [48] who reported 1.1 cm stem horizontal growth tree-1 year-1 for urban trees of USA. While the result of current study is not in line with that of deVries [49], Nowak [50] and Smith and Shifley [51] recorded 0.61, 0.90 and 0.38 cm horizontal stem growth tree-1 yr-1 respectively for trees of central park, New Jersey, three USA cities, and Indiana and Illinois, USA. 
Fast growth nature of A. saman in CMC contributed to a high dbh growth tree-1 yr-1 in this study. A negative relationship as obtained between stem horizontal growth yr-1 and diameter class is chiefly linked to the age of trees. Young trees show faster growth than larger trees. Age-related decreases in aboveground C sequestration were extensively recorded and reported around the world [52][53][54]. The per hectare C sequestration potential recorded for A. saman (319.62 ± 184.0 kg ha-1 year-1) is extremely lower than in urban forests of Pungol Eco-town, Singapore (3.61 Mg ha-1 yr-1) [43] and lower than in seven USA cities (458.57 kg ha-1 yr-1) [30]. Present study concentrated on single species', if all species taken in to consideration for C sequestration then the value may exceeds than in USA. Further studies are necessary to validate these lines. However, the per hectare C sequestration potential of A. saman is relatively higher than the cumulative C sequestration potential (individuals belongs to all species in a hectare) of urban trees in California (300 kg ha-1 year-1), Texas (300 kg ha-1 year-1), Arizona (300 kg ha-1 year-1), Rhode Island (300 kg ha-1 year-1), North Dakota (200 kg ha-1 year-1), and Wyoming (100 kg ha-1 year-1) of USA [47,48]. Monetary Value of two Ecosystem Services The monetary value of two ecosystem services namely, C storage and sequestration of A. saman for entire CMC is 13.12 and 0.23 million US$, respectively ( Table 4). The monetary value estimated in this study is higher as well as lower than in urban forests elsewhere. Stoffberg et al. [44] reported 3 million US$ for Tshwane, South Africa; Liu and Li [32] CO 2 Emission Reduction In a day Chennai needs 1300 kilolitres petrol and 2000 kilolitres diesel. Use of petrol, diesel emits 3003, 5360 Mg CO 2 in to the atmosphere, respectively. In all, fossil fuel use in CMC releases about 8363 Mg CO 2 into the atmosphere per day. C stockpile and sequestration of A. saman is equal to 1175, 204.13 Mg CO 2 , correspondingly. The average effects of tree diameter classes listed in Table 5. In total, A. saman population in CMC provide C storage and sequestration equivalent to 16.49% of a day's CO 2 emissions by fossil fuels. Conclusion Though introduced from tropical Northern South America A. saman provides a considerable quantity of ecosystem services to CMC through C storage and sequestration. This study estimated monetary values of just two ecosystem services of A. saman, study that concentrates on all ecosystem services is essential to assess total actual ecosystem service values.
2019-04-03T13:08:15.164Z
2018-10-29T00:00:00.000
{ "year": 2018, "sha1": "f865fe5eeffee5afe4677664430d062d701d5e09", "oa_license": "CCBY", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.plant.20180603.12.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2714e2d3754ffe3bfeb8654f1967228e2ec65119", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
245512344
pes2o/s2orc
v3-fos-license
Evaluation of the phenolic compounds and the antioxidant potentials of Vitex agnuscastus L. leaves and fruits Objectives: In this study, we aim at deciphering the phenolic content of Vitex agnus-castus L. leaf and fruit extracts prepared with different methods and relate it to their antioxidant activity. Methods: In this study, phenolic compounds and the antioxidant potential of the ethanol fruit and leaf extracts of V. agnus-castus L. (Chaste tree) were evaluated spectrometrically. Furthermore, selected polyphenols, i.e., chlorogenic acid and rutin, were determined by the HPLC-DAD method qualitatively and quantitatively. Results: The results obtained from leaf and fruit extracts were compared with a commercial product (CP) containing the fruit extract of V. agnus-castus . Leaf extract was found to be richer in fl avonoids when compared to the fruit counterparts. Accordingly, they also showed higher antioxidant activity. Conclusions: Extracts prepared here can be considered as promising antioxidant agents for future therapeutic formulations. Introduction When considering the prolonged life expectancy of humankind, natural sources rich in phenolic components have gained significant interest against chronic diseases. Oxidative damage is one of the most important factors in developing and progressing many chronic diseases, including cardiovascular disorders and cancer [1]. During this inevitable process, phenolic compounds step forward due to their superior effects, including free radical scavenging, enzyme regulation, and antiallergic and antiinflammatory properties [2]. Furthermore, antioxidants have proven to delay the oxidation of molecules by inhibiting the initiation and/or propagation of oxidizing chain reactions by free radicals. Therefore, they are crucial in reducing oxidative damage in living organisms [3]. To date, the presence of these compounds in medicinal plants has been strongly related to their antioxidant activity [4]. Therefore, natural antioxidants rich in phenolic components are widely investigated in healthcare, active pharmaceutical ingredient research, and the food industry. Vitex agnus-castus L. is a plant that belongs to the Lamiaceae family. V. agnus-castus L. is a rarely small tree, 1-3 m in height, much-branched, shortly tomentosecanescent. The leaves of the plant can be defined as digitately 5(-7)-partite, leaflets are usually entire, 3.5-15 × 0.5-2.8 cm, and occasionally broader or distinctly dentate, acute, narrowed to both ends, sessile or at least terminal petiolulate, dull green above, white-tomentose beneath; petioles long, and those of lower leaves to ∼4 cm. Inflorescence of the plant is relatively dense. Cymes are compact, frequently subglobose, sessile, or sub-sessile. Drupes are 3-4 mm in size, globose, and black or reddish [5]. In Turkey, it is also known with different regional names such as "hayıt, acıayıt, ayıd, hayıd and beşparmakotu" [6]. In the regions that it naturally grows (e.g., middle Asia, southern Europe, Mediterranean countries), it is widely used as a treatment against premenstrual syndrome, lactation difficulties, low fertility, and the regulation of menstrual cycles. However, the active principles responsible for such therapeutic effects have not been fully identified yet [7]. Traditionally, the seeds of V. agnus-castus are also used as lactagog and hormone regulators [8]. In addition, it is indicated in the literature that the extract of V. agnus-castus fruits has an antiaging effect in the reproductive system in vivo [9]. 
Moreover, antimicrobial, antifungal, fracture healing activities of V. agnus-castus extracts prepared in different methods have been reported previously [10,11]. Phytochemical screening analyses performed with V. agnuscastus revealed that the plant is rich in phenolic compounds, glycosides, iridoids, flavonoids, diterpenes, and essential oils [12,13]. Casticin and agnuside are determined to be major compounds of its fruits, and the analytical determination of these compounds is studied well [14][15][16]. This study investigated the phenolic components of ethanolic fruit and leaf extracts of V. agnus-castus. This is the first multi-comparative analysis that evaluates the effect of different extraction techniques on antioxidant potential, total phenolic and flavonoid content of chaste tree fruits and leaves to the best of our knowledge. In detail, we determined the total phenolic and flavonoid content of the extracts, then investigated the rutin and chlorogenic amounts in selected preparations qualitative and quantitatively using high-performance liquid chromatography diode array detector system (HPLC-DAD). Finally, we determined the antioxidant activity of the extracts and related the results to their phenolic contents. Plant materials The leaves and fruits of V. agnus-castus L. were collected from Urla, Izmir, Turkey (May-leaves; and July-fruits, 2019). The plant was identified by Dr. Hüsniye Kayalar from Ege University, Faculty of Pharmacy, Department of Pharmacognosy. A voucher specimen is conserved in the Herbarium of the Faculty of Pharmacy, Department of Pharmacognosy, Ege University (No. 1262/2). Preparation of the plant extracts and tinctures The collected leaves and fruits of the plant were air-dried at room temperature (RT) by avoiding any light exposure. Before extraction, the dried material was reduced to a coarse powder using an electrical grinder (Retsch GmbH, diameter: 1 mm). The powdered materials were subjected to two different extraction methods. To perform the first set of extraction, powdered plant material (5 g) was dispersed in ethanol solution (96% v/v), and the solid-liquid extraction was performed with Soxhlet apparatus for 4 h (samples were denoted as VL S and VF S for leave and fruit prepares, respectively). As a second extraction method, the maceration technique was selected. Here, 5 g of powdered material was placed into 50 and 60% v/v of 100 mL ethanol solution and gently stirred overnight (denoted as VL50 and VL60 for leave preparations; VF50 and VF60 for fruit preparations). Next, the dispersions were placed in an ultrasonic bath for 4 h to avoid any light exposure. Finally, obtained dispersions were filtered and evaporated to dryness in vacuo. The obtained extracts were weighed, and the extraction yield was calculated by means of the initial and final weight difference. The tinctures were prepared by placing the samples (5 g) in 70% ethanol (100 mL) under shaking at RT for five days and subsequent filtering. The final solutions were VL70 and VF70 for leaf and fruit tinctures, respectively. All the prepared samples were coded as described in Table 1. Determination of the total phenolic compounds The total phenolic content of V. agnus-castus extracts was assessed by applying the Folin-Ciocalteu method as described in Singleton et al. [17]. In brief, 0.1 mL of each sample (extract=10 mg/mL) was mixed thoroughly with 2.8 mL of ddH 2 O and 2 mL of Na 2 CO 3 (2% w/v). Then, 0.1 mL of 0.1 N of Folin-Ciocolteau reactive agent was added to the mixtures. 
Total flavonoid analysis
The total flavonoid content of V. agnus-castus extracts was determined spectrophotometrically using the aluminum chloride colorimetric method as described in Zhishen et al. [18]. Briefly, 0.5 mL of each plant extract (10 mg/mL) was mixed with 1.5 mL of ethanol, 0.1 mL of 10% aluminum chloride, and 2.8 mL of ddH2O. The mixture was incubated at RT for 40 min while avoiding any light exposure. The solution was mixed carefully, and the absorbance was measured against ethanol at 415 nm. Here, quercetin was used as the standard for the calibration curve. The flavonoid content was calculated using a linear equation based on a serial dilution of quercetin (QE, ≥98% HPLC-grade, Sigma Aldrich) and expressed as quercetin equivalents (µg QE/mg) [19]. All tests were carried out in triplicate.

Analysis of chlorogenic acid and rutin by HPLC-DAD
Chlorogenic acid and rutin in the plant extracts were quantified using a high-performance liquid chromatography diode array detector system (Agilent 1100 HPLC-DAD). Before the experiments, standard curves for chlorogenic acid and rutin were prepared as follows. To detect rutin in the plant extracts, 8.2 mg of rutin hydrate (Sigma, R5143) was dissolved in 2 mL of methanol. Twenty microliters of the prepared rutin solution were then mixed with 980 µL of methanol and scanned in the wavelength range of 345-580 nm (concentration range of 20-80 μg/mL). The maximum absorbance was obtained at a wavelength of 355 nm. To detect the chlorogenic acid content of the plant extracts, chlorogenic acid solution (Sigma Aldrich, 1 mg/mL in methanol, concentration range of 5-20 μg/mL) was scanned in the wavelength range of 200-400 nm with methanol as a reference. The maximum absorbance was detected at 330 nm. The calibration curves for rutin and chlorogenic acid were prepared by injecting serially diluted solutions of the standards. The calibration curve equations for rutin and chlorogenic acid were determined as y = 1,395.3x (R² = 0.93) and y = 609.47x (R² = 0.99), respectively.
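Inverting these calibration lines gives the per-extract amounts reported below. In the sketch, the assumption that y denotes the DAD peak area and x the concentration in ug/mL is ours, and the peak areas are invented (chosen so the outputs land near the leaf-sample values reported in the Results):

# Calibration slopes reported above (zero-intercept lines y = slope * x).
SLOPES = {"rutin": 1395.3, "chlorogenic_acid": 609.47}

def amount_mg_per_mg_extract(analyte, peak_area, extract_mg_per_ml=10.0):
    conc_ug_per_ml = peak_area / SLOPES[analyte]  # invert the calibration line
    conc_mg_per_ml = conc_ug_per_ml / 1000.0
    return conc_mg_per_ml / extract_mg_per_ml     # normalize per mg of extract

# Invented peak areas for a VL60-like injection:
for analyte, area in [("rutin", 46045.0), ("chlorogenic_acid", 27426.0)]:
    print(analyte, format(amount_mg_per_mg_extract(analyte, area), ".2e"), "mg/mg")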
Plant extracts were dissolved in the corresponding solvents to a concentration of 100 mg/mL and diluted in methanol to a concentration of 10 mg/mL. Next, the extracts were centrifuged at 5,000 rpm for 20 min and filtered (0.45 µm) before injection onto the column. An Inertsil® ODS-3 column (25 cm × 4.6 mm, 5 µm) was used, and the analyses were carried out under isocratic conditions. The mobile phase consisted of acetonitrile and water at a ratio of 15:85, containing 0.1% phosphoric acid. The column temperature was set to 30°C. For each test, 20 µL of the sample was injected into the system at a 1 mL/min flow rate. The identification of chlorogenic acid and rutin was performed using retention times and internal standard methods, and the quantification was performed using the abovementioned regression curves. Finally, the obtained results were compared to the CP, which contains 4 mg of V. agnus-castus fruit extract. To do so, the product was dissolved in methanol (1 mg/mL), centrifuged, filtered, injected into the HPLC-DAD system, and analyzed as described above.

For method validation, performance parameters such as linearity, the limit of detection (LOD), the limit of quantitation (LOQ), and precision were determined. Linearity was assessed by drawing a five-point calibration curve. The LOD was calculated based on a signal-to-noise ratio of 3:1 and the LOQ based on a signal-to-noise ratio of 10:1 [20]. Repeatability and relative standard deviation percentages were determined to evaluate the precision of the method. The repeatability was tested by performing five repetitions on the same day and three repetitions on three different days. Descriptive statistics (correlation coefficient, mean ± SD, and % relative standard deviation) were calculated using Microsoft Office Excel.

Determination of the antioxidant activity
The antioxidant activities of the fruit and leaf extracts of V. agnus-castus were determined by their 2,2′-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging ability, based on the method described in Esmaeili et al. [21]. One thousand microliters of 1 mg/mL extract was added to 4 mL of 0.004% DPPH in methanol, and the absorbance of the serially diluted solutions was measured at 517 nm using a spectrophotometer (Optima SP 3000 Nano UV-Vis, Tokyo, Japan) after incubation for 30 min. Then, the relative inhibition of the tested samples was evaluated by comparison with the control. The DPPH inhibition value was calculated as described in Eq. (2.1):

DPPH inhibition (%) = [(A_blank - A_sample) / A_blank] × 100    (2.1)

where A_blank refers to the absorbance observed for 1 mL of methanol diluted with 4 mL of DPPH radical stock solution, and A_sample refers to the sample absorbance. The relative DPPH radical scavenging activity (%) values were converted to α-tocopherol equivalent values using a standard curve relating the DPPH inhibition (%) to the equivalent α-tocopherol concentration. To do so, serially diluted α-tocopherol solutions (0.50-10 μg/mL) were prepared, and their free radical scavenging activities were determined as described above. The IC50 values were calculated as the extract amount required to obtain 50% DPPH radical scavenging.
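Eq. (2.1) and the IC50 read-out can be sketched as follows; the dilution series and absorbances are invented, and linear interpolation is our assumption for how the 50% crossing point is located:

import numpy as np

def dpph_inhibition(a_blank, a_sample):
    # Eq. (2.1): percent DPPH radical scavenging.
    return (a_blank - a_sample) / a_blank * 100.0

def ic50(concs_ug_ml, inhibition_pct):
    # Linearly interpolate the concentration giving 50% inhibition.
    return float(np.interp(50.0, inhibition_pct, concs_ug_ml))

concs = np.array([50.0, 100.0, 200.0, 400.0, 800.0])  # ug/mL, invented series
absorb = np.array([0.70, 0.61, 0.45, 0.26, 0.10])     # invented, A_blank = 0.80
inh = dpph_inhibition(0.80, absorb)
print("IC50 ~", round(ic50(concs, inh)), "ug/mL")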
Statistical analysis
The data are reported as the mean ± standard deviation. The linear regression coefficient (R²) relating phenolic and flavonoid content to antioxidant activity was analyzed with GraphPad Prism for Windows, Version 7 (GraphPad Software, San Diego, CA, USA). A p-value <0.05 was considered significant.

Results
It is known that the phenolic compounds of a plant have a determinant impact on its antioxidant activity, which is responsible for the essential medicinal effects of plants. Here, a comparative phytochemical analysis was performed with the extracts and tinctures obtained from V. agnus-castus leaves and fruits to detect rutin and chlorogenic acid as well as total phenolic contents. Notably, several parameters affecting the phenolic content of the extracts were evaluated using different techniques (i.e., maceration, Soxhlet extraction, and tincture preparation) and two separate parts of the plant (i.e., leaves and fruits). For all groups, extracts were prepared with ethanol, since it was previously found that the ethanolic extracts of V. agnus-castus show superior antioxidant activity compared to their counterparts prepared with n-hexane [22].

As shown in Table 2, extraction yields of 11-38% were obtained for the different preparations. Overall, extraction of the leaf samples resulted in a higher yield compared to the fruit samples. Next, the total phenolic contents of the extracts were determined spectroscopically and expressed as gallic acid equivalents per unit mass of extract (Figure 1A). The regression coefficient obtained from the gallic acid standard curve (R² > 0.99) indicated good precision of the method used here. Among all preparations, the tincture of V. agnus-castus leaves (VL70) showed the highest total phenolic amount (≈190 µg GAE/mg extract). All the other preparations exhibited a similar level of phenolic content, in the range of 50-90 µg GAE/mg extract. However, when the same extraction method and conditions were applied to leaf and fruit samples, leaf extracts showed significantly higher phenolic content than fruit extracts (p<0.05). In line with the total phenolic content determination, VL70 exhibited comparably high levels of flavonoid content (∼150 µg QE/mg extract). Surprisingly, the highest flavonoid content was determined for the preparation VL60 (∼190 µg QE/mg extract). Here also, the leaf extracts showed significantly higher flavonoid contents compared to fruit extracts (p<0.05).

Next, the antioxidant activity of the extracts was assessed by DPPH radical inhibition tests. Among fruit extracts, the group VF60 showed the highest antioxidant activity (17.86 ± 0.021 µg α-tocopherol equivalent/mL) at a concentration of 1,500 μg/mL. The leaf extracts showed significantly higher antioxidant activity compared to fruit extracts (p<0.05). Accordingly, the calculated IC50 values obtained from leaf extracts were lower (161.9-396 μg/mL) than those obtained for fruit extracts (626.4-798.3 μg/mL).

Discussion
Among the natural phenolics, flavonoids represent one of the most important groups of compounds responsible for a wide range of biological and chemical properties. For example, these secondary metabolites of plants can scavenge reactive oxygen species, which are harmful to cells because they induce oxidative damage in essential macromolecules such as proteins, nucleic acids, and lipids [23,24]. Additionally, their preventive role in cancer and coronary heart disease has been underscored [25]. Therefore, we evaluated the total flavonoid content of the extracts using the aluminum chloride colorimetric method. Overall, all the samples prepared from the leaves of V. agnus-castus showed higher levels of flavonoid content compared to fruit extracts. In a study by Gökbulut et al. [26], the total phenolic content of the methanolic extracts of V. agnus-castus leaves and fruits was found to be 123 and 114 µg GAE/mg extract, respectively, which is in line with our findings. Similarly, in another study, the phenolic content of the essential oil (EO) obtained from V. agnus-castus leaves was found to be 82 µg GAE/mg EO [27]. Previously, Maltas et al. [28] determined the total flavonoid and phenolic content of a methanolic extract of V. agnus-castus leaves as 27 mg QE and 48 mg GAE per dry extract, respectively.

Previously, V. agnus-castus extracts were found to be rich in casticin, aucubin, p-hydroxybenzoic acid, rutin, and ferulic acid [15,16,29,30]. Indeed, among those, rutin is one of the flavonoid compounds with a wide range of pharmacological activities [31]. It has been reported that rutin has a positive effect in the treatment of chronic diseases such as hypercholesterolemia, diabetes, and hypertension [32].
Therefore, we analyzed the extracts VF60 and VL60 for their content of rutin and chlorogenic acid using high-performance liquid chromatography with a diode array detector (HPLC-DAD). Rutin was detected in the samples at a retention time of 34.9 min at the maximum wavelength of 355 nm. As summarized in Table 3, the extracts contained rutin concentrations of 0.68 × 10⁻³ and 3.3 × 10⁻³ mg rutin/mg extract in the fruit and leaf samples, respectively. Previously, Proestos et al. [30] identified the rutin content of a methanolic extract of V. agnus-castus as 1.58 mg/100 g dry sample. As another substance, chlorogenic acid is an important antioxidant in plants, which limits low-density lipoprotein oxidation [33]. Furthermore, findings indicate that chlorogenic acid-supplemented diets support protection against degenerative, age-related diseases in vivo [34]. Previously, the chlorogenic acid amount in V. agnus-castus fruits and leaves was determined and found in the range of 0.103-0.343 and 0.089-0.206% w/w, respectively [29]. Additionally, it was indicated that the chemical composition differs according to the region where the plant grows. Here, chlorogenic acid was observed at a retention time of 9.3 min at a maximum wavelength of 330 nm and detected in the VF60 and VL60 samples as 0.17 × 10⁻² and 0.45 × 10⁻² mg/mg extract, respectively. These two groups were selected for these experiments due to their identical extraction conditions, which allows a better comparison. In line with previous findings, leaf samples were richer in both rutin and chlorogenic acid.

[Table 3. Detection and comparison of rutin and chlorogenic acid in extracts and a commercial product (CP) using HPLC-DAD analysis; columns: Sample; Rutin amount (mg/mg extract); Chlorogenic acid amount (mg/mg extract).]

The widely used CP, which contains fruit extracts of V. agnus-castus, has been proven to have anti-inflammatory potential, and this finding was related to its antioxidant activity and its potential to reduce inflammatory cytokines in vivo [35]. Therefore, we compared the performance of this product with that of the extracts obtained here in terms of chlorogenic acid and rutin amounts. Surprisingly, although rutin is one of the components identified in V. agnus-castus, it was not detected in the CP. Furthermore, the chlorogenic acid amount was also relatively low compared to the VF60 and VL60 samples.

Previous studies focused on V. agnus-castus showed that the high antioxidant activity of the extracts obtained from this plant is strongly related to their total phenolic content [36]. Therefore, it is crucial to perform phytochemical analyses and biological activity studies in parallel. In the last step, to relate the analyses performed for phenolic compound detection to biological activity, the antioxidant activity of the extracts was determined by the 2,2′-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenging method. As observed in Figure 2, the free radical scavenging activity of both fruit and leaf extracts of V. agnus-castus followed a dose-dependent response. It was found that the leaf extracts showed higher antioxidant potential compared to fruit extracts. VL70 showed the most potent radical scavenging activities among all preparations, followed by VL50, VL60, and VLS. This trend in antioxidant activity possibly stems from the phenolic compounds in these extracts. In line with these findings, methanolic leaf extracts of V. agnus-castus were previously found to show higher antioxidant activity than fruit extracts.
However, rutin was not detected in any of these samples [26]. The IC50 values calculated from these findings are depicted in Figure 3. The lowest IC50 value was detected for the VL70 sample and calculated as 132 μg/mL. The findings related to the antioxidant activity of V. agnus-castus extracts and essential oils span a wide range of results, which shows that not only the extraction method but also the drying process, solvent selection, and sample region have an important effect on the results [26,37,38]. In one study, the IC50 values for methanol, chloroform, and water extracts of V. agnus-castus leaves were calculated as 127, 179, and 224 μg/mL, respectively [39], which shows some similarities with the findings presented here.

Conclusions
This study shows that V. agnus-castus fruit and leaf extracts prepared with different methods and solvent concentrations contain different amounts of phenolic and flavonoid components, which are projected to impact their antioxidant activity. Overall, samples obtained from leaves showed a higher content of phenolics and flavonoids compared to the fruit samples. Furthermore, both groups (VF60 and VL60) exhibited higher chlorogenic acid and rutin content than the commercial product, as detected by HPLC-DAD. In line with these findings, leaf extracts also showed higher antioxidant activity than fruit extracts. Therefore, although the medicinal part of this plant is widely defined as its fruits [40], the leaf extracts studied here displayed very promising results that could be evaluated for future formulations. Of course, the plant materials used in this study were collected from one particular location, and different locations may yield different concentrations of active phytochemicals. Thus, comparing the extracts of plant materials collected at different locations in future studies can help obtain well-standardized preparations. Quantification and activity studies with extracts prepared by different methods are very important for preparing standardized products. Furthermore, to benefit from the therapeutic effects of these extracts, strategies to maximize their bioavailability should be considered, along with safety and acceptable dosages. The results show that extracts from chaste tree leaves and fruits are rich in phenolic components and flavonoids and should be considered as antioxidant raw materials for therapeutic preparations.

Research funding: None declared.
Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
2021-12-29T14:12:12.875Z
2021-12-29T00:00:00.000
{ "year": 2021, "sha1": "b884d8c1b29456bc2996d005315a00baeea1dd0e", "oa_license": "CCBY", "oa_url": "https://www.degruyter.com/document/doi/10.1515/tjb-2021-0208/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1f467db07b5ad367a524b9dc1ad894f96768d422", "s2fieldsofstudy": [ "Environmental Science", "Medicine", "Chemistry" ], "extfieldsofstudy": [] }
7857716
pes2o/s2orc
v3-fos-license
Lactase non-persistence as a determinant of milk avoidance and calcium intake in children and adolescents

This study examines if lactase non-persistent (LNP) children and adolescents differ from those who are lactase persistent (LP) as regards milk avoidance and Ca intake. We also studied potential differences in anthropometric features related to obesity, and examined if milk avoidance is associated with lactase-persistence status. Additionally, we aimed to determine if heterozygous subjects showed an intermediary phenotype as regards Ca intake. Furthermore, we tested if LP and LNP influence vitamin D intake. The European Youth Heart Study is an ongoing international, multi-centre cohort study primarily designed to address CVD risk factors. Children (n 298, mean age 9·6 years) and adolescents (n 386, mean age 15·6 years) belonging to the Swedish part of the European Youth Heart Study were genotyped for the LCT-13910 C > T polymorphism. Mendelian randomisation was used. Milk avoidance was significantly more common in LNP adolescents (OR 3·2; 95% CI 1·5, 7·3). LP subjects had higher milk consumption (P < 0·001). Accordingly, energy consumption derived from milk and Ca intake was lower in LNP (P < 0·05 and P < 0·001, respectively). Heterozygous subjects did not show an intermediary phenotype concerning milk consumption. LP or LNP status did not affect vitamin D intake or anthropometric variables. LNP in children and adolescents is associated with reduced intake of milk and some milk-product-related nutritional components, in particular Ca. This reduced intake did not affect the studied anthropometric variables, indicators of body fat or estimated vitamin D intake. However, independently of genotype, age and sex, daily vitamin D intake was below the recommended intakes. Milk avoidance among adolescents but not children was associated with LNP.

Lactase non-persistence (LNP) is an autosomal recessive trait leading to down-regulation of lactase activity in the intestinal mucosa and to maldigestion of lactose (1). Milk and some dairy products contain lactose, a disaccharide hydrolysed by the enzyme lactase-phlorizin hydrolase to glucose and galactose in the brush border of the small intestine. LNP is widespread throughout the world and plays an important role in the everyday work of general practitioners, gastroenterologists and paediatricians (2). The diagnosis of lactase persistence (LP) and LNP has by definition been based on the measurement of lactase, sucrase and maltase activities and the lactase to sucrase ratio in intestinal biopsies (3). This is an invasive technique that is not suitable for primary exploration of abdominal complaints or for large-scale population studies of the effect of LNP on anthropometric or nutritional variables. In 2002, Enattah et al. identified the position of the LP-associated allele as a single nucleotide polymorphism C > T located 13·9 kb upstream of the first ATG of LCT. The single nucleotide polymorphism is located in intron 13 of the MCM6 gene. Homozygosity for the C allele (LCT-13910 CC) shows, for all practical purposes, a complete association with LNP in populations of European descent (4,5). Molecular epidemiological studies have shown that the prevalence of LNP assessed by genotyping is consistent with previously published phenotypically determined epidemiological data in more than seventy countries (6).
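In code form, the genotype-to-status grouping used throughout this study (CC as LNP; CT and TT as LP) is a simple lookup; a sketch:

def lactase_status(genotype):
    # Normalize allele order, e.g. "TC" -> "CT".
    g = "".join(sorted(genotype.upper()))
    if g == "CC":
        return "LNP"  # lactase non-persistent
    if g in ("CT", "TT"):
        return "LP"   # lactase persistent
    raise ValueError("unexpected LCT-13910 genotype: " + repr(genotype))

print([lactase_status(g) for g in ("CC", "CT", "TT")])  # ['LNP', 'LP', 'LP']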
Different test methods in the diagnosis of LNP, including the standard physiological tests, are useful at different levels of health care organisation or symptomatology (7). The majority of heterozygous individuals, having intermediary levels of lactase activity in intestinal biopsies, are traditionally thought to produce sufficient lactase to be classified by the standard physiological tests as LP (8), but pre-2002 studies can hardly be regarded as conclusive. One of the aims of this study was to look for a potential gene-dose effect of the LCT-13910 C > T polymorphism with respect to some nutritional variables in children and adolescents.

Milk and other dairy products are major dietary sources of Ca in Western diets (9), and the diet of LNP individuals may be restricted as regards milk intake. The main hypothesis tested in this study was that LNP might influence the intake of Ca and vitamin D through its effects on milk consumption, since there is mandatory fortification of milk with vitamin D in Sweden. We were also interested in seeing whether milk avoidance, which is partly a lifestyle and behavioural/attitudinal variable, was associated with the LCT-13910 C > T polymorphism.

Population
Blood samples were obtained from 684 children (334 girls and 350 boys) belonging to the Swedish part of the European Youth Heart Study, which is a cross-sectional school-based study of risk factors for future CVD among children 9-10 years old and adolescents 15-16 years old. Mean ages in the Swedish sample were 9·6 years and 15·6 years, respectively. Sampling procedures and participation rates have been described previously (10). Height, weight, and hip and waist circumference were directly measured by standardised procedures. BMI was calculated as weight/height² (kg/m²). The consumption of milk was assessed by an interviewer-mediated 24-h recall. A qualitative food record completed on the day before the interview served as a checklist for the data obtained by 24-h recall. A food atlas was used to estimate portion sizes. Dietary data were processed by StorMats (version 4.02; Rudans Lättdata) and analysed using the Swedish National Food database (version 99.1). Total Ca intake was calculated in mg/d and vitamin D intake in μg/d. For the genetic analysis, genomic DNA was isolated from the EDTA whole blood samples of the individuals with the QIAamp DNA Blood Mini Kit spin procedure. The DNA fragment spanning the -13910-C/T polymorphic site was genotyped by pyrosequencing, using a PSQ96 SNP Reagent Kit and a PSQ 96MA system with PSQ96MA 2.0.1 software (Pyrosequencing AB). The procedure has been previously described in detail (8).

Statistical analysis
Statistical analyses were performed with the Statistical Package for Social Sciences (SPSS, version 13.0 for Windows; SPSS Inc., Chicago). The data are presented as means and standard deviations in Table 1. Student's t test was used to compare the LCT-13910 TT, CT and CC genotypes concerning nutritional and anthropometric data. Data were checked for normality. Quantitative effects of the LCT-13910 C > T genotype on the selected anthropometric and food intake related variables were tested in three-way ANOVA models with the fixed factors age group (children/adolescents), sex (girls/boys) and LCT-13910 C > T genotype with the levels TT/CT v. CC. A qualitative variable 'complete milk avoidance' (yes/no) was generated from the continuous variable, intake of milk (g/d), present in the dietary survey.
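The three-way ANOVA just described can be sketched in Python with statsmodels; the simulated data only mimic the study's sample size and genotype proportions, and the sum-of-squares type is our choice since it is not specified in the text:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 684  # total sample size of the Swedish EYHS subsample
df = pd.DataFrame({
    "age_group": rng.choice(["child", "adolescent"], n),
    "sex": rng.choice(["girl", "boy"], n),
    "lp_status": rng.choice(["LP", "LNP"], n, p=[0.86, 0.14]),  # ~94/684 CC
})
# Simulate a Ca intake (mg/d) with a planted LP effect; values are not study data.
df["calcium_mg"] = 800 + 150 * (df["lp_status"] == "LP") + rng.normal(0, 150, n)

model = smf.ols("calcium_mg ~ C(age_group) * C(sex) * C(lp_status)", data=df).fit()
print(anova_lm(model, typ=2))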
The OR for complete milk abstinence was tested separately in children and adolescents by logistic regression with sex and LCT-13910 C > T genotype with the levels TT/CT v. CC as covariates.

Mendelian randomisation
Since the carriage of the LCT-13910 C > T polymorphism is subject to random assortment of maternal and paternal alleles at the time of gamete formation, associations between LCT genotypes and our observational data should not be subject to reverse causality. This is a basic assumption of Mendelian randomisation (11,12), which examines causal effects of modifiable exposures on disease in genetic epidemiology. A functional genetic variant, in our study the LCT-13910 C > T polymorphism, acts as a proxy for modifiable lifetime exposure patterns (milk consumption). The LCT-13910 C > T polymorphism is known to influence milk consumption (13). According to Mendel's second law of independent assortment, the inheritance of one trait is independent of the inheritance of other traits. Thus, associations between genetic variants and outcome are not generally confounded by behavioural, physiological or environmental exposures, and observational studies of genetic variants have similar properties to intention-to-treat analyses in randomised controlled trials (11,12,14,15).

This study was conducted according to the guidelines laid down in the Declaration of Helsinki and all procedures were approved by the Research Ethics Committees of Örebro County Council and Huddinge University Hospital. Parents and 15-year-olds gave specific written informed consent to participate in the present study.

Results
Distribution of the LCT-13910 C > T genotype showed that the CT and TT genotypes, which are associated with LP, were found in 273 and 317 subjects, respectively. The genotype LCT-13910 CC, associated with LNP, was found in ninety-four subjects. The baseline characteristics, including selected anthropometric and milk intake-related variables of these three genotypes, are shown in Table 1. There were no statistically significant differences between the genotypes CT and TT in any of the selected variables. Intermediary phenotypes for LCT-13910 C > T heterozygous subjects were thus not observable with respect to these anthropometric and food intake data in the studied population. In subsequent analyses the effect of the LCT-13910 C > T polymorphism was tested by using two levels: TT/CT v. CC. As shown in Table 1, the LCT-13910 CC genotype was associated with statistically significantly lower levels of the following variables: milk intake, energy intake from milk and Ca intake. No statistically significant interactions of the LCT-13910 C > T genotype with age and sex for the selected variables were found. Height, weight, BMI, hip circumference and waist circumference as well as total daily energy intake and vitamin D intake did not differ significantly between the LCT-13910 C > T genotypes in these ANOVA models. The odds for milk avoidance due to the subjects' LCT-13910 C > T genotype were tested (Table 2). Only five children avoided milk, and none of these had the CC genotype. Among adolescents, milk avoidance was more frequent (n 34): the OR for subjects with LNP compared with LP subjects was 3·2 (95% CI 1·5, 7·3, P = 0·003), with sex and LCT-13910 C > T genotype as covariates in the model. Milk protein allergies were not reported in the studied population.
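For illustration, the adolescent milk-avoidance model can be sketched as a logistic regression on simulated data; the sample size and the planted odds ratio echo the reported figures, while everything else is invented:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 386  # adolescents in the study
df = pd.DataFrame({
    "sex": rng.choice(["girl", "boy"], n),
    "lnp": rng.choice([1, 0], n, p=[0.14, 0.86]),  # 1 = LCT-13910 CC
})
# Plant a true OR near 3.2 on a low baseline avoidance probability.
logit_p = -2.5 + np.log(3.2) * df["lnp"]
df["avoids_milk"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("avoids_milk ~ C(sex) + lnp", data=df).fit(disp=0)
or_hat = np.exp(fit.params["lnp"])
ci = np.exp(fit.conf_int().loc["lnp"])
print("OR =", round(or_hat, 1), "95% CI =", [round(v, 1) for v in ci])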
Discussion
The main finding was that LNP subjects had a lower milk consumption, a lower daily energy intake based on these products and a lower Ca intake. These differences did not translate into any difference in total daily energy intake. Furthermore, no signs of increased indicators of body fat in LP individuals could be observed. No evidence was found for a gene-dose effect of the LCT-13910 C > T mutation in the studied sample. An intermediary phenotype could not be identified with respect to the studied nutritional and anthropometric variables (Table 1), in accordance with the traditional view of 'lactose intolerance' as a recessive trait. However, we cannot rule out such a gene-dose effect in adults, in subjects of other populations, or with respect to other possible LCT-dependent phenotypes not yet studied. Hence, further studies are needed to verify the absence or presence of a gene-dose effect.

The other main finding was that the LCT-13910 C > T polymorphism influences complete milk avoidance among adolescents (Table 2). The lower milk intake of LNP adolescents could be a cause for concern, as trends of milk being replaced by soft drinks have been reported and appear to be detrimental to bone health (16,17). Milk constitutes a basic source of dietary Ca in most Western diets, and adequate Ca intakes are directly related to the consumption of these food items. Frequently, milk is being replaced as a beverage of choice by sweetened and carbonated soft drinks and juices (18,19). In addition, limiting milk in the diet might in some cases lead to the necessity for dietary adjustments beyond meeting only Ca requirements. The critical role of Ca in human health has been recognised for many years, as reflected by a long history of Ca recommendations (20).

LP children and adolescents consumed significantly more milk than LNP subjects, and LP did not reveal any tendency towards increased BMI or other indicators of obesity. This observation has been confirmed by an earlier study performed by the authors on the same sample using body fat percentage as a variable (21). Vitamin D intake was below the recommended intakes issued by the Swedish National Food Agency (updated 9 October 2012) in both LNP and LP subjects. The Swedish National Food Agency's recommended intake of vitamin D is 7·5 µg per d for the studied sample of children and adolescents. Independently of genotype, age group or sex, the subjects did not meet the recommended intakes for vitamin D (Table 1).

Limitations of this study are the sample size and the age of onset of LNP, which can show wide regional and ethnic variation. Genetically programmed down-regulation of lactase-phlorizin hydrolase synthesis has been observed starting from the second year of life. The majority of Thai children manifest LNP by the age of 2 years, and in black populations genetically determined lactose intolerance manifests between 1 and 8 years. In white populations, it is rarely seen before 5 years of age (22,23). In this study two different age groups were compared, 9-year-old children (mean age 9·6 years) and 15-year-old adolescents (mean age 15·6 years). Thus, it can be assumed that the majority of children and almost all adolescents with the LCT-13910 CC genotype had developed manifest LNP at the time of inclusion in this study. However, the correlation between manifest LNP and self-reported 'lactose intolerance' has been suggested earlier to be poor in a few other studies (24-26).
Almost all children and many adolescents with the CC genotype consumed some amount of milk even though having LNP status. This is compatible with many studies showing that LNP subjects can tolerate a certain amount of lactose intake per d (27-29). Other individuals do not consume milk and dairy products because of health reasons such as milk protein allergies or perceived 'lactose intolerance', because of taste preferences, or because of dietary culture and fashion.

Mendelian randomisation was used in this study. The main assumption is that LP (lactose-tolerant) individuals consume on average significantly more milk than LNP (lactose-intolerant) individuals throughout their lifetime, and not only at the moment dietary intakes were assessed. If this assumption is correct, the LCT-13910 C > T polymorphism can be used as a proxy measure for lifetime exposure to milk and dairy intake patterns. Cultural influences on milk consumption might be able to override the discomforts consequent on milk ingestion in lactose-intolerant individuals. This, nevertheless, could not be observed in our sample. Neither have we been able to observe this in another sample representative of the general population of the Canary Islands in Spain (30).

In summary, LNP in children and adolescents is associated with reduced intake of milk and particularly Ca. This reduced intake did not affect the studied anthropometric variables with respect to indicators of body fat. Reduced intake of Ca could be compensated by consumption of dairy products with lower amounts of lactose than milk. Heterozygous subjects did not show an intermediary phenotype. Estimated vitamin D intake unexpectedly did not differ between LP and LNP subjects, although milk is regularly fortified with vitamin D in Sweden. Independently of genotype, age and sex, overall daily vitamin D intake was below recommended intakes.
2018-05-08T17:43:21.508Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "7b439ae7ca695c6fedf1e747a886764ff281d77d", "oa_license": "CCBYNCSA", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/63889836514AD00148C143EA7309299B/S2048679013000116a.pdf/div-class-title-lactase-non-persistence-as-a-determinant-of-milk-avoidance-and-calcium-intake-in-children-and-adolescents-div.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7b439ae7ca695c6fedf1e747a886764ff281d77d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247505353
pes2o/s2orc
v3-fos-license
Relationship of Clinical and Ultrasonographic Grading of Varicocele with Semen Analysis Profile and Testicular Volume

Background: Varicoceles are a major cause of infertility. The purpose of this study was to determine the relationship of the clinical and ultrasonographic grades of varicocele with the semen analysis profile and testicular volume among men undergoing scrotal ultrasonography.

Methods: This cross-sectional analytical study involved 109 males undergoing scrotal ultrasonography for various indications in Shiraz, Iran, between January 2019 and January 2020. Varicoceles were graded with color Doppler ultrasonography (CDU) by an expert radiologist (Sarteschi's criteria) before an experienced urologist determined the clinical grade (Dubin and Amelar criteria) and requested further investigations. Next, the demographics, reasons for referral, testicular volumes, and semen analysis profiles across the different clinical/ultrasonographic grades were compared. Key statistical measures included Cohen's kappa coefficient, the Mann-Whitney and Kruskal-Wallis tests, and Spearman correlation. Data were analyzed using SPSS v. 21, with p-values <0.05 indicating statistical significance.

Results: Ultrasonographic grades 1 and 2 provided the highest correlation with subclinical cases, while ultrasonographic grades 3, 4, and 5 corresponded with clinical grades 1, 2, and 3, respectively. Further comparisons were made between subclinical and clinical cases, which were similar in terms of reason for referral, total testicular volume, testicular volume differential, and semen analysis profile. Notably, total testicular volumes below 30 ml were associated with oligoasthenoteratospermia.

Conclusion: The present study showed a relatively high correlation between varicocele grading based on clinical evaluation and CDU. However, the grades were similar in testicular volume parameters and semen analysis indices. Hence, decision-making should be guided by the infertility history, testicular atrophy, and abnormal semen analysis.

Introduction
Varicoceles affect 15-20% of the general population and are diagnosed in 35-40% of men who attend infertility clinics, though only 15% of men with varicoceles are infertile (1,2). This condition is characterized by dilation of the pampiniform plexus secondary to retrograde flow in the spermatic veins. Varicoceles are predominantly identified in the left testicle, though the disease is believed to be bilateral in nature (3,4). Varicoceles can lead to decreased sperm quantity and quality, with surgical repair being indicated for selected couples complaining of infertility (5). The semen analysis profile can elucidate the reproductive potential of individuals, though a unilateral reduction in testicular volume may not necessarily indicate testicular dysfunction secondary to a varicocele. Nonetheless, patients with unilateral left varicocele and ipsilateral testicular atrophy have been reported to have significantly worse semen analysis parameters compared to patients without atrophy (6). The mainstay method of diagnosing varicoceles has long been the physical examination (7). However, rapid developments in imaging techniques have made modalities like venography and color Doppler ultrasonography (CDU) invaluable paraclinical tools for the physician, with the former representing the gold standard of diagnosis and the latter providing 97% sensitivity and 94% specificity (8,9).
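As a worked example of what these operating characteristics imply, Bayes' rule converts the quoted sensitivity and specificity into predictive values at a given prevalence (the 20% figure is taken from the range above; the calculation is illustrative and not part of the cited studies):

def ppv_npv(sens, spec, prev):
    tp, fn = sens * prev, (1 - sens) * prev
    fp, tn = (1 - spec) * (1 - prev), spec * (1 - prev)
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = ppv_npv(0.97, 0.94, 0.20)
print("PPV ~ {:.0%}, NPV ~ {:.0%}".format(ppv, npv))  # ~80% and ~99%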
Radiologists have devised multiple systems for grading varicoceles, and although ultrasonography can diagnose varicoceles in their subclinical stages, studies with a focus on early diagnosis are limited. Furthermore, given the undeniable shift toward the use of paraclinical tools and considering the fact that patients may undergo scrotal ultrasonography for a wide variety of reasons, it is essential to evaluate different ultrasonographic findings. Hence, the purpose of the present study was to determine the relationship of the clinical and ultrasonographic grades of varicocele with the testicular volume and semen analysis profile among individuals undergoing scrotal ultrasonography.

Study design: This analytical cross-sectional study was conducted on 109 men scheduled for scrotal ultrasonography at the Motahari Clinic affiliated with Shiraz University of Medical Sciences (Shiraz, Iran) between January 2019 and January 2020. All men aged 15-65 who were referred to our clinic for various reasons (pain, swelling, infertility, etc.) were included for scrotal ultrasonography. After explaining the study protocol and obtaining written consent, the patients filled in a data collection form including demographic characteristics and past medical history. Patients with a history of an operated inguinal hernia, testicular or varicocele surgery, diabetes, malignancy, transplantation, urinary tract infection, rheumatologic disease, or renal failure were excluded. All patients who used any medications that could affect testicular size or the semen analysis were also excluded. The study protocol was approved by the Ethics Committee of Shiraz University of Medical Sciences (Code: IR.SUMS.MED.REC.1398.078).

Study measures: All 109 participants were asked about scrotal pain through a yes or no question. Married participants (n=44) were assessed for a history of infertility. All participants were referred for a semen analysis after three days of abstinence from sexual activity. Imaging was performed by a radiologist with ten years of experience in scrotal CDU. The testicular area was initially covered with a sheet before applying the prewarmed gel. Ultrasonography was carried out using a 5-12 MHz linear ultrasound probe (DC8 Expert, Mindray, China). First, the testicles were examined by grayscale ultrasound imaging to rule out any pathology other than varicocele. Also, the testicular dimensions (length, width, height) were measured, and the testicular volume (ml) was calculated from these three dimensions. The testicular volume differential (TVD) was subsequently determined as the percentage difference between the volumes of the two testes, and the total testicular volume (TTV) was calculated from the left and right testicular volumes. In line with some previous studies, TTVs below 30 ml and TVD percentages above 20% were considered abnormal, signifying testicular atrophy (10,11).
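The paper's own volume formulas did not survive extraction, so the sketch below assumes the widely used empirical approximation V = 0.71 x L x W x H and a largest-testis denominator for the TVD; both are assumptions, whereas the 30 ml and 20% cut-offs are the ones stated above:

def testis_volume(length_cm, width_cm, height_cm, k=0.71):
    # Assumed empirical (Lambert) constant; the paper's exact constant is unknown.
    return k * length_cm * width_cm * height_cm  # mL

def tvd_percent(v_left, v_right):
    big, small = max(v_left, v_right), min(v_left, v_right)
    return (big - small) / big * 100.0  # assumed TVD definition

v_l = testis_volume(4.0, 2.5, 2.8)  # hypothetical left testis, ~19.9 mL
v_r = testis_volume(4.2, 2.7, 3.0)  # hypothetical right testis, ~24.2 mL
ttv, tvd = v_l + v_r, tvd_percent(v_l, v_r)
print("TTV = %.1f mL (%s), TVD = %.1f%%"
      % (ttv, "atrophy" if ttv < 30 else "normal", tvd))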
Patients were then subjected to CDU in the standing position with and without the Valsalva maneuver to assess reflux in the inguinal canal and pampiniform plexus. The ultrasonographic grade was then recorded according to the Sarteschi criteria, which are presented in Table 1 (5). Subsequently, the study participants were referred to an experienced attending urologist for physical examination. The initial examination was performed in a warm and quiet room while the patient was standing, before switching the patient to the supine position and examining him with and without the Valsalva maneuver. Then, varicoceles were graded clinically according to the criteria of Dubin and Amelar (Grade 0/subclinical: impalpable varicocele but detected on ultrasonography; Grade 1: palpable during Valsalva; Grade 2: palpable at rest, but not visible; Grade 3: visible varicocele) (12). To minimize bias, the radiologist and urologist were each blinded to the grade given by the other physician.

Statistical analysis: Data were analyzed using SPSS v. 21 (IBM, USA). Quantitative variables were expressed as mean ± standard deviation (SD), whereas qualitative variables were reported as frequency and percentage. The statistical tests performed included the Chi-squared test and the independent t-test, or their non-parametric equivalents when data were not normally distributed. Also, Cohen's kappa coefficient (κ) was used to assess the interrater reliability between the ultrasonographic and clinical grades of varicocele. To determine the relationship between patient age and the clinical/ultrasonographic grades of varicocele, the Kruskal-Wallis test was used. The independent t-test and Mann-Whitney U test were used where appropriate to compare the semen analysis indices and testicular volumes between patients with clinical (ultrasonographic grades 3-5) and subclinical (ultrasonographic grades 1-2) varicocele. Furthermore, Spearman correlation was performed to determine the semen analysis variables correlated with TTV and TVD as indices related to testicular function. Moreover, semen analysis indices were compared between two groups with TTV below and above 30 ml using the Mann-Whitney U test. P-values <0.05 were considered statistically significant in all cases.

Demographic data and the reason for referral: The study participants had a mean age of 28.7±8.5 years (range: 17-63 years). Among the 109 patients, 62 (58.9%) required scrotal ultrasonography due to pain, while 22 (20.2%) were referred due to infertility. The remaining participants sought medical attention due to signs like testicular swelling or were followed up based on a prior case of varicocele.

Correlation between clinical and ultrasonographic grading: First, the correlation between the grades given by the urologist and the radiologist was examined. The frequency of different grades based on clinical evaluation and ultrasonography of the left testicle indicated a relatively high correlation between the two grading systems. In roughly three-quarters of cases, the grading of left testicular varicoceles was consistent between the ultrasonographic and clinical grading systems (kappa=0.74, p<0.001) (Table 2). Similarly, an acceptable and significant correlation was found between the two systems in grading right testicular varicoceles (kappa=0.68; p<0.001) (Table 3). Overall, it was found that ultrasonographic grades 1 and 2 provided the highest compatibility and correlation with subclinical cases, while ultrasonographic grades 3, 4, and 5 corresponded with clinical grades 1, 2, and 3, respectively.
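The interrater agreement statistic can be reproduced along these lines; the paired ratings are invented, and the scale mapping follows the correspondence just described:

from sklearn.metrics import cohen_kappa_score

# Map Sarteschi ultrasonographic grades onto the Dubin-Amelar scale
# (US 1-2 -> subclinical/grade 0; US 3, 4, 5 -> clinical grades 1, 2, 3).
US_TO_CLINICAL = {1: 0, 2: 0, 3: 1, 4: 2, 5: 3}

us_grades = [1, 2, 3, 3, 4, 5, 4, 2, 5, 3]        # hypothetical radiologist ratings
clinical_grades = [0, 0, 1, 1, 2, 3, 2, 1, 3, 1]  # hypothetical urologist ratings

mapped = [US_TO_CLINICAL[g] for g in us_grades]
print("kappa =", round(cohen_kappa_score(mapped, clinical_grades), 2))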
Table 1. Sarteschi grading of varicocele on color Doppler ultrasonography (5):
Grade 2 - Small posterior varicosities extend to the superior pole of the testis. Their diameters increase and venous reflux is seen in the supratesticular region only during the Valsalva maneuver.
Grade 3 - Vessels appear enlarged at the inferior pole of the testis when the patient is evaluated in the standing position; no enlargement is detected if the patient is examined in the supine position. Reflux is observed only during the Valsalva maneuver.
Grade 4 - Vessels appear enlarged even when the patient is studied in the supine position; the dilatation is more marked in the upright position and during the Valsalva maneuver. Testicular hypotrophy is common at this stage.
Grade 5 - Venous ectasia is evident even in the prone decubitus and supine positions. Reflux is observed at rest and does not increase during the Valsalva maneuver.

Testicular volume and patient age according to varicocele grade: Our results showed no significant relationships between the testicular volume or patient age and the clinical/ultrasonographic grades of varicocele. Given the correspondence of ultrasonographic grades 1-2 with subclinical disease and of grades 3-5 with clinical disease, the subsequent investigations were done by comparing these two groups of patients. Notably, there was no significant difference between the clinical and subclinical groups in terms of pain (p=0.922) and infertility (p=1.000) as reasons for referral to the radiology clinic.

TTV and TVD in clinical and subclinical varicocele: According to our results, the TTV and TVD ranged between 9-67 ml and 0-67.5%, respectively.

Semen analysis and testicular volume in patients with clinical and subclinical varicocele: The results of the semen analysis and the testicular volume parameters for patients with clinical and subclinical varicocele are compared in Table 4. No significant differences were found between the mentioned groups in the studied parameters.

Correlation of semen analysis parameters with the TTV and TVD indices: Given the lack of a significant difference between the high and low ultrasonographic grades of varicocele in terms of the TTV and TVD indices, the relationship of these important ultrasonographic indices with semen analysis parameters was examined. Statistical analysis revealed that the TVD index was not correlated with any of the semen analysis indices, though the TTV index had positive and significant relationships with sperm count, morphology, and motility (Table 5).

Comparison of semen analysis indices between two groups of normal and abnormal TTV: Considering the observed correlation between TTV and some components of the semen analysis profile, a further investigation was performed in which the semen analysis indices were compared between patients with normal (≥30 ml) and abnormal (<30 ml) TTV values. The sperm count was significantly lower among patients with abnormal TTV than among their normal counterparts (p=0.003). A similar result was obtained for Grade 4 motility (p=0.027). Furthermore, patients with abnormal TTV had significantly higher rates of both immotile and abnormal sperm (p=0.016 and 0.007, respectively) (Table 6).
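The two nonparametric analyses just reported can be sketched on simulated data; only the sample size and the reported TTV range are taken from the study:

import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

rng = np.random.default_rng(2)
ttv = rng.uniform(9, 67, 109)                    # mL, range reported above
count = 20 + 1.2 * ttv + rng.normal(0, 15, 109)  # invented sperm-count values

rho, p = spearmanr(ttv, count)                   # TTV vs. a semen parameter
print("Spearman rho = %.2f, p = %.3g" % (rho, p))

low, normal = count[ttv < 30], count[ttv >= 30]  # 30 mL atrophy cut-off
u, p2 = mannwhitneyu(low, normal, alternative="two-sided")
print("Mann-Whitney U = %.0f, p = %.3g" % (u, p2))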
Discussion
The present study was conducted on 109 males undergoing scrotal ultrasonography for various reasons to determine the relationship of varicocele grading via ultrasonography and clinical examination with testicular volume and semen analysis indices. The study sample included all men undergoing scrotal ultrasonography for various reasons (pain, swelling, infertility, follow-up, etc.), as our purpose was to assess the value of the grading systems for the general population of men with testicular symptoms rather than limiting it to those who are infertile. Our results are significant as they confirmed the correlation between clinical evaluation and ultrasonographic grading while providing novel evidence on the relationship of the ultrasonographic/clinical grades with the testicular volume and semen analysis profile.

Diagnostically, disease grading is an indicator that helps to understand the patient's condition better. It may also help in selecting the appropriate definitive diagnostic method, treatment, and follow-up. Our findings show a correlation in grading between the physical examination and ultrasonography, indicating that a common language may be established between urologists and radiologists, which may facilitate better diagnosis and management of varicoceles. Nonetheless, significant differences remain between the two methods, reminding us of the need for accurate methods like ultrasound for reaching the definitive diagnosis. These results reaffirm those of our previous study (13). In a related study, Jedrzejewski et al. (2019) compared the CDU findings between the normal and affected testis in adolescents with unilateral left-sided varicoceles. Decreased tissue perfusion was reported on the affected side according to all CDU parameters, with the difference reaching statistical significance for the mean velocity and resistance indices and changes being particularly prominent in grade 3 varicoceles (14).

The novel aspect of our study was comparing clinical and ultrasonographic grades of varicocele in terms of a number of essential parameters among males scheduled for scrotal ultrasonography. First, no significant differences in testicular volume and patient age were found between the various ultrasonographic and clinical grades of varicoceles. Then, considering the identified correlation between ultrasonographic grades 1-2 and subclinical disease and between ultrasonographic grades 3-5 and clinical disease, these two groups were compared in our subsequent analysis. Our findings indicated that among men undergoing scrotal ultrasonography for various reasons, patients with subclinical and clinical varicocele had no significant differences in terms of the reasons for referral (pain and infertility), TTV, TVD, and semen analysis profile.

It is important to note that while varicoceles are a major cause of infertility and are diagnosed in 40% of infertile men, only 15-20% of men with varicocele are infertile (2,15). Furthermore, varicoceles are present in about 15% of the normal male population, and this figure is expected to rise if subclinical cases are included as well. Recent studies indicate that infertile men with varicocele have decreased sperm count, decreased motility, and reduced normal morphology (16,17). Furthermore, surgical treatment can improve the semen analysis profile among such patients (18,19), with some evidence even indicating an improvement in forward progressive sperm motility after surgical treatment of subclinical varicocele (20). However, such results were not replicated among our study population, as it included all men referred for scrotal ultrasonography for a variety of reasons. In fact, 60% of our patients were referred due to scrotal pain, which only occurs in 10% of varicocele patients (21). Moreover, just 15% of our study population underwent testicular ultrasonography due to infertility, and oligoasthenoteratospermia was detected in only about 16% of our patients. It should also be taken into account that 45-65% of men with clinical grades 1-3 varicocele have normal semen parameters (22). Hence, despite the well-established detrimental effects of varicoceles on semen quality and sperm function among infertile men (17), no significant differences were observed in these parameters among our study population of mostly fertile males with different ultrasonographic and clinical grades of varicocele.

The ultrasound examination is a method that can provide a quantitative evaluation of varicoceles using a number of indices.
Semiz et al. (2014) investigated the relationship between semen analysis parameters and intraparenchymal testicular spectral Doppler indices in patients with clinical varicocele (23). However, no significant correlation was observed between three Doppler parameters of the testicular arteries (end-diastolic velocity [EDV], resistivity index [RI], and pulsatility index [PI]) and semen analysis parameters such as sperm number, motility, volume, and morphology. On the other hand, the peak systolic velocity (PSV) index showed a significant relationship with sperm count. In our study, ultrasonographic grades 1-2 of varicocele did not significantly differ from ultrasonographic grades 3-5 in terms of indices related to testicular volume and semen analysis. As described previously, variations between studies can be explained by differences in study populations, with only a minority of our patients complaining of infertility. According to our findings, the ultrasonographic grade is of little value in isolation among men undergoing scrotal ultrasonography for various reasons.

Given that differences in the study measures between the various ultrasonographic grades could not be identified in this research, the effect of testicular volume (as another ultrasonographic parameter) on the semen analysis profile was also evaluated. According to previous studies, TTV values below 30 ml are associated with decreased sperm production (11). Varicoceles appear to give rise to a progressive disease, with an increasing prevalence of testicular atrophy having been reported (24). In the present study, it was found that the TTV parameter had a significant relationship with semen analysis parameters, while TVD failed to show a meaningful relationship in this regard. In fact, testicular atrophy (TTV <30 ml) was associated with a drop in both the quantity and quality of spermatozoa. Our results are in alignment with those of Kurtz et al. (2015), who investigated the association between TTV/TVD and semen analysis parameters and reported a direct significant relationship between TTV and total sperm motility (14). The research of Oliva and Multigner (2018) described low sperm production and motility in patients with grade 3 varicocele, as well as a high proportion of sperm with abnormal morphology (25). Notably, Sakamoto et al. (2008) confirmed improvements in semen analysis parameters (sperm count and motility) and left testicular volume following surgical repair of varicocele (26).

In our study, the semen analysis yielded acceptable results for determining the progression of varicocele disease, indicating that this widely available test can provide useful clinical data, comparable to the ultrasound study, in centers that lack radiology facilities. In line with our findings, Krishna et al. reported that variables such as testicular volume, sperm count, and sperm motility are useful when following patients treated surgically for varicocele. These researchers found significant differences in sperm motility and concentration between different clinical grades of varicocele and asserted that the testicular volume has a good correlation with the severity of oligospermia (27). Overall, it can be said that the semen analysis and testicular volume results are essential in guiding the decision-making process when managing and following up patients with varicoceles.

The present study had some limitations.
One was the fact that the sample size was limited, in that only 19 out of 109 patients were categorized as having subclinical varicocele. Hence, it is likely that a larger sample would have yielded more significant differences in the investigated variables between the study groups. Another limitation was that pain was not measured quantitatively using a visual analog scale, though this did not affect our primary outcomes. Finally, there was no comparison group of normal individuals without particular complaints in this study, which can be an interesting subject for future research.

Conclusion
The present study showed a relatively high correlation between grading based on clinical evaluation and scrotal CDU. However, given that the grades were similar in terms of testicular volume parameters and semen analysis indices, the clinical or ultrasonographic varicocele grade alone does not provide significant clinical information about men undergoing scrotal sonography for any indication. Rather, decision-making should be guided by the infertility history, testicular atrophy, and abnormal semen analysis.
2022-03-18T15:17:57.037Z
2022-03-16T00:00:00.000
{ "year": 2022, "sha1": "a02e3b7625f848d6f4825e13ad9da9bbba27a9f8", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "03024473da24ffff35f8a994ca11e80e146f0fd0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
204946571
pes2o/s2orc
v3-fos-license
2188. Provider Education and Rapid Antigen Detection Test Use in Private and Academic Pediatric Clinics

Background. Rapid antigen detection testing (RADT) is needed to differentiate Group A Streptococcal (GAS) pharyngitis from viral pharyngitis. Guidelines do not recommend RADT in patients with viral symptoms or in children <3 years old without GAS exposure. Reduction in unnecessary RADT use may impact inappropriate antibiotic use by decreasing prescriptions in children likely colonized with GAS. We examined the impact of guideline-concordant education on appropriate RADT and antibiotic use in pharyngitis on providers' (physician and APRN) use of RADT in an academic and a private pediatric primary care clinic.

Methods. Retrospective chart review of 1,085 healthy children, age 1-5 years old, seen in clinics between September 2015 and March 2019 (355 pre- and 730 post-education; 211 academic and 874 private). Education occurred in 3/2017. Cases selected had either a complaint of sore throat, RADT, or a diagnosis of GAS pharyngitis or pharyngitis. Data collected included the presence of viral symptoms (e.g., cough, rhinorrhea), RADT/GAS culture results, diagnosis, and prescribed antibiotics. RADT was deemed unnecessary for all children <3 years old without GAS exposure, in patients with ≥2 viral symptoms, or in patients ≥3 years old without pharyngitis.

Results. Overall, RADT use decreased from pre to post intervention (72.1% vs. 23.4% of patients, P ≤ 0.0001). Unnecessary RADT use decreased overall (50.4% vs. 16.2%, P ≤ 0.0001), in all clinics (private: 56.2% vs. 16.0%, P ≤ 0.0001; academic: 38.1% vs. 17.4%, P = 0.0012), and with all providers (physician: 41.6% vs. 18.3%, P ≤ 0.0001; APRN: 58.8% vs. 14.1%, P ≤ 0.0001). Unnecessary RADT use decreased for children <3 years old (28.1% vs. 7.4%, P ≤ 0.0001) and in patients with ≥2 viral symptoms (65.7% vs. 16.5%, P ≤ 0.0001).

Conclusion. Unnecessary RADT use decreased in the post-education period overall (34%), in children <3 years old (21%), and in patients with ≥2 viral symptoms (49%). Reductions were also seen in both academic (21%) and private (40%) clinics, as well as with both physicians (23%) and APRNs (45%). Limitations include the lack of a control group and sample size variance by clinic. We observed positive trends in RADT reduction following provider education in private and academic settings; however, further research including a control group and optimal sample size is needed to confirm any direct impact.

Disclosures. All authors: No reported disclosures.
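As a rough check, the headline pre/post comparison in the abstract above can be re-derived from the proportions and denominators it reports; the counts below are back-calculated and rounded, so the result is approximate:

from statsmodels.stats.proportion import proportions_ztest

# Unnecessary RADT: 50.4% of 355 pre-education vs. 16.2% of 730 post-education.
unnecessary = [round(0.504 * 355), round(0.162 * 730)]  # [179, 118]
totals = [355, 730]
z, p = proportions_ztest(unnecessary, totals)
print("z = %.1f, p = %.2e" % (z, p))  # consistent with the reported P <= 0.0001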
area under the receiver operating characteristic curve (AUC) of 0.75. In the validation data set the optimized model had a sensitivity/specificity of 36% and 99% (AUC: 0.68; misclassification error: 0.12) and positive/negative predictive values of 89% and 88%, respectively. The most important features were albumin, age, and procalcitonin.

Conclusion. Structured granular medical data and machine learning approaches are an innovative tool that can be used in a retrospective setting for the prediction of adverse outcomes in patients with prolonged febrile neutropenia. This study is the first important step toward clinical decision support based on predictive models in high-risk cancer patients.

Disclosures. All authors: No reported disclosures.

Improving Surveillance of Rocky Mountain Spotted Fever (RMSF): Implementation of a Multidisciplinary Process

Background. Several Arizona tribal lands are highly endemic for the potentially deadly tickborne disease Rocky Mountain spotted fever (RMSF). In 2017, state public health officials were concerned with the underreporting of RMSF in our rural American Indian (AI) community. Surveillance of RMSF using serologic methods requires two samples: a baseline (acute) titer and a second (convalescent) titer two to four weeks later. Patient return rates are low, leading to poor understanding of disease burden. Our hospital serves a predominantly AI population that is spread across a large geographic area, with limited access to reliable transportation.

Methods. We established a model (Figure 1) for improved RMSF surveillance with a multidisciplinary team comprising clinicians, pharmacists, laboratorians, community health representatives (CHRs), environmental health, clinical care coordinators (CCCs), and public health nurses. The success and sustainability of the system depend on multiple departments sharing the workload.

Results. As a result of the model, we identified 22 cases of RMSF in 2018, including one death (Figure 2). Testing in the community increased over 9-fold, and the total number of titers sent to the state lab increased over 13-fold from 2017 to 2018. The system facilitated laboratory follow-up, resulting in 61% of samples sent as pairs (acute + convalescent), compared with 36% of samples paired in 2017 (Figure 3).

Conclusion. This multidisciplinary process led to improved case identification, improved testing efficiency, and sustainable surveillance for RMSF.
There was a marked increase in RMSF cases detected at our site, as well as an increase in the number of samples tested and in the percentage of paired samples obtained during 2018. Beyond this relative improvement, our success rate in paired titers is now the highest in the state of Arizona, where approximately 40% of samples are paired. There is a need for practical and integrated systems to more accurately test and track cases of RMSF in highly endemic, rural areas. Working together across departments was crucial to address challenges and provide solutions, and led to the success of the model. This process provides a model framework for inter-departmental collaboration and develops a unique system to improve both patient care and education for healthcare workers and the community.

Disclosures. All authors: No reported disclosures.

Influence of Microbiological Culture Results on Antibiotic Choice for Veterans with Hospital-Acquired Pneumonia

Background. Respiratory specimens help inform the treatment of hospital-acquired pneumonia (HAP), permitting clinicians to ensure effective and, ideally, narrow-spectrum antibiotic therapy. Here, we examine changes in antibiotic regimens to treat HAP based on the antibiotic susceptibility of pathogens recovered from respiratory samples.

Methods. At a single Veterans Affairs (VA) Medical Center, we identified veterans hospitalized between October 2014 and September 2018 with HAP, defined as a clinical respiratory sample obtained >48 hours after admission and corresponding clinical signs and symptoms. Exclusion criteria were death, transfer to hospice care, or discharge within 48 hours of sample collection, as well as admission from an outside hospital. For each specimen, we assessed timestamps for collection, Gram stain, identification of organisms, and results of susceptibility testing. We used the antibiotic spectrum index (ASI) to assess changes in antibiotics given to patients during hospitalization and at discharge.
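The ASI mentioned in the Methods assigns each antibiotic a fixed score reflecting the breadth of organisms it covers, and the index for a multi-drug regimen is commonly computed as the sum of its agents' scores, so de-escalation after culture results shows up as a falling ASI. A minimal sketch of that bookkeeping (the per-agent scores below are illustrative placeholders, not the published ASI values):

```python
# Hypothetical per-agent spectrum scores; the published ASI assigns each
# antibiotic a fixed score based on the breadth of organisms it covers.
ASI_SCORES = {"vancomycin": 4, "piperacillin-tazobactam": 9, "cefepime": 8,
              "ceftriaxone": 5, "azithromycin": 4}

def regimen_asi(agents: list[str]) -> int:
    """ASI of a regimen, taken here as the sum of its agents' scores."""
    return sum(ASI_SCORES[a] for a in agents)

empiric  = ["vancomycin", "piperacillin-tazobactam"]  # broad empiric HAP coverage
narrowed = ["ceftriaxone"]                            # after susceptibility results

print(regimen_asi(empiric), "->", regimen_asi(narrowed))  # 13 -> 5: narrowing
```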
Masked uncontrolled hypertension: Prevalence and predictors

Background. There are limited data on 'masked uncontrolled hypertension' (MUCH); its prevalence in patients with treated and apparently well-controlled BP is unknown.

Objectives. To define the prevalence and predictors of MUCH among hypertensive patients with controlled office blood pressure.

Methods. One hundred ninety-nine hypertensive patients presented to the specialized hypertension clinics at two university hospitals. All patients had controlled office blood pressure (less than 140/90 mmHg). Patients were assessed regarding history, clinical examination, and laboratory data. All patients underwent ambulatory blood pressure monitoring (ABPM) for 24 h, within a week after the index office visit. MUCH was diagnosed if the average 24-h ABPM was elevated (systolic BP ≥ 130 mmHg and/or diastolic BP ≥ 80 mmHg) despite controlled clinic BP.

Results. Sixty-six patients (33.2%) had MUCH according to 24-h ABPM criteria (mean age 53.5 ± 9.3 years, 60.6% men). MUCH was mostly caused by poor control of nocturnal BP, with the percentage of patients in whom MUCH was solely attributable to an elevated nocturnal BP almost double that due to daytime BP elevation (57.3% vs. 27.1%, P < 0.001). The most common predictors of MUCH were smoking, DM, and a positive family history of DM.

Conclusion. The prevalence of masked suboptimal BP control is high. Office BP monitoring alone is thus inadequate to ascertain optimal BP control, because many patients have an elevated nocturnal BP. ABPM is needed to confirm proper BP control, especially in patients with a high cardiovascular risk profile. Smoking, DM, and a positive family history of DM were the most common predictors of MUCH.

Introduction

Masked hypertension (MH) is a term used to define people who have a normal seated clinic blood pressure (BP) but an elevated out-of-office BP, as determined by ambulatory BP monitoring (ABPM) or home BP monitoring (HBPM). Masked hypertension is the opposite of the more commonly recognized 'white coat hypertension'. Patients with MH are now known to be at particularly high risk of developing cardiovascular disease (CVD) because they often remain undetected and untreated. 1 Most studies on the prevalence of MH have primarily focused on 'treatment-naïve' patients, prior to the diagnosis of hypertension, and many of them based the measurements on HBPM or daytime ABPM, or were of small size. 1 This daytime definition of MH did not include people whose sole abnormality is an elevation in nocturnal BP, which some studies suggest is the strongest predictor of CVD risk compared with daytime or 24-h mean pressures. 2 Furthermore, few studies have established the prevalence of the equivalent of MH, i.e., 'masked uncontrolled hypertension' (MUCH), in patients with treated hypertension. MUCH is used to describe treated patients in whom BP levels are sub-optimally controlled according to ABPM, but who are considered controlled according to the clinic BP targets recommended by current treatment guidelines (<140/90 mmHg). Despite the recognized potential for clinic BP alone to both over- and under-diagnose hypertension, to date few guidelines, such as the NICE 2011 guidelines, have recommended the routine use of ABPM to monitor the quality of BP control, because there are very limited data on the quality of BP control in routine clinical practice. 3 The aim of this study is to determine the prevalence and predictors of MUCH in hypertensive patients with controlled office BP.
Methods

This is a prospective, non-randomized, observational, cross-sectional study that enrolled 199 HTN patients who presented to the specialized HTN clinics at two university hospitals. Patients were recruited from February 2016 to June 2017. Included were hypertensive patients on regular antihypertensive treatment who had controlled office blood pressure readings (less than 140/90 mmHg, and 140/85 mmHg for diabetics, for at least two visits, one month apart). 4 Excluded from the study were those with secondary hypertension, acute myocardial infarction, significant valvular heart disease, decompensated heart failure (New York Heart Association class III and IV), and pregnant women. Patients gave informed consent to be included in this study. They underwent full clinical evaluation, including assessment of cardiovascular risk factors, e.g., history of diabetes mellitus, smoking and its duration, current medications, family history of CV risk factors, and current antihypertensive drugs (class and dosage). Examination included assessment of body mass index (BMI; obesity defined as BMI > 30 kg/m²), waist circumference, supine heart rate, and peripheral pulses, as well as searching for signs of target organ damage. Blood pressure measurement was done using a digital, fully automated device (Omron-6). 5 Patients were allowed to rest for 3-5 min before measurement. Three BP readings were taken, 1-3 min apart; the first one was omitted and the last two readings were averaged. Patients were then allowed to stand unsupported for 2 min, and standing BP readings were recorded. Laboratory workup included hemoglobin level, serum creatinine, potassium, total cholesterol, low-density lipoprotein, high-density lipoprotein, triglycerides, fasting blood sugar, and uric acid. Fundus examination was requested to detect significant hypertensive retinopathy (≥ grade II). Urine analysis was performed in all studied patients; those who had proteinuria underwent albumin/creatinine (A/C) ratio measurement. Patients with an abnormal A/C ratio (defined as albuminuria above 30 mg/dl) were considered to have proteinuria as a marker of target organ damage. 6 A standard 12-lead ECG was done in all patients. Abnormalities such as arrhythmias, premature beats, ischemic heart disease, conduction defects, and left ventricular hypertrophy (LVH) were documented. Established ECG criteria for the diagnosis of LVH were followed. 7 Target organ damage (TOD), including LVH, carotid bruit, hypertensive retinopathy ≥ grade II, peripheral arterial disease, and clinical CVD (coronary heart disease, congestive heart failure), was diagnosed using the appropriate investigations and documented. Chronic renal disease was diagnosed when serum creatinine was >1.3 mg/dl and/or when proteinuria was present. All patients underwent 24-hour ABPM. ABPM was conducted on the patient's non-dominant arm using the Holter system Model DMS 300-4A, 8 with the device set to measure BP every half an hour in the daytime and every hour during the night, according to the patient's sleep and wake times. The patients were asked to continue performing their normal routines but to remain still during the measurements. Blood pressure measurements were performed for all patients on all days of the working week. Average day, night, and 24-hour blood pressure and pulse rate were collected. Dipping (i.e., nocturnal blood pressure fall) was categorized into four groups: 6 (a) normal dipping, where the ratio between mean night systolic and mean day systolic BP is 0.8-0.9; (b) non-dipping, where the ratio is 0.9-1; (c) reverse dipping, where the ratio is more than 1; and (d) extreme dipping, where the ratio is less than 0.8. Valid ABPM recordings had to fulfill a series of pre-established criteria, including successful recording of 80% of systolic BP (SBP) and diastolic BP (DBP) readings during both the daytime and nocturnal periods, and at least one BP measurement per hour. The primary aim was to detect the prevalence of MUCH, which is defined as: 6 normal office BP plus (a) mean awake ABPM readings ≥ 135/85 mmHg, and/or (b) mean night ABPM readings ≥ 120/70 mmHg, and/or (c) mean average 24-h ABPM readings ≥ 130/80 mmHg.
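As a worked illustration of the two classification rules just described (the dipping categories and the MUCH definition), here is a minimal Python sketch; the function names and input structure are mine, not part of the study protocol:

```python
def dipping_category(mean_night_sbp: float, mean_day_sbp: float) -> str:
    """Categorize nocturnal dipping from the night/day systolic BP ratio."""
    ratio = mean_night_sbp / mean_day_sbp
    if ratio < 0.8:
        return "extreme dipping"
    elif ratio <= 0.9:
        return "normal dipping"
    elif ratio <= 1.0:
        return "non-dipping"
    else:
        return "reverse dipping"

def is_much(office_controlled: bool,
            awake_sbp: float, awake_dbp: float,
            night_sbp: float, night_dbp: float,
            avg24_sbp: float, avg24_dbp: float) -> bool:
    """MUCH = controlled office BP plus any elevated ABPM average."""
    if not office_controlled:        # MUCH is only defined for controlled office BP
        return False
    awake_high = awake_sbp >= 135 or awake_dbp >= 85
    night_high = night_sbp >= 120 or night_dbp >= 70
    avg24_high = avg24_sbp >= 130 or avg24_dbp >= 80
    return awake_high or night_high or avg24_high

# Example: controlled office BP but an elevated nocturnal average -> MUCH
print(dipping_category(128, 135))                 # ratio ~0.95 -> "non-dipping"
print(is_much(True, 130, 80, 125, 72, 128, 78))   # True (night BP >= 120/70)
```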
Statistical analysis

Quantitative variables were expressed as mean and standard deviation (SD), while qualitative variables were presented as numbers and percentages. We divided the study patients into two groups: group 1 with normal office and normal mean 24-h ABPM (controlled HTN), and group 2 with normal office and elevated mean 24-h ABPM (masked uncontrolled HTN; MUCH). We compared the two groups regarding demographics, risk factors, target organ damage, and other parameters by means of the Chi-square/Fisher exact test for categorical data and Student's t-test for continuous data. Linear regression analysis was used to detect predictors of MUCH. All statistical tests were two-sided, and we judged a P-value of <0.05 to be significant. All analyses were carried out using SPSS 20.

Results

Demographic and clinical characteristics of the patients are shown in Table 1. Most patients were middle-aged. About one third had diabetes mellitus (DM) and one third were current heavy cigarette smokers. The data obtained from the ambulatory BP analysis show that about two-thirds of patients had non-dipping or reverse-dipping patterns in nocturnal BP readings. About one third of patients (n = 66, 33.2%) were diagnosed with MUCH according to 24-h ABPM readings. Using only daytime ABPM, 54 (27.1%) patients had MUCH, while using only nighttime BP, 114 (57.2%) patients had MUCH. Nighttime ABPM thus appears to have the greater impact on the high prevalence of MUCH found in the 24-h average ABPM analysis. The diagnosis of MUCH (by 24-h average BP) was mainly due to combined elevation of both systolic and diastolic BP (48%) rather than elevated only systolic (30.3%) or only diastolic (21.2%) blood pressure. Characteristically, patients with MUCH had a higher prevalence of cardiovascular risk factors: a higher prevalence of DM, dyslipidemia, heart failure, and smoking, and a higher prevalence of a positive family history of HTN and DM. They also had an inadequate response to standing compared with the controlled HTN group. Both groups showed comparable results regarding the prescribed antihypertensive medication (Fig. 1). The most frequently prescribed antihypertensive drug, in both groups, was a beta blocker, and the least prescribed was a diuretic. The laboratory workup of patients with MUCH is shown in Table 2. No characteristic laboratory difference was found between the two groups. Linear regression analysis showed that the most significant predictors of MUCH were smoking, DM, and a positive family history of DM (Table 3).

Discussion

Hypertension (HTN) is a chief public-health problem challenging both economically developed and developing countries, as it is highly coupled with cardiovascular and kidney diseases.
9 Data from the Egyptian National Hypertension Project (NHP; 1993) showed that the prevalence of HTN was 26.3% among Egyptian adults. The awareness rate among Egyptians was 37.5%, with 23.9% of patients receiving antihypertensive medications and a control rate of only 8%. 10 For several years, BP measurement in the clinic was the gold standard for the detection and diagnosis of clinical HTN and for monitoring the beneficial effect of antihypertensive medications. With the introduction of ABPM and HBPM into clinical practice, new clinical terms for describing HTN were introduced. One of the noteworthy terms in clinical practice is MH, which was first described by Pickering in 2002. 11 Although the term was originally used for untreated hypertensive persons, later publications used it to refer to patients with treated hypertension. 12 The exact mechanism responsible for MH is not completely recognized. In order to understand the mechanism behind MH, it was postulated that it may be the result of reduced office BP and/or increased ABPM. A lower office BP measurement may be due to the white coat effect, which is the difference between office and out-of-office BP; this effect is negative in patients with MH. 13 Another factor that may contribute to lower office BP is the relation between diagnostic labeling as hypertensive and office BP, as the absence of diagnostic labeling as hypertensive has been found to be associated with lower office BP. 14 Smoking, alcohol consumption, physical activity, and psychosocial factors (anxiety, interpersonal conflict, and job stress) may all contribute to an increase in ABPM. 15 The prevalence of MH has been reported to range from 8% to 49%, with a tendency to be higher in treated hypertensives. 16 This variability in MH prevalence is attributed to several factors, such as the characteristics of the population studied (general vs. clinic-based population, treated HTN vs. HTN-naïve, ethnic background), the ABPM criteria used to define MH (daytime BP vs. 24-h BP), and the use of different BP thresholds for defining MH. Our study, a clinic-based study of patients with treated HTN, showed a MUCH prevalence of 33.2%. In concordance with our results, the Spanish registry 17 reported a MUCH prevalence of 31.1% in Spanish patients with treated HTN. A similar prevalence of MUCH in treated HTN was reported by Pierdomenico et al. 18 However, our results were higher than those reported by the SHEAF study (9.4%) 19 and the J-HOME study (19%), 20 both of which used HBPM rather than ABPM for the detection of MUCH. Nighttime BP is known to be a strong predictor of total, cardiovascular, stroke, and cardiac mortality. 21 Elevated nighttime BP had a great impact on the prevalence of MUCH in our study, as 57.3% of the patients proved to have MUCH using only nighttime BP vs. 27.1% when only daytime BP was used. About 80% of patients reported marked discomfort with the device, especially at night, and they were awakened from sleep by cuff inflations. The resulting disturbed sleep rhythm may have altered sympathetic activity, leading to a nocturnal surge in BP. Results from the Spanish registry 17 showed a lower prevalence of nocturnal HTN (24.3% vs. 57.3% in our study). One of the findings of this study is that patients with MUCH showed higher standing SBP compared with those with controlled BP. An inverse relationship between the BP response to standing and the difference between clinic BP and daytime BP has been documented before.
Compared with patients with a normal reaction to standing, patients with an increased reaction showed higher levels of systolic and diastolic ABPM. 22 Such data indicate that increased reactivity to standing is predictive of higher ABPM and explain why patients with MUCH had higher standing SBP in our study. This study showed that patients with MUCH had a high prevalence of cardiovascular risk factors and TOD, which is concordant with the findings of Pickering et al., 23 the Spanish registry, 17 and the J-HOME study, 20 and signifies the importance of early detection and treatment of patients with MH, since such patients are at increased risk of cardiovascular mortality and stroke. 24 Whether the high-risk profile is a consequence of MUCH or merely an association is not yet known. Using multivariate analysis, smoking, DM, and a family history of DM were found to be the strongest predictors of MUCH in our study. Bromfield et al. 25 also reported diabetes to be associated with a higher prevalence of masked daytime and isolated nocturnal uncontrolled hypertension among African Americans taking antihypertensive medication in the Jackson Heart Study. The Spanish registry 17 showed that, after multivariable adjustment, the odds ratio for masked 24-hour uncontrolled hypertension associated with diabetes among patients taking antihypertensive medications was 1.25 (95% CI = 1.14-1.37).

Limitations

This study has several limitations that we have to address. It was performed on a small sample of hypertensive patients, and a study on a larger number of patients from different geographic regions across the country is needed to verify our findings. Also, our study does not reflect the general population, as patients were recruited from specialized HTN clinics; a multicenter, population-based study may be required. The diagnosis of MUCH was based on a single ABPM recording; it would have been better to repeat the ABPM to test the reproducibility of the MUCH diagnosis.

Conclusion

About one third of patients showed MUCH despite apparently well-controlled office BP readings. Elevated nocturnal BP acted as a major determinant of the presence of MUCH, a finding which cannot be detected by regular clinic measurements. Patients with MUCH showed a higher constellation of traditional cardiovascular risk factors and TOD, which calls for tight BP control in order to reduce future cardiovascular events. Our recommendation is to suspect MUCH in apparently controlled HTN patients with a high-risk profile and to order ABPM for these patients for better evaluation and management of HTN.
Aminopeptidase N/CD13 Crosslinking Promotes the Activation and Membrane Expression of Integrin CD11b/CD18

The β2 integrin CD11b/CD18, also known as complement receptor 3 (CR3), and the moonlighting protein aminopeptidase N (CD13), are two myeloid immune receptors with overlapping activities: adhesion, migration, phagocytosis of opsonized particles, and respiratory burst induction. Given their common functions, shared physical location, and the fact that some receptors can activate a selection of integrins, we hypothesized that CD13 could induce CR3 activation through an inside-out signaling mechanism and possibly have an influence on its membrane expression. We revealed that crosslinking CD13 on the surface of human macrophages not only activates CR3 but also influences its membrane expression. Both phenomena are affected by inhibitors of Src, PLCγ, Syk, and actin polymerization. Additionally, after only 10 min at 37 °C, cells with crosslinked CD13 start secreting pro-inflammatory cytokines like interferons type 1 and 2, IL-12p70, and IL-17a. We integrated our data with a bioinformatic analysis to confirm the connection between these receptors and to suggest the signaling cascade linking them. Our findings expand the list of features of CD13 by adding the activation of a different receptor via inside-out signaling. This opens the possibility of studying the joint contribution of CD13 and CR3 in contexts where either receptor has a recognized role, such as the progression of some leukemias.

Introduction

CD13, or aminopeptidase N, is a cell membrane ectoenzyme that is considered a marker of the myelomonocytic lineage [1]. Most of its 960 aa are located extracellularly, roughly 25 aa constitute the transmembrane portion, and only 7-10 aa correspond to the intracellular portion of the protein [2]. The intracellular and extracellular segments of CD13 have distinct functions. The enzymatic activity is located in the extracellular domains and accounts for CD13's role in the processing of bioactive peptides. The intracellular portion, on the other hand, mediates signal transduction when the receptor is crosslinked. This signaling activity is independent of its peptidase activity. Thus, CD13 can mediate cellular processes like phagocytosis and the subsequent respiratory burst, cell migration, and adhesion [3-5]. Signal transduction takes place despite the shortness of the intracellular tail, with only a single potential p-Tyr and the absence of classical signaling sequences like ITAMs. Due to its wide range of activities, CD13 is considered a "moonlighting" protein.
Complement receptor 3 (CR3, Mac-1, or integrin αM/β2) is a member of a group of heterodimeric membrane proteins called α/β integrins. It is composed of the peptides CD11b (exclusive to CR3) and CD18; thus, it is also called CD11b/CD18 [6]. CR3 is primarily expressed in leukocytes like neutrophils, monocytes, macrophages, and dendritic cells [7]. CR3 has two main physiological roles. First, it acts as a phagocytic receptor for particles and pathogens opsonized with iC3b complement fragments (reviewed in [8]). Second, it is an adhesion molecule that participates in leukocyte extravasation during inflammation due to its ability to bind ligands present in endothelial cells [9,10]. The activation of CR3, that is, the transition from its low-affinity to its high-affinity conformation, occurs either through ligand recognition (outside-in signaling) or via an intracellular signal coming from a different cell surface receptor (inside-out signaling). Some signaling molecules can participate in both CR3 inside-out and outside-in signaling, including Rap1, RIAM, Talin, Kindlin, and Syk [7,11-14].

The link between CD13 and CR3 is also supported by in vivo evidence, as both molecules can be found together in functional microdomains within the cell membrane called lipid rafts [15]. These structures are key to cell signaling since they bring components of specific pathways close together, thus decreasing the possibility of fortuitous activation or blocking of signals from other cascades (reviewed in [16,17]).

In summary: (i) integrins like CR3 can be activated by the engagement of other receptors through a mechanism known as inside-out signaling, as is the case for FcγRs [18,19], with which CD13 shares the function of primary phagocytic receptor as well as the activation of many signaling molecules, and (ii) both CD13 and CR3 can mediate functions such as phagocytosis, adhesion, and respiratory burst. Moreover, CD13 and CR3 can be found in physical proximity, as both are present in lipid rafts [15], which is a strong indicator of a functional relationship. Additionally, a few publications have shown a functional link between CD13 and integrins: Carrascal et al. demonstrated that the expression of CD13 is associated with that of integrin αvβ3 in breast cancer [20], and Ghosh et al. [21] showed that CD13 modulates the trafficking of integrin β1 via IQGAP, ARF6, and EFA6 in Kaposi sarcoma and human cervical cancer epithelial cells. In this work, we report the existence of a previously undescribed signaling pathway that links CD13 and CR3 in human macrophages, using an integrated analysis of bioinformatics and experimental data.
First, we ascertained that crosslinking CD13 using monoclonal antibodies causes the activation of CR3. Second, we established the existence of at least two levels of control for the activation of CR3 following CD13 crosslinking: one that involves the inside-out signaling cascade that directly links both receptors, and a second one that regulates the membrane expression of CR3. Third, we measured a panel of 12 cytokines and showed that, even at a short time after CD13 stimulation, CR3 activation is accompanied by the secretion of pro-inflammatory cytokines. Fourth, we used the information yielded by the experiments, along with the interrogation of molecular ontology bioinformatic databases, text mining analyses, and a manually curated functional protein interaction network, to suggest the components of the signal transduction pathway that leads to the activation and membrane expression of CR3 following CD13 crosslinking. A summary of our workflow can be found in Supplementary Figure S1.

Our findings have implications for the study of conditions in which the expression of CD13 is related to disease progression, as it is in breast cancer, where CD13 is linked to the development of metastases [20], a phenomenon largely driven by integrins.

Cell Culture

Tohoku Hospital Pediatrics-1 (THP-1) cells (ATCC) were maintained in a humidified atmosphere at 37 °C with 5% CO₂ in RPMI-1640 medium supplemented as recommended by the supplier. For differentiation into macrophages, cells were seeded at 4.5 × 10⁶ per 10-cm plate or 8 × 10⁵ per well in 6-well plates in supplemented RPMI-1640 medium and stimulated with 20 nM phorbol 12-myristate 13-acetate (PMA) for three days. Cells were washed once with warm PBS and incubated with fresh medium for 24 h before use. Differentiation was confirmed with CD11b expression (Supplementary Figure S2). All experiments carried out with cells from human donors were performed following the Ethical Guidelines of the Instituto de Investigaciones Biomédicas, UNAM, Mexico City, Mexico. Human peripheral blood mononuclear cells (PBMCs) were isolated from anonymous healthy male donors' buffy coats obtained from the blood bank at Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán, Secretaría de Salud, Mexico City, Mexico, using gradient centrifugation with Lymphoprep, as previously described [4]. For monocyte isolation, PBMCs were washed three times with PBS, pH 7.4, using centrifugation at 400× g for 10 min. After the last wash, cells were resuspended in serum-free RPMI-1640 medium supplemented as described before and were seeded (5-6 × 10⁷ PBMCs/plate) in 100 mm × 20 mm cell culture-treated polystyrene culture dishes (Corning, New York, NY, USA). Cultures were maintained in a humidified atmosphere at 37 °C with 5% CO₂ for 1 h to allow monocytes to adhere to the plastic plate. Non-adherent cells were eliminated with gentle washing, and adherent cells, enriched for monocytes (≥95% purity, as determined with flow cytometry using CD14 as a marker of the monocytic population), were cultured for 7-10 days, for differentiation into macrophages, in RPMI-1640 medium supplemented as described before plus 5 ng/mL rh M-CSF at 37 °C. For experiments, macrophages were harvested with gentle cell scraping.
CR3 Activation

The cells were incubated for 3 h in serum-free supplemented RPMI-1640 with or without inhibitors (10 µM BAY, 20 µM Src inhibitor-1, 5 µM U-73122 hydrate, or 10 µM cytochalasin D). Then, they were harvested. Freshly harvested macrophages incubated without inhibitors were called "pre-treatment". The treatment consisted of incubating 0.25 × 10⁶ cells/sample in 0.2 mL serum-free supplemented RPMI-1640 medium with 2.5 µg of mAb C (anti-CD13) or mAb IV.3 (anti-CD32, positive control) complete antibody for 30 min at 4 °C. The cells were washed three times with fresh medium and incubated with 4 µg of GαM F(ab')2 fragments for 30 min at 4 °C. Immediately after, the cells were incubated for 10 min at 37 °C and then pelleted before being fixed with 1% paraformaldehyde (PFA) for 25 min at RT. The cell-free supernatants from certain samples were collected and stored at −20 °C for cytokine quantification (see below).

Flow Cytometry

To quantitate CD11b expression and activation, fixed samples were washed two times with cold PBS and stained with 50 µL of a 1:20 dilution of murine monoclonal APC anti-human CD11b (IgG1, ICRF44) or FITC anti-human CD11b (activated) antibody (IgG1, CBRM1/5) for 40 min at 4 °C. The cells were washed three times with cold PBS. Staining for CD13 or CD32 (FcγRII) was performed by incubation with 10 µg anti-CD13 or anti-CD32 mAbs in serum-free supplemented RPMI-1640 medium for 30 min at 4 °C. The cells were washed three times with the same medium, incubated with 1:500 GαM-FITC antibody for 30 min at 4 °C, and then washed three times with cold PBS and fixed with 1% PFA for 25 min at RT. Fluorescence intensity was measured using flow cytometry (blue/red Attune cytometer, Applied Biosystems-Thermo Fisher, Waltham, MA, USA). Flow cytometry data were displayed either as MFI for cells with crosslinked CD13 vs. their activation controls, or as integrated MFI (iMFI, the percentage of positive cells multiplied by their MFI) for cells with CD13 crosslinked in the presence of inhibitors [22]. For the latter, a normalized proportion of the cells incubated with inhibitors vs. their respective controls is presented.

Cytokine Quantification

Samples from the CR3 activation experiments were used to quantify a panel of 12 cytokines. Specifically, supernatants from cells without antibodies (control) and cells with both primary and secondary antibodies (mAb C + sec) were assayed. The frozen supernatants were carefully thawed on ice and loaded as duplicates onto two Milliplex plates (Millipore Sigma, Darmstadt, Germany), one to detect IFN-α and a second one for IFN-γ, IL-12p70, IL-17a, IL-6, IL-1β, IL-2, IL-8, IL-4, IL-10, MCP-1, and TNF-α. The assays were performed according to the manufacturer's instructions and measured in a Luminex Multiplexing Instrument (Millipore Sigma, Darmstadt, Germany).
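To make the two read-out conventions above concrete (the iMFI and the normalized proportion relative to control), here is a minimal sketch; the variable names and example values are illustrative, not data from the study:

```python
def imfi(percent_positive: float, mfi: float) -> float:
    """Integrated MFI: percentage of positive cells multiplied by their MFI."""
    return percent_positive * mfi

# Hypothetical read-outs for one inhibitor condition and its control.
control_imfi   = imfi(percent_positive=62.0, mfi=1500.0)
inhibitor_imfi = imfi(percent_positive=40.0, mfi=900.0)

# Normalized proportion of the inhibitor-treated sample vs. its control,
# as plotted for the inhibitor experiments.
normalized = inhibitor_imfi / control_imfi
print(f"iMFI (control):   {control_imfi:.0f}")
print(f"iMFI (inhibitor): {inhibitor_imfi:.0f}")
print(f"normalized vs. control: {normalized:.2f}")  # < 1 means signal reduction
```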
Theoretical Cell Signaling Interaction Network Assembly

We constructed the functional protein interaction network of CD13, Syk, and CR3 and their closest partners using combined interaction scores from STRING [23]. A functional association in this context means either physical contact, participation in the same metabolic pathway, and/or participation in the same cellular process [24]. STRING scores draw on text mining and protein homology, among other evidence types, including gene neighborhood, gene fusions, gene co-occurrence, experimental evidence, and curated databases. Each type of evidence gives rise to an individual score for the likelihood of an interaction between a pair of proteins, given currently available evidence in the database. STRING computes combined scores by integrating the individual scores and correcting for the probability of randomly observing the interaction. Scores range from 0 to 1, with 1 being the highest possible result.

The search for functional partners was performed individually for each interrogation query (CD13, CD11b, CD18, and Syk) and focused on human proteins. High-confidence-scoring molecules (0.8 and above) from the first layer of interactions with the query were considered. The resultant group of proteins was filtered based on the requirements for this particular inside-out signaling pathway: non-receptor kinases, adaptor proteins able to bridge CD13 to other components of the pathway, especially Syk, and inhibitory molecules like protein phosphatases or ubiquitin ligases. In some cases, other interacting receptors were considered, as they may provide insight into the reported mechanisms for this type of interaction, namely those similar to the studied receptors, CD13 and CR3: metalloproteases, phagocytic receptors, integrins, and other adhesion molecules. To ensure the quality and specificity of the text-mining element of the STRING network, the GeneCards website [25] and the PubMed repository [26] were used to ascertain the suitability of each selected protein, i.e., to confirm the function of each node, as well as its gene and protein expression in myelomonocytic cells. Finally, the interaction network was manually curated according to experimental evidence gathered from previous publications.
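As an illustration of this kind of query, the sketch below retrieves functional partners from the STRING REST API and keeps only those above the combined-score cutoff used here. The endpoint layout follows STRING's documented API conventions at the time of writing, and the gene symbols for the interrogation queries are standard aliases (ANPEP for CD13); both should be checked against the current STRING documentation:

```python
import urllib.parse
import urllib.request

def string_partners(gene: str, species: int = 9606, min_score: float = 0.8):
    """Query STRING for functional partners of `gene` (human by default)
    and keep only interactions with a combined score >= `min_score`."""
    params = urllib.parse.urlencode({
        "identifiers": gene,                       # e.g., "ANPEP" for CD13
        "species": species,                        # 9606 = Homo sapiens
        "required_score": int(min_score * 1000),   # STRING uses a 0-1000 scale here
        "caller_identity": "example_script",
    })
    url = f"https://string-db.org/api/tsv/interaction_partners?{params}"
    with urllib.request.urlopen(url) as resp:
        lines = resp.read().decode().splitlines()
    header = lines[0].split("\t")
    rows = [dict(zip(header, ln.split("\t"))) for ln in lines[1:]]
    # 'score' is the combined score, reported on a 0-1 scale in the TSV output.
    return [(r["preferredName_B"], float(r["score"])) for r in rows
            if float(r["score"]) >= min_score]

# Interrogation queries used in this work: CD13 (ANPEP), Syk (SYK),
# CD11b (ITGAM), and CD18 (ITGB2).
for q in ["ANPEP", "SYK", "ITGAM", "ITGB2"]:
    partners = string_partners(q)
    print(q, len(partners), partners[:3])
```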
Statistical Analysis

Statistical analyses were performed using one-way ANOVA followed by a multiple comparisons test, or a paired two-tailed t-test in the case of the experiments with MDMs and the cytokine measurements. p-values below 0.05 were considered significant.

Crosslinking CD13 Results in the Activation of CR3 (CD11b/CD18)

We assessed the activation status of CR3 (CD11b/CD18) following CD13 crosslinking on human macrophages. CD13 molecules on the surface of THP-1 macrophages were crosslinked using the complete anti-CD13 antibody mAb C as the primary antibody and GαM F(ab')2 fragments as the secondary antibody. Next, the cells were stained with a FITC anti-CD11b (activated) antibody and analyzed in the flow cytometer. The cells were first gated for size and granularity (Figure 1A), then for singlets (Figure 1B), and finally for median fluorescence intensity in the BL1 (FITC) channel (Figure 1C,D). Figure 1C shows that the fluorescence histogram for the control unstimulated cells stained with the anti-CD11b (activated) antibody overlaps with the auto-fluorescence of unstained cells. The controls are cells incubated either without antibodies or only with secondary antibodies. The resulting histograms demonstrate that incubation in the absence of an anti-CD13 antibody does not produce a nonspecific anti-CD11b (activated) signal. In contrast, panel D shows a representative histogram of the CR3 activation produced when CD13 is crosslinked using both primary and secondary antibodies. Figure 1E shows the average and SD of the MFI from CR3 activation in CD13-crosslinked cells (n = 3), along with its controls. A one-way ANOVA followed by a multiple comparisons test confirms that our negative controls, i.e., cells incubated with only primary or secondary antibodies, as well as freshly harvested macrophages ("pre-treatment"), show no significant difference from cells incubated without antibodies. Only the activation of CR3 in cells with either crosslinked CD32 (positive control [18,19]) or crosslinked CD13 is significantly different from that in cells without antibodies (p < 0.01 and p < 0.0001, respectively). Figure 1F is a representative histogram for THP-1 macrophages incubated with mAb C and a secondary antibody coupled to FITC, showing that mAb C binds efficiently to all cells. These results were consistent in MDMs (Supplementary Figure S3).

Syk, Src, PLCγ, and Actin Polymerization Participate in the Activation of CR3 Triggered by CD13 Crosslinking

In order to gain insight into the signaling pathway connecting CD13 crosslinking and the activation of CR3, we chemically inhibited some of the molecules related to the signaling of these receptors. We assessed how these inhibitors affect the activation of CR3 triggered by CD13 crosslinking. For this, we pre-incubated THP-1 macrophages with either BAY 61-3606 (BAY), Src kinase inhibitor-1 (SKI-1), U73122, cytochalasin D (Cyt D), or no inhibitor (control) for 3 h in a serum-free medium. Then, the cells were harvested and CD13 on their surface was crosslinked. Finally, we measured CR3 activation using flow cytometry. Figure 2A shows representative histograms comparing cells stained with the anti-CD11b (activated) antibody with or without inhibitors. BAY augments the signal, while Cyt D, SKI-1, and U73122 diminish it. Such differences were statistically confirmed and are represented in Figure 2B, where the average and SD of the proportion of each inhibitor-incubated sample vs. its respective control are plotted. The p-values are <0.0001 for both BAY and SKI-1, and <0.001 for both Cyt D and U73122 (n = 3). These results indicate that Syk, Src, PLCγ, and actin polymerization have a role in the activation of CR3 triggered by CD13 crosslinking. It is noteworthy that incubation with BAY had the same effects on human MDMs (Supplementary Figure S4).
CD13 Crosslinking Also Controls CR3 Membrane Expression

In order to determine whether CD13 crosslinking had any effect on the membrane expression of CR3, we evaluated CR3 membrane expression using flow cytometry in THP-1 macrophages. Figure 3A displays representative histograms showing that the signal from the controls (cells incubated without antibodies or only with secondary antibody) stained with an α-CD11b antibody coupled to APC practically overlaps with that of unstained cells. In contrast, Figure 3B shows that crosslinking CD13 on the surface of macrophages induces the surface expression of CR3. Figure 3C shows the average and SD for the MFIs of CD11b expression on freshly harvested macrophages (pre-treatment), control cells, and cells in which CD13 was crosslinked, from three independent experiments. The overall expression of CR3 exhibits the same pattern as CR3 activation, except for cells before treatment. Cells stained before treatment have a basal CR3 level significantly different from that of cells treated without antibodies (control). These data indicate that basal CD11b membrane expression decreases after treating cells (two 30′ incubations at 4 °C, three washes, and a 10′ incubation at 37 °C) in the absence of antibodies or with only primary or secondary antibodies. Only CD13 crosslinking restores CR3 membrane expression, even at a higher level than pre-treatment.
Src, PLCγ, Syk, and Actin Polymerization Also Have a Role in CR3 Membrane Expression

After confirming that CD13 crosslinking also influences CR3 membrane expression, we investigated the possibility that the signaling pathway controlling this phenomenon and the one governing the activation of CR3 share some of their components. For this, we pre-incubated THP-1 macrophages with either BAY, SKI-1, U73122, Cyt D, or no inhibitor (control) for 3 h in a serum-free medium. Then, the cells were harvested and CD13 on their surface was crosslinked. Finally, we measured CR3 membrane expression using flow cytometry. Figure 4A shows representative histograms comparing cells stained with anti-CD11b (total) after stimulation by CD13 crosslinking in the presence of the different inhibitors. The pattern is similar to the one observed for CR3 activation: BAY augments the signal produced by the fluorochrome-coupled antibody in comparison with the control, while Cyt D, SKI-1, and U73122 diminish it. Such differences were statistically confirmed and are represented in Figure 4B, where the average and SD of the proportion of each inhibitor-incubated sample vs. its respective control are plotted. The p-value for SKI-1 was <0.0001, <0.001 for both BAY and Cyt D, and 0.0221 for U73122 (n = 3). These results indicate that Syk, Src, PLCγ, and actin polymerization have a role in the CR3 membrane expression influenced by CD13 crosslinking. Supplementary Figure S5 shows a comparison of the ratio of CD11b activation to expression in cells crosslinked in the presence of the different inhibitors.
CR3 Activation Triggered by CD13 Crosslinking Is Accompanied by the Secretion of Inflammatory Cytokines

Immune cells commonly respond to stimuli by secreting cytokines. The array of secreted cytokines determines the events that will follow the original stimulus (e.g., pro-inflammatory or anti-inflammatory). This is the reason why these proteins largely help orchestrate the local and systemic response. Thus, it is of interest to know the milieu generated, i.e., the accompanying cytokine profile, when immune receptors activate, in this case, CR3. This does not mean that the activation and rise in membrane expression of CR3 triggered by CD13 crosslinking are driven by cytokine secretion; rather, they are part of the overall cell response to a single stimulus. To this effect, we measured a panel of 12 cytokines. The cell-free supernatants of cells incubated without antibodies (control) and cells incubated with primary and secondary antibodies (mAb C + sec) were used to determine IFN-α, IFN-γ, IL-12p70, IL-17a, IL-6, IL-1β, IL-2, IL-8, IL-4, IL-10, MCP-1, and TNF-α. Only the pro-inflammatory cytokines IFN-α (p = 0.0154), IFN-γ (p < 0.01), IL-12p70 (p = 0.0283), and IL-17a (p < 0.01) had a significant increase in their concentration compared with the control, as seen in Figure 5. The IFNs reached an average of 30 pg/mL, and IL-12p70 and IL-17a reached an average of 8 pg/mL. Even though other cytokines like IL-8, TNF-α, and MCP-1 had higher concentrations, these were not significantly different from their controls.

The Interaction Network for CD13, Syk, and CR3 (CD11b/CD18) Functional Partners Contains 76 Proteins

The previous results showed that crosslinking CD13 on human macrophages induced the high-affinity conformation of CR3 and its membrane expression. Thus, we turned to bioinformatic databases to assemble an interaction network composed of functional partners of CD13, CR3, and Syk, one of the molecules explored in our chemical inhibition assays and a key signaling kinase in the immune system, particularly in myeloid cells, in order to propose a sequential mechanistic model for the inside-out signaling pathway that could account for the activation of CR3 following CD13 crosslinking.

To determine the potential set of proteins and pathways that participate in the CD13-CR3 inside-out signaling cascade, we constructed an interaction network using information from public databases, the literature, and previous experimental work from our laboratory. Given the high number of potential candidates, network nodes were selected using the predicted interaction score, biological function, and presence in the target cell type.
A functional protein interaction network for CD13, Syk, and CR3 was assembled by selecting the proteins with the highest combined scores (0.8 or more) from the STRING database [23], as well as previously determined experimental interactions. Text mining in STRING and in the databases GeneCards [25] and PubMed [26] was used to confirm that the chosen proteins are present in the myelomonocytic lineage. Supplementary Figure S6 presents the main ontology clusters for the selected proteins. For those interrogation nodes that resulted in more than 50 proteins with combined scores ≥0.8, the top 50 molecules were analyzed.

Using Syk as the interrogation query, we obtained a first layer of 158 interacting proteins with STRING combined scores above 0.9. Twenty-nine entries were selected according to the established criteria, i.e., representing known functional interactors of CD13 and/or CR3, or potential elements of the inside-out signaling pathway connecting the two of them. Two of these proteins were also selected in the CD11b and CD18 analyses. Supplementary Figure S7 includes the proteins selected to assemble the network, and the Venn diagram allows the identification of those molecules common to two or more interrogation queries. In the case of Syk, two of its interactors were also common with CD11b and CD18.

The polypeptide chains forming CR3, CD11b (ITGAM) and CD18 (ITGB2), were also subjected to this type of analysis. For CD11b, the first layer of interactions with STRING combined scores above 0.9 consisted of 167 proteins, resulting in 22 molecules of interest. Ten of these were also selected for CD18, as well as the two previously mentioned for both Syk and CD18.

Using CD18 as the interrogation query resulted in 184 interactors with a combined score of ≥0.9. Twenty-five molecules of interest were chosen, 13 of which were exclusive to CD18, and the rest were shared with Syk and CD11b, as aforementioned.

Using CD13 as the interrogation query yielded 27 molecules with STRING combined interaction scores of 0.8 and above; these were filtered to four proteins of interest using the criteria of being either downstream signal inhibitors or enhancers, adhesion molecules, or coreceptors which, following text mining, might provide information on the signaling pathways necessary for interaction with CD13. Finally, 12 proteins for which an interaction with CD13 was previously experimentally determined (SYK, GRB2, PI3K, FAK, IQGAP1, SRC, JNK, p38, MEK-1, PKC, ERK1/2, and SOS1) were added to the molecules of interest [5,27,28].

Figure 6 depicts the interaction network obtained, consisting of 76 non-redundant proteins. Of note, pink lines and bubbles represent experimentally determined interactions, including the ones contributed by this study. A detailed list of all proteins in the network, their main characteristics, and their corresponding interrogation nodes is presented in Table S1. Next, based on our interaction network, we constructed a sequential mechanistic model of the CD13-CR3 inside-out signaling pathway (Supplementary Figure S8).
Discussion

CD13 is an ectopeptidase that, along with other proteins like CD157, CD73, CD38, and CD26, can initiate signaling events upon stimulation [27,29]. Despite their need for extra accessory proteins, receptors without tyrosine-kinase activity (non-RTKs), like these ectopeptidases, may have been retained during evolution because they provide tighter control of cell activation than receptor tyrosine kinases (RTKs). Unlike non-RTKs, RTKs can undergo spontaneous activation upon stochastic encounters in the cell membrane, which poses a risk when they are overexpressed, as this can lead to disease development. For instance, overexpression of the RTK human epidermal growth factor receptor 2 (HER2) is associated with various cancers, including ovarian, prostatic, gastric, lung, and breast cancers. Furthermore, HER2 activation serves as a known mechanism of resistance to endocrine treatment in experimental models [30].

CR3 can exist in two main conformational states that correspond to a high or low affinity for its ligands, referred to as the active and inactive states, respectively. The high-affinity state can be reached either by outside-in or inside-out signaling (reviewed in [31,32]). We hypothesized that CD13 could functionally interact with integrins like CR3 by promoting its activation, considering the following two facts. First, the stimulation of many immune receptors activates CR3 via inside-out signaling, including but not limited to CD14, TLR2, TLR4, TLR9, and FcγRs [19,33,34]. Second, CD13 functionally interacts with other immune receptors; for example, crosslinking CD13 with monoclonal antibodies increases the phagocytic efficiency of particles directed to FcγRs [35]. Of note, due to the lack of reported natural ligands that stimulate CD13 signal transduction, crosslinking has so far been the stimulus of choice for this receptor [5,27,36].
We found that CD13, a non-RTK with a short cytoplasmic tail and no canonical signaling motifs [28], activates CR3 and controls its membrane expression. CD11b membrane expression does not always indicate activation, as shown by the different ratios of activated CD11b to total CD11b in cells treated with the various inhibitors. This supports the idea that CD13 crosslinking triggers two separate phenomena: expression, involving Syk and actin polymerization, and activation, involving PLCγ. Notably, CD13 crosslinking is necessary to initiate these signaling events. In the absence of antibodies, or with only primary or secondary antibodies, CR3 is most likely internalized and only recycled back to the membrane upon CD13 crosslinking, potentially through a clathrin-mediated mechanism.

CD11b levels increase when monocytes differentiate into macrophages, i.e., it is a differentiation marker. This partially explains the enhanced potential of macrophages for mobility, adhesion, and phagocytosis compared to their precursors [6]. Upon stimulation of immune receptors, associated factors activate and, in many cases, their gene expression increases [37]. However, crosslinking CD13 on the surface of macrophages at 4 °C, followed by a brief ten-minute incubation at 37 °C, results in an even higher expression of CD11b on the cell membrane than the baseline differentiation levels (referred to as "pre-treatment" in our experiments). Our interpretation of this phenomenon is that, first, certain in vivo scenarios require CD11b membrane levels to increase on shorter time scales than gene expression allows; hence the existence of receptor reservoirs in the form of intracellular vesicles [38]. Second, CD13 may function as a sentinel, detecting stimuli that require the activation and involvement of CD11b, thereby facilitating a swift response to immunological challenges.

Subramani et al. [5] demonstrated that crosslinking CD13 on the human monocytic cell line U937 induces adhesion to endothelial cells and that this phenomenon is related to the phosphorylation of the receptor by Src, as well as to the recruitment of cytoskeleton-binding machinery; hence the selection of SKI-1 and Cyt D. BAY, a highly selective and widely used Syk inhibitor [39-41], was tried because Syk acts downstream of several immune receptors on myelomonocytic cells, including CD13 and integrins [3,41]. In fact, Zheng et al. [41] showed that after glycoprotein VI on human platelets engages its ligand collagen, an inside-out signaling pathway is set off that activates Syk, which phosphorylates PLCγ, leading to the activation of integrin αIIbβ3; hence our choice to also include the PLCγ inhibitor U73122.
Our sequential mechanistic model is supported by both STRING-predicted interactions and previous experimental data, making the pathway theoretically conceivable. For example, we chose Grb2 as an adaptor molecule bridging CD13 and Syk based on our previous finding that crosslinked CD13 co-precipitates with Grb2. This molecule also associates with Shc, Src, Syk, and SHP-1 during inside-out signaling between CD32a and αIIbβ3 integrin in human platelets [42,43]. Similar pathways have been observed in other systems, such as human neutrophil PSGL-1 binding endothelial P- and E-selectins, thus activating the β2 integrins CR3 and LFA-1 (reviewed in [7]). Another example is Rap1, chosen for being a key regulator of inside-out activation in phagocytic integrins like CR3, where signals from various receptors converge [44]. These findings highlight the effectiveness of combining experimental and bioinformatic approaches to unravel complex signaling pathways. Our model expands the understanding of the intricate inside-out signaling cascade.

Previous publications have linked cytokine production to the expression of CD13 in various cell types and contexts [45–49]. However, only a handful of studies have reported the secretion of cytokines as a result of CD13 stimulation in human myeloid cells. In a recent work from our group, Perez-Figueroa et al. [36] incubated human neutrophils for 24 h with the same primary and secondary antibodies we used in this study. A bead-based multiplex assay was used to determine the production of IL-1β, TNF-α, IL-8, IL-6, and IL-10 in cell-free supernatants. Of these, only IL-1β and TNF-α showed a significant increase compared with the control. Similarly, Santos et al. [27] showed that the ligation of CD13 on U937 human monocytes upregulates the mRNA expression of IL-8, peaking at 2 h of incubation. In contrast, Villaseñor-Cardoso et al. [50] surveyed supernatants from human monocyte-derived DCs and macrophages for IL-6, IL-12, IL-10, and TNF-α, but found no increase in their concentration after 18 h of CD13 crosslinking. This result could be due to the use of sandwich ELISA, a less sensitive method. In this work, we measured a panel of 12 cytokines and detected the presence of pro-inflammatory cytokines accompanying the activation and rise in membrane expression of CR3 only 10 min after CD13 crosslinking. Specifically, type 1 (IFN-α) and type 2 (IFN-γ) IFNs, IL-12p70, and IL-17a increased significantly. As expected for such a short time after stimulation, the concentrations were lower than what other authors have reported for THP-1 macrophages. For example, we detected approximately 8 pg/mL IL-12, whereas Shabir et al. [51] and Souissi et al. [52] reported that THP-1 macrophages produce 100–125 pg/mL IL-12, albeit after 4–18 h of stimulation with 100 ng/mL LPS, a potent pro-inflammatory cytokine inducer. Similarly, Zhou et al. [53] reported that 24 h after infection with Mycobacterium tuberculosis, THP-1 macrophages secrete almost 300 pg/mL IL-12.
Therefore, when both the time after stimulation and the method of detection are considered, it becomes evident that there is still a knowledge gap regarding the early cytokine response after CD13 crosslinking. Thus, time-course experiments monitoring cytokine expression and secretion between 10 min and 18–24 h are necessary to determine whether CD13 crosslinking can induce cytokine concentrations similar to other pro-inflammatory stimuli. Nevertheless, the cytokines we detected are biologically relevant to CD13-associated processes. Type 1 IFNs are canonical cytokines secreted as part of the antiviral response [54]; thus, it is expected that a viral receptor, like CD13 [55,56], drives their production. This is evidenced by the work of Yamaya et al. [57], who demonstrated that type 1 IFN is secreted after the human coronavirus 229E, one of the many that cause common colds, engages its receptor CD13 on the surface of primary human nasal and tracheal epithelial cells. IL-12 is an early cytokine secreted by myeloid cells in response to PAMPs and DAMPs and induces the expression and secretion of IFN-γ (reviewed in [58–60]). Of note, the p70 subunit is also one of the two monomers constituting IL-23, another member of the IL-12 family of heterodimeric cytokines. In any case, both IL-12 and IL-23 contribute to the functions of the Th1 and Th17 subsets of T lymphocytes, respectively. The secretion of these cytokines may also be related to CD13-induced endosome recycling, as these compartments are involved in the secretion of cytokines like TNF-α, IL-6, and IL-10 [61–63]. Nevertheless, the most intriguing of the CD13 crosslinking-induced cytokines is IL-17a. This is because it is primarily associated with the Th17 subpopulation of CD4+ T cells, where it was first described (reviewed in [64]). However, an increasing body of evidence indicates that macrophages and other myeloid cells express IL-17a [65–67]. Considering the pathophysiological significance of both IL-17a and macrophage recruitment in various conditions such as endometriosis, sepsis, and lung cancer [65,68,69], future research is warranted on the contribution of macrophage-derived IL-17a in these contexts. Additionally, both IL-17 and IFN-γ are known to drive CD13 upregulation [49,70,71]. Of note, TNF-α, IL-8, and MCP-1 were detected at concentrations ranging from 50 to 500 pg/mL, but they did not increase significantly after crosslinking CD13. Thus, it is highly likely that the secretion of these cytokines results from the conditions to which the cells are subjected during the incubations, namely changes in temperature and mechanical stress.
Our findings suggest that CD13 and CR3 (CD11b/CD18) may collaborate in various cellular functions, including adhesion. CD13 has long been implicated in pro-adhesive events such as aggregation [72,73] and invasiveness [74], while CR3 (CD11b/CD18) is well known for its adhesive properties and for its activation in response to the stimulation of other receptors. For example, CD11b activation triggered by human neutrophil antigen 3a auto-antibodies leads to neutrophil accumulation in the pulmonary microvasculature of some blood transfusion recipients, causing severe transfusion-related acute lung injury [75]. This suggests that CD13 and CR3 may participate in the same adhesion events during inflammation-related transendothelial migration. Although our group previously reported that CD13-mediated adhesion to endothelial cells is integrin-independent [28], it is important to note that CD11b was not among the integrins evaluated. This could partially explain the observation that CD13 ligation impairs transendothelial migration in vivo [28]. Given that, as we demonstrated, CD11b activation is a consequence of CD13 stimulation, persistent CD13 engagement would render active CR3 in constant contact with its endothelial ligands, such as ICAM-1 and ICAM-2, JAM-A, JAM-C, and RAGE [9]. This continuous engagement could lead to cell arrest, polarization, and spreading, potentially inhibiting extravasation [76]. While it remains uncertain, our findings suggest that CD11b may indeed contribute to this process. Further investigation is needed to confirm or rule out its involvement.

CD13 and CR3 may also collaborate in phagocytosis, as both receptors perform this cellular function. Licona-Limón and colleagues [3] demonstrated that CD13 is a primary phagocytic receptor, capable of mediating phagocytosis of CD13-directed phagocytic prey by human macrophages and THP-1 monocytes. Even when expressed in non-phagocytic HEK293 cells, CD13 enables them to internalize the same type of phagocytic particles. As for CR3, its involvement in the complement cascade is well established. Activation of the complement system leads to the generation of opsonizing molecules like iC3b, which are recognized by CR3 to facilitate phagocytosis (reviewed in [8]). CR3-mediated phagocytosis can be synergistically enhanced by other receptors, such as CR1, CD14, and scavenger receptors, in the internalization of pathogens like Francisella tularensis [77] and Borrelia burgdorferi [78]. Considering that CD13 also acts as a co-receptor to other phagocytic receptors like FcγRs and mannose receptors [35,49], it is plausible that a similar functional interaction exists between CD13 and CR3, as it does between CD44 and CR3, where CD44-mediated phagocytosis triggers, and is partially dependent on, CD11b activation [79].
CD13 is overexpressed in many cancers, in which adhesion and cell motility, two mechanistically closely related phenomena, contribute decisively to tumor progression [80–82]. The peptidase activity of CD13 has long been implicated in the ability of myeloid leukemia cells to resist apoptosis. Professor Kiyohiko Hatake's research group at the Japanese Foundation for Cancer Research has dedicated decades to investigating this phenomenon and has reported that when leukemic cells attach to vascular endothelial cells, CD13 facilitates the degradation of the pro-apoptotic cytokine IL-8 produced by the endothelium [83,84]. Building on the findings presented in this study, we propose that CD13 plays a dual role in this process. Initially, it promotes the attachment of leukemic cells to the endothelium, a critical step in any metastatic cascade, by activating CR3 and potentially other adhesion molecules. Subsequently, its peptidase activity aids cell survival by breaking down pro-apoptotic molecules secreted by the vasculature.

Therefore, future research should focus on understanding the functional impact of CD13 crosslinking on CR3-mediated adhesion and phagocytosis and on identifying the specific functions that are coordinated by these receptors. It will also be necessary to assess the participation of other components of the proposed signaling pathway governing CD13-mediated CR3 activation and membrane expression, both in vitro and, eventually, in vivo. One possible approach is to use a CRISPR-Cas9 screening strategy, disrupting the genes encoding the signaling pathway components individually in immortalized cells and subsequently expanding them into cell lines. This would enable the characterization of various aspects of the signaling pathway, including the timing of events and the consequences of the absence of each protein. Additionally, the phosphorylation of serines 8 and 10 in the cytoplasmic tail of CD13, which has not been reported yet, should be evaluated, as it could add extra docking sites for accessory proteins. Such specifics could provide the basis for the design of therapies that inhibit or enhance particular cellular activities to prevent the spread of cancers in which CD13 is overexpressed.

Conclusions

In conclusion, the understanding of CD13 has evolved from a leukemia marker to a co-receptor and a moonlighting enzyme. This study reveals that CD13 not only elicits outside-in signaling but also triggers inside-out signaling, leading to the activation and membrane expression of CR3 (CD11b/CD18), another immune receptor. These findings highlight the ability of CD13 to induce cell phenomena comparable to classical phagocytic receptors, despite the absence of canonical signaling motifs.

Figure 1. CD13 crosslinking activates CR3 in human THP-1 macrophages. (A) Cells were first gated for size and granularity, then for (B) singlets, and finally for (C,D) MFI in the BL1 (FITC) channel. (C) Controls. (D) Representative histograms from a sample crosslinked with C (anti-CD13) and secondary antibodies vs. its control without antibodies. (E) Average and SDs from 3 independent experiments. ** p < 0.01, **** p < 0.0001, ns = non-significant. (F) Representative histogram demonstrating that virtually all cells are positive for the CD13 stain.
Figure 3. CD13 crosslinking promotes CR3 membrane expression in THP-1 macrophages. (A) MFI in the RL1 (APC) channel from unstained cells and control cells treated without crosslinking antibodies or only with the secondary antibody, stained with anti-CD11b (total). (B) Representative histograms from a sample crosslinked with mAb C (anti-CD13) and secondary antibodies vs. its control without antibodies. (C) Average ± SDs of MFIs of CD11b (total) expression on cells treated as indicated in the graph or non-treated cells. Data from 3 independent experiments. *** p < 0.001, **** p < 0.0001. ns = non-significant.

Figure 4. The inhibition of Src, PLCγ, and actin polymerization reduces the membrane expression of CR3 (CD11b/CD18) triggered by CD13 crosslinking; the inhibition of Syk augments it. (A) Representative histograms of the membrane expression of CR3 on cells with CD13 crosslinked in the presence of inhibitors for Syk (BAY), actin polymerization (Cyt D), PLCγ (U73122), and Src.

Figure 5. The activation of CR3 (CD11b/CD18) triggered by CD13 crosslinking is accompanied by the secretion of pro-inflammatory cytokines. Quantification of 12 cytokines present in the cell-free supernatant of cells with CD13 crosslinked and their control cells treated without antibodies. * p < 0.05 (0.0154 for IFN-α, and 0.0283 for IL-12p70), ** p < 0.01, ns = non-significant. Average and SDs from 3 independent experiments.

Figure 6. The interaction network for CD13, Syk, and CR3 functional partners contains 76 proteins. Nonredundant results from the analysis of the STRING highest-scoring interacting partners among CD13, Syk, and CR3. Pink lines and bubbles represent the interactions experimentally determined in our laboratory and by others.
Antimicrobial activity and antibiotic susceptibility of Lactobacillus and Bifidobacterium spp. intended for use as starter and probiotic cultures

Antimicrobial activity and antibiotic susceptibility were tested for 23 Lactobacillus and three Bifidobacterium strains isolated from different ecological niches. The agar-well diffusion method was used to test the antagonistic effect (against Staphylococcus aureus, Escherichia coli, Bacillus cereus and Candida albicans) of acid and neutralized (pH 5.5) lyophilized concentrated supernatants (cell-free supernatant; CFS) and whey (cell-free whey fractions; CFW) from de Man–Rogosa–Sharpe/trypticase-phytone-yeast broth and skim milk. Acid CFS and CFW showed high acidification rate-dependent bacterial inhibition; five strains were active against C. albicans. Neutralized CFS/CFW assays showed six strains active against S. aureus (L. acidophilus L-1, L. brevis 1, L. fermentum 1, B. animalis subsp. lactis L-3), E. coli (L. bulgaricus 6) or B. cereus (L. plantarum 24-4B). Inhibition of two pathogens with neutralized CFS (L. bulgaricus 6, L. helveticus 3, L. plantarum 24-2L, L. fermentum 1)/CFW (L. plantarum 24-5D, L. plantarum 24-4B) was detected. Some strains maintained activity after pH neutralization, indicating the presence of active substances. The minimum inhibitory concentrations (MICs) of antibiotics were determined by the Epsilometer test method. All strains were susceptible to ampicillin, gentamicin, erythromycin and tetracycline. Four lactobacilli were resistant to one antibiotic (L. rhamnosus Lio 1 to streptomycin) or two antibiotics (L. acidophilus L-1 and L. brevis 1 to kanamycin and clindamycin; L. casei L-4 to clindamycin and chloramphenicol). Vancomycin MICs > 256 μg/mL indicated intrinsic resistance for all heterofermentative lactobacilli. The antimicrobially active strains do not cause concerns about antibiotic resistance transfer and could be used as natural biopreservatives in food and therapeutic formulations.

Introduction

Foods are now considered not only in terms of taste and immediate nutritional needs, but also in terms of their ability to improve the health and well-being of consumers [1]. Hence the increased interest in food ingredients with valuable bioactive properties and, consequently, in lactic acid bacteria (LAB) and bifidobacteria with antagonistic activity against pathogenic micro-organisms. There are different mechanisms for the control and inhibition of other microbes, e.g. nutrient competition, production of inhibitory compounds, immunostimulation and competition for binding sites. Among these activities, the production of organic acids (such as lactic acid), which results in lowered pH, is the most important. Additionally, certain strains are also capable of producing bioactive molecules, such as ethanol, formic acid, fatty acids, hydrogen peroxide and bacteriocins, that have antimicrobial activity [2].

Lactobacillus and Bifidobacterium spp. and their by-products have been shown to be effective in several aspects. One of the most important advantages is the extended shelf life and safety of minimally processed foods, because these antimicrobial substances are safe and effective natural inhibitors of pathogenic and food-spoilage bacteria in various foods. Additionally, the consumption of viable bacteria in the form of probiotics and functional foods is widely used for improvement of the balance and activity of the advantageous intestinal microflora, which has prophylactic benefit [1].
The close contact with native microbiota in the human intestine is an excellent precondition for horizontal transfer of antimicrobial resistance genes with the aid of mobile genetic elements [3]. Therefore, the safety of cultures intended for use as food additives should be carefully reassessed, even though most strains of the Lactobacillus and Bifidobacterium group are classified as 'generally recognized as safe' bacteria due to their long history of safe use and proven health benefits. Thus, antibiotic-resistance screening for starter and probiotic cultures now tends to become systematic. In order to eliminate the possibility of acquired resistance, the Panel on Additives and Products or Substances used in Animal Feed (FEEDAP) of the European Food Safety Authority (EFSA) requires the determination of the minimum inhibitory concentrations (MICs) of the most relevant antibiotics for each bacterial strain that is used as a feed additive [4]. In this study, Lactobacillus and Bifidobacterium spp. were screened for their antagonistic activity against four food-borne and human pathogens and for antibiotic susceptibility, for the development of probiotics and food biopreservatives.

Bacteria and source of isolation

Twenty-three Lactobacillus strains (13 homofermentative and 10 heterofermentative) and three Bifidobacterium strains, part of the laboratory collection of Lactina Ltd. (Bankya, Bulgaria), were selected for this study. In a preliminary (unpublished) study, the strains were identified using biochemical (API 50 CHL) and molecular tests (species-specific polymerase chain reaction or sequence analysis). The source of isolation for each strain is presented in Table 1. All cultures were stored at −65 °C in appropriate broth media supplemented with glycerol (20% v/v). Before the assay, the strains were pre-cultivated twice in MRS (de Man–Rogosa–Sharpe) broth (HiMedia Pvt. Ltd., India) for lactobacilli or TPY (trypticase-phytone-yeast) broth for bifidobacteria at 37 °C for 24 h.

Test micro-organisms

Three bacterial food-borne pathogens and one yeast culture were selected as test micro-organisms and were obtained from the National Bank for Industrial Microorganisms and Cell Cultures (Bulgaria): Staphylococcus aureus NBIMCC 3703, Escherichia coli NBIMCC 3702, Bacillus cereus NBIMCC 1085 and Candida albicans NBIMCC 74. The cultures of S. aureus and E. coli were propagated in nutrient broth (NB, HiMedia), B. cereus in tryptic soy broth (TSB, Merck, Germany) and C. albicans in Sabouraud dextrose broth (HiMedia).

Antimicrobial activity assay

Two model systems for antimicrobial production were applied: cultivation in MRS or TPY broth (for Lactobacillus and Bifidobacterium spp., respectively) and cultivation in 10% (w/v) skim milk (Fude + Serrahn Milchprodukte GmbH & Co., Germany). The media were inoculated with 10% (v/v) previously activated Lactobacillus or Bifidobacterium culture. After incubation at 37 °C for 28 h, the cultures were centrifuged (5000×g for 20 min at 5 °C) for removal of bacterial cells. Part of the cell-free supernatants (CFS) and the cell-free whey fractions (CFW) were left at their initial acid pH. The rest of the samples were buffered with 5 mol/L NaOH to pH 5.5 ± 0.1 in order to eliminate the putative effect of the produced organic acids. The pH values of the neutralized samples were consistent with the pH of LAB cultures before freeze drying in the real technological process.
After filtration (0.22 μm pore size; Millipore), the acid and neutralized CFS (aCFS and nCFS) and CFW (aCFW and nCFW) were lyophilized (Martin Christ GmbH, Germany) in Petri dishes (10 mL) under the following conditions: freezing at −45 °C for 2 h, heating at 32 °C, vacuum 0.370 mbar, duration 40 h. The obtained dry samples were dissolved in 2 mL of sterile distilled water (resulting in a 5× concentration increase compared to the initial culture prior to lyophilization) and stored at −65 °C until later use in the screening procedures.

Table 1. Lactobacillus and Bifidobacterium strains included in this study and source of isolation (columns: Strain; Source of isolation).

The agar-well diffusion method was used to determine the inhibitory effect [5]. Exponential cultures of the test micro-organisms were diluted to a suitable turbidity and used to inoculate a melted and cooled Mueller–Hinton Agar (MHA, HiMedia) to a final concentration of ~10^6–10^7 CFU/mL. Only C. albicans was plated on Sabouraud dextrose agar (HiMedia) by spreading the cell suspension with a sterile cotton swab. Wells, 8 mm in diameter, were punched in the agar plates and 100 μL of CFS and CFW were added to the wells. After incubation overnight at 37 °C, the antimicrobial activity was expressed as the diameter of the inhibition zones (mm) around the wells. Zones of inhibition ≥ 10 mm were regarded as positive.

Antibiotic susceptibility

For the selected Lactobacillus and Bifidobacterium strains, the MICs (μg/mL) of nine antibiotics were determined using the commercial E-test® (Epsilometer test, bioMerieux, France): ampicillin, vancomycin, gentamicin, kanamycin, streptomycin, erythromycin, clindamycin, tetracycline and chloramphenicol. The concentration on the strips was from 0.016 to 256 μg/mL, with the exception of streptomycin (0.064–1024 μg/mL). Bacterial cultures in the exponential growth phase were diluted to a suitable turbidity and used to inoculate a melted and cooled iso-sensitest agar (90% w/v, Oxoid, UK) supplemented with MRS or TPY agar (10% w/v) [6] to a final concentration of ~10^6–10^7 CFU/mL. E-test strips were placed on the surface of the inoculated agar and incubated at 37 °C for 24 h. The MIC was interpreted as the point at which the ellipse intersected the E-test strip, as described in the E-test technical guide.
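To make the screening criterion concrete, the following minimal Python sketch applies the positivity rule stated above (inhibition zones of at least 10 mm scored as positive); the function name and the example zone diameters are illustrative, not data from the study.

```python
# Minimal sketch of the positivity rule: zones >= 10 mm are positive.
POSITIVE_ZONE_MM = 10  # threshold from the assay description

def score_zones(zones_mm):
    """Map {pathogen: zone diameter (mm)} to {pathogen: positive?}."""
    return {p: d >= POSITIVE_ZONE_MM for p, d in zones_mm.items()}

# Hypothetical zone diameters for one strain's neutralized supernatant:
print(score_zones({"S. aureus": 14.0, "E. coli": 8.5, "B. cereus": 11.0}))
# -> {'S. aureus': True, 'E. coli': False, 'B. cereus': True}
```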
Results and discussion

Antimicrobial activity is a very important criterion for the selection of starter and probiotic cultures as natural antagonists of potentially harmful bacteria. Therefore, 23 Lactobacillus and three Bifidobacterium strains from the Lactina Ltd. collection were screened for their activity against four food-borne and human pathogens: Staphylococcus aureus, Escherichia coli, Bacillus cereus and Candida albicans. Lyophilized, concentrated, acid and neutralized cell-free filtrates obtained after cultivation of the selected lactobacilli and bifidobacteria in MRS or TPY broth (CFS) and skim milk (CFW) were tested for activity. Skim milk was chosen as a second model system because it is a natural medium for the growth of most LAB and bifidobacteria and is commonly used for the production of freeze-dried cultures. At the same time, it is an excellent medium for the development of many pathogens.

Acid CFSs and CFWs of all tested cultures showed activity against S. aureus, B. cereus and E. coli; a relationship between acidity (pH) and the diameter of the inhibition zones was observed for most strains. The aCFWs of two strains, L. brevis 1 (pH 4.87) and L. fermentum 1 (pH 4.79), were inhibitory only for S. aureus. C. albicans was less affected. Acid CFSs of only five strains (L. bulgaricus 1, L. bulgaricus 2, L. rhamnosus Lio 1, L. paracasei 4K, L. plantarum 24-4B) were active against the yeast. None of the acid CFWs inhibited C. albicans (data not shown). After pH neutralization, 18 strains were determined as active due to the observed ability to inhibit the growth of at least one target strain. In most cases, a bacteriostatic zone of inhibition was observed. The highest activity with nCFSs and nCFWs was registered against S. aureus: 46.2% and 35.6% of the strains, respectively. Higher activity with nCFSs was observed among the strains of L. acidophilus, L. bulgaricus, L. helveticus and L. lactis (Figure 1(A)). Conversely, nCFWs of L. plantarum, L. paracasei, L. rhamnosus and L. fermentum strains with antistaphylococcal activity were predominant (Figure 1(B)). Three strains with nCFS and one strain with nCFW were active against B. cereus (Figure 2). Three strains with nCFS and nCFW also inhibited E. coli (Figure 3). Six strains were active against S. aureus (L. acidophilus L-1, L. brevis 1, L. fermentum 1 and B. animalis subsp. lactis L-3), E. coli (L. bulgaricus 6) or B. cereus (L. plantarum 24-4B) in both model systems (broth and milk). Inhibition of two pathogens was also observed. nCFS of L. plantarum 24-2L and L. fermentum 1, and nCFW of L. plantarum 24-4B, showed activity against both Gram-positive micro-organisms S. aureus and B. cereus. nCFS of L. bulgaricus 6 and L. helveticus 3, and nCFW of L. plantarum 24-5D, showed activity against S. aureus and E. coli. Activity of nCFSs and nCFWs against all three bacterial test micro-organisms simultaneously was not registered. None of the tested nCFSs and nCFWs was active against C. albicans (data not shown).

The obtained results clearly show the role of acidity and pH in the antagonistic activity of Lactobacillus and Bifidobacterium spp. in vitro. The increased production of lactic acid through fermentation reduces the pH of the media, which is known to inhibit the growth of most food-borne pathogens. The antimicrobial effect is also due to the undissociated form of the acid and its capacity to reduce the intracellular pH, leading to inhibition of vital cell functions [7]. Differences in the sensitivity of the test micro-organisms result in different zones of inhibition at the same pH. The lack of activity against E. coli for two strains with aCFW could be explained by the results obtained by Goel et al. [8] for the increased survival of E. coli in a fermented milk product with pH over 4.6. The observed inhibition for some strains after elimination of the putative effects of lactic acid raised the question of the possible production of other inhibitory substances, such as hydrogen peroxide, bacteriocins and bacteriocin-like substances. The greater activity against Gram-positive micro-organisms than against Gram-negative ones that was observed in our work is in accordance with previous studies [7]. The activity against Gram-positive pathogens is mostly due to the bactericidal effect of protease-sensitive bacteriocins [2,9], while the antagonistic effects towards Gram-negative pathogens could be related to the production of organic acids and hydrogen peroxide [10,11]. However, a few bacteriocins of LAB active against E. coli and Salmonella typhimurium have also been reported [12,13]. On the other hand, the antibacterial activity of six strains (L. acidophilus L-1, L. bulgaricus 6, L. plantarum 24-4B, L. fermentum 1, L. brevis 1 and B. animalis subsp.
lactis L-3) in both systems (broth and milk) suggests a mechanism of action different from that mentioned above. The application of such strains offers a potential advantage in food preservation strategies. Regardless of the nature of the antibacterial substances produced by the neutralized variants, the ability to retain this activity after lyophilization would allow the production of active dry starter and probiotic cultures. Strains L. plantarum 24-2L and 24-4B and L. fermentum 1 could be used as starter organisms in the production of bread and bakery products due to their activity against B. cereus. The inhibitory effect of Lactobacillus strains used as starters against rope-forming Bacillus has been previously reported [14,15]. Lactobacilli have also been shown to be effective in preventing the recurrence of urinary tract infection in women [16] and traveler's diarrhea [17]. E. coli is the most common cause of these diseases. In this aspect, L. bulgaricus 6 and L. helveticus N11, exhibiting activity against this pathogen, are good candidates for alleviating the symptoms and prophylaxis of such conditions. In our previous study [18], L. helveticus N11 proved to be active against the uropathogenic E. coli strain 536 and the enteropathogenic E. coli strain E2348. Use of strains inhibiting S. aureus and E. coli as antimicrobial agents may provide a safe alternative in food preservation. A few studies have reported Lactobacillus spp. with strong anti-Candida activity [9,19]. Although there are some clinical trials that support the effectiveness of lactobacilli for the prevention or treatment of vaginal yeast infections (C. albicans), evidence regarding the potential benefit still remains inconclusive [20]. The presence of active strains with potential application as natural biopreservatives or as probiotic cultures in specific therapeutic formulas determined our next steps towards a more profound examination of the nature of the antimicrobial substances produced by selected Lactobacillus and Bifidobacterium strains.

In addition to antimicrobial activity, the MICs of nine antimicrobials of human and veterinary importance were determined for all strains. Lack of transferable resistance against therapeutic antibiotics is an important criterion for the selection of an appropriate functional strain [4]. Two groups of antibiotics are generally recommended: inhibitors of cell-wall synthesis (ampicillin and vancomycin) and inhibitors of protein synthesis (chloramphenicol, gentamicin, streptomycin, kanamycin, tetracycline, erythromycin and clindamycin). The obtained results and the reference microbiological breakpoints are presented in Table 2. A micro-organism inhibited at the breakpoint level of a specific antimicrobial is defined as susceptible. When the MIC is higher than the breakpoint, the micro-organism is considered resistant [4]. For the analysis, the E-test was chosen in our study, as it is a simple quantitative method that is commonly used for antimicrobial susceptibility testing of different micro-organisms [21–23]. In this study, all tested Lactobacillus and Bifidobacterium strains were susceptible toward ampicillin, gentamicin, erythromycin and tetracycline (Table 2).

Table 2 note: AM – ampicillin, VM – vancomycin, GM – gentamicin, KM – kanamycin, SM – streptomycin, EM – erythromycin, CM – clindamycin, TC – tetracycline, CL – chloramphenicol. * Strains with MIC higher than the breakpoints are considered resistant (R) according to EFSA [4]. n.r. – not required. The bold values are the reference breakpoints given by EFSA [4]; they are visually emphasized to allow easier comparison with the values obtained for the tested strains.
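The interpretation rule just quoted can be summarized in a short Python sketch; the breakpoint figures below are placeholders for illustration only, not the actual EFSA values.

```python
# Sketch of the EFSA interpretation rule: susceptible (S) at or below
# the breakpoint, resistant (R) above it. Breakpoints are placeholders.
def classify(mic_ug_ml, breakpoint_ug_ml):
    return "S" if mic_ug_ml <= breakpoint_ug_ml else "R"

mics = {"ampicillin": 1.0, "kanamycin": 128.0}        # hypothetical MICs
breakpoints = {"ampicillin": 4.0, "kanamycin": 64.0}  # placeholder values
for drug, mic in mics.items():
    print(drug, classify(mic, breakpoints[drug]))
# -> ampicillin S
# -> kanamycin R
```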
For most of the strains, kanamycin, clindamycin, streptomycin and chloramphenicol were effective inhibitors. Only four lactobacilli could be considered resistant to one antibiotic (L. rhamnosus Lio 1 to streptomycin) or two antibiotics (L. acidophilus L-1 and L. brevis 1 to kanamycin and clindamycin, and L. casei L-4 to clindamycin and chloramphenicol), with MICs higher than the breakpoints recently proposed by the FEEDAP Panel [4]. The obtained results are in accordance with previously reported data for lactobacilli and bifidobacteria. Generally, they are sensitive to the Gram-positive-spectrum antibiotic erythromycin, the broad-spectrum antibiotics tetracycline and chloramphenicol, and the beta-lactam antibiotic ampicillin [21,23,24]. Nevertheless, acquired genes which are potentially transferable have been detected in lactobacilli [25]. Among the most commonly observed resistance genes are two genes coding for tetracycline and erythromycin resistance, followed by genes for chloramphenicol resistance [26,27]. Thus, the chloramphenicol resistance of one of the Lactobacillus strains tested in our study deserves special attention in order to avoid potential risk. By contrast, resistance against the Gram-negative-spectrum antibiotics kanamycin and streptomycin is frequently observed in lactobacilli and bifidobacteria [6,21–23]. It may be explained by the high rate of spontaneous chromosomal mutations conveying resistance to these antibiotics [23,28]. Strains with this type of acquired resistance have a low potential for horizontal spread and may be used as feed additives [4]. Among the aminoglycosides, a lower MIC for gentamicin compared to kanamycin and streptomycin was observed, as previously reported by Danielsen and Wind [23]. Although clindamycin is one of the most effective antibiotics against Gram-positive micro-organisms, three of the tested lactobacilli (L. casei L-4, L. acidophilus L-1 and L. brevis 1) were shown to be resistant according to the microbiological breakpoint of this drug. Clindamycin is used for the treatment of bacterial vaginosis, and resistant strains could be used to restore the normal vaginal microflora together with antimicrobial bacterial vaginosis treatment [29]. L. acidophilus, L. helveticus, L. bulgaricus, L. lactis and Bifidobacterium proved to be very susceptible to vancomycin, as reported by other authors [6,21]. However, the highest concentration of this antibiotic did not inhibit any of the heterofermentative lactobacilli (Table 2). This resistance was previously documented as intrinsic or 'natural' [6]. According to EFSA [4], bacterial strains carrying intrinsic resistance present a minimal risk for horizontal spread and thus may be used as feed additives.

Conclusions

This study tested the antimicrobial activity and antibiotic susceptibility of 26 Lactobacillus and Bifidobacterium strains. The results obtained at a laboratory scale allowed the selection of active strains. Ten strains with antimicrobial activity against two pathogens or in both model systems (broth and milk) appeared to be most promising: L. acidophilus L-1; L. bulgaricus 6; L. helveticus N11; L. helveticus 3; L. plantarum 24-2L; L. plantarum 24-4B; L. plantarum 24-5D; L. fermentum 1; L. brevis 1 and B. animalis subsp. lactis L-3.
They may play an important role in the food industry as starter cultures, co-cultures or bioprotective cultures to improve food quality and safety, or as probiotic therapeutics appropriate for clinical practice. In addition, the sensitivity or intrinsic resistance of the majority of the strains to a recommended set of antibiotics makes them safe for use in different products for human or animal consumption.

Disclosure statement

No potential conflict of interest was reported by the author(s).
CINET: A Brain-Inspired Deep Learning Context-Integrating Neural Network Model for Resolving Ambiguous Stimuli

The brain uses contextual information to uniquely resolve the interpretation of ambiguous stimuli. This paper introduces a deep learning neural network classification model that emulates this ability by integrating weighted bidirectional context into the classification process. The model, referred to as the CINET, is implemented using a convolution neural network (CNN), which is shown to be ideal for combining target and context stimuli and for extracting coupled target-context features. The CINET parameters can be manipulated to simulate congruent and incongruent context environments and to manipulate target-context stimuli relationships. The formulation of the CINET is quite general; consequently, it is not restricted to stimuli in any particular sensory modality nor to the dimensionality of the stimuli. A broad range of experiments is designed to demonstrate the effectiveness of the CINET in resolving ambiguous visual stimuli and in improving the classification of non-ambiguous visual stimuli in various contextual environments. The fact that the performance improves through the inclusion of context can be exploited to design robust brain-inspired machine learning algorithms. It is interesting to note that the CINET is a classification model that is inspired by a combination of the brain's ability to integrate contextual information and the CNN, which is inspired by the hierarchical processing of information in the visual cortex.

Introduction

The goal of this paper is to develop a versatile deep learning neural network classification model that improves the interpretation of ambiguous and degraded stimuli through the inclusion of context during the training and testing phases. The deep learning neural network selected for the classification model is the convolution neural network (CNN) because it offers an effective way to integrate context stimuli with a target stimulus for the purpose of extracting features that are coupled across the target and context stimuli. The resulting context-integrating CNN classification model is referred to as the CINET.

The CINET is inspired by the context effect, which is the influence of the surrounding environment on the perception of stimuli [1–3]. Numerous studies related to the context effect have shown that the integration of contextual information improves the interpretation of spoken words [4,5], written letters and words [6–8], physical objects [9–11], sounds [12,13], smells [14], tastes [15], threats [16], colors [17], and facial emotions [18–20]. The context effect has also been widely studied to show how contextual information is used to uniquely resolve the interpretation of ambiguous stimuli [7,8,21–26]. Ambiguous stimuli contain conflicting sensory information which provides the brain with multiple, mutually exclusive interpretations [24]. Figure 1 is a simplified illustration of an example that is often used to demonstrate this effect.

The CINET attempts to emulate the brain's ability to resolve the interpretation of ambiguous and degraded stimuli; however, it is not aimed at modelling the internal mechanisms of the brain involved in context integration. Instead, the aim is to model, at the input-output level, how context included in the learning phase influences the resolution of stimuli in the classification phase.
Specifically, the goal is to demonstrate that the CINET parameters can be manipulated to emulate various aspects of the Context Shift Decrement (CSD) principle [27] and the related Context Reinstatement Effect (CRE) [28], which are central to explaining how context influences perception. Together, the CSD and CRE principles state that recognition is more accurate if the relationship between the context and target is strong, and recognition decreases when this relationship is weak or the context is changed during the recognition phase.
A letter classification problem is selected because it can elegantly demonstrate the capabilities and performance of the CINET by incorporating context letters to form meaningful words. The model, however, is equally applicable to more complex problems, such as the interpretation of ambiguous objects in the visual domain and ambiguous words in spoken sentences in the auditory domain. Furthermore, the target and context stimuli can be from different modalities to emulate multisensory context integration.

The structure of the paper is as follows: Section 2 describes the structure and parameters of the generalized CINET classifier model. The CNN implementation of the CINET for multidimensional inputs is described in Section 3. The visual stimuli used in the experiments and the methods used to manipulate target and context stimuli are described in Section 4. The series of experiments designed to demonstrate the capabilities and properties of the CINET, the results, and a discussion of the results are presented in Section 5. Finally, the contributions of the study are summarized in Section 6.

The Generalized CINET Classifier Model

The interpretation of ambiguous stimuli is formulated as a pattern recognition problem; therefore, the focus is on modelling the mapping between an ambiguous stimulus (system input) and the class of the ambiguous stimulus (system output). Due to the inclusion of context, the design of the CINET classifier is unlike the design of most pattern classifiers, which mainly focus on training and testing with isolated, context-free patterns. In the formulations, the stimuli classes are represented by ω_i, i = 1, 2, …, L, where L is the number of stimulus classes. The proposed CINET classifier model is illustrated in Figure 2. This section focuses on the input and context-integration component of the model. The CNN classifier component is described in detail in the next section. In the model, the target stimulus is represented by T, the context stimuli by C, the context weights by α, the stimulus noise by N, the weighted and noisy stimuli by R_j, the context-integrated stimulus by R, and the classifier output by ω*. The context-integrated stimulus, which is the input to the CNN classifier, can be written as

R = R_{j−S_1} ∇ ⋯ ∇ R_{j−1} ∇ R_j ∇ R_{j+1} ∇ ⋯ ∇ R_{j+S_2},  (1)

where the symbol ∇ is used to represent the general context-integration operation. Equation (1) can be written more compactly as

R = ∇_{i=−S_1}^{S_2} R_{j+i},  (2)

where R_{j+i} = (α_j T_j + N_j) when i = 0 and R_{j+i} = (α_{j+i} C_{j+i} + N_{j+i}) when i ≠ 0.
In this generalized formulation, the subscript j can represent a position (spatial) index or a time (temporal) index. The transformed target stimulus is padded on the left and right by S_1 and S_2 transformed context stimuli, where both S_1 and S_2 are positive constants with values greater than or equal to zero. The context "span" is defined as S, where S = S_1 + S_2, and the resulting classifier is referred to as a CINET(S) classifier. The model is symmetrical if S_1 = S_2 and asymmetrical if S_1 ≠ S_2. Furthermore, if S_1 = S_2 = 0, that is, the context span S = 0, the CINET(S) classifier reduces to a context-free classifier represented by CINET(0). For the CINET(0) classifier, the input stimulus is simply (α_j T_j + N_j). The weight α_{j+i} assigned to context C_{j+i} can be varied from zero (no influence) to one (full influence) in order to control the strength of the target-context relationship. The noise N_j and N_{j+i} added to T_j and C_{j+i} accounts for randomness in the target and context stimuli, respectively.

The context-integration operation ∇ in Equation (2) is critical because it specifies the manner in which the target and context stimuli are integrated to form the input into the CNN classifier, which in turn will determine the type of features that are extracted from the context-integrated input. For example, if the target T and the context stimuli C are H × W arrays, they can be integrated into an H × W array through averaging, a large (M)(H) × W array through concatenation, or an H × W × (S + 1) cuboid through a stacking operation. The averaging operation mixes the target and context stimulus arrays into a single array. As a result, there is no control over the strength of the coupling between the target and context stimuli. The concatenation operation also suffers from a lack of controlled coupling. The cuboid option is selected for the development of the CINET classifier model because it offers the most flexible choices for selecting features that are not only coupled across the target and context stimuli, but also features with controlled coupling.
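As an illustration, the stacking operation of Equations (1) and (2) can be sketched in a few lines of Python/NumPy; the Gaussian noise model and the function interface are assumptions made for this example, not specifications from the model.

```python
import numpy as np

def context_integrate(target, contexts_left, contexts_right,
                      weights, noise_sd=0.0, rng=None):
    """Sketch of Equations (1)-(2) with the stacking operator: stack the
    S1 left-context arrays, the target, and the S2 right-context arrays
    along a new z-axis after weighting each stimulus and adding noise.
    The Gaussian noise model is an illustrative assumption."""
    rng = rng or np.random.default_rng()
    ordered = list(contexts_left) + [target] + list(contexts_right)
    assert len(weights) == len(ordered)
    slabs = [w * x + rng.normal(0.0, noise_sd, x.shape)
             for w, x in zip(weights, ordered)]
    return np.stack(slabs, axis=-1)  # H x W x (S + 1) cuboid

# Example: 32x32 letters, S1 = S2 = 1, full context weight (alpha = 1):
T = np.zeros((32, 32)); C1 = np.ones((32, 32)); C2 = np.ones((32, 32))
R = context_integrate(T, [C1], [C2], weights=[1.0, 1.0, 1.0], noise_sd=0.1)
print(R.shape)  # (32, 32, 3)
```

Setting the context weights to zero in this sketch recovers the context-free CINET(0) input, while intermediate weights weaken the target-context relationship, which is how the congruent and incongruent environments are manipulated.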
CNN Implementation of the CINET Classifier Model

In the most general case, the CNN classifier in Figure 2 can be replaced with any classifier. As noted in the Introduction, the CNN is selected because it is ideal for combining target and context stimuli and for extracting coupled target-context features with controlled coupling. This section begins with a brief introduction to CNNs and is followed by a detailed description of the multidimensional CINET and its special cases.

Convolution Neural Networks

CNNs, inspired by the pioneering work of Nobel laureates David Hubel and Torsten Wiesel on information processing in the visual cortex [29–31], are a class of deep learning networks that have proven to be very effective for large-scale object classification and detection in images [32–38]. Common CNN architectures generally consist of a series of convolution and pooling layers followed by a fully connected network (FCN). The function of the convolution operations in each layer is to detect features from the output of the previous layer. As a result, the complexity of the features detected increases as the number of convolution layers in the network increases. The pooling layer reduces the spatial dimension of the convolution layer output through subsampling. The most often used pooling operation is max-pooling, in which a block of features is replaced by its maximum value in order to select the most robust feature in the block. The FCN is a standard feed-forward network using either sigmoid or tanh activation functions in the hidden layers and softmax activations in the outputs in order to interpret the network outputs as class posterior probabilities. The gradient descent backpropagation algorithm is used to train the network.

Designing a CNN for a given problem involves specifying the architecture, which includes the number of convolution layers; the number, stride, padding, and dimensions of the filters in each convolution layer; the size, stride, and operation (maximum, average) of the filters in the pooling layers; the sequence of the convolution and pooling layers; the number of layers in the FCN; and the activation functions in the convolution and FCN layers. The hyperparameters that need to be specified during the training phase include the loss function, weight initialization, learning rate, momentum term, convergence criterion, and batch size.

The Multidimensional CINET(S) Model

The most general formulation of the CINET(S) in Figure 2 is obtained by assuming that stimuli T and C are multidimensional (arrays with more than two dimensions). A color image comprised of red, green, and blue component images may be regarded as a three-dimensional stimulus. Examples of three-dimensional signals include seismic volumes, X-ray computed tomography, and LIDAR data. In the generalized formulation, the multidimensional input into the CNN can include higher-dimensional arrays, such as multisensor satellite images and hyperspectral images. Each multidimensional input can be represented by a cuboid, and the cuboids from multiple inputs can be integrated, using the stacking operation, into hypercuboids. The height, width, and depth of hypercuboids and cuboids will be represented by the variables h, w, and z, respectively. Note that z does not represent the depth (number of layers) of the CNN. To avoid this confusion, the cuboid depth will be referred to as "z-depth." The CINET(S) for multidimensional stimuli, shown in Figure 3, is described in detail.
It is then shown that the models for one-dimensional and two-dimensional stimuli are special cases of the multidimensional stimulus model. If H, W, and Z_j are the height, width, and z-depth of the multidimensional input stimuli, respectively, the dimension of the cuboid R_j in Equation (2) will be H × W × Z_j, and the dimension of the hypercuboid R in Equation (1) will be H × W × Z, where Z = Z_j(1 + S). That is, the input to the CNN is the hypercuboid R(h, w, z) of dimension H × W × Z formed by stacking the weighted-noisy target and context stimuli, as shown in the figure.

In order to simplify the formulations, it is assumed that the convolutions in each layer are the "same" through zero-padding the input so that the filter outputs have the same dimensions as the input. Moreover, it will be assumed that the height and width of the filters in all convolution layers are the same. If the convolution is "valid," the dimensions of the filtered outputs can be easily adjusted according to the height and width of the filter. In the first convolution layer, each filter is selected to be a cuboid filter with the same z-depth as the input hypercuboid so that the target and context are fully coupled within the receptive field of each neuron in the layer. The filters, centered at zero in the (h, w) plane, are assumed to have dimensions [(2a + 1) × (2b + 1) × Z]. If the number of filters in the first layer is K_1 and the kth cuboid filter is represented by f^[1,k](u, v, z), u = −a, …, 0, …, a; v = −b, …, 0, …, b; z = 0, 1, …, (Z − 1); k = 0, 1, …, (K_1 − 1), the output of the filter is given by the cuboid convolution

ŷ^[1,k](h, w) = Σ_{u=−a}^{a} Σ_{v=−b}^{b} Σ_{z=0}^{Z−1} f^[1,k](u, v, z) R(h + u, w + v, z),  h = 0, 1, …, (H − 1); w = 0, 1, …, (W − 1).

Note that the convolution of the input hypercuboid with a cuboid filter having the same z-depth results in an array with dimension H × W.
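A direct (unoptimized) NumPy sketch of this cuboid convolution is given below; the zero padding realizes the "same" convolution assumed above, and the code simply shifts the filter origin from −a to 0.

```python
import numpy as np

def cuboid_conv_same(R, f):
    """Slide a (2a+1) x (2b+1) x Z filter over the zero-padded hypercuboid
    R (H x W x Z); summing over the full z-depth couples target and context
    within every receptive field and yields an H x W output array."""
    H, W, Z = R.shape
    fa, fb, fz = f.shape
    a, b = fa // 2, fb // 2
    assert fz == Z, "filter z-depth must match the hypercuboid (full coupling)"
    Rp = np.pad(R, ((a, a), (b, b), (0, 0)))  # zero padding -> 'same' output
    out = np.empty((H, W))
    for h in range(H):
        for w in range(W):
            out[h, w] = np.sum(Rp[h:h + fa, w:w + fb, :] * f)
    return out

# Example: a 3 x 3 x 3 filter applied to a stacked input from Section 2:
R = np.random.default_rng(0).normal(size=(32, 32, 3))
f = np.ones((3, 3, 3)) / 27.0
print(cuboid_conv_same(R, f).shape)  # (32, 32)
```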
If pooling follows and the stride and size of the pooling filter are α and (γ × γ × 1), respectively, the output of the pooling layer is the cuboid

R^[1,p](h, w, k) = max_{0 ≤ p, q < γ} R^[1](αh + p, αw + q, k), h = 0, 1, …, (H^[1] − 1); w = 0, 1, …, (W^[1] − 1); k = 0, 1, …, (K_1 − 1),

where H^[1] and W^[1] are the height and width of the pooled output, respectively.

In the next convolution stage, the cuboid R^[1,p] is convolved with K_2 cuboid filters f^[2,k](u, v, z) of z-depth K_1, k = 0, 1, …, (K_2 − 1), and the filtered output is given by the cuboid convolution

r̂^[2,k](h, w) = Σ_{u=−a}^{a} Σ_{v=−b}^{b} Σ_{z=0}^{K_1−1} f^[2,k](u, v, z) R^[1,p](h + u, w + v, z).

As in the previous step, a bias is added to each filtered output and passed through the ReLU activation function, and the K_2 activations are combined into an H^[1] × W^[1] × K_2 cuboid. If a pooling layer follows, the height and width of the cuboid are adjusted accordingly. The convolution and pooling operations are repeated and terminate in a flattening operation in which the rows of the cuboid are combined into a vector, which is the input to a fully connected feed-forward neural network with N layers. The fully connected network (FCN) uses the ReLU, sigmoidal, or tanh activation function for the intermediate hidden layers, the softmax activation function for the output layer, and the cross-entropy for the loss function.

As noted earlier, it is assumed that each target stimulus T_j belongs to one of L classes represented by ω_i, i = 1, 2, …, L. The softmax layer will, therefore, have L outputs, one for each class of the target stimulus. If q_i is the weighted sum of the inputs into neuron i in the softmax layer, the softmax layer outputs are given by

ŷ_i = exp(q_i) / Σ_{l=1}^{L} exp(q_l), i = 1, 2, …, L.

The cross-entropy cost function is given by

E = −Σ_{i=1}^{L} y_i log(ŷ_i),

where y_i is the one-hot encoded true class. During testing, the softmax outputs can be regarded as estimates of class posterior probabilities; therefore, the target stimulus can be assigned to the class ω* yielding the highest posterior probability, which is given by

ω* = arg max_{i} ŷ_i. (3)
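For reference, the softmax outputs, the cross-entropy cost, and the decision rule of Equation (3) can be sketched in a few lines of NumPy. The stability shift by the maximum is a standard implementation detail, not part of the original formulation.

import numpy as np

def softmax(q):
    # q: vector of weighted sums into the L softmax neurons
    e = np.exp(q - np.max(q))  # shift for numerical stability
    return e / np.sum(e)

def cross_entropy(y_true, y_hat):
    # y_true: one-hot target vector; y_hat: softmax outputs
    return -np.sum(y_true * np.log(y_hat + 1e-12))

def classify(q):
    # Equation (3): assign to the class with the highest posterior
    return int(np.argmax(softmax(q)))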
Special Cases of the CINET(S) Model

As mentioned earlier, the one-dimensional and two-dimensional inputs into the CNN are special cases of the multidimensional inputs. For the two-dimensional case, the main difference is that the z-depth of the target and context stimuli is unity. Therefore, the dimension of R_j in Equation (2) will be H × W, and the dimension of the cuboid R in Equation (1) will be H × W × (1 + S). The cuboid input to the CNN can, therefore, be written as R(h, w, z), h = 0, 1, …, (H − 1); w = 0, 1, …, (W − 1); z = 0, 1, …, S. In order to match the z-depth of the input cuboid, the filters in the first convolution layer will, therefore, have dimensions [(2a + 1) × (2b + 1) × (1 + S)]. Other than the changes in the dimensions of the cuboid input and filters in the first layer, the convolution, pooling, and FCN layer operations are identical to the operations in the multidimensional input case.

For one-dimensional inputs, the heights and depths of the target and context stimuli are unity; the stimuli are, therefore, vectors. The dimension of R_j in Equation (2) will be W, and the dimension of R in Equation (1) will be 1 × W × (1 + S). Note that, although R is an array, it is written as a cuboid with unity height for consistency. The cuboid input to the CNN can, therefore, be written as R(0, w, z), w = 0, 1, …, (W − 1); z = 0, 1, …, S. The dimension of each filter in the first layer will be 1 × (2b + 1) × (1 + S), and the filtered output will be a vector with dimension W, which can be written as a 1 × W × 1 cuboid. The output of the kth filter in the first convolution layer is given by

r̂^[1,k](0, w) = Σ_{v=−b}^{b} Σ_{z=0}^{S} f^[1,k](0, v, z) R(0, w + v, z), w = 0, 1, …, (W − 1).

A bias is added to each filtered output and passed through the ReLU activation function. The K_1 filtered outputs are combined into a 1 × W × K_1 cuboid. The width of the cuboid is adjusted if a pooling layer follows the convolution layer. Subsequent convolutions are also unit-height cuboid convolutions, which result in vectors that are then combined into unit-height cuboids. An FCN with softmax outputs is implemented after the last pooling layer, and a target stimulus is assigned to class ω* using the rule in Equation (3).

Target and Context Stimuli

The experiments described in the next section are aimed at demonstrating various aspects of the CSD principle and the CRE applied to the recognition of ambiguous stimuli. That is, the CINET(S) should yield the expected results in various contextual environments. In the process of doing so, it is also shown that the CINET(S) model parameters can be manipulated to: (a) simulate various context environments; (b) vary the strengths of the target-context relationships; and (c) introduce ambiguities in the stimuli. As noted in the introduction, the letter recognition problem was selected simply because it is suitable for demonstrating the properties of the CINET(S) classifiers by forming meaningful words. The experiments involved the recognition of six (L = 6) binary letters which were digitized into 32 × 32 two-dimensional arrays. The six target letters are shown in Figure 4, and the three ambiguous letters are shown in Figure 5. The first ambiguous letter is labelled [A/H] because it can be interpreted as target letter A or H. Similarly, [O/U] may be interpreted as target O or U, and [P/R] may be interpreted as target P or R. The specific goal, therefore, is to determine how accurately an ambiguous letter can be classified into one of its two possible interpretations with and without context. The context was incorporated by adding letters to both sides of the target letter to create a context-augmented letter set.
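Because each ambiguous letter has exactly two admissible interpretations, the "correct resolution" criterion used in the experiments can be expressed as a small predicate. This is a sketch; the mapping is taken directly from the labels above, and the function name is illustrative.

AMBIGUOUS = {"[A/H]": {"A", "H"}, "[O/U]": {"O", "U"}, "[P/R]": {"P", "R"}}

def correctly_resolved(ambiguous_label, predicted_class):
    # True if the classifier assigns the ambiguous letter to either of
    # its two possible target interpretations
    return predicted_class in AMBIGUOUS[ambiguous_label]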
Ambiguity Manipulation

There are several methods for generating test sets with varying levels of ambiguity. For example, target letters can be distorted by adding segments to the letter limbs, deleting segments, and skewing segments. Ambiguity can also be introduced by blurring or changing the resolution of the images [9][10][11]. These methods, however, are not suitable for generating large sets and are also difficult to quantify. We introduce a method to systematically increase the ambiguity level by adding increasing levels of zero-mean Gaussian noise to the noise-free pixels of the letter arrays. The noise N_j added to T_j is specified by the variance σ²_j. Because the noise is random, a large set of distorted characters can be generated for a given σ²_j. Ambiguity is increased by increasing σ²_j. Examples of noisy images of the ambiguous letter [A/H] with noise levels in the range used in the experiments are shown in Figure 6. Observe that the letter [A/H] is difficult to recognize visually when the variance is greater than 1.5. The effect of distortions and noise on the other ambiguous letters is quite similar.
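A minimal sketch of this ambiguity manipulation, assuming the letters are stored as 32 × 32 arrays (function and variable names are illustrative):

import numpy as np

def add_ambiguity(letter, variance, rng=np.random.default_rng()):
    # letter: (32, 32) noise-free binary array T_j
    # variance: sigma_j^2; larger values produce more ambiguous stimuli
    return letter + rng.normal(0.0, np.sqrt(variance), letter.shape)

# e.g., 100 noisy versions of [A/H] at one noise level:
# test_set = [add_ambiguity(AH, 0.75) for _ in range(100)]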
Context Manipulation

The context environment during testing can be congruent (the same as) or incongruent (different from) the context environment that was used during training. Incongruencies can be generated in many ways. The most obvious is to replace the context stimuli that were used during training with different stimuli during testing. However, this method is not suitable for resolving ambiguous stimuli because the correct interpretation of ambiguous stimuli in the new environment is unknown. Our definition of incongruent, therefore, includes weakened and/or impaired context stimuli, but does not include replacing the context with different stimuli. Other possibilities to manipulate context include changing the positions and orientations of the context letters [9], and the colors of the context backgrounds [17]. We introduce a method to generate large test sets with quantifiable incongruencies by manipulating the CINET(S) model parameters α_{j+m}, m ≠ 0, and N_{j+m}, m ≠ 0, individually or together. The context weights can be decreased to emulate weak target-context relationships. The noise in the context stimuli can be increased to emulate impaired context. The context environment can also be manipulated by varying the weights and noise simultaneously.
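The two incongruency manipulations, attenuated context weights and noisy context, can be sketched as follows for the two-dimensional letter case. Names and the stacking order are illustrative assumptions.

import numpy as np

def manipulate_context(letters, target_pos, alphas, context_variance=0.0,
                       rng=np.random.default_rng()):
    # letters: (S + 1) noise-free (32, 32) letter arrays in word order
    # target_pos: index of the target letter within the word
    # alphas: per-position weights; alphas[target_pos] is normally 1
    # context_variance: noise added to context letters only (impaired context)
    slices = []
    for j, letter in enumerate(letters):
        if j != target_pos and context_variance > 0.0:
            letter = letter + rng.normal(0.0, np.sqrt(context_variance), letter.shape)
        slices.append(alphas[j] * letter)
    return np.stack(slices, axis=-1)  # (32, 32, S + 1) input cuboid

# Decaying weights as in Set 4(B): alphas = [0.4, 0.7, 1.0, 0.7, 0.4]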
CNN Architecture

The CNN architecture used in the experiments consisted of two convolution layers, a pooling layer, and a 2-layer FCN, in which the first layer used sigmoidal activation functions and the output layer used softmax activation functions. The "valid" operation was used in the convolution layers, and max pooling was used in the pooling layer. Figure 8 shows the architecture of the CINET(4) model implemented for [H = 32 × W = 32 × (S + 1) = 5] input cuboids, where H and W are the height and width of the target and context letters, respectively, and S is the context span. The numbers of filters were 32 and 32 in the first and second convolution layers, respectively. The filter dimensions in the first and second convolution layers were (3 × 3 × 5) and (3 × 3 × 32), respectively. The size of the pooling filter was 2 × 2. The strides of the convolution and pooling filters were set to 1 and 2, respectively. The dimension of the flattened output from the pooling layer was 14 × 14 × 32 = 6272. The numbers of neurons in the first and output layers of the FCN were 100 and 6, respectively. The networks were implemented using the Keras library [39][40][41].
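Under the architecture just described, a Keras definition of the CINET(4) network could look like the following sketch. The optimizer and its hyperparameters are not fully specified in the text and are shown here only as placeholders.

from tensorflow import keras
from tensorflow.keras import layers

# CINET(4): 32 x 32 x (S + 1) = 5 input cuboids, "valid" convolutions, max pooling
model = keras.Sequential([
    keras.Input(shape=(32, 32, 5)),
    layers.Conv2D(32, (3, 3), padding="valid", activation="relu"),  # -> 30 x 30 x 32
    layers.Conv2D(32, (3, 3), padding="valid", activation="relu"),  # -> 28 x 28 x 32
    layers.MaxPooling2D(pool_size=(2, 2), strides=2),               # -> 14 x 14 x 32
    layers.Flatten(),                                               # -> 6272
    layers.Dense(100, activation="sigmoid"),                        # first FCN layer
    layers.Dense(6, activation="softmax"),                          # one output per class
])
model.compile(optimizer="sgd", loss="categorical_crossentropy")

Note that a Conv2D layer on a 5-channel input automatically has 3 × 3 × 5 kernels, which matches the full-z-depth first-layer filter dimensions stated above.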
Classification Experiments and Results

In the experiments that follow, the context-free CINET(0) and context-integrating CINET(S) classifiers were trained to classify only the target letters. The training sets were generated by adding a small level of random noise (σ² = 0.001) to the noise-free target and context letters. The inclusion of noise at such small levels introduces only minor variations in the letters; therefore, the resulting training sets are referred to as "noise-free training sets" in the experiments. The networks were initialized with random weights, and training was terminated when the cross-entropy fell below 0.001. The CINET(0) classifier was tested on the three ambiguous letters at varying noise levels. The CINET(S) classifiers were tested on the three ambiguous letters in congruent and incongruent environments. In the experiments conducted, a total of one hundred distorted and noisy versions of each ambiguous letter were generated to form the test set at each noise level σ²_j. Because the classification results of a CNN are dependent on the initial weights, a total of thirty CNNs were initialized with random weights. The performance of each network was evaluated using the test sets. Consequently, the total number of tests conducted for each ambiguous letter at a given noise level was 30 × 100 = 300. The results for each ambiguous letter were averaged across the 300 tests, and the final classification probability was given by averaging the averaged results of the ambiguous letters. The following experiments were designed:

Set 1: Context-free classification with the CINET(0) classifier

The first set of experiments was aimed at demonstrating the performance of the classifier when no context is integrated into the training and testing phases. A CINET(0) classifier was trained to classify the six noise-free isolated target letters {A, H, O, U, P, R}, shown in Figure 4, and was tested with the three isolated ambiguous letters, shown in Figure 5. The noise level in the test ambiguous letters was varied from 0.1 to 2. It is important to note that, in the absence of context, the true class of the ambiguous letter is unknown. An ambiguous letter could be classified into any one of the six classes; however, the interest is mainly in estimating the probability of an ambiguous letter being classified into one of its two possible categories. The classification probabilities are summarized in the row labeled Set 1 in Table 1. The probability of classifying [A/H] as an A or an H, [O/U] as an O or a U, and [P/R] as a P or an R was 0.48 when the noise level was 0.1. Observe that the probabilities drop as the noise increases because it becomes increasingly difficult to classify each ambiguous letter into one of its two possible classes.
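The 30-network, 100-sample evaluation protocol described at the start of this section can be sketched as follows. Model training and test-set generation are assumed to come from the earlier sketches, and all names are illustrative.

import numpy as np

def resolution_probability(models, test_set, admissible_classes):
    # models: 30 independently initialized and trained networks
    # test_set: 100 noisy versions of one ambiguous letter at one noise level
    # admissible_classes: indices of the letter's two possible interpretations
    hits = 0
    for m in models:  # 30 networks
        preds = np.argmax(m.predict(np.stack(test_set), verbose=0), axis=1)
        hits += int(np.sum(np.isin(preds, list(admissible_classes))))
    return hits / (len(models) * len(test_set))  # averaged over the 300 tests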
Set 2: Training and testing the CINET(2) classifier with congruent context

These experiments were aimed at demonstrating the improvement in performance when strong context (unity weights) is incorporated in training and the same noise-free context (congruent) is used during testing, in order to emulate learning and testing in the same environments. A symmetrical CINET(2) classifier (S_1 = S_2 = 1) with unity weights (α_{j−1} = α_{j+1} = 1) was trained with the noise-free context-augmented training set {BAG, THE, MOW, FUN, SPY, IRK} to classify the six target letters in the center. That is, B and G were the context for target stimulus A, T and E were the context for target H, and so on. The training set is shown in Figure 9a. The classifier was tested with the center letters replaced with ambiguous letters, as shown in Figure 9b. The test results under varying noise levels are shown in the row labelled Set 2 in Table 1. It is clear that the CINET(2) classifier has the ability to resolve stimulus ambiguities quite effectively. As expected, the classification probabilities dropped when the noise levels increased.

Figure 9. The training and test sets for the CINET(2) classifier: (a) the noise-free context-augmented training set; (b) the test set with noise-free congruent context.

Set 3: Training and testing the CINET(4) classifier with congruent context

These experiments were aimed at demonstrating the improvement in performance when additional strong context (unity weights) is incorporated in training and congruent context is used during testing.
A symmetrical CINET(4) classifier (S_1 = S_2 = 2) with unity weights (α_{j−2} = α_{j−1} = α_{j+1} = α_{j+2} = 1) was trained with the context-augmented training set {BEAST, ETHYL, FLUID, GNOME, IMPLY, SCREW} to classify the six target letters in the center. The training set is shown in Figure 10a. As in Set 2, the classifier was tested with the center letters replaced with ambiguous letters. Figure 10b shows the test set with the noise-free ambiguous letters surrounded by noise-free congruent context. The test results under varying noise levels in the ambiguous letters are shown in the row labelled Set 3 in Table 1. The probabilities are interpreted just as they were for Set 2. It is clear that, for the same range of noise levels, the performance of the CINET(4) classifier is much better than that of the CINET(2) classifier. It could, therefore, be concluded that incorporating additional context improves the classification of ambiguous letters.

Figure 10. The training and test sets for the CINET(4) classifier: (a) the noise-free context-augmented training set; (b) the test set with the noise-free ambiguous center letters and noise-free congruent context.
Set 4: Testing the CINET(4) classifier with weighted incongruent context

This set of experiments was aimed at demonstrating how the weights can be manipulated to simulate incongruent testing environments and to show how the performance is affected by varying the context weights during testing. The CINET(4) classifier designed in Set 3 was tested with two different sets of context weights. The first set of weights (α_{j−2} = α_{j−1} = α_{j+1} = α_{j+2} = 0.7) was selected to show how the attenuation of context affects the performance. The next set of context weights (α_{j−2} = 0.4, α_{j−1} = 0.7, α_{j+1} = 0.7, α_{j+2} = 0.4) was selected to have a decaying influence as the separation span (spatial/temporal lag) between the target and context stimuli was increased. The resulting noise-free test sets with weighted incongruent context are shown in Figure 11. The results for this set of experiments are presented in the rows labelled Set 4(A) and Set 4(B) in Table 1, respectively. As expected, the performance declines as the incongruency in the testing environment is increased.
Figure 11. The noise-free test sets for the CINET(4) classifier with weighted incongruent context: (a) context weights α_{j−2} = α_{j−1} = α_{j+1} = α_{j+2} = 0.7; (b) context weights α_{j−2} = 0.4, α_{j−1} = 0.7, α_{j+1} = 0.7, α_{j+2} = 0.4.

Set 5: Testing the CINET(4) classifier with noisy incongruent context

This set of experiments was aimed at demonstrating how the performance is affected when noise is added to the context stimuli during testing to generate incongruent context environments. The CINET(4) classifier designed in Set 3 with unity weights was tested with noise in the ambiguous letters, as well as statistically equivalent noise in the context letters. An example of a test set using σ²_j = 0.75 is shown in Figure 12. The results are presented in the row labelled Set 5. As expected, performance declines when context incongruency is increased by adding noise. However, the general trend observed in the Set 3 results is maintained.

Figure 12. An example of a test set with noisy incongruent context for the CINET(4) classifier.

Set 6: Testing the CINET(4) classifier with reordered incongruent context

In this set of experiments, the test context consisted of the same context letters used in training but arranged in a different order. The test set is shown in Figure 13, and the results in varying noise levels are shown in the row labelled Set 6. Despite the fact that the same context letters were used, the results are quite poor. This, however, is not unexpected because the temporal pattern of the context was changed.

The best result at each noise level is shown in boldface font in Table 1. For comparison purposes and to observe the trends, Figure 14 summarizes the correct resolution probabilities from the experiments of Sets 2-5. The Set 1 results are also included in the figure to serve as the context-free reference.
The results from Set 6 are not included in the figure.

Although not the primary focus of this study, the CINET(S) model can also be used to design experiments to demonstrate the influence of context on the recognition of non-ambiguous target stimuli in varying congruent and incongruent environments, simply by testing the targets instead of the ambiguous stimuli. In general, it can be expected that the performance will be improved by including the congruent context in the learning and recognition phases. This was confirmed by repeating all six experiments in which the target letters {A, H, O, U, P, R} were tested. The average classification probabilities are summarized in Table 2 and Figure 15. The best results are shown in boldface font. As expected, the classification probabilities are higher for non-ambiguous targets. By comparing Figures 14 and 15, it is interesting to observe that the performance trends for the classification of ambiguous and non-ambiguous stimuli are quite similar.

Conclusions from the Experiments

The results in Tables 1 and 2 and the trends in Figures 14 and 15 show that the CINET(S) classifiers perform in a desirable manner in the sense that various aspects of the CSD principle and the CRE are demonstrated. That is, congruent context helps resolve classification ambiguities, and this ability decreases as the ambiguity levels and context incongruencies are increased. The CNN offers an effective method for extracting features that are coupled across the target and context stimuli. Moreover, the random stimulus noise and context weights offer an effective way of manipulating the relationship and strength of the coupling. The six sets of experiments and the results obtained demonstrate, quite effectively, the performance trends of the CINET(S) classifier. It can be expected that other forms of ambiguity and context manipulations will result in similar trends. Furthermore, similar results would be obtained even if the letters used for context did not form meaningful words, as long as the same context letters were used for both training and testing.
Also noteworthy is that the use of simulated ambiguities and context environments enabled the systematic and quantifiable evaluation of the CINET(S) classifier model under a wide range of conditions. Clearly, such extensive experimentation and evaluation would not be possible with real data unless an enormously large data set with quantifiable ambiguities and context is collected. Undoubtedly, the CINET(S) classifier will perform similarly on real data. Experiments can also be designed to demonstrate the influence of context on perceiving a missing stimulus, for example, a missing letter in a learned word. Because the stimulus index j can be temporal, the model can also be applied to resolve ambiguities in sequentially occurring events, such as garbled words in a sentence. Context that is not inherently sequential can also be accommodated in the model. For example, if the background of an object in an image is regarded as the context, the image can be segmented into two components: object (target) and background (context). The input to the CINET(S) classifier would then be a concatenation of the target features and context features. Finally, it is important to note that the target and context stimuli can be from mixed modalities (e.g., visual and auditory stimuli) for multisensory target-context integration, which is yet another way to combine multisensory information in brain-inspired classification systems [42].
Conclusions

The key contribution of this study is the development of a versatile, brain-inspired deep learning classifier model that can effectively resolve classification ambiguities by incorporating bidirectional weighted context during training and by using congruent context during classification. Supporting contributions include the design of a series of experiments that show that the model can emulate various aspects of the CSD principle and the CRE as applied to the recognition of ambiguous stimuli. The experiments also demonstrate how the CINET(S) model can be used to introduce ambiguities through distortion and noise, simulate various context environments, and vary the strengths of target-context stimulus relationships. Furthermore, it was noted that the model can accommodate symmetrical and asymmetrical context, is applicable to spatial and temporal context, includes the context-free classification model as a special case, and is not restricted to any particular type of classifier. The model was also used to demonstrate improvements in the classification of non-ambiguous target stimuli through the inclusion of context. The fact that the inclusion of context resolves ambiguities and improves classification is notable because context is seldom considered in the design of machine learning classification systems. Therefore, whenever possible, context should be incorporated to improve the performance of classifiers.
Retrospective analysis of canine monocytic ehrlichiosis in Thailand with emphasis on hematological and ultrasonographic changes

Abstract

Background and Aim: Canine monocytic ehrlichiosis (CME) is a tick-borne disease, endemic in the tropics, that can be fatal or cause chronic infection involving many organs in dogs. This study aimed to examine the prevalence, risk factors, and hematological and ultrasonographic changes in the liver, gallbladder, kidneys, and spleen following CME infection.

Materials and Methods: This retrospective study used 30,269 samples collected from dogs at the hematology section of the pathology unit of a university veterinary hospital and 35 samples collected from dogs at the diagnostic imaging unit. CME was determined using the buffy coat smear method. Data were analyzed using descriptive statistics and odds ratios.

Results: The data revealed that the average yearly prevalence of CME was 1.32%. Risk factors contributing to CME infection were the presence of ticks on the body during physical examination, lack of ectoparasite control, and outdoor living. All 148 dogs with CME infection had low platelet counts. The percentages of CME-infected dogs with elevated serum alanine aminotransferase, alkaline phosphatase, and both enzymes above the normal range were 33.6%, 65.9%, and 29.8%, respectively. The rates for elevated serum levels of blood urea nitrogen, creatinine, and both compounds were 33.1%, 19.1%, and 17.3%, respectively. The most common ultrasonographic changes were liver abnormalities (hyperechogenicity or hypoechogenicity, hepatomegaly, and hypoechoic nodules), hyperechogenicity of the kidneys, and an enlarged spleen. These ultrasonographic changes were consistent with the hematology results, which showed a greater elevation of serum liver enzyme levels than of renal markers.

Conclusion: Ultrasonographic changes during CME infection and after treatment with doxycycline can help to monitor and identify persistent pathological changes in the target organs resulting from the immune response to CME.

Introduction

Canine monocytic ehrlichiosis (CME) is a tick-borne, endemic disease found worldwide that causes illness and death in animals of the family Canidae. The disease is caused by a rickettsial bacterium, Ehrlichia canis, transmitted by Rhipicephalus sanguineus (the brown dog tick). CME is widely distributed in tropical, Mediterranean, and subtropical climates, including Europe [1,2], the United States [3], Costa Rica [4], Brazil [5], and Asia [6][7][8]. In Thailand, the reported prevalence of E. canis identified using polymerase chain reaction (PCR) in all parts of the country ranges from 7.6% to 38.3% [9][10][11][12][13]. Transstadial transmission occurs across all tick stages, and ticks can acquire the infection while feeding on infected dogs. At 4-7 days after infection (dai), the dog's immune system produces immunoglobulin M and immunoglobulin A antibodies, and immunoglobulin G antibodies can be detected at 15 dai [14]. Following the 8-20 day incubation period, CME infection progresses through three typical phases: acute, subclinical, and chronic. In the acute phase, which lasts for 3-5 weeks, the clinical signs of fever, anorexia, ocular discharge, mucosal and skin petechiae, epistaxis, pale mucous membranes, hemorrhagic tendencies, depression, lymphadenopathy, and neurological signs (from meningitis) are present [15].
The major renal pathological changes are interstitial nephritis and glomerulonephritis [16], with lesions at the corticomedullary junction causing a contracted kidney [17]. On ultrasonography, hyperechogenicity may be present, along with an enlarged liver, spleen, and gallbladder, and ascites [18]. Some dogs may recover after the subclinical phase, whereas others may progress to the chronic phase, where severe pancytopenia typically occurs from bone marrow hypoplasia and leads to severe leukopenia, anemia, and thrombocytopenia with a high risk of mortality [15]. In severe cases, dogs with a poor response to antibiotics may die from massive hemorrhage, severe debilitation, and/or secondary infection [15]. During the chronic phase, pathological lesions occur in the kidney because of immune complex accumulation in the glomerulus, which stimulates inflammation followed by the destruction of cells and tissues in the surrounding area, leading to elevated serum blood urea nitrogen (BUN) and creatinine levels. There is also lymphocyte and plasma cell infiltration into the liver and kidney parenchyma [15], and moderate increases in the serum levels of the liver enzymes alanine aminotransferase (ALT) and alkaline phosphatase (ALP) due to hepatocyte damage [15,19]. Even with the standard doxycycline treatment protocol [14], some dogs may not fully recover from damage related to the immune response, especially damage to the principal organs involved (liver, kidney, and spleen). These lasting effects may be missed by veterinarians who do not provide systematic follow-up after doxycycline treatment. Sarma et al. [18] studied pathological changes in the liver and spleen of 101 dogs positive for infection with tick-borne blood parasites and found that ultrasonographic and hematological changes can serve as useful indicators of the damage status of internal organs after infestation with blood parasites. Although there are several reports [9][10][11][12] on the prevalence of CME in Thailand, research on the relationship between CME and changes in ultrasound images of dogs during or after treatment is scarce. Thus, the present study aimed to investigate the retrospective prevalence of CME in dogs and to examine changes in blood parameters and in the organs (liver and kidney) of infected dogs as revealed by ultrasound images.

Ethical approval and informed consent

Because of the retrospective nature of this study and the use of diagnostic data collected as part of routine clinical procedures, the need for ethical approval was waived. All dog owners completed a consent form giving permission to utilize the data (including ultrasound images) for clinical research.

Study period and location

This study was divided into two parts. Part 1 was performed at the Small Animal Teaching Hospital, Faculty of Veterinary Science, Chulalongkorn University from September 2016 to August 2017. Part 2 was performed on dogs that were admitted and underwent examinations at the Hematology Section, Pathology Unit and Imaging Diagnostic Unit of the same Small Animal Hospital from January 2017 to September 2018.

Study design and analysis

A retrospective, randomized study was performed based on hematological and medical records. The study was divided into two parts: Part 1 examined the prevalence of CME on a yearly basis together with the associated risk factors.
In Part 2, a retrospective case-control comparison was performed on dogs for which both ultrasonographic and blood analysis data were available.

Study part 1

We identified a group of CME-positive dogs, defined by the presence of morulae of Ehrlichia spp. in the buffy coat smear assay and the results of the Canine SNAP® 4Dx® test kit (IDEXX Laboratories, Inc., Westbrook, ME, USA). The prevalence of CME during the study period was determined. To understand factors influencing CME risk, we analyzed the following data: signalment data; historical records; complete blood count (CBC) data, including platelet count; and blood chemistry data, including serum levels of ALT, ALP, BUN, and creatinine. Duplicate data were removed before analysis. Next, based on the serum platelet count and blood chemistry data (ALT, ALP, BUN, and creatinine), the dogs that were E. canis positive were grouped as below, within, or above the normal range for these measures. Moreover, the dogs that were E. canis positive were analyzed for (i) the presence of ticks on the body during physical examination, (ii) use of an ectoparasite control program, and (iii) daily indoor or outdoor living. Ectoparasite control was defined as consistent and routine control using approved products. For daily indoor or outdoor living, only dogs that spent 100% of their time indoors were considered indoor-living dogs. For comparison, healthy dogs were randomly chosen from the historical data to serve as a control group. The inclusion criterion was the absence of severe disease or CME. The numbers of control dogs were similar to those with CME (150, 57, and 40 dogs for small, medium, and large breeds, respectively) (Table-1). The odds ratio (OR) was computed for comparisons between the CME and control (healthy) groups.

Study part 2

From the data, dogs were selected using non-probability (non-random) sampling. The inclusion criteria did not restrict sex or breed; however, dogs older than senior age were excluded because geriatric dogs may show age-related pathophysiological changes in the ultrasound appearance of the liver or kidneys unrelated to E. canis infection. We also excluded dogs with a history of severe diseases, including heart, liver, kidney, cancer, and immune system diseases. Data were divided into a control group and a study group. The control group comprised 16 dogs with normal abdominal ultrasound results for their internal organs. The study group comprised 19 dogs positive for E. canis infection that showed abnormal abdominal ultrasound results in at least one of the periods before, during, or after infection (treatment with doxycycline at 10 mg/kg/day for 28 days). We analyzed the effect of E. canis infection on hematological changes, including CBC data and serum levels of the liver (ALT and ALP) and renal (BUN and creatinine) markers. The blood parameters were analyzed before, during, and after (treatment with doxycycline at 10 mg/kg/day for 28 days) E. canis detection. Hematological results were categorized as normal or abnormal when compared with normal reference values [20]. The effects of E. canis infection on changes in the serum levels of platelets and liver and renal markers were analyzed using descriptive statistics, with some constraints from missing data in the historical records. The effect of E. canis infection in both groups (control and study) was analyzed in relation to ultrasonographic changes in the three periods described above for the liver, spleen, kidneys, and gallbladder, as these are the organs typically affected by E. canis infection.
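As an illustration of the odds-ratio measure used in the Part 1 risk-factor analysis (and formalized in the next subsection), here is a minimal sketch. The counts are hypothetical and do not come from the study data.

def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    # OR = (a/b) / (c/d) for a 2 x 2 table of exposure vs. disease status
    return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

# e.g., exposure = ticks found on the body; cases = CME-positive dogs
# (hypothetical counts): OR = (40/10) / (60/120) = 8.0
print(odds_ratio(40, 10, 60, 120))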
Study part 2 From the data, dogs were selected using non-probability (non-random) sampling. The inclusion criteria did not restrict sex or breed. However, geriatric dogs were excluded because they may show age-related pathophysiological changes in the ultrasound appearance of the liver or kidneys unrelated to E. canis infection. We also excluded dogs with a history of severe diseases, including heart, liver, kidney, cancer, and immune system diseases. Data were divided into a control group and a study group. The control group comprised 16 dogs with normal abdominal ultrasound results for their internal organs. The study group comprised 19 dogs positive for E. canis infection that showed abnormal abdominal ultrasound results in at least one of the periods before, during, or after infection (treatment with doxycycline at 10 mg/kg/day for 28 days). We analyzed the effect of E. canis infection on hematological changes, including CBC and serum data, including the levels of the liver (ALT and ALP) and renal (BUN and creatinine) markers. The blood parameters were analyzed before, during, and after (treatment with doxycycline at 10 mg/kg/day for 28 days) E. canis detection. Hematological results were categorized as normal or abnormal by comparison with normal reference values [20]. The effects of E. canis infection on changes in the serum levels of platelets and liver and renal enzymes were analyzed using descriptive statistics, with some constraints imposed by missing data in the historical records. The effect of E. canis infection in both groups (control and study) was analyzed in relation to ultrasonographic changes in the three periods, as described above, for the liver, spleen, kidneys, and gallbladder, as these are the organs typically affected by E. canis infection. Statistical analysis Descriptive statistics were used to analyze and compare all parameters in Parts 1 and 2. The prevalence of E. canis infection was reported as the mean value calculated on a yearly basis. In Part 1, the OR was used to measure the association between the control and E. canis-positive groups in terms of (i) the presence of ticks on the body during physical examination, (ii) the ectoparasite control program, and (iii) daily indoor or outdoor living. Statistical analysis was performed using SigmaStat (Systat Software, San Jose, CA, USA). p<0.05 was considered statistically significant.
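To make the OR analysis concrete, the following minimal sketch computes an odds ratio and its 95% confidence interval (Woolf method) from a 2x2 exposure table; the counts are invented for illustration and are not taken from Table-1.

```python
import math

def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio and 95% CI (Woolf method) from a 2x2 table."""
    a, b, c, d = exposed_cases, exposed_controls, unexposed_cases, unexposed_controls
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: dogs with ticks found on the body (exposed) vs. not,
# in the CME-positive and control groups.
or_, ci = odds_ratio(exposed_cases=60, exposed_controls=30,
                     unexposed_cases=90, unexposed_controls=217)
print(f"OR = {or_:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
# -> OR = 4.8, 95% CI = (2.9, 8.0)
```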
Part 1 All infected dogs had platelet counts below the normal range. The percentages of infected dogs with elevated serum liver enzymes ALT, ALP, and both above the normal range [20] were 33.6%, 65.9%, and 29.8%, respectively; the rates of elevated kidney markers BUN, creatinine, and both were 33.1%, 19.8%, and 17.3%, respectively (Table-2) [20]. Data from the retrospective study identified 400 of the total 30,269 dogs with a positive E. canis test. The prevalence of E. canis infection was 1.32%, with a range of 0.8-1.8% each month. The percentage of dogs with a low platelet count and a high serum chemistry profile ranged from 2.8% to 6.8% each month, with an average of 5.1% (Figure-1). Approximately two-thirds (66.3%) of the SNAP4Dx tests were positive (Table-2). Table-1 summarizes the ORs of various factors affecting E. canis infection in the different groups of dogs, according to body weight. The occurrence of ticks on the body during physical examination was associated with 8.0-, 12.7-, and 3.0-fold higher rates of CME in dogs weighing <16, 16-25, and >25 kg, respectively. In the ectoparasite control analysis, the "no control" regime was associated with 2.0-, 5.2-, and 3.8-fold higher risks of CME in the three weight groups, respectively, when compared with the "control" regime. Finally, outdoor dogs had 5.7-, 1.3-, and 1.5-fold greater risks of CME (for the respective weight groups) when compared with indoor dogs. [Table note: numbers differ between Tables 1 and 2 because of incomplete history on the various factors examined. The total n for the control group (247 dogs) was randomly selected to match the positive group according to weight. **Using the buffy coat smear method. E. canis = Ehrlichia canis.] Part 2 Hematological and blood data changes in E. canis-infected dogs Data from 19 dogs in the study group were included in this analysis. The age was known in all cases, and the mean group age was 7 years (range, 3 months-11 years). In terms of sex, 47.4% (9 of 19) were female and 52.6% (10 of 19) were male. The two groups included both entire and neutered animals. There were 10, 6, and 3 cases of small, medium, and large breeds, respectively. Hematology The CME-positive dogs were analyzed in the phase before the detection of E. canis (Table-3). Effects of E. canis infection on ultrasound appearance of the liver, gallbladder, kidneys, and spleen Abdominal ultrasonographic examination results of the liver, gallbladder, and kidneys in all 16 cases in the control group were found to be normal. The liver showed a normal sharp border with a smooth margin, good location, contours with a homogeneous echotexture, a normal appearance of the intrahepatic portal veins, uniform liver parenchyma hypoechoic relative to the spleen, and falciform fat isoechoic to the right renal cortex. Additional observations included a normal gallbladder wall thickness and anechoic bile content; a normal appearance of both kidneys in terms of size, shape, location, contour, and echotexture; normal renal cortex echogenicity; a well-defined corticomedullary junction; and a normal renal pelvis and smooth renal capsule. Ultrasonographic changes in the liver in the presence of E. canis were noted in all 13 infected cases. Hyperechogenicity of the liver was observed in 7 (53.8%) cases, whereas 4 (30.8%) cases revealed hypoechoic hepatic parenchyma. Hepatomegaly was observed in 10 cases (76.9%), as shown in Table-6 and Figure-2. After treatment with doxycycline, 4 (30.8%) and 3 (23.1%) cases still showed hyperechogenicity and hepatomegaly of the liver, respectively (Table-6). Discussion In Thailand, there is no seasonal difference in the prevalence of E. canis infection. The yearly prevalence rate of E. canis was 1.32% in the present study using the buffy coat smear method. There are several alternative techniques for diagnosing CME apart from buffy coat smears, such as SNAP4Dx, PCR, and immunofluorescence assays [14,15]. Nevertheless, the buffy coat smear method remains the most common method for screening E. canis infection in clinics because of its convenience and relatively low cost. However, the method has limitations: its sensitivity and specificity are 16.1% (confidence interval [CI] = 10.7-23.6%) and 89.4% (CI = 85.0-92.6%), respectively, resulting in a high chance of false-negative results but a low rate of false positives. This could be the reason for the low prevalence of E. canis infection in our study. A composite study in India reported that the overall prevalence rates of ehrlichiosis by microscopic examination, commercial dot-ELISA, and nested PCR assay were 1.3%, 19.1%, and 5.8%, respectively [21]. The rate determined by microscopic examination is similar to that reported for Thailand, although the occurrence of CME using the PCR test was higher than that in India [9-13,22]. The sensitivity of an E. canis test also depends on the stage of infection at the time of sampling. In the acute phase, there is more opportunity to find infected leukocytes in the blood smear because of the higher degree of parasitemia. In the subclinical and chronic phases, however, the chances of finding infected leukocytes decrease, which can lead to false negatives. Conversely, the probability of E. canis detection by specific antibodies, such as through ELISA, increases in the chronic phase because the secondary immune response (and thus immunoglobulin levels) requires time to develop [23].
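The prevalence estimate discussed above (400 positives among 30,269 dogs) carries sampling uncertainty that the text does not quantify. A minimal sketch of how a confidence interval could be attached to it follows; the Wilson interval is my choice of method for illustration, not one reported by the authors.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

positives, total = 400, 30269   # counts reported in the text
p = positives / total
lo, hi = wilson_ci(positives, total)
print(f"Prevalence = {p:.2%} (95% CI {lo:.2%}-{hi:.2%})")
# -> Prevalence = 1.32% (95% CI 1.20%-1.46%)
```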
In this study, all E. canis-positive dogs showed a serum platelet count below the normal range, consistent with a previous report [24]. This highlights the importance of evaluating true platelet counts in dogs suspected of E. canis infection, since most infected dogs have thrombocytopenia as the main clinical sign. Thrombocytopenia in CME is attributed to various mechanisms across the different stages of the disease. In the acute stage, the cause is increased platelet consumption because of vasculitis, splenic sequestration of platelets, and immunologic destruction [15,25]. In addition, infected dogs show a significantly decreased platelet life span and an increased mean platelet volume [25]. Platelet destruction by the immune response may be associated with a serum platelet-bindable antiplatelet antibody, which is produced 17 days after infection [25]. During the severe chronic phase, platelet production is decreased because of bone marrow hypoplasia, which can lead to pancytopenia [15]. CME can occur in dogs of any age or breed. Higher seropositivity was found in male dogs than in female dogs, which has been explained by male dogs' greater exposure to vectors because of their behavioral characteristics [26]. Other factors associated with exposure to CME agents are the dog's habitat, contact with other dogs, and the presence of ticks. Dogs that have contact with other dogs and dogs parasitized by the tick R. sanguineus, the vector of E. canis, showed a higher likelihood of exposure [3,6,13]. Since R. sanguineus is a three-host tick species, it must complete its life cycle on the ground. Outdoor-living dogs are, therefore, expected to be at higher risk of CME than indoor-living dogs [27], as demonstrated in this study. Similarly, dogs with ticks on their body are more susceptible to CME infection. Hence, one effective way to prevent E. canis infection in dogs is tick control [26]. However, even with ectoparasite control, some dogs still develop CME. This may be explained by how effectively owners control ticks, because only some owners understand that preventing ticks on the dog's body can prevent CME. To improve owners' knowledge about ticks, education should focus not only on preventing dogs from becoming infested with ticks but also on measures for environmental control of ticks [28]. The brown dog tick is most abundant during the hot and humid periods of the year, particularly in Thailand [27]. The prevalence and epidemiology of ticks also depend on geographical location [16,27]. The prevalence of ticks is as high as 80% in some areas, such as Northeast Thailand. High temperatures (25-35°C) support tick development and the success of egg laying, hatching, and larval and nymphal molting, which may explain why ticks are more prevalent in the summer in several countries [27]. In Thailand, however, the weather is hot and humid almost all year, suitable for ticks to mate and develop. In Part 2 of the study, platelet concentrations in the period before the detection of E. canis infection were still normal in some dogs, so these dogs appeared uninfected in the first phase of infection. Similar to our results, the number of platelets was previously reported to be normal in the first 2 weeks after CME infection and then to decrease significantly from the 3rd to 5th weeks [29]. In the present study, thrombocytopenia was detected in other dogs in the period before infection, which may be because of a false-negative blood smear result or because some dogs developed thrombocytopenia from other causes. All 19 dogs in our study with detectable E. canis infection had thrombocytopenia. This result is consistent with Bulla et al. [24], who reported that dogs infected with E. canis in the acute and subclinical phases had mild thrombocytopenia but showed severe thrombocytopenia in the chronic stage.
Although platelet levels returned to normal in 7/11 dogs in the post-treatment period (after doxycycline treatment for 28 days), 4/11 dogs still showed markedly lower platelet levels than normal [20]. This result is in accordance with the study of Villaescusa et al. [30], who treated CME-infected dogs with doxycycline at a dose of 10 mg/kg/day for 28 days and found that platelet counts increased to normal levels 180 days after treatment. When doxycycline was administered to control-group dogs, they also showed increased platelet levels. Doxycycline may therefore increase platelet counts; however, the mechanism is unknown. Our study confirms the common observation that thrombocytopenia persists in some dogs after treatment to eradicate E. canis infection. Hence, platelet counts should be examined routinely after treatment with doxycycline. In the present study, we found that the rates of serum hepatic enzymes (ALT and ALP) above the normal range were 33.6% and 65.9%, respectively, in infected dogs, whereas increased kidney markers (BUN and creatinine) were present in 33.1% and 19.8% of dogs, respectively. Taken together, these results suggest liver and kidney damage. Liver histopathology in infected dogs has demonstrated infiltration of plasma cells, lymphocytes, and macrophages around the centrilobular veins and in the portal triads. Centrilobular fatty degeneration and perivascular and portal plasmacytosis were previously reported in naturally infected dogs with chronic CME [31]. In addition, dark blue cytoplasmic inclusions, consistent with Ehrlichia morulae, have been observed in lymphocytes and macrophages [32]. Renal protein loss has also been reported in E. canis-infected dogs, resulting in an increased urinary protein-to-creatinine ratio (average ratio = 8.6) during the 3rd and 4th weeks after infection, which decreased to <0.5 by 6 weeks after infection. The hypoalbuminemia associated with acute E. canis infection may be primarily attributable to the increased loss of renal protein rather than to decreased hepatic synthesis [33]. The renal lesions in acutely infected dogs showed perivenular and interstitial infiltrates of lymphocytes and plasma cells localized principally to the renal cortex [33]. Glomerular lesions were minimal to absent. These results suggest that a minimal-change glomerulopathy, rather than immune-complex glomerulonephritis, can cause proteinuria without histological evidence of renal disease [33]. The results of this study show that veterinarians should recognize the importance of monitoring clinical signs, hematology (e.g., hematocrit), platelet counts, and serum chemistry profiles, particularly ALT, ALP, BUN, and creatinine levels, to identify recurrent or resistant CME. Increased serum levels of liver enzymes were found in infected dogs both before and after treatment with doxycycline in this study. There was no significant difference in serum liver enzyme changes, such as ALT, between uninfected dogs and those treated with doxycycline [30]. The serum renal marker levels in some dogs with E. canis detected in the bloodstream were higher than normal. In dogs treated with doxycycline, the serum levels of renal markers decreased slightly back toward normal values, likely because of the action of tetracycline at nephron sites in the kidneys [30]. In the present study, abdominal ultrasonography of the CME-infected dogs revealed hypoechogenicity of the liver, gallbladder distension, and hepatomegaly. Notably, Sarma et al.
[34] reported the same findings. Mylonakis et al. [31] reported an enlarged and diffusely hypoechoic liver in E. canis-infected dogs, whereas severe hepatitis induced by E. canis has been documented as a portal infiltration of lymphocytes, plasma cells, and macrophages, resulting in a pronounced distortion of the surrounding acinar architecture [34]. This is associated with ultrasonographic changes in the liver, namely decreased liver parenchymal echogenicity. In the case of tick-borne intracellular diseases, hepatomegaly may be due to passive congestion, reticuloendothelial hyperplasia, or infiltrative diseases mediated through cytokines [35]. The sonographic changes observed in the gallbladder included distention with the presence of sludge/clear bile, which may be due to anorexia [35]. Hyperechogenicity of the liver was also observed in the present study, which has been previously reported in chronic CME infections [36]. Sarma et al. [18] also reported hyperechogenicity of the liver, gallbladder distention, and hepatosplenomegaly concomitant with tick-borne disease. Splenomegaly was also observed in our study, which is consistent with the findings reported by Sarma et al. [34]. Multiplication of E. canis within circulating mononuclear cells and the mononuclear phagocytic tissues of the spleen has been shown to result in splenomegaly [32]. The kidney showed a hyperechoic echotexture compared with the spleen in the present study, which is presumably related to the deposition of immune complexes in the kidneys that triggers glomerulonephritis and predisposes dogs to proteinuria [15]. Interstitial nephritis was also observed in dogs with CME; it is associated with lymphocyte infiltration, suggesting that these cells may also play an important role in the immunopathogenesis of renal lesions [37]. Although doxycycline can successfully clear E. canis infection when administered for 4 weeks, another study [38] reported persisting abnormalities of the liver and kidney on ultrasonography after treatment. McClure et al. [39] reported that treating dogs with acute or subclinical CME with doxycycline for 28 days resulted in their becoming PCR-negative for E. canis, along with improved clinical parameters. Nevertheless, in the chronic CME cases in this study, abnormalities in hematology, serum chemistry profiles, and the ultrasonographic appearance of the liver, kidney, and spleen persisted after treatment with doxycycline for 28 days. Given the constraints of this study, it was not possible to examine ultrasonography data before E. canis detection, which limits the ability to explain changes before, during, and after CME infection. Nevertheless, the present study has demonstrated the value of ultrasound examination of the liver, kidneys, and spleen, as these organs are susceptible to change during CME infection and after doxycycline treatment. Veterinarians should be aware of the potential need to treat liver and kidney disorders, especially after 28 days of doxycycline treatment. Conclusion CME induces liver and renal pathological changes, leading to increased serum ALT, ALP, BUN, and creatinine levels. Despite treatment with doxycycline at 10 mg/kg/day for 28 days, a persistent increase in the serum levels of liver and kidney enzymes was observed in some dogs. Ultrasonographic changes during and after doxycycline treatment can help monitor and indicate persistent pathological changes in the target organs.
2022-01-07T16:20:20.683Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "fdf865121f1a8682444c39ae7bec06cd39f4ddcd", "oa_license": "CCBY", "oa_url": "http://www.veterinaryworld.org/Vol.15/January-2022/1.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5f6b6af061da696ba61365f44be9d26179398a88", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
227126969
pes2o/s2orc
v3-fos-license
Development of Rubrics for Capstone Project Courses: Perspectives from Teachers and Students This study attempted to develop fair, relevant, and content-valid assessment tools for capstone project courses. Toward this goal, new rating instruments based on the concept of rubrics were proposed. To ensure that the new instruments were valid and fair, several meetings with faculty and students of the computing science departments (i.e., Computer Science and Information Technology) were successively conducted. Eight faculty members and 10 students participated in the study. The final versions of the instruments were completed after a series of careful deliberations with faculty and students. Faculty and students perceived the new instruments as fairer than the previous ones. Since the final instruments will be deployed this semester, their strengths and weaknesses are not yet known. Directions for future research are presented. INTRODUCTION Capstone project courses are among the key courses in computing degree programs. Moshkovich [12] argued that the curricula of computing degree programs (e.g., Computer Science (CS), Information Technology (IT), and Information Systems (IS)) must have courses that provide opportunities for students to synthesize and apply the knowledge and skills they acquired over several years of study. These courses are the academic community's response to the demand for college graduates who are highly competent technically and who also possess good communication traits, strong leadership abilities, skills as effective team players, and a desirable work ethic [10,13]. Miles and Kelm [11] opined that, at the end of capstone courses, students must be aware of the ethical implications of software development, understand social interactions and motivations in customer relations, learn to work effectively with colleagues in a team environment, demonstrate advanced critical thinking skills (e.g., assess the feasibility of the project, analyze the requirements of the software, identify alternative implementation strategies), demonstrate good communication and presentation skills, show project management skills, and understand the dynamics of the human-computer interface. These courses also provide good opportunities for students to apply the knowledge they learned in previous courses, develop communication skills, demonstrate problem-solving skills, and gain firsthand information as to how knowledge is produced [5,8,17]. In the Philippines, course offerings are mandated by the Commission on Higher Education (CHED). CHED [4] issued memoranda of minimum requirements, standards, and policies regarding Philippine higher education. Consistent with Lunt et al. [9], capstone courses are usually offered for one or two semesters during the students' final year. To differentiate between the capstone projects of the CS and IT degree programs, the former are required to take Thesis Writing while the latter are required to do the Capstone Project. Though the names of the courses are different, they share the same curricular principle, i.e., students are given opportunities to apply their skills and knowledge in solving challenging problems [9]. Capstone courses are undoubtedly among the most difficult, demanding, and challenging courses in the computing curricula [2].
The practice of requiring students to defend their projects before a panel of three members adds complexity and difficulty in meeting the requirements of the course. Students invest money, time, and effort in order to pass it. A delay of one or two semesters in the completion of the capstone project due to failure in the oral defense translates into additional expenses for the students and their parents. Consequently, such students cannot graduate at the expected time. The pressing concerns of fairness and the introduction of outcomes-based education in the Philippines [3] warrant the need to develop fair, relevant, and valid assessment instruments. Specifically, the study aimed 1) to report the steps undertaken in the development of capstone project rubrics, and 2) to present and discuss the capstone project rubrics. LITERATURE REVIEW Beyerlein et al. [1] proposed a framework for developing an assessment instrument in capstone engineering design courses. It incorporated the perspectives of the educational researcher, the student learner, and the professional practitioner. To this aim, the researchers identified the performance areas for engineering design: personal capacity, team processes, solution requirements, and solution assets. Personal capacity refers to individual performance and skills improvement in engineering design. Team processes involved the development and implementation of collective processes that supported team design productivity. The third performance area (i.e., solution requirements) defined the goal state for design activities and the features expected as required by stakeholders' needs and constraints. The last performance area (i.e., solution assets) referred to the results of a design project that meet the needs and satisfaction of the stakeholders. The researchers further commented that rubrics are tools that can help capstone instructors measure higher-level conceptual knowledge, performance skills, and attitudes. Meyer [10] identified the common learning outcomes of all Electronics and Communications Engineering (ECE) capstone design courses at Purdue University. According to the author, the learning outcomes of ECE capstone design were 1) an ability to apply knowledge obtained in earlier courses and to obtain new knowledge necessary to design and test a system, component, or process to meet desired needs, 2) an understanding of the engineering design process, 3) an ability to function on a multidisciplinary team, 4) an awareness of professional and ethical responsibility, and 5) an ability to communicate effectively, in both oral and written form. Outcomes 2 and 4 were measured using rubrics. For Outcome 2, written reports were assessed in terms of their technical content, update record/completeness, professionalism, and clarity/organization. A score of 0-10 could be given for each component, and each component had a different weight.
Technical content and professionalism each had a weight of 3 points, while update record/completeness and clarity/organization had weights of 2 points each. Meanwhile, Outcome 4 was assessed in terms of Introduction, Results of Patent Search, Analysis of Patent Liability, Action Recommended, List of References, and Technical Writing Style. These criteria could likewise be rated from 0 to 10 but carried different weights: Introduction, Action Recommended, List of References, and Technical Writing Style had a weight of 1 point each, while Results of Patent Search and Analysis of Patent Liability had a weight of 3 points each. The study of Pauzi and Muda [12] described the assessment of Capstone Civil Engineering Design students at Universiti Tenaga Nasional (UNITEN). They reported that students' work was assessed in terms of written reports (20%), conceptual and detailed design (25%), formal presentations (25%), tender documents with construction cost estimates, and project participation and teamwork. The course made use of a rubric with 7 criteria to assess students' teamwork and participation in the Capstone Design Project course. The seven criteria were Workload (share of tasks), Getting Organized (initiative to conduct a meeting and keep the group organized), Participation in Presentation (participation in sharing ideas, feelings, and thoughts), Client Consultancy Meeting Deadline (ability to do tasks on time or ahead of time), Showing up for Meetings (showing up for a meeting punctually or even ahead of time), Providing Feedback on the Comments from the Meeting (participating actively during a meeting), and Receiving Feedback (manner of receiving feedback). Students could receive a mark of 3 (minimum) to 20 (maximum) on each criterion. A rubric was also employed at Stevens Institute of Technology in its systems engineering (SE) framework for multidisciplinary capstone design courses. Sheppard et al. [18] solicited inputs from systems engineering faculty members with extensive industrial experience in the SE field. In general, evaluators assessed the SE capstone of the students in terms of the project and of the students' individual contributions. Learning goals and performance criteria were identified for each criterion. Project assessment and individual assessment had five and two learning goals, respectively. The level of achievement on each learning goal was evaluated using rating points of 1 (poor) to 5 (excellent). Notably, students could evaluate their teammates and themselves on their contributions to the project. Using the scores 1 (below expectations), 2 (marginal), and 3 (meets or exceeds expectations), they evaluated team members in terms of contributions of time, effort, and technical expertise, cooperation with other team members, timely completion of individual assignments, and overall contribution to the team. Lee and Lai [7] also advocated the inclusion of team members' participation in order to increase fairness in assessment.
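The weighted rubric schemes reviewed above share one mechanical step: each criterion score is multiplied by its weight and the products are summed. The following minimal Python sketch illustrates that step with criterion names and weights loosely modeled on Meyer's Outcome 2 rubric [10]; the ratings are invented, and this is not code from any of the cited studies.

```python
# Sketch of weighted rubric scoring: each criterion is rated 0-10,
# then multiplied by its weight and summed. Names and weights are
# illustrative, loosely following Meyer's Outcome 2 rubric [10].
WEIGHTS = {
    "technical_content": 3,
    "update_record_completeness": 2,
    "professionalism": 3,
    "clarity_organization": 2,
}

def weighted_score(ratings: dict) -> int:
    """Combine 0-10 criterion ratings into a weighted total."""
    return sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items())

ratings = {
    "technical_content": 8,
    "update_record_completeness": 7,
    "professionalism": 9,
    "clarity_organization": 6,
}
max_score = 10 * sum(WEIGHTS.values())             # 100 with these weights
print(f"{weighted_score(ratings)} / {max_score}")  # -> 77 / 100
```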
DEVELOPMENT OF THE INSTRUMENT 3.1 Research Locale and Nature of Capstone Project Courses The study was conducted in the College of Computer Studies and Systems (CCSS) of the University of the East in Manila. CCSS offers five bachelor's degree programs - Computer Science (CS), Information Technology (IT), Information Systems (IS), Digital Animation major in Gaming (DAG), and Digital Animation major in Animation (DAA). Table 1 shows the flow of capstone project courses for these programs. All except DAG undergo Methods of Research for Information Technology (MERIT). MERIT courses are tailored to each degree program. At the end of this course, CS students are expected to come up with a research topic. Once they have defended it successfully, they further develop the concept in Thesis Writing A. In this stage, the first three chapters of the paper (i.e., Introduction, Literature Review, and Methodology) are completed. The last capstone project course for CS students is Thesis Writing B, where the software and the full chapters of the paper are presented. Meanwhile, DAA students follow the same procedure as CS students, except that the course codes are changed; the first three chapters and the software are expected to be presented in Capstone A and Capstone B, respectively. IT/IS students have exactly the same sequence of capstone courses: they enroll in MERIT, Capstone A, and Capstone B. However, the nature of these courses differs from that of CS and DAA. For IT/IS, MERIT entails submission of the first three chapters of the paper. Then, provided they pass MERIT, they enroll in Capstone A, which requires them to furnish the full paper and the software. Finally, they implement the software in their client's company during Capstone B. On the other hand, DAG students have a shortened flow: Capstone A for DAG covers the proposal and the first chapters of the project, and upon completion of this course, students develop and implement their project at the same time in Capstone B. Students must complete all of these activities within the span of five semesters. Along with the flow of the capstone project courses, the intended learning outcomes (ILOs) of the courses of each degree program are presented (see Table 2). Currently, all capstone project courses except those of DAG have ILOs (the ILOs of DAG were still under construction when this paper was being written). It can be noticed in Table 2 that some ILOs are common across courses and students. The demonstration of mastery of communication skills is present in MERIT, Thesis Writing A, Thesis Writing B, and Capstone A, because these courses intend to hone the English skills of the students, English being the second language of Filipinos. It can also be observed that the presentation of the program in Thesis Writing B and Capstone A for IT/IS/DAA is one of the ILOs for these courses. Furthermore, since Capstone B of IT/IS is about system implementation, an oral defense is no longer necessary. Thus, it can be concluded that, based on the nature of the ILOs and of the courses, only two assessment instruments are needed - one for the research proposal and another for the software. The construction of the first version of the assessment instruments is discussed in the next section. The college has existing assessment tools for capstone courses - about five in total. These are rating forms that gauge students' oral defense performance and/or paper, with grades of 74% and below (fail) and 75% to 100% (passing). A sample assessment tool is given below. As shown in Figure 1, an evaluator can assess the software as well as the paper of the students. Evaluators had the impression that students had to be gauged using the verbal ratings (e.g., Excellent, Outstanding, Very Good, etc.) in each box; hence, they wrote "Excellent", "Outstanding", etc. in the boxes.
Students, on the other hand, commented that the numerical ratings needed to be more descriptive; that is, each numerical rating should reflect the effort they exerted and not merely the evaluator's perception. For example, a grade of 75% for a paper only reflects the grade the evaluator perceived as applicable, but it does not explain why such a rating was applicable. Further, the recent educational paradigm shift of the University to outcomes-based education intensifies the need to change the assessment tool. It is therefore proposed that a new assessment tool be developed in the form of a rubric. A rubric was proposed because of its perceived benefits: it clearly communicates to students the requirements of the course [13,19], and the assessment becomes clearer, easier, more objective, and sometimes faster [19]. There are only two proposed rubrics, as mentioned earlier. For purposes of clarity, the proposal stage is composed of courses that require submission of a paper, while the software project stage is composed of courses that involve the full paper and the software. The Proposal and Software Project measurement criteria are shown in Table 3: Completeness - a software characteristic wherein the software contains all of its necessary modules; Method - the appropriateness of the methods that would be employed to meet the objectives of the study; Reliability - the ability of the software to perform a required function under stated conditions for a stated period of time without any errors; Paper - the comprehensiveness of the discussion of the paper, citing relevant studies; Mastery of the Subject - the correctness of the student's responses to evaluators' queries. Students in both stages are evaluated in terms of their paper and their mastery of the subject matter. During the proposal stage, the presentation of analysis, relevance, and the methods to be employed are evaluated. Analysis is intended to measure the clarity of the discussion of the paper: students must provide a vivid discussion of how the study has been built. Along with this, they have to show the importance of doing the project from the viewpoint of the client or of the academic/scientific community. The methods are also scrutinized. These criteria were selected since all courses employ all of these principles while writing a proposal. Meanwhile, only three software quality criteria were selected, because all software applications developed in the college over the last two years could be measured using them. Functionality is a software criterion that measures the conformance of the behavior of the software to its expected behavior. The software presented to the evaluator should not lack essential modules (i.e., Completeness). Lastly, the program must be bug-free; hence, it should be reliable. The points are scaled from 1 (lowest) to 6 (highest), and the highest score that a student can get is 30 points (5 items x 6 points). The six-point scale was selected because the transmuted points would cover all grade points in the University grading scales. Further, a difference of one point in the scale would not make a big leap in points when transmuted. For example, total ratings of 28 and 29 points would be transmuted to 97% (28/30 * 50 + 50) and 98%, respectively. The 97% and 98% percentage ratings would then be equivalent to grade points of 1.25 and 1.00, respectively. Thus, the rating is deemed fair.
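A minimal sketch of the transmutation just described, assuming the linear mapping raw/30 * 50 + 50 with rounding to the nearest whole percentage; the percentage-to-grade-point equivalences are only those quoted in the text.

```python
def transmute(raw_points: int, max_points: int = 30) -> int:
    """Map a raw rubric total (five items on a 1-6 scale) to a percentage grade."""
    return round(raw_points / max_points * 50 + 50)

# The two examples quoted in the text:
for raw in (28, 29):
    print(raw, "->", transmute(raw), "%")
# -> 28 -> 97 %  and  29 -> 98 %  (equivalent to 1.25 and 1.00 grade points)
```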
The first versions of the rubrics are shown in Table 4 and Table 5. Excerpts from their descriptors follow.
Relevance: The proposal is exceptionally outstanding (6) / very likely (5) / likely (4) / probable (3) / unlikely (2) / very unlikely (1) to contribute to the academic/scientific community.
Paper: Exemplary! The paper is not only complete but surpasses what is required (6).
Mastery of the Subject: The proponents not only answer all questions correctly but also challenge previous concepts (6).
Completeness: Exemplary! The program not only has complete modules but surpasses what is required (6). / The program contains all essential modules (5). / Exceeds minimum requirements but is not 100% complete (4). / Minimum requirements are met (3). / Only a portion of the minimum requirements is satisfied (2). / None of the minimum requirements are met (1).
Reliability: The code is not only bug-free but also written efficiently and elegantly (6). / The software is bug-free (5). / There are bugs, but they do not compromise software performance (4). / There are bugs that compromise software performance to some extent (3). / There are bugs that compromise software performance (2). / There are so many bugs that the software no longer performs its functions (1).
Paper: Exemplary! The paper is not only complete but surpasses what is required (6). / Comprehensive, clear, and extensive research is highly evident (5) / evident (4) / moderately evident (3) / evident to a little extent (2) / not evident (1).
Mastery of the Subject: The proponents not only answer all questions correctly but also challenge previous concepts (6).
Presentation of Rubrics to Faculty and Students for Comments and Suggestions The proposed rubrics were presented to seven faculty members of the computing departments. The group was composed of four chairpersons, one representative from the Office of Curriculum Development and Instruction, and two thesis coordinators. They scrutinized the contents of the initial assessment tools. Their comments, suggestions, and concerns are shown in Table 6. There were two sessions of deliberations. Faculty and students had common as well as distinct concerns about the rubrics. Both groups shared the concern that it was difficult to achieve a rating of 6. However, they noted that the proposed rubrics were fairer than the previous ones: while it could be difficult to earn a 6, it is now easier to get at least half of the perfect points. In short, it is difficult to get a perfect score, but it is easier to pass with the new assessment tools. Excerpts from the revised descriptors follow.
Completeness: Exemplary! The program provides other modules beyond the required expectations (6). / The program contains all (5) / most (4) / many (3) / only a portion (2) of the required modules. / The program does not contain any of the required modules (1).
Reliability: The code is bug-free and follows coding standards (6). / The software is error-free (5). / Errors are evident, but they do not compromise the performance of the software (4). / Errors are evident, and they compromise the performance of the software to some extent (3). / There are errors that affect the overall software performance (2). / There are so many errors that the software no longer performs its functions (1).
Faculty commented that students' confidence in answering questions and individual performance were not reflected in the proposed rubrics. Though important, confidence was not one of the ILOs.
Thus, it was not measured in the rubrics. In terms of individual performance, the eight faculty members decided to incorporate an individual rating, which is not part of the rubrics. It was also disclosed that the proposed rubrics had items that were non-atomic. Non-atomic items are questions that can still be broken down into two different questions, or questions that refer to the same question. For example, the item under the Paper criterion stating "A comprehensive, clear, and extensive research is highly evident" is non-atomic: the words "comprehensive" and "extensive" may mean the same thing, and the "clarity" of the paper was already measured under Analysis. These concerns, comments, and suggestions were all incorporated into the first versions of the rubrics. The final rubrics are shown in Table 6 and Table 7. CONCLUSIONS AND DIRECTIONS FOR FUTURE RESEARCH This study attempted to develop valid and relevant rubrics for capstone project courses. It was shown that it was possible to meet this goal provided that faculty and students were consulted. As such, the final versions of the rubrics will be used for the upcoming oral defense this semester. Nonetheless, at this point, the strengths and limitations of the rubrics are not yet known. The actual use of the rubrics on defense day can identify the areas of the new assessment tool to be improved. It can also be noted that the College invites external evaluators who are industry practitioners; thus, an orientation will be held prior to the oral defense. This will provide not only an avenue for the external evaluators to internalize the new assessment tool but also an opportunity to comment on it. The comments of external evaluators and any issues concerning the instrument raised during the orientation and defense will be documented and incorporated to enhance the rubrics. ACKNOWLEDGMENTS The researcher is thankful for the valuable contributions of the faculty members and students involved in the study. This study was made possible through the generous funding of the University of the East.
2020-11-24T02:01:27.707Z
2020-11-22T00:00:00.000
{ "year": 2020, "sha1": "428c5cca329d06584744a8cce4094a9454327592", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "428c5cca329d06584744a8cce4094a9454327592", "s2fieldsofstudy": [ "Computer Science", "Education" ], "extfieldsofstudy": [ "Computer Science" ] }
265040646
pes2o/s2orc
v3-fos-license
ESYT1 tethers the ER to mitochondria and is required for mitochondrial lipid and calcium homeostasis A protein complex composed of the outer mitochondrial membrane protein SYNJ2BP and the endoplasmic reticulum protein ESYT1 tethers the two organelles to facilitate calcium and lipid transfer. Introduction Mitochondria interact with several membrane-delimited organelles within the cell, including the ER, lysosomes, peroxisomes, and trans-Golgi network vesicles (Tabara et al, 2021). Mitochondria-ER contact sites (MERCs), also called mitochondria-associated membranes (MAMs) when studied at a biochemical level, are the best characterized class of membrane contact sites (MCSs) and represent the close apposition of the outer mitochondrial membrane (OMM) with the ER membrane (Giacomello & Pellegrini, 2016). MERCs are functionally and structurally specialized cellular subdomains that form signaling platforms allowing lipid synthesis and transport, calcium signalling, apoptosis regulation, mitochondrial division, and autophagosome formation (Herrera-Cruz & Simmen, 2017; Giacomello et al, 2020). MERCs have also been shown to be involved in several critical cellular pathways, such as metabolic regulation in diabetes (Rieusset, 2017), inflammation (Missiroli et al, 2018), the immune response (Martinvalet, 2018), and senescence (Janikiewicz et al, 2018). Alterations in these structures have also been linked to the onset of neurodegenerative diseases, including Alzheimer's disease, Parkinson's disease, and amyotrophic lateral sclerosis (Vallese et al, 2020), and to aging (Janikiewicz et al, 2018). The proteins that mediate the formation of MERCs have been extensively studied in the yeast Saccharomyces cerevisiae, where the four-subunit ER-mitochondria encounter structure is required to tether the two organelles and mediate lipid transport from the ER to mitochondria (Kornmann et al, 2009; Kojima et al, 2016) via the lipid-binding SMP (synaptotagmin-like mitochondrial and lipid-binding protein) domains present in three subunits of the complex (Kopec et al, 2010; AhYoung et al, 2015). Orthologues of the three SMP domain-containing proteins in the ER-mitochondria encounter structure complex have not been identified in mammals. Mitochondria synthesize cardiolipin (CL) and phosphatidylethanolamine (PE) on the inner membrane, and these lipids are essential for mitochondrial function (Steenbergen et al, 2005; Funai et al, 2020). CL is produced via a multi-enzymatic cascade, and PE is synthesized by the phosphatidylserine decarboxylase PISD1; however, their synthesis depends on the ER for the supply of the precursor lipids phosphatidic acid (PA) and phosphatidylserine (PS), respectively (Funai et al, 2020). Lipid synthesis activity at MAMs was the first biochemical process reported at an MCS in mammals (Vance, 1990); however, a detailed mechanism of lipid transport between the ER and mitochondria in mammals remains elusive.
All SMP domain-containing proteins are present at MCSs, where they are thought to facilitate non-vesicular transport of lipids between lipid bilayers (Jeyasimman & Saheki, 2020). In mammals, the ER-anchored extended synaptotagmin (ESYT) proteins are the best characterized (Saheki & De Camilli, 2017). ESYT1, ESYT2, and ESYT3 tether the ER to the plasma membrane (PM), potentially transferring lipids (Bian et al, 2018). More specifically, ESYT1 has been shown to play a role in Ca2+-dependent lipid transfer at ER-PM contacts, which requires its docking with PI(4,5)P2 in the plasma membrane (Giordano et al, 2013; Reinisch & De Camilli, 2016; Bian et al, 2018; Ge et al, 2022). It also tethers the ER to peroxisomes by a similar mechanism, facilitating the transport of cholesterol (Xiao et al, 2019), raising the possibility that ESYT1 could also tether the ER to mitochondria to promote lipid transfer. In this study, we used the proximity mapping tool BioID to identify and characterize SMP domain proteins that might be involved in MERC structure and function in humans. We showed that ESYT1 is enriched at MERCs, where it forms a complex with the OMM protein SYNJ2BP. Depletion of the ESYT1-SYNJ2BP complex impairs mitochondrial calcium uptake capacity and provokes a reduction of essential mitochondrial lipids, demonstrating its essential function in cellular and mitochondrial homeostasis. Proximity labelling analysis of SMP domain proteins in human cells We recently established that the proximity of proteins localized on two different membrane-bound organelles can be detected by the proximity mapping tool BioID (Antonicka et al, 2020; Go et al, 2021). To identify potential proteins involved in the regulation of MCSs and lipid transport between the ER and mitochondria, we selected several ER-resident human SMP domain-containing proteins as baits (PDZD8, TEX2, ESYT2, and ESYT1). We generated stable inducible Flp-In T-REx 293 cell lines expressing each protein fused with BirA* (Fig S1A) and used BioID to characterize their proximity interactomes and identify potential interacting partners on the OMM. BioID analysis of the selected SMP domain-containing proteins (Table S1) revealed that, as expected, most of their proximity interactors were ER membrane proteins involved in organelle organization, transport, lipid biosynthesis, and metabolic regulation (34 of the 40 preys shared among all four baits were ER proteins; Fig S1B and Table S1). Each bait also detected numerous unique proximity interactors. In addition, two preys common to all four baits, ALDH3A2 and FKBP8, have been reported to dually localize to mitochondria and the ER (Shirane & Nakayama, 2003; Rath et al, 2021; Zeng et al, 2021) (Fig S1B). PDZD8 was previously shown to partially localize to MERCs and tether the two organelles (Hirabayashi et al, 2017), but its interacting partner on the OMM remains unknown. Consistent with its capacity to regulate MERCs, the absence of PDZD8 led to decreased mitochondrial calcium uptake capacity upon ER stimulation (Hirabayashi et al, 2017). PDZD8 was later described to interact with RAB7 and ZFYVE27 (Protrudin) to establish three-way MCSs between the ER, late endosomes, and mitochondria and to mediate lipid transfer required for late endosome maturation (Elbaz-Alon et al, 2020; Shirane et al, 2020; Khan et al, 2021; Gao et al, 2022). Mass spectrometry results obtained with either the N- or C-terminal PDZD8-BirA* fusion proteins confirmed the proximity interaction with ZFYVE27 but failed to identify any OMM-localized partner (Table S1).
TEX2 is still uncharacterized in mammals; however, its yeast ortholog Nvj2 localizes to ER-vacuole (lysosome-like organelle) contact sites at steady state. Upon ER stress or ceramide overproduction, it translocates to ER-Golgi contacts to facilitate the non-vesicular transport of ceramide from the ER to the Golgi, counteracting ceramide toxicity (Liu et al, 2017). Consistent with the role of Nvj2 in yeast, we identified 12 proteins belonging to the ER-Golgi vesicle-mediated transport pathway in the TEX2 proximity interactome (Table S1, in green); however, as with PDZD8, we did not identify an OMM proximity interactor. In contrast to ESYT2, which constitutively tethers the ER to the PM and is localized in the cortical ER, the interaction of ESYT1 with the PM is activated by Ca2+ binding. The proportion of ESYT1 present throughout the ER or concentrated at ER-PM contacts is controlled by cytosolic Ca2+ (Chang et al, 2013; Giordano et al, 2013; Idevall-Hagren et al, 2015). As ESYT family members can form heteromeric complexes, ESYT-dependent ER-PM contacts are regulated by both cytosolic Ca2+ and the specific phospholipid PI(4,5)P2 at the PM (Fernandez-Busnadiego et al, 2015). In both N- and C-terminal ESYT1-BirA* experiments (Table S1), we confirmed the interaction with its known partner ESYT2. Importantly, we also found a unique specific proximity interaction with the OMM protein SYNJ2BP (OMP25) (Figs 1A and S1B). This interaction was previously noted but never further investigated (Christianson et al, 2011; Hung et al, 2017). Significantly, the ESYT2 BioID analysis also identified ESYT1 (Table S1) as its main proximity interactor but failed to identify SYNJ2BP, suggesting that ESYT1 may form a specific complex with SYNJ2BP at MERCs independent of its interaction with ESYT2 at ER-PM contacts. Immunoprecipitation of ESYT1 from human fibroblasts stably overexpressing a C-terminal 3xFLAG-tagged version of ESYT1, followed by LC-MS analysis, showed that SYNJ2BP (and ESYT2) coimmunoprecipitated with ESYT1 (Table S2), confirming our proximity interaction results. We further compared the BioID profile of the SMP proteins with the BioID of an ER-targeted BirA*, which promiscuously labels proteins in the ER and its vicinity, serving as a control for protein-independent ER proximity labelling (Table S1). SYNJ2BP was not found as a proximity interactor of ER-BirA*, further validating the specificity of the interaction between ESYT1 and SYNJ2BP (Fig S1C and D). These data prompted us to perform a BioID analysis using SYNJ2BP as bait (Fig S1A and Table S1), and we observed a strong enrichment of ESYT1, confirming the proximity interaction of the two partners. SYNJ2BP was previously shown to interact with another ER-localized protein, RRBP1, to regulate the formation of MERCs (Hung et al, 2017), and we also identified RRBP1 as a prey. Hung et al (2017) also reported an interaction between SYNJ2BP and the multi-aminoacyl-tRNA synthetase complex (Mirande, 2017), an interaction we also confirmed, further substantiating the specificity of our BioID results. We then compared the BioID profile of SYNJ2BP with the BioID of an OMM-targeted BirA*, serving as a control for protein-independent OMM proximity labelling (Table S1). ESYT1 was not found as a proximity interactor of OMM-BirA*, validating the specificity of the interaction between ESYT1 and SYNJ2BP (Table S1 and Fig 1B).
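As an illustration of the bait-versus-bait comparisons used throughout this section, the following minimal sketch computes a simple specificity score for each prey: the fold enrichment of its spectral counts in one bait relative to the mean across the other baits. This mirrors the specificity measure described in the Figure 1 legend below, but the counts are invented and the SAINT/BFDR statistics are not reproduced here.

```python
# Sketch: fold-enrichment "specificity" of each prey for one bait versus
# the other baits. Spectral counts below are invented for illustration.
import statistics

counts = {  # prey -> {bait: average spectral count}
    "SYNJ2BP": {"ESYT1": 24, "ESYT2": 0.5, "PDZD8": 0.5, "TEX2": 0.5},
    "ESYT2":   {"ESYT1": 30, "ESYT2": 0.5, "PDZD8": 2.0, "TEX2": 1.0},
}

def specificity(prey: str, bait: str, pseudocount: float = 0.5) -> float:
    """Fold enrichment of a prey's counts in `bait` over the mean of the others."""
    others = [c for b, c in counts[prey].items() if b != bait]
    return (counts[prey][bait] + pseudocount) / (statistics.mean(others) + pseudocount)

for prey in counts:
    print(prey, round(specificity(prey, "ESYT1"), 1))
# -> SYNJ2BP 24.5 and ESYT2 18.3 with these toy counts
```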
In conclusion, of the four SMP domain-containing proteins we profiled, only ESYT1 identified a specific proximity-interacting partner on the OMM, SYNJ2BP, suggesting that this complex could play a role in the regulation of MERC formation and/or function. [Figure 1 legend. (A) Specificity plot of the ESYT1 N-ter BioID analysis, indicating the specific proximity interaction with SYNJ2BP. Specificity denotes the fold enrichment of the spectral counts detected for each prey in the ESYT1 BioID compared with the spectral counts for that prey in all other baits in the dataset (all four SMP proteins). Prey names are shown for the most specific preys and for preys with the highest length-normalized spectral counts; preys are colour-coded based on their GO term cellular compartment analysis. MitoCarta3.0 proteins are SYNJ2BP, FKBP8, and ALDH3A2. (B) Proximity interactions between known (and predicted) ER-mitochondrial tethers and the indicated baits (BFDR ≤ 0.01). The colour of each circle represents the prey-length-normalized average spectra detected for the indicated protein by each bait, and the size of the circle represents the relative average spectra across the baits analyzed in this dataset. The SAINT analysis excludes self-detection of the bait protein as a prey, represented as X in the graph.] ESYT1 localizes to MERCs To further investigate the interaction between ESYT1 and SYNJ2BP at MERCs, we profiled the proximity interactome of MERCs using an engineered ER-mitochondria tether fused to BirA* (tether-BirA*; Fig S1). This construct was based on a fluorescent MERC tether first designed by Hajnoczky (Csordas et al, 2006) and reported to successfully rescue both MERC and Ca2+ loss in cells devoid of several other contact site protein regulators, including the inositol 1,4,5-trisphosphate receptor (IP3R), PDZD8, RMDN3-VAPB, or MFN2 (Gomez-Suaga et al, 2017; Hirabayashi et al, 2017; Hernández-Alvarez et al, 2019). BirA* was then fused between the OMM-targeting sequence of mAKAP1 at the N-terminus and the ER-targeting sequence of yUBC6 at the C-terminus. We analysed the tether-BirA* proximity interactions with previously characterized MERC proteins alongside ESYT1 and SYNJ2BP (Fig 1B) and showed that tether-BirA* interacted with all the queried preys, consistent with an interaction of ESYT1 and SYNJ2BP at MERCs. To confirm this localization, we next studied the intracellular localization of ESYT1 by immunofluorescence and confocal microscopy (Fig 1C). In human fibroblasts stably overexpressing SEC61B-mCherry as an ER marker (green) and stained for PRDX3 as a mitochondrial marker (cyan), endogenous ESYT1 (magenta) specifically localized along the ER network, forming puncta, especially on ER tubules (which function in lipid and hormone synthesis) rather than on the perinuclear sheets (which function in protein synthesis) (Schwarz & Blower, 2016). Quantitative analysis confirmed that more than 30% of the endogenous ESYT1 colocalized with mitochondria and that a third of mitochondria were positive for ESYT1 (Fig 1E). Consistent with these results, subcellular fractionation of mouse liver (Fig 1F) showed that endogenous ESYT1 is present in the microsomal light membrane fraction containing the ER, and in the heavy membrane fraction containing mitochondria and MAMs. Gradient purification of the heavy membranes into MAMs and highly purified mitochondria revealed that ESYT1 was enriched in MAMs, with a fractionation profile similar to that of the MAM marker SIGMAR1. Significantly, SYNJ2BP, in addition to being enriched in mitochondria, was also present in the MAM fraction.
To further characterize the function of ESYT1, we generated a CRISPR-Cas9-mediated KO in human fibroblasts, alongside fibroblasts stably overexpressing a C-terminal 3xFLAG-tagged version of ESYT1 (Fig 1G). BN-PAGE analysis of DDM-solubilized heavy membrane fractions (Fig 1H) revealed that endogenous ESYT1 was present in three main large complexes, with the main one at approximately 410 kD. The specificity of these complexes was confirmed by their absence in different clones of the KO cell lines. Finally, the ESYT1-FLAG overexpressing cell line showed that the tagged version of ESYT1 behaved similarly to the endogenous protein (Fig 1H) but formed slightly larger complexes because of the addition of the 3xFLAG tag. Together, these results show that ESYT1 and its OMM partner SYNJ2BP localize to MERCs, and that ESYT1 forms high molecular weight complexes. Loss of ESYT1 decreases MERCs As ESYT1 is known to tether the ER membrane to the PM (Saheki, 2017) and to peroxisomes (Xiao et al, 2019), we sought to determine whether ESYT1 could similarly act as a tethering protein regulating MERCs. Using transmission electron microscopy (TEM), we analyzed the morphology and characteristics of MERCs in human control fibroblasts compared with ESYT1 KO cells and KO cells in which a Myc-tagged version of ESYT1 was stably reintroduced (Fig 2A). TEM image analysis revealed that the loss of ESYT1 led to a decrease in both the number and the mean length of MERCs, resulting in an overall decrease in the perimeter of mitochondria covered by ER membrane (Fig 2B and C). The MERC defects were completely rescued by the reintroduction of ESYT1-Myc, confirming the specificity of this phenotype. Notably, mitochondria in ESYT1 KO cells have a larger perimeter than in control cells, a phenotype that was also fully rescued by the expression of ESYT1-Myc. The larger perimeter likely results from the loss of MERCs, which demarcate sites of mitochondrial fission (Giacomello et al, 2020). These experiments show that loss of ESYT1 impacts MERC formation and suggest a potential direct role for ESYT1 as a physical tether between the two organelles.
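The TEM readouts described above (contact number, mean contact length, and the fraction of the mitochondrial perimeter covered by ER) reduce to simple arithmetic once contacts have been traced. A minimal sketch follows, with invented measurements rather than values from Fig 2.

```python
# Sketch: per-cell MERC metrics from traced TEM measurements (units: nm).
# The numbers below are invented for illustration, not taken from Fig 2.
mitochondria = [
    {"perimeter": 3200, "contact_lengths": [210, 150]},
    {"perimeter": 2800, "contact_lengths": [180]},
    {"perimeter": 3500, "contact_lengths": []},
]

all_contacts = [l for m in mitochondria for l in m["contact_lengths"]]
n_contacts = len(all_contacts)
mean_length = sum(all_contacts) / n_contacts if n_contacts else 0.0
coverage = sum(all_contacts) / sum(m["perimeter"] for m in mitochondria)

print(f"contacts per mitochondrion: {n_contacts / len(mitochondria):.2f}")
print(f"mean contact length: {mean_length:.0f} nm")
print(f"ER coverage of mitochondrial perimeter: {coverage:.1%}")
```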
SYNJ2BP but not ESYT1 promotes the formation of mitochondria-ER contacts We next investigated the consequences of the overexpression of ESYT1 or of its mitochondrial partner SYNJ2BP on MERC architecture. The overexpression of a 3xFLAG-tagged version of ESYT1 did not influence the morphology of MERCs (Fig 3A and B); however, as was previously demonstrated (Nemoto & De Camilli, 1999; Hung et al, 2017; Pourshafie et al, 2022), SYNJ2BP overexpression strikingly promoted the formation of MERCs, specifically by increasing the length of individual contacts between the two organelles and the mitochondrial perimeter in contact with the ER in a "zipper-like" fashion (Fig 3B). In this condition, the perimeter of mitochondria was smaller, and the ER-mitochondrial network was recruited to the perinuclear region of the SYNJ2BP-overexpressing cells (Fig 3A). Immunofluorescence and confocal microscopy analysis confirmed both the significant increase of MERCs and the perinuclear accumulation of the ER-mitochondrial network when SYNJ2BP was overexpressed (Fig S2A). In these conditions, we also observed that endogenous ESYT1 was recruited to MERCs, where it accumulated and formed large foci (Fig S2A, white arrowheads). Quantitative analysis, using confocal microscopy to compare control, SYNJ2BP KO, and SYNJ2BP-overexpressing fibroblasts, demonstrated that the presence of ESYT1 at mitochondria is dependent on SYNJ2BP expression (Fig 3C). In contrast to SYNJ2BP overexpression, loss of SYNJ2BP, which decreases MERCs (Ilacqua et al, 2022; Pourshafie et al, 2022), was associated with decreased localization of ESYT1 at mitochondria. SYNJ2BP was shown to interact with another ER-localized protein, RRBP1, to regulate the formation of MERCs (Hung et al, 2017). To explore the relationship between SYNJ2BP, ESYT1, and RRBP1, we analyzed their subcellular localization in human control fibroblasts and fibroblasts overexpressing SYNJ2BP. Because of the contribution of mitochondrial fission- and fusion-related proteins to the formation or stabilization of MERCs, including the OMM fusion protein MFN2 (de Brito & Scorrano, 2008) and the main mitochondrial fission regulator DRP1 (Prudent et al, 2015), we decided to investigate their potential contribution to SYNJ2BP-dependent MERC formation. Control cells and cells overexpressing SYNJ2BP were depleted of either DRP1 or MFN2 (Fig S2B). As expected, in both control cells and cells overexpressing SYNJ2BP, depletion of DRP1 led to a hyperfused mitochondrial network (a and b), whereas loss of MFN2 induced mitochondrial fragmentation (c and d). In both conditions, the overexpression of SYNJ2BP still promoted a strong increase of MERCs as monitored by confocal microscopy (b and d; cyt c as a mitochondrial marker and HSPA5 as an ER marker). However, the recruitment of the ER-mitochondrial network around the nucleus was less prominent after DRP1 knockdown. We conclude that the effect of SYNJ2BP on MERC formation is independent of MFN2 and DRP1.
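Colocalization readouts like those in Figs 1E and 3C are commonly summarized with Manders-style overlap coefficients: the fraction of ESYT1 signal overlapping the mitochondrial mask, and vice versa. A minimal sketch on toy binary masks follows; the text does not state which coefficient or software was used, so this is illustrative only.

```python
# Sketch: Manders-style overlap of an ESYT1 mask with a mitochondrial mask.
# Toy 1D "images" stand in for segmented pixels; real analyses use 2D/3D masks.
esyt1 = [1, 1, 0, 1, 0, 0, 1, 0]   # pixels positive for ESYT1
mito  = [1, 0, 0, 1, 1, 0, 1, 0]   # pixels positive for the mitochondrial marker

overlap = sum(e and m for e, m in zip(esyt1, mito))
m1 = overlap / sum(esyt1)   # fraction of ESYT1 signal on mitochondria
m2 = overlap / sum(mito)    # fraction of mitochondrial area positive for ESYT1

print(f"M1 (ESYT1 on mitochondria): {m1:.0%}")
print(f"M2 (mitochondria positive for ESYT1): {m2:.0%}")
# -> M1 = 75%, M2 = 75% with these toy masks
```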
SYNJ2BP is present in a high-molecular weight complex with ESYT1

To better understand the relationship between ESYT1 and SYNJ2BP, we investigated their potential interaction by BN-PAGE analysis. Whereas endogenous SYNJ2BP ran mostly as a monomer (Fig 4A, left), when overexpressed (a condition that promotes MERCs), SYNJ2BP appeared in two high molecular weight complexes (Fig 4A, left), one of which was the same size as the ESYT1 complex at 410 kD (Fig 4A, right, lower horizontal line). Overexpression of SYNJ2BP together with a 3xFLAG-tagged version of ESYT1 led to a shift of the ESYT1 complex to a higher molecular weight. In this condition, the 410 kD SYNJ2BP complex specifically shifted to a similar molecular weight, demonstrating the interaction of the two partners in this complex (Fig 4A, right, higher horizontal line). A second-dimension BN/SDS-PAGE analysis confirmed that, when overexpressed, a fraction of SYNJ2BP is present in two different complexes, one that runs at the size of the ESYT1 complex and one at a size similar to that of the RRBP1 complex (Fig 4B). Knockdown of RRBP1 did not affect the assembly of the ESYT1 complex (Fig 4C), nor did knockdown of ESYT1 affect the RRBP1 complex, demonstrating that the complexes are not interdependent. However, the presence of SYNJ2BP in the 410 kD complex is specifically dependent on ESYT1, because its depletion leads to the loss of the SYNJ2BP complex at 410 kD (Fig 4C), demonstrating that ESYT1 and SYNJ2BP belong to the same complex.

A study by Hung et al reported that the interaction of SYNJ2BP with RRBP1 depends on cytoplasmic translation activity (Hung et al, 2017). To confirm that the two SYNJ2BP complexes are independent, we analyzed the effects of puromycin, a translation inhibitor, on the formation of both complexes. Puromycin treatment led to a large decrease in the steady-state level of RRBP1 and a concomitant increase of ESYT1, without affecting SYNJ2BP levels (Fig 4D). A second-dimension experiment confirmed that puromycin induced a specific loss of the SYNJ2BP-RRBP1 complex, without affecting the complex between SYNJ2BP and ESYT1 (Fig 4E). Together, these results demonstrate that SYNJ2BP interacts with both ESYT1 and RRBP1, but in two different complexes that are physically and functionally independent.

ESYT1 is required for ER to mitochondria Ca2+ transfer

In mammals, the best characterized functional feature of MERCs is Ca2+ flux from the ER to mitochondria, which is required to sustain mitochondrial homeostasis (Rossi et al, 2019). Ca2+ is released from the ER through the IP3R and crosses the OMM through the voltage-dependent anion channel, which interacts with IP3R via the cytosolic protein GRP75 (Szabadkai et al, 2006). Ca2+ is then transported to the matrix via the IMM mitochondrial calcium uniporter (MCU) complex (De Stefani et al, 2011; Bick et al, 2012). MERCs provide spatially constrained microdomains in which Ca2+ released from the ER can accumulate at concentrations sufficiently high to induce mitochondrial Ca2+ uptake via the low Ca2+ affinity MCU (Rizzuto et al, 1998; Csordas et al, 2006; Szabadkai et al, 2006). As a consequence, proteins that regulate MERC formation affect ER to mitochondria Ca2+ transfer; a decrease of MERCs has been widely associated with a decrease of Ca2+ transfer from the ER to mitochondria (de Brito & Scorrano, 2008; De Vos et al, 2012; Stoica et al, 2014; Hirabayashi et al, 2017).
ER-PM contact sites are responsible for store-operated Ca2+ entry (SOCE), a process allowing cellular, and in particular cytosolic and ER, Ca2+ replenishment (Ahmad et al, 2022). Silencing ESYT1 impairs SOCE efficiency in Jurkat cells (Woo et al, 2020), but not in HeLa cells (Giordano et al, 2013; Woo et al, 2020). To avoid confounding effects due to the loss of ESYT1 at ER-PM contacts and to SOCE impairment, which can impact mitochondrial Ca2+ uptake capacity, we first evaluated mitochondrial Ca2+ pumping upon ER Ca2+ release in HeLa cells (Fig 5). We compared control cells, ESYT1 knockdown cells, and ESYT1 knockdown cells expressing an engineered ER-mitochondria tether (Hirabayashi et al, 2017). Knockdown of ESYT1 led to a decrease of mitochondrial Ca2+ uptake from the ER upon histamine stimulation, as monitored by a genetically encoded Ca2+ indicator targeted to the mitochondrial matrix (CEPIA-2mt) (Suzuki et al, 2014) (Fig 5A and B). Importantly, the expression of the artificial mitochondria-ER tether was able to rescue the mitochondrial Ca2+ defects observed in ESYT1-silenced cells upon histamine stimulation. To exclude an effect on ER Ca2+ stores (Giordano et al, 2013; Woo et al, 2020), we measured the total ER Ca2+ store using the cytosolic-targeted R-GECO Ca2+ probe upon treatment with thapsigargin, an inhibitor of the sarco/ER Ca2+ ATPase SERCA that blocks Ca2+ pumping into the ER (Fig 5C and D), and observed no difference between our conditions. Finally, to confirm that these defects in mitochondrial Ca2+ uptake were not associated with decreased levels of the main proteins involved in mitochondrial Ca2+ flux, we analyzed their levels in ESYT1-silenced HeLa cells. Acute silencing of ESYT1 did not have appreciable effects on the levels of MCU, MICU1, or MICU2 (Fig 5E and F). Together, our results in HeLa cells show that silencing of ESYT1 leads to decreased mitochondrial calcium uptake upon ER stimulation because of a decrease of MERCs.
To investigate the role of ESYT1 in mitochondrial Ca2+ dynamics in fibroblasts, we compared control human fibroblasts, ESYT1 KO fibroblasts, and ESYT1 KO fibroblasts expressing either ESYT1-Myc or the engineered ER-mitochondria tether (Figs 6 and S3). In contrast to the above results in HeLa cells, loss of ESYT1 impaired SOCE efficiency in fibroblasts, as measured with the cytosolic probe Fluoforte after addition of calcium chloride to thapsigargin-treated cells (Fig 6A and B). We therefore investigated the influence of ESYT1 loss on cytosolic Ca2+ concentration after ATP (Fig 6F-H) or histamine (Fig S3D-F) stimulation using the cytosolic-targeted Ca2+ reporter aequorin. Both conditions showed a reduced cytosolic Ca2+ concentration in ESYT1 KO cells after ER Ca2+ release. In addition, whereas ESYT1 KO did not influence the total ER Ca2+ pool (Fig 6K and L), the decrease in ER Ca2+ release capacity we observed was confirmed using the ER-targeted R-CEPIA1er probe upon histamine stimulation (Fig 6I and J). Moreover, loss of ESYT1 decreased the Ca2+ uptake capacity of mitochondria upon histamine (Fig S3A-C) or ATP stimulation (Fig 6C-E). To determine whether the mitochondrial Ca2+ defect was fully attributable to the observed impairment of SOCE, or partially associated with MERC defects, we performed a series of rescue experiments. Significantly, whereas both the cytosolic and mitochondrial Ca2+ defects were rescued by re-expression of ESYT1-Myc in ESYT1 KO fibroblasts, expression of the artificial tether specifically rescued only the mitochondrial Ca2+ phenotype, but not the cytosolic ones. Thus, these results suggest that, as in HeLa cells, the decrease of mitochondrial Ca2+ uptake observed in fibroblasts is not fully attributable to SOCE and cytosolic Ca2+ defects, but rather to the decrease of MERCs induced by loss of ESYT1. Finally, immunoblot analysis (Fig 6M and N) in ESYT1 KO fibroblasts showed that the levels of the major proteins involved in mitochondrial Ca2+ pumping were not affected, nor was the assembly of the IP3R or MCU complexes (Fig 6O). Several posttranslational modifications are known to regulate IP3R activity (Hamada & Mikoshiba, 2020), and it is possible that these could be affected by the loss of ESYT1.

Together, these results highlight the distinct and dual roles of ESYT1 in Ca2+ regulation at the ER-PM contacts and at MERCs.

SYNJ2BP is required for ER to mitochondria Ca2+ transfer

Based on the results obtained for ESYT1 and the significant increase of MERCs upon overexpression of the OMM ESYT1 partner SYNJ2BP, we next investigated the role of SYNJ2BP in mitochondrial Ca2+ dynamics (Fig 7). To do so, we compared control fibroblasts with SYNJ2BP KO human fibroblasts (two different clones) and fibroblasts overexpressing SYNJ2BP (either bulk cultures or a clone) (Fig 7). Similar to the loss of ESYT1, the absence of SYNJ2BP strongly decreased both the maximal mitochondrial Ca2+ concentration (Fig 7A and B) and the mitochondrial Ca2+ uptake rate (Fig 7C). SYNJ2BP overexpression, however, significantly increased mitochondrial Ca2+ uptake capacity upon histamine stimulation (Fig 7A-C). In contrast to ESYT1, the level of SYNJ2BP did not influence the cytosolic Ca2+ concentration upon histamine stimulation (Fig 7D-F). Finally, SYNJ2BP overexpression did not affect the levels of proteins involved in mitochondrial Ca2+ pumping (Fig 7G and H).
To better understand the effect of SYNJ2BP on mitochondrial Ca2+ uptake, we analyzed its role in MERC formation using an in situ proximity ligation assay (PLA), an established method to analyze MERCs (Fig 7I and J) (Tubbs & Rieusset, 2016). As seen in our TEM analysis (Fig 3A and B), overexpression of SYNJ2BP increased the number of MERCs, monitored by the increase in the number of PLA foci per cell compared with controls. In contrast, SYNJ2BP KO led to a reduction in the number of PLA foci per cell, indicating a decreased number of MERCs (Fig 7I and J). Together, these results confirm that the quantity of MERCs is proportional to the level of SYNJ2BP expression (Ilacqua et al, 2022; Pourshafie et al, 2022), which therefore strongly influences mitochondrial Ca2+ uptake capacity.

ESYT1 regulates mitochondrial lipid homeostasis

Mitochondrial lipid composition is distinct from that of other organelles (Funai et al, 2020) and plays a critical role in the regulation of mitochondrial and cellular homeostasis (Sassano et al, 2022; Ventura et al, 2022). The most abundant mitochondrial phospholipids are phosphatidylcholine (PC), phosphatidylethanolamine (PE), cardiolipin (CL), phosphatidylinositol (PI), and phosphatidylserine (PS). CL and PE are synthesized in the IMM, requiring the import of the precursor lipids phosphatidic acid (PA) and PS, respectively, from the ER membrane at MERCs. Indeed, numerous studies have highlighted the critical contribution of MERCs in generating a platform for efficient lipid exchange between the two organelles (Tamura et al, 2020).

As the ESYT1-SYNJ2BP complex controls MERC architecture, we investigated the role of ESYT1 in lipid transfer from the ER to mitochondria. We performed shotgun mass spectrometry lipidomics on purified mitochondria (Lipotype GmbH), allowing broad lipid coverage and absolute quantification. We compared control human fibroblasts (control, n = 3), ESYT1 KO fibroblasts (KO, n = 4), and ESYT1 KO fibroblasts expressing either ESYT1-Myc (Rescue, n = 6) or the ER-mitochondria artificial tether (Tether, n = 6). Over 1,484 lipid entities were identified and quantified, of which 149 were statistically different after filtering (Table S3). Multivariate data analysis using principal component analysis (Fig 8A) and hierarchical clustering with heatmap analysis (Fig S4A) showed tight clustering of the replicates and a clear separation between control, KO, and rescue conditions. ESYT1 and artificial tether overexpressing samples clustered together, suggesting that the mitochondrial lipid content is similar in these samples. Fig S4B shows the profile of the different lipid classes identified. The loss of ESYT1 resulted in a decreased proportion of the three main mitochondrial lipid categories CL, PE, and PI, which was accompanied by an increased proportion of PC (Fig 8B). Importantly, reintroduction of both ESYT1 and the artificial tether rescued this phenotype.

To investigate whether overexpression of ESYT1 or the artificial tether induced ER stress, potentially changing the ER lipid composition, we performed an immunoblot analysis to compare markers of ER stress in control fibroblasts, ESYT1 KO fibroblasts, and ESYT1 KO fibroblasts overexpressing ESYT1-Myc or the tether (Fig S4C). This showed no changes in the levels of several different markers of ER stress (GRP78, EIF2A, PERK) or cell death (PARP1, CAS7).

Together, these results demonstrate that ESYT1 is required for optimal lipid transfer from the ER to mitochondria, likely through its tethering function, as this phenotype is completely rescued by the artificial tether, suggesting that other lipid transport proteins are involved.
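For orientation, the class-level summary above reduces to converting absolute lipid amounts into mol% of the total lipidome per sample, and then filtering species by fold change and adjusted P-value as described in the Methods. The authors used Lipotype's LipotypeZoom tool for this; the Python sketch below only approximates the same arithmetic, and the table layout, column names, and the choice of a per-lipid t test are assumptions.

```python
# Illustrative sketch of the lipidomics summary statistics, not the authors' code.
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Hypothetical long-format table: one row per lipid per sample,
# with columns: lipid, class, sample, group, pmol (absolute amount).
df = pd.read_csv("lipidomics_pmol.csv")

# Mol% of the total lipid amount within each sample.
df["molpct"] = df.groupby("sample")["pmol"].transform(lambda x: 100 * x / x.sum())

# Class profiles (e.g., CL, PE, PI, PC): per-sample class sums, then group means.
per_sample = df.pivot_table(index=["group", "sample"], columns="class",
                            values="molpct", aggfunc="sum")
class_profile = per_sample.groupby(level="group").mean()

# Per-lipid control vs KO test with the paper's cut-offs:
# |fold change| >= 3 and Benjamini-Hochberg adjusted P < 0.05.
wide = df.pivot_table(index="lipid", columns=["group", "sample"], values="molpct")
ctrl, ko = wide["control"].to_numpy(), wide["KO"].to_numpy()
pvals = stats.ttest_ind(ctrl, ko, axis=1).pvalue
fold = ko.mean(axis=1) / ctrl.mean(axis=1)
padj = multipletests(pvals, method="fdr_bh")[1]
hits = wide.index[(padj < 0.05) & ((fold >= 3) | (fold <= 1 / 3))]
```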
Discussion

This study demonstrates that the ESYT1-SYNJ2BP tethering complex regulates essential physiological functions that occur at the mitochondria-ER interface. ESYT1 and SYNJ2BP localize to MAM subdomains where they interact in a high molecular weight complex, favouring the formation of MERCs. The two partners are interdependent in that localization of ESYT1 at mitochondria requires SYNJ2BP expression (Fig 3C), and the absence of ESYT1 reduces the effect of SYNJ2BP overexpression on MERC induction (Fig 3E). Loss of this tethering function results in reduced mitochondrial calcium uptake capacity and impaired mitochondrial lipid homeostasis. Thus, the ESYT1-SYNJ2BP complex fulfills all the essential criteria of a functional ER-mitochondria tether.

A challenge for the study of MERCs is the multiplicity of described tethers. Although one might predict that the loss of a single protein complex would not be sufficient to disrupt MERC structure and function, that is not what we observed for ESYT1-SYNJ2BP in this study. That appears to be a general observation for the other mammalian proteins that have been proposed to tether the two organelles: PDZD8 (Hirabayashi et al, 2017), the dually OMM- and ER-localized MFN2 (de Brito & Scorrano, 2008), and the OMM protein RMDN3, which interacts with the ER protein VAPB (De Vos et al, 2012; Stoica et al, 2014). All have been shown to regulate MERC formation, and loss of function in all cases can be rescued by an engineered ER-OMM linker (Gomez-Suaga et al, 2017; Hirabayashi et al, 2017; Hernández-Alvarez et al, 2019), indicating that each of these protein complexes constitutes an essential tether. Whether or how the loss of one tether affects the other tethering complexes remains unexplored, but loss of individual tethers is clearly sufficient to provoke abnormal cellular calcium dynamics and interorganellar lipid transport. These data suggest, at least in the cellular models where they have been studied, that compensatory mechanisms are not commonly up-regulated. This may not be the case in animal models. For instance, the loss of all three ESYTs does not affect mouse development, viability, fertility, brain structure, ER morphology, or synaptic protein composition (Sclip et al, 2016; Tremblay & Moss, 2016), so clearly, adaptive mechanisms exist. In fact, the loss of all ESYTs induces the expression of the lipid transfer proteins OSBPL5 and OSBPL8 and the SOCE-associated proteins ORAI1 and STIM1 (Tremblay & Moss, 2016). A mechanistic resolution of the interrelatedness of different tethering complexes will require further study.
The multiplicity of tether complexes also suggests the existence of different types of MERCs of variable composition, sustaining specific functions such as lipid transfer, calcium exchange, or regulation of apoptosis. We demonstrated that contact sites occupied by SYNJ2BP and MFN2 are independent and are likely physically and functionally different, because SYNJ2BP still promoted MERC formation in the absence of MFN2 (Fig S2B). We also show that, when overexpressed, SYNJ2BP can be part of two different complexes, with ESYT1 or RRBP1 (Fig 4), that localize in different areas of the mitochondrial network (Fig 3D), suggesting that SYNJ2BP may sustain multiple functions at MERCs. Moreover, whereas the loss of either ESYT1 or SYNJ2BP reduces the number and length of MERCs, only the overexpression of SYNJ2BP enhanced MERC formation, leading to the recruitment of ESYT1 to MERCs (Fig 3C and D) and increased mitochondrial Ca2+ uptake capacity (Fig 7). SYNJ2BP acts like a glue zipping the ER to mitochondria, the quantity of MERCs being proportional to the level of SYNJ2BP expression (Fig 7). Interestingly, it has recently been reported that SYNJ2BP-dependent MERCs are involved in the pathophysiology of neuronal and viral diseases (Duan et al, 2022; Pourshafie et al, 2022).

The function of ESYT1 at ER-PM contact sites has been extensively studied (Saheki, 2017). ESYT1 consists of an N-terminal hairpin-like transmembrane domain that anchors ESYT1 to the ER. The ESYT1 SMP domain binds and transports lipids in vitro (Bian et al, 2018), and the five C2 domains (A to E) bind Ca2+ and mediate interactions with phospholipids (Corbalan-Garcia & Gomez-Fernandez, 2014). Ca2+ binding to the C2C domain of ESYT1 enables the binding of the C2E domain to PI(4,5)P2-rich membranes at the PM. It has been previously suggested that ESYT1 ER-PM tethering is activated by, and reinforces, SOCE (Giordano et al, 2013; Maleth et al, 2014; Idevall-Hagren et al, 2015; Kang et al, 2019). A recent study demonstrated that ESYT1 deletion impacts SOCE in a cell-type-specific manner, and that this phenotype is independent of its ER-PM tethering function (Woo et al, 2020). Our results in human fibroblasts confirmed that the loss of ESYT1 impairs SOCE (Fig 6). The implication of ESYT1 could then be explained by its function in the distribution and replenishment of PIP2 at the ER-PM junctions (Chang et al, 2013; Maleth et al, 2014; Kang et al, 2019). Interestingly, the reintroduction of an artificial ER-mitochondria tether did not resolve either the cytosolic or the ER Ca2+ phenotype caused by the loss of ESYT1, but fully rescued the mitochondrial Ca2+ impairment, highlighting the additional function of ESYT1 as a tether at MERCs.

Loss of ESYT1 altered mitochondrial lipid composition, with significant decreases in the proportions of CL, PE, and PI, which, in addition to being among the most abundant lipids in mitochondrial membranes (Funai et al, 2020), are essential for normal mitochondrial physiology (Belikova et al, 2006; Acin-Perez et al, 2008; Bottinger et al, 2012; Raemy & Martinou, 2014; Hsu et al, 2015; Acoba et al, 2020). The observation that the artificial tether was able to rescue this phenotype suggests that although ESYT1 is not strictly required for lipid transfer from the ER to mitochondria, it is essential for optimal lipid transfer through its tethering property. It is possible that the mechanical tethering provided by ESYT1 organizes specialized membrane domains that serve as platforms to recruit other lipid transport proteins.
Several proteins have been proposed to participate in lipid exchange between the ER and mitochondria in mammals, including RMDN3 (Yeo et al, 2021) and MFN2 (Hernández-Alvarez et al, 2019). Of particular interest, VPS13D is present at MERCs, binds the OMM GTPase RHOT2 (Guillen-Samander et al, 2021), and has been proposed to link the ER to mitochondria and support lipid transfer (Guillen-Samander et al, 2021). OSBPL5 and OSBPL8 were shown to localize to MAMs, their loss leading to mitochondrial morphology and respiration defects (Galmes et al, 2016). OSBPL5 and OSBPL8 bind to the mitochondrial intermembrane bridging/mitochondrial contact sites and cristae junction organizing system complexes, where they mediate non-vesicular transport of PS from the ER to mitochondria (Monteiro-Cardoso et al, 2022). Interestingly, we found VPS13D and VPS13A as proximity interactors of SYNJ2BP. Likewise, we found OSBPL8 as a proximity interactor of ESYT1, suggesting a potential partnership between ESYT1 as a tether and the lipid transport protein OSBPL8.

A recent study (Leterme & Michaud, 2023; Sassano et al, 2023) suggested that ESYT1 is recruited to MERCs by the ER protein PERK, independently of its kinase activity, but an OMM partner was not identified. The loss of either partner, ESYT1 or PERK, impaired ER-mitochondria lipid transfer; however, only the loss of the latter affected the quantity of MERCs and mitochondrial Ca2+ uptake. It was concluded that ESYT1 is not involved in MERC tethering but actively transports lipids through its SMP domain. That study and ours highlight a new and previously unappreciated role of ESYT1 at MERCs, and the differences between them may reflect the cellular models investigated (HeLa cells and shRNA-mediated knockdown vs fibroblasts and CRISPR-Cas9-mediated KO).

The molecular mechanisms that regulate SYNJ2BP-ESYT1 complex formation remain unknown. SYNJ2BP is a C-terminal tail-anchored OMM protein with a PDZ domain facing the cytosol (Hung et al, 2017). PDZ domains are small globular protein-protein interaction domains that bind the C-terminus of partner proteins. Some PDZ domains can also bind phosphatidylinositides, especially PI(4,5)P2, and cholesterol (Liu & Fuentes, 2019), suggesting a synergistic binding of PDZ domains to phosphatidylinositide lipids and proteins (Pemberton & Balla, 2019). This raises the possibility that the binding of ESYT1 to SYNJ2BP could involve an interaction with PI(4,5)P2 at the surface of the OMM, a hypothesis that will require further investigation.

Figure 8. Sucrose bilayer purified mitochondria from control human fibroblasts (control, n = 3), ESYT1 KO fibroblasts (KO, n = 4), and ESYT1 KO fibroblasts expressing either ESYT1-Myc (Rescue, n = 6) or a mitochondria-ER artificial tether (Tether, n = 6) were analyzed for absolute quantification of lipid content using shotgun mass spectrometry lipidomics. (A) PCA analysis of individual samples. Lipid species mol% were used as input data. (B) Lipid class profile of cardiolipins (CL), phosphatidylethanolamines (PE), phosphatidylinositols (PI), and phosphatidylcholines (PC). Data are presented as molar % of the total lipid amount (mol%). One-way ANOVA with multiple comparisons analysis was applied. Error bars represent mean ± SEM. ns: not significant, *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001.
Generation of KO and overexpression cell lines

KO cell lines of ESYT1 and SYNJ2BP were generated by CRISPR-Cas9-mediated gene editing in human fibroblast cells. The gene-specific target sequences 5′-GTTCTTTCTCGTCGCGGACC-3′ for ESYT1 and 5′-GAAGAGATCAATCTTACCAG-3′ for SYNJ2BP were cloned into pSpCas9(BB)-2A-Puro (PX459) V2.0 (62988; Addgene) (Ran et al, 2013) and transfected into cells with Lipofectamine 3000 (Thermo Fisher Scientific) according to the manufacturer's instructions. The following day, transfected cells were selected by the addition of puromycin (2.5 μg/ml) for 2 d. Individual clones were screened for loss of the target protein by immunoblotting, and frameshift mutations were confirmed by genomic sequencing. Cells stably overexpressing ESYT1-3xFLAG, ESYT1-Myc, SYNJ2BP, and the artificial tether (Hirabayashi et al, 2017) were engineered by retroviral infection with virus produced in Phoenix cells transfected with pLXSH-Hygro plasmids, as described previously (Weraarpachai et al, 2009). The artificial tether plasmid (blue fluorescent protein with the OMM-targeting sequence of mAKAP1 at the N-terminus and the ER-targeting sequence of yUBC6 at the C-terminus) was a kind gift from Franck Polleux, and was engineered based on the original artificial tether from Csordas et al (2006). Flp-In T-REx 293 stable cell lines were generated as previously described (Antonicka et al, 2020).

For selection of stable Flp-In T-REx 293 expressing clones, a previously described procedure was used, and representative images for all baits are shown in Fig S1 (Antonicka et al, 2020).

Immunofluorescence

For immunofluorescence experiments, cells plated on coverslips 24 h before the experiment were fixed using 4% formaldehyde in PBS for 20 min at 37°C. Coverslips were washed three times with PBS, and cells were permeabilized in 0.1% Triton in PBS for 15 min at room temperature. After three washes with PBS, coverslips were blocked in PBS containing 5% BSA for 30 min, incubated with primary antibodies for 1 h at room temperature, washed three times with PBS, and incubated with Alexa-conjugated secondary antibodies (1:2,000) and DAPI (1:2,000) for 30 min at room temperature. Coverslips were washed three times with PBS and mounted with Fluoromount-G (Thermo Fisher Scientific). Cells were imaged with an Olympus IX83 microscope connected to a Yokogawa CSU-X confocal scanning unit, using a UPLANSAPO 100x/1.40 oil objective (Olympus) and an Andor Neo sCMOS camera. Images were processed in Fiji (Schindelin et al, 2012).

BioID sample preparation, mass-spec data acquisition, and MS data analysis

BioID analysis, mass spectra acquisition, and MS data analysis were performed as described previously (Antonicka et al, 2020). For analysis with SAINT, only proteins with an iProphet protein probability >0.95 were considered, which corresponds to an estimated protein-level FDR of ~0.5%. A minimum of two detected peptide ions was required. SAINTexpress analysis was performed using version exp3.6.3 with two biological replicates per bait. The SAINT analysis included 50 negative control runs used previously in a study by Antonicka et al (2020), consisting of untransfected Flp-In T-REx 293 cells (to detect endogenously biotinylated proteins) and BirA*-FLAG-GFP cells (to detect preys that become promiscuously biotinylated). A threshold of 1% Bayesian false discovery rate was used to select high-confidence proximity interactors (Table S1). All non-human protein contaminants were removed from the SAINT file.
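As a concrete illustration of the interactor-selection step just described, the sketch below applies the 1% Bayesian FDR threshold to a SAINTexpress output table. The column names ("Bait", "PreyGene", "BFDR") follow common SAINTexpress output conventions, and the contaminant prefix is an invented placeholder; this is not the authors' actual script.

```python
# Illustrative filter for high-confidence proximity interactors.
import pandas as pd

saint = pd.read_csv("saint_output.txt", sep="\t")  # hypothetical SAINTexpress export

# Keep preys passing the 1% Bayesian false discovery rate threshold.
hits = saint[saint["BFDR"] <= 0.01]

# Drop non-human contaminant entries (here flagged with an assumed prefix).
hits = hits[~hits["PreyGene"].str.startswith("CONTAM_")]

# Number of high-confidence interactors per bait.
print(hits.groupby("Bait")["PreyGene"].nunique())
```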
Databases used for analysis

MitoCarta 3.0 (Rath et al, 2021) was used for annotation of detected preys as mitochondrial proteins. The PANTHER 17.0 database was used for Gene Ontology annotations (GO database released 22/03/2022).

BioID data visualization

BioID data were visualized using the ProHits-viz (Knight et al, 2017) analysis tool. For all analyses, the average spectrum (AvgSpec) was used as the abundance measure, and subtraction of the spectral counts across the controls was performed. The spectral counts for each prey were normalized to the prey sequence length.

For the ESYT1 specificity plot, a file combining BioID data of all SMP-domain proteins was used as the input file, and the specificity module was used. For the ESYT1 versus ER_BirA* comparison plot, a file combining BioID data of all SMP-domain proteins, ER_BirA*, OMM_BirA*, and tether_BirA* was used as the input file, and the condition-condition module was used. For the dot plot graph, a file combining BioID data of ESYT1, SYNJ2BP, ER_BirA*, OMM_BirA*, and tether_BirA* was used. The figures were annotated and colour-coded using the visualization module of ProHits-viz. Venn diagrams were created using either Venny 2.1 (https://bioinfogp.cnb.csic.es/tools/venny/index.html) or https://bioinformatics.psb.ugent.be/webtools/Venn/.

ESYT1-FLAG immunoprecipitation

The heavy membrane fraction from human fibroblasts overexpressing ESYT1-FLAG was lysed in lysis buffer (10 mM Tris pH 7.5, 150 mM NaCl, 1% DDM + protease inhibitor) for 20 min at 4°C and centrifuged for 15 min at 20,000g, and the supernatant was collected. This extract was precleared overnight at 4°C with rotational mixing with rinsed naked beads (Dynabeads Protein A; Invitrogen). Beads for immunoprecipitation were incubated overnight at 4°C with rotational mixing with the FLAG antibody in Na-phosphate pH 8/0.08% Tween 20 buffer, washed three times with 0.1 M Na-phosphate/0.08% Tween 20 pH 8 buffer, and washed two times with 0.2 M TEA/0.08% Tween 20 pH 8. The antibody was cross-linked to the beads using DMP (dimethyl pimelimidate dihydrochloride) in 0.2 M TEA/0.08% Tween 20 pH 8 (5.4 mg/ml) for 30 min with rotational mixing at room temperature. The reaction was stopped by adding 50 mM Tris/0.08% Tween 20 pH 7.5 and incubating for 15 min at room temperature with rotational mixing. Beads were washed three times with PBS/0.08% Tween 20 pH 8; antibody that had not cross-linked was removed by eluting twice with 0.1 M glycine/0.08% Tween 20 pH 2.5 with rotational mixing at room temperature for 10 min each time. Beads were finally washed three times with PBS/0.08% Tween 20 pH 8 and incubated with the precleared extract overnight at 4°C with rotational mixing. Naked beads treated the same way were used as a negative control. Beads were then washed two times with lysis buffer, two times with high salt buffer (10 mM Tris pH 7.5, 450 mM NaCl, 0.1% DDM), and two times with low salt buffer (10 mM Tris pH 7.5, 150 mM NaCl, 0.1% DDM). Immunoprecipitated proteins were eluted twice with 0.1 M glycine/0.5% DDM pH 2.5 at 50°C for 15 min. Physiological pH was restored by adding 1 M Tris pH 7.5. Proteins were precipitated with trichloroacetic acid and sent for mass spectrometry analysis on an Orbitrap (Thermo Fisher Scientific) at the Institut de Recherches Cliniques de Montréal.
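To make the preprocessing for ProHits-viz explicit, the sketch below restates the two operations described above: subtraction of control spectral counts and normalization to prey sequence length. ProHits-viz performs these steps internally, so this is only to make the arithmetic visible; the file and column names are illustrative assumptions.

```python
# Illustrative control subtraction and length normalization of AvgSpec values.
import pandas as pd

data = pd.read_csv("bioid_avgspec.csv")         # columns: bait, prey, AvgSpec, length
controls = pd.read_csv("controls_avgspec.csv")  # columns: prey, ctrl_AvgSpec

merged = data.merge(controls, on="prey", how="left").fillna({"ctrl_AvgSpec": 0})

# Subtract control spectral counts, flooring at zero.
merged["corrected"] = (merged["AvgSpec"] - merged["ctrl_AvgSpec"]).clip(lower=0)

# Normalize to prey sequence length (spectral counts per residue).
merged["norm_spec"] = merged["corrected"] / merged["length"]
```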
Mouse liver fractionation

C57BL/6N male mice were obtained from Jackson Laboratories, and liver harvesting and animal handling were approved and performed in accordance with the Montreal Neurological Institute Animal Care Committee regulations. The fractionation was performed as described in the study by Aaltonen et al (2022).

Heavy-membrane preparation and sucrose bilayer mitochondrial purification

For heavy-membrane fraction preparation, cells were rinsed twice, resuspended in ice-cold ST buffer (250 mM sucrose, 10 mM Tris-HCl pH 7.4) with Complete protease inhibitor cocktail (Roche), and homogenized with 10 passes of a prechilled, zero-clearance homogenizer (Kimble/Kontes). A postnuclear supernatant was obtained by centrifugation of the samples twice for 10 min at 600g. Heavy membranes were pelleted by centrifugation for 10 min at 10,000g and washed once in the same buffer. Protein concentration was determined by Bradford assay.

For sucrose bilayer mitochondrial purification, heavy-membrane fractions were resuspended in ST buffer, loaded on top of a sucrose bilayer (1 ml of 1 M sucrose in ST buffer on top of 1 ml of 1.7 M sucrose in ST buffer), and centrifuged for 40 min at 70,000g. The band at the sucrose bilayer interface, containing pure mitochondria, was harvested, diluted in ST buffer, and centrifuged for 10 min at 12,000g. The pellet was then washed once with ST buffer. Protein concentration was determined by Bradford assay.

SDS-PAGE, BN-PAGE, two-dimensional electrophoresis, and Western blot

Blue-Native PAGE (BN-PAGE) was used to separate individual protein complexes. Heavy membranes were solubilized with 1% dodecyl maltoside, or with 8 mg/ml digitonin for the MCU and IP3R complexes. Solubilized samples (10-20 μg) were run in the first dimension on 6-15% polyacrylamide gradient gels as described in detail previously (Leary & Sasarman, 2009). For second-dimension analysis, BN-PAGE/SDS-PAGE was carried out as detailed previously (Antonicka et al, 2003).

SDS-PAGE was used to separate denatured whole-cell extracts, heavy membranes, or mouse fractionation samples. In general, whole cells were extracted with 1.5% lauryl maltoside in PBS, after which 20 μg of protein was run on either 10%, 12%, or 15% polyacrylamide gels.

Separated proteins were transferred to a nitrocellulose membrane (PALL) and subsequently incubated with the indicated primary and secondary antibodies in 5% skim-milk Tris-buffered saline solution with 0.1% Tween 20.
TEM analysis

Cells were washed in 0.1 M Na cacodylate washing buffer (Electron Microscopy Sciences) and fixed in 2.5% glutaraldehyde (Electron Microscopy Sciences) in 0.1 M Na cacodylate buffer overnight at 4°C. Cells were then washed three times in 0.1 M Na cacodylate washing buffer for a total of 1 h, incubated in 1% osmium tetroxide (Mecalab) for 1 h at 4°C, and washed with ddH2O three times for 10 min. Dehydration was then performed in a graded series of ethanol/deionized water solutions from 30% to 90% for 8 min each, and 100% twice for 10 min each. The cells were then infiltrated with 1:1 and 3:1 Epon 812 (Mecalab):ethanol mixtures, each for 30 min, followed by 100% Epon 812 for 1 h. Cells were embedded in the culture wells with 100% Epon 812 and polymerized overnight in an oven at 60°C. Polymerized blocks were trimmed, and 100-nm ultrathin sections were cut with an Ultracut E ultramicrotome (Reichert Jung) and transferred onto 200-mesh Cu grids (Electron Microscopy Sciences). Sections were post-stained for 8 min with 4% aqueous uranyl acetate (Electron Microscopy Sciences) and 5 min with Reynolds' lead citrate (Thermo Fisher Scientific). Samples were imaged with an FEI Tecnai-12 transmission electron microscope (FEI Company) operating at an accelerating voltage of 120 kV, equipped with an XR-80C AMT 8-megapixel CCD camera. Based on the images, MERC characteristics (number, length, mitochondrial perimeter coverage) were measured using ImageJ software. ER-OMM appositions with a distance between 10 and 80 nm were selected, manually traced, and quantified using ImageJ software.

PLA

A PLA (Duolink PLA; Merck) was used to analyze the interaction of characterized ER- and mitochondria-resident proteins that interact at MAMs, namely voltage-dependent anion channel 1 (ab14734; Abcam) and IP3R1 (ab264281; Abcam) (Tubbs & Rieusset, 2016). Cells were cultured on coverslips in 24-well plates, fixed in 5% PFA for 10 min at 37°C, quenched using 50 mM ammonium chloride, and permeabilized with 0.1% Triton X-100 in PBS for 10 min. Between each step, cells were washed three times in PBS. Cells were blocked in Duolink blocking solution and incubated in a humidified chamber at 37°C for 1 h. Primary antibodies were diluted in Duolink antibody diluent and incubated at 4°C overnight. The next day, cells were washed twice with PBS for 5 min and probed with the appropriate secondary antibodies coupled to the template DNA strands at 37°C for 1 h. The template DNA strands on each antibody pair were ligated by a DNA ligase at 37°C for 30 min. Cells were washed twice with PBS for 5 min at RT, and rolling circle DNA amplification was then initiated using a DNA polymerase and fluorescent nucleotides, enabling detection by confocal microscopy. Cells were washed twice in PBS for 10 min and once in ddH2O for 1 min before being mounted onto glass slides using mounting media containing 4′,6-diamidino-2-phenylindole (DAPI) (ProLong Diamond; Invitrogen). At least 20 cells were analyzed from three independent experiments.
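The TEM quantification described above amounts to simple per-mitochondrion arithmetic on the manually traced segments. A minimal sketch, assuming the MERC segment lengths and mitochondrial perimeter for each mitochondrion have been exported (for example, from ImageJ ROI measurements):

```python
# Illustrative computation of the MERC metrics reported in Fig 2C.
def merc_metrics(merc_lengths_nm, mito_perimeter_nm):
    """Return MERC count, mean MERC length (nm), and % of the
    mitochondrial perimeter covered by ER for one mitochondrion."""
    n = len(merc_lengths_nm)
    mean_len = sum(merc_lengths_nm) / n if n else 0.0
    coverage_pct = 100.0 * sum(merc_lengths_nm) / mito_perimeter_nm
    return n, mean_len, coverage_pct

# Example: one mitochondrion with two traced ER contacts (values are made up).
print(merc_metrics([180.0, 240.0], mito_perimeter_nm=3500.0))
```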
Lipidomics MS data acquisition

Samples were analyzed by direct infusion on a QExactive mass spectrometer (Thermo Fisher Scientific) equipped with a TriVersa NanoMate ion source (Advion Biosciences). Samples were analyzed in both positive and negative ion modes with a resolution of Rm/z=200 = 280,000 for MS and Rm/z=200 = 17,500 for MSMS experiments, in a single acquisition. MSMS was triggered by an inclusion list encompassing corresponding MS mass ranges scanned in 1-Da increments (Surma et al, 2015). Both MS and MSMS data were combined to monitor CE, DAG, and TAG ions as ammonium adducts; PC and PC O- as acetate adducts; and CL, PA, PE, PE O-, PG, PI, and PS as deprotonated anions. MS only was used to monitor LPA, LPE, LPE O-, LPI, and LPS as deprotonated anions, and Cer, HexCer, SM, LPC, and LPC O- as acetate adducts.

Lipidomics data analysis and post-processing

Data were analyzed with Lipotype's in-house developed lipid identification software based on LipidXplorer (Herzog et al, 2011; Herzog et al, 2012). Data post-processing and normalization were performed using Lipotype's in-house developed data management system. Only lipid identifications with a signal-to-noise ratio >5 and a signal intensity fivefold higher than in corresponding blank samples were considered for further data analysis.

Lipidomics statistical analysis

Lipidomics result analysis was performed using the integrative tool LipotypeZoom from Lipotype. Lipids were selected with a cut-off of absolute fold change ≥ 3 and a P-value < 0.05 with a Benjamini & Hochberg adjustment.

Intracellular calcium analysis

Cells were seeded on a Nunc Lab-Tek chambered eight-well cover glass (Thermo Fisher Scientific). To measure mitochondrial, cytosolic, and ER calcium content, cells were transfected respectively with plasmids encoding a mitochondria-targeted GECI (CEPIA-2mt), a cytosolic-targeted GECI (R-GECO) or cytosolic-targeted FluoForte, and an ER-targeted GECI (R-CEPIA1er) (Suzuki et al, 2014) using Fugene HD, following the manufacturer's instructions. 24 h after transfection, cells were washed three times in BSS buffer (120 mM NaCl, 5.4 mM KCl, 0.8 mM MgCl2, 6 mM NaHCO3, 5.6 mM D-glucose, 2 mM CaCl2, and 25 mM HEPES [pH 7.3]) before analysis. Fluorescence values were then collected every 2 s, and cells were stimulated with 10 μM histamine in BSS. Fluorescence was recorded for 3 min using the 40x objective of a Nikon Eclipse Ti-E microscope on an Andor Dragonfly spinning disk confocal system coupled with an Andor iXon camera, exciting with a 488 nm or 568 nm laser for CEPIA-2mt/G-CEPIA1ER or R-GECO, respectively. Changes in fluorescence (ΔF) from each fluorescent calcium probe were normalized to the basal signal before histamine stimulation (F0).
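The trace normalization just described is a standard ΔF/F0 computation. The sketch below shows the arithmetic on a single ROI trace sampled every 2 s; the file name and stimulus frame are illustrative, and the uptake-rate estimate is a simple numerical derivative rather than the authors' exact procedure.

```python
# Illustrative ΔF/F0 normalization of a calcium-probe trace.
import numpy as np

def delta_f_over_f0(trace, stim_frame):
    """Normalize a fluorescence trace to its pre-stimulus baseline."""
    trace = np.asarray(trace, dtype=float)
    f0 = trace[:stim_frame].mean()   # baseline before histamine addition
    return (trace - f0) / f0

trace = np.loadtxt("cepia2mt_roi.txt")       # hypothetical ROI intensity trace
dff = delta_f_over_f0(trace, stim_frame=15)  # stimulus at ~30 s (2-s sampling)
peak = dff.max()                             # maximal ΔF/F0 (cf. Fig 5B)
rate = np.gradient(dff, 2.0).max()           # crude uptake rate per second
```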
To analyze store-operated calcium entry (SOCE), cells were first seeded on a Nunc Lab-Tek chambered eight-well cover glass (Ibidi). The cells were washed three times in Ca2+-free BSS and incubated in Ca2+-free BSS for 1 h. The cells were then incubated with FluoForte (5 μM) in Ca2+-free BSS for 15 min at 37°C. Post-incubation, cells were washed three times in Ca2+-free BSS. Fluorescence values were then collected every 5 s; ER calcium store depletion was induced through the inhibition of SERCA by thapsigargin (10 μM) at t = 0.5 min. Upon ER calcium store depletion, SOCE was activated by addition of exogenous CaCl2 (2 mM) at t = 5 min. Fluorescence was recorded for 7 min using the 40x objective of a Nikon Eclipse Ti-E microscope on an Andor Dragonfly spinning disk confocal system coupled with an Andor iXon camera, exciting with a 488 nm laser.

Data Availability

The dataset consisting of raw files and associated peak lists and results files has been deposited in ProteomeXchange (http://www.proteomexchange.org, accession number PXD046094) and in MassIVE (https://massive.ucsd.edu, accession number MSV000093090). Additional files include the sample description, the peptide/protein evidence, the complete SAINTexpress output for the dataset, and a "README" file that describes the dataset composition and the experimental procedures associated with the submission.

Figure 1. ESYT1 localizes to mitochondria-ER contact sites where it interacts with SYNJ2BP. (A) Specificity plot of ESYT1-N-ter BioID analysis indicates the specific proximity interaction with SYNJ2BP. The specificity denotes the fold enrichment of the spectral counts detected for each prey in the ESYT1 BioID compared with the spectral counts for that prey in all other baits in the dataset (all four SMP proteins). Prey names for the most specific preys and for preys with the highest length-normalized spectral counts are indicated. Preys are colour-coded based on their GO term cellular compartment analysis. MitoCarta3.0 proteins are SYNJ2BP, FKBP8, and ALDH3A2. (B) Proximity interaction between known (and predicted) ER-mitochondrial tethers with indicated baits (BFDR ≤ 0.01). The colour of each circle represents the prey-length normalized average spectra detected for the indicated protein by each bait, and the size of the circle represents the relative average spectra across the baits analyzed in this dataset. The SAINT analysis excludes self-detection of the bait protein as a prey, represented as X in the graph. (C) Confocal microscopy images of endogenous ESYT1 localization (magenta) in human fibroblasts stably overexpressing SEC61B-mCherry as an ER marker (green). Staining for endogenous PRDX3 serves as a mitochondrial marker (cyan). Yellow arrows point to foci of ESYT1 colocalizing with both ER and mitochondria. Scale bar = 5 μm. (D) Line scan of fluorescence intensities demonstrating focal accumulations of endogenous ESYT1 along the ER network that partially colocalize with mitochondria (A.U. = arbitrary units). (E) Quantitative confocal microscopy analysis of endogenous ESYT1 localization in control human fibroblasts stably overexpressing SEC61B-mCherry as an ER marker, labelled with ESYT1 and with TOMM40 as a mitochondrial marker. Percentage of ESYT1 signal colocalizing with mitochondria and percentage of mitochondria positive for ESYT1 were assessed. Results are expressed as means ± S.D.
(n = 32). (F) Subcellular localization of endogenous ESYT1 and SYNJ2BP. Mouse liver was fractionated, and the fractions were analyzed by SDS-PAGE and immunoblotting. SIGMAR1 and IP3R1 are MAM markers, PRDX3 is a mitochondrial matrix marker, CARD19 is an outer mitochondrial membrane marker, PDI is an ER marker, and UBB is a cytosol marker. The percentage of ESYT1, SIGMAR1, and PDI signal in each fraction is shown. (G) ESYT1 protein levels in control human fibroblasts, three individual clones of ESYT1 knock-out fibroblasts, and fibroblasts overexpressing ESYT1-3xFLAG. Whole-cell lysates were analyzed by SDS-PAGE and immunoblotting. SDHA was used as a loading control. (H) Characterization of the ESYT1 complexes. Heavy membrane fractions were isolated from control human fibroblasts, ESYT1 knock-out fibroblasts, and fibroblasts overexpressing ESYT1-3xFLAG, solubilized with 1% DDM, and analyzed by blue native PAGE.
Figure 2. Loss of ESYT1 decreases MERCs. (A) ESYT1 protein levels in control human fibroblasts, ESYT1 knock-out fibroblasts, and ESYT1 knock-out fibroblasts expressing ESYT1-Myc. Whole-cell lysates were analyzed by SDS-PAGE and immunoblotting. VDAC1 was used as a loading control. (B) Transmission electron microscopy images of control human fibroblasts, ESYT1 knock-out fibroblasts, and ESYT1 knock-out fibroblasts expressing ESYT1-Myc. (C) Quantitative analysis of mitochondria-ER contact sites (MERCs) from the TEM images: number of MERCs per mitochondrion, length of MERCs (nm), coverage of the mitochondrial perimeter by ER (%), and mitochondrial perimeter (nm). Results are expressed as means ± S.D. Images in each condition were analyzed (n = 38), totaling 245 mitochondria for control cells, 154 mitochondria for KO cells, and 224 mitochondria for rescued cells. Kruskal-Wallis and post hoc multiple comparisons tests were applied, ns: nonsignificant, *P < 0.05, ****P < 0.0005.

Figure 4.
SYNJ2BP is present in a high-molecular weight complex with ESYT1. (A) Characterization of ESYT1 and SYNJ2BP complexes. Heavy-membrane fractions from control human fibroblasts, SYNJ2BP knock-down fibroblasts, fibroblasts overexpressing SYNJ2BP, and fibroblasts overexpressing SYNJ2BP together with a 3xFLAG-tagged version of ESYT1 were analyzed by blue native PAGE. Samples were run in duplicate on the same gel and immunoblotted with anti-SYNJ2BP (left) and anti-ESYT1 antibodies (right). Lower horizontal line: 410 kD complex where both SYNJ2BP and ESYT1 run. Higher horizontal line: higher molecular weight complex observed when SYNJ2BP is overexpressed together with a 3xFLAG-tagged version of ESYT1. (B) Two-dimensional electrophoresis analysis (BN-PAGE/SDS-PAGE) of SYNJ2BP-interacting proteins in control human fibroblasts and fibroblasts overexpressing SYNJ2BP. The migration of known protein complexes in the first dimension is indicated at the top of the blot (UQCRC1: OXPHOS complex III at 500 kD, NDUFA9: OXPHOS complex I at 1,000 kD). The position of identified SYNJ2BP-containing complexes and their alignment with ESYT1- and RRBP1-containing complexes are indicated with grey lines. (C) Characterization of ESYT1, SYNJ2BP, and RRBP1 complexes. Heavy-membrane fractions from fibroblasts overexpressing SYNJ2BP or fibroblasts overexpressing SYNJ2BP in which either ESYT1 or RRBP1 was knocked down were analyzed by Blue-Native PAGE. Samples were run in triplicate on the same gel and immunoblotted with anti-ESYT1 (left), anti-SYNJ2BP (center), and anti-RRBP1 antibodies (right). (D) RRBP1, ESYT1, and SYNJ2BP protein levels in fibroblasts overexpressing SYNJ2BP, untreated or treated with puromycin (200 μM for 2 h and 30 min). Whole-cell lysates were analyzed by SDS-PAGE and immunoblotting. CCDC47 was used as a loading control. (E) Two-dimensional electrophoresis analysis (BN-PAGE/SDS-PAGE) of SYNJ2BP-interacting proteins in fibroblasts overexpressing SYNJ2BP, untreated or treated with puromycin (200 μM for 2 h and 30 min). The position of identified SYNJ2BP-containing complexes and their alignment with ESYT1- and RRBP1-containing complexes are indicated with grey lines.

Figure 5.
ESYT1 is required for ER to mitochondria Ca2+ transfer in HeLa cells. (A) Trace of mitochondrial [Ca2+] upon histamine stimulation (100 μM) in control HeLa cells, cells knocked down for ESYT1, and cells knocked down for ESYT1 that express an artificial ER-mitochondria tether. All cells express the mitochondrial Ca2+ probe CEPIA-2mt. (B) Quantification of the maximal fluorescence intensity fold change (ΔF/F0) of CEPIA-2mt induced by histamine. Results are expressed as mean ± SD; from >50 cells per condition; n = 3 independent experiments. ns: not significant; *P < 0.05 (Tukey's multiple comparisons test). (C) Trace of cytosolic [Ca2+] upon thapsigargin treatment (10 μM) in control HeLa cells, cells knocked down for ESYT1, and cells knocked down for ESYT1 that express an artificial ER-mitochondria tether. All cells express the cytosolic Ca2+ probe R-GECO. (D) Quantification of the maximal fluorescence intensity fold change (ΔF/F0) of R-GECO upon thapsigargin treatment. Results are expressed as mean ± SD; from >50 cells per condition; n = 3 independent experiments. ns: not significant (Tukey's multiple comparisons test). (E) Whole-cell lysates of control HeLa cells, cells knocked down for ESYT1, and cells knocked down for ESYT1 that express an artificial ER-mitochondria tether were analyzed by SDS-PAGE and immunoblotting. Vinculin was used as a loading control. (E, F) Quantification of three independent experiments as in panel (E). The graphs show the signal normalized to vinculin, relative to control. Results are expressed as means ± S.D. Two-way ANOVA with a Dunnett correction for multiple comparisons was performed. *P < 0.05.

Figure 7. SYNJ2BP is required for ER to mitochondria Ca2+ transfer. (A) Trace of mitochondrial-aequorin measurements of mitochondrial Ca2+ upon histamine stimulation (100 μM) in control human fibroblasts, SYNJ2BP knock-out fibroblasts (clones 1 and 2), and fibroblasts overexpressing SYNJ2BP (clone and bulk). (B) Quantification of maximal mitochondrial Ca2+. Results are expressed as mean ± SD. From >50 cells per condition; n = 4 independent experiments. ns: not significant; *P < 0.05; **P < 0.01; ****P < 0.0001 (Tukey's multiple comparisons test). (C) Quantification of the rate of mitochondrial Ca2+ uptake. Results are expressed as mean ± SD. From >50 cells per condition; n = 4 independent experiments. ns: not significant; *P < 0.05; ****P < 0.0001 (Tukey's multiple comparisons test). (D) Trace of cytosolic-aequorin measurements of cytosolic Ca2+ upon histamine stimulation (100 μM) in control human fibroblasts, SYNJ2BP knock-out fibroblasts (clones 1 and 2), and fibroblasts overexpressing SYNJ2BP (clone and bulk). (E) Quantification of maximal cytosolic Ca2+. Results are expressed as mean ± SD. From >50 cells per condition; n = 4 independent experiments. ns: not significant (Tukey's multiple comparisons test). (F) Quantification of the rate of cytosolic Ca2+ uptake. Results are expressed as mean ± SD. From >50 cells per condition; n = 4 independent experiments. ns: not significant (Tukey's multiple comparisons test). (G) Whole-cell lysates of control human fibroblasts, SYNJ2BP knock-out fibroblasts (clones 1 and 2), and fibroblasts overexpressing SYNJ2BP (clone and bulk) were analyzed by SDS-PAGE and immunoblotting. Vinculin was used as a loading control. (G, H) Quantification of three independent experiments as in panel (G). The graphs show the signal normalized to vinculin, relative to control. Results are expressed as means ± S.D.
Two-way ANOVA with a Dunnett correction for multiple comparisons was performed. ns: not significant. (I) Representative confocal images of the PLA experiment in control human fibroblasts, SYNJ2BP knock-out fibroblasts (clones 1 and 2), and fibroblasts overexpressing SYNJ2BP (clone and bulk). Anti-VDAC1 and anti-IP3R1 were used as primary antibodies in the assay. Scale bars represent 20 μm. (I, J) Quantification of the average number of PLA foci per cell corresponding to (I). At least 20 cells were quantified per condition per independent experiment; n = 3 independent experiments. Error bars represent mean ± SD. *P < 0.05, **P < 0.01, ***P < 0.001.
A New Motif Necessary and Sufficient for Stable Localization of the δ2 Glutamate Receptors at Postsynaptic Spines*

The number of each subclass of ionotropic glutamate receptors (iGluRs) at the spines is differentially regulated either constitutively or in a neuronal activity-dependent manner. The δ2 glutamate receptor (GluRδ2) is abundantly expressed at the spines of Purkinje cell dendrites and controls synaptic plasticity in the cerebellum. To obtain clues to the trafficking mechanism of the iGluRs, we expressed wild-type or mutant GluRδ2 in cultured hippocampal and Purkinje neurons and analyzed their intracellular localization using immunocytochemical techniques. Quantitative analysis revealed that deletion of the 20 amino acids at the center of the C terminus (region E) significantly reduced the amount of GluRδ2 protein at the spines in both types of neurons. This effect was partially antagonized by the inhibition of endocytosis by high dose sucrose treatment or coexpression of dominant negative dynamin. In addition, mutant GluRδ2 lacking the E region (GluRδ2ΔE), but not wild-type GluRδ2, was found to colocalize with the endosomal markers Rab4 and Rab7. Moreover, the antibody-feeding assay revealed that GluRδ2ΔE was internalized more rapidly than GluRδ2wt. These results indicate that the E region (more specifically, a 12-amino-acid-long segment of the E2 region) is necessary for rendering GluRδ2 resistant to endocytosis from the cell surface at the spines. Furthermore, insertion of the E2 region alone into the C terminus of the GluR1 subtype of iGluRs was sufficient to increase the amount of GluR1 proteins in the spines. Therefore, we propose that the E2 region of GluRδ2 is necessary, and also sufficient, to inhibit endocytosis of the receptor from postsynaptic membranes.

The α-amino-3-hydroxy-5-methyl-4-isoxazolepropionate (AMPA) subclass of ionotropic glutamate receptors (iGluRs), consisting of GluR1 through GluR4, which exist as heteromers (1), plays a major role in fast excitatory synaptic transmission at the dendritic spines in the vertebrate brain. It has become increasingly clear that neuronal activity-driven changes in the number of AMPA receptors at the postsynaptic spines mediate synaptic plasticity, such as long term potentiation and long term depression (LTD), which is thought to underlie certain forms of memory in the brain. For example, GluR1 is selectively delivered to the spines where neuronal activity is high during synaptic long term potentiation, whereas GluR2 is constitutively delivered to the spines to replace existing synaptic AMPA receptors in the CA1 region of the hippocampus (2,3). In contrast, GluR2-containing AMPA receptors are selectively endocytosed during synaptic LTD in the hippocampus and cerebellum (4-7). Interestingly, such distinct trafficking patterns of GluR1 or GluR2 are controlled by the respective C termini of the receptors. Furthermore, depending on the phosphorylation status of the C termini, the endocytosed GluR1 could be either reinserted into postsynaptic sites via recycling endosomes, or degraded via lysosomal pathways (8). Therefore, the number of postsynaptic AMPA receptors seems to be tightly regulated by mechanisms that recognize the C termini of the AMPA receptors at multiple checkpoints, including exocytosis, lateral diffusion, endocytosis, and degradation.
However, the molecular mechanisms underlying such regulation are not well understood; a potential problem is the heteromerization of endogenous AMPA receptors with the exogenously expressed AMPA receptors, which could affect the trafficking patterns of the receptors in the neurons.

The δ2 glutamate receptor (GluRδ2), which is predominantly expressed at the postsynaptic spines of parallel fiber-Purkinje cell synapses (9), is a member of the iGluR family. Unlike other iGluRs, GluRδ2 mainly exists as a homomer (10,11). In addition, although 50-70% of the iGluRs are detected in the intracellular compartments of neurons (12,13), GluRδ2 is predominantly expressed at the cell surface (14). Interestingly, in the ataxic mutant mice hotfoot-4J, -7J, -11J, and -12J, GluRδ2 failed to be transported to the cell surface (14,15), which suggests that efficient trafficking of GluRδ2 to the cell surface is essential for its functioning in the cerebellum. Furthermore, GluRδ2 is not only transported to the Purkinje cell surface but also to spine regions where parallel fibers form synapses (16). Indeed, we found that GluRδ2 actively controls LTD by controlling endocytosis of the AMPA receptors at the postsynaptic spines of Purkinje cells (17). Therefore, characterization of the mechanisms responsible for GluRδ2 trafficking to the postsynaptic spines is not only necessary for understanding GluRδ2 signaling but also to obtain greater insight into the general pattern of iGluR signaling in the brain.

In the study described here, we investigated the role of the C-terminal region of GluRδ2 to elucidate the mechanisms responsible for the efficient expression of this receptor at the postsynaptic spines. Although the most C-terminal region is often regarded as crucial for the anchoring of the iGluRs at postsynaptic sites (2), we found that 12 amino acids at the center of the C terminus of GluRδ2 are necessary, and also sufficient, for stable expression of the receptors at the spines in hippocampal and Purkinje cells.

* This work was supported by a grant-in-aid for young scientists (to S. M. and K. M.), the Keio Gijuku academic development funds, a Keio University grant-in-aid for encouragement of young medical scientists (to S. M.), the grant-in-aid for Scientific Research on Priority Areas, the national grant-in-aid for the establishment of a high tech research center in a private university (to S. M. and M. Y.), the Toray Science and Technology grant, and the Keio University Special grant-in-aid for Innovative Collaborative Research Projects (to M. Y.). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

EXPERIMENTAL PROCEDURES

Construction and Transfection of the Expression Plasmids

We used the modified overlap extension method and Pfu DNA polymerase (Stratagene, La Jolla, CA) to delete portions of GluRδ2. Using PCR, we added a cDNA that encoded a hemagglutinin (HA) tag to the 5′ end (immediately following the signal sequence) or 3′ end (immediately upstream of the stop codon) of the GluRδ2 cDNAs. We also added a FLAG tag to the 3′ end (immediately upstream of the stop codon) of the dynamin1-K44E cDNA (provided by Dr. R. B. Vallee, Columbia University) (18). Bidirectional sequencing confirmed the nucleotide sequence of the amplified open reading frame.
cDNAs encoding Rab4a and Rab7a, tagged with green fluorescent protein (GFP), were provided by Dr. M. Fukuda (Tohoku University). After the cDNAs were cloned into the expression vector pCAGGS (provided by Dr. J. Miyazaki, Tohoku University), the constructs were transfected into hippocampal neurons using Effectene (Qiagen, Valencia, CA), as described previously (19). Immunohistochemical Analysis-To examine the subcellular localization of GluR␦2 in the hippocampal neurons, we used a pCAGGS vector expressing wild-type or mutant GluR␦2 tagged with HA. Cultured hippocampal neurons were transfected with these clones using effectene (Qiagen) and stained with an anti-HA antibody (Roche Applied Science), as described previously (19). To investigate the subcellular localization of GluR␦2 in the Purkinje cells, we used cells infected with a modified Sindbis virus. The GluR␦2 cDNA whose 3Ј end was linked to the sequence encoding the HA tag was cloned into a pSinRep vector (Invitrogen), and virus particles were generated according to the manufacturer's instructions. Cerebellar, dissociated cultures were prepared from embryonic day 18 mice, as described previously (6) and used at 14 -21 days in vitro. Sindbis virus encoding wild-type or mutant GluR␦2 was applied to cerebellar cultures. Eighteen hours later, the cells were fixed with 4% paraformaldehyde in phosphate-buffered saline (PBS) for 2 h at 4°C and then washed three times with PBS. The cultures were first incubated with a blocking solution (1% bovine serum albumin, 0.4% Triton X-100, and 10% normal goat serum in PBS) and then with the anti-HA (Roche Applied Science) and anti-calbindin (Chemicon, Temecula, CA) antibodies. The bound primary antibodies were detected by secondary antibodies that were conjugated to either Alexa 488 or Alexa 546 (Molecular Probes, Eugene, OR). Surface-labeling Assay-N-terminal HA-tagged wild-type or mutant GluR␦2 were expressed in cultured hippocampal neurons. Cells were fixed as described above, incubated with a blocking solution without Triton X-100, and then incubated with the anti-HA antibody (Covance, Richmond, CA). The primary antibodies bound on the cell surface were detected by secondary antibodies that were conjugated to Alexa 546 (Molecular Probes). "Antibody-feeding" Assay-N-terminal HA-tagged wild-type or mutant GluR␦2 were expressed in cultured hippocampal neurons. Mouse anti-HA monoclonal antibody (10 g/ml; Covance) was added to the culture medium for 30 min at 37°C to label GluR␦2 on the surface of live Amino acid residues are numbered beginning with the N-terminal residue of the mature subunit. N-or C-terminal HA-tagged wild-type (GluR␦2 wt ) or mutant GluR␦2 were expressed in cultured hippocampal neurons together with GFP, and their distribution was examined by immunocytochemical analysis using anti-HA antibody. B, endogenous GluR␦2 expression in cultured neurons. Representative images of the distribution patterns of endogenous GluR␦2 (red) and MAP2 (green) in hippocampal (upper and middle panels) and Purkinje (lower panel) neurons. C, representative images of the distribution patterns of N-terminal HA-tagged GluR␦2 wt (upper panels) and GluR␦2 ⌬E (lower panels) in cultured hippocampal neurons. Arrows and arrowheads indicate HA-positive and -negative spines, respectively. D, representative images of the distribution patterns of C-terminal HA-tagged GluR␦2 wt (upper panels) and GluR␦2 ⌬E (lower panels) in cultured hippocampal neurons. 
E, quantitative analysis of spine localization of C-terminal HA-tagged GluR␦2 in cultured hippocampal neurons. The HA-staining intensity in the spines was normalized to the GFP-staining intensity in the spines. The HA/GFP-staining intensity ratio of GluR␦2 wt was arbitrarily set at 100%. Each bar represents the mean Ϯ S.E., and significance was established in comparison with that measured in cells that expressed GluR␦2 wt (**, p Ͻ 0.01; n ϭ 7 cells). hippocampal neurons. After washing with PBS, the neurons were treated with 0.5 M NaCl/0.2 M acetic acid (pH 3.5) for 4 min on ice to remove the remaining antibodies on the cell surface. The neurons were then rinsed and fixed with 4% paraformaldehyde with 4% sucrose. The cultures were permeabilized and incubated with a blocking solution, and total GluR␦2 proteins were stained by rabbit anti-GluR␦2 antibody (Chemicon), as described above. Anti-HA and anti-GluR␦2 antibodies were detected by secondary antibodies that were conjugated to Alexa 546 and Alexa 488, respectively (Molecular Probes). Image Analysis-Image analysis was performed in a blind manner without knowing the identity of the samples during the analysis. Spines were defined as dendritic protrusions that have an enlargement at the tip. Spines within proximal 2-3 dendritic segments (ϳ50 m for each segment; total length of Ͼ100 m) were analyzed for individual neurons. The intensity of the HA immunoreactivity was normalized to the GFP fluorescence intensity in each spine by the IP-Lab imaging software (Scanalytics, Fairfax, VA). The mean normalized HA immunoreactivity, which represents the spine localization of the HA-tagged protein in each neuron, was calculated from at least 20 spine heads/neuron. These values were analyzed from a total of 5-9 neurons from at least two separate cultures, and the averages were compared. For the antibody-feeding assay, the fluorescence intensities of Alexa 546 (internalized GluR␦2) and Alexa 488 (total GluR␦2) were quantified in proximal 2-3 dendritic segments by the IP-Lab imaging software. The averages of seven neurons (from two separate cultures) were compared. For statistical analysis, we used the Student's t test. Mice Treatment-All of the procedures related to the care and treatment of the experimental animals were conducted in accordance with the Guidelines for Animal Experiments at the Keio University. The animals were anesthetized with tribromoethanol before decapitation. Identification of the Synaptic Localization Signal of GluR␦2-To exclude the possibility of endogenous GluR␦2 modifying the trafficking of exogenously expressed GluR␦2, we expressed wild-type GluR␦2 with its N terminus tagged with HA (NT-HA-GluR␦2 wt (Fig. 1A) in cultured hippocampal neurons; no endogenous GluR␦2 is expressed in the hippocampus of adult rats (11). Indeed, we could not detect GluR␦2 expression in cultured hippocampal neurons by immunocytochemical analysis (Fig. 1B). Therefore, in these neurons, the majority of the exogenously expressed GluR␦2 should form homomer, although weak interaction between GluR␦2 and other iGluRs may occur (10, 11). Neurons were The HA-staining intensity in the spines was normalized to the calbindin-staining intensity in the spines. The HA/calbindin-staining intensity ratio of GluR␦2 wt was arbitrarily set at 100%. Each bar represents the mean Ϯ S.E., and significance was established in comparison with that measured in cells that expressed GluR␦2 wt (**, p Ͻ 0.01; n ϭ 7 cells). 
cotransfected with cDNA encoding GFP to identify the morphology of the dendritic spines, and the distribution of GluR␦2 wt was examined by immunocytochemical analysis using anti-HA antibody. As reported for endogenous GluR␦2 in the cerebellar Purkinje cells (16), exogenous NT-HA-GluR␦2 wt was also efficiently transported to the spines in the cultured hippocampal neurons (Fig. 1C). Because GluR␦2 mainly exists as a homomer (10), these findings indicate that exogenous GluR␦2 wt contains sufficient information for efficient trafficking to spines by a mechanism common to both hippocampal and Purkinje neurons. To identify the region important for the synaptic localization of GluR␦2, a series of deletions were introduced in the C-terminal intracellular domain of GluR␦2 and an HA tag was added to the extreme C terminus of each clone (Fig. 1A). These mutant or wild-type receptors were expressed in cultured hippocampal neurons together with GFP. Similar to NT-HA-GluR␦2 wt , GluR␦2 wt with its C terminus tagged with HA was effectively transported to the spines of the cultured hippocampal neurons (Fig. 1D). Several postsynaptic density-95/Discs large/zona occludens-1 (PDZ) domain-containing proteins, such as PSD-93, PTPMEG, delphilin, and nPIST, have been reported to bind to the C terminus of GluR␦2 receptors, providing them anchorage at the postsynaptic spines (21). However, because an HA tag attached to the C terminus would completely interrupt such interactions, it is unlikely that these anchoring proteins are involved in the postsynaptic localization of GluR␦2. Interestingly, although all mutant GluR␦2 proteins were normally trafficked to the cell surface in human embryonic kidney cells (22), the mutant lacking amino acids 895-915 (GluR␦2 ⌬E ) was mostly excluded from the spines of cultured hippocampal neurons, regardless of the position of the HA tag (Fig. 1, C and D). Quantitative analysis of the HA-staining intensity in the spines revealed that the deletion of the "E region" significantly reduced the amount of GluR␦2 protein at the spines (p Ͻ 0.01; n ϭ 7) (Fig. 1E), indicating that the E region is necessary for the synaptic localization of GluR␦2. We examined whether the E region is also necessary for the synaptic localization of GluR␦2 in Purkinje cells, the type of cell in which GluR␦2 is predominantly expressed. GluR␦2 wt or GluR␦2 ⌬E , with the C terminus tagged with HA, was expressed in cultured Purkinje cells using the Sindbis virus. The localization of the receptor proteins and the morphology of the infected cells were examined by immunocytochemical analysis using anti-HA and anti-calbindin antibody, respectively (Fig. FIGURE 3. Effects of sucrose treatment on GluR␦2 ⌬E distribution in cultured hippocampal neurons. GluR␦2 wt and GluR␦2 ⌬E , with their C termini tagged with HA, were expressed with GFP in cultured hippocampal neurons. Neurons were treated with sucrose at 350 mM for 15 or 30 min, and the distribution of GluR␦2 was examined by immunocytochemical analysis. A, representative images of the distribution patterns of C-terminal HA-tagged GluR␦2 ⌬E in the absence of sucrose treatment (top panels), of GluR␦2 ⌬E after 30 min of sucrose treatment (middle panels), and GluR␦2 wt in the absence of sucrose treatment (bottom panels) in cultured hippocampal neurons. Arrows and arrowheads indicate HA-positive and -negative spines, respectively. B, quantitative analysis of the spinal localization of GluR␦2 in cultured hippocampal neurons. 
The HA-staining intensity in the spines was normalized to the GFP-staining intensity in the spines. The HA/GFP-staining intensity ratio of GluR␦2 wt was arbitrarily set at 100%. Each bar represents the mean Ϯ S.E., and significance was established in comparison with that measured in cells that expressed GluR␦2 ⌬E at time 0 (**, p Ͻ 0.01; n ϭ 5 cells). C, detection of GluR␦2 proteins on the cell surface. Surface GluR␦2 proteins were labeled by applying anti-HA antibody to hippocampal neurons expressing N-terminal HA-tagged GluR␦2 wt (left panels) or GluR␦2 ⌬E (right panels) under non-permeabilizing conditions and visualized by Alexa 546-conjugated secondary antibodies. 2A-C). Although GluR␦2 wt was abundantly localized in a punctate pattern in the spines of the cultured Purkinje cells, GluR␦2 ⌬E was more prominently localized on the dendritic shafts (Fig. 2C). Quantitative analysis of the HA-staining intensity in the spines revealed that the amount of GluR␦2 ⌬E protein in the Purkinje cell spines was significantly lower than that of GluR␦2 wt (p Ͻ 0.01; n ϭ 7) (Fig. 2D). These results indicate that the E region is essential for the synaptic localization of GluR␦2 in both the hippocampal and Purkinje neurons. Effect of Blockade of Endocytosis on GluR␦2 ⌬E Localization in the Spines-There are two plausible mechanisms by which the E region might control the synaptic localization of GluR␦2, 1) it may facilitate the delivery of GluR␦2 to the dendritic spines by lateral diffusion from extrasynaptic sites, or 2) it may stabilize GluR␦2 localization by inhibiting its removal from the spines. To examine the latter possibility, we treated hippocampal neurons expressing GluR␦2 wt or GluR␦2 ⌬E with sucrose at a concentration of 350 mM, which is known to inhibit endocytosis (23). Treatment with this concentration of sucrose for 30 min, but not for 15 min, significantly increased the amount of the GluR␦2 ⌬E protein in the spines (p Ͻ 0.01; n ϭ 5), whereas it had no effect on the spinal localization of GluR␦2 wt (Fig. 3, A and B). Although we could not challenge the cells for longer periods of time with sucrose because of its detrimental effects on the morphology of the neurons, our results suggested that GluR␦2 ⌬E was removed more rapidly from the spines than GluR␦2 wt by endocytosis. To confirm that GluR␦2 in spines was located on the cell surface, we performed the surface labeling assay. NT-HA-GluR␦2 wt or NT-HA-GluR␦2 ⌬E was expressed in cultured hippocampal neurons, and surface receptors were labeled with anti-HA antibody under non-permeabilizing conditions. Although NT-HA-GluR␦2 wt was highly expressed in spines, weak NT-HA-GluR␦2 ⌬E immunoreactivities were observed diffusely throughout the dendrites (Fig. 3C). These results indicated that spine localization of GluR␦2 indeed represented receptors on the cell surface and that the E region stabilized GluR␦2 on the postsynaptic membrane. A, representative images of GluR␦2 ⌬E localization without (upper panels) or with (lower panels) dominant negative dynamin in cultured hippocampal neurons. Arrows and arrowheads indicate GluR␦2-containing and non-GluR␦2-containing spines, respectively. B, quantitative analysis of the spinal localization of GluR␦2 ⌬E in cultured hippocampal neurons. The HA-staining intensity in the spines was normalized to the GFP-staining intensity in the spines. The HA/GFP-staining intensity of non-dynamin-expressing neurons was arbitrarily set at 100%. 
Each bar represents the mean Ϯ S.E., and significance was established in comparison with that measured in cells that expressed GluR␦2 ⌬E alone (*, p Ͻ 0.05; n ϭ 5 cells). C, representative images of N-terminal HA-tagged GluR␦2 ⌬E localization without (upper panels) or with (lower panels) dominant negative dynamin in cultured hippocampal neurons. Arrows and arrowheads indicate GluR␦2-containing and non-GluR␦2-containing spines, respectively. Dynamin has been shown to be essential for clathrin-mediated endocytosis, whereas its mutant dynamin-K44E, in which the lysine at position 44 is replaced with glutamine, blocks endocytosis in a dominant negative manner (18). When dynamin-K44E was coexpressed with GluR␦2 ⌬E , it significantly increased the amount of GluR␦2 ⌬E protein in the spines (p Ͻ 0.05; n ϭ 5) (Fig. 4, A and B). Identical results were obtained from the experiment using NT-HA-GluR␦2 ⌬E (Fig. 4C). Taken together, these results indicate that the E region might be necessary for rendering GluR␦2 resistant to endocytosis at the postsynaptic membrane. GluR␦2 ⌬E Is Internalized by Endocytosis-Endocytosed membrane proteins are either recycled via recycling endosomes or degraded via late endosomes and lysosomes (8). To examine the intracellular localization of GluR␦2 ⌬E , HA-tagged GluR␦2 ⌬E was coexpressed with GFP-tagged Rab4 (an early endosome marker) or Rab7 (a late endosome marker) (24) in cultured hippocampal neurons. In neurons expressing HA-GluR␦2 wt , the HA immunoreactivities were observed in the spines, and there was no evidence of colocalization with Rab4 or Rab7, confirming that GluR␦2 wt was predominantly localized at the cell surface (Fig. 5A). In contrast, in neurons expressing HA-GluR␦2 ⌬E , the HA immunoreactivities were predominantly colocalized with Rab4 or Rab7 (Fig. 5B). The colocalization of GluR␦2 ⌬E with early or late endosomal markers could be attributable to the role of the E region in promoting recycling of endocytosed GluR␦2 to the spine. However, because inhibition of endocytosis by high dose sucrose treatment or dynamin-K44E resulted in an increased presence of GluR␦2 ⌬E at the spines (Figs. 3 and 4), it is more plausible that the E region inhibited endocytosis and stabilized the localization of GluR␦2 at the spines. To further confirm that the E region inhibited the endocytosis of GluR␦2, we carried out an antibody-feeding immunofluorescence internalization assay in hippocampal neurons expressing NT-HA-GluR␦2 wt or NT-HA-GluR␦2 ⌬E . A significantly higher degree of internalization was observed with GluR␦2 ⌬E than with GluR␦2 wt (p Ͻ 0.05; n ϭ 7) (Fig. 5, C and D). Therefore, the E region played an essential role in preventing endocytosis of GluR␦2 from the postsynaptic membrane. Further Characterization of the E Region-The amino acid sequence around the E region is highly conserved among several species (Fig. 6A), suggesting that this region probably plays an essential role in GluR␦2 function. To further narrow down the important region for stable localization at the spines, smaller deletions were introduced in the E region, GluR␦2 ⌬E1 lacking the former half and GluR␦2 ⌬E2 lacking the latter half of the E region (Fig. 6B). When expressed in cultured hippocampal neurons, GluR␦2 ⌬E1 proteins were found to be localized in the spines, FIGURE 5. Endocytosis of GluR␦2 ⌬E in cultured hippocampal neurons. A and B, colocalization of GluR␦2 wt and GluR␦2 ⌬E with endosome markers. 
C-terminal HA-tagged GluR␦2 wt (A) or GluR␦2 ⌬E (B) was expressed with GFP-tagged Rab4 (early endosome marker) or GFP-tagged Rab7 (late endosome marker), and their distribution pattern was examined by immunocytochemical analysis using anti-HA antibody. C and D, the antibody-feeding assay to monitor endocytosis of surface GluR␦2 proteins. Anti-HA antibody was added to live hippocampal neurons expressing N-terminal HA-tagged GluR␦2 wt or GluR␦2 ⌬E to label GluR␦2 on the surface. Thirty minutes later, internalized and total GluR␦2 were detected by anti-HA (red) and anti-GluR␦2 (green) antibodies, respectively. Representative images of overall (C ) and dendritic (D) regions are shown. E, quantitation of GluR␦2 internalization, measured as the ratio of internalized (red)/total (green) fluorescence. Each bar represents the mean Ϯ S.E., and significance was established in comparison with that measured in cells that expressed GluR␦2 wt (*, p Ͻ 0.05; n ϭ 7 cells). whereas GluR␦2 ⌬E2 proteins were mostly excluded from the spines (Fig. 6C), suggesting that the E2 region probably contains a sequence necessary for stable expression of the receptor at the spines. Interestingly, GluR␦2 was recently shown to interact with Shank (a multifunctional anchoring protein for metabotropic glutamate receptors at the spines) via a region containing the E2 region (25). Therefore, we examined whether the spine localization of GluR␦2 was mediated by Shank by replacing the serine at position 905 with alanine (GluR␦2 S905A ), a mutation previously shown to block the binding ability of the receptor to Shank in vitro (25). However, GluR␦2 S905A was found to be abundantly localized at the spines in the same manner as GluR␦2 wt (Fig. 6C). Similarly, GluR␦2 S905A,T915A,F917A , which included additional mutations at positions 915 and 917 outside the E2 region and which completely blocked the GluR␦2 binding to Shank (25), were also found to be localized abundantly at the spines (data not shown). From these results, it is considered unlikely that Shank is involved in the spine localization of GluR␦2. To examine whether the E region was sufficient for spine localization of GluR␦2, we expressed a truncated version of GluR␦2 (GluR␦2 E-) in cultured hippocampal neurons in which the C terminus immediately after the E region was removed (Fig. 7A). Similar to GluR␦2 wt , but unlike GluR␦2 ⌬E , GluR␦2 Ewas highly localized to spines (Fig. 7B). To further examine whether the E region was sufficient to localize other membrane proteins, we inserted the E2 region in the corresponding C-terminal region of the AMPA receptor GluR1 (Fig. 7C). Consistent with earlier reports (26), wild-type GluR1 was not effectively transported to the spines under basal conditions; however, the insertion of the E2 region significantly increased the amount of GluR1 protein detected in the spines (p Ͻ 0.05; n ϭ 9) (Fig. 7, D and E). These results indicated that the E2 region of GluR␦2 was sufficient for stable localization of GluR1 at the spines in a context-independent manner. DISCUSSION GluR␦2 is abundantly expressed at the postsynaptic membranes of Purkinje cells and plays crucial roles in cerebellar motor learning and synaptic plasticity, including LTD. In our previous study, we have shown that the juxtamembrane "A region" of GluR␦2 played an essential role in the efficient surface transport of GluR␦2 (22). However, the molecular mechanism that enables enrichment of GluR␦2 at dendritic spines remains unclear. 
In the present study, we demonstrated that a region at the center of the C terminus consisting of 12 amino acids (E2 region) is necessary for the efficient localization of GluR␦2 at the spines in hippocampal and Purkinje neurons. Inhibition of endocytosis by treatment with sucrose at 350 mM (Fig. 3) or expression of dominant negative dynamin (Fig. 4) increased the amount of GluR␦2 ⌬E protein in the spines. In addition, GluR␦2 ⌬E , but not GluR␦2 wt , colocalized with the endosomal proteins Rab4 and Rab7 (Fig. 5, A and B). The antibodyfeeding assay also revealed that GluR␦2 ⌬E was internalized more rapidly than GluR␦2 wt (Fig. 5, C and D). These results strongly indicate that the E2 region is necessary for rendering GluR␦2 resistant to endocytosis from the cell surface at the spines. Truncation of the C terminus immediately after the E region did not affect spine localization of GluR␦2 (Fig. 7B). Furthermore, insertion of the E2 region alone at the C terminus of GluR1 was sufficient to increase the amount of GluR1 proteins in the spines (Fig. 7, D and E). Therefore, we propose that the E2 region of GluR␦2 is necessary and also sufficient to inhibit endocytosis from the postsynaptic membranes by a mechanism common to hippocampal and Purkinje neurons. Wild-type GluR1 was not effectively transported to the spines under basal conditions, as reported previously (Fig. 7D) (2). Thus, in addition to inhibiting endocytosis from the postsynaptic membranes, the E2 region may also enhance the delivery of GluR1 to the spines. Alternatively, because neurons show low levels of spontaneous activities in culture preparations (27), only small amounts of GluR1 may be delivered to the spines, and by inhibiting endocytosis, the E2 region may enhance the retention of GluR1 delivered to the spines. Many proteins containing the PDZ domain are thought to be involved in the stabilization of postsynaptic iGluRs at the dendritic spines. For example, localization of the AMPA receptor subunit GluR2 at the spines requires the interaction of its C terminus with GRIP, a PDZ domain-containing protein (28). Neuronal activity is thought to induce phosphorylation of the serine at position 880 in the C terminus of GluR2 (29,30); GluR2 is then released from GRIP and endocytosed from the cell surface, resulting in LTD (6,20). Similarly, several other PDZ domain-containing proteins, such as PSD-93, PTPMEG, delphilin, and nPIST, have been shown to bind with the C terminus of GluR␦2 (21), although such binding depends on the C-terminal end itself and not specifically on the E2 region of GluR␦2. In contrast, structural analysis of the PDZ domain revealed that it is architecturally designed to allow binding to consensus sequences located at the C-terminal end of the peptide. Indeed, tagging of the C-terminal end of GluR␦2 with HA, which would completely block its interaction with these PDZ domaincontaining proteins, did not affect the localization of GluR␦2 (Fig. 1). Therefore, it is unlikely that these PDZ domain-containing proteins mediate the preferential localization of GluR␦2 at the spines. Certain PDZ domain-containing proteins could interact with non-Cterminal, internal regions of proteins. Indeed, Shank was shown to interact with a region within the C terminus of GluR␦2, which con- The gray box and the bold letters indicate the inserted GluR␦2 E2 region and its amino acid sequence, respectively. 
D, representative images of the distribution patterns of C-terminal HA-tagged GluR1 wt (upper panels) and GluR1 E2 (lower panels) in cultured Purkinje neurons. GluR1 wt or GluR1 E2 was expressed in cultured hippocampal neurons and their spinal localization was examined by immunocytochemical analysis using anti-HA antibody. Arrows and arrowheads indicate HA-positive and -negative spines, respectively. E, quantitative analysis of the spinal localization of GluR1 in cultured hippocampal neurons. The HA-staining intensity in the spines was normalized to the GFP-staining intensity in the spines. The HA/GFP-staining intensity ratio of GluR1 wt was arbitrarily set at 100%. Each bar represents the mean Ϯ S.E., and significance was established in comparison with that measured in cells that expressed GluR1 (*, p Ͻ 0.05; n ϭ 9 cells). tained the E2 region (25). However, we found that a mutation that was known to dissociate GluR␦2 from Shank did not affect the spinal localization of GluR␦2 (Fig. 6). Similarly, PSD-95, a PDZ protein that binds to N-methyl-D-aspartate receptors and mediates important postsynaptic signaling, is not essential for postsynaptic targeting of N-methyl-D-aspartate receptors (31). Furthermore, mutant mice lacking delphilin, a PDZ protein that binds to GluR␦2, showed normal postsynaptic localization of GluR␦2 (32). Therefore, although PDZ domain-containing proteins are important for postsynaptic signaling, they are unlikely to mediate the postsynaptic localization of iGluRs. The actin cytoskeleton has also been found to be critical for the stabilization of iGluR localization at the spines (33). For example, the C terminus of N-methyl-D-aspartate receptors binds to spectrin (34) and actinin (35), both of which bind to the F-actin present in abundance in the spines. Similarly, GluR␦2 also binds to the actin cytoskeleton via spectrin (36). It has been suggested that the neuronal activity-induced increase in the Ca 2ϩ in Purkinje cells may promote dissociation of GluR␦2 from spectrin, leading to endocytosis of GluR␦2 from the postsynaptic membrane (23). Thus, the stabilization of GluR␦2 localization at the spines may be mediated by the binding of the E2 region with spectrin and actin. However, the AMPA receptor GluR1 has also been shown to interact with spectrin via adapter proteins 4.1N (37) and RIL (38) in hippocampal neurons. Because insertion of the E2 region of GluR␦2 into the C terminus of GluR1 promoted the synaptic accumulation of the receptors, it seems unlikely that the role of the E2 region is simply confined to its association with the spectrin-actin cytoskeleton. In addition, our preliminary analysis also indicated that spectrin does not bind, at least directly, to the E2 region of GluR␦2 in vitro. 3 However, because many actin-binding proteins, such as calponin (39), spinophilin (40), actinin (41), myosin Va (42), myosin VI (43), and drebrin (44), are known to be localized at the spines, we postulate that, similar to the N-methyl-D-aspartate receptors that are closely associated with F-actin via multiple proteins (including actinin and spectrin), GluR␦2 localization at the spines may also be stabilized by several actin-binding proteins, one of which may bind to the E2 region. Hotfoot mice are spontaneous ataxic mouse mutants resulting from various mutations in the gene encoding GluR␦2. 
Interestingly, of the 20 alleles known so far, most mutants retain mutant GluR␦2 in the endoplasmic reticulum, indicating that GluR␦2 must be transported to the Purkinje cell surface for it to function properly (14,15,21). In addition, we recently demonstrated that the application of an antibody against the extracellular domain of GluR␦2 induced endocytosis of the AMPA receptor GluR2 and inhibited further induction of LTD (17). These findings indicate the essential roles of GluR␦2 at the postsynaptic spine surface. Therefore, further studies are warranted to identify specific proteins that bind to the E2 region and regulate stabilization of GluR␦2 in the dendritic spines.
2018-04-03T04:44:41.485Z
2006-06-23T00:00:00.000
{ "year": 2006, "sha1": "9f16dd5abd09a03b46701002ef0c1bf8cdd32fa8", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/281/25/17501.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "535cb0b79a4827b784a9a806ebec0b8f28dd81ca", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
231767226
pes2o/s2orc
v3-fos-license
University of Birmingham Boric acid as an adjunct to periodontal therapy Objective: To evaluate the efficacy of boric acid as an adjunct to non-surgical periodontal therapy, in comparison with a placebo adjunct, in terms of changes in probing pocket depth (PPD) and clinical attachment level (CAL), in patients with periodontitis. Methods: Four electronic databases were searched from inception to May 2020 (PubMed, Cochrane CENTRAL, EMBASE via OVID and Web of Science). Clinical out - comes were extracted, pooled and meta-analyses conducted using mean difference with standard deviations. Results: For PPD, a mean additional reduction of 0.58 mm (95% CI: −0.03– 1.19 mm, p = 0.06) was observed at 3 months and a mean additional reduction of 1.18 mm (95% CI: 0.97– 1.40 mm, p < 0.05) at 6 months, compared with placebo. For CAL, a mean additional gain of 0.62 mm (95% CI: −0.07– 1.32 mm, p = 0.08) was observed at 3 months and a mean additional gain of 1.24 mm (95% CI: 0.89– 1.58 mm, p < 0.05) at 6 months, compared with placebo. No adverse events were reported in any studies. Conclusions: The adjunctive use of boric acid in non-surgical periodontal therapy re-sults in improved treatment outcomes at 3 and 6 months, with no adverse events reported. | INTRODUC TI ON Periodontitis is a chronic inflammatory disease characterized by the loss of periodontal attachment and mediated by the host-bacteria interaction. 1 The management of the disease fundamentally involves the elimination of pathogenic microbiota in order to arrest the inflammatory response and induce healing. 2 The foundation of effective periodontal therapy is mechanical debridement of the root surface, with a view to disrupt the established biofilm; all other treatments and agents are considered adjunctive to this. 3 Nonsurgical periodontal therapy is efficacious, eliciting improvements in clinical outcomes in the majority of cases. 47][8][9] Antibiotics, administered systemically or locally, have been proven to be efficacious across numerous studies and are one of the most common adjunctive treatments. 10,11However, the critical issue of antimicrobial resistance greatly restricts their use. 12Photodynamic therapy has been investigated, and numerous different methods of photosensitization have been explored; however, systematic reviews reveal very limited clinical benefit. 13,14ric acid is one agent which has been postulated to convey benefits in the management of periodontitis, with animal models demonstrating a reduction in periodontal inflammation and attachment loss. 156][17] The boron-containing compound AN0128, a derivative of boric acid, is thought to contribute to the anti-inflammatory and immune regulatory effects by inhibiting the release of tumour necrosis factorα (TNFα). 15,17In addition, boric acid is osteogenesispromoting through its actions on stromal cells within bone marrow, where it promotes the differentiation of osteogenetic cells. 15,17,18A clinical application of these properties has been demonstrated in a randomized controlled trial in which boric acid was found to induce significantly more bony infill in furcation defects, as compared with placebo. 17Despite these potentially beneficial properties, to the authors' knowledge, there are no existing systematic reviews evaluating the adjunctive use of boric acid in the management of periodontitis. 
The aim of this systematic review was to assess the efficacy of boric acid as an adjunct to non-surgical periodontal therapy, as compared to placebo, in patients with periodontitis. | Protocol and registration Prior to starting the study, the authors outlined a review protocol.The protocol was approved and registered in the International Prospective Register of Systematic Reviews, PROSPERO (CRD42020187484). This review is reported according to PRISMA guidelines, and all methods used in conducting the review were taken from the Cochrane Handbook for Systematic Reviews of Interventions. 19 | Study eligibility: inclusion and exclusion criteria Studies were included according to the PICOS criteria: | (P)opulation Patients with periodontitis, defined as either PPD ≥5 mm and / or ≥4 mm loss of CAL. 20 | (I)ntervention Supra-and subgingival debridement (ie scaling and root planing or root surface debridement) plus adjunctive boric acid administered to the sites being treated. | (C)omparison Supra-and subgingival debridement plus adjunctive placebo administered to the sites being treated. | (O)utcome There were two primary outcome measures: change in PPD and change in CAL.Secondary outcome measures evaluated were adverse events due to adjunctive boric acid therapy. | (S)tudy design Randomized controlled trials with at least 3 months of follow-up. No restrictions were placed on the studies according to the date of publication, phase of the trials or method of boric acid administration.Studies were excluded if they did not meet the PICOS parameters outlined above, if they were not in English language or if they evaluated outcomes in patients below 18 years of age. | Study selection The studies were independently screened by the two review authors, initially according to relevance of the title and relevance of the abstract, in accordance with the eligibility criteria outlined.Following this, the remaining articles then underwent full-text analysis and excluded articles were documented, with reasons for exclusion.Discrepancies between the reviewers regarding any specific paper were settled through discussion until a consensus was reached.Inter-reviewer agreement for screening and inclusion of articles was assessed via kappa scores. | Data extraction Data were extracted into a custom-designed spreadsheet made in Microsoft Excel (2019).A standardized data sheet was pre-piloted and then implemented for data extraction by a single reviewer (NZB).The second reviewer (MK) verified the accuracy of data obtained from the studies.The unpopulated spreadsheet into which data were input is presented in Appendix 2. | Risk of bias The risk of bias of the included studies was evaluated using the criteria outlined in the Cochrane Handbook for Systematic Reviews of Interventions. 19The following parameters were assessed: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting and other bias. | Data synthesis Meta-analyses were conducted for treatment outcomes at 3 months and 6 months.Data from the included studies were pooled, using mean difference (mm) with standard deviations.Where standard deviations were not provided, authors were contacted for individual patient data to allow for calculation.If these data could not be obtained, standard deviations were imputed using the correlation coefficient method recommended in the Cochrane Handbook for Systematic Reviews of Interventions. 
19e secondary outcome measure, adverse events, was assessed through calculation of risk ratios.Data were pooled using both a fixed effects model and a random effects model, and if significant heterogeneity was identified, the findings from the random effects model were presented.Fixed effects models were only used if there was no significant methodological heterogeneity and no significant statistical heterogeneity.Forest plots were generated to illustrate the findings of the meta-analyses.Review Heterogeneity was assessed on the basis of two parameters: (i) assessing the characteristics of the included studies and (ii) statistical assessment of heterogeneity through calculation of appropriate statistical parameters.Methodological heterogeneity was assessed by evaluating differences in the treatment protocols used, study designs, sampled populations, methods of boric acid delivery used across the studies, methods of placebo administration across the studies and disease definition used across the studies.Statistical heterogeneity was assessed through Cochran's Q chi-squared testing and calculation of the I 2 index.In accordance with the Cochrane Handbook for Systematic Reviews of Interventions, I 2 values between 0 and 40% were deemed as not representing significant heterogeneity, and values above 40% were considered to represent significant heterogeneity. 19 | Additional tests The following additional tests were conducted as per the guidelines in the Cochrane Handbook for Systematic Reviews of Interventions 19 : Meta-regressions would be conducted if there were an adequate number of studies (10 or more). Risk of bias across studies (publication bias) would be evaluated through generation of funnel plots and Egger's tests, if there were an adequate number of studies (10 or more). Sensitivity analyses were conducted to assess the contribution of each individual study on the totality of the evidence. | Certainty assessment Assessment of certainty in the overall body of evidence for each outcome was performed using Grading of Recommendations, Assessment, Development and Evaluations (GRADE) criteria.The following parameters were assessed: risk of bias, imprecision, inconsistency, indirectness and publication bias. | Selected studies The initial search returned 64 articles, of which 25 articles were identified as duplicates.The remaining 39 articles were screened according to the title and abstract, and 35 were excluded (kappa = 1.00, 95% CI: 1.00-1.00).The remaining 4 studies underwent full-text analysis, of which all 4 met the inclusion criteria.All 4 studies were suitable for meta-analyses (kappa = 1.00, 95% CI: 1.00-1.00).The study selection process is outlined as a PRISMA flowchart in Figure 1. | Study design and demographics The author, year, country, study setting, age range of participants, sample size, treatment protocols and time at which outcomes were evaluated are outlined in Table 1. 
Across the four included trials, three were of parallel-arm design, and one was of split-mouth design (Singhal et al., 2017).All trials were conducted in a university hospital setting, with three being in India and one being in Turkey (Saglam et al., 2013).Across the trials, the ages of the included participants ranged from 18 to 63 years.Three trials evaluated delivery of boric acid as a 0.75% concentration gel, which was deposited subgingivally using a syringe with a blunt cannula, following non-surgical therapy.Of these, 2 trials explicitly stated the use of 0.1 mL of the gel, and one did not specify the volume of boric acid gel used (Mamajiwala et al., 2019).The remaining trial evaluated delivery of boric acid as a 0.75% concentration irrigant, where 10 mL of the irrigant was applied subgingivally to each site for 1 min, following non-surgical therapy.All studies investigating boric acid gel reported the site-specific change in PPD and CAL for the areas receiving therapy, whilst the study investigating boric acid irrigation reported whole-mouth parameters. | Disease definition All studies defined the condition being evaluated as 'chronic periodontitis'.Three studies defined chronic periodontitis as PPD ≥5 mm. 17,21,22e study defined it as PPD ≥5 mm or ≥4 mm loss of CAL. 23 | Outcome assessment All studies reported on changes in PPD and CAL, and these were extracted to allow for meta-analyses.Not all studies reported outcomes at both 3 months and 6 months (see Table 1).One study did not provide standard deviations for changes in PPD and CAL from baseline. 21e corresponding author was contacted for individual patient data, but no reply was received.Therefore, standard deviations were im- | Risk of bias A risk of bias summary for all included studies is provided in Figure 2. As per Cochrane guidelines, a narrative description, with authors' F I G U R E 1 PRISMA flowchart outlining the study selection process judgements and evidence for these judgements, regarding each risk of bias parameter was documented.This is presented in Appendix 4. | Probing pocket depth Sub-group meta-analyses were conducted for outcomes at 3 months and 6 months post-therapy.The adjunctive use of boric acid resulted in a mean additional reduction in PPD of 0.58 mm (95% CI: −0.03-1.19mm) at 3 months and of 1.18 mm (95% CI: 0.97-1.40mm) at 6 months. Studies evaluating outcomes at 3 months demonstrated significant heterogeneity (I 2 > 40%), so the findings from the random effects model are presented.Studies evaluating outcomes at 6 months demonstrated low heterogeneity (I 2 = 0%), so the findings from the fixed effects model are presented (Figure 3).No adverse events were reported in any of the participants; risk ratios could not be calculated. | Clinical attachment level Sub-group meta-analyses were conducted for outcomes at 3 months and 6 months post-therapy.The adjunctive use of boric acid resulted in a mean additional gain in CAL of 0.62 mm (95% CI: −0.07-1.32mm) at 3 months and of 1.24 mm (95% CI: 0.89-1.58mm) at 6 months. Studies evaluating outcomes at 3 months and 6 months demonstrated significant heterogeneity (I 2 > 40%), so the findings from the random effects models are presented (Figure 4).No adverse events were reported in any of the participants; risk ratios could not be calculated. | Meta-regression The number of studies included in the systematic review was below the threshold required to conduct meta-regressions. 
| Risk of bias across studies The number of studies included in the systematic review was below the threshold required to generate funnel plots and conduct Egger's tests. | Sensitivity analyses The results of the sensitivity analyses are presented in Table 3. | GRADE assessment GRADE certainty in the body of evidence for PPD reduction and CAL gain at 3 months post-therapy was very low (⊕◯◯◯). GRADE certainty in the body of evidence for PPD reduction and CAL gain at 6 months post-therapy was moderate (⊕⊕⊕◯). | Summary of evidence This systematic review identified 4 randomized controlled trials evaluating the efficacy of boric acid as an adjunct to non-surgical periodontal therapy.The trials evaluated boric acid delivered subgingivally to the base of the probing pocket, either as a gel or an irrigant, immediately following non-surgical periodontal therapy. The results of the meta-analyses suggest that boric acid used as an adjunct to non-surgical periodontal therapy produces an improvement in treatment outcomes, as compared to placebo.For PPD, a 0.58 mm mean additional reduction is seen at 3 months and a 1.18 mm mean additional reduction at 6 months.For CAL, a 0.62 mm mean additional gain is seen at 3 months and a 1.24 mm mean additional gain is seen at 6 months.These improvements are not statistically significant (PPD: p = 0.06, CAL: p = 0.08) at 3 months post-therapy, but they are statistically significant at 6 months post-therapy (p < 0.05).There is a very low certainty in the body of evidence for outcomes at 3 months and a moderate certainty in the body of evidence for outcomes at 6 months.No adverse effects were observed in patients where boric acid was administered as an adjunct. | Level of evidence Whilst all studies were of randomized controlled design, not all studies were of equal quality with regard to the risk of bias assessment. The trial presenting with the most concerning findings for risk of bias was Saglam et al. (2013), where the study was described by the authors as being 'single-masked', that is the personnel administering treatment and analysing outcome data were unblinded.This is highlighted within the article as an issue which should be addressed in future trials, and this poses a risk of introducing biased results into the meta-analyses.This is addressed and highlighted in the sensitivity analyses (Table 3), where exclusion of the study leads to an observed increase in the efficacy of boric acid, as well as a reduction in the heterogeneity between studies. It should be noted that whilst Mamajiwala et al. (2019) provided data to a high standard, the authors did not report standard deviations for changes from baseline.These values had to be imputed using the correlation coefficient method recommended by the Cochrane Collaboration, leading to an 'unclear' risk of reporting bias.In addition, Mamajiwala et al. (2019) do not make it entirely clear as to whether the personnel providing treatment were blinded.As the statements made in the article could have been interpreted in multiple ways, the study was assigned an 'unclear' risk of bias for this parameter. The quality of evidence in future systematic reviews on the subject may be particularly improved if future trials report on, and implement, blinding for participants, personnel and outcome assessors, where this is feasible. 
| Comparison with other studies and reviews 5][26][27] It has been postulated that boron-containing compounds may be efficacious in the management of chronic inflammatory conditions, and the results of this meta-analysis are in line with these findings. 28,29The reasons for its efficacy may be largely attributable to the immune-dampening properties of boron derivatives, particularly with regard to pro-inflammatory cytokines such as TNFα and C-reactive protein. 23,30These inflammatory mediators are known to be critical in the pathophysiology of periodontitis, and downregulation by boric acid may be part of the reason for the observed improvement in treatment outcomes. 31e improvements in treatment outcomes observed in this review are similar to, or greater than, the improvements in treatment outcomes which have been observed in meta-analyses evaluating the efficacy of locally administered antibiotics. 32This is of particular importance as it indicates that similar clinical benefits to those derived from the use of antibiotics may be attained through the use of boric acid, without the same drawbacks, namely antibiotic resistance.Direct comparisons between boric acid and antibiotics in future trials would be beneficial in order to validate these findings. An important consideration when evaluating the clinical application of boric acid is its low pH and the potential for deleterious effects on the tooth structures.Boric acid is a weak acid which dissociates to give solutions of around pH 5.1; in comparison with the pH of conventional phosphoric acid etchant protocols (pH 0.1-0.4),this is far higher, and therefore, the potential for damage of the tooth surfaces is minimal. 33Another concern associated with an acidic pH is the potential for inducing dentine hypersensitivity, as this can be caused by acidic agents. 34Whilst dentine hypersensitivity was not observed as an adverse event across the included trials, this is not to say that it does not occur; rather, the sample sizes within the meta-analyses may be inadequately powered to pick up these events.Furthermore, a challenge for clinicians would be to identify when hypersensitivity is occurring due to boric acid therapy and when it is simply due to natural recession of the gingivae following periodontal therapy.Whilst statistical computation F I G U R E 2 Risk of bias summary for all included studies F I G U R E 3 Forest plots summarizing effect of adjunctive boric acid on probing pocket depth of risk ratios or odds ratios would allow for quantifiable risks of hypersensitivity with boric acid therapy, this is infeasible in the present meta-analyses, due to no observed events amongst the included participants. The findings of this review indicate that the adjunctive use of boric acid may provide improvements in periodontal treatment outcomes, particularly when administered as a gel in situ.It has been demonstrated to be safe for human gingival fibroblasts and human periodontal ligament fibroblasts at a concentration of 0.75%. 22wever, high-quality literature surrounding the field is scarce, and further investigations into the efficacy, safety and any adverse effects of boric acid should be investigated further before recommendations for its use can be made. 
| Limitations Whilst the authors endeavoured to locate all relevant studies, it is acknowledged that there may have been studies which were not published, registered or presented.At the time of writing, there was one randomized controlled trial indexed in the Cochrane Library and registered in the WHO International Clinical Trials Registry Platform ID: CTRI/2019/04/018697) with no published results.The protocol outlined for this trial indicates that it would not meet the inclusion criteria for this systematic review, as the control group received adjunctive treatment with curcumin. All included studies evaluated the pre-defined outcome measures outlined in the review protocol.One of the primary limitations of this systematic review is the quantity of evidence, both in terms of the number of trials and number of participants within trials.Across the meta-analyses, the total sample size for comparison of boric acid versus placebo was 117 (individual study sample sizes ranging from 30 to 48), which may not be adequately powered to allow for precise estimation of effect size.In addition, not all trials evaluated outcomes at both 3 months and 6 months post-therapy, further reducing the overall sample size incorporated into the meta-analyses. Of the four trials, three were conducted in India and one was conducted in Turkey.Therefore, the external validity of the findings from the meta-analyses in application to cohorts of patients from other countries is unknown. There was significant heterogeneity for all studies evaluating outcomes at 3 months.This may be largely attributed to the difference in treatment protocols used; 3 of the studies 17,21,23 evaluated the use of boric acid as a subgingival gel, whilst 1 of the studies 22 evaluated its use as a subgingival irrigant.The contribution of Saglam et al. (2013) to the findings of this review is highlighted in the sensitivity analyses (Table 3).The exclusion of this study, where boric acid was administered as an irrigant, results in the observed changes in treatment outcomes at 3 months becoming statistically significant (p < 0.05).This indicates that inclusion of this study introduced heterogeneity into the meta-analyses, which led to underestimation of the improvements in PPD and CAL at 3 months, provided that boric acid is administered as a gel in situ rather than as an irrigant. 
Other sources of heterogeneity include the fact that there was no standardized protocol for non-surgical periodontal therapy across the studies, and the level of disease evaluated across the studies may not have been identical.Whilst all studies defined the patients as having 'chronic periodontitis', no stage and grade of disease was Inclusion of a trial with high risk of bias 22 may affect the validity of the meta-analyses.As aforementioned, this was addressed through means of sensitivity analyses, which brought up two pertinent points: (i) whether inclusion of this study resulted in underestimation of the efficacy of adjunctive boric acid as a whole and (ii) whether administration of boric acid as an irrigant is less effective than administration as an in situ gel.These observed differences in the efficacy of delivery as a gel versus delivery as an irrigant may be accounted for by two main reasons: (i) it is postulated that a gel may remain in situ for a greater period of time than an irrigant and hence exert its beneficial antimicrobial and immunomodulatory properties for a greater length of time, and (ii) differences in the measurement protocols used: the study investigating boric acid delivered as an irrigant (Saglam et al., 2013) provided both whole-mouth and site-specific changes and found significant differences between boric acid and placebo at the site-specific level, but not the whole-mouth level.However, the site-specific measures could not be incorporated for meta-analysis due to the authors only reporting on site-specific measures for the three deepest, non-contiguous sites, and it is likely that if the site-specific measures for all sites were provided (allowing for inclusion in meta-analyses), then significant improvements with boric acid would also be seen, in line with the trials investigating gel delivery.Furthermore, outcomes were only reported up to 6 months posttherapy.Longer follow-up periods are needed before judgements on the long-term effectiveness of boric acid can be made. In order to allow for more accurate pooling of data, it would be advised that future researchers: | Principal findings Adjunctive boric acid use is associated with improvements in clinical outcomes compared to non-surgical periodontal therapy alone. Improvements are seen in both probing pocket depth and clinical attachment level, with no adverse events reported thus far. | Practical implications There is evidence that boric acid used as an adjunctive agent may improve the outcomes of non-surgical periodontal therapy.Low risk All outcomes were reported on.There was a 93% recall rate and a per protocol analysis was carried out.Intention-to-treat analysis would be difficult given that the data evaluated was continuous. Selective reporting (reporting bias) Low risk Outcomes were evaluated against the methods section of the paper and no discrepancies were found. Other bias Low risk No other sources of bias were identified. Four electronic databases were searched from inception to May 2020: PubMed, Cochrane Central Register of Controlled Trials, EMBASE via OVID and Web of Science.Additionally, reference list follow-ups of all included studies were conducted.The following search term was used: "(((((((((boric acid) OR orthoboric acid) OR boracic acid) OR sassolite) OR optibor) OR borofax) OR trihydroxyborane) OR boron trihydroxide)) AND ((periodont*) OR gum disease)".The full search strategy for PubMed, with MeSH terms, is outlined in Appendix 1. 
puted using the correlation coefficient method recommended by the Cochrane Collaboration.The correlation coefficient calculations and subsequent generation of standard deviations for Mamajiwala et al. (2019) are outlined in Appendix 3. The data for changes in PPD and CAL for all included studies are presented in Table 2. for, the study being excluded and the new observed change in outcome measure.In bold are the studies for which, when excluded, a change in statistical significance in the results was observed.Regardless of the study excluded, adjunctive boric acid produced an improvement in both treatment outcomes for both time periods assessed.Changes in significance were observed when Saglam et al. (2013) were excluded from the analyses, and this made the improvement in both PPD and CAL at 3 months post-therapy statistically significant (p < 0.05). F I G U R E 4 Forest plots summarizing effect of adjunctive boric acid on clinical attachment level given as defined in the 2017 World Workshop on the Classification of Periodontal and Peri-Implant Diseases and Conditions.This may make comparison across studies less accurate if the level of disease is not the same between the participants.In addition, the exact sites evaluated differed; Singhal et al. (2017) evaluated outcomes in areas of furcation defects, whilst all other studies evaluated full mouth outcomes. 1 . 6 | 6 . 1 | Enrol a greater number of participants into randomized controlled trials 2. Implement methods to minimize risk of bias, such as a triple-blind study design 3. Develop and use a standardized protocol for the administration of boric acid 4. Develop and use a standardized protocol for the administration of non-surgical periodontal therapy 5. Report on stage and grade of the periodontitis being evaluated 6. Evaluate outcomes over a longer time period5 | CON CLUS IONSWithin the limitations of this review, it can be concluded that:1.Boric acid as an adjunct to non-surgical periodontal therapy may improve treatment outcomes 2. Adjunctive boric acid at 0.75% concentration does not increase the risk of adverse events, as compared with placebo 3.There is a paucity of literature surrounding the subject, necessitating more high-quality, adequately powered, randomized controlled trials CLINIC AL RELE VAN CE Scientific rationale for the study Despite trials having been conducted on the subject, there have been no systematic reviews evaluating the efficacy of boric acid as an adjunct to non-surgical periodontal therapy. Table 3 Characteristics of included studies is the outcome measure which the analysis was TA B L E 1
Examining Trans-Provincial Diagnosis of Rare Diseases in China: The Importance of Healthcare Resource Distribution and Patient Mobility

(1) Background: Rare disease patients in China usually have to travel a long distance, typically across provinces, for an accurate diagnosis due to the uneven distribution of healthcare resources. This study investigated the factors affecting their trans-provincial diagnosis. (2) Methods: An analysis was made of 1531 cases (1032 adults and 499 children) garnered from the 2018 China Rare Disease Survey, representing a large patient community afflicted with 75 rare diseases from across 31 Chinese provinces. Logistic regression models were used for separate analysis of the adult and child patient groups. (3) Results: Nearly half (47.2%) of patients obtained their accurate diagnosis outside their home provinces. The uneven geographical distribution of high-quality healthcare had a significant impact on variation in trans-provincial diagnosis. Adult patients with lower family income, rural hukou and more severe physical disability were disadvantaged in accessing trans-provincial diagnosis. Families with a child patient tended to pour resources into obtaining the trans-provincial diagnosis. The rarity of the disease had only a minimal effect on healthcare utilization across the provinces. (4) Conclusions: In addition to medical care, more attention should be paid to the socioeconomic factors that prevent the timely diagnosis of a rare disease, especially the uneven geographical distribution of high-quality healthcare resources, the financial burden on the family and the differences between adult and child patients.

Introduction
Coping with the globally accelerating challenge of rare disorders, the International Rare Diseases Research Consortium (IRDiRC) has a vision to "enable all people living with a rare disease to receive an accurate diagnosis within one year of coming to medical attention" [1]. An accurate diagnosis is the first step toward improving the quality of life of people with rare diseases and their families. It means not only the possible treatment and relief of pain for patients, but also various benefits such as access to ancillary social welfare, subsidies for special needs, connection with rare-disease support groups and access to information for life planning and reproductive decision-making [2]. Recent decades have witnessed increasing endeavors to improve the medical understanding of rare disease [3], especially through genetic techniques [4]. However, as Andersen's classic healthcare utilization model suggests [5,6], accessibility to diagnosis is also affected by the characteristics of patients and of the healthcare delivery system, the impacts of which on the diagnosis of rare disease have been the subject of few studies to date.

Materials and Methods
The 2018 China Rare Disease Survey was a systematic investigation of patient access to accurate diagnosis across the country. The survey was conducted with the support of the Illness Challenge Foundation, a national umbrella organization providing support to rare disease patients. Rare disease patients are usually involved in a patient group that functions as a platform for information sharing and mutual support. The Illness Challenge Foundation helped to reach out to multiple patient groups to organize the survey. At the time of the survey, the Illness Challenge Foundation had formed an official alliance with 29 rare disease patient organizations in China.
The Foundation is widely recognized among China's rare disease patients because its former entity, the China-Doll Center for Rare Disorders, was the most well-known rare disease patient organization in China. Hence, distributing the survey via the Foundation's network enabled us to reach the widest population of rare disease patients in China. Encouragement by the Illness Challenge Foundation and the patient groups raised the willingness to participate in this survey. As patients are widely dispersed in the country, we used an online questionnaire to reach a maximum number of patients. Some 50% of the questionnaires were filled out by caregivers due to the young age or disability of the respondent. In total, 2040 valid questionnaires were collected from across the country, from which 1032 adult cases (18 years and older in 2018) and 499 child cases were identified with full information on each item for analysis, accounting for 75.1% of the total. The 1531 cases form a sample of patients afflicted with 75 different rare diseases from across 31 provinces in Mainland China. Logistic regression models were used to investigate the factors affecting trans-provincial diagnosis. As rare disease patients may experience several misdiagnoses, we only refer to the time and location of the accurate diagnosis to identify the trans-provincial diagnosis. The control factors were the rarity of the disease and patients' demographics, including age, sex and ethnicity. The factors examined were the geographic distribution of healthcare resources and patient mobility. To control for the effect of different diseases, we constructed a "rarity of disease" variable by categorizing diseases into three classes based on the reported prevalence of each disease: "extremely rare", with an incidence below 1/100,000; "rare", with an incidence range of 1/100,000 to 1/10,000; and "somewhat rare", with an incidence above 1/10,000. The prevalence of each disease is listed in detail in Appendix A. A rarer disease can be assumed to be associated with a greater possibility of trans-provincial diagnosis.
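The three-class rarity variable just described is a simple threshold rule on reported prevalence. A minimal sketch of that categorization is shown below; the function is a hypothetical helper (not part of the survey instrument), and the boundary handling at exactly 1/100,000 and 1/10,000 is an assumption, since the text gives ranges rather than inclusive/exclusive cut-offs.

```python
def rarity_class(prevalence: float) -> str:
    """Categorize a disease by reported prevalence (cases per person),
    using the three thresholds described in the text."""
    if prevalence < 1 / 100_000:
        return "extremely rare"
    elif prevalence <= 1 / 10_000:
        return "rare"
    else:
        return "somewhat rare"

print(rarity_class(5 / 1_000_000))  # extremely rare
print(rarity_class(5 / 100_000))    # rare
print(rarity_class(5 / 10_000))     # somewhat rare
```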
Three factors captured the geographical distribution of healthcare: the number of 3-A hospitals, the number of licensed hospital doctors and the number of hospital beds in each province (all measured for 2017). Healthcare in mainland China is provided in primary care institutes, public health institutes and hospitals. Different from the patient referral systems in the USA or UK, patients in China can directly seek healthcare in hospitals. Among the hospitals, those classed as 3-A are the highest-ranking facilities in China's hospital classification system [17]. Since an accurate diagnosis of rare disease often requires a higher level of experience and more advanced diagnostic technologies, most patients resort to 3-A hospitals for diagnosis. In this paper, we used the number of 3-A hospitals to represent the amount of high-quality healthcare in each province. In addition, two indicators widely used to measure the amount of healthcare, the number of licensed doctors and the total number of hospital beds, are also used. Due to the limited medical understanding of rare diseases, we hypothesized that only the amount of high-quality healthcare is associated with trans-provincial diagnosis outcomes. We chose not to use per capita high-quality healthcare resources as a factor, as it is more likely that total high-quality healthcare capacity is more directly linked to the attraction (or lack of attraction) of patients seeking a diagnosis in a region [18]. Data were obtained from the China Health Statistics Yearbook 2018. As Figure 1 shows, the distribution of 3-A hospitals is quite distinct from the total numbers of licensed doctors and hospital beds, revealing different mechanisms and patterns of distribution for high-quality healthcare and average healthcare resources.

Trans-provincial diagnosis poses multiple challenges to the mobility of patients. Studies in Europe have revealed the effect of various factors on patient mobility, such as affordability of healthcare, patients' physical limitations, the need to be accompanied by caregivers, transportation costs and the ability to obtain information on specialized healthcare [19][20][21][22]. For patients with rare diseases in China, the high costs involved may be the primary challenge to trans-provincial diagnosis, even though partial health insurance coverage may be available. Many rare diseases result in physical disability, making it even harder for patients to travel long distances; and even if they can travel, they usually need to be accompanied. As the disease is rare, an additional barrier may be finding the right hospital.
With these considerations in mind, four groups of factors relating to the mobility of patients were examined: (1) affordability of healthcare, including factors such as family income, measured by the relative income grade in the patient's home city, hukou status (registered as an urban or rural citizen) and medical insurance, including Urban Employee Basic Medical Insurance (UEBMI) and Basic Medical Insurance for Urban and Rural Residents (BMIURR) coverage; (2) patients' physical disability, measured by the extent of dependency on assistive devices; (3) support by caregivers, measured by patients' marital status and the number of other family members, which can significantly affect the mobility of patients with physical disabilities; and (4) education level, measured by the highest number of schooling years among the patient and their parents, which is a surrogate for the ability to find a suitable hospital. We hypothesized that a greater chance of trans-provincial diagnosis is associated with greater affordability, less disability, more support by caregivers and a higher education level. The adult and child cases were analyzed separately due to differences in the incidence of disease and in the ability of the patient to act on their own. Comparing these two groups also reveals differences in the attitudes of families toward, and input into, seeking diagnosis in other provinces. The data were analyzed using SPSS 24.0.

Descriptive Analysis
Table 1 presents a descriptive analysis of the data. Trans-provincial diagnoses accounted for 47.2% of the total, with a slight difference between the adult (47.6%) and child (46.5%) groups. As Figure 2 shows, coastal provinces delivered more accurate diagnoses and had a lower proportion of trans-provincial diagnoses. The destination hospitals were concentrated in Beijing and Shanghai, host cities of the largest number of the best hospitals in China. Notes to Table 1: (1) refers to mean (S.D.); (2) the provincial-level cities include the capital cities of each province and five other cities specifically designated in the state plan, i.e., Dalian, Qingdao, Ningbo, Xiamen and Shenzhen. The average age was 35.5 years for adult patients and 6.0 years for child patients.
Overall, 53.6% of adult patients were female, and the proportion was 35.7% for child patients. About three-quarters of adult and child patients were afflicted with a disease of a medium degree of rarity. Most patients reported that their family income was around or below the average of the local city. The majority of patients came from lower-ranked cities. Half of the patients held an urban hukou. Only 35.5% of adult patients were covered by UEBMI, while 53.6% of adult patients and 78.8% of child patients were covered by BMIURR. About half of the patients always depended on assistive devices. Sixty-five percent of adult patients were married. The average longest schooling duration among family members was 12.03 years for adult patients and 11.18 years for child patients. The bivariate analysis in Table 2 shows that trans-provincial diagnosis was significantly associated with a longer diagnosis delay, more hospitals visited and a higher possibility of misdiagnosis, for both adult and child patients. Many patients had to resort to trans-provincial diagnosis after several failures in local hospitals.

Factors Affecting the Trans-Provincial Diagnosis
For both adult and child patient groups, the dependent variable in the binary logistic regression model is a trans-provincial accurate diagnosis, in which 0 represents a diagnosis within the home province while 1 represents a diagnosis outside the home province. Among the independent variables, sex, ethnicity, hukou, marriage status and UEBMI and BMIURR coverage are dummy variables. Based on this, a composite reference is created, being an unmarried male of Han ethnicity with a rural hukou and with UEBMI and BMIURR coverage. For child patients, the model setting is slightly different: marriage status and UEBMI coverage are excluded as they are not applicable to children. The results are presented in Table 3. Table 3: Logistic regression models estimating the effects of healthcare distribution and patient mobility on trans-provincial diagnosis in adult and child patients.
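As a sketch of the model just specified: a binary logit with the 0/1 outcome and dummy-coded covariates, with coefficients exponentiated into odds ratios. The original analysis was run in SPSS 24.0; the data file and variable names below are hypothetical stand-ins, not the survey's actual coding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per adult patient.
# trans_prov: 1 = accurate diagnosis outside home province, 0 = within.
df = pd.read_csv("adult_patients.csv")

model = smf.logit(
    "trans_prov ~ age + C(sex) + C(ethnicity) + rarity"
    " + n_3a_hospitals + income_grade + C(hukou) + disability",
    data=df,
).fit()

# Exponentiate coefficients and confidence limits into odds ratios.
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(odds_ratios)

# An OR below 1 reads as a percentage decrease in the odds:
# e.g. OR = 0.973 per additional 3-A hospital -> (1 - 0.973) * 100 = 2.7%.
```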
The models controlled for the effects of the demographic characteristics of patients and the rarity of the disease. The patient's age was significantly associated with trans-provincial diagnosis, but only in the child group: for child patients, each additional year of age was associated with a 10.3% (OR = 1.103; 95% CI, 1.054-1.154; p < 0.001) increase in the odds of trans-provincial diagnosis. In contrast, the rarity of the disease had a significant effect only in the adult group, with a higher level of rarity associated with a 41.0% (OR = 1.410; 95% CI, 1.070-1.858; p = 0.015) increase in the odds of trans-provincial diagnosis. Sex and ethnic minority status did not show significance. Regarding the impact of healthcare resource distribution, the more 3-A hospitals there are in a patient's home province, the less likely they were to travel to another province for an accurate diagnosis. Each additional 3-A hospital was associated with a 2.7% decrease in the odds of trans-provincial diagnosis for both adult (OR = 0.973; 95% CI: 0.963-0.983; p < 0.001) and child patients (OR = 0.973; 95% CI: 0.956-0.990; p = 0.003). The numbers of hospital beds and licensed doctors did not show significance. Regarding the impact of patient mobility, only factors related to affordability and physical disability showed significance, but they affected the adult and child groups differently. For adult patients, a higher level of family income in the local city was associated with a greater likelihood of trans-provincial diagnosis (OR = 1.349; 95% CI: 1.079-1.686; p = 0.009), although patients in higher-level cities were more likely to obtain an accurate diagnosis in their home province (OR = 0.739; 95% CI: 0.671-0.812; p < 0.001). This is likely to be ascribable to the fact that high-level cities usually have more 3-A hospitals [12]. However, for child patients, the level of family income in the local city did not significantly affect the odds of trans-provincial diagnosis. An urban hukou was associated with a 46.7% (OR = 1.467; 95% CI: 1.061-2.028; p = 0.020) increase in the odds of trans-provincial diagnosis for adult patients, but showed no significance for child patients. The more severe the disability, the less likely adult patients were to travel to other provinces: a higher level of dependency on assistive devices was associated with a 13.1% (OR = 0.869; 95% CI: 0.781-0.966; p = 0.009) decrease in the odds of trans-provincial diagnosis. Nevertheless, physical disability did not show significance in the child group. Statistical significance was not found for factors related to education level and support by caregivers.

Discussion
Our study evaluated cross-sectional associations between the geographic distribution of high-quality healthcare and successful diagnosis of a rare disease secured by trans-provincial mobility. This is an important relationship to investigate because reducing the delay in diagnosis of rare diseases can have significant benefits for patients in terms of prognosis and quality of life. The study makes a significant contribution to the study of the diagnosis of rare diseases, as most current concerns focus on deepening the medical understanding of rare disorders. The 2018 China Rare Disease Survey showed that around half of patients subsequently found to have a rare disease had to travel to another province for an accurate diagnosis. Our bivariate analysis suggests that trans-provincial diagnosis was significantly associated with a more arduous experience in accessing quality healthcare, including longer waiting times, more hospitals visited for consultation and a higher propensity for misdiagnoses before a final correct diagnosis. Regression models fitted to identify significant associations with trans-provincial diagnosis identified four issues that should be taken into account in framing a healthcare policy response. The first is the limited impact of the rarity of the disease on the patient's healthcare utilization behavior. Disease rarity accounts for only a tiny proportion of the variability of trans-provincial diagnosis for adult patients and is not significant for children. This suggests that patients' healthcare utilization behavior may vary significantly, even with diseases of the same degree of rarity. It also suggests that more attention should be paid to socioeconomic difficulties in accessing accurate diagnosis. The second is the impact of the uneven geographical distribution of high-quality healthcare. This is a key factor determining the likelihood of a trans-provincial diagnosis for both adults and children. The total quantity of healthcare resources shows no significant impact, in that patients in provinces such as Henan and Hunan, where there are significant healthcare resources, still have to go to other provinces to obtain an accurate diagnosis. However, high-quality healthcare is significantly unevenly distributed in China.
For example, six out of the top 10 hospitals in 2018 were located in Beijing and Shanghai [23], which could be a major reason why 40.8% of patients were finally accurately diagnosed in these two cities. To reduce the delay in diagnosis of rare diseases and to relieve the burden on patients, more high-quality hospitals are needed in the provinces of Central and West China. Moreover, as the number of patients afflicted with each rare disease is limited, specialist centers targeted at rare diseases could be useful to receive enough patients with very rare conditions and thus contribute to clinical research. These specialist centers should be developed at the national level rather than being dispersed among provinces. The third issue is the difference in the revealed behavior of families seeking an accurate diagnosis when comparing adult and child patients. Trans-provincial diagnoses are less constrained by such factors as income and hukou for child patients than for adult patients. Put differently, the families of children with a rare disease tend to invest more in seeking an accurate diagnosis, regardless of their socioeconomic status. In particular, as child patients get older, parents are more eager to seek treatment in high-quality hospitals, even those outside their home province. This greater effort is understandable, as the confirmation of a disease is extremely important when establishing a life plan for the child. The finding indicates that low-income families are likely to suffer a heavier burden, in that they invest a greater proportion of their family assets into seeking a diagnosis. The fourth is the disparities in mobility among adult patients. Aside from family income, trans-provincial diagnoses were significantly affected by a patient's physical disability and hukou status. This indicates that disparities in the mobility of adult patients should be addressed, and more support should be provided to disabled and rural patients. The result is consistent with both accessibility and social discrimination explanations: adult rare disease sufferers holding a rural hukou may have a worse experience in seeking diagnosis because of greater distances, lower affordability, poorer urban knowledge and connections, and/or there may be inequalities or discrimination in some parts of the system, e.g., the institution of healthcare insurance. As this is a pioneering investigation of the trans-regional diagnosis of rare diseases in China, we need to acknowledge its limitations. The first is that the nonprobability sampling method, coupled with a limited sample size, may introduce the risk of sampling bias into our study. The limited number of cases in provinces such as Tibet, Hainan and Qinghai means that the constraints of high-quality healthcare on patients' healthcare utilization are not properly represented there. The second is that some patients obtain an accurate diagnosis abroad, and these cases are not well represented in this survey. Our spatial unit of analysis presents another limitation. The uneven distribution of healthcare and unequal patient mobility are also significant within a province, and a study on a finer scale would produce different results. Lastly, the study has a cross-sectional design and we acknowledge the problem of endogeneity and the risks in inferring causality.
We can also not be sure, for example, of the extent to which families or individuals with rare diseases move, on a permanent or long-term basis, to big cities with better hospitals [12].

Conclusions
Although the advancement of medical knowledge of rare disorders is fundamentally important in coping with rare diseases, the socioeconomic dimension of accessibility to accurate diagnoses also needs attention. In contrast to the limited effect of the rarity of the disease, the geographical distribution of healthcare resources and the mobility of patients have been shown to be significantly associated with the trans-provincial diagnosis of rare diseases. Among adult patients, aside from those with a poor economic status, those living in rural areas and the disabled are also less likely to travel between provinces in search of an accurate diagnosis. Families of child patients tend to pour more resources into seeking an accurate diagnosis than those with an adult patient, regardless of the family's socioeconomic status, and this increases the economic burden on low-income families. Moreover, more systematic surveys of the accessibility of rare disease diagnoses are needed in the future.

Table A1 (cont.): Gaucher disease, 1-9/100,000 (orpha.net); glycogen storage disease due to acid maltase deficiency, 1-9/100,000 (orpha.net).
ZNF185 is a p53 target gene following DNA damage

The transcription factor p53 is a key player in the tumour-suppressive DNA damage response, and a growing number of target genes involved in these pathways have been identified. p53 has been shown to be implicated in controlling cell motility, and its mutant form enhances metastasis by loss of cell directionality, but the role of p53 in this context has not yet been investigated. Here, we report that ZNF185, an actin cytoskeleton-associated protein of the LIM family of Zn-finger proteins, is induced following DNA damage. ChIP-seq analysis, chromatin crosslinking immunoprecipitation experiments and luciferase assays demonstrate that ZNF185 is a bona fide p53 target gene. Upon genotoxic stress, caused by the DNA-damaging drug etoposide and by UVB irradiation, ZNF185 expression is up-regulated, and in etoposide-treated cells ZNF185 depletion does not affect cell proliferation and apoptosis, but interferes with actin cytoskeleton remodelling and cell polarization. Bioinformatic analysis of different types of epithelial cancers from both the TCGA and GTEx databases showed a significant decrease in ZNF185 mRNA level compared to normal tissues. These findings are confirmed by tissue microarray IHC staining. Our data highlight the involvement of ZNF185 and cytoskeleton changes in the p53-mediated cellular response to genotoxic stress and indicate ZNF185 as a potential biomarker for epithelial cancer diagnosis.

In addition to its roles in cell death, p53 has also been implicated in cytoskeleton assembly, cell motility and mechanosignaling, as a negative regulator of cancer cell mobility, invasion and metastasis [18][19][20]. Integrin expression and signalling pathways, which play a key role in tumour cell invasion and metastasis, have been reported to be regulated indirectly by p53 [18]. For instance, Nutlin-3a, an MDM2 antagonist that acts as a p53 activator, decreases the expression of integrin alpha5 in colorectal cancer and glioma cells [21,22]; the expression of integrin beta3 also decreases upon DNA damage in wild-type p53-expressing cells [23]. p53 also regulates focal adhesion and Rho signalling pathways by regulating Rho GTPase activity [24] and effector protein genes of the RhoA/RhoC and Cdc42 pathways [25][26][27][28]. In addition, F-actin formation is negatively or positively regulated by p53 in response to DNA damage, depending on the anti-tumour drug used and the cell type. For instance, while doxorubicin increases the expression of RhoC and LIM kinase 2 in a p53-dependent manner, promoting actin stress fiber formation [29], etoposide and camptothecin attenuate this process through p53-dependent expression of RhoE [30]. It has also been reported that upon etoposide-mediated DNA damage, p53 alters the actin cytoskeleton by transcriptional induction of the expression of the cytoskeleton adaptor protein ankyrin-1 [31]. The relevance of cytoskeleton remodelling and cell mobility in tumours is evidenced by the fact that mutant p53 promotes tumour cell invasion and results in loss of directionality during migration [32]. Cytoskeleton remodelling and cell migration in cancer is a complex process controlled by many proteins and pathways, and the specific role of p53 in these mechanisms is not yet completely understood. Here, we describe a novel p53 target gene, ZNF185, which codes for a Zn-finger protein belonging to the LIM family, activated upon genotoxic stress caused by the DNA-damaging drug etoposide.
ZNF185 itself is not necessary for p53-dependent cell cycle arrest and apoptosis, yet its silencing affects actin cytoskeleton changes and cell polarity upon etoposide treatment. At the mRNA and protein level, ZNF185 is strongly reduced in different types of epithelial tumours, including skin and head and neck squamous cell carcinomas, suggesting that depletion of ZNF185 in cancer cells facilitates cancer cell migration and spreading.

ZNF185 is a p53 target gene
We have previously shown that the p53 family member p63, using a novel promoter region and a specific enhancer, directly regulates ZNF185 expression in keratinocytes [33]. To investigate whether p53 could also regulate ZNF185 expression, and to expand the p53 target genes involved in cytoskeleton regulation and cell polarity, we further analysed the ZNF185 promoter region using the UCSC genome browser (Fig 1A). We observed several regions showing high accessibility and conservation between species, and an enrichment in binding of different transcription factors (TFs). Analysis of the publicly available ChIP-seq data for p53 performed in MCF7 cells after p53 stabilization by nutlin (GSE86164, [34]) revealed a strong peak within the ZNF185 promoter only in nutlin-treated cells (Fig 1B), suggesting p53 involvement in the regulation of ZNF185 transcription. Using a previously described bioinformatic tool for p53 binding site (bs) prediction [35], we identified a putative binding site for p53 within the genomic region corresponding to the peak from the ChIP-seq shown in Fig 1B (Fig 1C). Interestingly, this region is conserved only in primates and is absent in other species (Fig 1C). To confirm physical binding of p53 on the ZNF185 promoter, we performed a ChIP assay in p53 Tet-On inducible SaOs-2 cells previously generated in the laboratory [36]. As a positive control, we used the promoter of CDKN1A, the gene coding for p21 (Fig 1D). To confirm whether p53 could directly regulate ZNF185 expression by binding its promoter, we cloned the genomic locus harbouring the p53 bs upstream of the luciferase reporter gene. The luciferase activity assay showed a strong activation (120-fold, P<0.01) upon p53 overexpression. Interestingly, the overexpression of two different p53 mutants frequently found in human cancers (R175H and R273H) did not show any strong activation compared to the control (Fig 1E), indicating that ZNF185 is a target of wild-type p53. Furthermore, the substitution of cytosines and guanines to adenines within the ZNF185 promoter sequence led to a dramatic decrease of the luciferase activity (83% reduction, P<0.01) upon p53 overexpression (Fig 1F). To investigate whether p53 is able to regulate ZNF185 transcription, we induced p53 expression by doxycycline in SaOs-2 Tet-On cells and measured by RT-qPCR a significant increase in ZNF185 mRNA after p53 induction, paralleling the CDKN1A increase (15-fold for ZNF185 and 5-fold for CDKN1A at 24 h of induction, P<0.05; Fig 1G). We also confirmed this result in a different cellular system. Indeed, overexpression of p53 in H1299 cells also led to a 20-fold increase in ZNF185 mRNA, whereas the overexpression of the two p53 mutants did not show any significant modulation (Fig 1H). Altogether, these data indicate ZNF185 as a bona fide transcriptional target of wild-type p53.
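The luciferase readout in these reporter experiments is a ratio of ratios: firefly signal normalized to the Renilla co-transfection control, then divided by the same quantity in the empty-vector condition to give fold activation. A minimal sketch is shown below; the relative-light-unit values are invented for illustration, not the paper's raw data.

```python
def fold_activation(firefly, renilla, firefly_ctrl, renilla_ctrl):
    """Firefly/Renilla-normalized luciferase activity, expressed as
    fold change over the empty-vector control condition."""
    return (firefly / renilla) / (firefly_ctrl / renilla_ctrl)

# Hypothetical relative light units for a wild-type p53 co-transfection:
print(fold_activation(firefly=3.6e6, renilla=3.0e4,
                      firefly_ctrl=3.0e4, renilla_ctrl=3.0e4))  # 120.0
```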
ZNF185 is up-regulated upon DNA damage
We investigated whether ZNF185 is transcribed as a consequence of p53 activation following DNA damage. Using two different carcinoma cell lines harbouring wild-type p53 (HCT116 and MCF7), we analysed ZNF185 expression after 0, 8, 16, and 24 hours of etoposide treatment. In both cases, we saw p53 stabilization, as indicated by the western blots, and, as a consequence of p53 activation, significant up-regulation of ZNF185 mRNA (3-4-fold over control at 24 h of etoposide treatment, P<0.05), and of p21 as a positive control, both at the mRNA and protein levels (Fig 2A-B). Interestingly, analysis of publicly available ChIP-seq data (GSE56674, [37]) for p53 performed in keratinocytes showed that p53 binds to the locus within the ZNF185 promoter identified by us in this study. Moreover, this binding is observed only upon cisplatin or doxorubicin treatment (Fig 2C). As a model of basal layer keratinocytes, we used the commercial cell line of immortalized keratinocytes, Ker-CT. We confirmed that in Ker-CT cells, too, etoposide treatment leads to p53 stabilization and ZNF185 up-regulation at both the mRNA (3-fold, P<0.01) and protein levels (Fig 2D). To confirm that ZNF185 up-regulation is p53-dependent, we performed siRNA-mediated knock-down of p53 in Ker-CT cells. As expected, depletion of p53 abolished the up-regulation of ZNF185 upon etoposide treatment (Fig 2E). Since the major source of DNA damage in human keratinocytes is UV irradiation, we irradiated Ker-CT cells and analysed ZNF185 levels. In this case, too, we saw an up-regulation of ZNF185 at the protein level. Altogether, these findings show that upon DNA damage ZNF185 expression is up-regulated in a p53-dependent manner, both in tumour cell lines and in normal human keratinocytes.
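The relative mRNA quantification behind these fold-change figures is the 2^-ΔΔCt method named in the methods section: Ct values are first normalized to the housekeeping gene (TBP here), then to the untreated control. A sketch of the calculation follows; the Ct numbers are invented for illustration.

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method."""
    dct_treated = ct_target_treated - ct_ref_treated   # normalize to TBP
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control                   # normalize to control
    return 2 ** (-ddct)

# Hypothetical triplicate means: ZNF185 vs TBP, etoposide vs untreated.
print(ddct_fold_change(24.0, 25.2, 26.0, 25.4))  # ~3.5-fold up-regulation
```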
ZNF185 is involved in cytoskeleton remodelling upon DNA damage
Since the major functions of p53 activation upon DNA damage relate to cell cycle arrest and apoptosis, we asked whether depletion of ZNF185 could alter the cell cycle under this specific stress condition. We performed siRNA-mediated knock-down of ZNF185 in Ker-CT cells with two different siRNAs and treated the cells with etoposide. Cytofluorimetric analysis did not reveal any significant modulation in cell cycle distribution or apoptosis with respect to the control (Fig 3A). It was previously reported that ZNF185 regulates proliferation of prostate cancer cells [38]; to further investigate this point, we generated a Ker-CT cell line stably expressing an shRNA against ZNF185 (shZNF185). We performed the EdU-incorporation assay to evaluate the number of cells in S-phase, but we did not observe any significant difference in cell proliferation with respect to the control (Fig 3B). Given that several LIM-domain Zn-fingers can migrate into the nucleus under stress conditions [39], we asked whether DNA damage can alter ZNF185 localisation. We found that ZNF185 localised in the cytoplasm and at the cell periphery (Fig 3C), also after etoposide treatment. Due to the presence of the actin-interacting domain within the ZNF185 protein, we hypothesised that ZNF185 could be involved in cytoskeleton remodelling upon DNA damage. To this aim, we performed immunofluorescence analysis using phalloidin as a marker of filamentous actin and vinculin as a marker of focal adhesion. Under normal conditions, most of the cells had a migratory phenotype, showing vinculin accumulation on the leading edge. After etoposide treatment, cells lost planar polarity, as visualised by homogeneous vinculin distribution on the cell periphery (percentage of polarized cells from 100% to 35%). Surprisingly, this phenotype was abolished in the shZNF185 cells, which retained planar polarization also upon etoposide treatment (percentage of polarized cells from 100% to 82%) (Fig 3D). Altogether, these results suggest that ZNF185 is involved in the loss of the planar polarity of cells upon DNA damage.

ZNF185 is down-regulated in epithelial cancers
p53 is frequently mutated in human cancers, and we have shown that ZNF185 is positively regulated by wild-type p53; we therefore asked whether the ZNF185 level is decreased in epithelial cancers and particularly in skin carcinomas. Firstly, we analysed ZNF185 mRNA expression in different types of epithelial cancers from the TCGA collection. Five cancer types, prostate adenocarcinoma (PRAD), chromophobe renal cell carcinoma (KICH), head and neck squamous cell carcinoma (HNSC), oesophageal carcinoma (ESCA), and adenoid cystic carcinoma (ACC), showed a significant decrease in ZNF185 mRNA level with respect to the normal tissues from both the TCGA and GTEx databases (Fig 4A). Furthermore, we analysed the correlation between the expression of ZNF185 and two distinct targets of p53, PERP and CDKN1A. Interestingly, a strong positive correlation was observed only in the cancers arising from squamous epithelia, the oesophageal and head and neck carcinomas (Fig. 4B). Since there are only a few datasets of skin cancer with a very low number of samples, we decided to analyse ZNF185 expression in skin cancer by immunohistochemistry using a tissue microarray containing 42 samples of cutaneous squamous cell carcinoma (cSCC), 14 samples of cutaneous basal cell carcinoma (cBCC), 12 samples of cutaneous malignant melanoma (cMM), and 10 samples of normal skin. As a marker of proliferation, we used Ki67. Analysis of the ZNF185 expression pattern at the protein level in normal skin confirmed previously published data from our laboratory [33], in which the highest ZNF185 expression occurs in the differentiated spinous and granular layers ("SS/SG") of the epidermis, with low expression in the proliferating basal layer ("SB"). The cornified layer ("SC") and dermis ("D") were found negative for ZNF185 (Figure 5A). Analysis of skin cancer samples revealed that ZNF185 expression is dramatically down-regulated in the cutaneous squamous and basal cell carcinoma ("cSCC" and "cBCC") and malignant melanoma ("cMM") samples (Figure 5B). Furthermore, ZNF185 was found only in well-differentiated subpopulations of squamous cell carcinoma ("WD" of cSCC), in contrast to poorly-differentiated basal-like subpopulations ("PD" of cSCC) (Figure 5C). All the tumour samples showed a significant decrease (P<1x10^-5) of ZNF185 H-score with respect to the differentiated layers of the normal epidermis (Figure 5D). These findings reveal a dramatic down-regulation of ZNF185 at the protein level in skin cancer and suggest that ZNF185 could be a potential biomarker for epithelial cancer diagnosis and prognosis.
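The H-score used for the TMA quantification above is conventionally the intensity-weighted percentage of positive cells, yielding a 0-300 scale; whether the authors' scoring protocol matches this exactly is an assumption, and the percentage distributions below are invented.

```python
def h_score(pct_by_intensity):
    """Histoscore: sum of staining intensity (0-3) times the percentage
    of cells at that intensity; ranges from 0 to 300."""
    return sum(intensity * pct for intensity, pct in pct_by_intensity.items())

# Hypothetical distributions (percentages sum to 100 in each core):
normal_epidermis = {0: 5, 1: 15, 2: 40, 3: 40}
tumour_core = {0: 70, 1: 20, 2: 10, 3: 0}
print(h_score(normal_epidermis), h_score(tumour_core))  # 215 vs 40
```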
Recently, the importance of actin-cytoskeleton remodelling and cell polarity during cancer cell spreading and metastasis has emerged [114][115][116][117], and p53 is in part involved in counteracting this specific aspect. Indeed, wild-type p53 can influence actin cytoskeleton dynamics by controlling integrin and cadherin signalling and extracellular matrix degradation, suppressing EMT via different pathways [20,23,[118][119][120]. Interestingly, the tumour microenvironment also influences the actin cytoskeleton [121,122], in part by repressing wild-type p53 functions [123]. In fact, when p53 is inactivated, cancer cell invasion increases [124]. Here, we demonstrated that wild-type p53 transcriptionally activates ZNF185 in cells upon DNA damage, which could be part of p53's negative regulation of cancer cell mobility, invasion and metastasis. Interestingly, similarly to another p53 target gene, Rap2B [125], down-regulation of ZNF185 does not affect cell cycle progression or cell death, but its silencing abolishes the actin cytoskeleton rearrangements and cell polarity changes upon etoposide treatment. ZNF185 is an actin-cytoskeleton-associated Lin-11, Isl-1 and Mec-3 (LIM) domain-containing protein [126]. The domain interacting with actin is located at the N-terminus, and it is necessary to mediate actin-cytoskeleton targeting of ZNF185, while the C-terminal LIM domain is dispensable for actin binding [38]. The LIM domain is a protein-protein interaction domain found in a wide range of proteins whose functions are related to the dynamics of the cytoskeleton [39,127,128]. In keratinocytes and the epidermis, ZNF185 has been described as highly expressed under differentiating conditions, physically interacting with E-cadherin, a component of the adherens junctions, one of the critical cell-cell adhesive complexes crucial in pluristratified epithelia [33]. The involvement of ZNF185 in pathologies such as cancer [38] has not been completely investigated yet. A few studies reported ZNF185 as an unfavourable prognostic marker in ductal carcinoma of the pancreas [129]. Its expression was found up-regulated in colon cancer and likely correlated with liver metastasis [130]. On the other hand, other studies described epigenetic silencing of ZNF185 associated with high-grade and metastatic prostate tumours [131], lung tumours and head and neck squamous cell carcinomas [33,[132][133][134]. Recently, it was reported that ZNF185 expression is negatively correlated with lymph node metastasis of lung adenocarcinoma and that its overexpression leads to down-regulation of p-AKT, p-GSK3β, VEGF and MMP-9 expression [135]. These studies suggest a possible tumour-specific contribution of ZNF185 expression to tumour formation. We confirmed ZNF185 down-regulation in different epithelial tumours and, by analysing the expression of ZNF185 at the protein level, we found a significant decrease of ZNF185 in all the tumour samples analysed. Moreover, we found a ZNF185-positive signal only in well-differentiated subpopulations of squamous cell carcinoma, in contrast to poorly-differentiated basal-like aggressive subpopulations, suggesting a tumour-suppressor role of ZNF185. The possible involvement of ZNF185 in cytoskeleton remodelling upon DNA damage suggests a role in the control of metastasis, which is in line with previous reports [135]. The identification of the p53-ZNF185 axis could contribute to determining how p53 controls cell spreading by actin cytoskeletal remodelling, in which both the mechanical properties of the cytoskeleton of the cell and the microenvironment of the tumour cells seem to play an important role. Further investigation of the mechanisms by which p53 controls actin cytoskeleton reorganization and cell polarity, including the identification of novel target genes and pathways, would possibly be useful in developing new anti-cancer strategies and therapies.

Western blotting
The cells were collected by trypsinization, washed in PBS and lysed in RIPA buffer (50 mM Tris-Cl pH 7.4, 150 mM NaCl, 1% NP40, 0.25% Na-deoxycholate, 1 mM AEBSF, 1 mM DTT).
20-50 µg of total protein extracts were resolved in SDS polyacrylamide gels using the Mini-PROTEAN Tetra cell system (Bio-Rad, Hercules, CA, USA) and blotted onto a Hybond PVDF membrane (GE Healthcare, Chicago, IL, USA) using the Mini Trans-Blot Cell system (Bio-Rad). Membranes were blocked with 5% non-fat dry milk (Bio-Rad) in PBS/0.1% Tween-20 buffer for 1 h at room temperature with agitation. Membranes were incubated with primary antibodies overnight at +4 °C, washed and hybridized for 1 h at room temperature with the appropriate horseradish peroxidase-conjugated secondary antibodies (goat anti-rabbit and goat anti-mouse antibodies, Bio-Rad). Detection was performed with an ECL chemiluminescence kit.

RNA extraction and RT-qPCR analysis
Total RNA was isolated using the RNeasy Mini Kit (Qiagen, Hilden, Germany) following the manufacturer's protocol. Total RNA (1 µg) was used for cDNA synthesis with the GoScript Reverse Transcription System kit (Promega, Madison, WI, USA). RT-qPCRs were performed using the GoTaq Real-Time PCR System (Promega) in an Applied Biosystems 7500 Real-Time PCR System (Applied Biosystems, Foster City, CA, USA) using appropriate qPCR primers (Supplementary Table 1). TBP was used as the housekeeping gene for normalization. The expression of each gene was defined from the threshold cycle (Ct), and relative expression levels were calculated using the 2^-ΔΔCt method. All reactions were run in triplicate.

Analysis of the ZNF185 genomic locus
To analyse the ZNF185 genomic locus, different publicly accessible high-throughput sequencing data from the ENCODE database (ChIP-seq for H3K4me3 in different cell lines, CpG islands, DNase clusters, vertebrate conservation, TF binding) were visualised in the UCSC Genome Browser. Several ChIP-seq datasets from the NCBI GEO database were analysed to assess p53 binding to the ZNF185 promoter locus (GSE56674 [37] and GSE86164 [34]). To identify putative p53 binding sites, the "p53 scan" software [136] was used. The conservation analysis of the ZNF185 promoter locus was performed within the UCSC genome browser.

Chromatin immunoprecipitation assay
1x10^6 SaOs-2 Tet-On p53 cells, induced to overexpress p53 for 16 h, were used for the ChIP assay. Cells were collected, fixed in 1% formaldehyde, and subjected to sonication for DNA shearing. The chromatin immunoprecipitation was performed with an HA antibody (BioLegend) or nonspecific immunoglobulin G (IgG, Invitrogen) using the MAGnify ChIP Kit (Invitrogen). Specific primers were used to amplify the putative p53 response element identified within the ZNF185 promoter region (Supplementary Table 1).

Luciferase activity assay
The promoter region of ZNF185 containing the putative p53 binding site was amplified from human genomic DNA using specific primers (Supplementary Table 1). PCR products were digested with KpnI/NheI restriction enzymes (New England Biolabs, Ipswich, MA, USA) and subcloned into the pGL3-Promoter reporter vector (Promega). The constructs were completely sequenced. For the luciferase activity assay, a total of 1.2x10^5 H1299 cells were seeded in 12-well dishes 24 h before transfection. 100 ng of pGL3 reporter vector, 2 ng of pRL-CMV-Renilla luciferase vector (Promega) and 300 ng of either pcDNA-HA-p53, pcDNA-HA-p53-R175H, pcDNA-HA-p53-R273H, or empty pcDNA-HA vector (as a control) were co-transfected using Effectene transfection reagent according to the manufacturer's instructions (Qiagen). The luciferase activity was measured 24 h after transfection using a Dual Luciferase Reporter Assay System (Promega).
The light emission was measured over 10 sec using a Lumat LB9507 luminometer (EG&G Berthold, Bad Wildbad, Germany). The transfection efficiency was normalized to Renilla luciferase activity. The overexpression of p53 was confirmed by western blotting.

Mutagenesis
For mutagenesis of the p53 binding site, a PCR was performed on 100 ng of the pGL3 vector carrying the p53 binding site using specific primers (Supplementary Table 1). The PCR product was digested with DpnI restriction enzyme (New England Biolabs). The presence of the mutated site was confirmed by sequencing.

Cell proliferation
The incorporation of EdU during DNA synthesis was evaluated using the Click-iT EdU flow cytometry assay kit according to the manufacturer's protocol (Thermo Fisher Scientific). The cell cycle was analysed using an Accuri C6 flow cytometer (BD Biosciences). Fifteen thousand events were evaluated using the Accuri C6 (BD Biosciences) software. For cell cycle analysis, cells were fixed in a 50% methanol/acetone 4:1 mix for 30 min at +4 °C, then treated with 13 Kunitz U/mL RNase for 15 min and stained with 50 µg/mL of propidium iodide for 20 min. Twelve thousand events were acquired using a FACSCalibur (BD Biosciences). Cell cycle distribution was calculated using FlowJo software.

Bioinformatic analysis
Analysis of ZNF185 expression in normal and tumour samples from the TCGA/GTEx databases was performed using GEPIA [137]. ZNF185, PERP, and CDKN1A expression data in ESCA and HNSC samples from the TCGA collection were obtained using the R2: Genomics Analysis and Visualization Platform (http://r2.amc.nl/).

Statistical analysis
The significance of differences between two experimental groups was calculated using an unpaired, two-tailed Student's t-test. Values of P < 0.05 were considered significant. For RT-qPCR and luciferase assays, values reported are the mean ± SD. For statistical analysis of TMA scoring, the Mann-Whitney U test was used. All statistical analyses were performed using GraphPad Prism 7.0 software. Violin plots were generated in R using the ggplot2 package.

AUTHOR CONTRIBUTIONS
AS, AC, AML and LA performed the research, EC designed the research, EC, AM, NDD, MAP and GM analysed the data, EC wrote the paper and all the authors read the paper and made comments.

CONFLICTS OF INTEREST
The authors declare no conflict of interest.

FUNDING
This work has been supported by grants from Associazione Italiana per la Ricerca contro il Cancro (AIRC): IG15653 (to G.M.), and by Istituto Dermopatico dell'Immacolata, IRCCS, RC to EC and the joint research program Italia-Cina. This work has also been partially supported by the Medical Research Council, UK.
Discrepancies between Lexical and Tonal Variation: A Case Study of the Thai Dialect of Samui Island

Over the past twenty-five years the study of Thai dialects has concentrated on the geographical variation of either tones or lexical items. In the 1990s another type of Thai dialect study began to take shape: a combination of geographical and social variation study. Age has been identified as the main factor influencing variation in Thai dialects. The new type of study has so far concentrated on lexical variation. This paper deals with both geographical and social variation and both lexical and tonal variation. The Thai variety investigated in this study is the Southern Thai spoken on Samui Island in Surat Thani province. The areas covered are the seven subdistricts of the Samui Island district. Two parallel studies were undertaken, culminating in two M.A. theses. Research planning and data collection in these two studies were carried out jointly. Results show that there is no geographical variation in Samui Thai in either tonal or lexical usage. When considering social variation, however, this study confirms that age plays a very important role. It clearly influences lexical variation in Samui Thai, but it does not influence tonal variation. While the 60-70 years old speakers still use Southern Thai and Samui Thai lexical items and tones, the 10-20 years old speakers readily adopt Standard Thai lexical items, but they still use the same tone system and tonal characteristics as the 60-70 years old speakers. It is suggested that future studies should investigate age-based tonal and lexical variation in Standard Thai and Thai dialects further to obtain a better picture of the process of ongoing change in Thai.

1 This article pays tribute to two prominent Thai linguists, Professor Dr. Vichin Panupong on the occasion of her 72nd birthday anniversary and Professor Dr. Amara Prasithrathsint on her retirement from Chulalongkorn University. Professor Dr. Vichin Panupong, a native of Songkhla province, is herself a Southern Thai speaker. She is the most prominent scholar in Thai dialectology. Professor Dr. Amara Prasithrathsint taught sociolinguistics and other fields in linguistics, e.g. ethnolinguistics and syntax, for many years. She has encouraged numerous students to carry out research on sociolinguistic aspects of Thai. This paper uses the methodology of both dialectology and sociolinguistics to investigate the Southern Thai dialect of Samui Island, a fitting tribute to these two scholars who have done so much to advance linguistic studies in Thailand.
2 Assistant Professor, Department of Linguistics, Faculty of Arts, Chulalongkorn University, Bangkok, Thailand.
3 M.A. graduate, Department of Linguistics, Faculty of Arts, Chulalongkorn University, 2004.
4 M.A. graduate, Department of Linguistics, Faculty of Arts, Chulalongkorn University, 2005.
Introduction
Thai dialects have been intensively investigated over the past twenty-five years. However, a review of those studies shows that almost all of them investigated just geographical variation and just one linguistic aspect: vocabulary, tone, or consonants. There are some studies of social variation in Thai dialects (Maryprasith, 1992; Sapproong, 1994; Tantinimitrkul, 2001). The variables most frequently selected are age, sex, educational background, area of residence, and attitude toward the local dialect under study. The usual practice in these social variation studies is to deal with only a single linguistic variable, e.g. a consonant, a tone, or a set of lexical items. This study of Samui Thai differs from the previous studies in that it is multidimensional in nature, including lexical variation and tonal variation as well as variation by area of residence and age. The objective is to find out whether lexical variation matches tonal variation in those two social aspects. The linguistic situation on Samui Island suits a study of this type. Samui Thai has its own distinct tone system (Brown, 1965; Diller, 1976; L.Thongkum, 1978) and its lexical items are a mixture of varieties of Southern Thai and Standard Thai. Moreover, Samui Island is a famous tourist destination for Thais and foreigners. The influence of Standard Thai on Samui Thai can therefore be expected to be considerable. This study will investigate the extent to which Samui Thai is a mixed language. We will also compare the tone system and a set of lexical items in the speech of the young and the old residents in the seven sub-districts on Samui Island to see how the social variable and the linguistic variables interact.

Background
Samui is an island situated about 20 kilometres off the eastern coast of Southern Thailand. It is a district in Surat Thani province and the third largest island in the country. There are 7 sub-districts in the district of Samui Island: Ang Thong, Mae Nam, Bo Phut, Lipa Noi, Taling Ngam, Na Mueang, and Maret. Of these sub-districts, Taling Ngam, Na Mueang, and Maret are largely inhabited by the local people. The others are tourist areas. There is an airport on the island with several flights per day connecting it with Bangkok and some other cities. Car ferries link the island to the mainland with fifteen services per day. Samui Thai is a variety of Southern Thai. The identification is based on its tone system. Using the tone-box method, one finds in Samui Thai the distinct Southern Thai pattern of tone splits and mergers: one tone occurs in A1 and B1 and another tone in A2, B2, A3 and B3 (see Diagram 1). All three varieties of Thai shown on the diagram have this characteristic. Such splits and mergers clearly differ from those of Standard Thai, shown in Diagram 2. The Samui Thai system differs from those of Eastern Southern Thai and Western Southern Thai in one important aspect: a single tone occurs in B4, C2 and C3. In the other varieties, one tone occurs in B4 and another in C2 and C3. It should be noted that this special pattern in Samui Thai also occurs in Standard Thai. The characteristics of the column A tones in Southern Thai are also distinct: the tone in A1-B1 (T1 in Diagram 1) is high falling, that in A2-B2-A3-B3 (T2) is mid falling, and that in A4 (T3) is low falling. These observations agree with earlier studies (Brown, 1965; L.Thongkum, 1978). As far as the lexical items in Samui Thai are concerned, Ache (1986) found lexical items from both Eastern Southern Thai and Western Southern Thai (Chittham, 1970; Pankhuenkhat, 1988; Boonthip, 1992) in Samui Thai. She also discovered that several isoglosses separating these two sub-dialects of Southern Thai were located on the mainland near Samui Island. Moreover, our own preliminary investigation showed that Samui Thai had its own lexical items that were not used elsewhere. We also observed that Standard Thai words were adopted in Samui Thai. This is to be expected, as the variety is exposed quite intensively to that prestigious variety due to the status of Samui Island as a tourist destination. Such a rich mixture of types of lexical items drew our attention to this variety. Past studies have proved that age has much influence on variation in Thai dialects (Maryprasith, 1992; Sapproong, 1994; Tantinimitrkul, 2001). This study will investigate variation by age to detect the process of ongoing change in Samui Thai. The most important question that we would like to answer is whether lexical variation and tonal variation are parallel to one another.

Diagram 2: The pattern of tone splits and mergers of Standard Thai. Adapted from Brown, 1965 and L.Thongkum, 1978. In this paper only the tones on live syllables are considered, since those on checked syllables are treated as allotones of the tones established in the context of the live syllables.
7 Adapted from Brown 1965, p. 162.
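The split-and-merger patterns in the diagrams can be represented as a mapping from Gedney tone-box cells (rows A/B/C crossed with initial-consonant classes 1-4) to tone labels. The sketch below encodes the Samui Thai pattern described above; the assignment of specific cells to T4, T5 and T6 is inferred from the numbered tones and test words reported later in this paper, so it should be read as a reconstruction rather than a quotation of the diagram.

```python
# Samui Thai pattern of tone splits and mergers (live syllables):
SAMUI_TONES = {
    ("A", 1): "T1", ("B", 1): "T1",                  # one tone in A1 and B1
    ("A", 2): "T2", ("A", 3): "T2",
    ("B", 2): "T2", ("B", 3): "T2",                  # one tone in A2, B2, A3, B3
    ("A", 4): "T3",
    ("B", 4): "T4", ("C", 2): "T4", ("C", 3): "T4",  # the B4-C2-C3 merger
    ("C", 1): "T5",
    ("C", 4): "T6",
}

def same_tone(cell_a, cell_b):
    """True if two Gedney box cells have merged into a single tone."""
    return SAMUI_TONES[cell_a] == SAMUI_TONES[cell_b]

# The merger that distinguishes Samui Thai from the other Southern varieties:
print(same_tone(("B", 4), ("C", 2)))  # True
print(same_tone(("A", 1), ("A", 2)))  # False: the A1/A2 split
```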
Such splits and mergers clearly differ from those of Standard Thai, shown in Diagram 2. The Samui Thai system differs from that of Eastern Southern Thai and Western Southern Thai in one important aspect: a single tone occurs in B4, C2 and C3, whereas in the other varieties one tone occurs in B4 and another in C2 and C3. It should be noted that this special pattern of Samui Thai also occurs in Standard Thai. The characteristics of the column A tones in Southern Thai are also distinct: the tone in A1-B1 (T1 in Diagram 1) is high falling, the tone in A2-B2-A3-B3 (T2) is mid falling, and the tone in A4 (T3) is low-falling. Earlier studies (Brown, 1965; L. Thongkum, 1978) report the same characteristics for the column A tones in Samui Thai.

As far as the lexical items in Samui Thai are concerned, Ache (1986) found lexical items from both Eastern Southern Thai and Western Southern Thai (Chittham, 1970; Pankhuenkhat, 1988; Boonthip, 1992) in Samui Thai. She also discovered that several isoglosses separating these two sub-dialects of Southern Thai were located on the mainland near Samui Island. Moreover, our own preliminary investigation showed that Samui Thai had its own lexical items that were not used elsewhere. We also observed that Standard Thai words were adopted in Samui Thai. This is to be expected, as the variety is exposed quite intensively to that prestigious variety due to the status of Samui Island as a tourist destination. Such a rich mixture of types of lexical items drew our attention to this variety. Past studies have proved that age has much influence on variation in Thai dialects (Maryprasith, 1992; Sapproong, 1994; Tantinimitrkul, 2001). This study will investigate variation by age to detect the process of ongoing change in Samui Thai. The most important question that we would like to answer is whether lexical variation and tonal variation are parallel to one another.

Diagram 2: The pattern of tone splits and mergers of Standard Thai. Adapted from Brown, 1965 (p. 162) and L. Thongkum, 1978.

In this paper only the tones on live syllables are considered, since those on checked syllables are treated as allotones of the tones established in the context of the live syllables.

Methodology

Local residents were selected by area of residence (the seven sub-districts of the Samui Island district) and by age-group (10-20 years old and 60-70 years old). There were ten speakers per age-group per sub-district in our study of lexical variation and three speakers per age-group per sub-district in our study of tonal variation. Fewer informants were interviewed in the tonal study because tonal analysis involves a considerable amount of analysis per speaker. The three speakers per group in the tonal study are also informants in the lexical study. In all, there are 140 informants in the lexical study and 42 informants in the tonal study. All of the informants had to be born on the island and to have lived there permanently. Those who had stayed elsewhere longer than one year were not selected. The tone questionnaire consists of 15 monosyllabic words, all of which begin with an initial voiceless stop. Nine of these words are open syllables ending in /aa/ (khaaA1, taaA2, thaaA4, khaaB1, paaB2, thaaB4, phaaC1, paaC2, thaaC4). They were included in the questionnaire to check the tones on live syllables, i.e. syllables ending in long vowels or nasals. The other six words, among them khaatDL1, thaapDL3, khatDS1, and patDS2, were included to check the tones on long and short checked syllables, i.e. syllables ending in stops preceded by long or short vowels. Two lists of words were constructed.
The first list consists of ten tokens of each of the nine live syllable words and the second of ten tokens of each of the six checked syllable words. The tokens appear in random order, with care taken that adjacent tokens always differ.

Data collection for the tonal study was carried out by Kitivongprateep in 2004. Forty-two speakers on Samui Island were interviewed: six per sub-district, divided into two age-groups, three in the 10-20 years old group and three in the 60-70 years old group. Pictures were used to elicit the required words. Each informant was asked to pronounce all of the words in the two wordlists. The recordings of five tokens of each word were analyzed acoustically to obtain the fundamental frequency values; Praat, the acoustic analysis software, was used for this purpose. The remaining five tokens were kept as back-ups and used when a selected token could not be analyzed. To normalize duration, measurement was done at every 10% point from 0% to 100%. The values obtained from the five tokens of each word were recorded in a table using Microsoft Excel. Average values at the 11 points of measurement were calculated and converted into semitones using the formula semitone = 12 * log2(F/440.2), where F is the average frequency value at each point in Hertz (a worked sketch of this conversion is given after this section). Using the average semitone values of all of the words, line graphs were drawn. Whenever two line graphs were almost identical, they were regarded as showing the same tone and one was discarded. Eventually the tonal characteristics of all of the tones for each person were obtained in the form of line graphs. At the same time the tone splits and mergers were worked out for each person. At the next step, the tonal characteristics of the three informants in the same age-group and the same sub-district were compared. The set of a single speaker was selected to represent the group on the basis of its sharing most features with the others (see Figure 5). Then the tonal characteristics of each tone in the speech of all of the representatives were compared to find out the discrepancies, if any, between the age-groups and the sub-districts.

The data for the lexical study were elicited by Choophan in 2004. One hundred and forty speakers on Samui Island were interviewed: twenty per sub-district, including ten in the 10-20 years old group and ten in the 60-70 years old group. The forty-two informants interviewed in the tonal part of this study were also included in this part. Pictures were used to elicit the 200 words. The data in the groups were analyzed separately. The chi-square test was used to check whether the differences in frequency of occurrence found were statistically significant. Bar graphs of the frequency of different types of lexical items in groups 2-4 were drawn.

Results

This study shows very clearly that lexically Samui Thai is a mixed variety: it uses Western Southern Thai, Eastern Southern Thai, Standard Thai, and distinctively Samui Thai lexical items. A comparison of lexical usage in groups 2, 3, and 4 among the seven sub-districts shows that none of the differences is statistically significant (see Tables 1-3). The comparison between the two age-groups, by contrast, is highly significant (x2 = 438.069, df = 2, p < 0.001; x2 = 557.719, df = 3, p < 0.001).

Analysis of the tones yields quite different results. All of the speakers of both age-groups in all seven sub-districts use the same system with 6 tones, i.e. three falling tones: high falling /khaa1/, mid falling /taa2/, and low falling /thaa3/; two level tones: high level /phaa5/ and low level /thaa6/; and one rising tone /thaa4/ (see Figure 4). Moreover, there is just a single pattern of tone splits and mergers of Samui Thai in this study (see Diagram 3). It is exactly the same pattern as found in the previous studies (Brown, 1965; Diller, 1976; L. Thongkum, 1978).
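To make the conversion concrete, here is a minimal sketch of the token-averaging and semitone conversion described above. The reference value 440.2 Hz is taken as printed in the source (possibly a misprint of 440 Hz), and the array values are hypothetical illustrative measurements, not data from the study.

```python
import numpy as np

REF_HZ = 440.2  # reference frequency as printed in the source

def to_semitones(f0_hz):
    """Convert fundamental-frequency values (Hz) to semitones re REF_HZ."""
    return 12.0 * np.log2(np.asarray(f0_hz) / REF_HZ)

# Five hypothetical tokens of one word, each measured at the
# 11 normalized time points (0%, 10%, ..., 100%).
tokens_hz = np.array([
    [210, 215, 222, 228, 230, 228, 220, 205, 190, 175, 165],
    [208, 214, 220, 226, 229, 227, 218, 204, 188, 174, 164],
    [212, 216, 223, 229, 231, 229, 221, 206, 191, 176, 166],
    [209, 213, 221, 227, 230, 228, 219, 205, 189, 175, 165],
    [211, 215, 222, 228, 230, 227, 220, 204, 190, 174, 164],
])

# Average over the five tokens at each point, then convert, as in the paper.
mean_contour_st = to_semitones(tokens_hz.mean(axis=0))
print(np.round(mean_contour_st, 2))  # one line-graph contour in semitones
```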
It is found that the tonal characteristic of each tone is very similar in all of the speakers, both young and old, and in all of the sub-districts, as follows.

Tone 1, high falling, as in /khaa1/: This tone in all cases except one is either high rising-falling or high level-falling. The end point is low; in just one case it is mid. The one exception in the tonal characteristic of this tone is in the speech of the old speakers in the Maret sub-district; in that case it is high, gliding up and gliding down, and its end point is high.

Tone 2, mid falling, as in /taa2/: This tone is always mid rising-falling. The end point is low. The highest point of this tone is in most cases at the middle of the syllable; in a few cases it is further back. There is one exception: this tone in the old speakers in the Maret sub-district is not mid falling but mid rising.

Tone 3, low falling, as in /thaa3/: This tone is low rising-falling in most cases. The highest point can be quite high and is around the middle of the syllable. Only in the speech of the old speakers of Lipa Noi does this tone not rise much above the starting level. In all of the cases the starting point is between mid and low and the end point is low.

Tone 4, rising, as in /thaa4/: This tone is always low rising. The starting point is between mid and low. In some cases the tone dips a little before rising. The end point is mostly high but can be between mid and high. The end sometimes has a slight fall.

Tone 5, high level, as in /phaa5/: The starting point and the end point of this tone are almost the same, between high and mid. There can be some gliding up or gliding down.

Tone 6, low level, as in /thaa6/: This tone is very similar in shape to Tone 5. The starting point and the end point are between mid and low.

This study shows that the tonal characteristics of all of the tones are very similar in both age-groups and in all of the sub-districts except Maret. The two tones in Maret, tone 1 and tone 2, that do not fall have to be investigated further. Individual variation is the likely cause of the other discrepancies.

Conclusion

This study confirms that Samui Thai contains both Western Southern Thai and Eastern Southern Thai lexical items. The items from the former variety occur more frequently than those from the latter. The occurrence of lexical items that are peculiar to Samui Thai is confirmed. Standard Thai lexical items are also widely used. Variation by age in lexical usage is very clear: the younger speakers increasingly use Standard Thai lexical items in their speech. The lexical items that are losing the most ground to Standard Thai are the ones used only on Samui Island. The investigation of the tone system of Samui Thai gives quite a different picture. The tone system of Samui Thai is still intact in the speech of both the younger speakers and the older speakers in all of the seven sub-districts. This study confirms that studying just one linguistic aspect of a variety does not give a true picture of how it is transforming under the influence of a more prestigious variety. Phonological and lexical variation should always be investigated together to detect the process of ongoing change more effectively. In the case of Samui Thai, the influence of Standard Thai has initially affected the lexicon. It would be interesting to check, a few years from now, whether the tone system and the tonal characteristics of Samui Thai will be modified under the influence of Standard Thai.
Propagation of Ultra-High Energy Cosmic Rays in Extragalactic Magnetic Fields

In this paper we will discuss the problem of Ultra High Energy Cosmic Rays (UHECR) and show that the idea of a Single Source Model, established by Erlykin and Wolfendale (1997) to explain the features seen in cosmic ray energy spectra around the 10^15 eV region, can be successfully applied also at much higher energies. The propagation of UHECR (of energies higher than 10^19 eV) in extragalactic magnetic fields can no longer be described as a random walk (diffusion) process, and the transition to rectilinear propagation gives a possible explanation for the so-called Greisen-Zatzepin-Kuzmin (GZK) cut-off, which still remains an open question after almost 40 years. A transient "single source" located at a particular distance and producing UHECR for a finite time is the proposed solution.

Introduction

The phenomenon known as cosmic rays, and particularly the observed flux of particles of extremely high energies, is a perfect example of a situation where the subject of study is "one and only" in nature, i.e., we only have one set of data. Thus, in trying to explain it, one does not have to rely on the "most probable" or "average" solution. The phenomenon as we see it, here and now, could be the result of a particular chain of coincidences. If this chain is not "very improbable," it may just be the right solution. This concept was used by Erlykin and Wolfendale a few years ago [1] in the Single Source Model (SSM) of CR origin. Originally it was established to explain the shape of the so-called "knee" in the CR energy spectrum seen in many experiments over a period of almost 50 years. A careful analysis of very accurate data on Extensive Air Showers collected by different experiments, made in [2], shows the existence of sharp structures around an estimated primary CR particle energy of a few times 10^15 eV. In subsequent papers by Erlykin and Wolfendale [3], it was shown that the Single Source Model could be used to explain a number of observed CR phenomena. Here, we are going to follow the SSM idea and go further up in energy, to the very end of the cosmic ray energy spectrum. Many of the experimentally observed features in the UHECR domain (e.g., the anisotropy studies in [4]) confirm that we actually see there the vanishing Galactic component and a new Extra-Galactic (EG) one which starts to dominate above an energy of 3 × 10^18 eV. The analysis of all available data made in [5] shows that the EG component may start as a power-law with an index of about 2, and then, above about 10^19 eV, continue with an observed index of ∼3 up to the end of measurements (i.e., 10^20 eV or slightly higher). The CR sources, especially at ultra high energies, are unknown. Two general classes of potential sources have been studied in the literature: (i) astrophysical objects, such as active galactic nuclei (AGN), quasars, and colliding galaxies (see, e.g., [6]), where the usual cosmic matter constituents are accelerated to extremely high energies in so-called "bottom-up" processes; and (ii) some exotic "top-down" mechanisms, such as the decay of (super-heavy) dark matter particles, topological defects, or monopoles (see, e.g., [7]). A serious problem for "bottom-up" theories is the general isotropy of UHECR: there is no significant excess in any direction toward a potential source. In recent work, evidence has been presented that UHECR particles have a distribution of masses [8], generating obvious difficulties for "top-down" ideas.
This finding is essential to the present work: a significant fraction of UHECR particles are multiply charged (up to Z = 26 in the case of iron), which makes them more sensitive to extragalactic magnetic fields.

The "average" approach to UHECR spectrum calculations, found already in the first Greisen, Zatzepin, and Kuzmin [9,10] papers, is to assume that, because we know nothing about the sources, the production of UHECR is equally probable at every point in space and time. The UHECR spectra shown in [10], but also the frequently quoted spectrum published by the AGASA group [11], were obtained assuming a constant and uniformly distributed UHECR source power in the whole Universe. This gives a perfectly isotropic distribution of UHECR directions and a clear GZK cut-off, which is the consequence of interactions of UHE nucleons with the 3 K cosmological microwave background photons. However, in the real Universe the distribution of matter is not exactly uniform. Structures known as galaxy clusters exist, and, if the UHECR sources are astrophysical, they should follow the usual matter (galaxy) distribution. Our Galaxy is within the Virgo cluster, about 15 Mpc from its center, and it is obvious that particles of energies above 10^19 eV, if created there, should point more or less exactly to their sources. Some enhancement is actually seen, but it is statistically not very significant and, as will be discussed in this paper, far too small when compared with expectations. Assumptions about particular, non-uniform distributions of UHECR sources in extragalactic space have also been carefully studied recently, in Refs. [12,13]. The UHECR spectrum and the small and large scale correlations (anisotropies) calculated there are significantly closer to the measured cosmic ray features than in models with a uniform source distribution.

The present work goes, in some sense, a step further in this direction. A single source is certainly far from isotropy, but here we also reject the assumption of its constancy in time. This introduces an additional parameter, the dimension of time, but at the expense of requiring an essentially new solution of the general anisotropy problem, as will be shown below. For a continuous UHECR source, the very energetic particles should propagate along (nearly) straight lines, reaching the observer after a time ≈ R/c and giving an evident directional correlation with some astrophysical objects, which is not the case in practice. We will discuss here the possibility that the UHECR sources are of transient nature: that they are in an active state for some time, say 10^7 to 10^9 years (an interval long enough to cover the collision time for galaxies passing each other, the estimated time of activity of AGN, etc.), and then remain quiet. UHECR are assumed to be produced only in the active phase. The idea is that the bulk of UHECR were produced by one or a few sources located relatively nearby (on the extragalactic scale), but which are at present not active. This is a simple solution of the isotropy problem: the very energetic particles traveling rectilinearly have passed Earth already, and what we see now as the UHECR flux is only those particles which are deviated enough by extragalactic magnetic fields to be delayed, relative to the light signal, by a substantial amount of time. The only problem is to see whether such a mechanism can really work, i.e., whether the magnetic fields are strong enough to curve the trajectories of particles of energies around 10^20 eV.
Propagation of UHECR in the Intergalactic Magnetic Field

The UHECR under consideration are electrically charged, and their propagation in intergalactic space is therefore affected by the magnetic fields along their path. The intergalactic magnetic field strength is believed to be on the order of 10^-8 to 10^-9 G, and for the distances of interest of about 1-100 Mpc and particle energies above 10^18 eV, some deviations from rectilinear propagation are expected. Experimental knowledge of large-scale magnetic fields is rather scarce (see, for example, [14] and the discussions given in [15] and [16]). These fields will have both regular and random components. The former can be, in principle, a relic of distant epochs (occasionally compressed and magnified or amplified by dynamo-like mechanisms). However, at present we have no evidence of the existence of such a component, so we neglect it. The irregular component is present in intergalactic space, as it is in our Galaxy (and others). Its source can be ionized plasma emitted by galaxies and clusters of galaxies, some of which will have come from supernova remnants bursting out of the host galaxies. The escape of galactic cosmic rays into the intergalactic medium (IGM) is a special case of this "process." Insofar as the energy density of cosmic rays in the IGM coming from escape from galaxies is ∼10^-6 eV cm^-3 (obtained by integrating the extragalactic flux of cosmic rays), the corresponding magnetic energy density will give an rms field of ∼3 × 10^-9 G, assuming equipartition. Another source of extragalactic magnetic field is active galactic nuclei and other near-cataclysmic events. The magnetic disturbances evolve in time in accordance with the conventional turbulence picture, transferring energy consecutively down to smaller scales, where it is finally dissipated. There are various possibilities for the manner in which particles propagate through the IGM, but here we consider just two: the cubic domain model and the Kolmogorov turbulence model. We now examine how the particular random field structure influences UHECR propagation across large distances.

Cubic domain model for the random magnetic field

The transport of charged particles, when well-known conditions are fulfilled, can be described as a diffusion process. The diffusion itself can be thought of as the limit of the constant step random walk process, this being defined by one parameter only: the length of a single step. On the other hand, there is a limit on the large-scale random magnetic field arising from the results of Faraday rotation measurements, expressed in terms of the magnetic field coherence length λ_B, which can be treated as the distance over which the orientation of the magnetic field changes randomly. The simplest model of a chaotic magnetic field is just the cubic domain model, in which space is divided into equal cubic cells of size λ_cell; the field in each cell is equal to B, and its orientation changes randomly from cell to cell. In such a picture, because the orientation of the cubic lattice as a whole is obviously not fixed, the effective coherence length λ_B is defined precisely as

λ_B = (1/⟨B²⟩) ∫ ⟨B(r) · B(r + x n̂)⟩ dx ,   (2)

where the integration goes along a straight line over a distance much greater than any of the regular component scales of B. λ_B is not exactly equal to λ_cell, but the difference for our purposes (extragalactic UHECR propagation) is not significant.

The magnetic field coherence length in the case of UHECR cannot be used as the random walk step size for the propagation calculations: if the Larmor radius of a particle of charge Z and energy E is bigger than λ_B, then after traversing the distance λ_B the particle velocity still remembers (on average) its initial direction. To find out the random walk step length, we performed simulations of charged particles in a magnetic cubic lattice of size 0.1 Mpc with a random magnetic field of 10 nG. This cell size is comparable with the Larmor radius (∝ E/(ZB)) of protons of energy 10^18 eV (a one-line numerical check is given at the end of this subsection). The propagation coherence length λ_c is defined, by analogy with Eq. (2), as

λ_c = ∫ ρ(Δx) d(Δx) ,   (3)

where the integration is similar to that in Eq. (2). It is a function of particle charge and energy. To study this in more detail, we plot in Fig. 1 the correlation coefficient for the proton velocity direction, defined as

ρ(Δx) = ⟨ v̂ · v̂' ⟩ ,   (4)

i.e., the mean cosine of the angle between the particle velocities at two points separated by a path length Δx along the trajectory, for different energies, for protons traversing our cubic magnetic domain space (energy losses are neglected here).
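As a quick plausibility check of the comparison between the 0.1 Mpc cell size and the Larmor radius of 10^18 eV protons in a 10 nG field, one can evaluate r_L directly. This is my sketch, not the authors' code; the unit-conversion constant is the standard astrophysical one, not taken from the paper.

```python
# Larmor radius of a relativistic nucleus:
# r_L ~ 1.08 Mpc * (E / 10^18 eV) / (Z * B / nG), the standard conversion.
def larmor_radius_mpc(E_eV: float, Z: int, B_nG: float) -> float:
    return 1.08 * (E_eV / 1e18) / (Z * B_nG)

# The paper's cubic-domain setup: 10 nG cells of size 0.1 Mpc.
print(larmor_radius_mpc(1e18, 1, 10.0))   # ~0.11 Mpc: comparable to the cell size
print(larmor_radius_mpc(1e20, 1, 10.0))   # ~10.8 Mpc: spans many cells, nearly rectilinear
print(larmor_radius_mpc(5e19, 26, 10.0))  # iron at 5x10^19 eV: ~0.2 Mpc, diffusive regime
```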
A turbulent random magnetic field

A more realistic picture of the intergalactic magnetic field uses Fourier modes and their power spectrum:

B(x) = ∫ B(k) exp[ i (k·x + φ(k)) ] dk ,   (5)

where the φ(k) are random phases and 2π/L_max < k < 2π/L_min, with L_min and L_max the lower and upper limits of the magnetic field turbulence scales, respectively. In the present paper, a particular turbulent random field was realized by replacing the integration in Eq. (5) with a sum of 1000 independent Fourier components, each with a randomly chosen value of k (limited by L_min and L_max) and a random phase φ; the sum was then normalized to yield the assumed |B(x)|² (a simplified sketch of such a realization is given at the end of this subsection). In the calculations, we have used |B| = 2 × 10^-9 G, L_min = 0.01 to 0.1 Mpc, and L_max = 2 Mpc. For the UHECR transport problem the lower turbulence size limit is of no importance, and the upper limit (in the reasonable range given above) has only a minor influence on the normalization of |B|. The average value of B² here is different from the one assumed for the cubic cell model (as is the scale of its irregularities), but the propagation of charged particles for such values is similar in both models, as will be shown below. The power spectrum B²(k) is proportional to the energy density contained in mode k; for a power-law turbulence spectrum, B²(k) ∝ k^-n. For the general case of Kolmogorov turbulence, the index n is equal to 5/3. The magnetic field coherence length for this case can be calculated analytically [15] and is equal to L_max/5 for small L_min/L_max. For proton propagation in the Kolmogorov turbulent magnetic field, the correlation coefficient ρ for the velocity direction, given by Eq. (4), has been calculated; the results are shown in Fig. 2.
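A minimal one-dimensional sketch of such a discretized realization (1000 modes, random phases, power-law spectrum with n = 5/3, renormalized to the target rms field) is given below. This is my illustration under the stated assumptions, not the authors' code, and the mode-amplitude convention (amplitude proportional to the square root of the power) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MODES = 1000                     # number of Fourier components, as in the text
L_MIN, L_MAX = 0.1, 2.0            # Mpc, turbulence scale limits from the text
B_RMS = 2e-9                       # G, assumed rms field strength
N_INDEX = 5.0 / 3.0                # Kolmogorov index for B^2(k) ~ k^(-n)

k = rng.uniform(2 * np.pi / L_MAX, 2 * np.pi / L_MIN, N_MODES)  # wavenumbers
phi = rng.uniform(0.0, 2 * np.pi, N_MODES)                      # random phases
amp = k ** (-N_INDEX / 2.0)        # mode amplitude ~ sqrt(power)

# One-dimensional field sampled along a line, then renormalized to B_RMS.
xs = np.linspace(0.0, 50.0, 5000)                 # positions in Mpc
raw = np.cos(np.outer(xs, k) + phi) @ amp
B = raw * B_RMS / np.sqrt(np.mean(raw ** 2))
print(f"{np.sqrt(np.mean(B**2)):.2e} G")          # ~2e-9 G by construction
```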
Comparison of particle propagation in random magnetic field models

The propagation coherence length λ_c defined in Eq. (3) for the turbulent medium, in comparison with the one for the cubic domain model, is shown in Fig. 3. It can be seen from all the figures that the transport of charged particles in the two types of random magnetic field model should be very similar, in spite of the fact that the detailed structure of the field is very different: not only is the average magnetic field strength |B| different (10 × 10^-9 G for cubic domains and 2 × 10^-9 G for a Kolmogorov turbulent medium), but the spectrum of the field is different. The spectrum B²(k) calculated by Fourier decomposition of the generated chaotic fields in each model, and the respective magnetic field correlation coefficients, are shown in Figs. 4 and 5. For a given energy, if the observer is located at a distance bigger than λ_c, as shown in Fig. 3, the propagation is diffusive. The deviation from rectilinear propagation starts around λ_c. This can be seen in Figs. 6 and 7, where the mean distance reached by the particle as a function of time is shown, and where the distance distributions are given. For rectilinear propagation, the respective line slope is approximately unity (on a log-log plot); when diffusion starts to dominate, the slope changes from 1 to 1/2. It is seen that particles with energies of 10^18 eV diffuse, while those with energies of 10^20 eV propagate along (almost) straight lines, through distances of the order of a Gpc.

Energy loss processes

The UHECR domain is quite rich in physical processes involving energy losses. Starting with protons of relatively low energies, about 10^18 eV, e+e- pair production on the cosmic microwave background (CMB) photons starts to play a role, reaching maximal importance slightly below 10^19 eV. The main GZK process of energy loss is due to Δ resonance excitation (and its subsequent decay, dissipating energy, eventually to low energy γs) on CMB photons (a back-of-the-envelope threshold estimate is sketched at the end of this subsection). The energy losses of heavier nuclei to electron-positron pair creation are Z² stronger, but, due to the different rest mass and therefore different Lorentz factor, the respective total nucleus energy should be A times higher than that for protons. The same scaling in energy ought to be applied for Δ resonance creation (but without the Z² enhancement). This makes the GZK mechanism less important for heavy nuclei. The dominating process for nuclei is photo-disintegration on background photons. The significant rise in the fragmentation cross section just at the energies of present interest is due to the existence of the giant dipole resonance, whose excitation energy is close to 20 MeV for (almost) all interesting heavy nuclei. This is about one order of magnitude below the Δ resonance excitation energy, and thus, if only collisions with CMB photons are considered, the threshold energy for nuclear disintegration is of order A/10 times higher than the proton GZK cut-off energy. A review of the whole situation is presented in Fig. 8. To compare e+e- pair production, Δ resonance creation by nucleons, and disintegration of heavy nuclei, the cross sections have to be convoluted not only with the photon energy spectrum, but also with the inelasticity of the respective process. In Fig. 8, the inverse average length for 1% energy loss is shown. It is different from the commonly used (d ln(E)/dx)^-1, which describes the average length for losing a (1 - e^-1 ≈ 63%) fraction of the particle energy, but it is actually more illustrative, specifically for cases where cross sections change substantially with energy (by more than a decade for nuclei above 10^20 eV when the energy changes by a factor of e^-1). In any case, the difference between our λ_1% and (d ln(E)/dx)^-1 is the constant factor ≈ 0.63/0.01 by which the vertical scale in Fig. 8 should be multiplied to match the convention. The CMB is assumed to be of temperature 2.7 K, and for higher energy photons we take the spectrum obtained in [17] (the one labeled there the "best estimate" intergalactic IRB).
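For orientation, the energy scale at which the Δ-resonance channel on CMB photons opens can be estimated from two-body kinematics. This back-of-the-envelope sketch is mine; the masses and typical CMB photon energy are textbook values, not taken from the paper.

```python
# Threshold proton energy for p + gamma_CMB -> Delta(1232), head-on collision:
#   E_p = (m_Delta^2 - m_p^2) c^4 / (4 * E_gamma)
M_P = 938.272e6        # eV, proton rest energy
M_DELTA = 1232.0e6     # eV, Delta(1232) rest energy
E_GAMMA = 6.4e-4       # eV, a typical CMB photon energy at T = 2.7 K

E_threshold = (M_DELTA**2 - M_P**2) / (4.0 * E_GAMMA)
print(f"{E_threshold:.2e} eV")
# ~2.5e20 eV for a typical photon; collisions with the high-energy tail of
# the CMB pull the effective cut-off down to the familiar ~5e19 eV scale.
```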
Simulations of particle transport in both magnetic field models described above were performed assuming that the source emits protons and composite nuclei, each time into newly generated field realizations. The particle trajectory was calculated in small steps of 3 kpc (time intervals of 3 kpc/c). In each step, the field was assumed constant and the trajectory was calculated analytically, giving the position and velocity direction of the traced particle after each short time interval. Continuous energy losses were taken into account by diminishing the particle energy after each step. Abrupt losses of particle energy, due to photo-pion production and, in the case of nuclei, spallation, were included by generating in each step the actual interaction lengths for reactions with one or two nucleons released (n, p, 2n, pn, 2p separately), according to the cross sections given in Ref. [19]. If the shortest of these interaction lengths was within the spatial step length, the step was reduced to this value and the actual position of the interaction point (and the particle direction there) was calculated. In the case of a photo-pion reaction, the average energy loss due to Δ resonance decay was subtracted. For nuclei, all the secondary products were kept in memory, to be propagated along with the initial nuclei until they were eventually lost after reaching the overall energy threshold, or until the propagation time limit (3 Gpc/c for the present calculations) was reached. During propagation, particles were recorded each time they were within a spherical shell of thickness 100 kpc and radius 3, 5, 7, 10, 15, ... Mpc around the source. Their direction, energy, type (mass number), and time since emission (or since the emission of their initial progenitor) were later used to obtain the distributions of interest. The initial energy spectrum was sampled in very short intervals on a logarithmic scale and integrated, weighting events by the power-law emission spectrum (with a differential index of 2.1 for this paper).

Small scale clustering of UHECR

The UHECR, if they come from a relatively close source, are expected to be directionally correlated: their arrival directions could point to the particular source in the sky. Several attempts have been made to verify this hypothesis, but all are based on limited statistics, and their significance has been limited. We present here some results concerning the existence of small scale clusters relevant to the subject of the present paper. Our analysis is similar to the one in [18], based on all the available Northern hemisphere data on cosmic ray events of energies above 4 × 10^19 eV. The data consist of 113 events from the AGASA, Haverah Park, Yakutsk, and Volcano Ranch experiments (19 of them with energies greater than 10^20 eV). We used a technique developed in searches for correlations among particles created in high energy accelerator experiments. Factorial moments in integral form are the best tools for our purpose. Precisely, we used the so-called "star integral" method for factorial moment calculations, discussed extensively in [20] and defined as

F_k(Δ) = [ ∫ ρ_k(y_1, y_2, ..., y_k) Θ_12 Θ_13 ... Θ_1k dy_1 dy_2 ... dy_k ] / [ ∫ ρ_1(y_1) ρ_1(y_2) ... ρ_1(y_k) Θ_12 Θ_13 ... Θ_1k dy_1 dy_2 ... dy_k ] ,   (8)
where ρ_k(y_1, y_2, ..., y_k) is the k-dimensional probability density and the Θ_ij are Heaviside step functions of the argument (Δ - ⟨y_i, y_j⟩), with ⟨y_i, y_j⟩ the distance between two points, defined in our case as the angle between the directions of the UHECR events. The interpretation of factorial moments in integral form is, thanks to the Θ_ij functions, clear: the factorial moment gives the number of groups of events (doublets, triplets, etc.) in the analyzed data sample in which the relative distances within each group are smaller than Δ, normalized by the number of such groups calculated for a sample with the same marginal distribution for all the y_i variables but without any correlation among them, i.e., with ρ_k^norm(y_1, y_2, ..., y_k) = ρ_1(y_1) ρ_1(y_2) ... ρ_1(y_k). The normalization factor can be obtained using the "event mixing" method, but in general factorial moments can be used to compare the observations with any model of the background (a minimal pair-counting sketch is given at the end of this subsection). Due to the small statistics, only the first two orders could be studied with some confidence. The factorial moments are related to the integral cumulant moments K, which represent the genuine correlations of the given order present in the analyzed sample.

In Fig. 9, results concerning two-point correlations are shown. The observations are represented in the figures as the thick solid histogram. To assess the significance of the observed correlation, upper limits can be calculated exactly using the Monte Carlo method, by generating hundreds of thousands of uncorrelated "mixed event" pools and counting the fraction of samples exceeding each value of δ. The limits are shown as dotted histograms for confidence levels of 90%, 95%, 99%, and 99.9%. The analysis has been performed for two event samples. In the first data sample (labeled "low E" in the figures), all events with energy of more than 4 × 10^19 eV were used; for the second ("high E"), it was required that at least one event in the doublet or triplet had an energy greater than 10^20 eV. Such a division makes it possible to check whether the correlation indeed increases with particle energy, as one would expect. Results on third order factorial moments and cumulants are shown in Fig. 10. Concerning the doublet analysis (F_2), clustering appearing below 3-4° can be seen for both data samples. The probability that this is a pure coincidence is of the order of 1%. For triplets (F_3), the same can be said, suggesting that very close events may really exist in the data (but still at a low confidence level).

Figure 9: Second factorial moment calculated for the full event sample ("low E", left) and for pairs in which at least one event has energy greater than 10^20 eV ("high E", right). The result of the data analysis is shown by the solid histogram. Dotted lines represent the 90%, 95%, 99%, and 99.9% confidence limits, respectively.

It is known [18] that there is one very close triplet in the data. Its angular dimension is of the order of the experimental accuracy of angle determination, estimated to be a few degrees. There is a possibility that it is a real cluster correlated with a UHECR source, superimposed on all the other isotropic UHECR directions. To generalize this concept (though the low (1-5) percent confidence is too small for any radical claims), we can try to find out how strong the real correlation should be to produce the effect.
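To make the star-integral estimator concrete, here is a minimal sketch of the second-order moment F_2(Δ) computed by pair counting with an event-mixing background. It assumes, for simplicity, an isotropic reference sample (the real analysis would use exposure-weighted mixing); the directions and sample size are synthetic, not the experimental data set.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_directions(n):
    """Isotropic unit vectors (a stand-in for the 'mixed event' background)."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def pair_count(dirs, delta_deg):
    """Number of pairs with angular separation below delta_deg."""
    cos_min = np.cos(np.radians(delta_deg))
    cosines = dirs @ dirs.T
    iu = np.triu_indices(len(dirs), k=1)
    return np.count_nonzero(cosines[iu] > cos_min)

def F2(data_dirs, delta_deg, n_mix=200):
    """F2(delta): data pair count over the mean mixed-sample pair count."""
    n = len(data_dirs)
    data = pair_count(data_dirs, delta_deg)
    mixed = np.mean([pair_count(random_directions(n), delta_deg)
                     for _ in range(n_mix)])
    return data / mixed if mixed > 0 else np.nan

events = random_directions(113)   # synthetic sample of 113 directions
print(F2(events, 4.0))            # ~1 for an uncorrelated sample
```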
The hypothesis in question is that UHECR arrive in most cases from completely random directions, but there is a small probability that a single UHECR event is accompanied by another one from the same (within a few degrees) direction. The factorial moment method allows us to examine this hypothesis in a straightforward way. When constructing the "mixed event sample" to evaluate the denominator in Eq. (8), we can build it not in an exactly random way, but in the way described above, introducing a new parameter: the probability of accompaniment. With such a constructed reference sample, the full analysis can be performed, giving cumulants equal to 0 (and F_2 equal to 1) if the real data follow the "additional accompaniment" idea. In Fig. 11, we present the results of such an analysis with an additional accompaniment probability equal to 3%. This means that, on average, the "mixed sample" contains a few (∼3) close artificial doublets (there are 113 events in the sample) for the "low energy" case (all events with E > 4 × 10^19 eV). This is certainly not a big number, but one can see that the difference it makes is quite substantial.

Summarizing, the existence of close clusters of UHECR, if real (a hypothesis supported by the existing data at the 95-99% confidence level), can be interpreted as a small probability, on the level of one percent, that there exist a few UHECR sources emitting particles that reach Earth with directions pointing to the source. Because we are dealing with correlated-event statistics of order of a few in the data, nothing more can be said with reasonable confidence.

Non-interacting protons

To examine the small scale clustering (at most on the few percent level, as was shown above), and to answer the questions (i) whether it creates a nuisance for the "single UHECR source" model and (ii), more generally, what the propagation calculations in realistic intergalactic magnetic fields predict and what deviations can be expected, extensive Monte Carlo calculations are needed. For a given particle energy, the angle between the direction to the source and the observed particle velocity is called the deviation angle. This angle depends on the distance from the source and the time of particle propagation. Even for particle energies and distances for which rectilinear propagation dominates, there are particle trajectory fluctuations which allow a particle to be observed long after the time R/c and with a big deviation angle. Because we are interested in fractions of events as small as 3%, these fluctuations could be important.

Figure 11: Second order factorial moments and third order cumulants for the model with an artificially introduced correlation (accompaniment probability of 3%), calculated for all triplets ("low E") and only for those including one UHECR of energy greater than 10^20 eV ("high E"); Δφ in degrees.

To see the effects of the extragalactic magnetic field, the calculations were first performed for singly charged particles without any energy loss processes. In Table 1, the fractions of particles arriving in given delay time intervals are given, and in Table 2, average deviation angles are given for different particle energies and source distances in five ranges of delay time (with respect to the light signal). The first delay time range contains UHECR which propagate almost rectilinearly, and the last range contains the diffusive component, whose velocities are oriented completely randomly.
It can be easily seen that for singly charged particles of energies of about 5 × 10^19 eV, if the source is within ≈15 Mpc, the propagation is almost rectilinear, delay times are not bigger than 10^6 years, and mean deviation angles are less than 10°. For particles of energy greater than 10^20 eV, the mean deviation is very small even if the particles come from 50 Mpc away.

Table 1: Fractions of cosmic ray flux (non-interacting protons) arriving in given delay time intervals for different particle energies at different distances to the source.

The propagation of non-interacting particles scales with Z: to see what the situation is for iron nuclei, one has to look in Tables 1 and 2 at energies 26 times smaller (the trivial bookkeeping is sketched at the end of this subsection). An iron nucleus of energy 5 × 10^19 eV travels on average longer than 10^8 years and arrives almost isotropically even for sources as close as 5 Mpc. For a source at 15 Mpc and an energy of 10^20 eV, still no trace of anisotropy can be expected. This situation, however, can change when energy loss processes are taken into account. Particles traversing intergalactic space can interact with the matter and fields there: the longer they propagate, the bigger the energy losses. It is expected that the general effect of UHECR interactions will be to favor the shorter paths, corresponding to smaller delays and deviation angles.

Table 2: Mean deviation angle for non-interacting protons propagating from a source located at different distances, as a function of particle energy and delay time. For some entries the flux of particles is negligible, so no value can be given.

Introduction of energy loss processes

Results of calculations for the propagation of protons and iron nuclei are shown in Tables 3 and 4. From Tables 3 and 4, for about 10% of the events, the mean deviation for iron nuclei is about 40° at an energy of 5 × 10^19 eV. Going a little further up in particle energy, to 10^20 eV, approximately 20% of events have a mean deviation angle of about 20° (delays of less than 10^7 years) and the next 50% have a mean deviation of 40° and arrive not later than 10^8 years after the light signal. This is enough to see some slight enhancement of UHECR from the region of the sky where the source is (where galaxies collide? [6]), but obviously the general anisotropy constraint is still fulfilled. It is easy to achieve more or less anisotropy, because there is some freedom in the magnetic fields (they can be smaller or larger than assumed in this work). In the case of protons, however, if one wants to see them above 10^19 eV, a strong anisotropy has to be observed and the source must be active at present, so there is the general possibility of identifying the UHECR source with some astrophysical object on the sky. Concerning the small scale clustering problem, between our results and the widely discussed existence vs. absence of coincidences between UHECR directions and astrophysical objects, two solutions are possible. The first is that the clustering is a pure chance coincidence, which can be accepted at the 95 or 99% confidence level. The second possibility is that there is only one relatively close source of UHECR active "at present" (or at least only a few sources) and protons from there form the cluster(s) in question. In this case, however, the bulk of UHECR events are produced in other sources, or in one "single source" which is "at present" not active.
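The Z-scaling invoked above is simply equal-rigidity bookkeeping; a trivial sketch (mine, not the paper's) of how the proton tables are reused for nuclei:

```python
# Non-interacting propagation depends on the rigidity E/Z, so a nucleus of
# charge Z behaves like a proton of energy E/Z in the same field, as stated
# in the text.
def equivalent_proton_energy_eV(E_eV: float, Z: int) -> float:
    return E_eV / Z

print(f"{equivalent_proton_energy_eV(5e19, 26):.2e}")  # iron, 5e19 eV -> ~1.9e18 eV
print(f"{equivalent_proton_energy_eV(1e20, 26):.2e}")  # iron, 1e20 eV -> ~3.8e18 eV
```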
A review of the actual experimental situation was presented at the XXVIIIth International Cosmic Ray Conference in Tsukuba [21]. Recently, the AGASA group confirmed their findings of close doublets and triplets for energies from 10^19 eV to the very end of the spectrum, while the HiRes experiment does not see anything like this above 10^19 eV. The problem with the high-statistics monocular HiRes data is that the angular resolution is rather poor (and not symmetrical). Under such conditions, however, they give an upper limit for doublets, and it is equal to 4. The more precise HiRes stereo data set consists of only 164 events above 10^19 eV, and nothing more significant than 1σ is seen there. Both HiRes statements, in spite of being negative, do not contradict the AGASA statement at a high significance level. On the other hand (as was shown above), the significance of the AGASA clustering is in fact only at the 95-99% confidence level.

Predicted UHECR flux

The exact propagation calculations were performed for different times of source activity. A source composition of protons and iron and oxygen nuclei was assumed, and their relative proportions were adjusted to the data on the extragalactic UHECR flux [5]. A continuous background consisting of cosmic rays produced by sources identical to the Single Source, uniformly distributed in the Universe (one per 1000 Mpc³ per 10^9 years), was also assumed. Its contribution is shown in Fig. 12 by the thin solid line. This background is about 10% of the total UHECR flux and does not play a significant role here. The experimental points used in the present work are taken from the analysis of the Northern hemisphere "world data" given in Ref. [5]. The points shown in Fig. 12 represent the combined UHECR spectrum from the Haverah Park, AGASA, Volcano Ranch, Yakutsk, and Fly's Eye experiments, after subtraction of the galactic component (still dominating at about 10^18 eV, but negligible above 10^19 eV). Agreement between the different measured data sets was achieved by adjusting the individual energy estimation accuracies and overall normalizations. It was found to be satisfactory, thus allowing the authors to give an extragalactic UHECR energy spectrum free of particular instrumental biases. The very recent discussion of the UHECR spectrum [22], and the discrepancies between the last reported AGASA spectrum, which contains 11 events of energies above 10^20 eV, and the HiRes (mono and stereo) spectra (compatible with the Fly's Eye spectrum), with only 2 such cases at comparable exposures, show that there is probably a systematic bias in energy determination of the order of 30% in one of the experiments. The method of combining energy spectra applied in Ref. [5] takes such uncertainties into account. Thus, the experimental points in Fig. 12 represent well the actual situation in the UHE region. The question of whether the GZK cut-off exists (as claimed by the AGASA group) or not (according to HiRes) is still open, but the data seem to be as shown in the figure.

The number of parameters adjusted to the seven points representing the extragalactic UHECR flux at first sight seems to be unreasonably big: normalization, composition (2 parameters), source spectral index, T_0, source activity duration time, distance to the source, and background normalization (the density of the sources averaged over large space and time intervals), all together eight of them. This is a lot if all of the parameters are uncorrelated.
But they are in fact strongly correlated, and the freedom of choice does not represent the real number of degrees of freedom of the fitting procedure. It is clear that the "best" spectrum shown in Fig. 12 does not match the experimental points exactly. Additionally, some of these parameters are fixed at reasonable values (such as the spectral index or the background normalization). In general, the main purpose of the present paper is not to fit the data perfectly with a single line; the uncertainties of many astrophysical parameters of extragalactic space (magnetic field structure, matter and radiation densities, etc.) and the limited statistics of registered UHECR events do not permit the derivation of any strong physical conclusions from this kind of fit. We want, rather, to show only that general agreement (or, more precisely, a lack of experimental contradiction) can be achieved within the proposed UHECR origin model. Thus the particular values of T_0, the p:O:Fe composition, and the distance to the source taken to draw the lines in Fig. 12 should be treated not as the main result of this work, but rather as an example, showing that with values like these quite satisfactory agreement between the "single UHECR source model" and the measured extragalactic UHECR spectrum can be obtained. Our fit to the UHECR spectrum was found with the Single Source inactive for the last 3 × 10^8 years. The composition (p:O:Fe about 10:5:3) is not extraordinary when compared with the one derived from experimental information in the energy region about three orders of magnitude lower, i.e., at "the knee." It is important to mention that for such a "light" source composition, the observed UHECR flux above 3 × 10^19 eV is quite heavy, and above 10^20 eV it is completely iron-dominated. A more detailed discussion of the particular parameter values obtained is given in [16].

Conclusions

Extensive Monte Carlo calculations for the propagation of UHECR in extragalactic magnetic fields have been performed. The directional small scale clustering in the available data gives a 5% limit on its chance origin. The data are consistent with the hypothesis that the overall isotropic UHECR direction distribution is enhanced by an additional clustering probability at the level of no more than a few (∼3) percent of observed UHECR events. Propagation calculations show that such an enhancement can be related to primary protons from only the source (or sources) active at present. It is possible that the bulk of the UHECR are created in a Single Source at a distance of 15 Mpc which was active about 3 × 10^8 years ago. This model is consistent with the large scale anisotropy data (most of the flux is composed of isotropized iron and heavy nuclei), as well as with the measured flux of extragalactic cosmic rays of energies above 10^18 eV. No new physics concerning the GZK cut-off mechanism is needed.
The Siebeck-Marden-Northshield Theorem and the Real Roots of the Symbolic Cubic Equation

The isolation intervals of the real roots of the symbolic monic cubic polynomial x^3 + ax^2 + bx + c are determined, in terms of the coefficients of the polynomial, by solving the Siebeck-Marden-Northshield triangle: the equilateral triangle that projects onto the three real roots of the cubic polynomial and whose inscribed circle projects onto an interval with endpoints equal to the stationary points of the polynomial.

Introduction

The elegant theorem of Siebeck and Marden (often referred to as Marden's theorem) [1]-[5] geometrically relates, on one hand, the complex non-collinear roots z_1, z_2, and z_3 of a cubic polynomial with complex coefficients to a triangle in the complex plane whose vertices are z_1, z_2, and z_3, and, on the other, the critical points of the polynomial to the foci of the inellipse of this triangle. This ellipse is unique and is called the Steiner inellipse [6]. It is inscribed in the triangle in such a way that it is tangent to the sides of the triangle at their midpoints. The real version of the Siebeck-Marden theorem, as given by Northshield [7], states that the three real roots (not all of which are equal) of a cubic polynomial are projections of the vertices of some equilateral triangle in the plane. However, it is the inscribed circle of the equilateral triangle that projects onto an interval whose endpoints are the stationary points of the polynomial. The goal of this work is to consider a cubic equation with real coefficients and, using the Siebeck-Marden-Northshield theorem [7], solve the equilateral triangle and find the isolation intervals of the real roots of the symbolic monic cubic polynomial x^3 + ax^2 + bx + c.

Analysis

Construction: Any three real numbers, not all equal, are the projections of the vertices of some equilateral triangle in the plane. For the monic cubic polynomial p(x) = x^3 + ax^2 + bx + c with three real roots x_1, x_2, and x_3, not all equal, the vertices of the equilateral triangle (points P, Q, and R on Figure 1), with coordinates (x_1, (x_2 - x_3)/√3), (x_2, (x_3 - x_1)/√3), and (x_3, (x_1 - x_2)/√3), respectively, project onto the roots [7]. This is the Siebeck-Marden-Northshield triangle. The inscribed circle of this triangle projects onto an interval with endpoints equal to the critical points µ_{1,2} = -a/3 ± (1/3)√(a^2 - 3b) of the cubic polynomial, i.e., the roots of the derivative p'(x) = 3x^2 + 2ax + b of p(x) [7]. The centroid of the triangle is at φ = -a/3 on the abscissa: the first coordinate projection of the inflection point of p(x), i.e., the root of the second derivative p''(x) = 6x + 2a. Each side of the triangle is equal to α = (√12/3)√(a^2 - 3b). The radius of the inscribed circle is r = (1/3)√(a^2 - 3b). The radius of the circumscribed circle is 2r = (2/3)√(a^2 - 3b). (A numerical illustration of this construction is given after Lemma 3 below.)

Lemma 1. The monic cubic polynomial p(x) = x^3 + ax^2 + bx + c with b > a^2/3 has only one real root.

Proof. The discriminant of the monic cubic polynomial is

Δ_3 = -27c^2 + (18ab - 4a^3)c + a^2 b^2 - 4b^3.

It is quadratic in c and the discriminant of this quadratic is

Δ_2 = 16(a^2 - 3b)^3.

As b > a^2/3, one has Δ_2 < 0 for all a and thus Δ_3 < 0 for all a and c. Hence, the cubic polynomial p(x) = x^3 + ax^2 + bx + c with b > a^2/3 has only one real root (and a pair of complex conjugate roots).

Figure 1: The Siebeck-Marden-Northshield triangle. When the cubic polynomial x^3 + ax^2 + bx + c has three real roots x_{1,2,3} which are not all equal, they can be obtained as projections of the vertices of an equilateral triangle (PQR) with coordinates (x_1, (x_2 - x_3)/√3), (x_2, (x_3 - x_1)/√3), and (x_3, (x_1 - x_2)/√3), respectively [7].
This can be seen in an easier way: the discriminant of the derivative p'(x) = 3x^2 + 2ax + b is 4(a^2 - 3b); hence no critical points of p(x) exist when b > a^2/3 and thus p(x) has only one real root. Note that the existence of critical points of p(x), warranted by b ≤ a^2/3, does not warrant three real roots. The following Lemma addresses this.

Lemma 2. The monic cubic polynomial p(x) = x^3 + ax^2 + bx + c with b ≤ a^2/3 has three real roots, provided that c ∈ [c_2, c_1], where c_{1,2} are the roots of the quadratic equation

-27c^2 + (18ab - 4a^3)c + a^2 b^2 - 4b^3 = 0,   (3)

namely:

c_1 = c_0 + (2/27)√[(a^2 - 3b)^3],   (4)
c_2 = c_0 - (2/27)√[(a^2 - 3b)^3],   (5)

where c_0(a, b) = -(2/27)a^3 + (1/3)ab.

Proof. The discriminant Δ_3 = -27c^2 + (18ab - 4a^3)c + a^2 b^2 - 4b^3 of the monic cubic polynomial x^3 + ax^2 + bx + c is positive between the roots of the equation Δ_3 = 0, which is quadratic in c. This is exactly equation (3) and its roots are the ones given in (4) and (5).

Lemma 3. The maximum distance between the three real roots of the monic cubic polynomial with b ≤ a^2/3 and c ∈ [c_2, c_1] is α = (√12/3)√(a^2 - 3b). In this case, one side of the Siebeck-Marden-Northshield triangle is parallel to the abscissa. This is achieved in the case of the "balanced" cubic: the one with c = c_0 = -2a^3/27 + ab/3. For any other c such that c_2 ≤ c ≤ c_1, the three real roots of the cubic lie within a shorter interval.

Figure 2: Presented here are four cubics and their Siebeck-Marden-Northshield triangles: the "balanced" cubic with c = c_0 (second from top), whose roots are ν_{1,3}, equidistant from φ = -a/3, and ν_2 = φ, and whose triangle P_0 Q_0 R_0 has the side P_0 R_0 parallel to the abscissa; the two "extreme" cubics with c = c_{1,2} (top and bottom), having double real roots µ_{1,2} and a simple root ξ_{1,2}, whose triangles P_{1,2} Q_{1,2} R_{1,2} have a side perpendicular to the abscissa and a vertex on the abscissa; and the general cubic (second from bottom) x^3 + ax^2 + bx + c with distinct real roots x_3 < x_2 < x_1 and triangle PQR. Increasing c rotates the Siebeck-Marden-Northshield triangle counterclockwise about its centroid; decreasing c results in its clockwise rotation. The isolation intervals of the roots of the general cubic can be immediately determined from the graph.

Proof. Given that the root x_2 = ν_2 = φ = -a/3 of the "balanced" cubic equation x^3 + ax^2 + bx - 2a^3/27 + ab/3 = 0 is the midpoint between its other two roots x_{1,3} = ν_{1,3} = -a/3 ± √(a^2/3 - b), one has x_1 - x_2 (√3 times the second coordinate of point R) equal to x_2 - x_3 (√3 times the second coordinate of point P); see Figure 2. Hence P and R are both above the abscissa and are equidistant from it. Thus PR is parallel to the abscissa. Hence, the distance between x_3 and x_1 is exactly equal to the length α = (√12/3)√(a^2 - 3b) of the side PR. In any other case of three real roots (c ∈ [c_2, c_1] and c ≠ c_0), the side PR will not be parallel to the abscissa and hence the projection of PR onto the abscissa will be shorter than the length of PR; that is, the three real roots of the cubic polynomial will lie in an interval of length smaller than α = (√12/3)√(a^2 - 3b).

Note that the Siebeck-Marden-Northshield triangle rotates counter-clockwise when the free term c increases, and clockwise otherwise. The triangle cannot be rotated counter-clockwise or clockwise further than the triangles of the two "extreme" cubics (with c = c_{1,2}), as three real roots exist, and hence the Siebeck-Marden-Northshield triangle itself exists, only for c ∈ [c_2, c_1].
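As a numerical sanity check of the construction (my sketch, not part of the original paper; numpy is assumed), one can build the triangle from the roots of a sample cubic and verify that it is equilateral with side α and that its incircle projects exactly onto the interval between the critical points:

```python
import numpy as np

a, b, c = -3.0, -4.0, 2.0                             # illustrative coefficients
x3, x2, x1 = np.sort(np.roots([1.0, a, b, c]).real)   # three real roots here

# Vertices of the Siebeck-Marden-Northshield triangle.
P = (x1, (x2 - x3) / np.sqrt(3))
Q = (x2, (x3 - x1) / np.sqrt(3))
R = (x3, (x1 - x2) / np.sqrt(3))

dist = lambda U, V: np.hypot(U[0] - V[0], U[1] - V[1])
alpha = (np.sqrt(12) / 3) * np.sqrt(a * a - 3 * b)    # predicted side length
r = np.sqrt(a * a - 3 * b) / 3                        # predicted inradius
phi = -a / 3                                          # centroid abscissa

# The triangle is equilateral with side alpha...
print(np.allclose([dist(P, Q), dist(Q, R), dist(R, P)], alpha))  # True
# ...and its incircle, centred at (phi, 0) with radius r, projects onto
# [phi - r, phi + r]: exactly the roots mu_{1,2} of p'(x) = 3x^2 + 2ax + b.
print(np.allclose(np.polyval([3.0, 2 * a, b], [phi - r, phi + r]), 0.0))  # True
```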
Also observe a completely geometric proof that the projection of the incircle of the Siebeck-Marden-Northshield triangle coincides exactly with the interval given by the two critical points of the cubic: the incircle is invariant when varying the free term c from c_2 to c_1, and this variation moves the graph up from the position of a local maximum tangent to the abscissa, the "extreme" cubic with c = c_2 (the lowermost curve on Figure 2), to a local minimum tangent to the abscissa, the "extreme" cubic with c = c_1 (the uppermost curve on Figure 2), whose triangles are P_{2,1} Q_{2,1} R_{2,1}, respectively.

Theorem 1. The monic cubic polynomial x^3 + ax^2 + bx + c with b < a^2/3 and c ∈ [c_2, c_1] has three real roots x_3 ≤ x_2 ≤ x_1, at least two of which are different and any two of which are not farther apart than (√12/3)√(a^2 - 3b), with the following isolation intervals:

(I) For c_2 ≤ c ≤ c_0: x_3 ∈ [ν_3, µ_2], x_2 ∈ [µ_2, φ], and x_1 ∈ [ν_1, ξ_2].
(II) For c_0 ≤ c ≤ c_1: x_3 ∈ [ξ_1, ν_3], x_2 ∈ [φ, µ_1], and x_1 ∈ [µ_1, ν_1].

Here: (i) c_{1,2} are given by (4) and (5); (ii) ν_{1,2,3} are the roots of the "balanced" cubic equation x^3 + ax^2 + bx + c_0 = 0; (iii) µ_{1,2} and ξ_{1,2} are, respectively, the double and simple roots of the "extreme" cubics x^3 + ax^2 + bx + c_{1,2}.

Proof. Due to Lemma 2, for c ∈ [c_2, c_1] the cubic polynomial x^3 + ax^2 + bx + c will have three real roots. The two "extreme" cases, the cubics x^3 + ax^2 + bx + c_1 and x^3 + ax^2 + bx + c_2, will each have a double root (as Δ_3 vanishes for c = c_{1,2}) and a simple root. Otherwise, for c_2 < c < c_1, the cubic polynomial will have three distinct roots. If µ_{1,2} is the double root of the "extreme" cubic x^3 + ax^2 + bx + c_{1,2} and ξ_{1,2} the corresponding simple root, then, when c = c_{1,2}, one has (due to the Viète formulae): 2µ_i + ξ_i = -a, µ_i^2 + 2µ_i ξ_i = b, and µ_i^2 ξ_i = -c (for i = 1, 2). Expressing ξ_i = -a - 2µ_i from the first and substituting into the second yields -3µ_i^2 - 2aµ_i - b = 0; that is, the double roots µ_{1,2} of the "extreme" cubics x^3 + ax^2 + bx + c_{1,2} are the roots of the quadratic equation 3x^2 + 2ax + b = 0, i.e., µ_{1,2} = -a/3 ± r = -a/3 ± (1/3)√(a^2 - 3b). Hence one finds: ξ_{1,2} = -a - 2µ_{1,2} = -a/3 ∓ 2r = -a/3 ∓ (2/3)√(a^2 - 3b). Due to Lemma 3, the biggest distance between the roots of the cubic will be α = (√12/3)√(a^2 - 3b). The roots of the "balanced" cubic equation x^3 + ax^2 + bx - 2a^3/27 + ab/3 = 0 (see the proof of Lemma 3) are symmetric with respect to the centre of the inscribed circle: ν_3 = -a/3 - √(a^2/3 - b), ν_2 = φ = -a/3, and ν_1 = -a/3 + √(a^2/3 - b). The "balanced" equation has triangle P_0 Q_0 R_0 and the side P_0 R_0 is parallel to the abscissa (Figure 2). When c = c_1 > c_0, the Siebeck-Marden-Northshield triangle is P_1 Q_1 R_1 and its side P_1 Q_1 is perpendicular to the abscissa. Hence the roots x_2 and x_1 coalesce into the double root µ_1. The vertex R_1 is on the abscissa at the smallest root ξ_1 (Figure 2). When c = c_2 < c_0, the Siebeck-Marden-Northshield triangle is P_2 Q_2 R_2 and its side R_2 Q_2 is perpendicular to the abscissa. The roots x_3 and x_2 coalesce into the double root µ_2, while the biggest root x_1 is equal to ξ_2, as the vertex P_2 is on the abscissa at ξ_2 (Figure 2). The isolation intervals of the roots of the cubic polynomial are then easily read geometrically; see Figure 2. The lengths of the isolation intervals of the roots are as follows: for the smallest root x_3, the length is µ_2 - ν_3 = [(√3 - 1)/3]√(a^2 - 3b); for the middle root x_2, one has φ - µ_2 = (1/3)√(a^2 - 3b); and for the largest root x_1, it is ξ_2 - ν_1 = [(2 - √3)/3]√(a^2 - 3b).

Theorem 2. The monic cubic polynomial x^3 + ax^2 + bx + c with b < a^2/3 and:
(I) c < c_2, has only one real root: x_1 > ξ_2 = -a/3 + (2/3)√(a^2 - 3b);
(II) c > c_1, has only one real root: x_1 < ξ_1 = -a/3 - (2/3)√(a^2 - 3b).

Proof. Given on Figure 3 are the two "extreme" cubics, with c = c_1 (second from top) and with c = c_2 (second from bottom). Their corresponding triangles are P_1 Q_1 R_1 and P_2 Q_2 R_2, respectively. Each of these cubics has a double root µ_{1,2} and a simple root ξ_{1,2}, respectively.
Cubics with $c$ such that $c_2 < c < c_1$ are between those two and they are the only ones with three distinct real roots. When $c > c_1$ (uppermost cubic), there is a pair of complex conjugate roots and a single real root $x_1 < \xi_1 = -a/3 - (2/3)\sqrt{a^2 - 3b}$. When $c < c_2$ (lowermost cubic), there is a pair of complex conjugate roots and a single real root $x_1 > \xi_2 = -a/3 + (2/3)\sqrt{a^2 - 3b}$. The isolation intervals of the single real root for either of the two latter cubics can be found by the determination of the lower (respectively, upper) root bound of the cubic. As polynomial upper root bound, one can take one of the many existing root bounds. For example, it could be the bigger of 1 and the sum of the absolute values of all negative coefficients [8]. Or one can consider the bound [9]: $1 + \sqrt[k]{H}$, where $k = 1$ if $a < 0$, $k = 2$ if $a > 0$ and $b < 0$, and $k = 3$ if $a > 0$ and $b > 0$ and $c < 0$ (if $a$, $b$, and $c$ are all positive, the upper root bound is zero). $H$ is the biggest absolute value of all negative coefficients in $x^3 + ax^2 + bx + c$. The lower root bound is the negative of the upper root bound of $-x^3 + ax^2 - bx + c$.

When $b < a^2/3$ and: (I) $c < c_2$, the cubic has only one real root, $x_1 > \xi_2$; (II) $c > c_1$, the cubic has only one real root, $x_1 < \xi_1$. When $b = a^2/3$ and: (I) $c < (1/27)a^3$, the cubic has only one real root; (II) $c = (1/27)a^3$, the cubic has a triple real root; (III) $c > (1/27)a^3$, the cubic has only one real root. The following Theorem gives these roots explicitly.

Theorem 3. The monic cubic polynomial $p(x) = x^3 + ax^2 + bx + c$, for which $b = a^2/3$ and:

(I) $c < (1/27)a^3$, has only one real root: $x_1 = -a/3 + \sqrt[3]{a^3/27 - c} > -a/3$;

(II) $c = (1/27)a^3$, has a triple real root: $x_1 = x_2 = x_3 = -a/3$;

(III) $c > (1/27)a^3$, has only one real root: $x_1 = -a/3 + \sqrt[3]{a^3/27 - c} < -a/3$.

Proof. Shown on Figure 4 is the special case of $b = a^2/3$. One immediately gets that $c_1 = c_2 = a^3/27$ in this case. The only cubic with three real roots is the one with $c = a^3/27$. This is the cubic $x^3 + ax^2 + (a^2/3)x + a^3/27 = (x + a/3)^3$ (middle curve). Clearly, this cubic has a triple real root $x_1 = x_2 = x_3 = -a/3$. If one increases $c$ above $a^3/27$ (top cubic), there is a pair of complex conjugate roots and a single root $x_1 < -a/3$. If one decreases $c$ below $a^3/27$ (bottom cubic), there is a pair of complex conjugate roots and a single root $x_1 > -a/3$. The single real root for either of the two latter cubics can be immediately found by completing the cube: $x_1 = -a/3 + \sqrt[3]{a^3/27 - c}$.

Proof. Re-write the cubic equation $x^3 + ax^2 + bx + c = 0$ as $x^3 + ax^2 = -bx - c$. Such a "split" of polynomial equations of different degrees has been proposed and studied in [10,11,12]. The rest of the proof is graphic; see the captions of Figures 5-8 for the four cases (I)-(IV), respectively. When $a \ge 0$ and $c \le 0$, the isolation interval of the single root $x_1$ is: $0 \le x_1 \le -c/b$. When $a \ge 0$ and $c > 0$, the isolation interval of the single root $x_1$ is: $\min\{-a, -c/b\} \le x_1 \le \max\{-a, -c/b\}$. When $a < 0$ and $c < 0$, the isolation interval of the single root $x_1$ is: $\min\{-a, -c/b\} \le x_1 \le \max\{-a, -c/b\}$. When $a < 0$ and $c \ge 0$, the isolation interval of the single root $x_1$ is: $-c/b \le x_1 \le 0$.

(b) For any given $a$, the coefficient $b$ of the linear term of $x^3 + ax^2 + bx + c$ determines the radius $r = (1/3)\sqrt{a^2 - 3b}$ of the inscribed circle. The circumscribed circle of the equilateral triangle has radius $2r = (2/3)\sqrt{a^2 - 3b}$. If a cubic polynomial has two stationary points, the distance between them is always $2r = (2/3)\sqrt{a^2 - 3b}$. The inflection point of the graph of $x^3 + ax^2 + bx + c$ is always the midpoint ($-a/3$) between the stationary points of the cubic polynomial.
Hence, the analysis of the cubic polynomial $x^3 + ax^2 + bx + c$ should start with what the value of $b$, relative to $a^2/3$, is.

(I) If $b < a^2/3$ and:

(i) $c_2 \le c \le c_0$, then the polynomial $x^3 + ax^2 + bx + c$ has three real roots with the following isolation intervals: $x_3 \in [\nu_3, \mu_2]$, $x_2 \in [\mu_2, \phi]$, and $x_1 \in [\nu_1, \xi_2]$ (Figure 2).

(ii) $c_0 \le c \le c_1$, then the polynomial $x^3 + ax^2 + bx + c$ has three real roots with the following isolation intervals: $x_3 \in [\xi_1, \nu_3]$, $x_2 \in [\phi, \mu_1]$, and $x_1 \in [\mu_1, \nu_1]$ (Figure 2).

Roles of the Coefficients and Root Isolation Intervals - Summary and Application of the Analysis

In the above, $c_{1,2} = c_0 \pm (2/27)\sqrt{(a^2 - 3b)^3}$, with $c_0 = -2a^3/27 + ab/3$, are the values of $c$ for which, for any $a$ and $b < a^2/3$, the discriminant $\Delta_3$ of the cubic polynomial $x^3 + ax^2 + bx + c$ is zero ($\Delta_3$ is positive for $c$ between $c_2$ and $c_1$). Namely, these are the roots of the quadratic equation (3): $-27c^2 + (18ab - 4a^3)c + a^2b^2 - 4b^3 = 0$.

Also in the above, $\nu_3 = -a/3 - \sqrt{a^2/3 - b}$, $\nu_2 = \phi = -a/3$, and $\nu_1 = -a/3 + \sqrt{a^2/3 - b}$ are the three real roots of the "balanced" cubic polynomial $x^3 + ax^2 + bx + c_0$ (Figure 2). The roots of the "extreme" cubic $x^3 + ax^2 + bx + c_1$ are the double root $\mu_1 = -a/3 + (\sqrt{3}/3)\sqrt{a^2/3 - b}$ and the simple root $\xi_1 = -a - 2\mu_1 = -a/3 - 2r = -a/3 - (2/3)\sqrt{a^2 - 3b}$. Likewise, the roots of the "extreme" cubic $x^3 + ax^2 + bx + c_2$ are the double root $\mu_2 = -a/3 - (\sqrt{3}/3)\sqrt{a^2/3 - b}$ and the simple root $\xi_2 = -a - 2\mu_2 = -a/3 + 2r = -a/3 + (2/3)\sqrt{a^2 - 3b}$ (Figure 2 and Figure 3).

The biggest distance between any two of the three real roots of the cubic equation $x^3 + ax^2 + bx + c = 0$ is $\alpha = (\sqrt{12}/3)\sqrt{a^2 - 3b}$, achieved for the roots of the "balanced" cubic equation $x^3 + ax^2 + bx + c_0 = 0$ (Figure 2). For any other cubic equation with $c_2 \le c \le c_1$, the three real roots lie within a shorter interval, of length between $3r = \sqrt{a^2 - 3b}$ and $\alpha$ (Figure 2).

(iii) $c < c_2$, then the polynomial $x^3 + ax^2 + bx + c$ has only one real root, $x_1 > \xi_2$ (Figure 3). The root $x_1$ can be bounded from above by a polynomial root bound.

(iv) $c > c_1$, then the polynomial $x^3 + ax^2 + bx + c$ has only one real root, $x_1 < \xi_1$ (Figure 3). The root $x_1$ can be bounded from below by a polynomial root bound.

(III) If $b > a^2/3$, the discriminant of the cubic polynomial is negative and thus $x^3 + ax^2 + bx + c$ has one real root $x_1$ and a pair of complex conjugate roots. The isolation interval of $x_1$ depends on the signs of $a$ and $c$ and is as given in the four cases (I)-(IV) above.

(c) The coefficient $c$ of $x^3 + ax^2 + bx + c$ rotates the equilateral triangle (which exists if $b < a^2/3$) that projects onto the roots $x_3 \le x_2 \le x_1$ (at least two of which are different) of the cubic polynomial. The vertices $P$, $Q$, and $R$ of the triangle are points of coordinates $(x_1, (x_2 - x_3)/\sqrt{3})$, $(x_2, (x_3 - x_1)/\sqrt{3})$, and $(x_3, (x_1 - x_2)/\sqrt{3})$, respectively. Point $Q$ is always below the abscissa and points $P$ and $R$ always above it. When $c = c_0 = -2a^3/27 + ab/3$, the side $PR$ is parallel to the abscissa. This corresponds to the "balanced" cubic equation $x^3 + ax^2 + bx - 2a^3/27 + ab/3 = 0$, the roots of which are symmetric with respect to the centre of the inscribed circle: $\nu_3 = -a/3 - \sqrt{a^2/3 - b}$, $\nu_2 = \phi = -a/3$, and $\nu_1 = -a/3 + \sqrt{a^2/3 - b}$. The "balanced" equation has triangle $P_0Q_0R_0$ (Figure 2). When $c$ increases from $c_0$ towards $c_1 > c_0$, the equilateral triangle $PQR$ rotates counterclockwise around its centre from the position of triangle $P_0Q_0R_0$ of the "balanced" equation. When $c = c_1$, the roots $x_2$ and $x_1$ coalesce into the double root $\mu_1$, while the smallest root $x_3$ becomes equal to $\xi_1 = -a - 2\mu_1 = -a/3 - 2r = -a/3 - (2/3)\sqrt{a^2 - 3b}$. The triangle in this case is $P_1Q_1R_1$ and its side $P_1Q_1$ is perpendicular to the abscissa. The vertex $R_1$ is on the abscissa.
The triangle cannot be rotated further counterclockwise as, when c ą c 1 , the polynomial x 3`a x 2`b x`c has only one real root ( Figure 2). When c decreases from c 0 towards c 2 ă c 0 , the equilateral triangle P QR rotates clockwise around its centre from the position of triangle P 0 Q 0 R 0 of the "balanced" equation. When c " c 2 , the roots x 3 and x 2 coalesce into the double root µ 2 , while the biggest root x 1 becomes equal to ξ 2 "´a´2µ 2 "´a{3`2r " a{3`p2{3q ? a 2´3 b. The triangle in this case is P 2 Q 2 R 2 and its side R 2 Q 2 is perpendicular to the abscissa. The vertex P 2 is on the abscissa. The triangle cannot be rotated further clockwise as, when c ă c 2 , the polynomial x 3`a x 2`b x`c has only one real root ( Figure 2). Examples Each possible case -for each Theorem (1 to 4, with the relevant subsection of the Theorem given in brackets in Roman numerals) -is illustrated with an example. The roots of the cubics in these examples are found numerically with Maple 2021. There is only one real root x 1 and its isolation interval is 0 ď x 1 ď´c{b, that is 0 ď x 1 ď 1.5. The roots are: x 1 " 0.844 and x 2,3 "´0.922˘1.645i.
2021-07-06T01:15:48.787Z
2021-07-05T00:00:00.000
{ "year": 2021, "sha1": "07419882615ddc6c6c47ac973ebb2e669288b690", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00025-022-01667-8.pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "0ee34fec0fa04e5d6da2cb3c4d4eb5d72da2aa85", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
246019716
pes2o/s2orc
v3-fos-license
Mitofusin-2 Restrains Hepatic Stellate Cells' Proliferation via PI3K/Akt Signaling Pathway and Inhibits Liver Fibrosis in Rats

The mitochondrial GTPase mitofusin-2 (MFN2) gene can suppress the cell cycle and regulate cell proliferation in a number of cell types. However, its function in hepatic fibrosis remains largely unexplored. We attempted to understand the mechanism of MFN2 in hepatic stellate cell (HSC) proliferation and the development of hepatic fibrosis. Rat HSC-T6 HSC were cultured and transfected by adenovirus- (Ad-) Mfn2 or its negative control (NC) vector (Ad-green fluorescent protein (GFP)); a rat liver cirrhosis model was established via subcutaneous injection with carbon tetrachloride (CCl4). Seventy-two rats were randomly divided into four groups: CCl4, Mfn2, GFP, and NC. Ad-Mfn2 or Ad-GFP was transfected into the circulation via intravenous injection at day 1, 14, 28, 42, or 56 after the first injection of CCl4 in the Mfn2/GFP groups. Biomarkers related to HSC proliferation and the development of hepatic fibrosis were detected using western blotting, hematoxylin-eosin and Masson staining, and immunohistochemistry. In vitro, Mfn2 interfered specifically with the platelet-derived growth factor- (PDGF-) induced signaling pathway (phosphatidylinositol 3-kinase- (PI3K-) AKT), inhibiting HSC-T6 cell activation and proliferation. During the process of hepatic fibrosis in vivo, extracellular collagen deposition and the expression of fibrosis-related proteins increased progressively, while Mfn2 expression decreased gradually. Upregulating Mfn2 expression at the early stage of fibrosis impeded the process, triggered the downregulation of type I collagen, and antagonized the formation of factors associated with liver fibrosis. Mfn2 suppresses HSC proliferation and activation and exhibits antifibrotic potential in early-stage hepatic fibrosis. Therefore, it may represent a significant therapeutic target for eradicating hepatic fibrosis.

Introduction

Hepatic fibrosis, characterized by necrosis and compensatory proliferation of liver cells as well as abnormal accretion of fibrous tissue, is the critical pathological feature of various chronic liver diseases and the necessary intermediate link in the occurrence of liver cirrhosis [1,2]. The activation of hepatic stellate cells (HSC) is the cytological basis for the formation of hepatic fibrosis; in their normal state, HSC are quiescent [3]. Activated HSC synthesize large amounts of extracellular matrices (ECM); the imbalance between ECM secretion and degradation leads to collagen deposition in the liver. Further research has indicated that activated HSC release cytokines, including transforming growth factor beta (TGF-β) and platelet-derived growth factor (PDGF), via an autocrine mechanism, resulting in sustained activation of HSC and the development of hepatic fibrosis [3][4][5][6]. Accordingly, HSC have become a focal target in studies on hepatic fibrosis. Several researchers have attempted to inhibit HSC activation and proliferation by reducing the cytokines required during the process or by interfering with the signal transduction [7][8][9][10]. However, experiments inhibiting cell activation directly or inducing apoptosis are seldom reported. Mitochondria are multifunctional organelles highly related to the functional state of cells and play an important role in the cell cycle, metabolism, proliferation, and apoptosis.
The mitochondrial GTPase mitofusin-2 (MFN2) gene (aka the hyperplasia suppressor gene) was originally identified in vascular smooth muscle cells from spontaneously hypertensive rats by Chen et al. [11]. Located on the outer mitochondrial membrane, MFN2 regulates mitochondrial morphology and function and plays a crucial role in mitochondrial fusion and mitochondria-mediated apoptosis [12][13][14]. Low expression of intracellular MFN2 is a necessary condition for cells entering the proliferative phase [15]. In addition, MFN2 performs proapoptotic and antiproliferative functions in various cell lines, including mammary, cervical, colon, hepatocellular, and lung cancer cells [16][17][18][19]. Our previous work suggested that MFN2 had a negative regulatory effect on HSC proliferation, but the exact mechanism remained unclear. PDGF, which is the strongest mitogen for HSC known to date, regulates cell proliferation and division through phosphorylation by binding to the corresponding receptors on the cell membrane [5,20,21]. Phosphorylation of PI3K (phosphatidylinositol 3-kinase) plays a critical role in HSC activation and mitosis; specific inhibitors of PI3K can restrict PDGF-induced proliferation [22]. MFN2 suppresses cell proliferation by inhibiting the PI3K-AKT signaling pathway [23]. However, the correlation between MFN2 and PI3K-AKT signaling in hepatic fibrosis remains largely unexplored. We hypothesized that MFN2 plays a role in antiproliferation via the PI3K-AKT signaling pathway during the process of hepatic fibrosis. Here, we used a recombinant adenovirus (Ad) vector for transfecting Mfn2 into HSC-T6 cells, a rat HSC line, to evaluate the effect of Mfn2 on proliferation. We also investigated the mechanism of Mfn2-regulated antiproliferation effects on HSC-T6 cells in vitro. Furthermore, Wistar rats were transfected to reveal the role of Mfn2 in hepatic fibrosis. The aim of this article is to study the antifibrotic potential of Mfn2, as well as its role in the cell cycle of HSC, which is seldom reported in the existing literature. Mfn2 probably provides new therapeutic methods for hepatic fibrosis in the near future.

Materials and Methods

2.1. Cell Lines, Cell Culture, and Treatment. HSC-T6 HSC were obtained from the Chinese Academy of Science Center for Excellence in Molecular Cell Science. The cells were cultured in growth medium consisting of Dulbecco's modified Eagle's medium (DMEM; Gibco Life Technologies, Carlsbad, CA, USA) containing 4.5 g/L glucose, 5000 IU/L penicillin, 5 mg/L streptomycin, and 10% fetal bovine serum (FBS; Gibco Life Technologies) in an incubator at 37°C with a humidified atmosphere of 5% CO2 and 95% air. For the experiments conducted under serum-free conditions, the cells were cultured in serum-free medium for 24 h. For chemokine treatment, the cells were exposed to 20 ng/mL PDGF-BB (PeproTech, Rocky Hill, NJ, USA) for 48 h.

Animals and Experimental Design. All experimental protocols were conducted in accordance with the Animal Research: Reporting In Vivo Experiments (ARRIVE) guidelines, and the study was approved by the Animal Care and Use Committee of Sun Yat-sen University. Adult male Wistar rats (average body weight, 200-250 g) (Laboratory Animal Center of Sun Yat-sen University, Guangdong, China) were used in the study and were given ad libitum access to food and water at room temperature (20-22°C) with a 12-h light-dark cycle. The rats were randomly divided into two groups (n = 24 per group): carbon tetrachloride (CCl4) and negative control (NC).
The rats in the CCl4 group received subcutaneous injections of CCl4 at a dose of 3 mL/kg (mixed with olive oil (50% V/V)) twice a week. The NC group was treated with vehicle only (olive oil) equivalent to the CCl4 group. Six rats per group were randomly selected and euthanized on days 14, 28, 42, and 56 after the first injection, separately, and the livers were harvested for further study. Another 72 rats were randomly divided into four groups: CCl4 (n = 6), Mfn2 (CCl4 + Ad-Mfn2, n = 30), GFP (green fluorescent protein) (CCl4 + Ad-GFP, n = 30), and NC (n = 6). The Mfn2 and GFP groups were each randomly divided into five subgroups (n = 6 per subgroup). In these subgroups, Ad-Mfn2 or Ad-GFP was transfected into the circulation via intravenous injection on day 1, 14, 28, 42, or 56 after the first injection of CCl4. All rats were sacrificed on day 70, and their livers were removed for further study.

The HSC-T6 cells were transfected according to standard protocols. Briefly, the cells were cultured in 6-well plates, and the medium was changed every day until 70-80% confluence was achieved. The cells were transfected with the adenovirus vector at a multiplicity of infection (MOI) of 250 PFU (plaque-forming units) in serum-free DMEM. At 4 h after transfection, the medium was replaced with normal DMEM supplemented with 10% FBS, and the cells were cultured for 24 h. The cells were then cultured for another 24 h in medium containing 10% FBS and PDGF-BB to detect HSC-T6 cell proliferation. The transfection efficiency was approximately 70% for all experimental groups. The transfection into the animal models was as follows: 1 × 10^10 PFU Ad-Mfn2 or Ad-GFP was injected via the tail vein. The rats were anaesthetized with 2% pentobarbital sodium, and the liver tissues were obtained and cut into pieces with an average weight of 500 mg. A portion of the specimen was stored in formaldehyde for histopathological examination, and the other portion was immediately frozen at −80°C for western blotting studies.

Cell Proliferation. Cell proliferation capability was detected using Cell Counting Kit-8 (CCK-8, Dojindo Molecular Technologies, Kumamoto, Japan) assays. The cells (3 × 10^3 per well) were plated in triplicate in 96-well plates and cultured for 24 h. At 24, 48, and 72 h after transfection, 10 μL of CCK-8 (5 mg/mL) was added to each well, and the cells were cultured for 4 h. The absorbance was determined at 450 nm (Varioskan Flash, Thermo Fisher Scientific, Waltham, MA, USA). The experiments were repeated at least three times.

Western Blot Analysis and Antibodies. The HSC-T6 cells and liver tissue lysates were extracted with radioimmunoprecipitation assay (RIPA) cell lysis buffer (Beyotime Biotechnology, China), and the protein concentration in the lysates was quantified using an enhanced bicinchoninic acid (BCA) protein assay kit (Thermo Fisher Scientific) with bovine serum albumin as a standard. Equal amounts of total protein extracted from the cells or liver tissues were resolved by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), transferred to polyvinylidene fluoride (PVDF) membranes (Millipore, Burlington, MA, USA), and then probed overnight with the respective anti-rat primary antibodies. The next day, the membranes were incubated with the appropriate horseradish peroxidase-conjugated secondary antibodies (1:10,000, Boster Biological Technology, Wuhan, China). Specific proteins were visualized using enhanced chemiluminescence (ECL, Millipore).
For quantitative analysis, band density was measured and normalized to GAPDH.

Hematoxylin-Eosin (HE) and Masson Staining. The rat liver tissues were fixed in 10% formaldehyde and dehydrated by graded ethanol (70%, 80%, 90%, 95%, and 100%). After permeabilization with xylene, the tissues were immersed and embedded in paraffin. The paraffin blocks were cut into 4-μm slices, mounted on glass slides, and stained using standard HE and Masson staining techniques according to previous studies [24]. Tissue damage was evaluated by observing the inflammation, cell infiltration, interstitial edema, and cell vacuolar degeneration within the liver parenchyma under a microscope. The severity of interstitial fibrosis was estimated by scanning 10 nonrepeated fields in each sample with Masson staining and graded according to the Laennec fibrosis scoring system [25].

Immunohistochemistry. To analyze the protein expression of p-PDGFR-β, α-SMA, and COL1 in the liver tissues, immunohistochemistry staining assays were performed as described previously [24]. After baking in a 60°C incubator for 1 h, tissue sections were deparaffinized in xylene, hydrated by graded ethanol, and immersed in 3% H2O2 methanol solution for 30 min to block endogenous peroxidase activity. Next, the sections were sealed with goat serum (C-0005, Bioss Antibodies, China) and incubated at room temperature for 30 min. Then, the sections were incubated in diluted primary antibodies against p-PDGFR-β (1:100), α-SMA (1:100), and COL1 (1:100) in a wet box at 4°C overnight. After adding the secondary antibody, the sections were incubated for 30 min at room temperature, followed by coloration with a diaminobenzidine (DAB) horseradish peroxidase color development kit (Dako, Glostrup, Denmark) for 30-60 sec. The nuclei were counterstained with hematoxylin for 1-1.5 min. Afterwards, the sections were differentiated with 0.1% hydrochloric acid alcohol, returned to blue, dehydrated, and cleared. Finally, the sections were sealed with neutral gum and examined for expression of the target proteins using an optical microscope under ×200 magnification. The mean optical density (MOD) was measured with Image-Pro Plus 6.0 image analysis software.

Statistical Analysis and Image Processing Software. All statistical analyses were performed using SPSS for Windows version 20.0 (SPSS, Armonk, NY, USA). The t-test was used for comparing two groups; multiple groups were compared using one-way analyses of variance (ANOVA); a schematic sketch of these comparisons is appended at the end of this article. All cell culture experiments were independently performed in triplicate, and the measurement data are expressed as the mean ± standard deviation (SD). P < 0.05 was considered statistically significant in all cases. Canvas 16 Pro and Photoshop 7.0 were used for image gathering and processing.

Results

HSC-T6 Cells Transfected with Ad-Mfn2 Constitutively Expressed Mfn2. Transfection efficiency was highest when the MOI value between the adenovirus and cell was 250 PFU, as we have shown previously. Compared with the untransfected cells, HSC-T6 cells transfected with Ad-Mfn2 or Ad-GFP emitted green fluorescence under inverted fluorescence microscopy (Figure 1(a)). We verified MFN2 protein expression by western blotting 48 h after transfection. MFN2 protein expression levels were significantly increased in the cells transfected with Ad-Mfn2, compared with that of the cells transfected with Ad-GFP and the normal control (P < 0.01) (Figure 1(b)).
CCK-8 assay of cell proliferation activity indicated that HSC incubated with Ad-Mfn2 had significantly reduced cell proliferation compared with the control group and the GFP group, while HSC incubated with PDGF exhibited significantly increased cell proliferation (Figure 2(c)). These results indicate that Mfn2 can restrict HSC proliferation.

Mfn2 Suppressed Fibrosis of HSC-T6 Cells via the PDGFR-β-PI3K-AKT Signaling Pathway. The PI3K-AKT signaling pathway is essential for PDGF-induced cell growth in vitro [26] and is responsible for upregulating COL1 expression in HSC [27]. To elucidate the molecular mechanism by which Mfn2 inhibits PDGF-induced HSC proliferation, the protein expression of PDGFR-β, PI3K, AKT, and their phosphorylated forms, as well as α-SMA, TGF-β1, and COL1, was detected by western blotting. Figure 3 shows that PDGF led to PDGFR-β, PI3K, and AKT phosphorylation, and α-SMA, TGF-β1, and COL1 protein expression in the PDGF-induced cells was higher than that in the NC group. In addition, Mfn2 significantly reduced the PDGF-induced phosphorylation of PDGFR-β, PI3K, and AKT, and α-SMA, TGF-β1, and COL1 protein expression levels were significantly lower in cells overexpressing Mfn2 than in cells from the PDGF + GFP, PDGF, or NC groups. The PDGFR-β, PI3K, and AKT protein expression levels did not differ significantly among the four groups.

Liver Tissue Damage and Interstitial Fibrosis Gradually Deteriorated under the Influence of CCl4. HE staining demonstrated that there were no histological changes in NC group livers, which had normal morphology and regular lobular structure, while CCl4 group livers developed remarkable pathological changes such as inflammatory cell infiltration, interstitial edema, and cell vacuolar degeneration. Masson staining showed that collagen deposition and interstitial fibrosis were significantly increased in the CCl4 group; there was remarkable fibrosis in the portal tract. The portal and central veins were surrounded by fibrous septa, and the lobular structure was fuzzy with clearly visible false lobules (Figure 4(a)). At days 28, 42, and 56 after the first injection of CCl4, the Laennec fibrosis score for the CCl4 group was 2.67 ± 0.52, 4.50 ± 0.55, and 5.67 ± 0.52, respectively, which was significantly different compared with that in the NC group, which was 0.50 ± 0.55, 0.42 ± 0.49, and 0.33 ± 0.41 (P < 0.05), respectively, indicating that the severity of interstitial fibrosis was aggravated as the modeling duration increased (Figure 4(b)).

Protein Expression of p-PDGFR-β, TGF-β1, α-SMA, and COL1 Increased While MFN2 Decreased Gradually in the CCl4 Group. Western blotting indicated that, during the duration of modeling, p-PDGFR-β, TGF-β1, α-SMA, and COL1 protein expression increased gradually, while MFN2 protein expression decreased gradually in the CCl4 group compared with the NC group (Figure 5(a)). p-PDGFR-β, α-SMA, and COL1 protein expression was also investigated by immunohistochemical staining, which showed that expression increased gradually over time compared with the NC group (all, P < 0.05) and shifted from the portal area to the lobules, demonstrating that liver tissue fibrosis was aggravated in the CCl4 group (Figure 5(b)).

Upregulated Mfn2 Expression at the Early Stage of Hepatic Fibrosis Alleviated Tissue Damage and the Deposition of Extracellular Collagen. HE staining showed that the histological lesions were alleviated in the Mfn2 group compared with the CCl4 and GFP groups (Figure 6(a)).
Consistent with pathological changes in the liver, the amount of collagen deposition was remarkably decreased in the Mfn2 group (Figure 6(b)). However, such effects were significantly related to the actuation duration of Ad-Mfn2. The histological sections revealed that transfection on day 1 of the establishment of the hepatic fibrosis model led to a much lower amount of collagen deposition in the Mfn2 group compared with the CCl4 and GFP groups by the end of the experiment (P < 0.001). As the time of the influence of the Mfn2 gene decreased, the amount of collagen deposition increased gradually. Transfection after the model had been established (that is, on day 56) was followed by no difference between the amount of collagen deposition in the Mfn2 group and the CCl4 and GFP groups (P > 0.05) (Figure 6(c)).

p-PDGFR-β, α-SMA, and COL1 Expression Decreased under the Administration of Mfn2 in the Early Stage of Hepatic Fibrosis. As shown in Figure 7, western blotting and immunohistochemical staining indicated that p-PDGFR-β, TGF-β1, α-SMA, and COL1 protein expression was markedly decreased and restricted to the portal area when Ad-Mfn2 was transfected on the first day of CCl4 injection in the Mfn2 group compared with the CCl4 and GFP groups (P < 0.05). p-PDGFR-β, TGF-β1, α-SMA, and COL1 expression increased gradually with the delay in Ad-Mfn2 transfection and shifted from the portal area to the lobules. Transfection with Ad-Mfn2 once the model had been established (that is, 56 days after the first injection of CCl4) was followed by no difference in the expression of the above proteins between the Mfn2, GFP, and CCl4 groups (P > 0.05) (Figure 7).

Discussion

This study was designed to increase our understanding of the function of Mfn2 in HSC proliferation and in CCl4-induced liver fibrosis. We found that Mfn2 interfered specifically with PDGF-induced signaling, resulting in the inhibition of HSC proliferation. In addition, Mfn2 exhibited an antifibrotic effect at the early stage of fibrosis in vivo. Liver fibrosis is a progressive pathology of tissue damage and ECM deposition within the liver parenchyma, which may develop into cirrhosis and cancerous lesions. HSC play a critical role in excessive ECM production and secretion, leading to the deposition of collagen and fibrous septum formation [28]. In the present study, HSC proliferation was significantly inhibited after Mfn2 transfection. HSC activation induces the release of PDGF, a highly potent HSC mitogen, which binds to PDGFR-β, activating Ras and sequentially propagating the stimulatory signal via the PI3K-AKT signaling pathway [29,30]. PDGF regulates cell proliferation and division through phosphorylation by binding to the corresponding receptors on the cell membrane [5,20,21]. Moreover, Mfn2 suppresses cell proliferation by inhibiting the PI3K-AKT signaling pathway [23]. To explore the underlying mechanism of the antiproliferation effect of Mfn2, we detected the protein expression of PDGFR-β, PI3K, and AKT, and their active forms. Our results indicate that Mfn2 treatment dramatically decreased the protein levels of p-PDGFR-β, p-PI3K, and p-AKT, while PDGFR-β, PI3K, and AKT levels were not significantly different from those in the control group. Thus, we believe that Mfn2 blocked the PI3K-AKT signaling pathway by preventing PDGF binding to its receptors in the cell membrane and decreasing the phosphorylation of the corresponding receptor.
Interestingly, our results also show that Mfn2 downregulates the expression of TGF-β1, which stimulates ECM synthesis and inhibits its degradation [31]. However, the mechanism is unclear and remains to be addressed in further studies. The activation of HSC and their transformation into myofibroblast-like cells (MFBLC) are the core events of hepatic fibrosis, while increased α-SMA expression is the hallmark of the process. The activated HSC secrete large amounts of ECM, the components of which include COL1. Accordingly, we considered the expression of p-PDGFR-β, α-SMA, and COL1 to be appropriate indicators for evaluating the severity of fibrosis, consistent with previous studies [24,32,33]. Our data indicate that α-SMA and COL1 expression were significantly decreased under the administration of Mfn2 compared with the GFP control group. As Mfn2 has antiproliferative and antifibrotic potentiality in vitro, we hypothesized that it may have a similar effect in vivo. Hepatic fibrosis, with pathological features that include fibrous tissue hyperplasia around the portal area and central vein, destruction of the lobular structure, and regenerative nodules, is a progressive disease [34]. Consistent with this, we found that these pathological changes deteriorated gradually in the CCl4 group; the destruction of the lobular structure changed from fusion necrosis to bridging necrosis, and the affected range expanded as the modeling time was prolonged. ECM secretion increased, resulting in pseudolobuli formation. Liver fibrosis induced by CCl4 is similar to the mechanism involved in human liver fibrosis, as well as the staging of pathological changes, which are stable and reliable [35,36]. We therefore considered the CCl4-induced rat hepatic fibrosis model appropriate for subsequent exploration. Liver fibrosis, which is mainly manifested by excessive deposition of ECM such as COL1, is a common histological change in chronic liver disease [37]. The collagen content in liver protein increases significantly during liver injury, becoming an important ECM component and ultimately leading to irreversible cirrhotic changes [38,39]. In addition, ECM synthesis greatly influences HSC proliferation and activation, resulting in the development of fibrosis [40][41][42]. In the present study, both western blotting and immunohistochemistry showed that p-PDGFR-β, α-SMA, and COL1 expression increased gradually and extended within the liver parenchyma in the CCl4 group. Conversely, the NC group had minimal expression of the previously mentioned proteins, and they were restricted to the periportal area, which may represent the normal physiological function of the liver. Accordingly, in our opinion, the expression and location of the previously mentioned proteins are of great relevance to the severity of liver fibrosis. Our data show that p-PDGFR-β, α-SMA, and COL1 expression was markedly decreased under the administration of Mfn2, and they were restricted around the periportal area compared to that in the GFP and CCl4 groups. However, this was only observed in the rats that received the Mfn2 intervention in the early stage of liver fibrosis; as the actuation duration of CCl4 was prolonged, the effect of Mfn2 was gradually attenuated, and by that time there was no difference in the expression of the previously mentioned proteins between the Mfn2 group and the CCl4 group. These results all suggest that the antifibrotic effect of Mfn2 may be related to the inhibition of HSC proliferation, which results in the downregulation of p-PDGFR-β, α-SMA, and COL1 expression.
Conclusion

To conclude, based on our findings, we have established the framework that Mfn2 suppresses rat HSC proliferation and activation via the PI3K-AKT pathway by directly targeting p-PDGFR-β in the process of fibrosis. Moreover, Mfn2 exhibits antifibrotic potential in the early stage of hepatic fibrosis. Hence, Mfn2 probably provides new therapeutic methods for hepatic fibrosis in the near future.

Data Availability

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Ethical Approval

All experimental protocols were conducted in accordance with the Animal Research: Reporting In Vivo Experiments (ARRIVE) guidelines, and all surgeries were performed under anesthesia. The study was approved by the Animal Care and Use Committee of Sun Yat-sen University.

Conflicts of Interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Authors' Contributions

Yunle Wan, Changku Jia, and Zhiping Chen conceived and designed the project. Zhiping Chen, Zeyu Lin, and Haifeng Zhong performed the experiments and acquired the data. Jiandong Yu and Xianhua Zhuo analysed and interpreted the data. Zhiping Chen and Haifeng Zhong wrote the manuscript. All authors approved the final version of the article.
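As an illustration of the group comparisons specified in the Statistical Analysis subsection (t-test for two groups, one-way ANOVA for several, data as mean ± SD), the following is a minimal sketch; the numbers are placeholders, not data from this study.

```python
# Minimal sketch of the comparisons described in the Methods (t-test,
# one-way ANOVA, mean +/- SD). The values below are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mfn2 = rng.normal(0.8, 0.2, 6)   # e.g., normalized band densities, n = 6 rats
gfp = rng.normal(1.5, 0.2, 6)
ccl4 = rng.normal(1.6, 0.2, 6)

t_stat, p_two = stats.ttest_ind(mfn2, gfp)         # two groups: t-test
f_stat, p_anova = stats.f_oneway(mfn2, gfp, ccl4)  # several groups: ANOVA
print(f"Mfn2 group: {mfn2.mean():.2f} +/- {mfn2.std(ddof=1):.2f} (mean +/- SD)")
print(f"t-test p = {p_two:.4f}; one-way ANOVA p = {p_anova:.4f} (alpha = 0.05)")
```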
2022-01-19T16:40:28.427Z
2022-01-17T00:00:00.000
{ "year": 2022, "sha1": "08a84c2a840c2dfc60ade03884789e067cc79a6d", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jhe/2022/6731335.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a99f77f060e2c20a0d0366b4a8e275b6a3cbd458", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
221236177
pes2o/s2orc
v3-fos-license
First Cases of Natural Infections with Borrelia hispanica in Two Dogs and a Cat from Europe

Canine cases of relapsing fever (RF) borreliosis have been described in Israel and the USA, where two RF species, Borrelia turicatae and Borrelia hermsii, can cause clinical signs similar to those of the Borrelia persica infections in dogs and cats reported from Israel, including fever, lethargy, anorexia, thrombocytopenia, and spirochetemia. In this report, we describe the first clinical cases of two dogs and a cat from Spain (Cordoba, Valencia, and Seville) caused by the RF species Borrelia hispanica. Spirochetes were present in the blood smears of all three animals, and clinical signs included lethargy, pale mucosa, anorexia, cachexia, or mild abdominal respiration. Laboratory findings, like thrombocytopenia in both dogs, may have been caused by co-infecting pathogens (i.e., Babesia vogeli, confirmed in one dog). Anemia was noticed in one of the dogs and in the cat. Borrelia hispanica was confirmed as an infecting agent by molecular analysis of the 16S rRNA locus. Molecular analysis of housekeeping genes and phylogenetic analyses, as well as successful in vitro culture of the feline isolate, confirmed the causative agent as B. hispanica.

Background

The genus Borrelia comprises three phylogenetic clusters, namely: Lyme Borreliosis (LB) borreliae, relapsing fever (RF) borreliae, and the recently described reptile-associated and echidna-associated Borrelia (REP borreliae, Borrelia (B.) turcica, and Candidatus B. tachyglossi, respectively) [1,2]. LB in dogs in Europe has been described as a condition of arthritis and/or glomerulonephritis; the incidence of clinical disease is rare, despite a high exposure of dogs to the tick vector (Ixodes ricinus), and there may be a breed/genetic predisposition (reviewed by the authors of [3]). LB in cats in Europe is a rare condition; the seroprevalence is significantly lower compared with dogs, which may, in part, be explained by different tick exposures [4]. Apart from LB species, which are vectored by hard ticks of the genus Ixodes, soft ticks in Europe may carry Borrelia species that belong to the RF group of spirochetes, in particular Borrelia hispanica. Borrelia hispanica represents a spirochete species transmitted by Ornithodoros ticks, which can cause TBRF (tick-borne relapsing fever) in humans in the Mediterranean area [5,6]; see also Trape et al. [7].

Canine Case 1 (Dog 1; Sample ID VM531519)

Anamnesis: Podenco (hound); short hair; male; 6 years old; approximately 10 kg body weight (BW); hunting dog living in La Luisiana, Seville (Spain); sleeps under a roof. The following prophylactic measures were established: imidacloprid, chlorfenvinphos, and diazinon; no prophylaxis for leishmaniosis; treatment for internal parasites using albendazole and a combination of praziquantel/pyrantel/febantel; and an annual rabies vaccination plan. The dog was presented to a local veterinarian in February (3 February 2014), being lethargic, with pale mucosa, anemia (hematocrit 30.2%), and thrombocytopenia (Table 1). No data about the occurrence of microorganisms in a stained blood smear were available at this time point. Because of a positive Ehrlichia canis serologic result (SNAP® 4Dx®), treatment was performed with 5 mg/kg BW of doxycycline, every 12 h, for three weeks. Despite a good appetite, at the end of the first doxycycline cycle (27 February 2014), the hematocrit was still low (20.9%). In March (4 and 13 March 2014), the hematocrit was still not within the reference range (38.3-56.5%), with values of 32.8% and 31.4%, respectively.
Therefore, a second course of doxycycline, as described above, was performed. The dog improved clinically and by the end of March 2014 showed no more anemia. On 30 October 2014, the dog showed non-regenerative anemia (hematocrit 29.4%, erythrocytes 3.98 m/µL, hemoglobin 8.9 g/dL, and reticulocytes 58,108/µL), thrombocytopenia (23,000 platelets/µL), mild lymphopenia, and eosinopenia, as well as many spirochetes in a Giemsa-stained blood smear (Figure 1A, left panel). A diagnosis of relapsing fever was established and treatment with doxycycline was started the following day at the dose and interval mentioned above. Unfortunately, no information is available about the efficacy of the treatment and clinical progress. The laboratory tests were performed at IDEXX Ludwigsburg, Germany (November 2014). PCR: At IDEXX Ludwigsburg, a diagnostic real-time PCR was conducted for B. burgdorferi sensu lato (target gene was flagellin B (flaB)) with a negative result.

Canine Case 2 (Dog 2; Sample ID VM736940)

Anamnesis: This dog lives in the Gandia area (Valencia), in a residential area in the countryside. It is an outdoor dog and sleeps in the courtyard. It was usually dewormed with a combination of febantel/pyrantel/praziquantel, and treated for ectoparasites with a deltamethrin-containing collar on an irregular basis. The dog showed anorexia and apathy at a body temperature of 39°C (Table 1). Within a blood smear, a low level of Babesia spp. was observed and only a few Borrelia were visible (Figure 1A, right panel). Treatment with doxycycline (for four weeks) and amoxicillin (for two weeks) was started after the suspected diagnosis of a spirochetal infection. After treatment, the dog recovered completely.

Feline Case (Cat; Sample ID 10827448)

Anamnesis: one-year-old street cat, male, plenty of fleas, and no tick prophylaxis. At presentation, the cat showed cachexia (extreme weight loss) and mild abdominal respiration, and had a good appetite (Table 1). The body temperature was not elevated at the time of Borrelia detection. The cat was living in the Cordoba area (Andalucia). Laboratory abnormalities included severe regenerative anemia (low erythrocytes (2.45 m/µL), hemoglobin (3.3 g/dL), and hematocrit (12.9%) at high reticulocyte numbers (139,895/µL)) and mild monocytosis. No other clinicopathological abnormalities were found. Treatment was started with doxycycline after an additional blood sample was drawn for culture and molecular analyses. The duration of antibiotic treatment was 30 days. No clinical signs were observed after the treatment, and all of the altered hematology values returned to normal.

Samples Included in Our Study

All of the samples originated from animals living in Spain. The samples were sent to the IDEXX laboratory in Barcelona by local veterinarians, to whom the animals had been presented because of clinical signs. The samples were obtained as part of a routine diagnostic evaluation. Written informed consent was obtained from the owner. All investigations comply with the current laws of the countries in which they were performed. Diagnostic laboratory analyses (serology and PCR) were performed at the IDEXX reference laboratories in Ludwigsburg/Germany, and samples were subsequently sent for culture and further molecular analyses to the German National Reference Center for Borrelia at the Bavarian Health and Food Safety Authority, Oberschleißheim/Germany. The sample IDs are as follows: dog 1 = VM531519; cat = 10827448; dog 2 = VM736940.
Serology (IDEXX Ludwigsburg, Germany)

All of the tests were performed according to the literature [3,10,11]. For the immunoblot, slight modifications were implemented. Briefly, the Borrelia + OspA/B ViraStripe IgG Testkit (Viramed; Planegg, Germany) was performed, which contains specific purified antigens from B. afzelii (PKo) and B. burgdorferi sensu stricto, as well as a recombinant VlsE. As a secondary antibody, a phosphatase-labeled, affinity-purified antibody to dog IgG (H+L; produced in goat) at a dilution of 1:500 was used.

PCRs (IDEXX Ludwigsburg, Germany)

All of the tests were performed according to the literature [10,11], apart from Borrelia and Leptospira. Briefly, the total nucleic acid was extracted from the blood by using a QIAamp DNA Blood Mini kit (QIAGEN; Hilden, Germany), according to the manufacturer's instructions. A real-time PCR assay was performed using the LightCycler 480 (Roche, Mannheim, Germany) with proprietary forward and reverse primers and hydrolysis probes. The target gene for Leptospira spp. was lipl32/hap-1 (accession number AF245281.1) and for B. burgdorferi sensu lato flaB (MF150071.1). The PCR assays that were employed during this study were shown to have a reproducible average analytical sensitivity of 10 DNA molecules per reaction.

In Vitro Culture from Cat Blood at NRZ Borrelia, Oberschleißheim, Germany

The ethylenediaminetetraacetic acid (EDTA) blood obtained from the cat was used to set up in vitro cultures in microwell plates. All of the cultures were kept at 33°C under a 5% CO2 atmosphere. Four different media were used for the in vitro cultivation: (1) Modified-Kelly-Pettenkofer (MKP) medium (MKP basic medium supplemented with 5% bovine serum albumin and 6% rabbit serum) [12], (2) Barbour-Stoenner-Kelly (BSK)-H complete medium (Sigma-Aldrich; Darmstadt, Germany), (3) BSK-Y medium, and (4) RF-medium (MKP basic medium supplemented with bovine serum albumin (5%) and 50% fetal calf serum (FCS); CC-pro, Germany) [13]. For all of the cultures, 10 µL of EDTA blood was placed into 300 µL of medium.

DNA Extraction and PCR (NRZ Borrelia, Oberschleißheim)

DNA was extracted from the EDTA blood using the Maxwell® 16 Blood DNA Purification Kit according to the manufacturer's instructions (Promega, Mannheim, Germany). The extracted DNA was subjected to PCR amplification, as described below. Fragments of the 16S rRNA were amplified using primers and PCR conditions as described previously [14]. Multilocus sequence typing (MLST) on housekeeping genes (clpA, clpX, nifS, pepX, pyrG, rplB, recG, and uvrA) was performed principally as described (see www.pubmlst.org/borrelia, [15]). The sequences of the primers used are given in Table 2. For all of the PCR reactions, HotStarTaq Mastermix (Qiagen, Germany) was used. A touch-down protocol was employed for the first nine cycles, with annealing temperatures of 55°C to 48°C, decreasing by 1°C each cycle, followed by 32 cycles at a 48°C annealing temperature. The temperature profile was 95°C for 15 min (activation of Taq polymerase), denaturation at 94°C for 30 s, annealing for 30 s, and elongation at 72°C for 60 s. A final elongation step was at 72°C for 5 min, and the samples were then maintained at 12°C; the cycling program is represented schematically below.

Table 2. Primers* used to amplify MLST housekeeping genes in Borrelia hispanica (columns: Locus, Primer Name, Primer Sequence).

Molecular Analyses

Commercial sequencing was done by GATC Biotech AG (Konstanz, Germany).
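The touch-down cycling program described above can be represented schematically as follows; this is an illustrative sketch, not thermocycler vendor code, and the step times are in seconds:

```python
# Schematic sketch of the touch-down PCR program described above: 9
# touch-down cycles with annealing dropping from 55 degC by 1 degC per cycle
# (floored at 48 degC), then 32 cycles at 48 degC.
def touchdown_program():
    steps = [("activation", 95, 15 * 60)]        # 95 degC, 15 min
    anneal = 55
    for _ in range(9):                           # touch-down phase
        steps += [("denature", 94, 30), ("anneal", anneal, 30),
                  ("extend", 72, 60)]
        anneal = max(48, anneal - 1)
    for _ in range(32):                          # main amplification
        steps += [("denature", 94, 30), ("anneal", 48, 30),
                  ("extend", 72, 60)]
    steps += [("final_extend", 72, 5 * 60), ("hold", 12, None)]
    return steps

for name, temp_c, seconds in touchdown_program()[:5]:
    print(name, temp_c, seconds)
```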
We used MEGA5 [16,17] for sequence alignment, genetic distance analyses, and construction of phylogenetic trees. The Basic Local Alignment Search Tool (BLAST) [18] was used to compare the sequences obtained here to the sequences in GenBank using standard settings. Genetic distance analyses were conducted in MEGA5 [16,17] using the Kimura 2-parameter model [19]; a minimal sketch of this distance computation is given after the Conclusions below. The evolutionary history was inferred by using the maximum likelihood method based on the general time reversible model [16,20]. The initial tree(s) for the heuristic search were obtained automatically by applying neighbor-joining and BioNJ algorithms to a matrix of pairwise distances estimated using the maximum composite likelihood (MCL) approach, and then selecting the topology with a superior log likelihood value. The bootstrap values were calculated for 1000 repetitions. A discrete Gamma distribution was used to model the evolutionary rate differences among sites (+G). The rate variation model allowed for some sites to be evolutionarily invariable (+I). The trees are drawn to scale, with branch lengths measured in the number of substitutions per site. The codon positions included were 1st+2nd+3rd+Noncoding for the housekeeping gene sequences. All of the positions containing gaps and missing data were eliminated. Further information is given in the figure legend.

Sequence Deposition

The 16S rRNA sequences were submitted to GenBank with accession numbers MN173954 (cat) and MN175320 (dog 2). The sequences of the MLST housekeeping loci can be obtained from the Borrelia MLST database under pubmlst.org/borrelia/. Allele numbers are given in Table 3.

Results and Discussion

The report presented here describes the first clinical cases of natural infections with B. hispanica in two dogs and one cat living in various places in Spain (Cordoba, Valencia, and Seville). Clinical signs were reported as lethargy, pale mucosa, anorexia, cachexia, or mild abdominal respiration, and were consistent with previously described symptoms in animals infected with different RF Borrelia species [8,9]. Dog 1 (sample ID VM531519) represents a case with clearly visible spirochetes in the blood (Figure 1A) that were identified as B. hispanica by molecular analysis. This dog had antibodies to other canine vector-borne diseases (CVBDs): a high E. canis titer but also low Leishmania and Babesia antibodies. Thus, the clinical signs and laboratory abnormalities (such as non-regenerative anemia and thrombocytopenia, as well as a mild lymphopenia and eosinopenia) could have been triggered or exacerbated by the presence of a co-infection. Another explanation may be that the dog experienced an Ehrlichia infection in early February 2014, which was successfully treated, and then acquired an RF Borrelia infection that was diagnosed in October 2014. Co-infections in animals with different disease agents should always be considered; in our study, dog 2 (sample ID VM736940) showed a co-infection of B. hispanica (microscopically/molecular) and Ba. vogeli (microscopically/molecular). In a previous study on dogs infected with B. persica and co-infected with Babesia (PCR and microscopy/blood smear), additional treatment with imidocarb was initiated [8], but this was not performed in the present case as dog 2 improved with only antibiotic treatment (amoxicillin and doxycycline). Interestingly, dog 1 showed a positive serologic reaction to B. burgdorferi in ELISA, whereas dog 2 did not.
Possible reasons for these differences in the diagnostic response include an actual exposure to both (relapsing fever and Lyme borreliae) in dog 1; an anamnestic titer against B. burgdorferi after a previous, successfully cleared infection; or a varied cross-reactivity of antibodies (i.e., IgM) against RF Borrelia with B. burgdorferi due to different time points of infection. Support for the latter supposition comes from the fact that dog 1 showed a high number of spirochetes in the blood smear (Figure 1A, left panel), whereas only a few bacteria were identified in the blood smear of dog 2 (Figure 1A, right panel). Moreover, dog 2 was the only animal without anemia, and showed signs pointing toward chronic infection, such as leucocytosis and thrombocytopenia. A recent experimental infection of six dogs with the RF species B. turicatae lends support to the cross-reactivity hypothesis, as five of the dogs showed positive results in a whole-cell-based test (IFA) for LB borreliae, and three of them had slight positive reactions in a Quant-C6-ELISA (13 to 24 U/mL), but all six dogs reacted negatively in the SNAP® 4Dx® test (as shown for both dogs and the cat in the present study) [21]. A limitation of this study was that only one time point post infection (43 dpi) was tested. Therefore, the SNAP® 4Dx® test, utilizing a C6 peptide of B. burgdorferi, can currently be regarded as the only serological test for LB in dogs not cross-reacting with RF borreliae. At present, there are no commercially available serological tests for RF borreliosis in dogs that could improve the diagnosis, i.e., based on GlpQ and BipA [21]. Another RF Borrelia species in Europe, B. miyamotoi, may also represent a diagnostic challenge in terms of cross-reactivity with LB borreliae. Because it has the same vector as B. burgdorferi, co-exposure could be even more probable. Borrelia miyamotoi has already been detected in Ixodes ticks collected from dogs in Germany [22], and was detected in questing Ixodes ticks in Spain [23][24][25]. Further studies, perhaps based on experimental infections, may clarify the issue of serological responses related to B. hispanica or B. miyamotoi in dogs. The cat presented with weight loss, abdominal breathing, and a regenerative anemia, but, in contrast to both canine cases, without thrombocytopenia, supporting the idea that the observed reduction in platelets in both dogs could have been the result of a co-infection. As a result of the immediate onset of doxycycline treatment, the culturing of Borrelia in both canine cases was not successful. As the cat blood was taken before the onset of antibiotic treatment, an in vitro cultivation of Borrelia was successful (Figure 1B), but only with the RF-medium (MKP basic medium supplemented with FCS). Highly mobile spirochetes were observed after 10 days of cultivation. In the BSK-H medium, a non-motile spirochete was found, while in the MKP medium and BSK-Y medium, no spirochetes were observed. In the molecular analyses of 16S rRNA and MLST, PCR products were obtained for all three samples. For dog 1 and the cat, sequences for several housekeeping loci were obtained, while for dog 2, only the 16S rRNA locus was successfully amplified. To obtain initial information on the species designation of the isolates, a BLAST search was conducted using 16S rRNA sequences. In this search, the highest similarity scores (100%, Table 3) were received for B. hispanica.
For four housekeeping loci, clpX, pepX, pyrG, and recG, sequences of good quality were obtained for the cat isolate, while for dog 1, three loci produced good sequence data (clpX, pyrG, and rplB). A comparison of these sequences with the available data in GenBank showed that the highest BLAST scores (97% or 98%) were in all cases for B. crocidurae, B. duttonii, and B. recurrentis, while the BLAST similarity scores dropped to much lower values for B. persica (Table 3). A search in the MLST database at www.pubmlst.org/borrelia revealed that all of the MLST housekeeping sequences showed closest matches to B. hispanica: clpX was identified to be allele 210 and rplB allele 244, both representing B. hispanica. The closest matches (meaning the sequences were not identical, but differed in some bases) were found for pepX to allele 225 (three differences), for pyrG to allele 232 (seven and one differences in dog 1 and the cat, respectively), and for recG to allele 244 (three differences), further confirming that the isolates belonged to B. hispanica. The phylogenetic analyses of 16S rRNA and the concatenated sequences of four housekeeping loci from the cat isolate showed that the isolates formed a sister clade to B. hispanica (Figures 2 and 3).

Conclusions

This is the first report of clinical cases caused by the relapsing fever spirochete B. hispanica in dogs and cats from Europe (Spain). Some clinical signs and/or laboratory values might have been influenced by the presence of other vector-borne pathogens.
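As an illustration of the Kimura 2-parameter model used for the genetic distance analyses above, here is a minimal sketch; the toy sequences are made up and merely stand in for the aligned 16S rRNA or MLST fragments:

```python
# Minimal sketch (not from the paper) of the Kimura 2-parameter distance
# that MEGA5 computes: d = -1/2 ln(1 - 2P - Q) - 1/4 ln(1 - 2Q), where P and
# Q are the observed proportions of transitions and transversions.
import math

PURINES = {"A", "G"}

def k2p_distance(seq1: str, seq2: str) -> float:
    assert len(seq1) == len(seq2), "sequences must be aligned"
    n = len(seq1)
    transitions = transversions = 0
    for x, y in zip(seq1.upper(), seq2.upper()):
        if x == y:
            continue
        # transition: purine<->purine or pyrimidine<->pyrimidine substitution
        if (x in PURINES) == (y in PURINES):
            transitions += 1
        else:
            transversions += 1
    p, q = transitions / n, transversions / n
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)

# Toy example on two short made-up fragments:
print(k2p_distance("ACGTACGTACGTACGTACGT", "ACGTACGAACGTACGCACGT"))
```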
2020-08-20T10:11:45.943Z
2020-08-01T00:00:00.000
{ "year": 2020, "sha1": "2f66b868de1c4423d6cbf541ade10427715079ab", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2607/8/8/1251/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "86bc9a455041a2010d59b95515fd49820cdae885", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
219317308
pes2o/s2orc
v3-fos-license
Gestational sleep deprivation is associated with higher offspring body mass index and blood pressure

Abstract

Study Objectives
The objective of this study was to evaluate the association between gestational sleep deprivation and childhood adiposity and cardiometabolic profile.

Methods
Data were used from two population-based birth cohorts (Rhea study and Amsterdam Born Children and their Development study). A total of 3,608 pregnant women and their children were followed up until the age of 11 years. Gestational sleep deprivation was defined as 6 or fewer hours of sleep per day, reported by questionnaire. The primary outcomes included repeated measures of body mass index (BMI), waist circumference, body fat, serum lipids, and systolic and diastolic blood pressure (DBP) levels in childhood. We performed a pooled analysis with adjusted linear mixed effect and Cox proportional hazards models. We tested for mediation by birthweight, gestational age, and gestational diabetes.

Results
Gestational sleep deprivation was associated with higher BMI (beta; 95% CI: 0.7; 0.4, 1.0 kg/m2) and waist circumference (beta; 95% CI: 0.9; 0.1, 1.6 cm) in childhood, and with an increased risk for overweight or obesity (HR; 95% CI: 1.4; 1.1, 2.0). Gestational sleep deprivation was also associated with higher offspring DBP (beta; 95% CI: 1.6; 0.5, 2.7 mmHg). The observed associations were modified by sex (all p-values for interaction < 0.05) and were more pronounced in girls. Gestational diabetes and shorter gestational age partly mediated the seen associations.

Conclusions
This is the first study showing that gestational sleep deprivation may increase offspring's adiposity and blood pressure, while exploring possible mechanisms. Attention to glucose metabolism and preterm birth might be extra warranted in mothers with gestational sleep deprivation.

Introduction

A suboptimal intrauterine environment is now a recognized risk factor for overweight/obesity and higher blood pressure during later life [1,2]. Pregnancy is a period when lifestyle interventions are encouraged, and parents are aware of their choices. Current interventions are mainly focused on maternal physical activity and/or a healthful diet, and appear effective in decreasing gestational weight gain and diabetes, with some evidence for positive maternal and child outcomes [3][4][5][6]. Current evidence to support the hypothesis that sleep disorders during pregnancy have long-term cardiometabolic effects on offspring comes solely from studies in mice. Sex dimorphism has been found in a mouse study on metabolic dysfunction due to late gestational sleep fragmentation; male offspring had higher food intake, body weight, visceral fat mass, and insulin resistance and lower adiponectin levels, but not female offspring. Dyslipidemia was apparent in both male and female offspring after gestational sleep deprivation [22]. Two other mouse studies found that gestational sleep deprivation increases blood pressure in offspring via alterations in cardiovascular autonomic regulation and renal morphofunctional changes [23,24]. The effects of gestational sleep deprivation were similar between male and female mice, but in females, the effects were bigger in mice that underwent an ovariectomy and lacked female hormones. In epidemiologic studies, poorer sleep in children has been associated with metabolic risk, adiposity, and altered lipid profile [25][26][27][28][29][30], and these effects in children have been more prominent in girls compared with boys [25,31,32].
As far as we know, there is no published human-based research on the role of sleep during pregnancy in childhood obesity and metabolic health. Our aim was to evaluate the association between gestational sleep deprivation and childhood adiposity and cardiometabolic profile in a pooled analysis of mother-child pairs from two European birth cohorts, with attention to possible interaction by sex and plausible factors mediating these associations. Study population This study utilized data from two European birth cohorts, the Greek "Rhea" birth cohort [33] (n = 1,363) and the Dutch Amsterdam Born Children and their Development (ABCD) study [34] (n = 12,379). Both studies are population-based birth cohorts that started during pregnancy. Children from the Rhea cohort were examined at ages 4 (n = 879) and 6 (n = 606) years, while children from the ABCD study were examined at ages 5 (n = 3,260), 10 (n = 2,162), and 11 (n = 935) years. Gestational sleep deprivation Information on the sleeping habits of the participating mothers of the Rhea cohort was collected through a computer-assisted interview in the third trimester of pregnancy (median [25th-75th percentile] gestational week: 32 [31-35]) [13]. Sleep duration was obtained by the following closed-ended question: "During the past month, how many hours did you sleep per day?" The mother reported sleep duration as 5 or fewer hours, 6-7 h, 8-9 h, or 10 or more hours [13]. Sleep deprivation was defined as 5 or fewer hours of sleep. Information on gestational sleep duration was available for 685 children with available outcome data at age 4 years and for 436 children with data available at age 6 years. Pregnant women in the ABCD study received a written questionnaire (median [25th-75th percentile] gestational week: 16 [14-18]) and were asked an open-ended question: "How many hours did you sleep or rest lying down per day (of 24 h) on average in the past week?" Sleep deprivation was defined as 6 or fewer hours of sleep, compared with 5 for Rhea, in order to account for the extra daytime resting hours that were reported. Information on gestational sleep duration during pregnancy was available for 3,191, 2,112, and 917 children with available outcome data at ages 5, 10, and 11 years, respectively. Gestational sleep deprivation was used as a binary variable to assess the associations of extremely short gestational sleep with the outcomes of interest, rather than sleep duration differences in hours. The cutoff was set at 5 hours of sleep for Rhea and at 6 hours for ABCD due to differences in the sleep questionnaires administered in the two cohorts. We decided on this as extremely short sleep is generally considered unhealthy, whereas sleep duration needs may vary from person to person and differ across cultures. However, as a sensitivity analysis we also used two additional cutoffs, at 5 and 7 h of sleep, in both cohorts. Child outcome measurements Details of the child anthropometry, blood pressure, and serum lipids outcomes are given in the online supplement. In summary, children's weight and height, waist circumference, percentage of body fat, diastolic (DBP) and systolic (SBP) blood pressure, and lipid profile were measured in the two cohorts at health clinic visits and/or planned follow-up study assessments.
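A minimal sketch of the cohort-specific exposure coding described above, written in Python with pandas; the DataFrame and its column names ("cohort", "sleep_hours") are hypothetical illustrations, not the cohorts' actual variable names.

import pandas as pd

# Dichotomize reported sleep using the cohort-specific cutoffs described
# in the text: 5 or fewer hours for Rhea, 6 or fewer hours for ABCD.
def flag_sleep_deprivation(df):
    cutoff = df["cohort"].map({"Rhea": 5, "ABCD": 6})
    return (df["sleep_hours"] <= cutoff).astype(int)

df = pd.DataFrame({"cohort": ["Rhea", "Rhea", "ABCD", "ABCD"],
                   "sleep_hours": [5, 8, 6, 7]})
df["sleep_deprived"] = flag_sleep_deprivation(df)
print(df)

Mapping the cutoff per cohort, rather than hard-coding a single threshold, also makes the sensitivity analyses with common cutoffs (5 or 7 h in both cohorts) a one-line change.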
For both cohorts we defined overweight using the same procedure. First, we calculated BMI (weight/height2) [35] and then categorized children as normal weight, overweight, or obese according to the sex- and age-specific cutoff points proposed by the International Obesity Task Force (IOTF) [36]. As a sensitivity analysis we also used age- and sex-specific z-scores for the outcomes BMI and blood pressure. Serum lipids included fasting plasma total cholesterol, triglycerides, high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C). Statistical analysis We conducted descriptive analyses using standard univariate statistical procedures (chi-square, t-test). We compared mother-child pairs with normal sleep duration or gestational sleep deprivation for baseline characteristics between cohorts. Additionally, per cohort, we compared the mother-child pairs with follow-up and complete data on covariates (participants) to mothers who participated in the study during pregnancy but were missing information on one or more covariates at follow-up (non-participants). Our main analysis was a pooled analysis of the Rhea and ABCD cohorts. For continuous outcomes we used linear mixed models, and for overweight/obesity we used Cox proportional hazards models. Linear mixed models included random effects for cohort and child and a random slope for child age. Mixed models also included an interaction term between the exposure and child age at examination. Child age at examination in the interaction term was used categorically (4, 5, 6, 10, and 11 years in the models for BMI; 4, 5, 6, and 11 years in the models for all other outcomes). The overall effect of the exposure was evaluated using the marginal effects, and the difference between the two groups was tested using Wald's test. The associations are reported in terms of beta coefficients and their corresponding 95% CIs. In the Cox proportional hazards models, shared frailties for cohort were introduced in order to account for the shared risk within each cohort, and hazard ratios (HR) and 95% CIs were reported. Birth was considered the time of study entry and age at study visit the time scale in our analysis. The exact age at the visit during which the child first became overweight/obese was used as the time of the event. Children who did not become overweight/obese during follow-up were censored at the end of study follow-up or when lost to follow-up. The proportional hazards assumption was tested using both graphical inspection methods and Schoenfeld residuals. We constructed a directed acyclic graph (DAG) based on previous knowledge and selected the set of confounders using DAGitty version 3.0 (Supplementary Figure S1) [37]. The confounders included in all models were maternal age at conception, parity (nulliparous and multiparous), maternal smoking early in pregnancy (yes/no), pre-pregnancy BMI (normal weight/overweight/obese), maternal education (low/middle/high), and maternal origin (country of cohort/other). Child sex and age at assessment (years) were also included. Models with blood pressure as an outcome were further adjusted for child height and BMI, and models with lipids as an outcome were further adjusted for child BMI. In order to evaluate potential effect modification by sex, we included a multiplicative exposure-sex interaction term in each model.
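A model of this general shape could be written as follows in Python; this is an illustrative sketch on simulated toy data (the authors used Stata), with hypothetical column names, a simplified confounder set, a child-level random intercept plus a random slope for age via statsmodels, and the paper's cohort random effect approximated here by a fixed effect.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_children = 200
# Toy simulated long-format data: three visits per child. Not real data.
df = pd.DataFrame({
    "child_id": np.repeat(np.arange(n_children), 3),
    "cohort": np.repeat(rng.choice(["Rhea", "ABCD"], n_children), 3),
    "sleep_deprived": np.repeat(rng.integers(0, 2, n_children), 3),
    "sex": np.repeat(rng.integers(0, 2, n_children), 3),
    "age_years": np.tile([5.0, 10.0, 11.0], n_children),
})
df["bmi"] = (15 + 0.3 * df["age_years"] + 0.7 * df["sleep_deprived"]
             + rng.normal(0, 1, len(df)))

# Exposure-by-age interaction as in the text; the paper's full confounder
# set (parity, smoking, maternal age, pre-pregnancy BMI, education,
# origin) would be appended to the formula in the same way.
mlm = smf.mixedlm("bmi ~ sleep_deprived * C(age_years) + C(cohort) + sex",
                  data=df, groups=df["child_id"], re_formula="~age_years")
print(mlm.fit().summary())
# The overweight/obesity analysis used Cox models with shared cohort
# frailties; in Python, lifelines' CoxPHFitter with cohort entered as a
# stratum would be a rough stand-in, since frailty terms are not built in.

At this toy sample size the random-slope model may emit convergence warnings; the point of the sketch is the model specification, not the estimates.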
As a sensitivity analysis, we performed a random effects meta-analysis. For this, we obtained cohort-specific estimates using mixed effects models with child random effects and a random slope for child age for continuous outcomes, and Cox proportional hazards models for binary outcomes. Consequently, we combined these estimates using random effects meta-analysis, in order to check the consistency with the pooled analysis and to quantify heterogeneity among the included studies with the chi-square test from Cochran's Q and the I2 statistic. We tested whether there was significant mediation by three plausible mediators (gestational diabetes, gestational age, and birthweight) of the association between gestational sleep deprivation and childhood BMI and other outcomes (Figure 1). We built two separate mediator models with structural equation modeling (SEM): the first with the continuous mediators gestational age and birthweight as parallel mediators; the second with gestational diabetes as a single binary mediator. The second model included only mother-child pairs from the ABCD study, as gestational sleep deprivation was measured in early pregnancy, before the diagnosis of gestational diabetes would be made. The a-path of a mediator reports the association between gestational sleep deprivation and the mediator, and the b-path reports the association between the mediator and offspring BMI at age four to five years. The indirect effect is the product of the a- and b-paths. A 95% percentile bootstrap CI was calculated based on 1,000 bootstrap resamples for the indirect effect (ab), in order to test for significance. The total indirect effect is the sum of both indirect effects in a parallel model. The total direct effect (c′-path) refers to the association between gestational sleep deprivation and offspring BMI, corrected for the mediators. The total effect (c-path) is the association between gestational sleep deprivation and offspring BMI. The following confounders were added to the simple adjustment model: child sex and age at assessment (years). Considering the small numbers in each group, we did not perform a mediation analysis with full adjustment for all confounders. All analyses were conducted using Stata versions 13 and 15, and the significance level for all two-sided tests was set at the 5% level. We used the capture program for the mediation analysis.
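A sketch of the a x b indirect effect with a 1,000-resample percentile bootstrap, mirroring the single-mediator logic described above for gestational age; this is regression-based rather than the authors' SEM, and the data below are simulated for illustration only, with hypothetical column names.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# Simulated data built to roughly echo the direction of the reported
# paths: sleep deprivation shortens gestation, shorter gestation raises BMI.
df = pd.DataFrame({"sleep_deprived": rng.integers(0, 2, n),
                   "sex": rng.integers(0, 2, n),
                   "age_years": rng.choice([4, 5, 6], n)})
df["gest_age"] = 39.5 - 0.5 * df["sleep_deprived"] + rng.normal(0, 1.2, n)
df["bmi"] = (16 - 0.12 * (df["gest_age"] - 39.5)
             + 0.5 * df["sleep_deprived"] + rng.normal(0, 1.5, n))

def indirect_effect(d):
    a = smf.ols("gest_age ~ sleep_deprived + sex + age_years", data=d).fit()
    b = smf.ols("bmi ~ gest_age + sleep_deprived + sex + age_years", data=d).fit()
    return a.params["sleep_deprived"] * b.params["gest_age"]  # a-path * b-path

ab = indirect_effect(df)
boot = np.array([indirect_effect(df.iloc[rng.integers(0, len(df), len(df))])
                 for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ab = {ab:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")

As in the paper, significance is read off from whether the percentile interval excludes zero.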
Participant characteristics In the present analyses, complete data on exposure, outcome, and covariates were available for a total of 661 and 453 Rhea mother-child pairs at ages 4 and 6 years, respectively, and for a total of 2,947, 1,957, and 874 ABCD mother-child pairs at ages 5, 10, and 11 years, respectively. Table 1 shows maternal and infant characteristics. In total, 144 (4.0%) mothers were sleep deprived during the index pregnancy (5.6% in Rhea and 3.6% in ABCD). Cardiometabolic characteristics of the children are presented in Supplementary Table S1. In the Rhea cohort, 21.4% of the children were overweight at age 6 years and 11.0% were obese, whereas in the ABCD cohort 7.1% of the children were overweight at age 5 years and 1.5% were obese. In ABCD, sociodemographic characteristics of the mothers with gestational sleep deprivation differed significantly from those of mothers with adequate gestational sleep; they had higher rates of gestational diabetes (6.5% vs. 1.5%), and their children were born at a lower gestational age (39.1 vs. 39.5 weeks) and with a lower birthweight (3,364 vs. 3,477 g). Besides parity, we did not see these differences in the Rhea cohort (Supplementary Table S2). Nonresponse analysis revealed that participating mothers had higher education and lower pre-pregnancy BMI in both cohorts compared to mother-child pairs lost to follow-up (Supplementary Table S3). Gestational sleep deprivation and childhood cardiometabolic health There was significant effect modification by sex of the observed associations (p-values for interaction < 0.05; Table 2). When stratified by sex, short sleep duration in pregnancy was significantly associated with higher DBP, BMI, and risk for overweight/obesity in girls only, whereas these associations in boys were smaller and not significant. The adverse associations of short maternal sleep with the child's waist circumference and SBP were also stronger in girls compared to boys; however, these interactions did not reach statistical significance (Table 2). Sensitivity analysis When using age- and sex-specific z-values for BMI and blood pressure, we also found BMI and DBP to be associated with gestational sleep deprivation in girls (Supplementary Table S5). The second sensitivity analysis showed that using the same cutoff of ≤5 h of sleep/day for gestational sleep deprivation in both cohorts made the associations stronger and still significant, even with a prevalence of gestational sleep deprivation of 2%. When we used ≤7 h as a cutoff in both cohorts, the prevalence of gestational sleep deprivation was 19% and the associations remained significant for overweight/obesity and blood pressure in girls (Supplementary Table S6). The random effects meta-analysis of the cohort-specific estimates from the mixed models confirmed the girl-specific associations of short maternal sleep during pregnancy with BMI, waist circumference, and blood pressure (Figure 2 and Supplementary Table S7). The associations were stronger, and only significant, in the ABCD cohort compared to the Rhea cohort. There was significant interaction by age for the associations with BMI, waist circumference, total cholesterol, and LDL, as the effects of gestational sleep deprivation became stronger with age (Supplementary Table S8). The I2 statistic for BMI was suggestive of heterogeneity of the effect in the two studies (I2 = 71.6%, p-value = 0.061), but stratification according to child sex revealed evidence for heterogeneity among boys (I2 = 71.6%, p-value = 0.115) and not among girls (I2 = 0.0%, p-value = 0.323). No heterogeneity was observed for the other outcomes (I2 = 0.0%, p-values > 0.1; Figure 2).
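For reference, the DerSimonian-Laird computations behind such pooled estimates, Cochran's Q, and I2 fit in a few lines of Python; the two (beta, SE) pairs below are made up for illustration and are not the paper's estimates.

import numpy as np

def dl_meta(betas, ses):
    """DerSimonian-Laird random effects pooling with Cochran's Q and I^2."""
    b, w = np.asarray(betas, float), 1.0 / np.asarray(ses, float) ** 2
    fixed = np.sum(w * b) / np.sum(w)
    q = np.sum(w * (b - fixed) ** 2)           # Cochran's Q
    k = len(b)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (np.asarray(ses, float) ** 2 + tau2)
    pooled = np.sum(w_re * b) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    return pooled, se, q, i2

pooled, se, q, i2 = dl_meta([0.4, 0.8], [0.30, 0.15])  # illustrative inputs
print(f"pooled beta = {pooled:.2f} (SE {se:.2f}), Q = {q:.2f}, I^2 = {i2:.1f}%")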
Mediation by gestational diabetes, gestational age, and birthweight Table 3 presents the results of the mediation analysis for BMI. The total direct effect (c′-path) was 0.5, meaning that children of mothers with gestational sleep deprivation had a 0.5 kg/m2 higher mean childhood BMI. Gestational diabetes was a significant mediator of the association between gestational sleep deprivation and offspring BMI. Mothers with gestational sleep deprivation during early pregnancy had 4.5 times higher odds of gestational diabetes (a-path), and gestational diabetes was associated with a mean increase of 1.1 kg/m2 in offspring BMI. The confidence interval of the indirect effect was wide, due to small numbers. Gestational age was also a significant mediator of the association between gestational sleep deprivation and offspring BMI, leading on average to a 0.06-point higher BMI. We found that children of mothers with gestational sleep deprivation were born with a half-week shorter gestational age (a-path), and that a shorter gestational age was associated with a higher offspring BMI (b-path). Both indirect effects were found significant, as the bootstrap confidence intervals of the indirect effects did not contain zero, even though the numbers for gestational diabetes were small, resulting in a wide confidence interval. Low birthweight was not a significant mediator. The effect of gestational sleep deprivation on birthweight was not significant (a-path), but a higher birthweight was associated with a higher offspring BMI (b-path). Apart from the BMI outcome, we also tested mediation for the other metabolic outcomes of interest. Gestational diabetes was a mediator for overweight/obesity, waist circumference, and per cent body fat, but not for DBP and SBP. Gestational age was a mediator for overweight/obesity and waist circumference. Low birthweight was not a mediator for the outcomes of interest (Supplementary Table S9). (Table 2 notes: gestational sleep deprivation was defined as 5 or fewer and 6 or fewer hours for the Rhea and ABCD cohorts, respectively. All models were adjusted for child sex, age at assessment (years), parity (nulliparous and multiparous), maternal smoking early in pregnancy (yes/no), maternal age at conception, pre-pregnancy BMI (normal weight/overweight/obese), maternal origin (country of cohort/other), and maternal education (low/middle/high); blood pressure models were additionally adjusted for child height and BMI at assessment. Overweight/obesity was defined using the BMI cutoff points for sex and age proposed by the IOTF. Hazard ratios and 95% CIs were obtained by Cox proportional hazards models with shared cohort frailties; beta coefficients and 95% CIs are marginal effect estimates from mixed effects models with cohort and child random effects and an age interaction. Bold-faced text indicates significant associations (p-value < 0.05). BMI, body mass index; BP, blood pressure.) (Figure 2 caption: A random effects meta-analysis of adjusted associations between gestational sleep deprivation and adiposity and blood pressure in childhood. Gestational sleep deprivation was defined as 5 or fewer and 6 or fewer hours for Rhea (n = 661) and ABCD (n = 2,947), respectively. Cohort-specific estimates were obtained by mixed effects models with child random effects and a random slope for child age, adjusted for the covariates listed above; models for blood pressure were additionally adjusted for child height and BMI at assessment.) Discussion This is the first human epidemiological study to show that gestational sleep deprivation may be associated with the offspring's cardiometabolic profile. Children born to mothers with short sleep duration during pregnancy had higher adiposity and blood pressure levels, with associations being more pronounced in girls than in boys and the effects becoming stronger with age. The effect estimates for each cohort separately were in the same direction, but stronger and significant in the ABCD cohort. The associations with adiposity were partly mediated by gestational diabetes and shorter gestational age. Both sleep duration and sleep quality are known to change during pregnancy [38].
A recent meta-analysis found that about half of pregnant women experience poor sleep quality and that median sleep quality decreases from the second to the third trimester [39]. Studies in the general population, as well as in pregnant women, suggest that sleep disturbances may alter the neuroendocrine homeostasis of the body, with increased activity of the sympathetic nervous system and the hypothalamic-pituitary system, as well as of the stress and pro-inflammatory responses, which are associated with numerous health consequences [40,41]. Syntheses of findings from epidemiological studies in general populations suggest that lack of sleep is associated with obesity and a wide range of adverse cardiometabolic outcomes affecting both adults and children [42-45]. Importantly, during pregnancy the adverse physiologic response to sleep deprivation may lead to a suboptimal intrauterine environment, with subsequent effects on placental function and direct maternal and fetal effects, but also with long-term consequences [2,40]. Gestational sleep disruption has been associated with gestational diabetes, preterm delivery, and birthweight [11-16], factors that are also associated with the child's risk of overweight/obesity and cardiometabolic status [17-21] and thus may be involved in the causal pathway. In agreement with that, we found partial mediation by gestational diabetes of the associations between gestational sleep deprivation and offspring BMI, overweight, waist circumference, and per cent body fat. Mothers with gestational sleep deprivation during early pregnancy had higher odds of gestational diabetes during later pregnancy, and gestational diabetes in turn was associated with higher offspring BMI. The underlying pathogenic mechanisms behind gestational diabetes and the abnormal metabolic risk profile in offspring are unknown, but epigenetic changes induced by exposure to maternal hyperglycemia during fetal life may be implicated in impaired insulin sensitivity in the offspring [46]. We also found that part of the association between gestational sleep deprivation and offspring adiposity in our cohort was mediated by gestational age; children of mothers with gestational sleep deprivation were born on average half a week earlier, and that was associated with a small increase in offspring BMI. Studies suggest that the balance between pro- and anti-inflammatory cytokines may vary in each trimester, and sleep deprivation can adversely affect the pro-inflammatory response, with endothelial dysfunction in the placenta, which, along with impaired glucose metabolism, can lead to preterm labor [14,47,48]. This causal pathway is further supported by another cohort study showing that obesity at the age of 2 years among children who were born extremely preterm was associated with perinatal systemic inflammation [49]. We found interaction by sex in our associations, with the associations being more pronounced in girls than in boys. A sex-specific effect of poor sleep has also been observed in epidemiological studies in children, where sleep disruption was associated with more prominent effects on metabolic risk, adiposity, and altered lipid profile in girls compared with boys [25,31,32]. Also, during pregnancy, sexual dimorphisms have been observed in the effects of maternal obesity on childhood growth [17]. A possible mechanism could be differences in placental function between boys and girls, which are caused by differences in gene expression in response to maternal health [50].
The differences in adaptation between males and females may be context, species, and stage specific, and therefore it is difficult to say whether one sex copes better than the other [50]. Our findings in humans are not in line with studies in mice, where associations with sleep fragmentation were stronger in male offspring [22] and sleep deprivation had similar associations with blood pressure in both sexes [23,24]. In a mouse study with female offspring, the effects of gestational sleep deprivation were bigger in females that underwent an ovariectomy and lacked female hormones [24]. Future research could investigate whether there is still interaction by sex when the children reach adolescence. Strengths and limitations Our study has several strengths. We were able to test longitudinal mediation in a large number of mother-infant pairs from different countries. By doing this, we were able to test potential mechanisms for the association between gestational sleep deprivation and adiposity. In the mediation analysis, the number of mothers with gestational sleep deprivation and gestational diabetes was low, but we still found a significant mediation effect with the minimal adjustment set. However, these results should be interpreted with caution due to the small sample size. Although our data are observational, the sequence of events and associations over time might be suggestive of causal relationships. All data were collected prospectively, and outcome measurements during childhood were all performed by research staff. Furthermore, we tested the association in a pooled analysis from two cohorts, but we also provide cohort-specific estimates for the benefit of quantifying the heterogeneity between cohorts and plotting the associations. (Table 3 notes: gestational sleep deprivation was defined as 5 or fewer and 6 or fewer hours for the Rhea and ABCD cohorts, respectively; the mediation models, based on SEM, were adjusted for child sex and age at assessment (years); bold-faced text indicates significant associations (p-value < 0.05); OR, odds ratio.) There were several limitations, mostly inherent to the cohorts' study designs. Our exposure variable of gestational sleep deprivation was derived from a self-administered questionnaire, and therefore recall bias and possible under- or overreporting may occur. We measured sleep at two different points during pregnancy, during the third (Rhea) and second (ABCD) trimester, capturing two stages of pregnancy. Effects of sleep duration, as well as sleep duration itself, may vary during pregnancy, and that may, besides other unknown factors, explain the different associations between the two cohorts. Also, the phrasing of the sleep question differed between the cohorts. Therefore, we used different cutoffs in the main analysis, correcting for the resting time during the day that was included in the ABCD study. However, our sensitivity analysis, in which two common cutoffs were used in both cohorts, showed the same associations. We have no details about the timing of sleep during the daytime and nighttime (for example, the effects of nocturnal sleep might be different from daytime naps), and we have no information about gestational weight gain in the ABCD cohort. Moreover, there are important differences in demographics between the two cohorts, causing some heterogeneity in our analysis. There are higher rates of maternal smoking, obesity, gestational diabetes, and cesarean section in the Rhea cohort. The smaller numbers in the Rhea cohort
(for the random effects meta-analysis, n = 661 vs. n = 2,947 for the ABCD cohort) resulted in limited power, which might be one of the reasons for the non-significant findings in this cohort. However, the effect estimates were in the same direction, specifically with regard to the stronger associations in girls. Nevertheless, the random effects meta-analysis indicates low to moderate heterogeneity for most of the outcomes, and the pooled analysis was adjusted for cohort and other relevant covariates. Due to numerical difficulties we were not able to provide a measure of risk (OR or RR) for overweight/obesity; instead we calculated hazard ratios, assuming that the development of overweight/obesity happened at the exact time of the follow-up visit. SEM allowed us to assess multiple potential mediators, but it makes the strong assumption that the relations between all variables are unconfounded. For this reason, we consider the mediation analysis an explorative study and do not claim causality. Lastly, loss to follow-up over the years of childhood caused our analysis to have a lower rate of mothers with short sleep duration in the participant group versus non-participants. We hypothesize that this difference was most likely attributable to higher loss-to-follow-up rates among mothers of non-Greek or non-Dutch origin, as ethnicity was previously shown to be associated with shorter sleep duration in a Dutch population [51], and we corrected our analyses for that. Gestational sleep deprivation and clinical implications Pregnancy is a period where lifestyle interventions are encouraged and parents are more aware of their choices [52]. Healthy gestational sleep has several perinatal benefits, and based on our findings it probably also has positive long-term effects on childhood cardiometabolic health. Primary prevention may be limited to the few socioeconomic factors previously related to sleep deprivation, for example, ethnicity and occupation [53]. But secondary prevention could also have a great impact for mothers with sleep disturbances already present in early pregnancy. Closer monitoring of glucose metabolism and preterm birth might be extra warranted in mothers with sleep deprivation during pregnancy. Although sleep needs may vary by age and gender, both the National Sleep Foundation and the American Academy of Sleep Medicine and Sleep Research Society have recommended 7-9 h of sleep per 24 h for adults [54,55]. In a sensitivity analysis, we found that the associations are stronger for more severe sleep deprivation (≤5 compared to ≤7 h). Under some circumstances sleeping more than 9 h per night might be appropriate too, while for others it is uncertain whether this is associated with health risks. There are no official sleep recommendations for pregnant women, but based on our findings we postulate that sleep deprivation (meaning a sleep duration of less than 6 h) should be avoided at any stage during pregnancy. Future perspectives Future studies should replicate our findings in other populations and at different stages of pregnancy, and should further study the underlying mechanisms. Also, the associations we found for gestational sleep deprivation with child adiposity and blood pressure should be further explored in relation not only to sleep deprivation but also to sleep quality during pregnancy. We tested potential mechanisms with an explorative mediation analysis.
Further research on the effects of gestational sleep deprivation on gestational diabetes and shorter gestational age, and on subsequent childhood metabolic health, is needed to investigate causality and opportunities for prevention. We tested three potential perinatal mediators; however, other potential mediators (e.g. childhood lifestyle and sleep) could exist during gestation and early life, which may warrant further study. There is one published research protocol of a prospective cohort study that investigates the effects of circadian rhythm on birth and infant outcomes, which could replicate the studied associations [56]. Conclusion Our study is the first analysis of the association between maternal sleep duration during pregnancy and later childhood health. We used data from two ethnically and demographically diverse European cohorts and found that gestational sleep deprivation may be associated with increased risk for overweight and higher blood pressure in offspring, up until the age of 11 years, with more pronounced effects in girls than in boys. Gestational diabetes and gestational age partly mediated these effects, pointing to altered glucose metabolism and inflammatory pathways as possible biological mechanisms underlying the observed associations. Supplementary material Supplementary material is available at SLEEP online.
Some open problems in the theory of composites A selection of open problems in the theory of composites is presented. Particular attention is drawn to the question of whether two-dimensional, two-phase composites with general geometries have the same set of possible effective tensors as those of hierarchical laminates. Other questions involve the conductivity and elasticity of composites. Finally, some future directions for wave and other equations are mentioned. Introduction The theory of composite materials has seen a resurgence of interest thanks to the discovery of novel properties and a dramatic rise in our ability to manufacture desired microgeometries: see for instance the review [49] and references therein. Back in the 1980s and 1990s there was also a rapid increase in interest, partly due to the recognition that the solution of optimal design problems often requires composite microstructures in the design. This gave rise to the area of topology optimization, which has had enormous impact, moving into the mainstream of engineering design: see, for example, the book [8]. From a mathematics perspective there were accompanying rapid developments: in our understanding of homogenization, which underlies the use of effective moduli to describe macroscopic responses; in bounds on effective moduli, coupled with the identification of microstructures that attain them; in the theory governing microgeometry-independent exact relations satisfied by effective moduli; and in the discovery of composites with unexpected properties, as surveyed in the books [9,114,21,109,1,72,108,91,39]. Given the recent interest it is perhaps appropriate to draw attention to some of the open problems generated in the mathematical research that is now mostly over three decades old, as well as questions generated by more recent investigations. The problems here are by no means exhaustive. Rather, they are ones I have encountered in my research work and found quite difficult, usually because I have no idea how to solve them. Some are just of theoretical interest, while others should be of interest to both experimentalists and theorists alike. The problems reflect my own research interests, both past and present, and other experts in the field would undoubtedly choose a different set. Many are old outstanding problems, where it is difficult to dig in the hard soil, but some address new topics where the soil is more fertile and it is easier to break ground. Open problems involving quasiconvexification Here we present a selection of open problems that are related to quasiconvexification. For a recent survey of selected results pertaining to quasiconvexity, and the closely related topic of weak lower semicontinuity, see [29,10] and references therein. The focus is largely on two-phase composites, and the corresponding two-well quasiconvexification problems, since these are perhaps of greatest interest in the field of composites (though some effects, such as obtaining negative or unbounded thermal expansion coefficients from materials having only positive thermal expansion coefficients, require at least three phases [57,103]). In this age of 3d-printing it is now relatively easy to manufacture tailored microstructures of one phase plus void that can then be infilled to obtain a two-phase material. One is interested in the range the effective tensors can have as the microgeometry varies over all configurations.
This range is known as the G-closure and provides limits for what one can expect to achieve when one tries to optimize the local response using relatively simple practical microstructures obtained, for example, by topology optimization. The question we explore is whether it suffices to consider only hierarchical laminate geometries rather than all conceivable microstructures. Hierarchical laminate geometries have the advantage that it is relatively easy to calculate their effective properties (see, for example, [107,33], Chapter 9 in [72], and references therein). We start with: Problem 1: Is rank-one convexity equal to quasiconvexity for the two-well problem in two spatial dimensions? Given two self-adjoint positive definite mappings L_1 and L_2 on the space S_m of real 2 × m matrices, equipped with the standard inner product A · B = Tr(A^T B) (2.1), where Tr denotes the trace, and A_1, A_2 ∈ S_m, and given F_1, F_2 ∈ S_m and two reals c_1 and c_2, consider the two-well "energy" W(F) = min{W_1(F), W_2(F)} (2.2), where the W_j(F), j = 1, 2, are the quadratic wells (2.3). The quasiconvexification of W(F) is given by QW(F) = inf ⟨W(F + ∇u)⟩, where the infimum is over all m-component periodic potentials u(x) and the average ⟨·⟩ is over the unit cell of periodicity. (We adopt the convention that the elements of ∇u are {∇u}_ij = ∂u_j/∂x_i.) An energy W_0(F) is said to be rank-one convex if W_0(pF + (1 − p)(F + ab^T)) ≤ pW_0(F) + (1 − p)W_0(F + ab^T) for all real p ∈ [0, 1], all real 2-component vectors a, and all real m-component vectors b. The rank-one convexification of W(F), denoted RW(F), is the highest rank-one convex energy that lies equal to or below W(F) for all F. So the question is whether QW(F) = RW(F) for all choices of m, K_1, K_2, F_1, F_2, c_1, c_2. We will see that this can be reduced to the problem with F_1 = F_2 = 0. Clearly the problem does not change if we add the same constant to c_1 and c_2. So without loss of generality we can assume that c_1 and c_2 are sufficiently large so that the tensors K_1 and K_2 given by (2.6) are positive definite. In terms of these we have a rewriting of the problem in which the inner product is the obvious generalization of (2.1). In the field of composites, problem 1 is equivalent to the following question: Problem 2: For two-phase composites in two spatial dimensions, such that phase 1 occupies a volume fraction f, is the G_f-closure equal to its lamination closure when the fields on the right of the constitutive law have n components, each being the sum of a real 2-component vector and the gradient of a scalar periodic potential, while the fields on the left of the constitutive law also have n components, each having zero divergence, where n is an arbitrary positive integer? The constitutive law takes the form (2.8), where the j^(i)(x), e^(j)(x), and L(x) all have the same periodicity and satisfy (2.9), in which the e^(j)_0 are constant vectors, the V_j(x) are periodic potentials, and χ(x) is the indicator function χ(x) = 1 in phase 1, χ(x) = 0 in phase 2 (2.10), satisfying ⟨χ⟩ = f, in which the angular brackets denote a volume average over the unit cell of periodicity, and L_1 and L_2 are self-adjoint positive definite mappings on S_n. Thus L_1 and L_2 take the block matrix form (2.11), and the linear relation (2.12) between the average fields determines the effective tensor L_*. The G_f-closure, G_f(L_1, L_2), is the closure of the set of values L_* takes as χ(x) ranges over all possible indicator functions satisfying ⟨χ⟩ = f. In other words, the microstructure varies over all possible configurations in which phase 1 occupies a volume fraction f. The lamination closure, G^L_f(L_1, L_2), is the closure of the set of values L_* takes as χ(x) ranges over the indicator functions of multiple-rank laminate materials satisfying ⟨χ⟩ = f.
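The explicit form of the quadratic wells (2.3) is not reproduced above; as a sketch, one plausible form consistent with the data named in the text (L_j, A_j, F_j, c_j) and with the later reduction to F_1 = F_2 = 0 is the following. This is a guessed reconstruction, not the paper's verbatim equation.

% Plausible (assumed, not verbatim) form of the quadratic wells:
\[
  W_{j}(\mathbf{F}) \;=\; \tfrac{1}{2}\,(\mathbf{F}-\mathbf{A}_{j})\cdot
      \mathbf{L}_{j}(\mathbf{F}-\mathbf{A}_{j}) \;+\; \mathbf{F}_{j}\cdot\mathbf{F} \;+\; c_{j},
  \qquad j = 1,2,
\]
% so that L_j would fix the curvature of each well, A_j its center, F_j a
% linear tilt (the term set to zero in the reduction F_1 = F_2 = 0 noted
% above), and c_j its height.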
Multiple-rank laminate materials are hierarchical materials, obtained by an iterative process of lamination in different directions on larger and larger length scales, ideally with an infinite ratio between the length scales at each stage of construction. A rank-1 laminate is just a simple laminate of the phases, which can be regarded as rank-0 laminates. A rank-m laminate is obtained by layering together a rank m − 1 laminate with a laminate of rank m − 1 or less. Remark 2.1 The equivalence of G_f(L_1, L_2) and G^L_f(L_1, L_2) in the case n = 1 has been established by Nesi [94] and Grabovsky [36,37], subject to certain assumptions about L_1 = σ_1^(11) and L_2 = σ_2^(11). (The n = 1 case where L_1 and L_2 do not commute, and L_1 − L_2 is neither positive nor negative semidefinite, is unresolved to my knowledge.) They built on earlier work of Lurie and Cherkaev [60] and Murat and Tartar [92], who treated, using a variational approach known as the translation method, or method of compensated compactness, the case where σ_1^(11) and σ_2^(11) are both proportional to the identity matrix, corresponding to isotropic materials. For n = 2 it is an open question as to whether they are equivalent. In planar elasticity with two, possibly anisotropic, phases with fixed orientations, which is a subcase of the n = 2 case, existing evidence points to them being equivalent. In three-dimensional elasticity one needs microstructures, such as pentamode materials [84], that are stiff with respect to one loading, yet compliant with respect to all other loadings (which span a five-dimensional space), and it is by no means clear that their behavior can be mimicked by hierarchical laminate structures. Remark 2.2 In two spatial dimensions Grabovsky [40] has an example of a manifold M of tensors L that is stable under lamination but not under homogenization. This suggests that by picking anisotropic L_1, L_2 ∈ M one might find a χ(x) such that L_* is not in M, thus establishing that G(L_1, L_2) and G^L(L_1, L_2) differ. However, the analysis showing that M is stable under lamination [38] extends directly to all two-phase composite geometries, as can be seen from [42] once one takes the "reference tensor" L_0 equal to L_2. We conclude that L_* ∈ M. The same analysis applies to any manifold M stable under lamination in any spatial dimension: if L_1, L_2 ∈ M then also L_* ∈ M, for any indicator function χ(x), not just those corresponding to laminate geometries. Remark 2.3 If indeed G(L_1, L_2) and G^L(L_1, L_2) differ for some n and some L_1 ≥ 0 and L_2 ≥ 0, the next questions become: can one identify the minimum value n_0 of n for which they differ for some L_1 and L_2, and given n ≥ n_0 can one identify the set of pairs (L_1, L_2) for which they differ, or for which G_f(L_1, L_2) and G^L_f(L_1, L_2) differ for fixed f? More generally, if one has a composite with k phases, what is the smallest value of n for which G(L_1, L_2, . . . , L_k) and G^L(L_1, L_2, . . . , L_k) differ, or for which G(K_1, K_2, . . . , K_k) and G^L(K_1, K_2, . . . , K_k) differ? A variant of an example of Šverák [106] shows that G(K_1, K_2, . . . , K_7) and G^L(K_1, K_2, . . . , K_7) differ when n = 3 (see section 31.9 of [72]).
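Since hierarchical laminates recur throughout these remarks, here is a minimal numerical sketch of how their effective tensors are assembled, using the standard rank-one lamination formula for 2 × 2 symmetric conductivity tensors; the function name and the specific moduli are illustrative only.

import numpy as np

def laminate(sigA, sigB, f, n):
    """Effective tensor of a simple (rank-1) laminate: phase A (tensor
    sigA, volume fraction f) layered with phase B (tensor sigB), with
    unit layer normal n. Standard two-phase lamination formula."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    d = sigA - sigB
    denom = n @ (((1 - f) * sigA + f * sigB) @ n)
    return f * sigA + (1 - f) * sigB - f * (1 - f) * np.outer(d @ n, d @ n) / denom

s1, s2 = 10.0 * np.eye(2), 1.0 * np.eye(2)    # isotropic phase tensors
rank1 = laminate(s1, s2, 0.5, [1.0, 0.0])     # laminate normal to x1
rank2 = laminate(rank1, s2, 0.5, [0.0, 1.0])  # re-laminate on a larger scale
print(rank1)
print(rank2)

Feeding the rank-1 result back in as a new "phase", as in the rank-2 call above, is exactly the iterative, scale-separated construction described in the text.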
Remark 2.4 In three spatial dimensions it seems quite likely that there are two-phase geometries such that G(L_1, L_2) and G^L(L_1, L_2) differ. To obtain a candidate example, one considers the conductivity equations in the presence of a small magnetic field h = (h_1, h_2, h_3). In a two-phase medium where phase 1 is isotropic while phase 2 is void, these take the form (2.13), where R_H is the Hall coefficient of phase 1 and ρ is its resistivity tensor. Assuming that the microstructure is isotropic or has cubic symmetry, the effective resistivity tensor ρ_* = σ_*^(−1) (if it exists), to first order in h, takes the form (2.14). Numerical results [50,55] and corresponding physical experiments [53] show that in certain microstructures of interlinked tori, arranged to have cubic symmetry, R_H* and R_H can have opposite signs. While it was commonly believed that the sign of the Hall coefficient corresponds to the sign of the charge carrier, these composites provide a counterexample, as they show that the macroscopic Hall coefficient can be opposite in sign to the Hall coefficients of the constituent materials, assuming their Hall coefficients are zero or share a common sign. The argument that the Hall coefficient corresponds to the sign of the charge carrier assumes that the electrons, or holes, travel in straight lines, which of course is not the case in these composite materials. The microstructures were motivated by a three-phase example [16] having cubic symmetry, where it was rigorously shown that the Hall coefficients R_H1, R_H2, and R_H3 of all three isotropic phases can be non-negative while at the same time R_H* is negative. One can explain this [16,55] in terms of the "matrix-valued" electric field E(x), whose three column vectors e_1(x), e_2(x), and e_3(x) each solve the conductivity equations with zero magnetic field (i.e., the same χ(x) and ρ = ρI). Assuming ⟨E⟩ = I, a perturbation argument [14,16] shows that the sign change of the Hall coefficient is related to the fact that the trace of the cofactor matrix of E(x) changes sign, at least in certain regions in the unit cell of periodicity. On the other hand, in any multiple-rank laminate (with ⟨E⟩ = I) Briane and Nesi show that the determinant of E(x) remains positive [18], whereas it does take negative values in certain regions in the interlinked-tori geometries [17]. While they show that the trace of the cofactor matrix of E(x) can change sign in three-phase multiple-rank laminates, it is an open question as to whether it can change sign in two-phase multiple-rank laminates. If it cannot, then the path is clear to establishing that there are three-dimensional two-phase geometries such that G(L_1, L_2) and G^L(L_1, L_2) differ. We add that while in (2.13) the conductivity tensor σ(x) = χ(x)ρ^(−1) is not symmetric, one can perturb the problem slightly so that phase 2 is slightly conducting, and then, using ideas of Cherkaev and Gibiansky [24], make a transformation to an equivalent problem where the tensor entering the constitutive law is real, symmetric, and positive definite (see [70] and section 12.11 of [72]). Also, one can introduce a periodic vector potential v for j − ⟨j⟩ in (2.13), so that j − ⟨j⟩ is expressed in terms of the antisymmetric part of ∇v using the completely antisymmetric Levi-Civita tensor, giving j − ⟨j⟩ = ∇ × v, while on the other hand the Levi-Civita tensor applied to ∇V gives an antisymmetric field that has zero divergence. Then the equations can be manipulated into the same form as (2.8)-(2.11) with real σ. Equivalence between problems 1 and 2 The connection between problems 1 and 2 is implicit in existing results. To see this, we first consider a problem associated with, and in fact equivalent to, problem 2.
This is to characterize the G-closure associated with the equations (2.15), in which J(x) and E(x) satisfy the same constraints as in problem 2, K_1 and K_2 are positive definite and given by (2.6), the indicator function χ(x) is again given by (2.10) but is not subject to the constraint that ⟨χ⟩ = f, θ is a constant scalar, and s(x) is an arbitrary scalar-valued function having the same periodicity as χ(x). The effective tensor K_* is defined by the linear relation (2.16). The G-closure, G(K_1, K_2), is the closure of the set of values K_* takes as χ(x) ranges over all possible indicator functions, with no constraint on the volume fraction. Now when θ = 0, (2.15), when solved for J(x), is exactly the same as (2.8). This implies that K_* takes the form (2.17), where L_* is exactly the same effective tensor associated with problem 2, defined by (2.12). Furthermore, if we assume that L_1 − L_2 is non-singular (by, if necessary, perturbing the problem), then we can find constant fields J(x) = J_0 and E(x) = E_0 that solve (2.15), and thus obtain formulas for V_* and c_*. This is a standard technique in the theory of composites (see, for example, Chapter 5 and in particular Section 5.4 in [72] and references therein). Specifically, (2.15) and (2.16) imply equations for c_* and V_*, and these have the solutions (2.19). So c_* and V_* are determined entirely in terms of L_*, f, and the elements of K_1 and K_2. Conversely, if we know K_*, then from (2.17) we know L_*, V_*, and c_*, and the last equation in (2.19) allows us to determine f. Thus solving problem 2 is equivalent to solving this problem. One is often concerned with the quadratic form associated with K_*, which sometimes may correspond to the energy stored or dissipated in the material. For constant fields E_0 and θ (with E_0 not restricted to be given by (2.19)), standard variational principles [47] show that (2.20) holds. If we are interested in the lowest value of this over all K_* ∈ G(K_1, K_2), normalized with, say, θ = 1, and use an idea of Kohn [56], we get an expression in which W(F) is given by (2.2) and (2.3). So we arrive back at the quasiconvexification of W(F) as in problem 1, with m = n. If χ is restricted to multiple-rank laminate geometries, we arrive back at the rank-one convexification of W(F) (see [2] and section 31.6 of [72]). So problem 1 is solved according to whether or not these two values coincide. To have equality it is sufficient, but not necessary, to have G(K_1, K_2) = G^L(K_1, K_2). On the other hand, we know the sets G(L_1, L_2) and G_f(L_1, L_2) have sufficient convexity (as guaranteed by their stability under lamination) to be completely characterized by their "W-transforms". These generalize the idea of the Legendre transform for characterizing convex sets. First note that a linear operator A on S_n has elements A_ijkℓ such that if the matrix C ∈ S_n has elements C_kℓ then AC has elements (2.23). Introducing the inner product between two linear operators A and B on S_n, the W-transform of G(L_1, L_2) is (2.25), where N and N_⊥ range over all real, positive semidefinite, symmetric operators such that NN_⊥ = N_⊥N = 0. When N_⊥ = 0 and N is not restricted to be positive semidefinite, this is just the standard Legendre transform. That G(L_1, L_2) may be characterized in this way is suggested by results of Cherkaev and Gibiansky [22,23] for particular examples and proved, in general, in [32] (see also [82] and section 30.3 of [72], and references therein).
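For comparison, here is the classical convex-analysis fact that the W-transform generalizes; this is standard material, not the paper's equation (2.25). A closed convex set C of tensors is completely determined by its support function:

\[
  h_{C}(\mathbf{N}) \;=\; \sup_{\mathbf{L}\in C}\, \mathbf{N}\cdot\mathbf{L},
  \qquad
  C \;=\; \bigl\{\mathbf{L} \;:\; \mathbf{N}\cdot\mathbf{L} \,\le\, h_{C}(\mathbf{N})
      \ \text{for all self-adjoint } \mathbf{N}\bigr\}.
\]

In the W-transform the single direction N is replaced by the pair (N, N_⊥) of mutually annihilating positive semidefinite operators, reflecting the weaker, lamination-stable notion of convexity that G-closures possess.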
Writing the relevant quadratic form as two sums, where some of the E_k or J_k could be zero and, without loss of generality, assuming a suitable ordering of the terms, each of the terms in the first sum can be expressed in variational form, similar to (2.20), while the remaining terms in the second sum can be expressed in the dual variational form, in which the infimum is over all periodic functions v_k and R_⊥ is the matrix for a 90° rotation given by (2.31). Let us introduce a constant superfield E_0 and supertensors L_1 and L_2, in terms of which one considers infima of the form inf ⟨(E_0 + ∇u) · L_j(E_0 + ∇u)⟩ (2.33), in which the infimum is over all periodic potentials u. Thus finding G(L_1, L_2) is reduced to a set of two-well quasiconvexification problems, each indexed by the value of h = 0, 1, . . . , n and with m = n^2. The problem of finding G_f(L_1, L_2) can be handled in a similar way [32]. Instead of (2.25) one considers a transform in which the constant c acts as a Lagrange multiplier for the volume fraction f = ⟨χ⟩. One easily sees that this again reduces to a two-well quasiconvexification problem. 3 Some open problems related to the effective conductivity as a function of the component conductivities When the block entries of L_1 and L_2 are all proportional to the 2 × 2 identity matrix I, the coupled problem decouples. To see this, we start by following Straley [105] and Milgrom and Shtrikman [66] (see also Chapter 6 in [72] and references therein) and introduce a non-singular matrix W having block entries proportional to I. Rewriting (2.8) in terms of the fields transformed by W reduces the problem to a set of uncoupled conductivity problems, and the associated effective tensor L′_* = W^T L_* W is given in terms of σ_*(σ), the effective conductivity tensor associated with the conductivity equations in which V(x) is a periodic potential; the linear relation between the average current and the average electric field defines the function σ_*(σ). Allowing for complex values of σ, the properties of this function have been studied in [12,69,35]. We remark that complex values of σ, and hence σ_*, or, equivalently, complex values of the dielectric constants of the phases and hence of the effective dielectric constant ε_*, have a physical significance for electromagnetic waves propagating through the structure when the wavelengths and attenuation lengths of the waves in each phase are much larger than the microstructure. This is called the quasistatic regime. In particular, Im ε_* is related to the energy absorption in the composite, and hence is positive semidefinite when the dielectric constants of the phases are non-negative. Reflecting this, the function σ_*(σ) satisfies the Nevanlinna-Herglotz type property Im σ_*(σ) ≥ 0 when Im σ > 0 (3.8). Additionally, the function is analytic in σ except along the negative real σ-axis, satisfies the constraints (3.9), and, in two dimensions, satisfies the Keller-Dykhne-Mendelson relationship (3.10) [51,31,65], where R_⊥, with transpose R_⊥^T, is the matrix for a 90° rotation given by (2.31). Conversely, any function satisfying these properties can be approximated arbitrarily well by a rational function that corresponds to the effective conductivity function σ_*^L(σ) of a hierarchical laminate geometry [81] (see also Section 18.5 in [72]). Roughly speaking, given this rational function one can retrieve information about the last two layerings in the corresponding laminate by either setting σ = 0 or σ = ∞. One strips this last layering away, and accordingly modifies the associated conductivity function. Then one makes the opposite choice, σ = ∞ or σ = 0, respectively, and proceeds by induction, until one is left with purely phase 1 or purely phase 2. This establishes that the lamination closure and the G_f-closure coincide when the block entries of L_1 and L_2 are all proportional to the 2 × 2 identity matrix I.
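Though the precise form of (3.10) is not written out above, one common way of stating the Keller-Dykhne-Mendelson relation is σ_*(σ_1, σ_2) R_⊥^T σ_*(σ_2, σ_1) R_⊥ = σ_1 σ_2 I. A quick numerical sanity check of this assumed form on a simple laminate, in Python:

import numpy as np

def laminate_tensor(s1, s2, f):
    """Effective tensor of a simple laminate of isotropic conductivities
    s1 (fraction f) and s2, layer normal along x1."""
    h = 1.0 / (f / s1 + (1 - f) / s2)   # harmonic mean, normal direction
    a = f * s1 + (1 - f) * s2           # arithmetic mean, in-plane direction
    return np.diag([h, a])

s1, s2, f = 7.0, 2.0, 0.3
Rp = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation matrix
lhs = laminate_tensor(s1, s2, f) @ Rp.T @ laminate_tensor(s2, s1, f) @ Rp
print(lhs)                 # approximately s1*s2 times the identity
print(s1 * s2 * np.eye(2))

The rotation swaps the normal and tangential directions of the phase-interchanged laminate, so the harmonic and arithmetic means pair off and the product collapses to σ_1 σ_2 I, as the relation predicts.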
Explicit expressions for the G_f-closure were given in the case n = 1 by Lurie and Cherkaev [60] and Murat and Tartar [92] (extended to three dimensions in [62,92]), in the case n = 2 by Cherkaev and Gibiansky [22], and for general n, using the analytic properties of σ_*(σ), by Clark and Milton [28]. It is an open question as to whether the G_f-closure for general n can be obtained via the translation method. One can speculate that there should be some sort of inductive procedure using the translation method, but it is difficult to see how to formulate this. In three dimensions one would like to address the analogous question, and focusing on isotropic composites this becomes: Problem 3: For three-dimensional isotropic composites, each having an effective conductivity σ_*I and being built from two isotropic materials having conductivities σI and I, can one characterize all possible conductivity functions σ_*(σ)? The conductivity function σ_*(σ) = σ_*(σ)I still satisfies (3.8) and (3.9), but in place of (3.10) other constraints have been established [19,12], and additionally [69,6,93,112,113] an inequality holds for all real positive σ (satisfied as an equality for multicoated sphere assemblages). The question is whether there exist additional constraints satisfied by σ_*(σ) and, if so, to identify them. An associated problem is: Problem 4: For three-dimensional isotropic composites of two isotropic phases, are all possible conductivity functions σ_*(σ) achievable by multiple-rank laminate microstructures and, if so, does it suffice to consider laminate microstructures where one laminates only in mutually orthogonal directions? We remark that it does not suffice (even in two dimensions) to consider laminate microstructures where one laminates in mutually orthogonal directions if one considers anisotropic composites of two isotropic phases, since if σ is complex the real and imaginary parts of σ_*(σ) do not necessarily commute, while they do commute if one laminates in mutually orthogonal directions. These results motivate one to consider periodic composites of two anisotropic phases where the conductivity tensor takes the form σ(x) = χ(x)σ_1 + (1 − χ(x))σ_2, where the indicator function χ(x) is given by (2.10) and σ_1 and σ_2 are the 2 × 2 matrix-valued conductivity tensors of the two phases. The associated effective conductivity tensor is found by looking for current fields j(x) and electric fields e(x), with the same periodicity as the composite, that solve j(x) = σ(x)e(x), ∇ · j = 0, e = −∇V(x) (3.14). In these equations V(x) is the electric potential, and the volume average, ⟨e⟩, of the electric field e(x) is prescribed. Here and later the angular brackets ⟨·⟩ denote an average over the unit cell. The average current field ⟨j⟩ depends linearly on ⟨e⟩, and it is this linear relation, ⟨j⟩ = σ_*⟨e⟩ (3.15), that determines the effective tensor σ_*. We arrive at problem 5, again closely related to problems 1 and 2: Problem 5: For two-dimensional anisotropic composites of two anisotropic phases, are all possible conductivity functions σ_*(σ_1, σ_2) achievable by multiple-rank laminate microstructures? Some progress in characterizing the possible conductivity functions σ_*(σ_1, σ_2) has been made by finding suitable representations of the underlying operators so that they satisfy the required algebraic properties [74]. Once one has these representations one can, in principle, determine not only σ_*(σ_1, σ_2) but also L_*(L_1, L_2) for all real positive definite L_1 and L_2 taking the block matrix form (2.11).
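Returning to the scalar conductivity function σ_*(σ) of Problems 3 and 4, a concrete classical instance is the Hashin-Shtrikman coated-sphere assemblage (phase of conductivity σ in volume fraction f, embedded in a matrix of conductivity 1). Its closed form is standard in the literature, though not written out in the text above; the sketch below checks numerically that it is rational with a single pole on the negative real σ-axis and exhibits the Herglotz property (3.8).

import numpy as np

def hs_coated_spheres(sigma, f):
    """Effective conductivity of the coated-sphere assemblage:
    inclusion conductivity sigma (fraction f) in a matrix of conductivity 1."""
    return 1.0 + 3.0 * f * (sigma - 1.0) / (3.0 + (1.0 - f) * (sigma - 1.0))

f = 0.4
pole = 1.0 - 3.0 / (1.0 - f)        # zero of the denominator: sigma = -4
print("pole at sigma =", pole)       # on the negative real axis, as required
for sigma in [2.0 + 1.0j, 0.5 + 0.2j, 10.0 + 0.01j]:
    s_star = hs_coated_spheres(sigma, f)
    print(sigma, "->", s_star, "Im >= 0:", s_star.imag >= 0)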
Thus if one could show a direct correspondence between the operator representations for an arbitrary χ(x) and the operator representations for multiple-rank laminate microstructures, one would have resolved problem 2, establishing that the G_f-closure equals its lamination closure. Such a correspondence between operator representations was used in [27,26] to establish that in two dimensions the effective conductivity function σ_*(σ_0) of any polycrystal, with conductivity of the form (3.16) and σ_* given by (3.14) and (3.15), corresponds to the conductivity function of a laminate microstructure. A question of obvious importance is to identify those two-phase microstructures that absorb as much electromagnetic energy as possible, no matter what the direction of the incident radiation. In the quasistatic limit, where the wavelength of the radiation is much larger than the size of the unit cell of periodicity, the electromagnetic equations decouple into separate electric equations and magnetic equations involving complex fields and complex electrical permittivities and complex magnetic permeabilities, respectively. Each decoupled equation is equivalent to a conductivity equation, with complex conductivities. Four decades ago bounds were derived on the effective complex electrical permittivity (or equivalently the complex magnetic permeability, or complex conductivity) of isotropic composites of two isotropic phases, mixed in fixed proportions [13,68]. The bounds confine the effective electrical permittivity to a lens-shaped region of the complex plane bounded by two circular arcs. The problem becomes one of identifying microstructures that have the maximum imaginary part of the effective complex electrical permittivity. In two dimensions these are assemblages of doubly coated disks (corresponding to the transverse electrical permittivity of doubly coated cylinders), as they attain the bounds [69]. In three dimensions new bounds [54] show that assemblages of doubly coated spheres provide one bounding circular arc. The previously known second bounding arc [13,68] corresponds to conductivity functions σ_*(σ) that have just one pole at a finite negative real value of σ. Originally just five microgeometries were identified that correspond to five points on the circular arc [69]. Depending on the material moduli, these can have the maximum possible absorption. Now three additional multiple-rank laminate geometries have been identified with effective electrical permittivities lying on the arc, and which can have the maximum possible absorption [54]. This leads to the following question: Problem 6: Are there other geometries with isotropic effective permittivities that lie on the arc? There is also a close connection with finding isotropic geometries that attain bounds on the complex effective bulk modulus [34], and which can provide the maximum possible absorption under oscillatory hydrostatic loadings, and that attain bounds coupling the real effective moduli of two conductivity-type problems that may separately correspond to, say, magnetic, thermal, particle diffusion, or fluid permeability problems [11,12]. Another question is the following one: Problem 7: Can any of these discovered geometries, having maximum absorption, be replaced by simpler ones? In particular, can the assemblages of doubly coated disks or coated spheres be replaced by periodic ones with only one inclusion per unit cell?
In the case of assemblages of coated spheres (isotropic composites having the minimum and maximum conductivities for given real positive conductivities of the two phases, mixed in given proportions) equivalent periodic geometries having only one inclusion per unit cell are known [110,41,59].

4 Bounds on the elastic moduli of an elastic material with voids, and the ultimate auxetic material in this class of materials

Characterizing the possible elasticity tensors of anisotropic composites is a daunting task. Elasticity tensors have 18 invariants in three-dimensional space and 5 invariants in two dimensions, and correspondingly the set of all possible elasticity tensors built from two isotropic phases in prescribed volume fractions is represented by a set in an 18- or 5-dimensional space, or a 21- or 9-dimensional one if we include the bulk and shear moduli of both phases. The difficulty of this is indicated by the observation that a distorted hypercube in 18 dimensions has 2^18 ≈ 260,000 vertices and 18 numbers are needed to specify the coordinates of each, bringing the total to about 4.7 million numbers, just to specify an 18-dimensional distorted cube. The G-closure has only been completely characterized, and consists of all positive definite elasticity tensors, in the limit as one phase becomes arbitrarily compliant while the other phase becomes arbitrarily stiff [84]. A lot of progress has been made in the case where one phase is void, while the other is isotropic with fixed positive elastic moduli [82] (or when a rigid material replaces the void phase [85]). Still, there are parts of the G-closure that have not been mapped. We arrive at

Problem 8: Can one complete the characterization of the G-closure for a void (or rigid) phase mixed with an isotropic elastic phase?

It may be the case that the necessary insight for progressing further, at least in the case that one phase is void, comes from a consideration of the possible pairs of the effective bulk modulus, κ*, and effective shear modulus μ*, of isotropic composites of an elastic material, having bulk and shear moduli κ and μ, and void. One has the elementary bounds [47]:

0 ≤ κ* ≤ κ, 0 ≤ μ* ≤ μ. (4.1)

Naturally the void has minimum effective bulk and shear moduli, both being zero, and the pure elastic phase has maximum effective bulk and shear moduli. Also one can construct composites with (κ*, μ*) arbitrarily close to (κ*, 0) for all positive κ* < κ, and arbitrarily close to (κ, μ*) for all positive μ* < μ [82,98,83]. On the other hand, the question remains as to what microstructures have high effective shear modulus and low effective bulk modulus. We are led to

Problem 9: The bounds (4.1) imply μ* − cκ* ≤ μ for all c > 0. Can this inequality be improved, in 2 and/or 3 dimensions, for a range of c > 0? Alternatively, can one construct composites of an elastic phase with void with (κ*, μ*) arbitrarily close to (0, μ)?

A related question is

Problem 10: Identify, for given c > 0, in 2 and/or 3 dimensions, isotropic microstructures of an elastic phase with void that have the largest possible value of μ* − cκ* (or a sequence of isotropic microstructures with moduli such that μ* − cκ* converges to its largest possible value).

When c is extremely large, this amounts to identifying isotropic microstructures that have the largest possible value of μ* subject to the constraint that κ* is arbitrarily close to zero. This is what one may call the ultimate auxetic material within the class of materials built from an isotropic elastic phase with voids.
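For orientation on what quantitative restrictions beyond (4.1) look like, the following sketch evaluates the classical three-dimensional Hashin–Shtrikman upper bounds for an isotropic elastic phase containing voids (the corresponding lower bounds are zero, since the void phase has vanishing moduli). The formulas are the textbook HS expressions, quoted here as an assumption, and are generally weaker than the bounds discussed in the next part of the text:

```python
def hs_upper_bounds(kappa, mu, p):
    """Hashin-Shtrikman upper bounds on the effective bulk and shear
    moduli of a 3D isotropic elastic phase (kappa, mu) containing a
    volume fraction p of voids; obtained from the standard two-phase
    HS formulas by setting the softer phase's moduli to zero."""
    k_up = kappa + p / (-1.0 / kappa
                        + 3.0 * (1.0 - p) / (3.0 * kappa + 4.0 * mu))
    g_up = mu + p / (-1.0 / mu
                     + 6.0 * (1.0 - p) * (kappa + 2.0 * mu)
                       / (5.0 * mu * (3.0 * kappa + 4.0 * mu)))
    return k_up, g_up

# A stiff phase with 30% porosity:
print(hs_upper_bounds(kappa=2.2, mu=1.0, p=0.3))
# The bounds interpolate correctly: (kappa, mu) at p = 0 and (0, 0) at p = 1.
```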
Auxetic composites have a negative Poisson's ratio, so that they fatten when they are pulled, corresponding to a ratio κ*/μ* < 2/3. When one seeks materials built from an isotropic elastic phase with void that have Poisson's ratios close to the limiting value of −1, and thus with κ*/μ* close to zero, it is generally the case that both κ* and μ* are very small, not just κ*. This is a feature of auxetic composites built from rotating elements [71,99,44] and is less than ideal, as one wants to retain shear stiffness. In two dimensions one can construct a candidate for the title of the ultimate auxetic material as follows. One first takes the elastic phase and slices it into slabs of uniform thickness with the interfaces perpendicular to the x_1-axis. The slabs are separated by microstructured layers, very thin compared to the slab thickness. The microstructured layers are such that their only easy mode of deformation is compression of the layer in the direction x_1. The thin microstructured layers may, for example, contain the third rank laminate material with a herringbone structure depicted in figure 13 of [71] or in the second subfigure of figure 8 in [82]. The macroscopic constitutive relation of the sliced material separated by these microstructured layers relates the σ_ij, the Cartesian components of the average stress, to the ϵ_ij, the Cartesian components of the average strain. The effective elastic moduli are given by (4.3), where ε is a small parameter, reflecting the ease of the easy mode of compression in the x_1-direction, and the appearance of E = 4κμ/(κ+μ) reflects the fact that the effective Young's modulus for compression in the x_2-direction is approximately the same as for the pure elastic phase, namely E. We now treat this material as a crystal and construct from it the polycrystal with the largest possible effective shear modulus μ* and smallest possible effective bulk modulus κ*. According to the bounds and laminate constructions in [5], these extremal moduli are known explicitly; substituting (4.3) in them, and taking the limit ε → 0, gives (4.5). The formula for μ* has the required invariance property that if 1/μ and −1/κ are shifted by the same constant, then 1/μ* is shifted by this constant too [61,25]. Due to this invariance we may assume, without loss of generality, that the initial elastic phase is incompressible (1/κ = 0), so that (4.5) implies μ* = 4μ/5. The question is then:

Problem 11: Is 4μ/5 the largest possible value of μ* for a two-dimensional elastic material with voids, given that κ* = 1/κ = 0?

From a practical standpoint the answer to this question is moot, as not only are such multiple rank laminates impossible to build and subject to buckling, but also the linear elastic moduli are largely irrelevant under finite but small deformations, as the microstructured layers will undergo large deformations relative to their thickness. Ideally one wants to address

Problem 12: Can one obtain bounds that correlate the possible compressive and shear deformations of composites when these deformations are not infinitesimal?

Returning to the theoretical problem of finding the ultimate auxetic material, one could in principle use a similar construction in three dimensions. However, the barrier is that the polycrystals having the largest μ* with κ* = 0 have not yet been identified. Thus one arrives at

Problem 13: What are the possible (κ*, μ*)-pairs for three-dimensional isotropic elastic polycrystals (composites built from a single crystal in various orientations)?
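For reference, the relation between the moduli pair and Poisson's ratio implicit in the statements above is the standard three-dimensional isotropic one,

\[
\nu \;=\; \frac{3\kappa_* - 2\mu_*}{2(3\kappa_* + \mu_*)},
\]

so that ν < 0 exactly when κ*/μ* < 2/3, and ν → −1 as κ*/μ* → 0; this is why the ultimate auxetic is sought by driving κ* to zero while keeping μ* as large as possible.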
The bounds of Hill [47] are optimal for κ* [7], but improved bounds for μ* or (κ*, μ*) pairs are lacking. Hashin and Shtrikman obtained improved bounds on μ* [46], but only under additional assumptions about crystal orientations that are not generally valid. For conductivity the analogous problem has been solved [101,6,95], but the G-closure containing all possible effective conductivity tensors of anisotropic polycrystals has not yet been fully mapped. More generally, moving back to isotropic composites of two isotropic elastic phases, one possibly rigid or void, the tightest known bounds on the possible (κ*, μ*)-pairs are those of Cherkaev and Gibiansky [23], in two dimensions, and those of Berryman and Milton [15], in three dimensions. It seems highly likely that these bounds are not optimal. Gal Shmuel and I are making progress on a nontrivial route for improving the three-dimensional bounds using the "translation method" approach (see Chapters 24 and 25 of [72] and references therein) used by Cherkaev and Gibiansky, but even so these improved bounds are unlikely to be optimal. Thus we come to

Problem 14: Can one obtain improved bounds on the elastic moduli pairs of isotropic composites of two isotropic elastic phases, and ultimately find the optimal ones?

Numerical explorations of the possible (κ*, μ*)-pairs have been made, for example in [4,98]. From a practical viewpoint such numerical explorations are probably more useful than the theoretical developments. On the other hand, it is difficult to numerically explore multiscale structures that may be necessary to obtain desired extreme responses, such as in resolving Problem 11.

5 Some future directions for wave and other equations

An impressive body of research addresses the problem of bounding the response of bodies to electromagnetic or other waves, and addressing limitations to how one can manipulate these waves. A few examples include the results in [104,45,100,67,102] and references therein. There are many problems to be addressed, and new approaches are needed to improve existing bounds, or to reveal novel ones. A framework suited to most linear equations in physics [76,77,78,79], including wave and diffusion equations, is to express them in the form

J(x) = L(x)E(x) − s(x), with J ∈ J, E ∈ E, (5.1)

where the first equation is the constitutive law, with the tensor L(x) representing the local material properties and s(x) the source term, while E and J are orthogonal spaces embodying the differential constraints on the fields. Here x represents a point in space, or in space-time with x_0 representing time. Scattering problems can also be expressed in this form [73] by incorporating the fields "at infinity" appropriately. The analog for quadratic forms of quasiconvexity is then Q*-convexity: a quadratic form f(P) is Q*-convex if f(E) ≥ 0 for all E ∈ E. Q*-convex functions allow one to place bounds on the spectrum of the operator relevant to the problem [75,80]. The subject of Q*-convexity remains to be explored, and simple examples of Q*-convex functions need to be found for the various equations, beyond quasiconvex functions and those discovered for the Schrödinger equation (Sections 13.6 and 13.7 of [91]). For wave and diffusion equations it seems likely that they will provide a powerful tool for addressing other bounding problems, and this provides an avenue for future work. In connection with this, variational principles have been developed for acoustic, elastic, and electromagnetic equations at constant frequency in lossy materials [88,90].
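In the most familiar special case, conductivity, the space E consists of gradients of periodic scalar potentials, so the Fourier modes of a field in E are parallel to the wavevector. Quasiconvexity of a quadratic form can then be tested numerically, as in the hedged sketch below (the function name is ours; for Q*-convexity one would replace this subspace by the space E appropriate to the equation at hand):

```python
import numpy as np

def nonneg_on_gradients(T, trials=100000, tol=1e-12, seed=0):
    """Randomized test of whether f(E) = E . T E is nonnegative on the
    subspace of gradient fields: for periodic scalar potentials the
    Fourier modes of E are parallel to k, so it suffices to check
    k_hat . T k_hat >= 0 over unit vectors k_hat. A passing result is
    numerical evidence, not a proof."""
    rng = np.random.default_rng(seed)
    k = rng.normal(size=(trials, T.shape[0]))
    k /= np.linalg.norm(k, axis=1, keepdims=True)
    vals = np.einsum('ij,jk,ik->i', k, T, k)
    return vals.min() >= -tol

# The 90-degree rotation in 2D: f(E) = E . R_perp E vanishes identically,
# the classical null Lagrangian used in translation-method arguments.
R_perp = np.array([[0.0, -1.0], [1.0, 0.0]])
print(nonneg_on_gradients(R_perp))  # True
```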
These are the direct analogs of those of Gibiansky and Cherkaev [24] that have proved very powerful, in conjunction with the use of quasiconvex functions, for obtaining bounds on the quasistatic response of composites: examples include bounds on effective complex electrical permittivities (Section 22.6 of [72] and [54]) and bounds on complex bulk moduli [34]. So one expects there should be useful bounds resulting from these variational principles for wave equations in lossy media. Recently it has been discovered that associated with exact relations for composites, as reviewed in Chapter 17 of [72] and the book [39], are exact relations satisfied by the infinite-body Green's function in certain inhomogeneous media, and boundary field equalities [87]. Boundary field equalities are exact identities satisfied by the fields at the boundary of the body, given that the fields in the interior of the body satisfy some constraints that do not uniquely determine the interior fields in terms of their boundary values. A classical example is that a field with zero divergence has zero net flux through the boundary. The theory of these exact relations for the Green's function and boundary field equalities extends to wave and diffusion equations [87], or more generally to equations expressible in the form (5.1), but examples, and in particular useful examples, need to be generated.

Another topic to be explored is that of neutral inclusions for wave equations. For static and quasistatic problems there are many studies of neutral inclusions (see, for example, Section 7.11 of [72], the review [48], and references therein). These are inclusions that one can insert in a homogeneous medium without disturbing the surrounding fields, provided these fields fall into an appropriate class. Thus, for example, one may obtain neutrality for a single applied uniform field, for any uniform field, or for any applied field satisfying the underlying equations. For conductivity, or equivalently for the dielectric problem, coated ellipsoids can be neutral and invisible to any uniform field [52]. In two dimensions there are other shaped inclusions that can be neutral to a uniform field in a specified direction [89]. Coated dielectric cylinders, where the core, coating, and surrounding medium have dielectric constants of 1, −1 + iδ, and 1, become neutral and hence invisible to large classes of fields in the limit δ → 0 [97], and can cloak sources and objects [86,96]. Transformations allow one to obtain other inclusions that are neutral and thus invisible to any exterior field, and also cloak objects [43]. The transformation approach also yields neutral inclusions that are invisible to constant frequency electromagnetic waves [30]. Even appropriately coated spheres can be invisible in the far field when the incident wave is planar [3]. Quite simple inclusions have been found that are neutral and hence invisible to a single incident planar electromagnetic wave [111,58]. One, possibly difficult, research direction is to explore whether there are other simple geometries, not obtained from a transformation approach, that are invisible to one or more incident plane waves. Most analysis of wave equations in lossy media has been done at constant frequency, which makes sense as this avoids convolutions in time.
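As a minimal worked instance of neutrality in the conductivity setting, a coated disk in two dimensions is invisible to a uniform applied field precisely when the matrix conductivity matches the effective conductivity of the coated-disk (Hashin) assemblage. The sketch below evaluates that matching condition; the function name and numbers are illustrative:

```python
def neutral_matrix_conductivity(sigma_core, sigma_coat, f):
    """Matrix conductivity for which a 2D coated disk is neutral to a
    uniform field, with f = (r_core / r_coat)**2 the core area fraction
    within the inclusion: the classical coated-cylinder formula.
    Limit checks: f -> 0 gives sigma_coat, f -> 1 gives sigma_core."""
    num = sigma_core + sigma_coat + f * (sigma_core - sigma_coat)
    den = sigma_core + sigma_coat - f * (sigma_core - sigma_coat)
    return sigma_coat * num / den

# A poorly conducting core hidden by a well-conducting coat:
print(neutral_matrix_conductivity(0.2, 3.0, f=0.5))  # ~1.174
```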
However, recent work on bounds in the time domain [63,64] shows that it is possible for the temporal response of a two-phase mixture to be untangled at specific times when the applied field has an appropriately tailored dependence on time. This shows it may be productive to depart from focusing on bounds at constant frequency, and to consider bounding responses as a function of time. Beyond the analytic approach used in these papers, the variational approach of Carini and Mattei [20] may be helpful if one can modify it to obtain bounds at each instant in time, rather than bounding the response over an interval of time.
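As a toy illustration of the time-domain viewpoint (the kernel, mode positions, and weights below are ours and are not taken from [63,64]), one can model the response of a two-phase mixture as the convolution of the applied field with a sum of decaying relaxation modes, and then ask at which instants the response discriminates between microstructures:

```python
import numpy as np

def mixture_response(t, applied, taus, weights, dt):
    """Convolve an applied field with a toy relaxation kernel
    K(t) = sum_i w_i exp(-t / tau_i), a crude stand-in for the pole
    decomposition of a two-phase mixture's temporal response."""
    kernel = sum(w * np.exp(-t / tau) for tau, w in zip(taus, weights))
    return dt * np.convolve(applied, kernel)[: len(t)]

dt = 0.01
t = np.arange(0.0, 10.0, dt)
pulse = np.sin(2.0 * np.pi * t) * np.exp(-t)   # a tailored applied field
resp = mixture_response(t, pulse, taus=[0.5, 2.0], weights=[0.7, 0.3], dt=dt)
print(resp.max())
```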
Geometric properties of some linear operators defined by convolution

R. AGHALARY, A. EBADIAN AND S. SHAMS

Abstract. Let A denote the class of normalized analytic functions in the unit disc U, and let P_γ(α, β) consist of the functions f ∈ A satisfying condition (1.1) below. In the present paper we shall investigate the integral transform V_{λ,α}(f), where λ is a non-negative real-valued function normalized by ∫_0^1 λ(t) dt = 1. Actually we aim to find conditions on the parameters α, β, γ, β_1, γ_1 such that V_{λ,α}(f) maps P_γ(α, β) into P_{γ_1}(α, β_1). As special cases, we study various choices of λ(t) related to classical integral transforms.

Introduction and Definitions

Let A denote the class of functions of the form

f(z) = z + Σ_{n=2}^∞ a_n z^n

which are analytic in the open unit disc U = {z ∈ C : |z| < 1}. For β < 1, α ≥ 0 and γ ≥ 0, let P_γ(α, β) denote the class of all analytic functions f in A such that

ℜ{ e^{iη} [ (1 − γ)(f(z)/z)^α + γ f′(z)(f(z)/z)^{α−1} − β ] } > 0, z ∈ U, for some η ∈ R, (1.1)

where the power is taken to be the principal value. Some properties of subclasses of this class are known in the literature. It is obvious that P_1(α, 0) with η = 0 is the subclass of Bazilevic functions, which are known to be univalent in U. We refer the reader to the works of Singh [12], Liu [7], and Ding et al. [4] for more information on the subclasses of P_γ(α, β).

The familiar hypergeometric function F(a, b; c; z), defined by the series

F(a, b; c; z) = Σ_{n=0}^∞ (a, n)(b, n)/((c, n) n!) z^n,

is analytic in the unit disc U. Here (a, 0) = 1 for a ≠ 0, and (a, n) = a(a + 1)···(a + n − 1) is the shifted factorial function. We use two different representations for this function (see [1], [6] for more details).

We recall that the operator V_{λ,α}(f) contains some well-known operators, such as the Libera, Bernardi, and Komatu operators, as special cases. This operator has been studied by a number of authors for various choices of λ(t) (see e.g. [1], [3], [6], [8]).

To prove our main results we need the following lemma.

Then the assumption f ∈ P_μ(α, β) means that ℜ(e^{iη}(F(z) − β)) > 0 for some η ∈ R. After some algebraic calculations, and in view of (2.1), we observe that (2.3) holds. Then we can write G in a form which, by Lemma 1.1, implies G ∈ P_1(α, γ), and this will complete the proof. Therefore, it suffices to verify the inequality (2.2). Using an identity (which can be checked by comparing the coefficients of z^n on both sides), and in view of (2.3), the stated condition on β shows that the right-hand side of the last expression is 1. To prove the sharpness, let f ∈ P_μ(α, β) be the function defined by (2.4). Using a series expansion, the given value of β in the theorem, and (2.5), we obtain that the result is sharp.

We note that by putting α = 1 in Theorem 2.1 we obtain the result of Barnard et al. [3], and upon setting μ = 1 in Theorem 2.1 we deduce the following result. The value of β is sharp. The special case of Corollary 2.1 has been obtained by Fournier and Ruscheweyh [5].

Theorem 2.2. Let α > 0, γ < 1, 0 ≤ μ ≤ 1 be given, and define β = β(γ) by the corresponding relation. Applying the assumption and Lemma 1.1 to (2.7), we obtain the result. The proof of sharpness follows much the same method as in the proof of Theorem 2.1, for the function defined by (2.4), and we omit the details. The constant β is sharp.

Theorem 2.3. Let H_{a,b,c,α} be the convolution operator defined via the hypergeometric function, and suppose that f ∈ P_0(α, β). Then we have H(f) ∈ P_γ(α, β_1). The result is sharp.
Here, using a relation (which may be verified by comparing the coefficients of z^n on both sides) and the integral representation (1.2), M_1(z) has an integral form with a kernel which is clearly nonnegative for 0 ≤ t ≤ 1. Now, by applying Lemma 1.1, we obtain the result. Finally, to prove the sharpness, let the functions f and M_1 be chosen appropriately, and the desired conclusion follows.

Putting a = 1 in Theorem 2.3 we get the following result. Let h(z) be the function given by the corresponding convolution, and suppose that f ∈ P_0(α, β). Then we have h ∈ P_γ(α, β_1), as is easy to see by comparing coefficients on both sides. This operator, in the special case α = 1, was introduced in [11] and has been studied by a number of authors [1], [2], [8]. Because of the symmetry, we may assume b > a if b ≠ a.

Theorem 2.4. Let 0 < γ, α ≥ γ, −1 < a < 0, b > a and f ∈ P_0(α, β_1). Then G defined by (2.9) is in P_γ(α, β_2).

Proof. Let b > a and G be defined by (2.9). Differentiating both sides of (2.9) we get (2.10), and from (2.9) we also have (2.11). Now, combining the relations (2.10) and (2.11) and using a direct integration, we obtain (2.12). Since t^a > t^b and bt^b − at^a > 0 for −1 < a < 0 and b > a, we have ℜM(z) > M(−1), and the result follows by applying Lemma 1.1 to (2.12). Now, for the case b = a, by taking the limit as b → a in the previous case we obtain the result.

Finally, our next result deals with a generalization of the Komatu operator [9]. For p > 1 and −1 < a ≤ α/γ − 1, we conclude that ℜM(z) > M(−1), and the result follows as in the proof of Theorem 2.4.
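Since the operators studied here all arise from admissible weights λ(t), a concrete numerical check is easy to set up. As a hedged illustration (the α = 1 form of the transform and all names and values below are our assumptions for this sketch), the following Python code evaluates V_λ(f)(z) = ∫_0^1 λ(t) f(tz)/t dt with the Bernardi weight λ(t) = (1 + c)t^c, which satisfies the normalization ∫_0^1 λ(t) dt = 1; c = 1 gives the Libera operator.

```python
import numpy as np

def bernardi_transform(f, z, c=1.0, m=4000):
    """Midpoint-rule evaluation of V_lambda(f)(z) = int_0^1 lambda(t)
    f(t z) / t dt with the Bernardi weight lambda(t) = (1 + c) t**c,
    normalized so that int_0^1 lambda(t) dt = 1."""
    t = (np.arange(m) + 0.5) / m
    lam = (1.0 + c) * t**c
    return np.mean(lam * f(t * z) / t)

f = lambda w: w / (1.0 - w)   # a normalized analytic function in U
print(bernardi_transform(f, 0.3 + 0.2j, c=1.0))
```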
Quality assessment of patient leaflets on misoprostol-induced labour: does written information adhere to international standards for patient involvement and informed consent? Objectives The need for thorough patient information is increasing as maternity care becomes more medicalised. The aim was to assess the quality of written patient information on labour induction. In most Danish hospitals, misoprostol is the first-choice drug for induction in low-risk pregnancies. Misoprostol has been associated with adverse side effects and severe outcomes for mother and child and is not registered for obstetric use in Denmark. Setting Secondary care hospitals in Denmark. Data Patient information leaflets from all hospitals that used misoprostol as an induction agent by June 2015 (N=13). Design Patient leaflets were evaluated according to a validated scoring tool (International Patient Decision Aid Standards instrument, IPDAS), core elements in the Danish Health Act, and items regarding off-label use and non-registered medication. Two of the authors scored all leaflets independently. Outcome measures Women's involvement in decision-making, information on benefits and harms associated with the treatment, other justifiable treatment options, and non-registered treatment. Results Generally, the hospitals scored low on the IPDAS checklist. No hospitals encouraged women to consider their preferences. Information on side effects and adverse outcomes was poorly covered and varied substantially between hospitals. Few hospitals informed about precautions regarding outpatient inductions, and none informed about the lack of evidence on the safety of this procedure. None informed that misoprostol is not registered for induction or explained the meaning of off-label use or use of non-registered medication. Elements such as interprofessional consensus, long-term experience, and health authorities' approval were used to add credibility to the use of misoprostol. Conclusions Central criteria for patient involvement and informed consent were not met, and the patient leaflets did not inform according to current evidence on misoprostol-induced labour. Our findings indicate that patients receive very different, sometimes contradictory, information with potential ethical implications. Concerns should be given to outpatient inductions, where precise written information is of particular importance.
Strengths and limitations of this study

▪ The study had updated and complete data from all Danish hospitals that performed labour induction with misoprostol by the time of data collection.
▪ Patient leaflets were scored independently by two of the authors.
▪ Patient leaflets were evaluated against a validated scoring tool and according to national legislation.
▪ Data included written patient information alone, and so the study cannot conclude on other aspects of patient information.

BACKGROUND

Health professionals must respect a patient's right to take decisions about her own health and her right to informed consent to a proposed treatment. 1 2 Today, one in four deliveries is being induced following a rapid increase from 7% in 1997 to 25% in 2013. 3 One reason for the increase is a reduction in the accepted normal length of pregnancy from 14 to 7-10 days past due date. 4 Previously, the prostaglandin-E2 drug, Minprostin, was the first-choice drug for labour induction. In the beginning of the 2000s, however, 50 μg Cytotec (containing the prostaglandin-E1 substance, misoprostol) was introduced as an off-label alternative, that is, misoprostol was not registered for labour induction, and Cytotec tablets were produced for peptic ulcer treatment. 5 Misoprostol was considered superior to Minprostin with regard to induction efficiency, and today misoprostol is common for induction in low-risk pregnancies in Denmark and other countries. 6 7 Despite the widespread use of misoprostol and the dramatic increase in induced deliveries, no scientific knowledge exists on the quality of patient information regarding misoprostol-induced deliveries. Acknowledging the fact that good patient information is (almost) always required in modern healthcare, we argue below why certain circumstances highlight the urgent need for thorough information in the case of misoprostol-induced deliveries.

First, using a drug outside its registered indications is unusual when a registered drug is available. When introduced, misoprostol was not registered for labour induction in Denmark, and until 2014 local hospital pharmacies produced vaginal suppositories from Cytotec tablets. Cytotec is produced for peptic ulcer treatment, but due to its misoprostol content it also has a uterotonic effect. Following an increased control with hospital pharmacies in 2014, 8 the production of off-label misoprostol from Cytotec stopped. Some years earlier, another misoprostol product, Angusta, was introduced in Denmark. Angusta is produced in India and is not a registered drug in Europe. 9 In order to use a non-registered drug as common treatment, a compassionate user permit is required, and thus, after the first permit for Angusta was launched by the Danish Health and Medicines Authorities in 2012, 18 of the 22 Danish obstetric departments had a compassionate user permit by June 2013. 10 Such a launch of compassionate user permits to several hospitals is the first example of Danish authorities allowing the routine use of a non-registered drug in a situation where a registered drug is available. In 2011, the legal advisor to the Danish Government stated that information prior to a treatment is essential in the case of off-label medication, and that off-label treatment should only be considered when no appropriate registered alternatives exist. 11
In 2013 the Danish Health and Medicines Authorities informed all Danish labour wards that health professionals' duty to inform patients about adverse side effects was sharpened if a medication is used outside its approved indications. 12 Further, patients who are offered non-registered medication encounter barriers when they search for information. Registered drugs have product information sheets with information on effects, side effects, and how to react to and report side effects. For non-registered drugs, however, no such standardised information exists. Information about product name, active substance, and the legal status of the drug is required if patients and professionals wish to search for further information.

Second, misoprostol and other induction agents have been associated with hyperstimulation of the uterus and fetal heart rate abnormalities. 7 13 When the uterus is stimulated extensively, the oxygen flow to the placenta and fetus is decreased. Misoprostol is a highly potent drug for which adverse effects have been reported even from low doses. 7 The incidence of hyperstimulation after low-dose oral misoprostol (25 μg) induction has been reported as 1-9%. 13-15 Misoprostol has also been associated with severe side effects such as fetal death, fetal brain damage, uterine rupture/perforation, retained placenta, amniotic fluid embolism and abnormal uterine contractions, 16-20 as well as with more frequent side effects, such as hyperstimulation, impaired fetal heart rate and meconium-stained amniotic water. 7 13 16 19 20 The US Food and Drug Administration (FDA) has questioned the safety of obstetric use of Cytotec. 16 21 Even so, low-dose misoprostol (<50 μg) is recommended by the Danish Society of Obstetrics and Gynaecology and the Regional Official Authorities. 22 23 The concerns raised make it crucial that patients receive information before misoprostol-induced delivery.

Third, in Denmark, labour induction often follows an outpatient procedure, even though the product information for Minprostin (the former first-choice drug) designates that treatment with medical induction agents should be monitored in a hospital setting. 24 In Norway and other countries, continuous clinical observation is mandatory throughout misoprostol treatment. 25 Also, the WHO states that induction should only be carried out when facilities for monitoring and emergency treatment for mother and child are available. 26 Misoprostol is administered up to four times a day, and it is normal Danish practice to discharge low-risk women to their own home after misoprostol application to await the establishment of regular uterine contractions or to medicate themselves at home. 6 There have, however, been reports on tetanic labour occurring in the woman's home several hours after misoprostol application, and the safety of this practice lacks evidence. 13 27 Since tetanic labour must be treated with tocolytic drugs or emergency caesarean section, and since such treatments cannot be immediately performed outside the hospital, potential health risks may be associated with the Danish practice, and the need for adequate patient information is critical. 16 17 20 24 25 28 29 According to the Regional Official Authorities, the majority of inductions in Denmark are performed in an outpatient setting, 23 and according to the Danish Society of Obstetrics and Gynaecology this is apparently without increased risks among a low-risk population. 22
According to the WHO, treatment must not be initiated without patient consent, and expected benefits from a treatment or an intervention should outweigh its potential harm. 26 The principle of informed consent is also reflected in the Danish Health Care Act. 1 Healthcare professionals are important providers of information, 30 and information must be delivered in respect for the individual, her integrity and self-determination. These values are stipulated in the Danish Health Care Law. 1 Women today want to participate in decisions regarding interventions in their pregnancy, and thus healthcare professionals are an important source of information. 30 We assessed the quality of patient information leaflets on labour induction according to a validated patient decision tool, 31 and core elements on patient information in the Danish Health Act, 1 that is, (1) women's involvement in decision-making and their right to informed consent, (2) benefits and harms associated with the treatment, and (3) other justifiable treatment options including watchful waiting (defined as a regimen for monitoring fetal well-being regularly while awaiting spontaneous onset of labour). Also, specific issues related to non-registered medication were analysed.

MATERIAL AND METHODS

During calendar week 25 in 2015, we contacted the leading midwife or the midwife responsible for patient information material in all obstetric departments in Denmark (N=22) by phone to ask if they used misoprostol for labour induction. Danish hospitals use either the registered drug Minprostin (dinoprostone) or misoprostol for medical induction. We received written patient information material on labour induction from all hospitals that performed misoprostol inductions (N=13) by postal mail, email or downloaded from the internet. Five hospitals had two different leaflets, and so we received 18 leaflets. For those five hospitals, we assessed each leaflet pair as one, resulting in the assessment of 13 hospitals' written information. All leaflets were in the Danish language. We used the revised International Patient Decision Aid Standards (IPDAS) checklist together with a scoring tool developed by the Picker Institute, which had a few adaptations to the original IPDAS checklist. 32 The scoring tool comprises eight major sections (as presented in table 1) with 2-7 subitems (for the list of subitems, please refer to online supplementary table S1). Each section had a maximum score of 5 points, no matter the number of subitems, giving a total maximum of 40 points. We generated a ninth section regarding non-registered medication (as displayed in table 2), and thus our checklist had a total score of 45 points. In the new section 9, subitems 1-3 built on the Danish Health Act and the legal advisor to the Danish Government's statement regarding information and consent. 1 11 Subitems 4-7 on product name and active substance were included because such information is required if patients wish to seek further information on the medication. Subitem 8 was based on the legislation that midwives and physicians have an increased duty to report adverse side effects from non-registered drugs and from medications used on a compassionate user permit. 33 Two of the authors, Rydahl and Clausen, made an individual scoring of all leaflets. There was high agreement between their scorings, and smaller disagreements were resolved by discussion. For the five hospitals with two leaflets, each was assessed individually and subsequently scored as one.
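The section scoring can be summarized algorithmically, anticipating the 5-to-1 rule spelled out in the next paragraph. The following sketch is only illustrative of that procedure (in the study the intermediate grades 2-4 were assigned by rater judgement; the linear mapping in the code is our assumption):

```python
def section_score(n_met, n_subitems, touched_subject):
    """Score one checklist section on the 1-5 scale: 5 when every
    subitem is fulfilled, 1 when the criteria are not met in any way,
    2 when the leaflet merely touches on the subject, and 3-4 for
    partial fulfilment (the linear mapping below is an assumption;
    in the study these grades were assigned by two raters)."""
    if n_met == n_subitems:
        return 5
    if n_met == 0:
        return 2 if touched_subject else 1
    return 2 + round(2.0 * n_met / n_subitems)  # partial: somewhere in 2-4

# A hypothetical hospital: 2 of 4 subitems met in one section,
# nothing met elsewhere; totals are out of 9 sections x 5 = 45 points.
scores = [section_score(2, 4, True)] + [section_score(0, 3, False)] * 8
print(sum(scores), "/ 45")
```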
In case of inconsistency within a leaflet pair, the scores from the better performing leaflet were chosen. To obtain five points in a section, the leaflet should fulfil the IPDAS criteria of all subitems of the section. Scores of 4, 3 and 2 were assigned if the leaflet partially fulfilled the criteria, and 1 was given if the leaflet did not meet the criteria in any way. From this, it follows that in cases where only very sparse information was given for a section, two points were assigned. This choice was made to make a clear distinction between no information at all and touching on a subject. For example, a leaflet that mentioned trivial inconveniences (eg, stings) but did not give information on severe adverse effects was assigned two points for the section on side effects. All citations from patient leaflets represent the authors' translation. In cases of poor Danish language in the original, this was sought to be maintained in the English translation. Since data for this study did not include information on individual subjects, no ethics approval was required.

RESULTS

Table 1 shows hospital scores according to IPDAS. Generally, the hospitals scored low, with a mean hospital score of 18 (range 12-25), compared with a possible maximum of 45 points (table 1). Also, the section scores were generally low, with a mean of 2.0 points (range 1.2-3.3). Leaflet structure and layout were best covered in the leaflets, while information on accuracy, disclosure of conflicts of interest, and information on treatment outcome probabilities had the lowest mean scores.

Decision-making

While the decision-making process of a patient is inherent in the IPDAS checklist (ie, table 1, section 5), it is also specified as one of the core elements in the Danish Health Act and thus addressed separately in this paper. Overall, women's involvement in the decision on labour induction and on methods for induction was not, or only sparsely, supported in the patient information leaflets. One leaflet explicated that the woman and her partner make an agreement with the midwife whether they prefer to have the labour induced or to await spontaneous onset of labour, and, if the woman does not wish to get induced, close surveillance of the child will be offered (Herlev). Six of the 13 hospitals vaguely touched on the decision-making process, and phrases such as 'we recommend' or 'we offer' were commonly used and always in favour of induction. An interaction between the obstetrician or midwife and the woman was indicated in phrases such as: 'Your doctor and midwife will inform you, why we recommend induction of labour' (Odense/Svendborg), or 'Induction of labour is decided between you and a midwife or one of the ward's obstetricians' (Hvidovre). Three leaflets did not mention the decision-making process. The remaining two addressed the decision about induction like this: 'The decision to induce labour is always medically justified on the basis of either the mother's or the child's condition' (Viborg), and 'basically there are two options: 1: prostaglandins (a pill you eat); 2: induction
Benefits and harms Another core element in the Danish Health Act regards information on benefits and harms of the treatment, corresponding to the IPDAS checklist, section 3 (table 1, section 3). Regarding benefits, induction was generally presented as a 'prophylactic intervention' that could prevent harm, and benefits from induction were given in all leaflets. The rationale for induction was typically phrased as 'The reason why we offer induction at this time is that some children begin to receive too little nourishment from the placenta, when they stay in the uterus this long' (Bornholm). The tone in the leaflets was generally reassuring, and many leaflets implied that induced labour is close to a non-interventional delivery, for example, 'The pills used for induction are synthetic hormone ( prostaglandin), which corresponds to the hormone the body produces itself during labour' Regarding the general side effects, six hospitals mentioned some of the following: diarrhoea, nausea, vomiting, abdominal pain, rash, headache, dizziness and fever. This list corresponds to the side effects described in the product information on Cytotec for treatment of peptic ulcers. 19 Regarding the obstetric side effects, eight hospitals presented some information, while five did not include any information. Considerable variations were observed between the hospitals that presented obstetric side effects, both in terms of what types of side effects were presented and whether they were described as frequent or rare. One of the most frequent adverse side effects from labour induction, hyperstimulation, was described as a side effect by less than half of the hospitals. The hospitals used various terms, such as over-stimulation, frequent contractions, frequent contractions without pauses, too frequent contractions, tetanic labour or 'an unusually strong reaction to the treatment resulting in a too fast progress of delivery'. One hospital described hyperstimulation as a risk only if the medication was administered incorrectly: 'If you have a too large dose of the drug, or if the tablets are taken with too short time intervals, frequent and strong contractions can occur, which can be disadvantageous to your child.' (Viborg). In some cases, 'powerful labour work' or 'very fast delivery' were used as an implicit indication of risk. Adverse fetal outcome was mentioned in four of the leaflets. Fetal death was mentioned by one hospital as a rare risk (Hvidovre). Three hospitals gave vague indications of fetal asphyxia, for example, 'the child can be momentarily stressed after birth' (Odense/Svendborg, Rigshospitalet), or 'use of Angusta-tablets (Misoprostol) and other induction medications […] can in rare occasions affect the child' (Roskilde). Two hospitals informed about an increased risk of additional interventions, such as medical augmentation of labour, epidural anaesthesia or instrumental delivery, while one hospital stated that labour induction was not associated with an increased risk of caesarean section or instrumental delivery (Viborg). Regarding disadvantages, longevity of labour was mentioned by six hospitals, for example: 'When a birth starts by itself, it is important to be patient, as it may take long until the contractions are effective. It is also important to be patient when labour is induced' (Randers) or 'so it is good to be patientjust like at a normal birth' (Bornholm). It appeared from the leaflets that all hospitals performed outpatient inductions of low-risk women. 
Most hospitals gave some information about the timing of admission to the hospital, varying from beginning contractions/early labour (N=4), as recommended by the Danish Society of Obstetrics and Gynaecology, 22 to frequent contractions. One hospital addressed hyperstimulation thus: 'If you have very strong or frequent contractions, it is important, that you contact the midwife at once' (Bornholm). Other reasons for contacting the labour ward included non-specified contractions (N=6), loss of amniotic fluid (N=6), bleeding (N=3), pain (N=2), or fewer fetal movements (N=1). Four hospitals did not provide any information on when to contact the labour ward. One hospital offered the woman the opportunity to remain hospitalised if she felt unsafe about the outpatient setting (Hillerød). No hospitals informed about the need for continuous monitoring or the lack of evidence for ambulant induction. Regarding information on probabilities of outcomes, one hospital provided probability scores according to the risks and benefits of induction: 'If 1000 pregnant women in week 41+3 choose to await spontaneous labour, at least 999 babies will still be well in week 41+5. At this time, we recommend induction' (Herning/Holstebro). Two hospitals quantified the risk of hyperstimulation after induction as 'a small risk (less than 1 in 100)' (Odense/Svendborg, Rigshospitalet), or 'utmost rare (less than 1 in 10 000)' (Hvidovre), which is a difference in probabilities by a factor of 100.

Other justifiable treatment options

The IPDAS checklist addresses information about options in section 2 (table 1, section 2), which is a third core element in the Danish Health Act. In our data, watchful waiting was mentioned as a possible alternative to induction by one hospital: 'If you do not wish to have your labour induced, you will be offered examination and consultation about how the rest of the pregnancy can continue' (Odense/Svendborg). Two others gave the impression of watchful waiting as an option, for example: 'If you choose not to be induced at week 41+5, you will be offered close monitoring. We cannot recommend any women to continue pregnancy beyond 42 completed weeks of gestation' (Rigshospitalet), or 'at this consultation [41+3] the midwife will clarify with you and your partner, whether you wish to have the labour induced, or if you would rather wait a couple of days to await spontaneous labour' (Herning/Holstebro). In the latter example, the woman was given the opportunity of a maximum of another 2 days before induction.

Issues regarding non-registered drugs

Table 2 shows hospital scores on specific issues related to non-registered drugs. Overall, the hospital scores were low, that is, an average of 2.0 of the 5 possible points (table 2). The best covered subitems concerned information on the active substance misoprostol and on the route of administration (oral/vaginal). None of the hospitals informed that misoprostol is not registered for labour induction in Denmark or explained the meaning of off-label use or use of non-registered medication. Two hospitals addressed the topic indirectly by saying that misoprostol had been developed for another purpose or by mentioning the compassionate user permit for Angusta issued by the Danish Health Authorities, yet without explaining the meaning of such a permit.
A few others added credibility to the use of misoprostol in phrases such as: '[…] misoprostol, is developed for another medical purpose, but has for more than 10 years been approved by the Danish Health Authorities for labour induction' (Odense/Svendborg), or 'misoprostol has been used for labour induction for many years, both in Denmark and in larger parts of the world' (Rigshospitalet), or by referring to a consensus between midwives and obstetricians on the choice of treatment. Even though several hospitals gave information about the former first-choice drug, Minprostin, only two presented this as optional for 'all' women. One of them said: 'if you do not wish to be treated with Angusta, we can instead induce labour with Minprostin…' (Odense/Svendborg). In the other, the message was somewhat hidden; that is, in the leaflet section on side effects, it was mentioned that Minprostin vagitories could be an alternative to misoprostol, and that Minprostin has the same side effects as misoprostol 'but [with Minprostin®] there are more deliveries that end in a caesarean section' (Rigshospitalet). Otherwise, Minprostin was mentioned to describe the induction agent for certain conditions (eg, twin pregnancy or previous caesarean section). Nine hospitals gave the name of the active substance misoprostol, and six gave the medical product name, that is, Angusta (N=5) and Cytotec (N=1). Most hospitals mentioned prostaglandins in general terms and explained their cervical ripening effect. Most hospitals also informed about the route of administration (ie, oral or vaginal). About half of the hospitals informed about the course of treatment during one or more days of induction, or they informed about the number of tablets, capsules, etc, at different stages of treatment. Such detailed information was, however, presented without any information on drug dose, for example: 'the next day you will be treated with misoprostol again, but now in double dose' (Herning/Holstebro). One hospital informed about the drug dose (25 μg) (Herlev). No hospitals advised patients on how to report side effects from the treatment.

DISCUSSION

This survey showed that written information about induction of labour to pregnant women by Danish hospitals lacked several important criteria for patient involvement and informed consent, and that the written information varied considerably between hospitals. According to the IPDAS scoring tool, several elements should be included in order to provide unbiased information. We found that information on health condition was addressed by pointing out the risk for the fetus (in carrying on the pregnancy), that the leaflets did not describe the natural course of pregnancy without treatment, that only one hospital informed about watchful waiting as a genuine alternative option to induction, that benefits of options were given only for induction and not for alternative options, such as watchful waiting with the possibility of spontaneous onset of labour, that risks of options (harms, side effects, disadvantages) were sparsely or inadequately communicated, that no hospital informed about the uncertainty around current evidence, and finally, that most hospitals described procedures on the course of treatment. Hence, overall, the leaflets provided information in favour of induction and of misoprostol. These findings were further supported by the tone and wording in the text.
For example, frequently used terms such as 'we recommend' or 'we offer you' indicate a paternalistic attitude, which is not conducive to patient participation in decision-making. Also, terms associated with a natural or normal birth are used in several of the leaflets, eg, 'The pills used for induction are synthetic hormone (prostaglandin), which corresponds to the hormone the body produces itself during labour' (Hvidovre), or 'Vaginal suppositories […] is the method that best resembles the normal birth's start' (Roskilde), or 'even though your contractions have been assisted […], you have as a starting point the same options […] as if your labour had started spontaneously' (Viborg). Such terms can be used to downgrade the understanding of labour induction as a medical intervention, since they usually relate to non-interventional childbirth. 34 Regarding the unorthodox use of misoprostol for labour induction, trustworthy elements such as interprofessional consensus, long-term experience or a reference to national health authorities' approval were used to add credibility to the practice. If a hospital offers a woman the opportunity to wait another 2 days before induction, the woman can feel that she has a choice, but both options are still within the Danish Society of Obstetrics and Gynaecology's recommended time frame of pregnancy termination before 42 gestational weeks. 4 Unlicensed misoprostol is mentioned in the WHO essential drug list 35 and is recommended for induction of labour in under-resourced settings. 26 According to the legal advisor to the Danish government, a stricter requirement for patient information applies to off-label medications, that is, medication used outside its indication. 5 When the peptic ulcer-registered medication, Cytotec, is used as an induction agent, this is an example of off-label use. This is different to Angusta, which was introduced in Danish obstetrics after a period of off-label use of Cytotec. Since Angusta is not registered as a medication in Europe, the term 'off-label' does not apply to its use in Denmark. Angusta has not been tested in any published trials, and the procedure whereby Danish hospitals have compassionate user permits for Angusta is an extreme and unusual case. Hence, it is unlikely that the legal advisor's stricter patient information requirements for off-label use should not apply to women who are offered Angusta, that is, a non-registered drug. The one leaflet that mentioned the compassionate user permit for Angusta issued by the Danish Health Authorities gave confusing information: the term compassionate user permit was mentioned, but no explanation as to the meaning and purpose of such a permit was given. The compassionate user permit allows a hospital to use Angusta in cases where there is a lack of other suitable and registered drugs available, 11 and when, in the case of labour induction, for example, Minprostin is available, it can be argued that the routine use of Angusta in Danish hospitals does not comply with the formal conditions for a compassionate user permit. Information on side effects from the treatment was highly inconsistent, showed large variations between hospitals, and sometimes required substantial professional or linguistic skills to disentangle.
For example, the fact that 'strong contractions' in one leaflet should be understood as a sign of danger was apparent only because this was placed in the side effect section. In a common understanding, strong contractions might be understood as a part of the normal course of labour, while specialists will know that, in this context, it may refer to hyperstimulation or tetanic labour. During analysis, it became obvious that several hospitals had included standard side effect information (from, eg, Cytotec) directly in the leaflets; for example, to present abdominal pain as a side effect seems meaningless in relation to induction of labour. The majority of leaflets did not provide any information on the risk of additional interventions after induction, such as medical augmentation of labour, epidural, instrumental delivery or caesarean section. Such interventions are all relevant to consider prior to treatment. Probabilities of hyperstimulation after misoprostol induction were presented by three hospitals, with risk estimates from 'a small risk less than 1 in 100' to 'an extremely rare event less than 1 in 10 000', and these probabilities were presented without references. They are lower than those reported by The Cochrane Collaboration and differ from the Cytotec product information. 7 13 19 Cochrane reports 1-9% hyperstimulation from low-dose oral misoprostol trials, while the Cytotec product information reports 0.1-1% uterine tetany and 'unknown incidence' of uterine rupture, bleeding, emboli and abnormal uterine contractions. 13 19 It is crucial that women receive information on when to contact the labour ward, on how to react appropriately to adverse effects, and on the lack of evidence on the safety of this procedure. The Danish Society of Obstetrics and Gynaecology recommends fetal surveillance early in labour after misoprostol induction. 22 To make this possible, the woman must arrive at the hospital in early labour, but only four leaflets gave this information in their written patient information material. Hence, it may be argued that the failure of most leaflets to give crucial information on how to react while at home during labour induction poses a risk to the mother and/or child. While poor fetal outcome was presented as the main reason for terminating the pregnancy, it is also a possible adverse effect of labour induction. From a patient perspective, the information on which to make a choice is not balanced when induction is presented as an action to prevent poor fetal outcome and, at the same time, the fetal risk associated with the intervention itself is not presented. This could be addressed by including balanced information about options and by presenting absolute risks or probabilities in the leaflets. Since emergency treatment of fetal asphyxia is not possible at home, outpatient inductions carry a special safety concern. The strengths of the study include the use of the revised IPDAS checklist scoring tool, which is validated and has been used to evaluate patient information material in other healthcare areas, 32 and independent scoring by two of the authors. Also, patient leaflets from all relevant hospitals were included. Weaknesses include the fact that the extra section on non-registered drugs was developed for the present study and thus not tested or validated previously.
Also, since the study included written patient information alone, and since the IPDAS checklist only concerns written patient information material, our results cannot conclude on other aspects of the information material and the decision-making process. 36

In conclusion, the assessed patient information leaflets lacked several central elements of patient involvement criteria, and they presented unbalanced information on benefits versus harms. The leaflets did not inform adequately on current evidence on labour induction, including treatment options, outcome probabilities or possible risks related to non-intervention, and they did not help the women to make appropriate decisions or to judge the material's reliability. In some cases, the leaflets might even be hazardous due to lack of crucial information. If a woman is to give informed consent on labour induction, she needs information about side effects and on the consequences of induced versus spontaneous onset of labour. Overall, there was considerable inconsistency in the information provided across hospitals. Women admitted to different hospitals will thus receive different information, which has ethical implications. Producing appropriate written patient information is not an easy task, and the challenge is further increased when a non-registered drug is suggested as standard treatment, as is the case with Angusta. The authors encourage clinicians and researchers to work together in the development of written patient information material, and recommend the use of contemporary decision aid tools.

Contributors JAC and ER conceptualised the idea of the study. JAC, ER and MJ contributed considerably to the design of the work, and to the acquisition, analysis and interpretation of the data. They also contributed considerably to the drafting and revision of the paper, and to important intellectual discussions forming the final submitted version. All authors have approved the submitted version and agree to be accountable for all aspects of the work.

Funding This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.

Data sharing statement Data for the study consist of written patient information leaflets. The material can be requested by contacting the authors, if not available from the relevant hospitals.
Evaluation of ultrasound-guided erector spinae plane block for postoperative management of video-assisted thoracoscopic surgery: a prospective, randomized, controlled clinical trial

Background Video-assisted thoracoscopic surgery (VATS) is a commonly performed minimally invasive procedure that has led to lower levels of pain, as well as procedure-related mortality and morbidity. However, VATS requires analgesia that blocks both visceral and somatic nerve fibers for more effective pain control. This randomized controlled trial evaluated the effect of erector spinae plane block (ESPB) in the postoperative analgesia management of patients undergoing VATS. Methods We performed a prospective, randomized, single-center study between December 2018 and December 2019. Fifty-four patients were recruited to two equal groups (ESPB and control group). Following exclusion, 46 patients were included in the final analysis. Patients were randomly assigned to receive preoperative ultrasound-guided ESPB with either ropivacaine or saline. The primary outcome was the numeric rating scale (NRS) score, assessed 12 hours postoperatively. Secondary outcomes were the Riker Sedation-Agitation Scale (SAS) score for emergence agitation, postoperative cumulative opioid consumption, length of post-anesthesia care unit (PACU) stay, incidence of postoperative nausea and vomiting (PONV) and dizziness, and ESPB-related adverse events. Results The NRS in the ESPB group during the postoperative period immediately after PACU admission was significantly lower than that in the control group (5.96±1.68 and 7.59±1.18, respectively; P<0.001) and remained lower until 6 hours postoperatively (P=0.001 at 1 hour and P=0.005 at 6 hours). At 12 hours postoperatively, NRS scores were not significantly different between groups (P=0.12). The median [interquartile range (IQR)] of the postoperative rescue pethidine consumption in PACU was significantly lower [25 mg (25 mg)] in the ESPB group than that in the control group [50 mg (56.2 mg); P=0.006]. The median (IQR) of PACU residual time was significantly lower [25 min (10 min)] in the ESPB group than that in the control group [30 min (15 min); P=0.034]. The median (IQR) Riker SAS was also lower in the ESPB group [4 (1.0)] than that in the control group [5 (1.25); P<0.001] in PACU. Conclusions A single preoperative injection of ESPB with ropivacaine may improve acute postoperative analgesia and emergence agitation in patients undergoing VATS.

Introduction

Post-thoracotomy pain syndrome (PTPS) is a serious and common problem after thoracic surgery, which has a significant effect on the quality of life in 25-60% of patients. The International Association for the Study of Pain defined chronic pain after thoracotomy as "pain that recurs or persists along a thoracotomy scar at least 2 months following a surgical procedure" (1). Video-assisted thoracoscopic surgery (VATS) is increasingly being used to manage primary lung cancer and helps reduce postoperative pain (2,3). However, pain following VATS can be severe and long-lasting. According to Takahiro Homma et al., 18.8% of patients who undergo VATS present with persistent pain 2 months after surgery (4). Similar to several other chronic postsurgical pain syndromes, acute postoperative pain is a powerful predictor of PTPS; however, its mechanism remains uncertain (5,6). Therefore, it is important to apply multimodal methods of postoperative pain control.
Numerous modalities to alleviate post-thoracic surgery pain have been described in studies, ranging from various medications for patient-controlled analgesia to diverse regional analgesic methods. Thoracic epidural analgesia (TEA) is a classic, effective regional blockade to reduce pain following thoracic surgery (7). Thoracic paravertebral block (PVB) with a local anesthetic (LA), which is comparable to an epidural block for pain relief, is widely applied in thoracic surgery (8,9). The erector spinae plane block (ESPB), first described by Forero et al. in 2016 (10), is a newly emerging technique representative of indirect PVB methods. Several studies have shown that ESPB has strengths regarding safety and ease of use. ESPB targets a plane remote from the pleura and neuraxial structures to inject an LA into the fascial plane deep to the erector spinae muscle (Figure 1). ESPB results in blocking not only the dorsal and ventral rami of the spinal nerve in the paravertebral space, via penetration of the intertransverse connective tissues, but also the lateral cutaneous branches of the intercostal nerves (10,11). Numerous clinical studies have reported that ESPB can provide effective analgesia in the thoracoabdominal region, including for breast surgery, cardiothoracic surgery, and laparoscopic cholecystectomy (12,13). Randomized research on post-VATS ESPB is lacking; publications are mainly case reports, with only one randomized controlled study (14).

The participants were randomly allocated to undergo ESPB with either 30 mL of 0.5% ropivacaine (ESPB group) or 0.9% physiological saline (control group) before general anesthesia. Randomization was conducted at a 1:1 ratio using a web-based response system (http://www.randomization.com). Assignments were sealed in serially numbered envelopes. Randomization and blinding procedures were performed by an independent researcher who was not involved in the trial. The patients and the physician assessing the outcomes were kept blinded to the grouping process (double-blind study). At the end of surgery, anesthesia was discontinued, and extubation of the patient was completed after reversal of the muscle relaxant with pyridostigmine (0.25 mg/kg) and glycopyrrolate (0.012 mg/kg). The patients were then transferred to the post-anesthesia care unit (PACU).

Application of ESPB

ESPB was performed in the prone position before general anesthesia induction under standardized monitoring. The patient's back was sterilized and draped in a sterile fashion. After an initial anatomic scan to confirm the thoracic levels, appearance, and depth of structures, the procedural site was identified. A 5-12-MHz linear array ultrasound transducer (SonoSite® X-Porte) was placed in a sterile sheath. US-guided ESPB was administered at the T5 vertebral level. An in-plane paramedian longitudinal block was performed with the probe placed approximately 2-3 cm lateral to the midline. After visualizing the trapezius, rhomboid major, and erector spinae muscles, a 60-mm 23-gauge b-bevel needle was inserted into the interfascial plane between the erector spinae muscle and the transverse process of the vertebra using an in-plane technique. After the correct location was confirmed by hydrodissection of the interfascial plane with 2 mL of physiological saline solution, 25 mL of 0.5% ropivacaine or saline was injected (Figure 2).

Postoperative analgesia management

Postoperative pain management was performed in an identical manner in the two groups according to our institutional protocol.
An intravenous patient-controlled analgesia (IVPCA) device (Ambix Anaplus® AP 1020, E-Wha Fresenius Kabi Inc., Gunpo, Republic of Korea) was connected at the PACU and maintained postoperatively using the following protocol: 2 mL/h basal infusion (fentanyl 5 µg/mL) with a 0.5-mL bolus and a 15-minute lockout time. Meperidine 25 mg was administered intravenously as a rescue analgesic on demand [when the numeric rating scale (NRS) score was ≥4]. The side effects of postoperative opioid consumption, such as nausea, vomiting, respiratory depression, sedation, urinary retention, and itching, were also recorded.

Outcome measurement

The data collected included the NRS score for pain (primary outcome) to assess the quality of analgesia upon arrival at the PACU and at 1, 6, and 12 h postoperatively. Secondary outcomes included the Riker SAS score [1 = minimal or no response to noxious stimuli; 2 = arousal to physical stimuli but non-communicative; 3 = difficult to arouse but awakens to verbal stimuli or gentle shaking; 4 = calm and follows commands; 5 = anxious or physically agitated but calms on verbal instructions; 6 = requires restraint and frequent verbal reminders of limits; and 7 = attempting to remove tracheal tube or catheters or striking at staff (15)] to assess emergence agitation, postoperative cumulative opioid consumption, length of PACU stay, incidence of PONV and dizziness, and ESPB-related adverse events. A single trained researcher blinded to group assignments assessed all outcomes.

Statistical analysis

According to our preliminary study, the sample size was calculated on the basis of the mean difference in NRS between the ESPB-treated and control groups [ESPB group: mean ± standard deviation (SD) = 4.1±1.59, n=10; control group: mean ± SD = 5.5±1.35, n=10], collected retrospectively from 20 consecutive cases. We estimated that 27 subjects would be needed per group to provide a type I error of 0.05 and power of 90%, with a predicted dropout rate of 20%, to detect a 1-point difference between the two groups, which was considered clinically relevant (a worked sketch of this calculation appears below). SPSS Statistics 24.0 for Windows (IBM Corp, Armonk, NY, USA) was used to process the clinical data and perform statistical analyses. Continuous variables are expressed as mean ± SD or median (interquartile range). Frequencies and percentages are used as appropriate for categorical variables. The Kolmogorov-Smirnov test was used to assess the assumption of normality. The chi-squared test, Student's t-test, or Mann-Whitney test was used to test significance according to the normality and type of the variables. The postoperative pain scores were analyzed using repeated measures analysis of variance (ANOVA) to evaluate the relationship between the NRS pain scores over time and groups. Post hoc testing after repeated measures ANOVA was performed to compare groups at each time point using Bonferroni correction.

Results

Fifty-four patients were equally randomized between the two groups, as shown in the Consolidated Standards of Reporting Trials (CONSORT) flow chart (Figure 3). Seven patients were excluded from the study because the operative technique was changed to emergency exploratory thoracotomy during surgery. One patient underwent wedge resection rather than lobectomy. Forty-six patients were included in the analysis. Demographic data and surgical durations were comparable between the two groups (Table 1).
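The sample size calculation and the Bonferroni adjustment described in the Statistical analysis paragraph can be reproduced approximately with the following sketch. This is not the authors' code: it assumes the calculation was powered on the pilot difference of 1.4 NRS points with the pooled SD of the two pilot groups, uses a normal approximation, and inflates for 20% dropout; the software and rounding conventions actually used are not stated, and the p-values in the last step are toy values for illustration only.

import math
from scipy.stats import norm

# Pilot data quoted in the text (mean +/- SD, n = 10 per group).
sd_espb, sd_control = 1.59, 1.35
delta = 5.5 - 4.1                      # observed pilot difference in NRS

alpha, power, dropout = 0.05, 0.90, 0.20
sd_pooled_sq = (sd_espb**2 + sd_control**2) / 2

z_alpha = norm.ppf(1 - alpha / 2)      # ~1.96 for a two-sided test
z_beta = norm.ppf(power)               # ~1.28 for 90% power

# Standard two-sample normal-approximation formula, per group:
n = 2 * sd_pooled_sq * (z_alpha + z_beta) ** 2 / delta ** 2
n_enrolled = math.ceil(n / (1 - dropout))
print(f"per group: {math.ceil(n)} evaluable, {n_enrolled} enrolled after 20% dropout")
# -> roughly 24 evaluable / 30 enrolled, in the neighborhood of the authors'
#    27 per group; the residual gap comes down to software and rounding choices.

# Bonferroni adjustment for the four post hoc time points (toy p-values,
# not the study's raw data): p_adj = min(1, p * number_of_comparisons).
toy_p = [0.0002, 0.001, 0.005, 0.12]
print([min(1.0, p * len(toy_p)) for p in toy_p])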
The repeated measures ANOVA of NRS pain scores showed that the NRS pain scores over time were significantly different between the two groups (P=0.030). The NRS scores in the ESPB group in the postoperative period immediately after PACU admission were significantly lower than those in the control group (5.96±1.68 and 7.59±1.18, respectively; P<0.001) and remained lower until 6 hours postoperatively (Table 2). Postoperative nausea and vomiting occurred in one patient in the ESPB group, while no complications were detected in the control group. There were no complications such as pneumothorax, LA systemic toxicity, or hematoma in either group.

Discussion

Our study has shown that ultrasound-guided unilateral single-shot ESPB performed before general anesthesia induction in VATS patients significantly lowered NRS at rest in the first 6 h postoperatively when compared to that in a control group. ESPB helps reduce pain, as well as opioid consumption, in the PACU. Reduction of opioid administration in the PACU could also reduce the patients' residual time in the PACU, as shown in this study. Notably, ESPB could help reduce the incidence of emergence agitation, a post-anesthetic condition and an indication for physical or chemical restraint in order to avoid serious consequences for the patient, such as physical injury, increased pain, hemorrhage, and removal of catheters. According to Fields et al., a chest tube results in a higher incidence of emergence agitation (16). The chest tube increases the chance of developing emergence agitation, which in turn may cause functional problems concerning the chest tube due to the violent movement of the patient. Therefore, it is important that ESPB lowers the incidence of emergence agitation in thoracic surgery. ESPB lowers pain scores immediately after surgery, reducing opioid administration in the PACU, thus reducing retention time in the PACU and improving patient safety. In addition, the pain-relieving effect of ESPB is maintained for 6 hours; thus, a sufficient analgesic effect can be expected during the most painful period after surgery.

TEA and PVB have been used for thoracic analgesia for many years. ESPB, as an alternative to PVB, is a peri-paravertebral block and a relatively safer method in which the transverse process plays the role of an anatomical barrier and prevents needle insertion into the pleura. This may mean reducing the risk of pneumothorax, the most worrying complication of PVB. In addition, it is not technically difficult to locate the target point, the interfascial plane between the erector spinae muscle and the transverse process, using ultrasound. Anatomical dissection in a cadaveric investigation and magnetic resonance imaging (MRI) in an imaging study following ESPB showed spread of the injectate to the epidural and neural foraminal spaces over two to five levels and to intercostal spaces over five to nine levels (17,18). In addition to these anatomic and MRI studies, ESPB has effectively controlled somatic and visceral pain in breast, laparoscopic cholecystectomy, and ventral hernia surgery in several studies (12,19-21). Therefore, ESPB analgesia can be effective in treating not only somatic but also visceral pain originating from lung resection and port entry sites.

Previous studies showed the efficacy of ESPB in reducing postoperative pain following cardiothoracic surgery. For the first time, Forero et al. in 2016 demonstrated a successful application of ESPB in two cases of thoracic pain after VATS (10). Irem Kaplan et al.
reported a case of a continuous erector spinae plane catheter for analgesia following thoracotomy in an infant (22). Following cardiac surgery, good results for continuous erector spinae plane catheter insertion were reported in five cases (23). Since the initial publications in 2016, studies applying ESPB to acute and chronic thoracic pain have been conducted steadily over the past three years; however, these are mainly case studies. There is only one randomized study of ESPB for VATS, published in 2019 (14), and there is one randomized study comparing PVB and ESPB for VATS, which showed no difference between the two groups (24). Our study reached a conclusion consistent with that of the randomized study comparing ESPB and control groups and found that ESPB is effective in controlling pain after VATS.

The previous RCT concluded that pain reduction after ESPB remained significant for 24 hours. However, our study showed a different duration of analgesia after ESPB: the significant difference in NRS between the two groups was present for 6 h. Therefore, we conclude that single-shot ESPB alone may not be sufficient to sustain the analgesic effect, and continuous catheterization might be considered for more lasting pain control.

Our study has several limitations. Follow-up was performed until 12 hours after surgery. If follow-up for 2 months after surgery had been performed, the relationship between acute postoperative pain and chronic pain could have been considered. In other words, it could have provided evidence to support the preemptive analgesic mechanism of ESPB, which assumes that performing ESPB before the application of noxious stimuli may prevent sensitization of the nervous system and reduce the incidence of PTPS. The measurement of opioid consumption was accurate up to the time spent in the PACU; however, consumption in the ward was not, because IVPCA and opioids were used there as routine rescue analgesics. This may explain why a difference between the two groups was found in the amount of opioid used in the PACU but not in the amount used in the ward.

Conclusions

Ultrasound-guided ESPB leads to effective analgesia in the first 6 h postoperatively in patients undergoing VATS. ESPB was helpful in reducing rescue analgesic opioid consumption and recovery time in the PACU. Performing ESPB as routine
Power Sharing, Capacity Building, and Evolving Roles in ELSI: The Center for the Ethics of Indigenous Genomic Research

Persistent, unresolved issues stemming from a legacy of scientific exploitation and bio-colonialism have kept many tribal nations from participating in genomic research. The Center for the Ethics of Indigenous Genomic Research (CEIGR) aims to model meaningful community engagement that moves toward more inclusive and equitable research practices related to genomics. This article reflects on key successes and challenges behind CEIGR's efforts to shape Ethical, Legal and Social Implications (ELSI) research in ways that are informed by Indigenous perspectives, to locate community partnerships at the center of genomics research, and to conduct normative and empirical research with Indigenous communities that is grounded in the concepts of reciprocity, transparency and cultural competency. The structure of CEIGR represents an important shift away from a traditional model centered on a university-based principal investigator toward a partner-centered research approach that emphasizes equity and community control by distributing power and decision-making across all CEIGR partner sites. We discuss three features of CEIGR that have contributed to this shift towards an equitable, community-driven partnership: 1) balancing local priorities with collective goals; 2) distributing power in ways that promote equitable partnerships; and 3) capacity building and co-learning across partner sites. The discussion of these three areas in this article speaks to a particular strength of our Center: the interdependence among partners and the collective willingness to maintain a plasticity of leadership that creates space for all of our partners to lead, support, exchange and strengthen ELSI research.

This plasticity of leadership enabled CEIGR to embrace the challenges and dissimilarities among diverse members and work toward a strengths-based, partner-centered model for conducting collaborative research. The structure of CEIGR represents an important shift away from a traditional model centered on a university-based PI toward a partner-centered research approach that emphasizes equity and community control by distributing power and decision-making across all CEIGR sites. This approach enhances collaborative partnerships and research capacity across diverse community sites, which in turn promotes research that is grounded in local community needs and concerns. Below, we discuss three features of CEIGR that have contributed to this shift towards an equitable, community-driven partnership: 1) balancing local priorities with collective goals; 2) distributing power in ways that promote equitable partnerships; and 3) capacity building and co-learning across partner sites.

Balancing Local Priorities and Collective Goals

Since its inception, CEIGR has sought to develop research initiatives, some described below, that are inclusive of community-based investigators and prioritize community-driven initiatives (Woodbury et al., 2019a; Hiratsuka et al., 2020a; Reedy et al., 2020b). Establishing a partner-centered research agenda required identification of collective goals that could be implemented in three distinct AI/AN communities that varied widely in their geographies, cultures, histories, and research capacities. The process of forming such a partnership was an experiment in how to effectively communicate and mobilize our individual strengths to achieve a point of mutual coordination.
Survey Development

A goal of CEIGR was to conduct public deliberations on genetic research in three tribal communities (see also Hiratsuka et al., 2020a and Reedy et al., 2020b). To achieve this goal, it was necessary to engage in a multi-year process of building relationships between the University staff and community sites, working towards a consensus on the appropriateness of deliberation as a form of engagement in each respective community, and developing an approach to how this work could be conducted with a common goal. To gain awareness of the variety of potential engagement practices, the consortium conducted a scoping review that summarized current practices regarding participatory research conducted with AI/AN communities (Woodbury et al., 2019a). An ancillary objective of the scoping review was to find out whether deliberation had been done in tribal communities and, if so, to determine whether this method was an acceptable form of engagement in tribal settings. We found that much of the research on deliberation that addressed minority group members focused on ensuring minority populations participated in deliberative forums (O'Doherty & Burgess, 2009; see also Ratner, 2004; Carson et al., 2013) or on conducting 'enclave' deliberation among minority populations in order to inform a larger deliberative process (Karpowitz et al., 2009). Our coordinated effort to design and implement a cross-site approach to deliberation exclusively in AI/AN tribal settings had no precedent and led to deepened relationships among CEIGR partners, putting into practice CEIGR's partner-driven research approach.

In considering materials that might be shared with individuals participating in each site's public deliberation, the site leads perceived that the AI/AN community members in their respective locations would have differing understandings, attitudes, beliefs, and preferences based on their different exposures to genetic research. After extensive discussion on how to meet these unique informational needs, one site lead suggested conducting a survey to determine community member interests and educational needs and using the resulting dataset to develop targeted briefing materials and expert presentations for the public deliberations. After the other site leads agreed to the approach, the Center's students, community-based researchers, and faculty members brainstormed potential survey topics, including cultural cognition of scientific consensus; Indigenous spirituality; ancestry and migration; data sovereignty; attitudes toward research in general; knowledge of genetics; and attitudes and beliefs related to biorepositories, precision medicine, and genetic testing. As a group, Center members refined the survey purpose and list of survey topics; in smaller workgroups, they developed and nominated existing survey items or scales to be considered for inclusion in the survey. This process resulted in a composite 137-item pilot survey using four scales previously tested in other populations as well as four scales we designed specifically to address AI/AN community-specific concerns, such as specimen handling. As there was disagreement on the topics for inclusion and because the site leads disagreed on the tone and wording of the items, the AI/AN site leads and OU PI decided to employ cognitive interviewing as a means of incorporating AI/AN community member viewpoints into the survey development process.
Cognitive interviewing is a process in which draft survey questions are administered while collecting additional verbal information about the survey responses, which is used to evaluate the quality of the response or to help determine whether the question is generating the information that the author intends (Willis, 2010). Through the cognitive interviewing process, the CEIGR partners found that AI/AN participants at all sites understood the instruction text, and the items and scales generated no cognitive difficulties. However, participant responses indicated a need for wording changes in survey instructions and items to improve understanding of key constructs. Problems noted included participants being unfamiliar with some of the terms used to describe genetic and biological specimens. In several cases, participants' written response in the survey and verbal response in the interview did not align. At one CEIGR partner site, when participants did not know what a word or phrase meant, they would for the most part mark a neutral answer, whereas at another site participants who did not understand a word or phrase would often mark 'strongly disagree'. These differences in response item selection across sites highlighted local item response behavior that could lead to response misinterpretation at a site level and when responses are aggregated across sites. Cognitive interview results per item were reviewed by the CEIGR partners to select items and determine item rewording that would accommodate and correct for potential sources of response error, issues with item interpretation, and face validity.

Following a shared cognitive interviewing process used across the three sites and led by the CEIGR team, a 52-item survey was finalized. The final survey included items that addressed several topics, including the conduct of research; personal beliefs; perceptions of researchers and research regulations; benefits and harms of research; research oversight; genetic testing benefits and risks; direct-to-consumer testing; and demographics for use in AI/AN communities. The cognitive interviewing process took longer than the site leads had initially intended, as training of site staff was needed and because one site had delayed recruitment. Fortunately, in the planning of the deliberations, it was decided that briefing materials were not needed, and the site leads moved forward with fielding the cross-site survey at their sites. The cognitive interviews and cross-site surveys allowed each site to engage and better understand their communities' reactions to delicate topics ahead of the upcoming deliberations.

Deliberation

As mentioned above, a goal of the Center was always to conduct public deliberations on genetic research across all of our partner sites. The scholarship on public deliberation suggests it is a particularly promising approach for promoting deeper discussions around complex issues like genetics research, and advancing public deliberation in AI/AN communities has been and continues to be a major focus of the Center (see also Reedy et al., 2020b). There was an understanding that the Center's effort to facilitate these deliberations was happening across diverse tribal settings, each with their own goals for public deliberation.
The challenge was figuring out how to design the deliberations to have common research components and to remain sensitive to the unique needs and priorities of each partner site. In July 2018, after months of conference call discussions that seemed to render the task of cross-site deliberations unattainable, representatives from each site held an in-person meeting to discuss the feasibility of designing three public deliberations that addressed distinct goals while somehow maintaining the integrity of the project as a cross-site initiative. Building consensus across all partner sites, while also preserving the preferences unique to each community setting, set the stage for discernable levels of tension and disagreement, but hindsight underscores that these tensions and the process of navigating them were a critical part of the planning phase. This particular meeting represented a number of important firsts for our consortium: it was the first opportunity for all sites to work directly with the deliberation scholar selected to facilitate all of our deliberations; it was the first meeting held at an otherwise "neutral" location not affiliated with any of the partner sites; and it was the first of many coordinated efforts where team members other than the site leads were making critical decisions about the direction of the Center's work. These firsts introduced an entirely new set of interpersonal and cross-site dynamics to the research planning process; it is possible that what might have been perceived as challenges at that time would have been difficult to work through without the collective commitment to work towards something larger.

Face-to-face meetings had always been a central feature of our consortium, in part because of the geographically dispersed locations of the partner sites but also because these in-person meetings provided the opportunity to build trust and interpersonal relationships across sites. That particular moment in the planning process also underscored the importance of the in-person meeting format for moving the work of our unique Center along; the time and space needed for all partners to adequately express their perspectives and establish a sense of collective goals required opportunities to workshop ideas that conference calls and emails could not provide. Balancing the local and collective goals in the deliberation planning was a lofty task and possibly the first extended test of collaboration we had embarked upon.

To date, we have completed deliberations at each of our three community partner sites (Reedy et al., 2020b). The truly unique feature of our deliberations, aside from being conducted in exclusively tribal contexts with AI/AN participants, is that they were designed around the questions, priorities, and social dynamics associated with each community site. For instance, SCF had been involved in pharmacogenetic research for a number of years (Hiratsuka et al., 2020b), with several research projects exploring AI/AN views on biological specimen use (Hiratsuka et al., 2012a, 2012b; Dirks et al., 2019) and preferences for the conduct of pharmacogenetic research (Avey et al., 2016; Beans et al., 2020; Shaw et al., 2013). The questions of interest for this site were therefore focused on community preferences for return of results from genomic research, as the site was conducting pharmacogenetic research projects and was seeking to improve understanding of community preferences for dissemination of findings.
The CN, on the other hand, was beginning to think about the role of genetic research for its own community. While MB had participated in select genetic studies over the years (Claw et al., 2020; Khan et al., 2018; Fohner et al., 2015; Woodahl et al., 2014), many questions remained about perceptions of genetic research that had never been asked of its community before the deliberation. Findings from this deliberation have been disseminated to the appropriate tribal authorities for review and consideration (Reedy et al., 2020b). Finally, the deliberation at MB posed very different questions than the two previous sites, in part because this site is not integrated into a specific tribal health care delivery system with defined research policies and processes, but is a private AI-owned research entity working toward the development of a biorepository within a tribal jurisdiction. These particular circumstances prompt important questions to ask of community members, especially about developing solutions for expanding research capacity and how to govern genomic data in ways that honor tribal sovereignty. These distinctions across sites necessitated different deliberation questions, and our approach reinforces the prospect of designing research that is both applicable across diverse sites and also directly responsive to the goals and needs of local communities.

Deliberations can yield informed and egalitarian discussions and are particularly valued by members of some minority groups (Gastil et al., 2010; Goold et al., 2005; Knobloch et al., 2013; Wang et al., 2015), but until now there has been little work that examines public deliberation exclusively in Indigenous contexts (Carson et al., 2013). Our work in deliberation became a feasible approach for engagement and dialogue in AI/AN contexts only through a coordinated process of melding disciplinary expertise with mutual learning and cooperation across all sites. Our cross-site research initiatives all strive to strike a balance between collective goals and local priorities. It is never guaranteed that all of our research initiatives will resonate across the three sites, nor will they necessarily lead to generalizable results, but the process of working together and creating spaces for different team members to lead as necessary ensures that we are constantly learning and challenging each other to explore more ethical approaches to genetic research.

Face-to-face meetings were key to facilitating the process of developing the surveys and the deliberations described above; in both cases, it was only after we transitioned from conference calls and emails to in-person workshop sessions that substantial progress began. In the site-specific planning sessions, team members co-developed the deliberation facilitator guide to be used at that site, which laid out the individual responsibilities of team members, processes for deliberative activities, and the specific materials and questions that would be presented to site deliberation participants. It became clear early in the process that the planning work being done at one site, regardless of specific deliberation questions, could be of use to the other sites. As such, sharing of document templates and planning experiences across sites occurred regularly. Another specific example of sites engaging one another is the use of case scenarios at each site.
Case scenarios are hypothetical depictions centered around topics related to each deliberation, designed to be read by deliberants and to facilitate discussion that includes reactions and considerations of how each scenario resonates within each community. While not initially part of the collective approach to deliberations, the success of the case scenarios at the first site's deliberation prompted their continued use at the other two sites. The scenarios enabled participants to consider issues related to genomics research in very personalized, albeit hypothetical, ways. The scenarios were co-designed by the sites and the CEIGR researchers and were tailored in ways that drew upon local concerns, terminology, family and kin structures, current events, and specific tribal experiences.

The specific approaches outlined here offer tangible considerations for how other research partnerships might approach the co-design and implementation of research in ways that align with one another and maintain the integrity of local community goals. The description of our sequential use of cognitive interviews, surveys, and deliberation provides a blueprint for how to implement research across unique sites in a way that is reflective of and builds upon the lessons learned at each site. We have also learned that the interpersonal nature of partnership building requires considerable attention and that our ability to conduct ethical and engaged community research must begin with our willingness to engage each other and appreciate the differences across sites.

Power Distribution

CEIGR is unique in that the University site is not a coordinating center; rather, each partner (SCF, the CN, and MB) shares responsibility for the development of all aspects of the Center, from administrative functions to the development of research agendas and manuscripts. This commitment to power sharing and effort distribution is evident in the budget allocation. Over half of CEIGR's direct costs are equitably distributed to the community partners so that each site can conduct site-specific work (e.g., data collection) without fiscal stress and in a manner that contributes to the collective goals of the Center. This budget structure reflects the centrality of the partner sites in accomplishing the kind of work prioritized at each site, and it contributes to the collective effort to accomplish activities in ways that are appropriate at each site. Our Center recognizes this as a more inclusive partnership model that elevates community-based investigators and is more responsive to community-driven initiatives, thereby establishing a more equitable approach to ELSI work in AI/AN contexts.

Decentralizing Budgets

Academic and community standards and expectations related to the research process are often misaligned. Budget decentralization creates opportunities to prioritize activities and personnel beyond conventional academic achievements and faculty positions. The equitable distribution of funds across all partner sites also promotes a plasticity of leadership within CEIGR that presents opportunities for partner sites to lead specific initiatives. The distribution of funds across sites presented an opportunity for sites to manage budgets according to their own research agendas, but it also underscored differences in each site's experience in creating and managing large National Institutes of Health-funded research budgets.
The SCF site lead, for example, developed a budget that was used as a template for the other two sites, thereby helping to build capacity at those sites and to assist in the coordination of CEIGR activities. One specific goal of our Center is the training and advancement of AI/AN junior scholars and early-stage investigators. To this end, the University budget maintains designated support for undergraduate and graduate students and post-doctoral researchers. Consistent financial support is an ideal that many students cannot always achieve, yet it is necessary for realizing the increased representation of AI/AN scholars in academia. Through consistent support from CEIGR, AI/AN students are able to be a part of the collective efforts of the Center while simultaneously building their own research networks for their future. Further, students benefit from participation in many CEIGR projects and contribute in leadership roles alongside all of the Center's members. The various disciplines and professional levels represented in the consortium have created a malleable space that allows for growth at all professional levels.

Manuscript Writing and Dissemination

Following the CEIGR practice of open, transparent group conversation on research design, presentations and manuscripts describing CEIGR work have developed in a similar manner. Developing a process to co-develop manuscript ideas, coordinate writing, and mentor partners on the publishing process was a key step in implementing manuscript development. Our processes for inviting writing contributions ranged from sharing manuscript proposals across the partnership to a bi-weekly manuscript workshop session. We sought to involve all members in consensus-building processes and writing teams. To advance manuscript development, three in-person meetings have focused solely on manuscript concept development with facilitated conversations. We implemented virtual writing sessions in which all consortium members participate via conference call. Finally, using a tracking spreadsheet developed by SCF, the Center has developed a tracking and consortium authorship concept. Of note within the partnership, a large number of non-academic partners are actively involved in manuscript authorship. Just as publishing is necessary for those in faculty positions, so it is vital for those partners actively pursuing grant funding. Graduate students are mentored in and actively participate in the manuscript writing process, take part in larger writing teams, and lead the conceptual development of specific manuscripts. Further, all CEIGR partners have participated in the dissemination of our work in a variety of formats, including poster presentations, oral presentations, round table discussions, community presentations and townhalls, professional panels, radio shows, webinars, and other community-specific outlets.

Our Center works across multiple tribal settings, and all partners coordinate their independent efforts to navigate the tribal research review processes our work must undergo. Akin to data ownership in tribal settings (Hudson et al., 2020; Woodbury et al., 2019b), manuscripts describing processes and outcomes associated with tribes, tribal data, and tribal members can be subject to tribal oversight (Blue Bird Jernigan et al., 2015; Hiratsuka et al., 2017). Tribal entities may not wish to have their research results published in journals or disseminated in certain public spaces (Tsosie et al., 2019).
Navigating the internal manuscript and abstract review and approval processes for each tribal partner requires forethought and planning to orchestrate timely approvals prior to dissemination activities. CEIGR partners have the dual roles of staffing the tribal processes and being subject to those processes. Our Center operates according to an informal principle of "coordination without a single coordinator", and we understand that achieving collective goals requires respectful collaboration, ongoing communication, and fluid leadership that responds to the emergent needs and challenges inherent in doing community-engaged research. This model of partnership alleviates the potential for any one site to be over-burdened and creates opportunities for each site to contribute expertise and assume leadership roles.

Co-learning/Capacity building

The Center for the Ethics of Indigenous Genomic Research, as introduced earlier, is a multidisciplinary consortium comprised of researchers with expertise in genomic sciences, anthropology, public health, communication, political science, bioethics, and Native American studies, and with a diversity of lived experiences to inform our approach to research and engagement. Beyond the assortment of disciplinary backgrounds within our Center, differences in the capacity and experience of our partners were also key to moving CEIGR's goal of cross-site research activities forward in all partner sites. While one partner in our Center had extensive expertise in conducting original research at its own site, other partners were looking to grow their experience beyond data collection to research design. Discrepancies between partners presented discernable opportunities to work together in ways that promoted capacity building through the execution of cross-site research activities. Data collection needs presented opportunities to co-learn new methods, which in turn created opportunities to explore new approaches to data analysis and sharing. There was an iterative, building-block nature to our collective approach to the research process, so that our ability to complete cross-site research activities as a Center rested on our collective skills and willingness to learn from one another. Co-learning was an essential piece of our research process; the ebb and flow of mutual learning opened up space to acknowledge our needs and enable everyone to contribute. The process of conducting cognitive interviews and deliberations at all three partner sites underscores the importance of capacity building through co-learning.

Cognitive Interviews

This community-site-driven process strengthened synergy across the consortium, built community site capacity, and informed future empirical recruitment and data collection. In the development and implementation of the cross-site survey described in detail above, the SCF site led the overarching scientific approach. To develop and implement consistent data collection processes, a community partner training was conducted and ongoing support was provided to community partners. SCF led this initiative by first facilitating dialogue across the consortium during in-person meetings in April 2017 and August 2017, and between in-person meetings via teleconference and email. Consortium members put forward survey items covering a variety of topics, including direct-to-consumer testing, genetic testing risks and benefits, science and society, and personal beliefs about biological specimens.
We used cognitive testing across the three sites to systematically evaluate the appropriateness of the survey questions. SCF developed the cognitive interviewing plan. Staff from the CN and MB traveled to Anchorage, Alaska, where SCF hosted a cognitive interview training workshop that included hands-on practice in recruitment, informed consent, survey data collection, data entry, interviewing, and interview notes. All three sites were trained using the same interview questions, survey items, data collection tools, and data entry Excel sheets that were used during data collection. Once cognitive interview data collection at each site was complete, the sites discussed the findings, and their implications for the survey items, via a video conference call. From this discussion, the cross-site survey was developed with site-specific items. Through the cognitive interview process, staff members at each site were able to gain confidence in research skills. Staff learned to comfortably discuss study aims and answer questions about the study, gained familiarity with recruitment locations, and were able to practice the systematic and interactive approaches involved in the conduct of research. This confidence and familiarity with conducting research prepared the community sites for the survey and deliberation work.

Deliberation

The deliberation planning process was, as mentioned above, a test of our collective commitment to develop and implement cross-site work. Community and University partners met at a face-to-face meeting to come to consensus on the cross-site approach we were going to take as a Center. Together, we decided that the process of each deliberation would be the same, but the content discussed would be site-specific. To accomplish this task, a core deliberation team, comprised of individuals from the University of Oklahoma (OU) and the University of Washington (UW), worked with each partner site to design the specific deliberation details. Each planning group maintained its own series of conference calls, complemented by a set of regularly scheduled consortium-wide calls. Maintaining a separate set of conference calls for each site and for the entire Center could be cumbersome and time-consuming, but it was also necessary for allowing each site to pursue its own directions independent of group consensus.

This model of cross-site deliberation planning revealed some unexpected group dynamics. SCF, for example, provided tremendous leadership early in the development of documents needed for the deliberation protocols. Their willingness to share these documents provided critical assistance to the other sites as they began their deliberation planning. CN was the first site to conduct their deliberation. As a result of being the first site in our consortium to conduct a deliberation, their insight and experience proved crucial in helping the other sites finalize their deliberation plans. MB was the final partner to conduct their deliberation, and the methods that we had similarly employed at each site were received quite differently at this community site; this preliminary finding suggests that the MB deliberation team may offer important feedback for the evaluation of our deliberative approach.
The cross-site deliberations presented each site an opportunity to lead, reinforcing the importance of conducting research in a way that promotes equitable opportunities to lead, mutual learning among all partners, and capacity building. The deliberations centered around local questions stemming from the specific needs and concerns of each community; as such, the deliberations provided concrete input on issues of central importance to each community. A preliminary report summarizing input from each deliberation was sent to all deliberants and revised according to their recommendations, before final reports were disseminated to the appropriate tribal leaders and administrators at each site for consideration of next steps. We will be reporting on those next steps as they are decided upon locally.

Navigating Capacity Building

Building Center-wide and individual community site capacity has required much patience and persistence from each CEIGR member, but it was necessary for accomplishing equitable cross-site work. Together, we have learned to allow one another to take the lead and to allow the group with expertise to lead when necessary. In some cases this has been University staff; in other cases it has been community site staff; often, we lead together. The deliberations, as discussed above, were an example of the University and community sites needing to work together and recognizing the need for expertise on deliberation. The invitation of the UW deliberation expert created a level playing field for mutual learning among all CEIGR members, and facilitation by the deliberation expert opened up space for all consortium members to bring specific strengths forward to conduct the deliberations. While the introduction of new team members and new approaches requires careful attention to the effect on group dynamics and research direction, the absence of a strict hierarchical structure within CEIGR promotes an environment that is receptive to new partners and new ideas.

Building Trust through Communication

As noted, maintaining face-to-face communication in constant and consistent ways is key, as the dynamics of the consortium are always changing. A tremendous amount of interpersonal communication is necessary to ensure that we achieve enough common footing to allow for equitable research across the consortium. Further, regular meetings provide opportunities to establish agreement on broad ethical principles, thereby strengthening the foundation upon which diverse stakeholders can manage power dynamics and build relationships (Hoover et al., 2019). Achieving this common footing requires continual check-ins with each other to understand the changing needs of each partner. The differential capacities of each site mean that one site may experience feelings of "being territorial" over certain research activities or that one site may struggle to garner recognition in comparison to the successes of other sites. One strategy for overcoming this is to institute face-to-face meetings as a regular activity of the Center. It is essential for all partners to be able to communicate changing comfort levels with Center activities, and regular meetings that all partners expect and plan for create a space for communicating such feelings. Building trust in AI/AN communities is the guiding imperative in the work we seek to do.
Centering research around community-placed researchers and community-based organizations (as opposed to academic institutions removed from the communities most impacted by research) establishes a more consistent presence and places the entire research process within the socio-political, historical and cultural contexts that shape the experiences of community members. The prolonged presence of community-placed researchers helps establish long-term relationships, facilitates trust, and provides opportunities to receive community input and to incorporate community feedback to improve data collection strategies and the questions we ask.

The Value of ELSI Work

The CEIGR consortium is comprised of several organizations with complementary but non-identical research priorities and capacities that differentially affect their interest in and ability to support a range of ELSI research projects. Research priorities affect the perceived value of ELSI because conducting this research imposes opportunity costs on CEIGR partners. For example, conducting ELSI research consumes personnel, material, and financial resources that could be dedicated to other activities. Similarly, time and resources spent developing expertise in ELSI shape future research opportunities, since funding decisions are based in part on prior research experience. The perceived value of ELSI is also affected by differences in research capacity that alter the nature and scope of the commitments that CEIGR partners must make in order to contribute to the consortium's research projects. Partners that must first build capacity in order to conduct ELSI research will commit more resources to these projects than those that already possess most or all of the necessary expertise. The perceived value of ELSI will be greater among organizations with research priorities that are advanced by engaging in this area of research and that can obtain the benefits of ELSI without substantial investments in capacity.

In most contexts, the focus on normative questions within ELSI research means that its impacts on human health can be indirect and difficult to quantify, especially by comparison with those arising from basic science, translational, and clinical research (Parker et al., 2019). Given the sensitive nature of genomics as the subject matter, a history of research exploitation, and lawsuits and policy change in tribal contexts, ELSI work seems essential to ameliorating these issues for the increased benefit and quicker implementation of state-of-the-art research (Walker & Morrissey, 2012). The deliberative work completed by CEIGR, for example, is based on systematic inquiry into ELSI issues, but the actionable direction and policy guidance that emerged from this work is a direct service to the partnering AI/AN communities. In addition, members of the CEIGR consortium have advocated for enhanced protections for research participants and increased commitment to community engagement in, and shared control over, research processes (Chadwick et al., 2014, 2019; Hudson et al., 2020; Tsosie et al., 2019; Woodbury et al., 2019b). These actions offer ethical and scientific benefits, but can also increase the cost, complexity, and duration of research (Buffalo et al., 2019).
Funding mechanisms that support inquiries into ELSI research are essential for many AI/AN communities working to give voice to concerns that have been unheard or underrepresented in conventional research arenas, and to elevate the urgency of these concerns to be on par with larger, more powerful institutions. Despite such challenges, there is evidence of sustained interest in the findings and recommendations of ELSI research. In particular, the kind of robust community engagement in health-related research studied and advocated by CEIGR partners has received attention from researchers and communities interested in utilizing these approaches in their own research. CEIGR takes seriously the significance of sharing our successes and challenges as they relate to addressing persistent questions in ELSI and to cultivating a new relevance for ELSI work for tribal and other extra-jurisdictional communities, for whom research protections have often been more reflective of colonial constructs and agendas (Hudson et al., 2020) than in service to the unique political designations and worldviews of sovereign AI/AN tribes and other Indigenous peoples.

Conclusion

Our Center emerged in response to the persistent, unresolved issues that kept all too many tribal nations from participating in genomic research. We coupled this understanding with a commitment to pursue engagement in ways that promoted increased representation, dialogue, and inclusion of AI/AN researchers and community perspectives. The partnership itself is always in a constant state of becoming. Moving forward, we continue to be guided by these principles and have structured the direction of CEIGR in ways that shift power away from a traditional university-PI model toward a model of engagement and research that is inclusive of community-based investigators and prioritizes community-driven inquiries. At the center of our collective efforts is respect for tribal authority and research oversight in all aspects of this work. There is no doubt that histories of missteps with respect to tribal authority have done significant damage to research progress in AI/AN communities, and our respect for these processes, both in gaining approval for specific projects and in the review and approval of manuscripts, has resulted in a kind of research process that challenges conventional timelines and outputs. Nonetheless, in our fourth year as a Center, we have achieved some significant milestones allowing for cross-site efforts led by the tribally based partners. We funded a series of local pilot projects that permitted each of our community partners to articulate a research agenda that could be brought into dialogue with us and with the other partners. We jointly developed and administered the first systematic survey of attitudes toward genomics in AI/AN communities ever attempted. We conducted deliberations at each of our community partner sites. Finally, we had successes in the areas of professional advancement, supplemental funding, and program development.

CEIGR was founded upon a commitment to do the kind of work that can be difficult to fund under many established funding mechanisms (the building of relationships with tribal communities), in our case to advance honest dialogue about the place of genomics in AI/AN communities. This commitment rested upon the collaboration between three community partners, new to each other, to jointly articulate a research agenda.
This work cannot be the standard kind of hypothesis testing that has shaped research for so long. Rather, it requires an openness to community concerns and adherence to a research agenda that can be difficult to specify and negotiate between multiple partners. Partnership building is not without challenges, but a mutual commitment to community-centered research and a concerted effort to keep communication open are key to finding success within this model.
Developing an Optimal Model of Iran's Countermeasures against the Threats of Economic Plans of the Major Powers in Central Asia

In this article, the authors have sought to develop and present an optimal model of Iran's countermeasures against the threats of the economic plans of the major powers in Central Asia. This qualitative research uses mixed methods (i.e., thematic matrix and thematic network) to collect and analyze data. Since the thematic matrix was used as a data analysis method, an indirect observation study (analysis of textual material) was performed, data were collected through purposeful sampling of existing textual materials, and a comparison and an analysis were made to specify commonalities and differences. In addition, since the thematic network was used, research data were collected through semi-structured interviews with 10 experts, who were selected using theoretical sampling; the collected data were analyzed using the thematic network analysis method. Eventually, a conceptual network model was constructed and interpreted. The findings of the qualitative research, while identifying opportunities and threats, revealed that the optimal model of Iran's countermeasures against the economic plans of the major powers in Central Asia had reached theoretical saturation with three global themes: the adoption of economic diplomacy by Iran, the adoption of soft diplomacy by Iran, and efforts to exit sanctions and remove sanction barriers. The most appropriate strategy for Iran to confront the threats of the economic plans of the major powers in Central Asia is to adopt convergent diplomacy in the form of various kinds of diplomacy and the removal of the sanction barriers.

Introduction

The economy and its importance have been a cause of the rise and fall of many civilizations throughout human history. Those governments and civilizations that managed to build a strong economy and reduce poverty and unemployment flourished. On the contrary, those civilizations that could not reach a conceptualization of economic power diminished. Therefore, financial and economic activities have always been of immense importance. This is significant since human beings, whether individually or collectively, perceive the economy and find themselves engaged with it. Its importance for society also lies in the fact that growth, prosperity, affluence, fortune, power, independence, greatness, and achieving the desired perfection for society are all affected by economic issues. Of course, this does not imply that the economy must be considered a substructure; rather, it merely indicates its great importance and suggests that if a country is economically, industrially, and technologically advanced and developed, it will be superior in other political and cultural aspects as well. According to this argument, we can declare that the economy is not defined only in terms of subsistence and home management; rather, it encompasses a wide range of dimensions of collective living. This issue has caused different countries to employ various strategies and methods to achieve development, especially in the economic field, rather than confining themselves to domestic potentials and resources, and to seek their financial and national interests beyond their national borders (Wu, 2018, pp. 5-12).
Thus, it can be stated that along with the process of globalization and the increase in the significance of economic development, countries are trying to pursue their national interests in economic dimensions and different regions through adopting various plans and programs. One of these regions is Central Asia, which, for various reasons, has an exceptional significance in global geopolitics. This importance has manifested itself differently for regional and international powers, specifically after the dissolution of the Soviet Union and the independence of the countries of the region, each having its unique ethnic, geographical, economic, and religious characteristics. Such an exclusive and unique location has led this important geopolitical area to undergo significant developments across its historical context and has added to its economic importance in geoeconomic discussions. This happens because of two substantial issues. Firstly, the Central Asian region was a relatively closed region during the Soviet era, which regional and trans-regional powers could not enter and which remained relatively unknown; however, for more than two decades since the independence of these countries, a new economic situation has dominated the region. Secondly, given its geopolitical and economic importance and its human and natural resources, the Central Asian region has been a contact point for many cultural, civilizational, political, and economic projects. We observe numerous economic projects, such as the Eurasian Economic Union, the New Silk Road, the Great Central Asia Initiative, "One Belt One Road," the TAPI Economic-Energy Plan, and the North-South Corridor. Each of these plans, pursued by the world's great and middle powers, could present opportunities and threats to Iran's national interests.

The importance and necessity of this research are clear, because the vast presence of these powers in the area may intensify competition; these great powers can become serious competitors for Iran in terms of energy transit, transportation of goods, and the implementation of economic programs, and they can seriously affect Iran's national interests. Therefore, based on the arguments mentioned above, the present study aims to examine the opportunities and threats of the economic plans of the great powers in Central Asia for Iran and to develop an optimal model of Iran's countermeasures against the threats of the economic plans of the major powers in Central Asia based on a qualitative study.
Statement of the problem

Central Asia is a region that has always been the center of attention of regional and supra-regional powers. All these powers are pursuing their economic goals in Central Asia, so they have implemented special plans in this region in the form of the "Eurasian Economic Union," "New Silk Road," "The Wider Central Asia Initiative," "One Belt One Road," "Organization of Turkic States," "The Turkic Council's Modern Silk Road," "TAPI," and "International North-South Transport Corridor." The most important incentives that forced these powers to implement the above plans are geopolitical calculations, economic interests, or perhaps both. Thus, the interaction between economics and geopolitics affects the goals and intentions of these powers and suggests two rational assumptions. The first assumption is that the great powers and the Central Asian states are pursuing their national economic interests. The second assumption is that the economic plans that the great powers implement in Central Asia may lead to many opportunities and threats for Central Asian states and neighbors such as Iran.

Concerning the opportunities, Saddiq (2004), Tammana (2006), Karami and Kuzagar Kaleji (2014), Ordabayev (2015), Rezapour and Simbar (2018), and Mishra (2015) have acknowledged, respectively, opportunities including the following: (a) increasing Iran's security role in the developments of the region, (b) the presence of the United States in the region and Afghanistan to establish security and fight the Taliban and its opportunities for Iran, (c) business and transportation, (d) increasing the significance of Iran's transit position and Iran's transportation, (e) upgrading Iran's position in the global economy and diversifying energy importation sources, and (f) upgrading the significance of the North-South Corridor.

Furthermore, Garlick and Halova (2020) believe that Iran's participation in the SCO and the China Belt and Road Project improves Iran's influence and regional role. Thus, Iran can facilitate the development of its infrastructure and internal transportation network by participating in these frameworks.

Regarding the threats, Yazdani and Fallahi (2016), Taheri and Bayat (2018), Shafiee (2017), Mousavi et al. (2014), and Rezapour and Simbar (2018) have enumerated, respectively, threats such as (a) Russia's preventing Iran from entering energy exchanges in the region, (b) India's policy in the area and its assistance in advancing America's goals and targets, (c) America's contribution to the TAPI project and the threat it poses to Iran's interests, (d) the failure to start the peace pipeline (Iran-Pakistan-India gas pipeline) and the decline in the role and position of the Islamic Republic of Iran in the region, and (e) China's supremacy over Iran's geoeconomic structure.

Moreover, Graber (2020) outlined Russia's goal in implementing its economic plans as restoring its influence and authority in the region and decreasing the influence of the US, the European Union, Iran, and Turkey. Kazantsev et al. (2021) reported threats such as the conflict between Russia and China, the intensification of multilateral foreign policy by countries in the region due to this conflict, and the arrival of major powers into the region, which can pose security challenges for Iran.
According to the analysis of the abovementioned empirical records and the enumeration of the threats and opportunities of the economic plans of the powers in Central Asia, it can be stated that investigating the economic goals of the major powers in Central Asia is significant in these respects. Moreover, these plans can pose myriad threats and opportunities for Iran's national interests. Yet, what distinguishes this study from previously conducted studies is its attempt to present an optimal model and appropriate solutions that enable Iran's policymaking apparatus to deal with the threats arising from these plans and to use the available opportunities within them properly.

Therefore, the general questions of the present study are the following: What are the threats and opportunities of the economic plans of the great powers in Central Asia for Iran? What will be the optimal model of countermeasures by Iran against the threats of the economic plans of the great powers in Central Asia?

Iran's economic diplomacy in Central Asia

Based on the 20-Year Vision Document in the 2025 Horizon of Iran, which is considered the most significant upstream document of the country after the Constitution of the Islamic Republic of Iran, interactions and foreign trade development strategies are particularly emphasized with the Southwest Asian countries, including Central Asia, the Caucasus, the Middle East, and neighboring countries. Accordingly, Iran pursues practical and constructive cooperation with its neighbors in North and Central Asia in the form of different organizations: the Economic Cooperation Organization (ECO), cooperation in the Caspian Basin, and the Shanghai Cooperation Organization. Figure 1 displays a map of Iran's geographical location among the Central Asian countries.

In general, evaluating the bilateral economic relations and the different strategic economic plans of Iran in the Central Asian region indicates that Iran's actions, especially in developing economic relations with Central Asian countries more than two decades after the independence of the countries in the region, are considerable and have followed an upward and forward trend. In this regard, the volume of Iran's trade relations with the countries of the Central Asian region grew over 27 years from about US$300 million in 1995 to about US$4 billion in 2005 (Kozhanov, 2012, p. 8, cited in Dehghani Firoozabadi & Daman Pak Jami, 2016, p. 49). Furthermore, this amount increased from about US$3.7 billion in 2011 to US$5.3 billion in 2015 (Kuzegar Kalchi, 2015, p. 126). Finally, the volume of Iran's trade relations in 2022 was estimated at US$5.63 billion (IRNA-News Agency of the Islamic Republic of Iran, 2022.04.9).
Literature review

In general, the importance of the economy has made its role and influence undeniable in the formation of theories of international relations. One of these theories is Neorealism. Based on this theory, the economy plays a pivotal role in international relations and in the relations among the world powers, since major powers and governments seek access to natural resources such as raw materials, oil, and gas in the form of energy sources to maintain their hegemony. Energy and energy transmission pipelines are therefore of utmost use: they attract foreign investment, provide a reasonable ground for developing regional cooperation, consolidate economic infrastructure, increase the influence and political roles of countries as an important diplomatic tool to accomplish and promote bilateral and multilateral economic, political, and cultural goals, enhance cooperation among neighboring countries, and establish regional peace and stability.

Following the developments since the 1980s, theoreticians such as Cohen and Gilpin, citing theories like "hegemonic stability," believed that one of the fundamental characteristics of hegemonic power in any age is control over resources, pipelines, and routes of energy transmission; given that oil is a form of energy, energy can be converted to money, money generates control, and control is considered power. Accordingly, the ascendancy of a hegemonic government relies upon control over four types of resources: the world's raw materials and energy, the world's capital resources, global markets, and the production of high value-added goods (Sadeghi, 2012, p. 22). This has led these powers to adopt economic plans in varying parts of the world and to significantly impact geopolitical, economic, political, and security areas.

In this regard, the theory of Institutionalism also holds that individual interests should be set aside and collective economic interests pursued by forming unions and cooperative organizations across countries. It is on this basis that financial plans in the form of the "Eurasian Economic Union," "One Belt One Road Initiative," "the Great Central Asia Initiative," and "the New Silk Plan" can be investigated and studied. That is, powers such as Russia, China, and the United States seek to design and adopt economic policies that can serve the interests of themselves and their partners.

One of the critical points in the framework of neoliberal institutional theory that Richard Haass points out is the gradual politicization of actors' goals. That is, although actors may initially pursue technical and non-contentious goals within the framework of cooperation, they gradually agree to use all possible and available instruments to achieve their technical-economic goals. This is called horizontal-vertical expansion, or expanding logic, and is claimed to broaden transnational cooperation from one sector to another to overcome new issues arising from the initial agreements (Daneshnia, 2012, pp.
148-149). Regarding the geoeconomic discussion, it can be stated that over the course of time and with the arrival of the 21st century, we are witnessing the replacement of the military component with the economic component, and countries determine their position in the world in this way. Geoeconomics is the geographical context of a country's economy that defines and determines the foundations of the economy in power relations through an outward-looking approach. When part or all of a country's financial capabilities depend on geographical considerations, a geographical economy, or geo-economy, is formed. Geopolitics offers an economic reading of the status quo and assumes a geoeconomic aspect when the economy is the motive for power struggles. Thus, geoeconomics studies the impact of national, regional, or global factors and economic infrastructures on political decision-making and power struggles, and their influence on the formation of regional or global geopolitics (Gholizadeh & Zaki, 2008, p. 27). While in the past superiority was based on military and strategic concepts, or geopolitics, geoeconomics describes a different form of competition in which, in the current era, governments' power is measured by economic progress (Mahkoubi & Goudarzi, 2019, pp. 521-522). Therefore, countries that can achieve economic development and manage to dominate natural, raw, and energy resources can impact international relations.

On the other hand, according to the Copenhagen School, it can be declared that security today does not have only a hardware, physical dimension; rather, what constitutes the current structure of security is the eradication of destitution and unemployment, economic development, high per capita income, exports exceeding imports, and so on. The definition of economic development demonstrates how the concepts of "economic development" and "security" are complementary and closely related. On that account, if a government fails to ensure economic security along with other security issues, it lacks national security (MaVafca, 2014, pp. 23-89). This affinity has led to the redefinition of the concept of security based on economic components, since security, whether economic, social, or political, is one of the most important indicators and components of the development of countries; it is related to the institutional characteristics of an economy and can be defined as an institutional framework that encourages and builds confidence for savers and investors.
Anyhow, what is of utmost significance in summarizing the debate is that, firstly, economic interests and access to raw and natural materials are priorities for the major powers, which has led them to pursue their economic interests beyond their national borders. Secondly, given the sensitivity of the target countries to political, economic, and security goals, the great powers are trying to pursue their economic and political profits in the form of cultural goals under the banner of cultural diplomacy, since cultural diplomacy meets less opposition and resistance on the part of target countries and is mainly pursued by civil society organizations. On this basis, and according to the definition by Milton Cummings, cultural diplomacy is the exchange of ideas, information, art, lifestyle, value systems, traditions, and beliefs to achieve common concepts and enhance mutual understanding among nations and countries. Due to the long-term sustainability and effectiveness of cultural diplomacy, recognizing its sensitivity and investing and making policy in this area seem essential for those in charge. In this respect, the most important requirement is the training of efficient human resources who are familiar with the fundamentals of cultural development, fluent in the language, and competent in the principles of negotiation in the world system (Salehi-Amiri & Mohammadi, 2016, p. 14). In an article entitled "Cultural Diplomacy, Political Influence, and Holistic Strategy," John Lankowski lists the tools of cultural diplomacy. In addition to elements such as art, history, cultural exhibitions, educational programs and exchange, language teaching, and media, he introduces religious diplomacy as an instrument and a topic of cultural diplomacy (Shaykh Al-Islami, 2012, pp. 104-106).

Therefore, in addition to cultural diplomacy, the world's major powers have recently employed two further approaches, namely, soft power and scientific and technological diplomacy. Joseph Nye, in the book Soft Power: The Means to Success in World Politics, mentions that a country's soft power is the opposite side of its hard power; unlike hard power, which relies upon coercion, soft power emphasizes persuasion and strives to attract others through appeal. Soft power is the attractiveness of a country in the eyes of others (Nye, 2011). The premise of soft power is that actors need to attract others to their specific viewpoints, which are considered legitimate and valid. If an actor attracts others to his point of view, more expensive sources of hard power will no longer be required. In the current situation, soft power results from the provision of information and the creation of attractiveness. Accordingly, it is a power source for countries on the international scene (Nye, 2011).

Furthermore, a country's scientific and technological capabilities and its interactions with other international actors in this field are called the diplomacy of science and technology. In other words, governments and other international actors (governmental and non-governmental international organizations) pay attention to both aspects of science and technology diplomacy (Riazi et al., 2019, p. 653). This is because the progress of science and technology, developed in the world's most influential scientific centers, has undeniably influenced international relations, the global economy, and the world community.
Concerning the Central Asian countries, this kind of diplomacy can be prominent for various reasons. Firstly, the countries of the region, due to scientific and technological flaws, have a strong need to expand their relations in this regard. Secondly, the dissolution of the Soviet Union provided the necessary ground for these types of relations with the great powers. Thirdly, this kind of diplomacy is largely non-sensitive and does not provoke opposition from other countries and governments. Eventually, it can be claimed that today, what distinguishes cultural diplomacy, soft power, and scientific diplomacy from other types of diplomacy is their direct relationship to national goals and interests, whereas other sorts of diplomacy are often formed with an economic or commercial benefit in mind. Without the government's direct involvement, cultural and scientific diplomacy is constructed upon the principle of mutual interests and common purposes, and the relationship between the two parties can be the chief motive for cooperation. Consequently, it can be observed that the world's major powers have serious interests in pursuing and consolidating influence in their trans-regional territory, exerted through the formation of security coalitions or the establishment of multilateral economic regimes. Both security alliances and economic initiatives can lead to integration and regional cooperation without force or pressure on the external parties.

The important points of this discussion are as follows: (a) economic interests and access to raw materials and natural resources are the first priorities of the great powers; therefore, they pursue their economic interests outside their borders; (b) given the sensitivity of the target countries to political, economic, and security goals, the great powers are trying to pursue their economic and political interests in the form of cultural goals and cultural diplomacy, because target countries usually do not object to cultural diplomacy (Hasan-Khani, 2007, pp. 138-139). The great powers have recently used soft power and scientific and technological diplomacy besides cultural diplomacy because (a) the Central Asian states are weak in the science and technology field and need help; (b) the collapse of the Soviet Union paved the ground for this change of approach; (c) states are less sensitive to this kind of diplomacy and do not oppose it; and (d) what distinguishes cultural, soft power, and scientific diplomacy from other types of diplomacy is its direct relationship with national goals and interests, because the aim of other forms of diplomacy is sometimes economic and commercial interests without any direct participation of the government, while cultural and scientific diplomacies are based on mutual interests and common goals, and the relationship that exists between the two parties is the main motivation for cooperation (quoted by Koolaei & Azizi, 2017, p. 1049).
Research method

The researchers used a qualitative research method to answer the research questions. The data were analyzed through thematic analysis. A theme can be defined as an indicator of important information regarding the data and research questions; to some extent, it demonstrates the meaning of the existing pattern in a set of data (Braun & Clarke, 2006). In other words, a theme is a repetitive and distinctive property in the text that manifests a thorough understanding and experience concerning the research questions (King & Horrocks, 2010). From this point of view, thematic analysis is a method for identifying, analyzing, and reporting existing patterns within qualitative data. This method analyzes narrative and textual data and transforms diverse and scattered data into technical and detailed data (Braun & Clarke, 2006). Thematic analysis is conducted in different ways; in this research, the thematic matrix and the thematic network were used. The thematic matrix was used to identify and compare the opportunities and threats of the economic plans of the great powers in Central Asia for Iran, and the thematic network was used to identify and formulate an optimal model of Iran's countermeasures against the threats of the economic plans of the major powers in Central Asia.

The thematic matrix is a method first proposed by Miles and Huberman (1994) and is used for comparing themes with each other in textual data. In this way, textual data from various sources or individuals are compared to identify similarities and differences.

Therefore, the research data were collected through purposeful sampling from existing documents (traditional and virtual, including books, sites, articles, and research), using the thematic matrix type of thematic analysis and indirect observation (text analysis). The research data and basic concepts were then organized through thematic coding. In addition, the opportunities and threats of the economic plans of the great powers in Central Asia were identified and finally compared and analyzed in terms of similarities and differences.

The thematic network is a thematic analysis method developed by Attride-Stirling (2011). To obtain a thematic network, the following steps need to be performed: A. discovering basic themes (identifiers and key points within the text), B. discovering organizing themes (themes obtained from combining and summarizing basic themes), and C. discovering global themes (higher-order themes containing the principles that govern the text as a whole). After performing these steps, the obtained themes are drawn as web maps.

Therefore, through thematic network analysis, the research data were gathered and theoretically saturated by conducting semi-structured interviews with 10 experts and scholars in the fields of regional studies and international relations, who were selected through theoretical sampling.
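As a minimal sketch of how such a thematic matrix can be organized (in Python; the source-to-theme mapping below is purely illustrative and is not the study's actual coding), one might tabulate which textual source mentions which theme and then read off the commonalities and differences:

# Thematic matrix sketch: sources as columns, themes as rows.
# The mapping below is hypothetical, loosely echoing the literature review above.
sources = {
    "Saddiq (2004)": {"security role", "transit position"},
    "Ordabayev (2015)": {"transit position", "energy importation"},
    "Mishra (2015)": {"transit position", "North-South Corridor"},
}

themes = sorted(set.union(*sources.values()))
print("theme".ljust(24), *sources.keys(), sep=" | ")
for theme in themes:
    marks = ("+" if theme in mentioned else "-" for mentioned in sources.values())
    print(theme.ljust(24), *marks, sep=" | ")

# Similarities are themes every source shares; the rest are differences.
common = set.intersection(*sources.values())
print("common:", common, "| source-specific:", set(themes) - common)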
According to Glaser and Strauss (1967), theoretical sampling is the process of collecting data for theorizing, through which the analyst simultaneously collects, codes, and analyzes the data and decides what data to collect next and where to find them in order to develop the theory as it emerges. The theory being developed controls the data collection process (quoted from Flick, 2006). Therefore, the samples here are not meant to be representative of a statistical population; rather, they are important in the sense that they help to construct the investigated phenomenon and formulate the theory.

Furthermore, to analyze the data, the research data were codified through the process of theoretical coding (consisting of open coding, axial coding, and selective coding), and basic themes, organizing themes, and global themes were described and analyzed so as to identify and develop an optimal model of Iran's countermeasures against the threats of the economic plans of the major powers in Central Asia. Moreover, to validate (establish the credibility of) the themes and qualitative findings, a communicative method of evaluation and the formation of a focus group were implemented (Flick, 2006, pp. 220 and 415). In the communicative validation method, data validation was performed by the subjects of the study (i.e., the interviewees). Additionally, through forming a focus group consisting of six experts and scholars in the fields of regional studies and international relations, the qualitative data of the research were controlled and evaluated. Also, to assure the consistency (reliability) of the qualitative findings, the two methods of replicability and transferability (generalizability) were employed (Strauss & Corbin, 1990, pp. 283-284; Sarukhani, 2014, p. 289). Concerning the replicability of the qualitative findings, the agreement coefficient method between the coders (researcher and co-researcher) was used, and inconsistencies were eliminated through reviewing the data codification process. Regarding the transferability or generalizability of the findings, a regular and comprehensive theoretical sampling method (interviewing various levels of academic and executive experts) was used as far as possible so that the results would be generalizable. Table 1 presents the list of sample members of the research in the semi-structured interview.

Research findings

The findings of the thematic matrix

Different studies have investigated the economic plans of the great powers in Central Asia. Therefore, this section analyzes the threats and opportunities from domestic and international studies using the thematic matrix. As each of these threats and opportunities is highlighted, their commonalities and differences are also examined so that a model of coping strategies can be presented in the second section. Thematic coding was conducted on the opportunities and threats that the economic plans of the great powers in Central Asia have for Iran; the basic, organizing, and global themes were then identified, the opportunities and threats of the different economic plans were compared based on the identified global themes, and finally their similarities and differences were specified.
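The inter-coder agreement check mentioned in the method section can be made concrete with a small computation. The sketch below (in Python, with hypothetical codes; the study's actual coded statements are not restated here) computes simple percent agreement and Cohen's kappa between two coders:

def cohens_kappa(coder1, coder2):
    # Observed agreement: share of items both coders labeled identically.
    n = len(coder1)
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement, from each coder's marginal label frequencies.
    labels = set(coder1) | set(coder2)
    p_e = sum((coder1.count(l) / n) * (coder2.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical theme codes assigned by the researcher and the co-researcher.
researcher = ["economic", "soft", "economic", "sanctions", "soft", "economic", "soft", "sanctions"]
co_researcher = ["economic", "soft", "soft", "sanctions", "soft", "economic", "soft", "economic"]

agreement = sum(a == b for a, b in zip(researcher, co_researcher)) / len(researcher)
print("percent agreement:", agreement)                                       # 0.75
print("Cohen's kappa:", round(cohens_kappa(researcher, co_researcher), 3))   # ~0.61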
The findings of the thematic network

In this section, the qualitative findings are analyzed in three steps as follows:

Step one: Discovering basic themes. In the first step of the thematic analysis, to achieve the optimal model of Iran's countermeasures against the threats of the economic plans of the major powers in Central Asia, the data obtained from interviewing the experts were first compiled as declarative statements. Next, using the theoretical coding process, specifically open coding, the basic themes, or the identifiers and key points within the text, were identified. Through this procedure, 156 basic themes of the optimal model of Iran's countermeasures against the threats of the economic plans of the major powers in Central Asia were identified and enumerated. (To avoid prolonging the article, the open coding table extracted from the declarative statements obtained from the semi-structured interviews with experts is not presented.)

Step two: Discovering organizing and global themes. Following the theoretical coding, mainly axial and selective coding, the organizing themes were first identified and enumerated by combining and summarizing the basic themes. Then the global themes of the optimal model of Iran's countermeasures against the threats of the economic plans of the major powers in Central Asia were identified by combining and summarizing the organizing themes. The process of this analysis is shown in Table 2.

Step three: Developing the thematic network. Before developing the thematic network, the numbers of global, organizing, and basic themes of the optimal model of Iran's countermeasures against the threats of the economic plans of the major powers in Central Asia were extracted from the qualitative data obtained from the semi-structured interviews with 10 experts in the fields of regional studies and international relations; these are shown in Table 3.

As Table 3 demonstrates, the optimal model of Iran's countermeasures against the threats of the economic plans of the major powers in Central Asia reached theoretical saturation with three global themes, ten organizing themes, and 156 basic themes, based on semi-structured interviews with 10 experts in the fields of regional studies and international relations.

After enumerating and extracting the global, organizing, and basic themes, an attempt is made in this section to draw the conceptual model, that is, to form the thematic network of the optimal model of Iran's countermeasures against the economic plans of the major powers in Central Asia (Figure 2).
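The three-level hierarchy behind Figure 2 can be represented directly as a data structure. The Python sketch below encodes a small, illustrative slice of the reported network (the three global themes are those named in the text; the organizing and basic themes shown are a selection, not the full set of ten and 156) and prints the edges of the web map:

# Illustrative slice of the thematic network: global -> organizing -> basic.
network = {
    "Adoption of economic diplomacy by Iran": {
        "Energy diplomacy": [
            "Attracting foreign technology and investment in the field of energy",
            "Increasing energy exports",
        ],
        "Road diplomacy": ["Completing the Chabahar-Sarakhs railway"],
    },
    "Adoption of soft diplomacy by Iran": {
        "Cultural diplomacy": ["Using capacities such as Nowruz"],
    },
    "Efforts to exit sanctions and remove sanction barriers": {
        "Removing sanction barriers for economic activities": [],
    },
}

# Walk the hierarchy and emit the web-map edges, as drawn in Figure 2.
for global_theme, organizing in network.items():
    for organizing_theme, basic_themes in organizing.items():
        print(f"{global_theme} -> {organizing_theme}")
        for basic_theme in basic_themes:
            print(f"    {organizing_theme} -> {basic_theme}")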
Discussion

Central Asia is one of the regions that has always been, and will always be, a primary focus of regional and trans-regional powers. This has led to the formation of unique policies in this region. Once there was the "Great Game" between Britain and Russia in the 19th century; now the rivalry between regional and trans-regional powers has led to the formation of a "New Great Game" in the region, in which each actor pursues its political, military, cultural, and economic goals. What regional and trans-regional powers share in Central Asia is the pursuit of economic goals. Therefore, each of the powers has devised special plans for this region in this regard, some of which have been addressed in this study. What is of utmost significance with respect to these plans is the type of their designs and the purposes behind them, which can be accompanied by manifold threats and opportunities for Iran (Table 4).

Thus, regarding the opportunities and threats of the economic plans of the powers in Central Asia, and to achieve an optimal model of Iran's countermeasures against the economic plans of the major powers in Central Asia, semi-structured interviews with 10 experts and academic pundits in the fields of regional studies and international relations were conducted, and the following results were obtained:

A. Adoption of economic diplomacy by Iran: Iran must heed the following points in this type of diplomacy: (1) efforts to strengthen internal infrastructures for economic activities with the region, (2) involvement in the economic plans of the major powers in the region, (3) Iran's investment and economic cooperation with the countries in the region bilaterally and multilaterally, (4) having an active and pragmatic strategy for Iran's economic relations in the region, (5) paying attention to energy diplomacy and playing an active role in the energy market, and (6) paying attention to road diplomacy.

The above result, the adoption of economic diplomacy by Iran, is in line with the theoretical approach of Neorealism in the sense that the economic factor plays a vital role in international relations and in relations among the powers. In this regard, the theory of Institutionalism also holds that individual interests must be set aside and collective economic interests pursued in the form of unions and cooperative organizations across countries. Moreover, geoeconomic theory, which studies the economy and the relationship between geography and the power of countries, also emphasizes the importance of the economy in the global arena and in the formation of regional groupings based on the economy.
Therefore, according to the abovementioned theoretical records (the theory of Neorealism, the theory of Institutionalism, and geoeconomic theory), Iran can pursue economic diplomacy in the form of strengthening internal infrastructure, playing an active role in the energy market and the transit of goods, investing in and forming economic partnerships with the countries of the region, and collaborating in the economic plans of the powers in the region, provided that this is based upon a wise policy and serves Iran's greatest national interests.

(Table 2, which lists the basic and organizing themes under each global theme, appears at this point in the original layout; its cell contents are not reproduced in the running text.)
B. Adoption of soft diplomacy by Iran: In this type of diplomacy, Iran must attend to cultural diplomacy and to scientific and technological diplomacy, which promote economic interactions among countries at various regional and trans-regional levels.

Thus, according to the theoretical records mentioned above (cultural diplomacy, soft power, and the diplomacy of science and technology), Iran can operationalize soft diplomacy in the form of cultural diplomacy and scientific and technological diplomacy against the economic plans of the great powers in the region. On the one hand, cultural diplomacy, due to its multifaceted impact and the civilizational affinity of Iran with the Central Asian countries, enjoys greater effectiveness and legitimacy. On the other hand, conducting joint scientific and technological activities can prepare the ground for joint economic plans and programs. Therefore, the cultural and scientific fields are among the most important contexts in which Iran can improve the level and extent of its cooperation with Central Asian countries, revive its geo-culture, enhance its scientific influence in Central Asia, and pave the way for more convergence in foreign policy, particularly in the economic field.

C. Efforts to exit sanctions and remove sanction barriers: Iran must adopt and pursue appropriate mechanisms that bring maximum benefits to the country and eliminate (1) sanction barriers to economic activities and (2) geopolitical isolation.

The result mentioned above, efforts to exit sanctions and remove sanction barriers by Iran, is consistent with the theory of the Copenhagen School in the sense that security is a multifaceted concept and includes various developmental levels (social, political, and economic). As a result, the removal of obstacles to development and growth can provide the citizens of a country with security in many respects, specifically in the economic dimension. The existence of sanctions and sanction barriers precludes regional and trans-regional economic collaboration and impedes economic advancement, and this needs to be addressed immediately.

(Table 4 appears at this point in the original layout; its cell contents are not reproduced in the running text.)
Conclusion

The results of the present study indicated that the economic plans of the great powers in Central Asia have, for Iran, both similarities and differences in terms of opportunities and threats. The similarity of these plans in terms of opportunity was an increase in Iran's geopolitical importance and the expansion of Iran's relations with the countries involved in such economic plans. The differences in terms of threats were the following: negative political and economic consequences for Iran through the implementation of Russia's economic plans; increased influence of regional and supra-regional powers in Central Asia and reduced Iranian influence in the region through the implementation of India's economic plans; the discrepancy between the development policies of Iran and Turkey in Central Asia and the weakening of Iran's position because of Turkey's pursuit of pro-Western policies in the region; insecurity and reduced influence and political and economic interests of Iran and Russia in the region due to the implementation of the US economic plans; and the emergence of tensions and imbalances of interests between the countries involved in China's economic plan.

Despite the opportunities and threats of the economic plans of the powers in Central Asia for Iran, the most appropriate strategy for Iran is to engage in convergence diplomacy, in the form of cultural and economic diplomacy, and to remove the sanction barriers to the implementation of Iran's joint economic plans with its Central Asian neighbors, so as to achieve the desired model of Iran's strategies against the threats of the economic plans of the great powers in Central Asia. Undoubtedly, realizing this for both sides will lead to a win-win game that increases Iran's regional and international position, eliminates the threats of the economic plans of regional and supra-regional powers, and provides the context for mutual economic development.

Limitations

With respect to the limitations of the present study, two general points can be mentioned. To begin with, the lack of sufficient empirical literature, national or international, on the subject of the present study was a scientific obstacle to comparing the constructed conceptual model with other findings and results, so the commonalities and differences of the model could not be evaluated from this perspective. Next, this study was based on a qualitative research strategy. Therefore, despite the efforts to assess the validity (credibility) and consistency (reliability) of the qualitative findings, the external validity decreased to some extent because the conceptual model of the qualitative study was not evaluated for suitability in a quantitative study (specifically, a survey study). However, this is a weakness of all qualitative research and not just of the present research.
In addition, this qualitative study did not address some internal factors (weaknesses and strengths), such as the role of domestic politicians in foreign policy, the desire to expand cooperation with countries in the region and the world, capacity-building for the effective management of foreign relations, and the dominance of the hardware approach over the software approach in foreign policy, or some external factors (opportunities and threats), such as using the capacities of international and regional organizations, developments in Iran's geopolitical environment, the transition of the international system, the dispute over Iran's nuclear program, and the fragility of the Central Asian economies and their dependence on Russia. These factors can be further investigated in a follow-up SWOT study and in the design of an optimal model for Iran's strategic economic management in Central Asia.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Figure 1. Map of Iran's geographical location among its Central Asian countries.

(The remaining cells of Table 2, listing the basic themes under each organizing theme, appear at this point in the original layout and are not reproduced in the running text.)

Figure 2. Optimal network conceptual model of Iran's countermeasures against the threats of the economic plans of the major powers in Central Asia.

Table 1. List of sample members in the semi-structured interview.

Table 2. Process of axial and selective coding to discover organizing and global themes of the optimal model of Iran's countermeasures against the threats of the economic plans of the major powers in Central Asia.

Table 4. A comparison of the similarities and differences of threats and opportunities that the economic plans of the great powers in Central Asia have for Iran.
Birth preparedness and complication readiness among primigravida women attending tertiary care hospital in a rural area

Pregnancy is a very sensitive period during which unexpected life-threatening complications may arise at any stage. Most maternal deaths occur during labor, delivery, or within 24 hours of childbirth. Maternal mortality is a huge burden in many developing countries. Globally, more than 40% of pregnant women may experience acute obstetric problems. The World Health Organization (WHO) estimates that 300 million women in the developing world suffer from short-term or long-term morbidities brought about by pregnancy and childbirth. Most maternal deaths occur in the developing world. 1 This is because of several reasons, one of which is the inadequacy or lack of birth and emergency preparedness, which is a key component of globally accepted safe motherhood programs (WHO, 1994).

The current maternal mortality ratio (MMR) in India is 167 per one lakh live births (2011-2013), whereas the country's Millennium Development Goal (MDG) in this respect is 109 per one lakh live births by 2015. 2,3 High levels of infant mortality (50 per 1,000 births), neonatal mortality (29 per 1,000 live births), and maternal mortality (167 per 100,000 live births), and lower levels of deliveries with skilled assistance (45%, NFHS-III), remain major public-health challenges in India. 2,4

The majority of maternal deaths occur during labor, delivery, and within 24 hours postpartum. Apart from medical causes, there are numerous interrelated sociocultural factors that delay care-seeking and contribute to these deaths. 5 Thaddeus and Maine documented three delays: (a) delay in deciding to seek care if a complication occurs, (b) delay in reaching care, and (c) delay in receiving care. 6 The Maternal and Neonatal Health (MNH) Program of the Johns Hopkins Program for International Education in Gynaecology and Obstetrics (JHPIEGO) developed the birth-preparedness and complication-readiness matrix to address these three delays at various levels, including the pregnant woman, her family, her community, health providers, health facilities, and policy-makers, during pregnancy, childbirth, and the postpartum period. 7

At the basic level, the concept of BPACR includes identifying a trained birth attendant for delivery, identifying a health facility for emergencies, arranging transport for delivery and/or obstetric emergencies, saving money for delivery, and identifying compatible blood donors in case of emergency. 7 Complication readiness raises awareness of danger signs among women, families, and communities, thereby improving problem recognition, reducing the delay in deciding to seek care, and hastening arrival at medical facilities. 7 A key strategy that can reduce the number of women dying from such complications is making a birth plan that constitutes birth-preparedness and complication-readiness measures for pregnant women, their spouses, and their families. 7

Since it is not possible to predict which women will experience life-threatening obstetric complications that lead to maternal and neonatal mortality, receiving care from a skilled provider (doctor, nurse, or midwife) during childbirth has been identified as the single most important intervention in safe motherhood. 8 However, the use of skilled providers in developing countries remains low. According to the National Family Health Survey (NFHS-III), the percentage of deliveries with skilled assistance is only 45%.
Despite the fact that birth preparedness and complication readiness are essential for further improvement of maternal and child health, little is known about their current magnitude and influencing factors in India. Additionally, very little work has been done in this rural area, and even less in primigravida women. This study therefore aims to fill that gap by assessing the current status of, and factors associated with, birth preparedness and complication readiness among primigravida women.

METHODS

The objectives of the study were to assess the status of birth preparedness and complication readiness (BPACR) among primigravida women and to determine the factors affecting BPACR status in this group. Before the start of the study, ethical clearance was obtained from the institutional ethical committee. We conducted a hospital-based, cross-sectional study among primigravida women attending the ANC OPD of SRT Rural Government Medical College, Ambajogai, Dist. Beed, from August to December 2015. All primigravida women who gave consent were included in the study; those not willing to participate were excluded.

In the absence of any reliable information for this area, we assumed that 50% of women exhibit BPACR and, with a 95% confidence interval and 10% allowable error, calculated the sample size with the formula n = 4pq/L².[2,11,13] The final sample size was 400. We hypothesized 50% prevalence for the BPACR indicators because no prior estimates were available, and an assumed 50% prevalence yields the largest sample size, as illustrated in the sketch below.[13]

Before data collection, informed verbal consent was obtained from each participant. A pre-formed questionnaire in the local language was used for data collection. The study subjects were examined and assisted with ANC check-ups. During examination, information was collected on identification data and socio-demographic characteristics. In addition, BPACR status and the factors affecting BPACR were assessed. The BPACR index was measured using a series of questions. The index was developed by the Johns Hopkins Bloomberg School of Public Health and has been used in studies carried out all over the world, including in India. It is calculated from a set of quantifiable indicators, each expressed as the percentage of women with a specific characteristic, for example the percentage of women who knew more than 8 danger signs of pregnancy.[9-11]

Collected data were entered in an Excel data sheet. Data were processed with Epi Info™ 7 (7.1.2), analyzed, and presented in tables. Odds ratios and chi-square (χ²) tests were applied to examine the association between each independent variable and the BPACR indicators. P values less than 0.05 were considered statistically significant.
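The following minimal sketch (not part of the original study) reproduces the two computations described in the Methods: the sample-size formula n = 4pq/L² under the stated assumptions (p = q = 0.5; allowable error L = 10% of p, i.e., 0.05 absolute), and a chi-square test with an odds ratio for a 2×2 association of the kind reported in Table 4. The counts in the 2×2 table are invented for illustration only.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Sample size: n = 4pq / L^2, with p = q = 0.5 and 10% allowable error.
p = 0.5
q = 1 - p
L = 0.10 * p                      # 10% of 50% = 0.05 absolute error
n = 4 * p * q / L**2              # 4 approximates 1.96^2 at 95% confidence
print(round(n))                   # -> 400, the study's final sample size

# Association sketch: education beyond middle school vs. one BPACR
# component, laid out as a 2x2 table. Counts are hypothetical.
table = np.array([[120, 58],      # educated:      prepared / not prepared
                  [80, 142]])     # less educated: prepared / not prepared
chi2, p_value, dof, _ = chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, OR = {odds_ratio:.2f}")
```

With a hypothesized 50% prevalence, pq is at its maximum (0.25), which is why this assumption yields the largest, and therefore most conservative, sample size.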
RESULTS

Sample characteristics

In total, 400 women participated in the study. The mean age of participants was 22.9 years; approximately half (228) were ≤22 years and 114 were ≤20 years old. None was below 18 years, and 7 were ≥35 years. 72% of the women were Hindu, one fourth were Muslim (26%), and the rest belonged to other religions. 61% (244) of the women were from rural areas and 39% (156) from urban areas. Most women, 63% (251), were unemployed, 29% (118) were engaged in unskilled occupations, and 2% (7) each were engaged in skilled, semiprofessional/professional, and shop-owner occupations.

More than half, 52% (208), of the participants' husbands were engaged in unskilled occupations; 26% (103) were in semiskilled and skilled occupations, 11% (45) in clerical and shop-owner occupations, and 9% (37) in semiprofessional and professional occupations, while only 2% (7) were unemployed. Approximately half of the women, 45% (178), were educated up to higher secondary class, 35% (140) up to middle school, and 13% (52) up to graduation and above; only 7% (30) were illiterate. Among the participants' husbands, 41% (163) were educated up to higher secondary class, 30% (119) up to middle class, and 20% (81) up to graduation and above; 9% (37) were illiterate. More than half of the women, 59% (237), belonged to joint families and 41% (163) to nuclear families.

Table 1 shows the awareness of the study participants about various aspects of ANC care. In this study, TT2 immunization coverage was 77.75% (311). About 70% (281) of the women had had, or planned, 4 or more ANC visits. Only 68.50% (274) of the women had knowledge about consumption of 100 FSFA tabs, whereas 29.75% (119) had arranged blood donors in case of emergency.

Relationship between some independent variables and BPACR status

Table 4 shows the factors associated with the independent components of BPACR. Women educated beyond middle school (P=0.000) and women whose husbands were in semiprofessional and professional occupations (P=0.005) were more likely to register in the first trimester of pregnancy than illiterate women and women whose husbands were in other occupations, respectively. Again, education of the woman (P=0.000) or her husband (P=0.000) beyond middle school, a semiprofessional or professional occupation of the husband (P=0.001), and a nuclear family (P=0.000) made women more likely to identify a mode of transportation than their counterparts. Education of the woman beyond middle school (P=0.000), an unemployed husband or one in an unskilled occupation, and a nuclear family (P=0.000) were associated with greater awareness of the transportation provided through JSSK. Likewise, education of the woman beyond middle school (P=0.000), education of the husband up to middle school (P=0.000), and belonging to a joint family (P=0.032) were associated with greater awareness of TT2 coverage in pregnancy. Women aged 30 years and above were more aware of the danger signs of pregnancy than women below 20 years of age (P=0.000). In this study, the woman's education (middle school and above) was the most consistent factor significantly associated with registration in the first trimester, awareness of TT2 coverage, knowledge of the transportation provided under JSSK, and arrangement of transportation for delivery.

DISCUSSION

Birth preparedness and complication readiness (BPACR) is a strategy to promote the timely use of skilled maternal and neonatal care, especially during childbirth, based on the premise that preparing for birth and being ready for complications reduces delays in deciding to seek care in two ways. First, birth preparedness motivates people to plan to have a skilled provider at every birth. If women and families successfully follow through with this plan, the woman will reach care before developing any potential complications during childbirth, thus avoiding delays.
Second, complication readiness raises awareness of danger signs, thereby improving problem recognition and reducing the delay in deciding to seek care.[10,14]

The linkage of a referral transport scheme with the utilization of antenatal and intranatal services might be one reason, especially among poor and marginalized women. Identifying a skilled provider and arranging a vehicle for emergency transportation are vital steps in BPACR.[1,5] The majority of participants in this study had identified a skilled provider for delivery (98.14%), had made vehicle arrangements for transportation in an emergency (72.25%), and were aware of transport schemes (63%). Comparable findings were reported in a study conducted at S.S. Medical College, Rewa, Madhya Pradesh (2008-09), where the majority of women had identified a skilled provider for delivery (71.1%) and had made arrangements for transport during an emergency (78.7%). Both figures were far lower, 32% and 29.5% respectively, in the study conducted by Agrawal S et al. in a slum in Indore, Madhya Pradesh, where nearly three-fourths of deliveries took place at home.[13] These findings may reflect the fact that in rural areas dais have been trained for delivery, women expect to have at least a dai during delivery, and a government scheme provides transport.

An important aspect of assessing BPACR in the study subjects is measuring prior knowledge of key danger signs during pregnancy. Awareness of the danger signs of obstetric complications is the first step toward appropriate and timely referral for essential obstetric care. The awareness of respondents in this study of at least one key danger sign of pregnancy was low (40.75%), similar to the findings of the study conducted by Tanuka et al. Different studies have likewise shown that the level of BPACR is low in many societies. Antenatal care provides a golden opportunity to give all pregnant women information, education, and communication so that they, along with their families, can make the correct choices, especially in the event of complications arising during delivery, childbirth, or the postpartum period. A limitation of this study is that it was hospital-based; there is a need to carry out a larger community-based study.

CONCLUSION

The BPACR index in the present study was 55.83%, and there is considerable scope for improvement. A BPACR index above 50% may be related to the high level of identification of a trained birth attendant, although the value was lowered by the smaller proportions of respondents who saved money and who were aware of danger signs during pregnancy. In this study, TT2 coverage was low and arrangement of blood donors was very low. Education of women beyond middle school was the most important factor associated with awareness of the various components of BPACR. The opportunity should be taken to provide information, education, and communication to all pregnant women during ANC check-ups, so that they, along with their families, can make the correct choices, especially in the event of complications arising during delivery, childbirth, or the postpartum period. This opportunity is often missed for a number of reasons, which should be addressed at the individual, family, community, and health-provider levels. Another positive approach may be access to loans through the community health funds of rural area-based groups. It is important to identify and encourage rural individuals with a sense of social responsibility to form community groups.
Once formed, community groups should be encouraged to discuss and determine the need for such a community health fund before being encouraged to generate and manage collective savings. All of these would be positive steps toward achieving Millennium Development Goal 5: safe motherhood and reduction of maternal mortality.
Predictive efficacy of the neutrophil-to-lymphocyte ratio for long-term prognosis in new-onset acute coronary syndrome: a retrospective cohort study

Background: Inflammation is involved in the pathogenesis and progression of coronary artery diseases (CADs), including acute coronary syndrome (ACS). The neutrophil-to-lymphocyte ratio (NLR) has been identified as a novel marker of the pro-inflammatory state. We aimed to evaluate the predictive efficacy of the NLR for the prognosis of patients with new-onset ACS.

Methods: We retrospectively included consecutive patients with new-onset ACS treated with emergency coronary angiography. The NLR was measured at baseline and analyzed by tertiles. The severity of coronary lesions was evaluated by the Gensini score. Correlations of the NLR with the severity of CAD and with the incidence of major adverse cardiovascular events (MACEs) during follow-up were determined.

Results: Overall, 737 patients were included. The NLR was positively correlated with the severity of coronary lesions as assessed by the Gensini score (P < 0.05). During the follow-up period (mean, 43.49 ± 23.97 months), 65 MACEs occurred. No significant association was detected between baseline NLR and the risk of MACEs during follow-up by either Kaplan–Meier or Cox regression analysis. Multivariable logistic regression analysis showed that a higher NLR was independently associated with coronary lesion severity as measured by the Gensini score (1st vs. 3rd tertile odds ratio [OR]: 0.527, P < 0.001; 2nd vs. 3rd tertile OR: 0.474, P = 0.025).

Conclusions: The NLR may be associated with coronary disease severity at baseline but is not associated with adverse outcomes in patients with new-onset ACS.

Ethics approval number: 2019XE0208

Background

The current understanding of the pathogenesis of atherosclerosis centres on the "inflammatory hypothesis of atherothrombosis" [1,2]. Inflammatory cells and inflammatory signaling pathways play complex roles in the process of atherosclerosis, including initiating repair after vascular injury and mediating plaque instability and rupture, finally leading to acute coronary events [3-6]. Patients with acute coronary syndrome (ACS), particularly those with new-onset ACS, often have an unstable clinical status and a poor prognosis, and optimization of risk stratification is clinically important in this patient group [7,8].

Pathological studies have confirmed increased white blood cell mobilization in necrotic areas of the myocardium [9]. Moreover, white blood cell count, a clinical marker of general inflammation, has been shown to be independently associated with the risk of mortality and the incidence of major adverse cardiovascular events (MACEs) in ACS patients [10,11]. However, the white blood cell count is unstable and tends to be affected by comorbidities such as infection. Interestingly, it has also been indicated that decreased lymphocyte numbers may be associated with acute coronary events [12]. Recent studies have shown that the neutrophil-to-lymphocyte ratio (NLR), which incorporates two major subgroups of white blood cells, may confer prognostic efficacy in many diseases, including inflammatory diseases, cardiovascular diseases, and malignancies [13,14]. It has been suggested that an elevated NLR is associated with increased long-term mortality in patients with acute myocardial infarction (AMI) complicated by left main and/or three-vessel disease [15].
Moreover, the role of the NLR in the management of patients with coronary artery disease (CAD) has also been evaluated, and the results showed that the NLR is correlated with CAD severity [16-18]. However, these studies were of limited scale, and patients with a previous diagnosis of CAD were not excluded. Overall, clinical and experimental data support an important role for inflammation in CAD [1,2]. Whether the NLR remains a significant prognostic factor after controlling for the severity of coronary lesions in new-onset ACS remains to be determined. Therefore, in this study, we retrospectively enrolled patients with new-onset ACS to comprehensively evaluate the potential prognostic role of the NLR in these patients.

Patients and study design

Consecutive patients with a first diagnosis of ACS who were admitted to the Xinjiang Uygur Autonomous Region Traditional Chinese Medicine Hospital affiliated to Xinjiang Medical University from January 2011 to January 2019 were included. ACS was diagnosed in accordance with previously established guidelines [19]. Patients with the following clinical conditions that may affect the NLR were excluded: hepatic or renal dysfunction, malignant tumors, acute infection, connective tissue disease, physical or chemical injury, previously proven systemic inflammatory disease, and recent surgery. Moreover, patients with a previous diagnosis of CAD were also excluded. The protocol of the study was approved by the ethics committee of our institution before enrollment of the patients. Informed patient consent was not needed owing to the retrospective design of the study.

Blood sampling and definitions of CAD risk factors

Venous blood samples were taken when patients initially presented to the emergency department or prior to angiography, and the samples were sent immediately for laboratory analysis. Hypertension was defined as taking any antihypertensive medication or having blood pressure measurements over 140/90 mmHg on at least two separate occasions [20]. Diabetes was diagnosed based on medical history or on measurements of fasting and/or postprandial glucose according to previous guidelines [21]. The estimated glomerular filtration rate (eGFR) was calculated with the Modification of Diet in Renal Disease equation [22].

Coronary angiography and Gensini score

All patients underwent coronary angiography within 12 h of admission. Two independent investigators assessed the degree of stenosis of the coronary lesions; in case of disagreement, consensus was reached with a third investigator. The Gensini score (GS), which incorporates both the extent of luminal narrowing and the geographic importance of the lesion, was calculated to reflect the severity of coronary lesions [23]. We used the GS rather than the SYNTAX system because the calculation of the SYNTAX score is more complicated, which limits its use in clinical practice and makes it difficult to apply to emergency patients, such as those with new-onset ACS. Moreover, research has shown that, unlike the Gensini score, the SYNTAX score cannot be used to define future risk in patients with non-obstructive CAD [24].

Outcomes

Patients were followed by telephone interview or clinic visit. The primary outcome was all-cause mortality. The secondary outcome was a composite of MACEs, including cardiac mortality, non-fatal myocardial infarction and stroke, stent thrombosis, and revascularization (unplanned repeat percutaneous coronary intervention [PCI]).
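To make the exposure definition concrete, the short sketch below shows one way the NLR could be computed from an admission differential count and grouped into tertiles, as in the analysis that follows. It is illustrative only: the column names and values are invented, and pandas is assumed (the study itself used SPSS).

```python
import pandas as pd

# Hypothetical admission differential counts (10^9 cells/L); values invented.
df = pd.DataFrame({
    "neutrophils": [4.2, 7.9, 6.1, 3.5, 9.0, 5.4],
    "lymphocytes": [2.1, 1.1, 1.9, 2.5, 0.9, 1.6],
})

# NLR = absolute neutrophil count / absolute lymphocyte count.
df["nlr"] = df["neutrophils"] / df["lymphocytes"]

# Tertile grouping, mirroring the paper's analysis by NLR tertiles.
df["nlr_tertile"] = pd.qcut(df["nlr"], q=3, labels=["T1", "T2", "T3"])
print(df)
```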
Statistical analysis

Continuous variables are expressed as mean and standard deviation (SD) or median and interquartile range (IQR), and categorical variables as percentages. Patients were grouped according to the tertiles of the NLR or GS. One-way analysis of variance (ANOVA) was used to evaluate differences in normally distributed numeric variables among the groups, while the Mann–Whitney U test or Kruskal–Wallis analysis of variance was used for non-normally distributed variables. For categorical variables, a chi-square (χ²) test was employed. Linear regression analysis was performed to identify the factors associated with the GS. Prognostic factors for the occurrence of mortality and MACEs were analyzed with the Kaplan–Meier method. Univariate analysis was performed first, and significant variables were then included in the multivariate Cox analysis. A P value < 0.05 indicated a statistically significant difference. All analyses were performed using SPSS 22.0 (SPSS Inc., Chicago, IL, USA).

Characteristics of patients according to NLR

A flow chart outlining patient enrollment is shown in Fig. 1. A total of 737 patients with new-onset ACS were included. The baseline characteristics of the included patients according to the tertiles of NLR are shown in Table 1. Patients with a higher NLR were more likely to have dyslipidemia and ST-elevation myocardial infarction (STEMI; both P < 0.05).

Incidence of mortality and MACEs according to the NLR

The incidences of clinical outcomes during follow-up (mean, 43.49 ± 23.97 months) according to the NLR are shown in Table 2. No significant differences in the incidences of all-cause mortality, overall MACEs or their components, or bleeding events were detected among the three groups (all P ≥ 0.05).

Characteristics of patients according to GS

The baseline characteristics of patients according to the tertiles of GS (1st tertile: GS < 49, n = 250; 2nd tertile: GS 49–85, n = 246; 3rd tertile: GS > 85, n = 241) are shown in Table 3. The percentage of male patients, age, prevalence of diabetes mellitus, and history of smoking differed significantly among the GS tertiles (all P < 0.05). We found no relationship between the other indicators and coronary severity (all P > 0.05).

Factors associated with coronary lesion severity as detected by Gensini score

Multivariable logistic regression analysis showed that a higher NLR was independently associated with coronary lesion severity as measured by the GS (1st vs. 3rd tertile odds ratio [OR]: 0.527, P < 0.001; 2nd vs. 3rd tertile OR: 0.474, P = 0.025). The other factors independently related to the GS included advanced age (OR: 1.033, P < 0.001), male gender (OR: 1.835, P < 0.001), and the absence of diabetes (OR: 0.507, P < 0.001; Table 4).

Predictors of clinical outcomes

Overall, 65 patients experienced MACEs during follow-up, including 23 (35.38%) cases of cardiac mortality, 6 (9.23%) cases of nonfatal MI, 2 (3.08%) cases of stent thrombosis, 33 (50.77%) cases of revascularization, and 3 (4.62%) cases of nonfatal stroke. The NLR was not correlated with MACEs either as a continuous variable or by tertile (both P > 0.05). Kaplan–Meier analysis did not show a significant difference in the event-free survival rate among the NLR tertiles (P = 0.775, Fig. 2).
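For readers who want to reproduce this style of outcome analysis, the sketch below pairs a Kaplan–Meier fit with a Cox model using the Python lifelines package. This is a stand-in (the authors used SPSS), and the follow-up records shown are invented for illustration.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Invented follow-up records: months of follow-up, MACE indicator,
# NLR tertile, and age as an example covariate.
df = pd.DataFrame({
    "months":  [12, 43, 60, 25, 55, 38, 47, 9],
    "mace":    [1, 0, 0, 1, 0, 1, 0, 1],
    "tertile": ["T1", "T3", "T2", "T3", "T1", "T2", "T1", "T3"],
    "age":     [58, 66, 71, 62, 55, 69, 60, 73],
})

# Kaplan-Meier event-free survival, fitted per NLR tertile.
km = KaplanMeierFitter()
for label, grp in df.groupby("tertile"):
    km.fit(grp["months"], event_observed=grp["mace"], label=label)

# Univariable Cox regression, as used for candidate predictors of MACEs.
cph = CoxPHFitter()
cph.fit(df[["months", "mace", "age"]], duration_col="months", event_col="mace")
print(cph.summary[["coef", "exp(coef)", "p"]])
```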
Univariable Cox regression analysis showed that age, systolic blood pressure, diastolic blood pressure, red blood cell count, left main coronary stenosis, stenosis of the right coronary artery, and a high GS were predictors of MACEs.

Discussion

The results of this retrospective cohort study showed that, although a higher NLR at baseline was independently associated with the severity of coronary lesions in new-onset ACS patients as evidenced by the GS, the NLR was not a predictor of adverse clinical outcomes during follow-up. We found that advanced age, elevated systolic blood pressure, and a higher GS are potential independent predictors of poor outcomes. Taken together, our results do not support the incorporation of baseline NLR as a prognostic factor for new-onset ACS patients.

The key pathophysiologic processes in ACS are the rupture of a vulnerable plaque and subsequent thrombus formation [25,26], and the role of inflammation in these processes has been confirmed not only by pathological studies but also by some optical coherence tomography-based studies [6,27]. It has therefore been proposed that the NLR, a novel but easily obtained marker of inflammation, may be a prognostic factor for ACS patients. Indeed, some previous studies suggested a prognostic role for the NLR in CAD patients. In a recent study of 636 STEMI patients, the NLR was significantly associated with in-hospital mortality [28]. Moreover, a post-hoc analysis showed that the NLR is associated with increased long-term mortality in patients with AMI complicated by left main and/or three-vessel disease [15]. However, in our retrospective cohort study, we did not find a significant association between a high NLR and poor prognosis, despite a follow-up duration longer than in previous studies.

The mechanisms have yet to be fully determined. Previous studies showed that the NLR changes dramatically, with the maximal level seen during inflammatory events [29]. Because neutrophils have a short life span and fast turnover, it is better to observe them dynamically rather than with a single measurement. Moreover, our study had a longer follow-up duration than previous ones, which may indicate that the potential prognostic role of the NLR in ACS is confined to the acute phase. The relationships of the NLR with ACS, overall mortality, and cancer survival have generally been thought to be driven by chronic inflammation [1,2]. However, patients with a previous diagnosis of CAD were excluded from our study, and whether the NLR is associated with new-onset ACS has not been well established and remains incompletely understood. To the best of our knowledge, the potential link between the NLR and new-onset ACS has not previously been reported.

Another explanation is that the potential prognostic role of the NLR in ACS is confounded by factors related to the severity of coronary lesions, such as the GS; the prognostic efficacy of the NLR is therefore limited in a model that incorporates factors reflecting lesion severity. Our results indicated that the NLR is significantly correlated with coronary lesion severity as evidenced by the GS, confirming the previous concept that inflammation correlates with the degree of coronary stenosis in CAD patients.
Pathophysiologically, myocardial ischemia can induce an immediate rise in the NLR, the magnitude of which is proportional to the severity of ischemia, even though the neutrophil half-life is short [30]. Subsequently, the state of stress and inflammation seen in ACS patients can raise the levels of circulating inflammatory markers, accompanied by increased blood cortisol. A rise in cortisol has been shown to induce apoptosis, which in turn leads to lymphopenia and even inversion of the CD4+/CD8+ T-lymphocyte ratio [31]. An elevated NLR therefore represents an exaggerated inflammatory response that may reflect the progression of coronary atherosclerosis [16-18] and, to some extent, may predict acute prognosis in these patients [15,32-34]. On the other hand, medications such as statins are well known to have anti-inflammatory actions [35], and the common use of statins during the post-acute phase of ACS may also reduce the prognostic efficacy of the NLR for long-term outcomes.

Xinjiang is characterized by the integration of diverse ethnic cultures, but people in Xinjiang generally do not have a deep understanding of cardiovascular disease. Accordingly, low treatment rates and poor adherence are common problems of hypertension management in this area; although approximately 50% of the patients had hypertension, only 3–15% were receiving antihypertensive agents. We are working to actively promote cardiovascular health education in different forms and languages in this region.

Study limitations

First, as a retrospective, observational, single-center study with a small sample size, our study may be affected by recall bias; our results should be validated in prospective studies. Second, our study included consecutive patients with an initial diagnosis of ACS, and the diagnoses varied. Third, the NLR was measured only once, at admission, and whether changes in the NLR during hospitalization, or the NLR at discharge, affect the prognosis of these patients remains unknown. Finally, as the study was conducted over eight years, PCI techniques and medical therapies likely evolved with accumulating evidence, which may have influenced the outcomes.
Evaluation of the implementation of the Montreal At Home/Chez Soi project

Background: Homelessness and mental disorders constitute a major problem in Canada. The purpose of the At Home/Chez Soi pilot project was to house and provide supports to marginalised groups. Policymakers are in a better position to nurture new, complex interventions if they know which key factors hinder or enable their implementation. This paper evaluates the implementation process for the Montreal site of this project.

Methods: We collected data from 62 individuals through individual interviews, focus groups, questionnaires, observations and documentation. The implementation process was analysed using a conceptual framework with five constructs: Intervention Characteristics (IC), Context of Implementation (CI), Implementation Process (IP), Organizational Characteristics (OC) and Strategies of Implementation (SI).

Results: The most serious obstacle to the project came from the CI construct, i.e., lack of support from provincial authorities and key local resources in the homelessness field. The second was within the OC construct: the chief hindrances were numerous structures, divergent values among stakeholders, frequent turnover of personnel and team leaders, lack of staff supervision, and miscommunication. The third related to IC: the complex, unyielding nature of the project undermined its chances of success. The greatest challenges from IP were the pressure to perform, along with stress caused by planning, deadlines and tension between teams. Conversely, conditions under the SI construct (e.g., effective governing structures, comprehensive training initiatives and toolkits) were generally very positive, despite problems in power sharing and local leadership. For the four other constructs, the following proved useful: evidence of the project's scope and quality, the great need for service consolidation, generous financing and status as a research pilot project, enthusiasm for and commitment to the project, substantially improved services, and overall user satisfaction.

Conclusion: This study demonstrated the difficulty of implementing a complex project in the healthcare system. While the project faced many barriers, minimal conditions were also achieved. By the end of the study period, major tensions between organizations and teams had been significantly reduced, supporting full implementation. In late 2013, however, the project proved unsustainable, calling into question the relevance of achieving a significant number of positive conditions in each area of the framework.

Background

The literature shows that access to housing and support interventions are effective weapons against homelessness [1,2]. One evidence-based practice considered effective for people with severe mental disorders and chronic homelessness is the "Housing First" program [3]. Contrary to the residential continuum model, where independent living accommodations are offered only after completion of particular rehabilitative programs of activities, Housing First programs provide immediate access to subsidised housing based on user preferences and ensure appropriate clinical follow-up [4]. In the Housing First program, housing is not dependent on treatment, and users who continue to abuse substances do not lose their lodging [1,5]. Introduced in New York in 1992 with Pathway to Housing [6], this program has since been successfully tried in various settings in the United States and other countries [5-8].
Nine randomised controlled trials have acknowledged the Housing First program as an evidence-based practice [3]. In 2008, the Canadian federal government allocated Can$110 million to the Mental Health Commission of Canada (MHCC) for the implementation of a four-year research pilot project (2009–2013) to replicate and adapt the Housing First program. The At Home/Chez Soi project [4] was then launched in five Canadian cities: Vancouver (British Columbia), Winnipeg (Manitoba), Toronto (Ontario), Moncton (New Brunswick) and Montreal (Quebec). It provided access to three essential services: 1) affordable and safe housing, through rental money as support for housing units, monetary subsidies for certain landlords, or housing units owned by the project; 2) assertive community treatment (ACT; multidisciplinary team follow-up, including a psychiatrist, offering services several times a week; one service-provider full-time equivalent (FTE) per ten users) for homeless people with severe mental disorders and high needs; and 3) intensive case management (ICM; individual follow-up by a case manager at least once a week; one FTE per 20 users) for homeless people with severe mental illness and moderate needs.

The recovery paradigm, dominant in the mental health field [9], in which all decisions and interventions focus on user needs and users are close partners in services, was also at the heart of the project's vision and practice. Each local site could include components suited to its specific needs and conduct sub-studies on key local issues, as long as such activities did not interfere with the core of the Canadian project.

In Montreal, the At Home/Chez Soi project appeared on a dynamic political scene. In 2008, the government of Quebec had established a parliamentary commission on homelessness, and an Inter-ministerial Action Plan on Homelessness (2010–2013), published in December 2009, had recommended identifying best practices to fight homelessness. This plan acknowledged that the Housing First program could be a promising avenue for long-time homeless people with severe mental disorders [10]. The Montreal project also responded to the changing mental healthcare context. In 2005, the Quebec Mental Healthcare Action Plan (2005–2010) set targets for housing services supported by ACT and ICM teams, and promoted the Housing First program as an innovative solution for homeless people with severe mental disorders [11].

However, the Montreal At Home/Chez Soi project was also launched in a context of strong, long-standing debates between the Quebec and Canadian governments about their respective jurisdictions. Starting at the turn of the last decade, the federal government had sponsored extensive, non-recurring health initiatives throughout Canada, which were later transferred to the provinces without additional funding, adding pressure to provincial budgets. The Quebec government especially disapproved of federal involvement in health and social services, which are areas of provincial responsibility in the Canadian context.

In Montreal, the At Home/Chez Soi project aimed to recruit 500 participants, including 300 in test groups receiving housing and clinical support (100 in each of the ACT and the two ICM teams). Control groups of 100 individuals for each level of need were formed exclusively for research purposes (no services were provided).
Additional pilot components, not required at the national level, were included, for example the offer of both social and private housing choices to users; monetary subsidies for private landlords or social housing were thus provided. As opposed to Pathway to Housing in New York, which was a single organization, the Montreal At Home/Chez Soi project was sponsored by three principal partners: a mental health university institute (MHUI), a health and social service centre (HSSC), and a community agency. The MHUI handled the housing team and provided leading research expertise, primarily in the mental health field. The Montreal project managers, i.e., the local At Home/Chez Soi coordinator (representing the MHCC and responsible for ensuring that the project was implemented as planned at the national level and appropriately adjusted at the site level) and the principal site investigator, were from the MHUI. The HSSC oversaw the ACT team and one of the two ICM teams, and brought complementary research expertise, mainly in the social service and homelessness areas. The community agency managed the second ICM team.

Since the project involved several organizations and teams (ACT, ICM, Housing, and the user recruitment team), three governance structures were set up to integrate the project: a steering committee, an operational integration committee and a peer users council. The mandate of the steering committee was to vet strategic decisions of the At Home/Chez Soi project in Montreal. Under the direction of the site coordinator and principal investigator, it comprised a representative from each of the organizations involved in the project at all levels. The operational integration committee comprised the team leaders (housing, clinical, recruiters), the staff psychiatrist (from the ACT team), and representatives from the peer users council, along with the site coordinator, principal investigator and research coordinator. Its mandate was to oversee the operations of project components and the execution of the teams' mandates. The role of the peer users council was to represent users' points of view on the various project governance committees and to organise activities for them. It consisted of individuals with lived experience of mental disorders and homelessness.

The Montreal At Home/Chez Soi project lends itself to an interesting study, as its implementation involved a complex set of actors, including the federal and provincial governments, the MHCC and local governance structures, public (MHUI, HSSC) and community organizations, health and social services, stakeholders from the mental health and homelessness fields (clinicians, managers, researchers and users), and teams with specific mandates. Implementation marks the transition between the planning of a new strategy or project and its acceptance as a regular program among all stakeholders [12,13]. It involves specific activities to meet the established requirements of the project [14]. Implementation is a social process [12,15] in that it involves contextual factors, along with organizations and individuals that contribute to its success or failure through their attitudes and actions (or inaction) [8,16]. It is difficult to effect substantive change in health and social service systems; as the literature shows, almost two-thirds of such attempts fail [12]. This is why policy makers need to understand the factors that can mean the difference between success and failure of new projects or services.
Several conceptual models now exist that describe these factors [12,13,16-19]. Few studies, however, have looked at complex implementation processes related to the Housing First program [20,21]. This paper proposes to do just that, based on an examination of the first implementation phase of the At Home/Chez Soi pilot project in Montreal, Canada (2009–2010). Drawing on a conceptual framework, we identify and comment on the foremost aspects that created roadblocks during the implementation of the Montreal At Home/Chez Soi project, in order to achieve a clearer understanding of the dynamics of this process.

Setting

Montreal (Quebec) is Canada's second-largest urban centre. According to the 2006 Census, it was home to 1.9 million people, or 25% of Quebec's total population. In 2006, 32.3% of households were below the low-income threshold, and 9.5% of the population received social welfare [22]. An estimated 30,000 individuals were homeless for at least part of 2005 [22,23]. Before the arrival of the At Home/Chez Soi project, there was a long tradition of cooperation and partnership on the issue of homelessness in Montreal between the city, the Montreal health authority, the HSSC and community organizations.

Data collection

This research was a mixed-methods study, using both qualitative and quantitative methods. The opinions of 62 stakeholders (service providers, decision makers, users, peer support workers and researchers) were sought between October 2009 and December 2010. With the exception of the users, these included the main stakeholders involved in the Montreal project, chosen with a view to reflecting a diversity of opinion. Users were selected from each of the clinical teams by the team leaders, according to their availability and their varying degrees of commitment to the project. Figure 1 shows the flowchart of the study stakeholders and the data collection used for each type of stakeholder.

The 62 individuals surveyed included 37 professionals and 25 users (from the ACT and both ICM teams). The professionals were: a) 15 managers, team leaders, psychiatrists or researchers in charge of the project; b) 19 service providers; and c) 3 representatives from the peer users council. The following methods were used: semi-directed interviews, focus groups, observations of governance-structure meetings, minutes of governing-committee meetings, and questionnaires. Qualitative investigations were used primarily to understand the implementation process [24].

The interviews and focus groups (except for users) covered the following dimensions: 1) the implementation context of the At Home/Chez Soi project (e.g., team development, recruitment process); 2) the role and operation of project teams and governing structures, including values and practices; 3) relationships across teams within the At Home/Chez Soi project, and between the project and the local mental health and homelessness networks; 4) the perceived impact of the project on users and homelessness; and 5) issues and challenges for the project. Users described their experience and appreciation of: 1) their integration within the At Home/Chez Soi project, including housing; 2) their clinical treatment (by key service providers, other professionals and external resources); and 3) the project in general (e.g., most useful aspects, needed improvements). Individual interviews took about 45 minutes; focus groups, about two hours.
Interviews and focus groups were recorded, transcribed and rendered anonymous, each participant being identified by a number. Participant observation was carried out by the authors of this manuscript throughout the study period within the project's governing committees; the purpose was to observe interpersonal relationships between stakeholders and the level of leadership assumed by each of them. The minutes of the project's governing committees complemented the observations (e.g., subjects covered, actions taken, problem resolutions). The research also drew on correspondence related to the project. The researchers did not, however, observe the interventions of the clinical teams.

Quantitative data were used secondarily, to complement the qualitative data and to measure intervention outcomes [24]. Three questionnaires were administered (all quantitative data in the Results section come from these). First, all respondents received a questionnaire on socio-demographic data (e.g., education, time involved in the At Home/Chez Soi project). For the clinical teams, we added the following items: training received during the period under study (number of days), work satisfaction (e.g., workload, work climate), and the perceived impact of service providers' interventions on users. This second questionnaire required categorical or continuous responses (yes/no, number or percentage), with some five- or ten-point Likert-scale questions (e.g., from very unsatisfactory to highly satisfactory). Lastly, a third questionnaire asked users about their time in the project, their time in homelessness, and the reason they had lived on the streets, in addition to socio-demographic data. The response rate was 100%, i.e., all invited participants agreed to take part, and all signed a consent form. The study protocol (MP-IUSMD-09-023) was approved by the Douglas Hospital Research Ethic Board Committee, the Centre Hospitalier de l'Université de Montréal (CHUM) Ethic Board Committee and the Jeanne-Mance Health and Social Service Centre (HSSC) Ethic Board Committee.

Analyses

The qualitative data analysis used a thematic analysis method [24]. The initial coding structure was based on the general interview topics identified above but allowed the inclusion of emerging issues, such as sustainability, intergovernmental relations and other contentious aspects. SPSS software was used to compile the quantitative data and to produce descriptive analyses of questionnaire items by type of participant, specifically providers or users. Univariate statistics comprised frequency distributions for categorical variables and mean values with standard deviations for continuous variables (a minimal sketch of such univariate analyses follows below). Information was triangulated across stakeholders and types of data collection, including qualitative and quantitative methods. Results were drafted in a research report, validated by the main stakeholders, and subsequently submitted to the National Research Team [25], on which this article is based. The analysis was also guided by a conceptual framework based on previous models [12,13,16,17,19] and on the implementation literature [6,8,14], elaborated by the authors by consensus. Factors associated with implementation were grouped into five key areas, detailed in Figure 2.
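For concreteness, here is a minimal sketch of the kind of univariate descriptive statistics described above. It uses pandas in place of SPSS, and the respondent records are invented for illustration.

```python
import pandas as pd

# Invented respondent records standing in for questionnaire data.
df = pd.DataFrame({
    "role": ["manager", "provider", "provider", "peer", "provider", "manager"],
    "years_in_project": [1.0, 0.5, 1.0, 0.75, 0.5, 1.0],
})

# Frequency distribution for a categorical variable.
print(df["role"].value_counts())

# Mean and standard deviation for a continuous variable.
print(df["years_in_project"].agg(["mean", "std"]))
```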
Results

The socio-demographic description of the 37 professionals is provided in Table 1, and that of the 25 users in Table 2. The results that follow are based on an analysis of the strengths and weaknesses of the Montreal At Home/Chez Soi implementation process according to the five key areas of the conceptual framework: intervention characteristics, context of implementation, implementation process, organizational characteristics and strategies of implementation. Examples of interview quotes for each of these key areas are presented in Table 3, reproduced below.

(1) Intervention characteristics: evidence strength and quality, relative advantage, adaptability and complexity

Since the At Home/Chez Soi project was an evidence-based practice, it was strongly endorsed by the Montreal stakeholders, particularly the public organizations (MHUI, HSSC) and the mental health network. Stakeholders perceived the project as a strong, quality-based intervention that could help reduce homelessness in the City of Montreal, especially for individuals dealing with severe mental disorders and chronic homelessness, who are more difficult to reach and for whom few services are available. Moreover, the National Research Team, which spearheaded the project, as well as the local coordinator and the principal investigator of the Montreal site, all from the mental health field, brought considerable expertise in psychiatry to the project. However, homelessness was a problem generally addressed by health and social service providers and community organizations, which might have made it more difficult to establish the legitimacy of the project.

In addition, as the At Home/Chez Soi project was a multiple-site research initiative sponsored at the national level (MHCC), it favoured standardised practices derived from evidence-based data, which promoted an approach mostly viewed by stakeholders as very top-down. Appropriation of the project was thus more difficult at the site grassroots level; the majority of managers felt that adaptations or local ways of implementing were negated.

Table 3. Examples of quotes (reconstructed from text interleaved in the body)

Intervention characteristics
- Adaptability: "I am frustrated a bit by certain types of information, or certain ways of seeing things. I did not have a chance to explain how we, especially in Quebec in our specific culture, see things. I learned this later in life and when you react later, it is often not seen as it should have been seen." (12-Manager)
- Adaptability: "Well, we came with a housing first approach, which really disrupted things [...]. This is something different, so it always requires efforts on the part of those involved in spite of the information and everything. For some, it's easy, and for others it's more difficult. I think it is not over. One cannot acquire the skills and abilities in one year... there is still work to be done." (31-Manager)
- Complexity: "For owners who were not aware, because they never had had previous experience, we took the time to explain our agreement to them, in what we were getting ourselves into, and told them that it was the person that we would introduce them to would be signing the lease, and that that person would be a tenant like everyone else, committed to his or her responsibilities and his or her rights on both sides, and that we were a team there to support them in this project, that we would be real partners. Yes, it is a bit long, because we really have to make this long-winded speech; we really have to fully inform the owners so that they knew what type of project they were embarking on."
Context of implementation
- Opposition to Housing First: "In the community, in other organizations, we heard people speaking against it. It was a bit of a wave against the Housing First Project. It will be like taken over. They (the users) will be used. After that, they will be dropped, etc. There was a lot of prejudice in that regard." (28-ICM Community Agency Team)
- Shock of culture: "It is the basis of the Housing First approach, the subsidies for rent in the private sector that they found morally wrong [...]. It is their way of seeing the world, that many of us do not share..." (01-Manager)
- Incentives: "There were many meetings, and basically it was to generate some interest in those in the grassroots so that this project could come to Montreal [...] to try to see how we could deal with the problems with intergovernmental affairs, the resistance of the Government of Quebec, and thus the Health and social services ministry and the Health regional agency regarding this project. Essentially, to develop an interest by the grassroots..." (04-Manager)

Implementation process
- Conflicts between the Housing team and the clinical teams: "There is indeed an issue about the mandate, the housing team, their clients... they are the owners. We, our clients, they are participants who sometimes have a relationship with the owner that is not always satisfactory. When we speak, we do not have the same objective."
- Shock of cultures: "With the clinical team, it was more focused on the participant and the participant's problems, and history. [...] We, by definition the Housing Team, are more associated with the 'Community'. We are not in the life of the participant. We deal with the owner, the territory, the resources, etc."
- Lack of qualification of the teams: "As nurses, this is also new. We do a lot of legwork, observation, evaluation, whereas we work far more now with the social worker, paperwork, the local job centre... all that for us is major. It's new. We do not see that at school." (49-HSSC ACT Team)
- Lack of qualification of the teams: "What is an issue between ICM and Housing, as with ACT and Housing, is that the Housing Team was not made up of people trained in mental health." (30-HSSC ICM Team)
- Staff turnover: "We got to know them at the time, and then we had new stakeholders, and then... I mean, we had to start everything all over. Then... No matter how much they talk among themselves, but I mean, you know, it's..." (User, HSSC ACT Team)
- Loneliness: "So I will say like she did: 'You say...' I will come back to that, the loneliness and then... The loneliness, and that's it... At the start, for the first four months, it was hell." (User, ICM Community Agency Team)

Organizational characteristics
- Climate (difficulty integrating activities in big organizations): "You know, the HSSC is a strange machine. I can't believe we have this in Quebec, but it's a strange machine. It's complex, you have very little autonomy, everything is regulated, even the furniture." (03-Manager)
- Culture: "In fact, these problems, these are the problems with the entire project, i.e., that it is an ad-hoc alliance between several partners with philosophies from the outset that are not necessarily that similar, which have been brought together by the project and are starting off from very different traditions, which do not have a long history of working together, so we still talking about institutions that are not accustomed to working together, with each having its own particular culture." (04-Manager)
- Positive outcomes: "What we are in the process of creating that will remain is all the learning in terms of daily living activities and domestic life. We are teaching them ways of living in a healthier apartment setting. I think that will remain with them." (44-HSSC ICM Team)

Strategies of implementation
- Loss of meaning and lack of discretion: "I know that, in the first planning report that was produced as part of this study, there was an issue that there was not enough discretionary powers given to the local level during the planning and development phase. I did not think that there would be more latitude. And that we would still have to refer to the national level for fundamental questions. It is more the national level that provides direction." (05-Manager)
- About the steering committee: "I do not personally believe that is vested managerial power. Because the real management issues are a matter for the national level. I believe this, because after all it is a multi-site party, so we are just one site among others, yes..." (07-Manager)
- About the peer users council: "We did not really participate in the planning. It was a housing program first and foremost, and then the Housing Team came at the same time as us, so, to me, it was like you starting a car, and you did not have your two main wheels. Honestly, I really felt like that." (18-Peer User Council)

The MHCC sought high fidelity to the initial program and its many components (e.g., ACT, ICM, housing), including research, across all five sites. The fact that the project included several organizations also contributed to its perceived complexity, hindering its implementation; greater effort was therefore required of stakeholders to coordinate the project so as to meet the comprehensive needs of users.

(2) Context of implementation: external policies and incentives, and community endorsement

The Quebec government and its representatives took no official part in the Montreal At Home/Chez Soi project, which they felt encroached upon their constitutional jurisdiction, thus seriously hindering its sustainability. Moreover, the project was launched quickly, with little consultation with the provinces, which did not help endorsement by the Quebec government. Community organizations, especially those active in the fight against homelessness or offering housing services, were also reluctant about the project because it favoured private housing over social housing as the prime choice for users; social housing has historically been the approved orientation in the homelessness field, which clashed with the rent-supplement orientation of Housing First.

Furthermore, the At Home/Chez Soi project had a bio-psycho-social approach, both in combating homelessness and in promoting mental health. This vision produced a culture shock and posed challenges, since stakeholders in the homelessness and mental health fields each had their own history, values and ways of doing things. Mental health is grounded in the field of psychiatry and falls under the governance of the health branch of the Health and Social Services ministry, while the homelessness sector is grounded in social services and community organizations. Many stakeholders from the homelessness field felt that the expertise and skills they had acquired and developed over the years were summarily dismissed by this pilot project, run by a consortium of providers who came mostly from the field of psychiatry. Nonetheless, because the At Home/Chez Soi project was a pilot project financed by the federal government and brought major funds to the field, it ultimately generated support, given the growing needs and the lack of resources to deal with homelessness.

(3) Implementation process: stages of implementation and related dynamics and impacts

The implementation process of the Montreal At Home/Chez Soi project can be divided into three distinct periods [26]. The first period (October 2009 to March 2010) involved recruitment of team staff and project users, followed by a two-month hiatus to give the MHUI time to draft an agreement with landlords allowing users to rent accommodations. During this period, the clinical teams and the project recruiters developed tools, approaches and strategies. The innovative character of the project posed a serious challenge for everyone. At this stage, research considerations were front and centre and set the pace for the project (recruitment and housing targets, and level of treatment activity). According to the great majority of managers and team leaders, the teams were dynamic, committed to the project's success, and willing to provide valuable services to homeless people often disregarded by the healthcare system.
During the second period (April to August 2010), pressure to meet research deadlines was paramount: maintaining the pace of user recruitment, rapidly finding housing, and providing the right intensity of service despite the long travel times for user visits by staff. This situation contributed to team exhaustion and a crisis-management mode. The social housing option, which was to be part of the study, had to be cancelled, owing to the difficulty of finding such accommodations for individuals with substance abuse issues, the virtual boycott of the project by organizations holding these resources, and users' preference for private housing. More demands were thus placed on the private sector, which made it increasingly difficult to find affordable housing.

The third key period (October to December 2010) saw further tensions between the housing and clinical teams, related to their respective responsibilities and mandates, and inadequate coordination between the teams. Under the At Home/Chez Soi project structure, the Housing team felt pressured to rapidly find apartments for newly recruited users. There was, however, a lack of coordination with the clinical teams, whose job it was to assist these new users while providing the required intensity of service for those who were already settled. In addition, since the Housing team did not want to lose its stock of apartments, it was severely criticised by the clinical teams for defending the interests of landlords over those of users. During an operational integration committee meeting, the Housing team maintained that when a user had refused three or four apartments, he became a non-priority case; for the clinical teams, conversely, it was normal for a user to visit several apartments before selecting one. Moreover, if a user expressed the will to move because of a dispute with his landlord or neighbours, the clinical teams could be in favour, while the Housing team invoked the user's obligation to respect the lease. Other difficult challenges the teams had to deal with during this period included non-payment of rent or abandonment of housing, long delays in finding housing, having to develop intervention plans for users with complex profiles and largely unknown histories, repeatedly missed or cancelled appointments, refusal of treatment, and having to serve a vast area.
Nonetheless, near the end of this period, considerable efforts and gains led to the roll-out of strategies and conditions more likely to result in the successful implementation of the project. At the beginning of the winter of 2011, the Montreal site, with the agreement of the National Research Team, decided to reduce the high mental health needs cohort to 160 (from 200), i.e., 80 individuals in each of the experimental and control groups.

(4) Organizational characteristics: climate, structure, staff and team outcomes

Each of the three organizations that sponsored the project had its own culture, driven by its respective team function. Compared with the HSSC and the MHUI, the community agency had few staff and resources, and a flat hierarchical structure that encouraged administrative and procedural flexibility and ensured close supervision. The other two partners were large organizations, and thus found it difficult to integrate activities such as the hiring process and the introduction of new planning or follow-up tools within their structure. For example, according to HSSC managers, the intervention plan forms could not be adapted because such a change would have required approval by the archives service, obtainable only after long and complex negotiations. As well, managers were not as accessible in the larger organizations, according to a few members of the clinical teams. This had considerable impact on team operations, especially for the two HSSC clinical teams, which had to deal with high turnover and frequent understaffing. The HSSC had to deal with the departure of both leaders of the ACT and ICM teams and later of the program manager, thus hindering ongoing supervision of the teams. Teams were to be completed progressively as new users joined the project. Under the MHUI's direction, the Housing team was not fully staffed until March 2010 (n = 7 FTE). The community agency's ICM team (five case managers FTE) was constituted from the very start of the Montreal At Home/Chez Soi project (fall 2009), and remained stable and fully staffed throughout the study period. The three participating organizations of the At Home/Chez Soi project also had different levels of familiarity and background with Housing First and related community recovery concepts, resulting in easier adoption of the approach by the community agency compared to the MHUI or the HSSC. The MHUI and the HSSC both entered uncharted territory, the former in trying to develop private housing, and the latter in providing ACT and ICM services for homeless people with severe mental disorders. The questionnaire results from the service providers indicated mixed perceptions of the organizational features. Only a small majority of service providers were satisfied or highly satisfied with their inter-professional relations within their team (65%, n = 11), or their work climate (59%, n = 10). A minority reported being satisfied or highly satisfied with their team workload (29%, n = 5), since recruitment was intensive, and the demands for engaging and housing new participants were high. Conversely, 76% (n = 13) were satisfied or highly satisfied with their working conditions, 71% (n = 12) with their training, and 88% (n = 15) with the leadership of the At Home/Chez Soi project. Only 48% (n = 8) of service providers, however, expressed satisfaction with inter-professional relations with other project teams, and 56% (n = 9) with relations within the healthcare system.
These numbers showed managers and team leaders that most conditions within and across teams required significant improvement. The marked level of satisfaction among service providers with regard to the leadership of the At Home/Chez Soi project and, to a lesser extent, toward their working conditions, was nevertheless indicative of a strong commitment to the project. In spite of these difficulties, 84% of the members of the clinical teams believed that their work was judged satisfactory or highly satisfactory by users. They felt that they had achieved a therapeutic alliance with 74% of the users they had served. Surveyed users were also generally satisfied with the help provided by the teams, although some noted that there was too much staff turnover, while a few reported having to wait too long for housing. These concerns were voiced in focus groups, in which the great majority of users nevertheless reported being concerned with key problems such as loneliness, social isolation, poverty and difficult integration within the community.

(5) Strategies of implementation: governance structures, training programs, toolkits and assessments of fidelity to the program's components

Governance

These different strategies were identified by stakeholders as key enabling factors in the project implementation process. In terms of governance, the National Research Team played a leading role at the Montreal site because of the need for standardization across sites, and because the majority of decisions having an impact on the research parameters had to be reported to it. This standardization led to a certain "loss of the project's meaning" and to a certain disinterest among some local stakeholders, who were unsatisfied with their role as essentially project operators. At the Montreal site, the coordinator exerted considerable control, acting as a buffer between the various interest groups within the project (e.g., clinical, organizational, research, users, and national/local). This power, however, was more persuasive than authoritative, since there was no hierarchical control over the organizations involved in the project. During this implementation phase, the local coordinator was also the head of the Housing team, which was the focus of much criticism and conflict with the clinical team leaders. The neutrality of such a position was a key issue, and at the end of this study period, the steering committee recommended that the coordinator be appointed full time, and that a new head be nominated for the Housing team. The coordinator exercised leadership within a two-headed structure that also involved the Montreal principal investigator. A significant number of people thought the steering committee's mandate was unclear. The in-between position of the steering committee relative to the National Research Team and the operational integration committee made it even more difficult to define its mandate, and thus the steering committee ended up playing more of a consultative role. Conversely, the majority of stakeholders considered that the operational integration committee was the most inclusive and successful committee, given its ability to achieve results and resolve tensions. It weighed information about what worked well and the difficulties met along the way in order to reach consensual solutions.
Concerning the peer users council, its members regularly attended the meetings of all the project's committees, took real ownership of the issues, understood them fully and acted as effective advocates, according to the majority of managers. However, the peer users council did not follow through on proposed activities, remained little known among users and thus failed to live up to expectations. According to the peers themselves, this state of affairs was the result of the council not having been involved closely enough in the project's planning and thus not having had the necessary tools to conduct its activities.

Training programs

During the period under study (essentially 2010), teams also benefitted from extensive training, webinars and coaching, as needed. Communities of practice emerged across sites and served to improve the teams' functional capacity and practices. On average, team members had 10.4 days of mental-health-related education, 9.7 days of training in ACT or ICM techniques and 5.1 days of instruction about homelessness. According to most service providers, training fostered a sense of belonging to the project and among the newly constituted teams. Constant staff turnover among HSSC teams hindered learning activities, however, as did the need to meet urgent needs (e.g., user crises) while trying to integrate new concepts.

Toolkits

Various toolkits were developed to support the At Home/Chez Soi project. The Housing team introduced a list of vacant housing units, a description of each unit (e.g., number of rooms, brightness level), a quality evaluation form (e.g., safety, cleanliness), a spreadsheet on the percentage of rent to pay, a photo gallery of apartment dwellings and a geographical map showing their location. Meanwhile, the clinical teams, especially the community agency ICM team, developed a scale of readiness to change and other instruments such as crisis plans, records of user needs, life stories, and neighbourhood maps including resources available for users (e.g., food banks, day care centres). The toolkits developed by HSSC teams were more formal, given that they had to follow the institution's established standards of clinical practice (e.g., use of computer resources, intervention plans). All teams favoured motivational interviewing and a strength-based approach, but this was especially true of the community agency ICM team. In addition, new team recruits were always paired with another professional so as to adapt to the different aspects of the work. While all teams prioritised in-house training, especially role-playing, staff turnover at the HSSC undermined the appropriation of the concepts and approaches put forward by the At Home/Chez Soi project.

Fidelity to the program's components

During the third implementation phase in the fall of 2010, the National Research Team conducted an assessment of fidelity to the program's components [3]. This evaluation created expectations and subsequent tensions between teams, although it was meant to be followed up to improve team functioning. A consultant specialised in ACT and ICM was also hired in an effort to define the teams' duties more clearly and improve coordination between them. Although the four team leaders met regularly at the operational integration committee, and had occasional conversations, the information, according to the majority of stakeholders, did not trickle down systematically to service providers. There was no formal mechanism or boundary spanner to bring the teams closer together.
Nonetheless, at the end of the period under study, the majority of stakeholders agreed that there were positive changes with respect to the overall synergy between the project components (e.g., consolidation of HSSC teams, improved task distribution among teams and governing structures, clarification of the peer users council's mandate, full-time employment of the site coordinator and better neutrality for that function).

Discussion

This study analysed the initial phase (October 2009 to December 2010) of the implementation process of the Montreal At Home/Chez Soi project, which offered housing and community follow-up to homeless individuals with severe mental disorders. Using a conceptual framework comprising five key areas, the crucial aspects that created roadblocks during the project's implementation were identified. The results confirmed the presence of positive and negative factors in each of the five key areas. Concerning the Intervention Characteristics, the experience of the Montreal At Home/Chez Soi project showed that an evidence-based practice with obvious strengths and advantages does not automatically lead to success [16]. Implementation always involves negotiations with stakeholders. If the project is top-down, as was the case for At Home/Chez Soi, its supporters have to argue a strong case for the gains to be made and the needs to be fulfilled [27]. In this instance, the severe needs of the homeless and the lack of services were serious considerations in favour of the project's implementation. The complexity of the At Home/Chez Soi project, however, constituted a significant obstacle to its implementation [16]. It involved the recruitment, within a short time frame, of 500 users, 300 of whom were to have access to housing of their choice and receive the services of an ACT or ICM team according to the severity of their mental disorders. Any delay could jeopardise the entire process. Moreover, the project could not be fragmented into more manageable parts and progressively implemented [16]. The literature tells us that simple innovations are more likely to be well received and successfully implemented [16,28]. Another major barrier to the implementation of the At Home/Chez Soi project was its operation as a multiple-site research pilot project [12,29] and its top-down approach conceived at the national level, which left relatively little leeway for adjustments at the local level. According to the literature, it is easier to start a new venture if local resources are brought to bear on the process [16,30]. It was within the Context of Implementation, however, that serious barriers to the implementation of the Montreal At Home/Chez Soi project were most evident. It is widely acknowledged that a positive relationship with government or mental health authorities facilitates the implementation process [27,31]. While the Montreal At Home/Chez Soi project could rely on the MHCC as a firm champion, it met resistance from the provincial government, which hampered its success. Project promoters presented it as "the solution" to homelessness, and a better approach than social housing. This did not play well in Quebec, where there is widespread support for social housing programs. There was a real clash of cultures among professionals. Stakeholders involved in the fight against homelessness felt that their expertise and long experience were being dismissed by the project promoters, who worked primarily in the field of psychiatry [26].
In a previous study concerning the implementation of the Housing First program in an American suburban county, Felton [6] found that this program, likewise described as a new practice with unique expertise, also met resistance from local authorities. In Montreal, key community organizations that were active players in the homelessness field opposed the project, and much work was invested in persuading them not to boycott it [26]. This brought to the fore the importance of considering the views of external networks and sustaining healthy relationships with them to ensure the success of a new undertaking [12,32]. Advocates of innovations like Housing First, and government authorities, should plan for and anticipate these tensions. With regard to the implementation process, several authors have determined that it occurs in different stages and does not follow a linear trajectory. For instance, Greenhalgh and colleagues [16] identified three stages of implementation: knowledge-awareness, evaluation-choice and adoption-implementation. Fleury et al. [27] refer to these as "problem-setting", "direction-setting", and "structuring". The At Home/Chez Soi project also followed three stages of implementation. The first was characterised by the firm belief that the project and the massive investments it entailed could achieve significant results in terms of user recovery. This could be seen as the "honeymoon and strong-support phase" of the project. Then came the "pressure-to-achieve and problem-solving phase", when the project's implementation gathered momentum and expectations grew. The third phase of "crises and adaptation" evolved out of the collapse of the second phase, but brought accommodations leading to much optimism over the project's implementation and achievements. Over the full implementation process, users were highly satisfied with the project but faced struggles such as loneliness, poverty, and difficult community integration. Serious barriers to the implementation of the Montreal At Home/Chez Soi project were also evident within the Organizational Characteristics. This pilot project was innovative in that it brought together three leading organizations and promoted collaboration with a large network. It can be described as a virtual integration program, i.e., a set of service providers having to coordinate their actions to offer diversified and ongoing services to a specific clientele [33-35]. These organizations nonetheless differed extensively in terms of structure, values and practices, which is why it was difficult to create an esprit de corps among the various teams. This experiment shows, and the literature confirms [16], that implementation is easier when organizations have some structural flexibility. According to Rosenheck [36], large organizations are often characterised by conflicting goals and inconsistent participation of key actors. It is also easier for organizations, such as the community agency involved in the Montreal At Home/Chez Soi project (ICM), to offer their cooperation if their values align with those of the new initiative [31,37]. Moreover, the quality of the providers' supervision had a significant positive effect on the implementation process [38]. Previous studies have found that frequent and abrupt turnover of supervisors or staff seriously hinders implementation [32,38,39]. The other side of the coin is that turnover can lead to the hiring of more willing and competent staff [32].
Finally, the experience of the Montreal At Home/Chez Soi project confirmed the necessity of effective communication between service providers, teams and the network involved [40]. According to the literature, clear communication of mission and goals among the various providers, and positive relationships between them, promote cooperation, which contributes to the ultimate success of an endeavour [12,39,41-43]. Regarding the Strategies of Implementation, the literature and the project's history both attest to the positive impact of strong leadership [38]. According to Brunette et al. [31], successful projects tend to benefit from the active participation of mid-level managers. In the case of the Montreal At Home/Chez Soi project, the operational integration committee exercised leadership at the operational level, but the absence of true strategic leadership able to promote local interests at the national level posed a serious barrier to the project's implementation and resulted in disinterest among some local stakeholders. The experience of the Montreal At Home/Chez Soi project also confirms the value of staff training [16,31] in the success of the implementation process. Training contributes to the propagation of knowledge, allows staff to become familiar with the tools needed for effective functioning, and consequently improves the confidence of employees in their ability to perform their job [12]; it also fosters unity among team members [26]. To be effective, however, training and information dissemination need to be integrated with strategies encouraging the acquisition and maintenance of sound practices, coupled with coaching [14]. Many studies have emphasised this aspect [27,44] and have shown it to be at least as valuable as planning and other clinical or administrative procedures at the strategic and operational levels. The At Home/Chez Soi project involved considerable effort in that regard. Lastly, project evaluation such as fidelity assessment serves to take into account challenges in the implementation process and make the necessary corrections. While there is no denying the importance of this step, it can still be a source of severe stress [12,39].

Conclusions

This study of the Montreal At Home/Chez Soi project demonstrated the difficulty of implementing a complex new program in the social and healthcare system. While the project faced many barriers, minimal conditions were also achieved. At the end of the period under study, major tensions between organizations and teams were significantly reduced, which supported its full implementation. However, when it ended in 2013, although it had had positive user impacts [45,46], the Montreal At Home/Chez Soi project unfortunately proved unsustainable, which calls into question the relevance of achieving a significant number of positive conditions in each area of the conceptual framework. In the specific case of the Montreal At Home/Chez Soi project, most hindering factors stemmed from the context of implementation, followed by the organizational characteristics, intervention characteristics, implementation process and strategies of implementation. While there are limitations in generalizing our results to other studies on implementation, the Montreal At Home/Chez Soi project thus served to emphasise the importance of identifying all the conditions that could hinder or enable a project, and of trying to fix most negative aspects before launching it.
It also showed that the success of a project depends largely on achieving the following conditions: support of the key actors within the social network, especially government authorities and long-term coalitions in the field; adaptation of the project at the site level; and compatible visions and approaches among project stakeholders. Other factors of successful project implementation are close supervision and support of staff at all hierarchical levels, human resources stability, collaboration among teams and with the social network (promotion of boundary spanners), and adequate training together with effective deployment and integration of tools into practices. Others relate to the governance of the project and the various levels of authority, namely a clear definition of the mandate of each authority, and a collegial distribution of power among stakeholders that lets them play meaningful roles.
A sequential insertion heuristic for the initial solution to a constrained vehicle routing problem

The Vehicle Routing Problem (VRP) is a well-researched problem in the Operations Research literature. It is the view of the authors of this paper that the various VRP variants have been researched in isolation. This paper embodies an attempt to integrate three specific variants of the VRP, namely the VRP with multiple time windows, the VRP with a heterogeneous fleet, and the VRP with double scheduling, into an initial solution algorithm. The proposed initial solution algorithm proves feasible for the integration, while the newly introduced concept of time window compatibility decreases the computational burden when using benchmark data sets from the literature as a basis for efficiency testing. The algorithm also improves the quality of the initial solution for a number of problem classes.

Introduction

The Vehicle Routing Problem (VRP) is a well-researched problem in the Operations Research literature. The main objective in this type of problem is to minimize an objective function value, which is typically the distribution cost for individual carriers. The area of application is wide, and specific variants of the VRP transform the basic problem to conform to application-specific requirements. It is the view of the authors that the various VRP variants have been researched in isolation, with little effort to integrate the various problem variants into an instance that is more appropriate to the South African particularity with regard to logistics and vehicle routing.

The VRP may be described as the problem of assigning optimal delivery or collection routes from a depot to a number of geographically distributed customers, subject to side constraints. The most basic version of the VRP may be defined in terms of a bi-directed complete graph G = (V, E), where V = {v_0, v_1, ..., v_n} is a set of vertices, with v_0 representing the depot where m identical vehicles, each with capacity Q, are located. The remaining vertices, denoted by V \ {v_0}, represent customers, each having a non-negative demand q_i and a non-negative service time s_i [17]. The edge set connecting the vertices is given by E = {(v_i, v_j) | v_i, v_j ∈ V, i ≠ j}. A distance matrix C = {c_ij} is defined on E. In some contexts, c_ij may be interpreted as travel cost or travel distance from vertex v_i to vertex v_j. Hence, the terms distance, travel cost, and travel time are used interchangeably. The VRP consists of designing a set of m vehicle routes having a minimum total length such that:

• each route starts and ends at the depot,
• each remaining vertex (V \ {v_0}) is visited exactly once by exactly one vehicle,
• the total demand of a route does not exceed Q, and
• the total duration (including service and travel time) of a route does not exceed a preset limit L.

The VRP is an NP-hard combinatorial optimization problem for which several exact and approximate solution methods have been proposed (see Laporte [8] for a review). Early researchers, such as Clarke and Wright [1], realized that exact algorithms can only solve relatively small problems, but a number of heuristic algorithms have proved very satisfactory, in many cases yielding near-optimal solutions to relatively large problems.
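As a concrete illustration of the constraints above, the following minimal Python sketch checks route feasibility and computes the objective for the basic VRP. The distance-matrix representation and all names are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the basic VRP described above. Vertex 0 is the depot;
# each route must start and end there, respect vehicle capacity Q, and stay
# within the duration limit L (travel plus service time).

def route_feasible(route, demand, service, dist, Q, L):
    """route: customer indices, e.g. [3, 1, 7]; depot 0 is implicit."""
    if sum(demand[i] for i in route) > Q:          # capacity constraint
        return False
    duration, prev = 0.0, 0                        # start at the depot
    for i in route:
        duration += dist[prev][i] + service[i]     # travel + service time
        prev = i
    duration += dist[prev][0]                      # return to the depot
    return duration <= L

def total_length(routes, dist):
    """Objective: total length of all routes, each closed at the depot."""
    length = 0.0
    for route in routes:
        prev = 0
        for i in route:
            length += dist[prev][i]
            prev = i
        length += dist[prev][0]
    return length
```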
The basic VRP is based on a number of assumptions, such as utilizing a homogeneous fleet, a single depot, and allocating one route per vehicle. These assumptions may be eliminated or relaxed by introducing additional constraints to the problem. This implies increasing the complexity of the problem and, by restriction, classifies the extended problem as NP-hard. It should be noted that most of these additional constraints are often implemented in isolation, without integration, due to the increased complexity of solving such problems. Thus the problem statement: Is it possible to solve a vehicle routing problem with multiple integrated constraints?

Finding a feasible, integrated initial solution to a hard problem is the first step in addressing the scheduling issue. In this paper an algorithm is proposed that integrates three specific variants of the VRP. The paper also contributes to reducing the computational burden by proposing a concept referred to as time window compatibility (TWC) to evaluate the insertion of customers at positions within routes intelligently. The authors investigate the feasibility of integrating multiple soft time windows, a heterogeneous fleet and double scheduling constraints into a single problem instance, referred to simply in this paper as the Vehicle Routing Problem with Multiple Constraints (VRPMC).

A time window is the period of time during which deliveries can be made to a specific customer, indexed by i, and has three main characteristics: the earliest allowed arrival time, denoted by e_i, the latest allowed arrival time, denoted by l_i, and whether the time window is considered soft (allowing a penalized late service) or hard (no late service allowed). The VRP with time windows is an extension that has been researched extensively [5,14,15,16].

Gendreau et al. [3] propose a solution methodology for cases where the fleet is heterogeneous, that is, where the fleet is composed of vehicles with different capacities and costs. Their objective is to determine what the optimal fleet composition should be; this problem is referred to as either the Heterogeneous Fleet Vehicle Routing Problem (HVRP) or the Fleet Size and Mix Vehicle Routing Problem (FSMVRP). Taillard [14] formulates the Vehicle Routing Problem with a Heterogeneous fleet of vehicles (VRPHE), where the number of vehicles of type t in the fleet is limited, the objective being to optimize the utilization of the given fleet. Salhi and Rand [12] incorporate vehicle routing into the vehicle composition problem, and refer to it as the Vehicle Fleet Mix problem (VFM).

Double scheduling occurs where vehicles are routed in a manner that allows a vehicle to complete one route and return to the depot to replenish its capacity, i.e., load for deliveries or unload collected cargo, before embarking on a subsequent route. The aggregated routes for a vehicle are referred to as a tour, and a vehicle is required to complete its tour within the depot's provided time window.

The concept of TWC is introduced in §2 along with the initial solution algorithm. The results from simulated data sets are presented in §3, before conclusions are drawn and a research agenda is established in §4.
An initial solution approach

Heuristics typically use a greedy approach to obtain a good initial solution in an efficient manner, and then incrementally improve the solution by neighborhood exchanges or local searches. Solomon [13] divides VRP tour-building algorithms into either sequential or parallel methods. Sequential procedures construct one route at a time until all customers are scheduled. Parallel procedures are characterized by the simultaneous construction of routes, where the number of parallel routes may either be limited to a predetermined number, or formed freely. Solomon concludes that, of the five initial solution heuristics evaluated, the Sequential Insertion Heuristic (SIH) proved to be very successful, both in terms of the quality of the solution and the computational time required to find it [9].

When finding an initial solution to a routing problem, the initialization criterion refers to the process of finding the first customer to insert into a route. The most commonly used initialization criteria are the farthest unrouted customer, and the customer with the earliest deadline (the earliest latest allowed arrival time). The first customer inserted into a route is referred to as the seed customer. Once the seed customer has been identified and inserted, the SIH algorithm considers, for the unrouted nodes, the insertion place that minimizes a weighted average of the additional distance and time needed to include a customer in the current partially constructed route; this is referred to as determining the insertion criterion. The third step, the selection criterion, tries to maximize the benefit derived from inserting a customer into the current partial route rather than on a new direct route. Note that the terms nodes and customers are used interchangeably. It can easily be shown that the number of criteria calculations for the SIH algorithm is a third-order polynomial function of the number of nodes in the network.

A shortcoming of Solomon's SIH [13] is that it considers all unrouted nodes when calculating the insertion and selection criteria for each iteration. The fact that all unrouted nodes are considered makes it computationally expensive. The VRP variant considered in this paper has multiple additional constraints. The occurrence of infeasible nodes, due to their incompatible time windows, in a partially constructed route therefore becomes significant. The introduction of the TWC concept assists in identifying and eliminating the obviously infeasible nodes. This results in a more effective and robust route construction heuristic.
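The three-step construction loop just described can be sketched schematically as follows. The `seed_rule`, `insertion_cost` and `benefit` callables are hypothetical placeholders for the initialization, insertion and selection criteria, so this is only an outline of the SIH, not the paper's code.

```python
# Schematic sketch of Solomon-style sequential insertion: build one route
# at a time by picking a seed, then repeatedly inserting the best unrouted
# customer at its best position until no feasible insertion remains.

def sequential_insertion(customers, seed_rule, insertion_cost, benefit):
    unrouted, routes = set(customers), []
    while unrouted:
        route = [seed_rule(unrouted)]              # initialization criterion
        unrouted.remove(route[0])
        while True:
            best = None
            for u in unrouted:
                for pos in range(len(route) + 1):
                    cost = insertion_cost(route, pos, u)   # insertion criterion
                    if cost is None:               # infeasible insertion
                        continue
                    gain = benefit(route, pos, u, cost)    # selection criterion
                    if best is None or gain > best[0]:
                        best = (gain, u, pos)
            if best is None:                       # no feasible insertion left
                break
            _, u, pos = best
            route.insert(pos, u)
            unrouted.remove(u)
        routes.append(route)                       # start a new route next
    return routes
```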
The purpose of TWC is to determine the time overlap of all edges, or node combinations, (v_i, v_j), where i, j ∈ {0, 1, 2, ..., n}. During the route construction phase, TWC may be tested, and nodes that are obviously infeasible may be eliminated from the set of considered nodes. The Time Window Compatibility Matrix (TWCM) is a non-symmetrical matrix, as the sequence of two consecutive nodes, v_i and v_j, is critical. The following notation is used in the problem formulation:

e_i: the earliest allowed arrival time at customer i,
l_i: the latest allowed arrival time at customer i,
s_i: the service time at node i,
t_ij: the travel time from node i to node j,
a_j^{e_i}: the actual arrival time at node j, given that node j is visited directly after node i, and that the actual arrival time at node i was e_i,
a_j^{l_i}: the actual arrival time at node j, given that node j is visited directly after node i, and that the actual arrival time at node i was l_i, and
TWC_ij: the TWC when node i is directly followed by node j.

Here TWC_ij indicates the entry in row i, column j of the TWCM. Five scenarios exist and are covered in more detail by Joubert [6]. The scenarios depend on the level and direction of overlap between the time windows of two consecutive customers, and are represented in Figure 1. Each scenario represents a relationship between e_j, l_j, a_j^{e_i} and a_j^{l_i}, and assumes customer j to be serviced directly after customer i. In its generalized form, the expression for TWC_ij is given by

TWC_ij = min{l_j, a_j^{l_i}} − max{e_j, a_j^{e_i}}  if a_j^{e_i} ≤ l_j,  and  TWC_ij = −∞  otherwise.  (1)

The higher the value of TWC_ij in (1), the better the compatibility of the two time windows considered. Therefore an incompatible time window is defined to have a compatibility of negative infinity.

Consider the case where node v_u is considered for insertion between nodes v_i and v_j. As the TWCM has already been calculated, it is possible to test the compatibility of node v_u with the routed nodes v_i and v_j. If either TWC_iu or TWC_uj is negative infinity (−∞), indicating an incompatible time window, the insertion heuristic moves on and considers the next edge, without wasting computational effort on calculating the insertion and selection criteria. Only if the time windows are considered compatible will the insertion and selection criteria be evaluated. The improvement of the computational burden is a direct function of the characteristics of the customer time windows. The computational complexity is of the same order as that of the SIH, with constant-factor improvements expected thanks to the TWCM.

As opposed to the two most common initialization criteria, namely the customer with the earliest deadline and the farthest customer, as suggested by Dullaert et al. [2], the authors of this paper also use the TWCM to identify seed nodes based on their TWC: the number of infeasible time window combinations is calculated for each customer, and the customer with the highest number of infeasible combinations is identified as the seed customer. Ties are broken arbitrarily. It may be possible to have no infeasible time window instances at all. In such a case a total compatibility value may be determined for each node v_a by means of the expression

TWC_a^tot = Σ_{m=1}^{M} (TWC_am + TWC_ma),  (2)

where M denotes the number of unrouted nodes. The customer with the lowest total compatibility is selected as the seed customer. A graphical presentation of the initial solution algorithm is given in Figure 2.
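As a concrete sketch of the screening idea, the following Python fragment builds the TWCM from the overlap expression reconstructed in (1) and applies the seed rule described above. The function names and data layout are illustrative assumptions, not the authors' implementation.

```python
import math

NEG_INF = -math.inf

def build_twcm(e, l, s, t, n):
    """twc[i][j]: compatibility of serving node j directly after node i."""
    twc = [[NEG_INF] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(n + 1):
            if i == j:
                continue
            a_e = e[i] + s[i] + t[i][j]   # arrival at j after earliest start at i
            a_l = l[i] + s[i] + t[i][j]   # arrival at j after latest start at i
            if a_e > l[j]:                # j unreachable in time: incompatible
                twc[i][j] = NEG_INF
            else:                         # overlap measure from expression (1)
                twc[i][j] = min(l[j], a_l) - max(e[j], a_e)
    return twc

def seed_customer(twc, unrouted):
    """Seed rule: the customer with the most infeasible combinations."""
    def infeasible_count(a):
        return sum(twc[a][m] == NEG_INF or twc[m][a] == NEG_INF
                   for m in unrouted if m != a)
    return max(unrouted, key=infeasible_count)
```

During insertion, a single lookup such as `twc[i][u] == NEG_INF or twc[u][j] == NEG_INF` then discards an obviously infeasible candidate before any insertion or selection criterion is computed.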
Results

The algorithm's objective function is defined as the total scheduling distance. For algorithmic evaluation purposes, the basic concept of Solomon [13] is used in terms of classifying benchmark data sets as being either clustered (C), randomly distributed (R), or a combination of the two (RC), and having either short or long scheduling horizons.

To incorporate multiple time windows, an extended data set from Homberger [4] is used. The data set was generated in a similar fashion to that of Solomon [13], but contains 200 customers. The time windows of customers 101 through 200 were used as a second time window for customers 1 through 100. Where an overlap of time windows occurred, a single (wider) time window was created by combining the two overlapping windows: the earliest opening time of the two time windows is used along with the latest closing time [7].

The fleet structure proposed by Liu and Shen [11,10], who used it to introduce their insertion-based savings heuristic, is adopted to incorporate a heterogeneous fleet. It is important to evaluate the contribution that the proposed TWC makes to the results of the algorithm. For this purpose a comparative control algorithm is created. The control algorithm differs in only two respects from the proposed algorithm:

• It does not evaluate nodes for TWC when calculating the insertion criteria, and therefore considers every node for insertion on every edge of a partially constructed route.
• As no TWC is calculated for any node, the initialization criterion is changed to identify the seed customer as the unrouted customer with the earliest deadline.

The control algorithm was executed for all problem classes, and the initial solution summaries are provided in Table 1. Although the results from the various problem classes, and even among the instances within a problem class, vary significantly, there is sufficient evidence that the notion of TWC improves the quality of the initial solution, on average over 60 problem instances, by more than 9%. There is, in general, a direct relationship between the distance saving and the computational saving for the proposed algorithm: when there are distance savings, there are also computational savings.

Although the specific problem instance impacts both the quality of the solution found and the time required to find such a solution, it is clear that when customers with tight time windows are either randomly distributed or distributed in a semi-clustered manner, the proposed algorithm performs consistently worse than the control algorithm, both in terms of finding a good quality solution and in the computational time required. Should the R1 and RC1 problem classes be omitted, i.e., should the proposed algorithm not be applied to such problem instances, then average distance and CPU time savings of 14% and 19%, respectively, may be achieved.

Conclusions

The results of the initial solution algorithm proposed in this paper demonstrate that multiple variants of the vehicle routing problem, i.e., multiple soft time windows, a heterogeneous fleet and double scheduling, may indeed be integrated into a single initial solution algorithm.
The initial solution algorithm is the first step in obtaining near-optimal solutions for vehicle routing and scheduling type problems. In this paper the concept of time window compatibility is introduced to ease the computational burden of the algorithm, and to find a better seed customer. The amount by which the proposed algorithm eases the computational burden is a direct function of the time window characteristics of the customers in the network. Numerical results indicate that initial solution algorithms are highly sensitive to the specific problem instance. This paper claims that the proposed algorithm holds both distance and computational savings opportunities for problem instances where customers are clustered, or where longer time windows exist for customers. At worst, the number of evaluative criteria calculations is a third-order polynomial function of the number of nodes in the network, similar to the SIH on which it is based. Currently the TWC is determined after the selection criteria, and future research could evaluate the positioning of the TWC portion of the algorithm.

As the proposed algorithm is but the first step in obtaining a final solution, integration with metaheuristics is required before the algorithm can be implemented. Some metaheuristic algorithms, such as Tabu Search, are sensitive to the quality of an initial solution. The concept of TWC should again be introduced and evaluated at the metaheuristic level as a potential performance improvement tool.

Figure 2: Overview of the initial solution algorithm.

Table 1a: Results for the C1-class of problems.
Table 1b: Results for the C2-class of problems.
Table 1c: Results for the R1-class of problems.
Table 1d: Results for the R2-class of problems.
Table 1e: Results for the RC1-class of problems.
Table 1f: Results for the RC2-class of problems.
Research on Multi-Modal Pedestrian Detection and Tracking Algorithm Based on Deep Learning

In the realm of intelligent transportation, pedestrian detection has witnessed significant advancements. However, it continues to grapple with challenging issues, notably the detection of pedestrians in complex lighting scenarios. Conventional visible light mode imaging is profoundly affected by varying lighting conditions. Under optimal daytime lighting, visibility is enhanced, leading to superior pedestrian detection outcomes. Conversely, under low-light conditions, visible light mode imaging falters due to the inadequate provision of pedestrian target information, resulting in a marked decline in detection efficacy. In this context, infrared light mode imaging emerges as a valuable supplement, bolstering pedestrian information provision. This paper delves into pedestrian detection and tracking algorithms within a multi-modal image framework grounded in deep learning methodologies. Leveraging the YOLOv4 algorithm as a foundation, augmented by a channel stack fusion module, a novel multi-modal pedestrian detection algorithm tailored for intelligent transportation is proposed. This algorithm capitalizes on the fusion of visible and infrared light mode image features to enhance pedestrian detection performance amidst complex road environments. Experimental findings demonstrate that, compared to the Visible-YOLOv4 algorithm, renowned for its high performance, the proposed Double-YOLOv4-CSE algorithm exhibits a notable improvement, boasting a 5.0% accuracy rate enhancement and a 6.9% reduction in the logarithmic average miss rate. This research's goal is to ensure that the algorithm can run smoothly even on a low-configuration 1080 Ti GPU and to improve the algorithm's coverage at the application layer, making it affordable and practical for both urban and rural areas. This addresses the broader research problem within the scope of smart cities and remote ends with limited computational power.

Introduction

In public transportation, pedestrians, as the most numerous group of road users, have always been among the most noteworthy objects in the field of traffic safety, and pedestrian detection on roads is also the most difficult target detection problem in intelligent transportation. The object detection task aims to find the corresponding object categories and coordinate positions in a given input image or video. Because of its wide application, it has been given great importance both in academia and in industry. However, in real public road scenarios, problems such as widely varying target scales, mutual occlusion between pedestrians and objects, and serious interference from the lighting environment make target detection complicated. It is therefore very difficult to develop an accurate and highly robust pedestrian detection system. Traditional pedestrian detection systems have not achieved satisfactory detection results for road pedestrian detection tasks under complex conditions, and a major reason is that attending only to the pedestrian information carried by visible light modal images is too one-sided, ignoring the different appearance and recording forms of the target under different information acquisition methods.
With the current development of the Internet of Things, more and more intelligent sensors continue to appear; these can provide a variety of modal information about pedestrians on the road and the surrounding environment. How to combine these multi-modal image features to enhance pedestrian detection in particularly complex scenes has been widely discussed by researchers. Redmon (2016) proposed the YOLO target detection algorithm, which abandoned the regional feature extraction plus classification and bounding box regression pipeline of the R-CNN series [1]. Jin (2016) used extended ACFs (aggregated channel features) to detect pedestrian targets in complex scenes and proposed an aggregated pedestrian detection method based on binocular stereo vision [2]. Lin (2017) introduced a top-down hierarchical pyramid structure on the basis of the original regional convolutional neural network, proposed the FPN (feature pyramid network) target detection algorithm, and realized the ability to construct high-level semantic feature information at different scales, improving the network's ability to capture targets [3]. König (2017) proposed a new fusion region proposal network (RPN) for the pedestrian detection of multi-spectral video data; experiments verified the optimal convolutional layer for the fusion of multi-spectral image information, but only single-feature-layer fusion was considered, hence the detection of small pedestrian targets is not accurate enough [4]. Qiu (2021) extracted features from infrared images processed by ICA using the SURF algorithm and fused them with a weighted fusion algorithm, in order to test the effectiveness of infrared cameras under conditions of no occlusion, partial occlusion, and severe occlusion [5]. Wang (2021) proposed fusing a visual contrast mechanism with ROI to detect pedestrians in infrared images, but the effect was not good under complex dim lighting [6].

From the above literature on pedestrian detection using multi-modal fusion technology, it can be seen that the difficulty of road pedestrian target detection for intelligent transportation lies in the fact that, no matter how good the algorithm model is, it cannot distinguish the pedestrian target efficiently and reliably from a single-mode visible light image alone. It is therefore very necessary to introduce infrared light mode images into the detection algorithm to improve the actual detection effect. Visible light mode images retain the color and texture information of objects well in the daytime, with good contrast and brightness; the pedestrian imaging effect is good, and it is easy to distinguish background from foreground, so they are very suitable for normal road pedestrian detection tasks. However, at night, or even in special daytime cases of uneven illumination or overexposure, the imaging effect of visible light mode images is very poor, leaving the model unable to obtain reliable information about the target, with the result that conventional pedestrian detection algorithms cannot achieve the desired detection effect.
Considering the diverse economic conditions of various regions, including countries, cities, towns, and rural areas, the importance of accessible and user-friendly detection algorithms becomes evident. This study aims to develop a multi-modal pedestrian detection algorithm that performs efficiently on low-configuration GPUs, such as the 1080 Ti. Additionally, it seeks to extend the algorithm's applicability, ensuring affordability and practicality across both urban and rural settings. This research targets the broader issue within the realm of smart cities and remote areas with limited computational resources, striving to improve the accuracy and robustness of detection outcomes in varied environments.

The research framework depicted in Figure 1 below elucidates the structure of this study. The key contributions of this paper are presented as follows:

(1) A two-stream parallel feature extraction backbone network is devised based on YOLOv4, accompanied by the design of a channel stack fusion module to address the challenge of multi-modal feature fusion.

(2) The process of multi-scale feature fusion layer prediction and enhancement is analyzed, culminating in the determination of the loss function and network calculation method during training to facilitate more effective learning.

(3) By employing a combination of qualitative and quantitative analysis methods, this paper verifies and analyzes the pedestrian detection performance of the algorithm through experimentation.

Dual-Stream Parallel Backbone Network

In this paper, the CSPDarknet53 architecture is employed as the feature extraction backbone network, in conjunction with the YOLOv4 algorithm. With this setup, after three network blocks, output features of three different sizes are generated, thereby accomplishing multi-scale prediction. CSPDarknet53 introduces the CSPNet (cross-stage partial network) structure to address the drawbacks of the original Darknet53 network, such as high computational complexity and limited learning and feature extraction capabilities. By leveraging channel compression, CSPNet reduces the number of parameters in subsequent network modules by utilizing input feature matrices from two parallel branches. This optimization aids in minimizing video memory consumption, facilitating calculation, and enabling the propagation of segmented gradient information. The structure of CSPNet is depicted in Figure 2.
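As a rough illustration of the split-and-merge idea, the following PyTorch sketch shows a generic CSP-style block. The channel split, block count and layer choices are illustrative assumptions and not the exact CSPDarknet53 configuration.

```python
import torch
import torch.nn as nn

# A minimal sketch of the CSPNet idea described above: the input channels
# are divided into two parallel branches, only one of which passes through
# the heavy convolutional blocks, before the two are re-fused.

class CSPBlock(nn.Module):
    def __init__(self, ch, blocks):
        super().__init__()
        self.part1 = nn.Conv2d(ch, ch // 2, 1)          # cheap bypass branch
        self.part2 = nn.Conv2d(ch, ch // 2, 1)          # transform branch
        self.blocks = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(ch // 2, ch // 2, 3, padding=1),
                          nn.BatchNorm2d(ch // 2), nn.Mish())
            for _ in range(blocks)])
        self.merge = nn.Conv2d(ch, ch, 1)               # re-fuse both branches

    def forward(self, x):
        a = self.part1(x)                               # short gradient path
        b = self.blocks(self.part2(x))                  # deep feature path
        return self.merge(torch.cat([a, b], dim=1))
```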
Multi-Modal Feature Fusion Module

To facilitate the fusion of visible and infrared mode feature information, this paper introduces a channel stack fusion scheme built on the aforementioned dual-stream parallel feature extraction backbone network [8]. In this scheme, the feature matrices from the two different modes are first stacked in depth, followed by the use of a channel attention module to re-weight each stacked channel. Subsequently, a CBM (convolution + batch normalization + Mish) layer is employed to reduce the dimensionality of the fused feature matrix. It is worth noting that the CBM layer serves as a fundamental component in the YOLOv4 network architecture and is primarily responsible for feature extraction and transformation. The structure of the fusion module utilizing channel stacking is depicted in Figure 4 below.
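A minimal PyTorch sketch of such a channel stack fusion module follows. Since the attention design is not specified in detail here, a squeeze-and-excitation-style channel attention is assumed purely for illustration.

```python
import torch
import torch.nn as nn

# Sketch of the channel stack fusion described above: stack visible and
# infrared feature maps along the channel axis, re-weight the stacked
# channels with an attention module (SE-style assumption), then reduce
# dimensionality with a CBM (conv + batch norm + Mish) layer.

class ChannelStackFusion(nn.Module):
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.attn = nn.Sequential(                       # channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * ch, 2 * ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(2 * ch // reduction, 2 * ch, 1), nn.Sigmoid())
        self.cbm = nn.Sequential(                        # CBM reduction layer
            nn.Conv2d(2 * ch, ch, 1), nn.BatchNorm2d(ch), nn.Mish())

    def forward(self, f_vis, f_ir):
        stacked = torch.cat([f_vis, f_ir], dim=1)        # depth-wise stacking
        weighted = stacked * self.attn(stacked)          # re-weight channels
        return self.cbm(weighted)                        # back to ch channels
```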
Multi-Scale Feature Fusion Network

The feature pyramid network (FPN), built upon the original multi-scale feature extraction framework, introduces a top-down fusion branch to enhance the fusion of deep feature maps with shallow feature matrices. This integration enables the shallow detection branch to access more abstract feature semantic information, thereby significantly boosting the network's multi-scale target detection performance [9]. Although FPN indirectly integrates shallow semantic features through P3, further enhancements are required, as the multi-scale feature fusion effect of FPN can still be improved. This is due to the elongated information flow route and the presence of multiple convolution operations within it.

In the YOLOv4 target detection algorithm proposed in this paper, a path aggregation network (PANet) is introduced to address these limitations. PANet facilitates faster information fusion by introducing bottom-up bypass connections between the FPN's P1 and P3 layers, thereby shortening the information path between low-level and high-level features. This enriches the network's feature representation capabilities and improves the accuracy of information storage. The PANet multi-scale feature fusion network structure, an enhancement built upon FPN, is depicted in Figure 5.
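The following compact PyTorch sketch illustrates the top-down (FPN) and bottom-up (PANet) pathways over three feature levels. Channel sizes and layer choices are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Sketch of FPN + PANet-style fusion over three backbone levels: c3 is the
# shallow/high-resolution map, c5 the deep/low-resolution map.

class FPNPAN(nn.Module):
    def __init__(self, ch=(256, 512, 1024)):
        super().__init__()
        self.lat = nn.ModuleList(nn.Conv2d(c, 256, 1) for c in ch)   # lateral 1x1
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.down = nn.Conv2d(256, 256, 3, stride=2, padding=1)      # bottom-up

    def forward(self, c3, c4, c5):
        # top-down pathway (FPN): propagate semantics to shallow levels
        p5 = self.lat[2](c5)
        p4 = self.lat[1](c4) + self.up(p5)
        p3 = self.lat[0](c3) + self.up(p4)
        # bottom-up pathway (PANet): short path from low to high levels
        n3 = p3
        n4 = p4 + self.down(n3)
        n5 = p5 + self.down(n4)
        return n3, n4, n5
```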
The hybrid mode multi-scale pedestrian detection network, proposed in this paper and based on YOLOv4, enhances the spatial pyramid pooling network (SPPNet) utilized in YOLOv4 by integrating it with the CSPNet structure. The SPP module employs multiple pooling layers to generate features of varying abstraction levels, culminating in the fusion and output of a fixed-size matrix. This significantly enhances the perception of deep-level feature maps [10]. In conjunction with CSPNet, the SPP module in this paper amplifies the gradient propagation path, thereby augmenting the network's learning capabilities. This combination leverages the strengths of both SPPNet and CSPNet, retaining SPPNet's capacity to process multi-scale features while addressing gradient disappearance issues through CSPNet, consequently enhancing the network's learning capabilities.

The hybrid mode multi-scale pedestrian detection network presented herein is adept at tackling challenges inherent in pedestrian detection tasks, such as scale changes, variations in pose, and complex background scenarios. Consequently, it elevates the accuracy and robustness of detection outcomes. Figure 6 illustrates the CSP-SPPnet structure, showcasing the integration of CSPNet and SPPNet components.
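As an illustration, the sketch below shows a YOLOv4-style SPP block wrapped in a CSP split. The 5/9/13 kernel sizes follow the common YOLOv4 choice; the remaining details are assumptions rather than the paper's exact CSP-SPPnet.

```python
import torch
import torch.nn as nn

# SPP: parallel max-pooling at several kernel sizes, then concatenation,
# so one fixed-size output carries several receptive-field scales.

class SPP(nn.Module):
    def __init__(self, ch, pools=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(k, stride=1, padding=k // 2) for k in pools)
        self.reduce = nn.Conv2d(ch * (len(pools) + 1), ch, 1)  # fuse back

    def forward(self, x):
        feats = [x] + [p(x) for p in self.pools]
        return self.reduce(torch.cat(feats, dim=1))

class CSPSPP(nn.Module):
    """CSP wrapper: one branch passes through SPP, the other bypasses it."""
    def __init__(self, ch):
        super().__init__()
        self.split_a = nn.Conv2d(ch, ch // 2, 1)
        self.split_b = nn.Conv2d(ch, ch // 2, 1)
        self.spp = SPP(ch // 2)
        self.merge = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        a = self.spp(self.split_a(x))      # multi-scale pooling path
        b = self.split_b(x)                # short gradient path
        return self.merge(torch.cat([a, b], dim=1))
```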
Prediction Layer Design

In this paper, the YOLOv4 algorithm serves as the backbone of the proposed pedestrian detection framework. It partitions the input feature map into a grid of S × S at different scales, with each grid cell responsible for generating B bounding boxes. These bounding boxes are constrained by anchor boxes, which encode the geometric characteristics of the predicted targets, thus expediting the convergence rate of the prediction module and enhancing detection accuracy.

Consequently, the primary role of the prediction layer is to regress B bounding box parameters for each grid cell on the feature map. These parameters include the coordinates (t_x, t_y) of the center point relative to the upper-left corner of the cell, as well as the width (t_w) and height (t_h) parameters of the bounding box. Additionally, the prediction layer is tasked with outputting confidence scores (conf) indicating the likelihood of a bounding box containing the target, along with scores for the target classes [c_1, c_2, ..., c_n].

Therefore, the prediction layer generates vectors of size B × (4 + 1 + num_classes) for each grid cell, resulting in an S × S × B × (4 + 1 + num_classes) matrix for the entire input feature map. The process of the prediction layer is illustrated in Figure 7 below.
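To make the vector layout concrete, the sketch below reshapes a raw head output into the S × S × B × (4 + 1 + num_classes) form and decodes it with the usual YOLO mapping (sigmoid on the center offsets, confidence, and class scores; exponential scaling of the anchor for width and height). The mapping follows the standard YOLOv3/v4 convention, which the surrounding text appears to assume; the anchor sizes in the example are purely illustrative.

```python
import torch

def decode_predictions(raw, anchors, stride, num_classes):
    """Decode a raw head output of shape (N, B*(5+num_classes), S, S).

    Standard YOLO mapping: the cell offset plus a sigmoid keeps the centre
    inside the predicting grid cell; widths/heights scale anchor priors.
    """
    n, _, s, _ = raw.shape
    b = len(anchors)
    raw = raw.view(n, b, 5 + num_classes, s, s).permute(0, 1, 3, 4, 2)
    # Grid of upper-left cell corners (c_x, c_y)
    cy, cx = torch.meshgrid(torch.arange(s), torch.arange(s), indexing="ij")
    anchors = torch.tensor(anchors, dtype=raw.dtype)          # (B, 2) in pixels
    bx = (torch.sigmoid(raw[..., 0]) + cx) * stride           # b_x = (sigma(t_x)+c_x)*stride
    by = (torch.sigmoid(raw[..., 1]) + cy) * stride
    bw = anchors[:, 0].view(1, b, 1, 1) * torch.exp(raw[..., 2])  # b_w = p_w * e^{t_w}
    bh = anchors[:, 1].view(1, b, 1, 1) * torch.exp(raw[..., 3])
    conf = torch.sigmoid(raw[..., 4])
    cls = torch.sigmoid(raw[..., 5:])
    return bx, by, bw, bh, conf, cls

# Example: S=16, B=3 anchors, 1 class (pedestrian), 512-pixel input -> stride 32
raw = torch.randn(1, 3 * (5 + 1), 16, 16)
out = decode_predictions(raw, anchors=[(30, 70), (60, 140), (120, 280)],
                         stride=32, num_classes=1)
print(out[0].shape)  # torch.Size([1, 3, 16, 16])
```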
To obtain the center position and the width and height of the bounding box relative to the entire input image, a further mapping is needed [11]. This mapping is completed by the YOLO layer after the prediction layer and, following the standard YOLO convention, takes the form

b_x = σ(t_x) + c_x, b_y = σ(t_y) + c_y, b_w = p_w · e^{t_w}, b_h = p_h · e^{t_h},

where (c_x, c_y) is the upper-left corner of the predicting grid cell and (p_w, p_h) are the anchor dimensions. Because the Sigmoid activation function is applied in the YOLO layer when regressing the bounding box, the values of σ(t_x) and σ(t_y) are kept between [0, 1], which effectively ensures that the predicted target center stays within the grid cell performing the prediction and prevents deviation.

In addition, the confidence and category scores in the prediction vector output for each grid anchor box are noteworthy parameters when interpreting the output of the YOLOv4 algorithm used in this paper. On the one hand, the confidence indicates whether there is a foreground target in the current prediction box [12]. On the other hand, it also reflects the degree of overlap between the predicted bounding box and the annotated ground-truth bounding box, that is, the IoU. Therefore, the expected confidence is calculated as

C_i^j = Pr(object) × IoU_pred^truth,

where C_i^j is the j-th predicted bounding box confidence for the i-th grid cell, Pr(object) is the probability that an object exists in the bounding box, and IoU_pred^truth is the intersection-over-union between the predicted bounding box and the annotated ground-truth bounding box.

Determine the Anchor Frame

To obtain anchor frame data that effectively represents the dataset, traditional methods like Faster R-CNN often rely on the empirical judgment of researchers. In YOLOv2 and its subsequent iterations, researchers commonly utilize the k-means algorithm to determine suitable anchor frame data by clustering the real bounding boxes from the annotated information in the dataset. However, the conventional k-means algorithm, which relies on Euclidean distance, may not be ideal for clustering scenarios involving anchor frames. Hence, it becomes imperative to employ a suitable distance metric.

The residual cross-merge area ratio (RCMAR) offers a robust measure of similarity between the shapes and sizes of two bounding boxes. A smaller output value signifies a higher similarity between two bounding boxes, effectively addressing the issue. Therefore, this article adopts RCMAR as the measurement method; following the convention established with YOLOv2, it can be written as

RCMAR(box, centroid) = 1 − IoU(box, centroid).

Based on the clustering results obtained by k-means using this measurement method, this paper introduces genetic algorithms to further optimize and obtain anchor frame data that best represents the dataset. This optimization aims to expedite the convergence of the network and enhance the algorithm's robustness [13].
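A minimal sketch of the anchor clustering step follows. The 1 − IoU distance shown here is the convention introduced with YOLOv2, and I am assuming RCMAR behaves the same way (smaller value means more similar shapes), as the text describes; the toy data and variable names are illustrative.

```python
import numpy as np

def shape_iou(wh, centroids):
    """IoU between boxes and centroids compared at a shared corner,
    i.e. only widths/heights matter (positions are ignored)."""
    inter = (np.minimum(wh[:, None, 0], centroids[None, :, 0]) *
             np.minimum(wh[:, None, 1], centroids[None, :, 1]))
    union = (wh[:, 0] * wh[:, 1])[:, None] + \
            (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        # Distance = 1 - IoU: small when shapes/sizes are similar
        assign = np.argmin(1.0 - shape_iou(wh, centroids), axis=1)
        new = np.array([wh[assign == i].mean(axis=0) if np.any(assign == i)
                        else centroids[i] for i in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids[np.argsort(centroids.prod(axis=1))]  # sort by area

# Toy example: random box widths/heights in pixels
wh = np.abs(np.random.default_rng(1).normal(60, 30, size=(500, 2))) + 5
print(kmeans_anchors(wh, k=9).round(1))
```

In practice, a genetic algorithm (as the paper does) would then mutate these centroids and keep mutations that improve the anchor fitness score described in the next subsection.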
In this paper, the fitness function is computed by dividing the width and height of each bounding box in the dataset by the width and height of each anchor in the anchor box combination. Then, the Maximum Matching Box Ratio (MMBR) is employed to compute the matching score between the current annotated bounding box and the anchor boxes. Finally, the matching scores exceeding a specified threshold (default value 0.25) are aggregated to calculate the average score, which reflects the overall fitness of the anchor box combination. The MMBR and fitness function proposed in this paper can be written as

MMBR(w, h, anchors) = max_i { min[ γ(w / anchor_w_i), γ(h / anchor_h_i) ] }, with γ(x) = min(x, 1/x),

fitness = (1/n) Σ_j MMBR(w_j, h_j, anchors) · [MMBR(w_j, h_j, anchors) > thr].

In the formulas, h and w are the height and width of the annotated bounding frames of the training set, anchors is the anchor frame combination obtained by the genetic algorithm, n is the total number of annotated bounding frames, thr is the threshold applied to the MMBR score, w_j and h_j are the width and height of the j-th annotated bounding frame, and anchor_w_i and anchor_h_i are the width and height of the i-th prior frame in the anchor frame combination.

Loss Function

The loss function of the proposed detection algorithm mainly includes three aspects: bounding box loss, confidence loss, and classification loss.

For the calculation of the positioning loss, CIoU (Complete IoU) Loss is used in this paper instead of MSE [14]. The advantage of this loss function is that it takes into account not only the overlapping area between the predicted and ground-truth bounding boxes, but also the distance between their center points and their aspect ratios. The function can be expressed as

L_CIoU = 1 − IoU + ρ²(b, b^gt) / c² + αν,
ν = (4/π²) · (arctan(w^gt / h^gt) − arctan(w / h))²,
α = ν / ((1 − IoU) + ν).

In the formula, ρ(b, b^gt) is the distance between the centers of the predicted bounding box and the ground-truth bounding box; c is the diagonal length of the smallest enclosing box covering the two bounding boxes; α and ν are the balance coefficient and the aspect ratio coefficient, respectively; w^gt and h^gt are the width and height of the annotated bounding box; and w and h are the width and height of the predicted bounding box.

Experimental Design

To verify the detection performance of the proposed algorithm, the actual effect of pedestrian detection based on the mixed visible-light and infrared modes is tested and compared with a traditional single-mode detection algorithm based on visible light only, in order to demonstrate the effectiveness of the mixed-mode detection algorithm proposed in this paper [15]. This chapter therefore evaluates the actual performance of the algorithm by quantitative and qualitative analysis. Quantitative analysis obtains objective results through the evaluation indicators described above, while qualitative analysis makes subjective judgments by observing the actual pedestrian detection results in different road scenes.

Experimental Configuration

The experiment's code was implemented using the PyTorch deep learning framework. Before training, the network model loads pre-trained weight parameters from the COCO dataset; this transfer learning strategy greatly reduces the training time [16].
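The CIoU expression reconstructed above can be checked against a direct implementation. The sketch below assumes boxes given in (cx, cy, w, h) form and mirrors the published CIoU definition; the confidence and classification terms (cross-entropy in standard YOLOv4) are not shown.

```python
import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    """CIoU loss for boxes given as (cx, cy, w, h) tensors of shape (N, 4)."""
    px, py, pw, ph = pred.unbind(-1)
    tx, ty, tw, th = target.unbind(-1)
    # Corner coordinates of both boxes
    p_x1, p_y1, p_x2, p_y2 = px - pw / 2, py - ph / 2, px + pw / 2, py + ph / 2
    t_x1, t_y1, t_x2, t_y2 = tx - tw / 2, ty - th / 2, tx + tw / 2, ty + th / 2
    inter = ((torch.min(p_x2, t_x2) - torch.max(p_x1, t_x1)).clamp(0) *
             (torch.min(p_y2, t_y2) - torch.max(p_y1, t_y1)).clamp(0))
    union = pw * ph + tw * th - inter + eps
    iou = inter / union
    # rho^2: squared distance between box centres
    rho2 = (px - tx) ** 2 + (py - ty) ** 2
    # c^2: squared diagonal of the smallest enclosing box
    cw = torch.max(p_x2, t_x2) - torch.min(p_x1, t_x1)
    ch = torch.max(p_y2, t_y2) - torch.min(p_y1, t_y1)
    c2 = cw ** 2 + ch ** 2 + eps
    # Aspect-ratio term v and balance coefficient alpha
    v = (4 / math.pi ** 2) * (torch.atan(tw / th) - torch.atan(pw / ph)) ** 2
    alpha = v / (1 - iou + v + eps)
    return 1 - iou + rho2 / c2 + alpha * v

pred = torch.tensor([[50., 50., 20., 40.]])
target = torch.tensor([[55., 48., 22., 36.]])
print(ciou_loss(pred, target))
```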
In the actual experiments, the number of training epochs is set to 50, with a training batch size of 8 images. The Adam optimization algorithm is employed, with an initial learning rate of 0.0013. Additionally, a cosine annealing strategy is utilized for learning rate decay, with a weight decay coefficient of 0.0005. The first three epochs of training serve as a warm-up period (a minimal sketch of this schedule appears at the end of this subsection).

Multi-scale training is adopted to enhance the network's robustness during training. During verification, images are scaled to 512 × 512 pixels. The experimental equipment parameters are detailed in Table 1 below. During the experiment, the input image size was set to 512 × 512. The k-means clustering algorithm and genetic algorithm were utilized to obtain nine anchor frames of different scales based on the KAIST dataset, as illustrated in Table 2 below [17]. The KAIST dataset is a large dataset used for pedestrian detection tasks. It comprises 95,328 images, each containing a visible image and a corresponding long-wave infrared image, totaling 103,128 dense annotations. It serves as a widely used benchmark in the field of multi-spectral pedestrian detection, providing abundant resources for the research and development of pedestrian detection algorithms.

Analysis of Results

In this experiment, the proposed multi-modal pedestrian detection algorithm based on YOLOv4 using the channel stack fusion module is called Double-YOLOv4-CSE, and the common pedestrian detection algorithm based on the single visible-light mode is called Visible-YOLOv4. All the above-mentioned detection algorithm models were trained, validated, and tested on the cleaned KAIST dataset. Through analysis and comparison of the samples predicted by the experimental models, it was found that the detection method using the mixed-mode feature fusion approach proposed in this chapter achieved better detection results, as shown in Figure 8 below. From this, it can be seen that the Double-YOLOv4-CSE proposed in this paper can more accurately detect dark pedestrians on the roadside in low-light scenes such as night and can effectively improve the ability to capture dark pedestrians.
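The training schedule described at the start of this subsection (Adam, initial learning rate 0.0013, weight decay 0.0005, cosine decay over 50 epochs with a 3-epoch warm-up) can be wired up as follows. The linear shape of the warm-up is an assumption, since the text only says that the first three epochs serve as a warm-up period.

```python
import torch

EPOCHS, WARMUP, BASE_LR = 50, 3, 1.3e-3

model = torch.nn.Linear(10, 1)  # stand-in for the detector
opt = torch.optim.Adam(model.parameters(), lr=BASE_LR, weight_decay=5e-4)

def lr_lambda(epoch):
    # Linear warm-up for the first 3 epochs, cosine annealing afterwards
    if epoch < WARMUP:
        return (epoch + 1) / WARMUP
    t = (epoch - WARMUP) / max(1, EPOCHS - WARMUP)
    return 0.5 * (1 + torch.cos(torch.tensor(t * torch.pi))).item()

sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
for epoch in range(EPOCHS):
    # ... one training epoch over batches of 8 multi-scale images ...
    opt.step()   # placeholder for the real forward/backward/update loop
    sched.step()
print(f"final lr: {opt.param_groups[0]['lr']:.2e}")
```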
At the same time, to further verify the effect of introducing the multi-modal approach and the corresponding feature fusion method on pedestrian detection performance across the whole scene, this algorithm was compared, in terms of logarithmic average miss rate and average accuracy, with the current mainstream and mature multi-modal pedestrian detection algorithms ACF + T + THOG [18], Fusion-RPN [4], GFD-SSD [19], and AR-CNN [20] on the KAIST dataset. The AR-CNN algorithm serves as a mechanism for amalgamating RGB images and infrared images. It leverages the global feature maps derived from both modalities to identify candidate boxes. By expanding the coverage area of candidate boxes identified by the region proposal network (RPN), feature maps corresponding to individual modalities are extracted. Employing the infrared mode as the reference, the algorithm predicts the relative offset of the RGB mode. Subsequently, alignment of the two feature maps is performed based on this offset.

The default IoU threshold was 0.5, and the experimental results are shown in Table 3 below. From the data presented in the table, it is evident that the proposed Double-YOLOv4-CSE algorithm exhibits superior detection performance compared to other multi-modal pedestrian detection algorithms. Compared with the AR-CNN algorithm, the proposed model has certain advantages in average accuracy and logarithmic average miss rate. Even when compared with the strong Visible-YOLOv4 algorithm, the proposed Double-YOLOv4-CSE algorithm achieves a 5.0% improvement in accuracy and a 6.9% reduction in the logarithmic average miss rate. These results strongly indicate the outstanding performance of the Double-YOLOv4-CSE algorithm in pedestrian detection and tracking tasks.

The core concept of YOLO algorithms is to treat object detection as a regression problem, achieving fast and accurate detection by directly predicting bounding boxes and object categories. This fundamental principle remains consistent across different versions of the YOLO algorithm. Consequently, strategies such as multi-modal information fusion and contextual information enhancement employed in the Double-YOLOv4-CSE algorithm can also be implemented in other versions of the YOLO algorithm.

Therefore, the methods developed in this research are also suitable as embeddable techniques. The approach can be considered an extendable module applicable to various state-of-the-art (SOTA) methods. As shown in Table 4 above, integrating the multi-modal approach into advanced algorithms like YOLOv6 and YOLOv8 [21] results in improved accuracy, demonstrating the potential of this method to enhance existing advanced algorithms. Furthermore, several pedestrian detection methods currently recognized as state-of-the-art (SOTA) were selected and compared with the proposed Double-YOLOv4-CSE method.

With the iterative upgrades of YOLO versions, from YOLOv6 to YOLOv8, resource consumption and computational requirements increase. Despite this, YOLOv4 offers a commendable balance between performance and cost-effectiveness, requiring only hardware such as an NVIDIA GeForce GTX 1080 Ti for implementation.

Given the varying economic strengths of different countries, cities, towns, and rural areas, the accessibility and ease of use of detection algorithms are particularly important. The proposed Double-YOLOv4-CSE algorithm offers the widest coverage and is better suited for smart cities with diverse economic development situations. Additionally, its deployment is comparatively straightforward.
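For reference, the logarithmic average miss rate used in these comparisons is conventionally computed by averaging the miss rate at nine false-positives-per-image (FPPI) points spaced evenly in log space between 0.01 and 1 (the Caltech protocol). The sketch below assumes that convention, since the paper does not restate the definition.

```python
import numpy as np

def log_average_miss_rate(miss_rate, fppi):
    """Caltech-style LAMR: average miss rate at 9 log-spaced FPPI points.

    `miss_rate` and `fppi` are matched arrays traced out by sweeping the
    detector's confidence threshold (fppi ascending, miss_rate descending).
    """
    ref = np.logspace(-2.0, 0.0, 9)  # 0.01 ... 1.0 FPPI
    samples = []
    for r in ref:
        idx = np.where(fppi <= r)[0]
        # If the curve never reaches this FPPI, take the first point
        samples.append(miss_rate[idx[-1]] if len(idx) else miss_rate[0])
    # Geometric mean (average in log space), guarding against log(0)
    return np.exp(np.mean(np.log(np.maximum(samples, 1e-10))))

# Toy curve: miss rate falls as FPPI rises
fppi = np.linspace(0.001, 2.0, 200)
miss = np.clip(1.0 / (1.0 + 5 * fppi), 0.05, 1.0)
print(round(log_average_miss_rate(miss, fppi), 3))
```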
Overall, whether integrating the proposed method as an embeddable module into YOLOv6 or YOLOv8, or comparing it directly with methods such as Faster R-CNN [22], RetinaNet [23], CenterNet [24], or HRNet [25] (as shown in Table 5 above), the experimental results are consistently favorable. This indicates the versatility and effectiveness of the proposed method within various YOLO frameworks as well as among other state-of-the-art (SOTA) techniques.

Conclusions

In summary, this paper has proposed a novel multi-modal pedestrian detection algorithm based on YOLOv4, leveraging the complementary features between infrared mode images and visible-light mode images. The key contributions include designing a dual-stream parallel feature extraction backbone network, analyzing improvements in multi-scale feature fusion and prediction layer processing, determining the loss function calculation method, and introducing a fusion module using channel stacking.

Experimental analysis has validated the effectiveness of the proposed algorithm, particularly in low-light scenarios such as nighttime, showcasing superior performance compared to existing pedestrian detection algorithms. However, it is worth noting that the feature information inherent in visible-light mode images differs significantly from that in infrared mode images in terms of form, quantity, and distribution.

Moving forward, future research will delve deeper into this topic, particularly exploring the utilization of more targeted network architectures for feature extraction in infrared mode images. This endeavor aims to further enhance the performance and robustness of multi-modal pedestrian detection algorithms.

Future Directions

Looking ahead, this research can further enhance the accuracy and efficiency of multi-modal pedestrian detection and tracking algorithms, especially in dynamic and changing traffic environments. Additionally, research can extend to vehicle detection and other ITS components, like traffic flow analysis and accident prevention, for more comprehensive traffic management and safety. By integrating a wider array of data sources and utilizing advanced deep learning models, such as graph convolutional networks (GCNs) and spatiotemporal networks, future studies are poised to make significant advancements in all ITS aspects, supporting safer and more efficient urban transportation systems. This would mark a significant step forward in the development of smart urban environments and the pioneering technologies shaping the future of the Internet.

This expansion not only broadens the application range of current technologies but also injects new vitality and direction into the evolving landscape of the future internet and smart city infrastructure, fundamentally augmenting urban safety measures, transportation efficacy, and environmental sustainability. Moreover, the synergistic amalgamation of pedestrian detection technologies with other pivotal smart city systems remains a promising avenue for future exploration.
Figure 2. CSPNet structure.

Based on the above content, this chapter designs a dual-stream parallel mixed-mode feature extraction network based on CSPDarknet53 to realize the synchronous extraction of visible and infrared mode image features and facilitate the subsequent fusion of mixed-mode features [7]. In the dual-stream parallel network, the branch for visible-light mode feature extraction is CSPDarknet53-Visible, while the branch for infrared mode feature extraction is CSPDarknet53-Infrared. With the network's input image size set to 3 × 512 × 512, the backbone network outputs feature matrices of 256 × 64 × 64, 512 × 32 × 32, and 1024 × 16 × 16 at the positions of CSPBlock3, CSPBlock4, and CSPBlock5, respectively. The outputs of the visible-light mode are denoted V1, V2, and V3, and the outputs of the infrared mode are denoted I1, I2, and I3. The dual-stream parallel backbone extraction network architecture is shown in Figure 3 below.

Figure 3. Dual-stream parallel backbone feature extraction network architecture.

Figure 5. Improved PANet multi-scale feature fusion network based on FPN.

Figure 7. Processing flow of prediction layer.

Figure 8. Comparison of traditional single-mode pedestrian detection algorithm and multi-mode pedestrian detection algorithm.

Table 2. Related data of nine kinds of anchor frames used in the experiment.

Table 3. Comparison of detection performance of pedestrian detection algorithms.

Table 4. Comparison of detection performance of other YOLO methods.

Table 5. Comparison of detection performance of SOTA methods.
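Since the paragraph above is the only place where the dual-stream interface is spelled out, a small sketch may help. The backbone internals are stubbed with single strided convolutions purely for shape-checking (an assumption, not the real CSPDarknet53), and the channel-stack fusion is modelled as concatenation along the channel dimension, which is one plausible reading of the "channel stack fusion module".

```python
import torch
import torch.nn as nn

class BackboneStub(nn.Module):
    """Stand-in for CSPDarknet53: maps a 3x512x512 image to the three
    feature maps quoted in the text (CSPBlock3/4/5 outputs)."""
    def __init__(self):
        super().__init__()
        self.s8 = nn.Conv2d(3, 256, 8, stride=8)      # -> 256 x 64 x 64
        self.s16 = nn.Conv2d(256, 512, 2, stride=2)   # -> 512 x 32 x 32
        self.s32 = nn.Conv2d(512, 1024, 2, stride=2)  # -> 1024 x 16 x 16

    def forward(self, x):
        f3 = self.s8(x)
        f4 = self.s16(f3)
        f5 = self.s32(f4)
        return f3, f4, f5

visible_branch = BackboneStub()   # CSPDarknet53-Visible
infrared_branch = BackboneStub()  # CSPDarknet53-Infrared (separate weights)

rgb, ir = torch.randn(1, 3, 512, 512), torch.randn(1, 3, 512, 512)
V1, V2, V3 = visible_branch(rgb)
I1, I2, I3 = infrared_branch(ir)
# Channel-stack fusion at matching scales, as a reading of the CSE module
fused = [torch.cat(pair, dim=1) for pair in ((V1, I1), (V2, I2), (V3, I3))]
print([f.shape for f in fused])
```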
2024-06-02T15:08:04.561Z
2024-05-31T00:00:00.000
{ "year": 2024, "sha1": "365c48560384115b7b04222150ed9b54aa5de434", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-5903/16/6/194/pdf?version=1717141061", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e345cb87dc94177c98c29083973e5e23f48e0b53", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
256296049
pes2o/s2orc
v3-fos-license
New Insights into the Geochemical Processes Occurring on the Surface of Stuccoes Made of Slaked Lime Putty

The fresco technique performed with slaked lime putty as binding material has been well known since Antiquity. However, the geochemical processes that occur on the surface have generally been described as part of the carbonation process of the intonaco itself. When approaching this technique from experimental archaeology, it has been observed for the first time that during the execution period (from 0 to 20 h, approximately) the processes occurring on the surface of the stucco are different from those occurring inside. Furthermore, these processes lead to the formation of an epigenetic film of specific texture, stiffness and compactness. This study investigates the formation and evolution of this surface film using a series of slaked lime putty stucco test tubes. Samples were extracted at different intervals and subsequently analyzed by polarized optical microscopy, scanning electron microscopy, and Fourier transform infrared spectroscopy. Results indicate that the development of the film, composed of an amorphous gel-like stratum and a micro-crystalline stratum, occurs in parallel to the carbonation occurring inside the stucco. Moreover, this process does not respond to the classical geological processes of calcium carbonate formation. It was also observed that its presence slows down the carbonation in the underlying strata (intonaco, intonachino, arriccio, etc.) and that the surface becomes more crystalline over time. The identification of this film has implications for the field of the conservation-restoration of fresco paintings and lime-based wall paintings.

Introduction

Studies conducted to date on lime carbonation [1-20] note that the aerial carbonation of slaked lime putty does not occur continuously from the surface inwards but is instead a discontinuous process that follows the well-known Liesegang carbonation pattern [7]. This mechanism for the formation of calcium carbonates is produced by the diffusion of reagents via a colloidal phase that fills the pores and interparticle spaces in the mortar. Consequently, calcite crystals are precipitated in the form of rings at regular time intervals. These rings have been detected in slaked lime putties [7,17], but this phenomenon does not explain the mineral ontogenesis of the aqueous film that appears on the surface of the mortar.

Amongst the investigations that have looked into the mechanisms related to the aerial carbonation of lime, those focused on the reaction rate and mineral phase modifications of lime carbonation in real time are remarkable. Cizer et al. [21] proposed the use of thin layers of Ca(OH)2 water solution on glass slides for studying the behavior of the Ca(OH)2(ac)-CO2(atm) system over short timescales. According to these authors, in this process the transformation of portlandite (Ca(OH)2) into calcite (CaCO3) occurs in three phases: initially, there is a high uptake of CO2 on the surface that generates calcite precipitation but which quickly becomes passive due to the formation of amorphous CaCO3 (ACC) on the faces of the portlandite crystals. After that, a decrease in the rate of CO2 uptake is observed, together with a consequent reduction in CaCO3 formation. Finally, CO2 diffusion occurs through the created stratum, giving rise to a new, slower carbonation phase.
These studies also show that the carbonation rate of slaked lime is faster than that of powdered lime and that this is related to the morphology of the portlandite crystals. This is due to the growth of portlandite crystals soaked in water favoring the development of well-shaped crystals, especially in terms of the faces 100, 101 and 001, which are the most reactive due to their higher atomic density (Bravais law) [16,22]. After this seminal work, new experiments have been reported using a similar method, i.e., based on the study of the behavior of the Ca(OH)2(ac)-CO2(atm) system in Ca(OH)2 water solution droplets deposited on glass slides [23,24]. However, the results obtained [21,23,24] refer to a thin layer or droplet of slaked lime putty spread on a microscope slide. The goal of this study is to characterize the mechanisms by which the aqueous surface film is formed in real conditions on a stucco. We also describe the subsequent stages that take place in the Ca(OH)2(ac)-CO2(atm) system during the execution of a true stucco prepared with traditional raw materials. It is worth emphasizing the novelty of the analytical procedure for monitoring the behavior of the Ca(OH)2(ac)-CO2(atm) system in the first 24 h, which has been the subject of a patent [25]. This methodology has been extended to establish how the Ca(OH)2(ac)-CO2(atm) system evolves in the long term.

Materials and Methods

To study the formation and development of this aqueous surface film, slaked lime stucco test tubes were prepared, from which samples were extracted at different intervals. These samples were subsequently analyzed as described below.

Test Tubes

The slaked lime stucco specimens were produced using traditional materials and techniques. Table 1 presents the materials used, as well as a description. These were produced by following the fresco technique procedure, which requires prior preparation of a series of layers of slaked lime and salt-free sand or marble dust on which the final touch is performed (brushing with water). The mixture used for the innermost layers contains larger sand aggregates and in greater proportions than the subsequent layers. This proportion is progressively decreased until reaching the surface, where aggregate is no longer incorporated. Figure 1a presents a cross-section of the test tube, showing the succession of layers, namely arriccio, intermedium, intonaco, intonachino and the epigenetic superficial film. The proportions of slaked lime to sand in each layer and the aqueous film that is produced on the surface are also included.

Environmental Conditions

In order to generate inter-comparable data and to be able to assess how the aqueous surface film evolves in real time, specific conditions were established in terms of temperature (21 °C) and relative humidity (60-65%). These conditions were maintained from the moment of execution of the test tubes until the extraction of the samples.

Time Sequence

To determine the evolution of the stucco surface, the time period to be studied had to be established first. This involved a preliminary study of a test tube for which organoleptic observation enabled us to define five evolutionary phases on the surface (Table 2). During the first three phases (between 0 and 24 h), a more rapid evolution of the stucco surface was observed. Hence, sample extraction between 0 and 24 h was performed at intervals following the logarithm of 24 [7,26,27].
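Assuming that "following the logarithm of 24" means extraction times spaced evenly in log time between the first minutes and 24 h, a schedule resembling the one later reported for the FTIR series (1, 3, 6, 12, 30 min; 1, 4, 8, 16, 24 h) can be generated as follows; this reading of the phrase is an assumption.

```python
import numpy as np

# Log-spaced sampling times between 1 min and 24 h (in minutes),
# assuming equal steps in log time.
n_samples = 10
times_min = np.logspace(np.log10(1), np.log10(24 * 60), n_samples)
print(np.round(times_min, 1))
# Compare with the schedule reported later for the FTIR series:
# 1, 3, 6, 12, 30 min; 60, 240, 480, 960, 1440 min
```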
From the fourth phase (>24 h) onwards, sample extraction was performed every 24 h for 7 days, gradually reducing the number of samples as shown in Table 3. For the POM study, samples were extracted from the same test tube. For the FTIR and SEM, the "young" samples were removed from test tubes specially prepared for the occasion, while the "aged" ones were extracted from the same test tube used for POM. All test tubes were made following the same procedure and by using the same slaked lime and aggregate, so that they featured the same characteristics.

Extraction System

Samples were extracted and encapsulated in accordance with the protocol described by [25]. This procedure guarantees isolation of the samples from the CO2 in the air, thus interrupting their evolution and permitting observation and analysis of the different physical-chemical transformation processes. Moreover, this does not deform them and enables subtraction of the surface layer under study.

Instrumentation

To optically and morphologically characterize the evolution of the components of the aqueous surface film, a polarized optical microscope (POM) PM-2085 by Motic was used, equipped with four lenses (40x, 100x, 400x and 1000x), including crossed (XP) and parallel (PP) polarizers, two λ and 1/4 λ accessory plates and a Bertrand lens. It also featured an attached Moticam 1sp 1.3 MP digital camera for on-screen observation and image capture. Compositional and morphological characterization of smaller size particles (<0.5 µm) was performed with a scanning electron microscope (SEM) EVO® MA 10. Observations were performed under vacuum conditions with an accelerating voltage of 20 kV. Identification of compounds, determination of their relative concentrations and the degree of disorder of the lattice of the formed calcium carbonate particles were carried out using a Fourier transform infrared (FTIR) spectrometer VERTEX 70 (Bruker Optics). This instrument included a fast recovery deuterated triglycine sulphate (FRDTGS) temperature-stabilized coated detector and an MKII Golden Gate Attenuated Total Reflectance (ATR) accessory. Thirty-two scans were collected at a resolution of 4 cm−1. IR spectra from three different replicates were acquired at each time point to control and measure the advance of the carbonation process in the specimens prepared. Processing of the IR spectra was performed using the OPUS 7.2/IR software (Bruker Optik GmbH, Ettlingen, Germany). To discern the IR bands of calcite and amorphous calcium carbonate embedded in the ν3 stretching band of carbonate, we applied the curve-fitting method. The Levenberg-Marquardt algorithm, based on the least squares method, was employed. Between the two possible Gauss and Lorentz band shapes, the former was selected, as it provided the best results.

POM

The POM study was used to characterize the epigenetic film that forms on the stucco surface and observe its mineral ontogenesis. It was thus determined that this film evolves from an initial aqueous dispersion and transforms into two defined strata of a few microns: a shallower stratum made up of amorphous compounds (hereinafter "gel-like stratum") and an underlying microcrystalline stratum (hereinafter "microcrystalline stratum") (Figure 1b). Figure 2 shows the different particles identified in both strata. A detailed description of each stratum is presented below:

• Gel-like stratum:
  o External face (Figure 2a): It is exposed to the atmosphere.
Translucent, with a microgranular texture and low birefringence, it is composed of a nebula of submicrometer particles that, in association with each other, present an incipient anisotropy. It is observed from 3 min after the execution of the stucco. As it evolves, it increases both in thickness and birefringence, acquiring a soft golden hue over time. Formation of this nebula does not seem to depend on standard aerial carbonation processes, which require longer timeframes, as stated by [21].

SEM

While the POM analysis provided a considerable amount of information about the epigenetic surface film, it was sometimes challenging to identify the layer on which the observations were being made. The SEM study was used to accurately identify the different particles previously detected by POM and to chemically characterize the set of strata that constitutes the film (Figure 3; note that although the gel-like stratum is mainly composed of sub-micron particles of calcium carbonate (vide infra), some of the crystallochemical phases identified in the microcrystalline stratum may also be present in the outermost layer; letters correspond to the crystalline specimens and other particles shown in Figure 2).

The gel-like stratum is made up of calcium carbonate (vide infra), whose particle size is in the nanometric range and which forms the translucent nebula observed by POM (Figure 2a). Under the SEM, this nebula is initially characterized on its outer face by the presence of amoeboid particles arranged in a discontinuous manner. Between 12 and 24 h after preparing the test tube, the stratum acquires a gel-like, micro-porous appearance and is made up of flaky, interpenetrated particles. Columnar growths are also observed, but only on the inner face of the gel-like stratum, following the growth patterns of floating calcite described by [29], as shown in Figures 3 and 4. This gel-like stratum stabilizes physicochemically over time (>160 d).

Regarding the underlying microcrystalline stratum, the SEM study (Figure 4) has confirmed the typological variety previously established with POM and enabled better observation thereof. In addition to the crystals described in the previous section, the presence of acicular and lenticular crystals was identified inside the interstitial spaces. These largely develop on the faces of the euhedral and sub-euhedral crystals that arise from the disaggregation of sectoral crystals and contribute to the densification of the stratum.

FTIR Spectroscopy

The analysis with this technique enabled the identification of the compounds present in the studied epigenetic surface films and the characterization of their structural changes. IR absorption spectra of the epigenetic surface film were acquired along the drying process of the test tubes. The time program was as follows: 1, 3, 6, 12, 30 min; 1, 4, 8, 16, 24 h; and 160 days (3840 h). To characterize the IR bands occurring in the IR spectra, the experimental values of the band maxima were compared to those reported in the literature [23,30-38]. Table 4 shows a summary of the specific frequency values for the diagnostic vibration modes of calcium hydroxide (portlandite) and the different types of calcium carbonate reported in the literature, together with the values obtained in this study. Figure 5 shows the sequence of IR spectra acquired along the time interval in the study. This illustrates the evolution of the composition of the epigenetic surface film.
The progress in the carbonation reaction can be followed through the gradual reduction in the intensity of the hydroxyl bands of calcium hydroxide, together with the concomitant increase of the carbonate group bands of the newly formed calcium carbonate particles. The main changes are observed in the IR spectra shown in Figure 6a, acquired at 1 min, 24 h, and 160 days. The IR spectrum of the sample obtained at the beginning of the experiment is dominated by the absorption bands corresponding to the stretching (3640 and 3300 cm−1) and bending (1637 cm−1) vibrations of the hydroxyl bound and surface hydroxyl groups associated with calcium hydroxide in suspension. IR absorption bands of the carbonate group in calcite and its polymorphs occur in the three spectra shown. In particular, the ν3, ν2, and ν4 vibration bands, the three symmetry-allowed phonon modes of calcium carbonate, are used for diagnostic purposes. They are characterized more accurately in Figure 6b. The progressive increase over time of the broad ν3 stretching carbonate band with the maximum at 1420 cm−1 can be seen, as well as the sharp ν2 bending band at 872 cm−1 and the growth of the ν4 bending band at 712 cm−1, almost absent at the beginning (Figures 5 and 6). According to the data listed in Table 4, the experimental values found in these spectra correspond to calcite.

The asymmetry observed in the shape of the broad ν3 stretching carbonate band suggests that it is composed of at least two overlapped bands, ascribed to ACC with maxima at 1470-1490 cm−1 and to C + ACC with the maximum at 1420 cm−1. This hypothesis has been confirmed by applying the iterative curve-fitting method to the ν3 stretch band. Figure 7A(a-f) shows the original overlapped band, the sum spectra obtained iteratively, and the two bands that compose the theoretical sum band obtained in the IR spectra acquired in the first 12 h. The individual bands exhibit maxima in the ranges 1414-1423 and 1451-1496 cm−1, approaching those previously reported in the literature [36]. This confirms the presence of amorphous calcium carbonate (ACC) in the epigenetic surface film together with calcium carbonate. The excellent match of the experimental envelope band (blue line) and the theoretical sum band (red line) can be seen for all the samples, with values of the root mean square error in the range 0.004-0.02.

It is possible to study the role of the ACC in forming the epigenetic surface film if it is assumed that there is a direct correlation between the intensity (area or height) of the overlapped bands and the assigned component. A significant difference between the ACC and C + ACC band areas is observed in Figure 7A(a,b). The greater ACC band area in the first IR spectra indicates that this compound is prevalent at this early stage of epigenetic surface film formation. It is also observed that this band increases over time in the first 3 min. After this, the C + ACC band grows and surpasses the ACC band. This second step lasts 12 min (Figure 7A(c,d)). The increase of the C + ACC band goes on over time until 12 h (Figure 7A(e,f)). A schematic view of the evolution of the epigenetic surface film may be observed in Figure 7B. Information about the behavior of the epigenetic surface film in these initial stages can also be obtained by depicting the dependence of the ACC/(C + ACC) band-area ratio versus time.
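The band decomposition described above can be reproduced with a least-squares fit of two Gaussian components to the ν3 envelope. The sketch below uses SciPy's Levenberg-Marquardt solver, matching the method named in the Instrumentation section, applied to synthetic data; the initial-guess maxima near 1420 and 1480 cm−1 are taken from the text, while the synthetic amplitudes and widths are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian bands (C + ACC and ACC components)."""
    return (a1 * np.exp(-((x - mu1) ** 2) / (2 * s1 ** 2)) +
            a2 * np.exp(-((x - mu2) ** 2) / (2 * s2 ** 2)))

# Synthetic nu3 envelope standing in for a measured ATR-FTIR spectrum
x = np.linspace(1350, 1550, 400)                       # wavenumber / cm-1
y = two_gaussians(x, 1.0, 1420, 22, 0.5, 1470, 28)
y += np.random.default_rng(0).normal(0, 0.01, x.size)  # instrument noise

# Levenberg-Marquardt fit (method='lm'), initial maxima from the text
p0 = [1.0, 1420, 20, 0.5, 1480, 20]
popt, _ = curve_fit(two_gaussians, x, y, p0=p0, method="lm")
a1, mu1, s1, a2, mu2, s2 = popt

# Band areas (Gaussian area = amplitude * sigma * sqrt(2*pi))
area = lambda a, s: a * abs(s) * np.sqrt(2 * np.pi)
print(f"C+ACC max {mu1:.0f} cm-1, ACC max {mu2:.0f} cm-1, "
      f"ACC/(C+ACC) area ratio = {area(a2, s2) / area(a1, s1):.2f}")
```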
Figure 8 shows that the process of epigenetic surface film formation takes place in three steps:

Step 1: There is a rapid increase of the ACC/(C + ACC) band-area ratio, corresponding to the initial step in which the ACC nanoparticles are generated in the epigenetic surface film from the dense supersaturated solution close to the air phase. This process is fast, completing in ca. 3 min (see insert in Figure 8). At the same time as the ACC particles are generated in the film's core, ACC nanoparticles in contact with the air phase, and therefore with a high supply of CO2, form a thin upper gel-like stratum. This upper sublayer quickly becomes denser and starts to act as a barrier to the diffusion of CO2 from the atmosphere (see Figure 7B-step 1).

Step 2: The trend is inverted, and a decrease in the ACC/(C + ACC) band-area ratio is observed (Figure 8). This second step lasts up to 12 min. This behavior is associated with the beginning of the formation of calcite grains from the precursor ACC nanoparticles and the growth of calcite particle aggregates in the underlying microcrystalline stratum (see Figure 7B-step 2).

Step 3: From 12 min onwards, the crystalline calcite formation rate is drastically slowed down. This behavior lasts up to 160 days (see Figure 7B-step 3).

The evolution of the carbonation process can also be observed in the graph shown in Figure 9. It depicts the ratio ID/IA of the heights of the D band (ID), ascribed to the carbonate group, and the A band (IA), ascribed to the hydroxyl groups, versus time (see Table 5). There is a significant formation of calcium carbonate particles within the first hour (steps 1 and 2), followed by a reduction in the rate of the carbonation reaction up to 160 days (step 3).

A theoretical model proposed by some authors [39-42] enables a comparison of the degree of atomic disorder in the calcite lattice originated by geogenic, biogenic, and anthropogenic processes. This model is based on the distinct sensitivity of the ν2 (E) and ν4 (F) carbonate bending bands. The ν4 (F) carbonate bending band is more sensitive to the atomic ordering of the calcium carbonate particles. Therefore, the value of the ratio (IE/IF) at the maxima of the ν2 and ν4 bending bands is a suitable indicator of the structural changes in the particles during the maturation of the epigenetic film on the stucco surface. The time interval of the curve depicted in Figure 10 has been enlarged to 365 days, including values provided by [17]. The graph shows that the IE/IF ratio decreases over time. This behavior indicates that the crystalline order of the particles in the epigenetic superficial film increases over time. These changes are associated with the progressive transformation of the ACC nanoparticles into calcite crystals.

Discussion

Formation of solid calcium carbonate during stucco preparation is based on complex chemical equilibria and diffusion processes that are difficult to separate and investigate independently. Two well-known theories have been proposed to describe the mechanisms by which solid calcium carbonate is formed from a supersaturated solution in different biological, geological, or industrial environments. Figure 11 shows a schematic view of the different steps proposed by both the classical and non-classical theories. The classical theory establishes that precritical clusters, formed by the reversible addition of ions from the solution, are nucleated, becoming a post-critical nucleus.
This is only possible if specific energy and structural conditions that guarantee its stability are met [43]. Nucleation is a first-order phase transition, and nuclei form as a result of the stochastic density fluctuations of a homogeneous supersaturated aqueous solution [44]. After this, nuclei become crystals through a growth process. The starting point for the non-classical theory, on the other hand, is the formation of stable precritical clusters, composed of ions and other related species present in the solution, to produce a post-critical nucleus. The pre-nucleation clusters are nanometer-sized. Although thermodynamically stable, the high solubility of those species results in a weak phase boundary with the surrounding solution [45]. Those nuclei undergo an internal reconfiguration, resulting in more ordered structures that can become crystalline. Further growth of these protocrystals results in the final crystal [45]. The ACC particles play an essential role in the development of the polyamorph pathways used during shell formation or the stiffening of the exoskeletal cuticle [44].

In this context, the present study investigates and establishes how the characteristic surface film forms in carbonating traditional slaked lime mortars, with a twofold goal: first, to determine the structure of this epigenetic surface film (i.e., pre-nucleation versus post-nucleation phenomena); second, to explore different instrumental methodologies for investigating the formation and evolution of this epigenetic surface film and understanding why it exhibits different characteristics from the internal stucco core.

Sequential examination at different times by POM and SEM during the drying process of the stucco was carried out on samples of the epigenetic surface film. This methodology enabled the identification of a colloidal-like suspension, composed of spherulitic nanometric particles, in the gel-like stratum within the first minutes (see Figures 2a and 4a). These particles, which should be formed by aggregation from precursor nanoparticles, have been associated with a post-critical nucleus according to the non-classical theory. These species have been previously recognized as aggregates of ACC nanoparticles in Ca(OH)2 water solution drops on glass slides by SEM and FTIR [23]. The same features are also recognized in the present study in the typical ACC band in the carbonate ν3 stretch region. Interestingly, it is observed that a blueshift of this individual band takes place over time. Figure 12 shows the evolution of the ACC band maximum over 160 days. The first step is characterized by the rapid shift of the maximum towards higher wavenumbers within the initial 60 min. Then, this value increases slightly. These changes have been tentatively correlated with the evolution of the ACC particles during the formation of the gel-like stratum composing the outer part of the epigenetic surface film. During the so-called "sol-like phase," the higher CO2 content at the slaked lime suspension/atmosphere interface promotes the development of pre-nucleation clusters and their rapid transformation to type-ACC particles. These particles with spherulitic shapes were already identified (see Figure 2a) and are characterized by the lowest band maximum wavenumber. They have been associated with type I or hydrated ACC particles.
These particles are initially isolated in the solution due to solvation with water molecules, abundant at this moment, but progressively become closer due to the rapid emergence of new ACC particles. The increase of ACC particles results in the formation of clusters of type II anhydrous ACC particles, structurally reducing their water content. In this early stage, the ACC particles remain in solution, configuring a "sol-like phase." After 12 h, these micellar-sized particles evolve and behave like coalescent micelles, forming the "gel-like phase" and adopting a laminar morphology (Figure 4b,c) of vitreous appearance under POM. At this point, the upper gel-like stratum is formed.

In parallel, the IR spectra showed a relative decrease in ACC content and a concomitant increase in calcite content. According to the non-classical model, this is evidence of the occurrence of internal rearrangements in the ACC aggregates. These rearrangements give rise to nucleation of the crystalline phases and further growth of the crystalline particles. This increase of crystallinity over time is confirmed by the progressive increase of the intensity of the enveloped band at 1420 cm−1 and the decrease of the Iν2/Iν4 ratio observed over time (see Figure 10). The different profiles of the curves displayed in Figures 8, 10 and 12 suggest that the transition of ACC into calcite crystals and the formation of the film take place through different mechanisms; therefore, their dependence on time is different. Results also suggest that the carbonation and formation of calcite crystalline particles are progressively extended through the whole epigenetic surface film, where ACC particles are also identified (see Figures 2e-l and 4d-f).

Conclusions

Results obtained in the present investigation indicate that the surface of a slaked lime stucco evolves in three stages, in which the presence of two newly formed strata can clearly be differentiated on the surface: the surface gel-like stratum and the underlying microcrystalline stratum.

Diffusion towards the surface of the components that form the colloidal dispersion contained in the slaked lime (CO3^2−, Ca(OH)2 and Mg(OH)2) occurs in stage 1, due to the presence of the aqueous surface film applied at the end of stucco production. This colloidal dispersion quickly becomes supersaturated due to evaporation of water. After 3 min, the epigenetic surface film begins to develop, formed on the surface by calcium carbonates of low crystalline order that could be considered amorphous (gel-like stratum), and on the inside by a liquid inter-phase that is in contact with the surface of the stucco. At the same time, the first phases of carbonation occur inside the stucco.

Stage 2 is influenced by the presence of this gel-like stratum, which acts like a semipermeable membrane. Two faces can be distinguished in the gel-like stratum: one in contact with the air, with a gel-like texture and formed by particles of amorphous calcium carbonate, and the inner one, where hanging calcite structures develop. This last stratum conditions the carbonation process that occurs inside the stucco and favors the gradual supersaturation of the liquid inter-phase, thus giving rise to a stratum with a microcrystalline texture in which different crystalline "split growths" have been observed, such as heterogeneous nucleation processes and sectoral growths.
During this stage, the mechanical properties of the surface vary, with an increase in its hardness and a decrease in its plasticity being noted. All these phenomena occur during the first 20 h. This is the moment when burnishing and/or fresco painting techniques are performed.

Finally, stage 3 is characterized by the densification of both strata (the gel-like stratum and the microcrystalline stratum) and the slowing down of the physicochemical processes that occur inside (mainly carbonation) as pore size is reduced. The evolution of the underlying strata that make up the stucco (intonaco, intonachino, arriccio, etc.) is influenced by the presence of this epigenetic surface film, which slows down the standard carbonation reaction.

From a chemical point of view, the results obtained by FTIR confirm the formation of an epigenetic surface film, with a distinct composition, on the stucco layer. These results agree with those of the rest of the analytical techniques applied in the study. The position and changes in the intensity of the bands in the IR spectra indicate that the epigenetic surface film is mainly composed of ACC particles that progressively transform into calcite crystals. The maturation process of the epigenetic film on the stucco during the drying process of the intonachino is described by three different and complementary approaches: the study of the profiles of the ID/IA and IE/IF ratios over time, the curve-fitting method applied to the enveloped ν3 stretch band, and the study of the dependence of the resulting band-intensity ratios over time. These changes may be due to the evolution of the gel-like stratum itself or to the evolution of the inter-phase between the gel-like stratum and the stucco surface, which changes from a liquid phase to a solid microcrystalline stratum.
2023-01-27T16:03:22.210Z
2023-01-24T00:00:00.000
{ "year": 2023, "sha1": "ebe34757c7a847c2a9294912051d5e44ff97ed8e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4352/13/2/219/pdf?version=1674616028", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e54e379bf380ce70d01b4d44fbb6e246e08c6b99", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
260864543
pes2o/s2orc
v3-fos-license
The effect of metal remediation on the virulence and antimicrobial resistance of the opportunistic pathogen Pseudomonas aeruginosa

Metal contamination poses both a direct threat to human health as well as an indirect threat through its potential to affect bacterial pathogens. Metals can not only co-select for antibiotic resistance, but also might affect pathogen virulence via increased siderophore production. Siderophores are extracellular compounds released to increase ferric iron uptake, a common limiting factor for pathogen growth within hosts, making them an important virulence factor. However, siderophores can also be positively selected for to detoxify non-ferrous metals, and consequently metal stress can potentially increase bacterial virulence. Anthropogenic methods to remediate environmental metal contamination commonly involve amendment with lime-containing materials, but whether this reduces in situ co-selection for antibiotic resistance and virulence remains unknown.
Here, using microcosms containing metal-contaminated river water and sediment, we experimentally test whether metal remediation by liming reduces co-selection for these traits in the opportunistic pathogen Pseudomonas aeruginosa embedded within a natural microbial community. To test for the effects of environmental structure, which can impact siderophore production, microcosms were incubated under either static or shaking conditions. Evolved P. aeruginosa populations had greater fitness in the presence of toxic concentrations of copper than the ancestral strain, but this effect was reduced in the limed treatments. Evolved P. aeruginosa populations showed increased resistance to the clinically-relevant antibiotics apramycin, cefotaxime, and trimethoprim, regardless of lime addition or environmental structure. Although we found virulence to be significantly associated with siderophore production, neither virulence nor siderophore production significantly differed between the four treatments. We therefore demonstrate that although remediation via liming reduced the strength of selection for metal resistance mechanisms, it did not mitigate metal-imposed selection for antibiotic resistance or virulence in P. aeruginosa. Consequently, metal-contaminated environments may select for antibiotic resistance and virulence traits even when treated with lime.

A key microbial trait likely to change after liming is the production of siderophore compounds (18). The canonical function of siderophores is to aid iron (Fe) sequestration from the extra-cellular environment (25,26). Fe is vital for microbial growth as a cofactor for a number of essential enzymes (27,28), but is most commonly present as insoluble Fe3+ and therefore is of limited bioavailability, especially at near-neutral pH (27-32). Siderophores are released by cells where they form extracellular complexes with Fe3+; these are then taken up by selective outer-membrane transport proteins before Fe3+ is reduced to bioavailable Fe2+ and the siderophore made available for reuse (33). Siderophores are important virulence factors as they allow pathogens to grow within hosts that actively withhold iron (34,35). Apart from iron, siderophores can also chelate toxic metal ions, but these complexes cannot re-enter the cell due to the selectivity of the outer-membrane transport proteins (28,29). This means siderophore production can be selected for as a detoxifying method in the presence of bioavailable toxic metals (26,29,36). Consequently, toxic metal concentrations can select for greater virulence by selecting for increased siderophore production (37). Lime remediation of metal-contaminated environments thus could potentially select either for the upregulation of siderophore production, when it predominantly results in decreased bioavailability of Fe, or for the downregulation of siderophore production, when it predominantly results in lower metal toxicity, with concomitant expected changes in virulence. Previous work has shown siderophore production to decrease as a consequence of liming at the level of whole microbial communities (18), but whether this also occurs in environmental pathogens that rely on siderophore-mediated iron uptake remains untested.
It is well established that some mechanisms that bacteria use to resist metal contamination also confer resistance to antibiotics (38). This can occur through cross-resistance, when a single mechanism provides resistance to both types of stressors (e.g. efflux pumps (38-44)); through co-resistance, when metal and antibiotic resistance genes are located on the same genetic element (45, 46); or through co-regulation, when transcriptional and translational responses to both stressors are linked (38, 43, 47-49). However, to our knowledge, it remains untested whether metal remediation could decrease such co-selection for antibiotic resistance.

In this study, we use the opportunistic pathogen Pseudomonas aeruginosa to test whether liming alters virulence by influencing siderophore production, and whether it decreases co-selection by metals for antibiotic resistance. We applied an experimental evolution approach, utilising microcosms containing water and sediment and the resident microbial community from a river heavily contaminated with historical mine waste (50, 51). We embedded P. aeruginosa within this natural microbial community and quantified antibiotic resistance, siderophore production and virulence in this focal species after 14 days. P. aeruginosa is responsible for a significant proportion of nosocomial infections, particularly those in intensive care units and immunocompromised patients (52). This species is of significant clinical importance as it is resistant to many treatments, both intrinsically and due to its ability to rapidly evolve resistance (53). Outside of the clinical setting, P. aeruginosa is commonly found in soil and water (54). The production of siderophores by P. aeruginosa is well studied as a virulence factor, metal resistance mechanism and public good (25, 27, 29, 55-57). Furthermore, the growing interest in its use, along with other siderophore-producing species, to assist phytoremediation of metals using plants (28, 58) makes it an ideal focal species for this study.

The insect infection model Galleria mellonella (Greater Wax Moth larvae), a low-cost and ethically expedient alternative to mammalian virulence screens (59), is used here to quantify P. aeruginosa virulence (60). We quantified total siderophore production using a CAS assay (61) and pyoverdine production, the main siderophore produced by P. aeruginosa (62), by measuring fluorescence; and we tested whether these are correlated with virulence. Extracellular siderophore-metal complexes offer a fitness advantage not only to the producer but also to neighbouring cells, whether these are fellow producers or not (63-65). Non-siderophore-producing 'cheats' can gain a selective advantage as they benefit from siderophore production but do not carry the cost of production (25, 30, 31, 66). Cheat fitness is increased in spatially unstructured environments because the greater mixing increases the opportunity to take up siderophore-iron complexes and benefit from siderophores detoxifying the area (65). To take into account the effect of spatial structure on siderophore production, and consequently virulence, we performed our experiments in both static and shaken microcosms. We tested whether the addition of lime or a change in spatial structure affects P. aeruginosa resistance to the antibiotics apramycin, cefotaxime and trimethoprim.
Both apramycin and cefotaxime have been declared 'critically important' for human medicine, and trimethoprim 'highly important', by the WHO (67). Moreover, apramycin has been shown to be effective against highly drug-resistant strains of P. aeruginosa.

The river sampled is heavily contaminated with historical mine waste (50, 51) and contains high concentrations of non-ferrous metals. Sediment was collected using a sterile spatula and water was collected by filling a sterile 1000 mL Duran bottle (Schott Duran, Munich, Germany). Sediment (3 g ± 0.1 g) and river water (6 mL) were added to each microcosm (25 mL; Kartell, Noviglio, Italy). The combined water and sediment pH was measured using a Jenway 3510 pH meter (Jenway, Essex, UK).

Experimental design

Two treatments, liming (lime amendment/no amendment) and spatial structure (shaken/unshaken), were carried out in a full factorial design (Fig. 1); six replicates were used per unique treatment combination, resulting in a total of 24 microcosms. All microcosms were incubated at 20 °C. To raise the pH from 5.8 to ~7.0, to represent a metal remediation scenario, 30 mg (± 1.0 mg) of undissolved hydrated lime (Verve Garden lime, Eastleigh, UK (18)) was added to each relevant microcosm, which was then left for 14 days to equilibrate. To observe differences between structured and non-structured environments, microcosms were either kept static or continuously shaken at 210 rpm (Stuart orbital incubator S1600, Staffordshire, UK). Shaking began on day 14 and ended on day 28 (Fig. 1).

Figure 1. Experimental design. Microcosms were destructively sampled on day 28. Six replicates were used for all treatments (24 microcosms in total).

On day 14, 30 µL (7.3 × 10^8 colony-forming units; cfu) of Pseudomonas aeruginosa (PAO1 R lacZ; (69)) was added to each microcosm. This lab strain is both lacZ-marked and gentamicin-resistant, allowing it to be easily distinguished from the rest of the community on agar containing X-gal (5-bromo-4-chloro-3-indolyl-β-D-galactopyranoside; 100 µg/L; VWR Chemicals) and gentamicin (30 µg/mL; Sigma). P. aeruginosa was grown overnight in shaking microcosms containing 6 mL of King's medium B (KB; 10 g glycerol, 20 g proteose peptone no. 3, 1.5 g K2HPO4, 1.5 g MgSO4 per litre). To remove residual nutrients, cultures were centrifuged at 3500 rpm (1233 × g) for 30 minutes, after which the supernatant was decanted and the pellet resuspended in half the volume of M9 salt buffer (3 g KH2PO4, 6 g Na2HPO4, 5 g NaCl per litre), followed by plating on KB agar to calculate the inoculation density. On day 28, all microcosms were destructively sampled by adding sterile glass beads and 12 mL of M9 buffer and vortexing for one minute. Samples were then aliquoted and stored in glycerol (25% final volume) at -80 °C.

Iron analysis (ferrozine assay)

To determine if liming affected Fe speciation and therefore bioavailability, a ferrozine assay was used to measure relative concentrations of Fe2+ and total bioavailable iron (70, 71). The first step of this assay quantifies Fe2+, which is easily obtainable by bacteria and so does not require siderophores. The second step quantifies both Fe2+ and Fe3+ and therefore gives a measure of total bioavailable iron, including that which requires scavenging mechanisms such as siderophores. By dividing the first measurement by the second it is possible to estimate the proportion of iron in each treatment that is of relatively high bioavailability to P. aeruginosa (70, 71).
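As a minimal sketch of the arithmetic behind this estimate (the absorbance readings and standard-curve slope below are made-up illustrative values, not the study's data; the assay procedure itself follows next):

```python
# Hypothetical sketch of the ferrozine-assay bookkeeping described above.
# Absorbances and the standard-curve slope are made-up example values.

def absorbance_to_fe(a562, slope=0.05):
    """Convert absorbance at 562 nm to an Fe concentration (mM)
    via a linear standard curve fitted to FeSO4.7H2O standards."""
    return a562 / slope

a_step1 = 0.21   # step 1: Fe2+ only
a_step2 = 0.26   # step 2: Fe2+ + Fe3+ (after hydroxylamine reduction)

fe2 = absorbance_to_fe(a_step1)
fe_total = absorbance_to_fe(a_step2)

# Proportion of total bioavailable iron that is readily available Fe2+
prop_fe2 = fe2 / fe_total
print(f"Fe2+ fraction: {prop_fe2:.2f}")   # here 0.81
```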
The first measurement is obtained by digesting 100 µL of fresh sample (n = 3) in 4.9 mL of 0.5 M hydrochloric acid for 1 hour, before 50 µL was mixed with 2.45 mL of ferrozine solution (1 g ferrozine, 11.96 g 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid per litre; adjusted to pH 7) in a cuvette (n = 3 per replicate). This was left to stand for exactly one minute before absorbance at 562 nm was measured using a spectrophotometer (Jenway 7315, Essex, UK). To quantify total bioavailable Fe (step 2), 200 µL of 6.25 M hydroxylamine hydrochloride was added to the digested samples and left to stand for another hour. This was then added to ferrozine solution in cuvettes and measured as before. Standards of known concentrations of FeSO4·7H2O were measured to allow conversion of absorbance to Fe concentrations.

Copper growth assay

To confirm that metal concentrations in our river water and sediment samples were sufficiently high to select for metal resistance mechanisms, and to test whether liming impacted this selection, we used a copper growth assay. Specifically, we added 20 µL of either the ancestral P. aeruginosa strain or defrosted samples of the evolved populations to a 96-well plate well containing 180 µL of plain Iso-Sensitest broth (Oxoid), and 20 µL to a well containing 180 µL of Iso-Sensitest broth with 1 g/L of copper sulphate (CuSO4; Alfa Aesar, Massachusetts, United States). The optical density (OD600) was then read every 10 minutes for 18 hours using a Biotek Synergy 2 spectrophotometer. We used 1 g/L of copper sulphate as this equates to a copper concentration (6.26 mM) previously found in highly polluted environments (72, 73).

Siderophore (CAS) assay

Total siderophore production was quantified using the Chrome Azurol S (CAS) assay (74). Samples were plated onto tryptic soy agar (TSA; Oxoid) supplemented with nystatin (Sigma; 20 μg/mL), to suppress fungal growth, and X-gal. After 48 hours, P. aeruginosa colonies were counted to quantify density, before 24 colonies per replicate were randomly picked using sterile toothpicks. Selected colonies were resuspended in 1 mL of KB media in a deep 96-well plate and grown overnight at 28 °C.

Virulence assay

The insect infection model Galleria mellonella was used to quantify P. aeruginosa virulence (59, 60). Defrosted freezer stocks containing the whole sample microbiome were diluted 100-fold using M9 salt buffer, before 10 µL was injected into twenty final-instar larvae per replicate using a 50 µL syringe (Hamilton, Nevada, USA). Injected larvae were incubated at 37 °C and mortality was monitored hourly from 13 hours post-injection for 12 hours, with a final check at 42 hours. Larvae were classed as dead when mechanical stimulation of the head caused no response (60). M9-injected and non-injected controls were used to confirm that mortality was not due to injection trauma or background G. mellonella mortality; >10% control death was the threshold for re-injecting (no occurrences). Prior to assays on microcosms containing P. aeruginosa, we confirmed that the natural microbial community caused zero mortality by injecting replicates not inoculated with P. aeruginosa as described above.

Antibiotic resistance assay

To test the evolved resistance of P. aeruginosa to the clinically relevant antibiotics apramycin, cefotaxime and trimethoprim, we used the same P.
aeruginosa colonies isolated for the siderophore analysis. We first determined the minimum inhibitory concentration of the three antibiotics for our ancestral strain, by growing the ancestral strain for 24 hours (as described above) and plating it on TSA containing a range of concentrations of the antibiotics, increasing in 10 µg/mL increments from 0 to 60 µg/mL. The minimum inhibitory concentrations were found to be 12 µg/mL, 30 µg/mL and 40 µg/mL for apramycin, cefotaxime and trimethoprim, respectively. Next, the individual evolved clones were defrosted before 2 µL of each was plated onto either antibiotic-free TSA or TSA containing one of the three antibiotics at the concentrations reported below (apramycin 15 µg/mL, cefotaxime 50 µg/mL, trimethoprim 60 µg/mL).

Statistical analysis

The effect of liming and shaking, plus their interaction, on the final pH, the density of P. aeruginosa (log10(cfu/mL)), and the proportion of total bioavailable iron (Fe2+ + Fe3+) that was Fe2+ (i.e. Fe2+ / (Fe2+ + Fe3+)) was tested using linear models with liming and shaking as explanatory variables. In general, model reduction was carried out by sequentially removing terms from the full model and comparing model fits using F-tests; we report parameter estimates of the most parsimonious model. The effect of pH on the density of P. aeruginosa populations was tested using a linear model with density (cfu/mL) log10-transformed.

To test whether evolved samples had greater resistance to copper than the ancestral strain, we first calculated the relative fitness, w, of each population by dividing its maximum optical density after 18 hours when grown with copper (ODmaxC) by its maximum optical density when grown without copper (ODmaxWC), i.e. w = ODmaxC / ODmaxWC. We then carried out a one-way ANOVA with w as the response variable and treatment (including the ancestor) as the explanatory variable. Secondly, we carried out a Dunnett's test, using the 'DescTools' R package (76), to test whether each treatment differed from the ancestor. Finally, we tested the effect of liming and shaking on the metal resistance of the final populations in a linear model, with log(w) as the response variable, and liming, shaking and their interaction as the explanatory variables. In all tests w was log-transformed to normalise the residuals.

To test liming and shaking effects on total siderophore and pyoverdine production, linear mixed effects models (LMEM) were carried out using the 'lme4' package (77), with liming and shaking as explanatory variables and random intercepts fitted for each replicate to control for multiple clones being sampled from the same microcosm. For these LMEMs, we used the 'DHARMa' package (78) to check residual behaviour, after which the most parsimonious model was arrived at by comparing models with and without the liming-shaking interaction using χ²-tests. Two samples had pyoverdine values much lower than the rest, so a Grubbs test ('outliers' package; (79)) was used to check if they were significant outliers; they were, and were therefore removed from this and all further models to improve model fit. To test the association between copper resistance and both total siderophore and pyoverdine production, we carried out two linear models with log(w) as the dependent variable and either mean total siderophore production per microcosm or mean pyoverdine production per microcosm as the explanatory variable.

Virulence was analysed in three separate models, described below.
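Before turning to those models, a brief illustration of the relative-fitness calculation just described (the OD600 readings below are made-up numbers, not the study's data):

```python
# Sketch of the relative-fitness calculation described above:
# w = max OD600 with copper / max OD600 without copper,
# then log-transformed to normalise residuals.
import numpy as np

od_with_copper = np.array([0.42, 0.51, 0.47, 0.55, 0.49, 0.50])     # ODmaxC, six replicates
od_without_copper = np.array([0.88, 0.92, 0.85, 0.95, 0.90, 0.91])  # ODmaxWC

w = od_with_copper / od_without_copper   # relative fitness per replicate
log_w = np.log(w)                        # response variable for the linear model

print(w.round(2), log_w.round(2))
```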
First, we tested whether larvae that died before 42 hours had been injected with samples containing more siderophores and pyoverdine than those that remained alive after 42 hours. This was done by carrying out two separate binomial generalised linear mixed models (GLMM) using the 'lme4' package (77), with the number of G. mellonella dead versus alive as the binomial response variable, and either the production of total siderophores or of pyoverdine as the explanatory variable. In this model pyoverdine production was log10-transformed to normalise residuals. Secondly, we tested whether the mean time it took deceased larvae (20 per replicate) to die was associated with total siderophore and pyoverdine production (both values taken from the mean of 24 clones), using a linear model. Finally, we tested whether virulence differed between treatments. To do this, survival curves were fitted using Bayesian regression in the R package 'rstanarm' (80), and the package 'tidybayes' (81) was used to estimate parameters. A proportional hazards model with an M-splines baseline hazard was fitted, with liming, shaking and their interaction as fixed effects. We additionally included random intercepts for each sample to control for multiple (20) G. mellonella being inoculated with the same sample. Models used three chains with uninformative priors and were run for 3000 iterations. Model convergence was assessed using Rhat values (all values were 1), before we manually checked chain mixing.

The proportion of apramycin, cefotaxime and trimethoprim resistance in each treatment (number of resistant colonies out of 24 in total) was compared using Kruskal-Wallis non-parametric tests, with resistance proportion as the response variable and treatment as the explanatory variable. All analyses were carried out in R version 3.

Results and discussion

Here, we tested whether liming of metal-contaminated aquatic environments decreases co-selection for virulence and antibiotic resistance in the opportunistic pathogen P. aeruginosa. To do this, we evolved P. aeruginosa with or without lime in microcosms containing a mixture of metal-contaminated river water and sediment in the presence of the natural microbial community. We employed both shaking and static microcosms, to represent turbulent and stagnant aquatic environments, in order to test whether liming effects were dependent on environmental structure (Fig. 1).

As expected, liming significantly decreased the acidity of sediment and water from the initial pH of 5.8. However, the extent of this effect was significantly greater in the shaking treatments (liming-shaking interaction: F1,20 = 23.1, p < 0.001; Fig. 2), likely due to increased mixing of lime and oxygen throughout the microcosms. The shaken-limed treatment reached a pH of 7.2 (± 0.11 SD), whereas the static-limed treatment reached a pH of 6.7 (± 0.25 SD). Both non-limed treatments had a final pH of 5.7 (± 0.19 SD). As pH is often a good predictor of iron speciation (83), we tested how the treatments affected the relative proportions of Fe2+ and Fe3+.
We found that the proportion of more bioavailable Fe2+ did not significantly differ as a result of liming, shaking, or their interaction (lime main effect: F1,9 = 3.47, p = 0.10; shaking main effect: F1,9 = 3.00, p = 0.12; lime-shaking interaction: F1,8 = 0.73, p = 0.42; Fig. 2), with Fe2+ making up 82% of the total available iron on average across the treatments. Given that iron speciation remained similar in all treatments, this indicates that the redox potential within the microcosms did not change to become more anaerobic under static conditions (83). Hence iron bioavailability was not significantly influenced by the different experimental conditions, and iron limitation was therefore unlikely to represent a significant driver of siderophore production.

Figure 2. The final pH of microcosms containing river water and sediment after 28 days of incubation. We used a factorial design with limed and shaken treatments, each with six replicates (each represented by a white circle). The starting pH was 5.8. The significant effect of liming on pH (p < 0.001) was increased through an interaction with shaking (p < 0.001).

P. aeruginosa populations incubated without lime had greater tolerance to copper

In order to test whether our river water and sediment samples selected for greater metal resistance, we incubated the ancestral P. aeruginosa strain and the final populations in a medium containing a high concentration of copper (1 g/L of copper sulphate). We then compared the maximum optical density of each culture relative to that of cultures grown without copper (w). Confirming that our samples contained toxic metals, the ancestral strain had a lower relative fitness (w) when grown with copper than all final populations (Dunnett's test: p ≤ 0.013 for all contrasts; Fig. 3). Moreover, when comparing the effect of the different treatments on w, we found populations from the non-limed treatments to have greater relative fitness in a toxic copper environment than those from the limed treatments (liming main effect: F1,21 = 4.44, p = 0.047; Fig. 3).

Neither liming nor shaking affected P. aeruginosa density or siderophore production

Next, we tested the treatment effects on P. aeruginosa density and siderophore production. The final density of P. aeruginosa after two weeks of evolution varied substantially between samples (1.1 × 10^6 ± 1.6 × 10^6 SD cfu/mL), but was not significantly affected by liming, shaking, or their interaction (liming main effect: F1,21 = 1.96, p = 0.18; shaking main effect: F1,21 = 2.77, p = 0.11; liming-shaking interaction: F1,20 = 0.70, p = 0.41). There was also no significant effect of pH on P. aeruginosa density (F1,22 = 0.97, p = 0.36). Although pH can affect bacterial density (84), our finding of no effect is consistent with previous results demonstrating that P. aeruginosa densities are similar across a pH range equivalent to that used here (85).

To test whether liming and shaking affected siderophore production, both total siderophore production and the production of pyoverdine, the primary siderophore produced by P. aeruginosa (56), were measured for 24 clones per replicate (24 × 24 clones). Quantifying pyoverdine production in addition to total siderophores is important, as it is a key virulence factor in P. aeruginosa but its production does not necessarily correlate with that of other siderophores, such as pyochelin (62).
We found that neither liming, shaking, nor their interaction significantly affected mean total siderophore production or pyoverdine production. However, we note that there was a large variation in production between the 24 clones used to represent each microcosm (mean production: total siderophores = 4.23; pyoverdine = 766; per replicate: total siderophores = 1.94; pyoverdine = 69.3), and that two pyoverdine values were significant outliers and were consequently removed from all further analysis in order for model assumptions to be met (these were one from the non-limed shaken treatment (pyoverdine production = 26.9, p < 0.001) and one from the limed-static treatment (pyoverdine production = 174, p < 0.001), both lower than the pre-removal mean pyoverdine production of 710.6 and median of 789.8).

That siderophore production, which is regulated by iron availability and the presence of toxic metals, did not significantly differ between treatments concurs with the non-significant differences in Fe2+ availability between treatments. However, it is surprising that siderophore production was not reduced by liming, given that P. aeruginosa populations from the limed treatments were less tolerant to toxic copper. To explore this further, we tested whether either total siderophore or pyoverdine production was associated with copper tolerance, and found that neither was (total siderophores: F1,20 = 0.013, p = 0.91; pyoverdine: F1,20 = 0.294, p = 0.59). This suggests that other metal resistance mechanisms, such as decreased outer membrane permeability and increased induction of ATPase efflux transporters, could be responsible for the increased copper tolerance of the evolved populations (86). Our finding of no significant differences in siderophore production contrasts with that of Hesse and co-workers (18), who found that the addition of lime to soils collected in the near vicinity of our locality significantly reduced community-wide siderophore production. This difference is most likely due to shifts in siderophore production driven by changes in community composition, with liming selecting for non-producing isolates (18), whereas here we solely focused on siderophore production by P. aeruginosa. This suggests that although liming reduces community-wide siderophore production in metal-contaminated acidic soils, this effect may not be seen in specific species. Interestingly, P. aeruginosa has been proposed as a suitable siderophore-producing bacterium for use in phytoremediation, which relies on the combined use of microorganisms and plants to aid toxic metal remediation (87, 88). It has been proposed that liming, by reducing siderophore production, may hinder phytoremediation (18), as metal uptake by plants is often increased when metals are bound to bacterial siderophores. Given that no significant effect of liming on siderophore production by P. aeruginosa was observed, we suggest that liming and P. aeruginosa-assisted phytoremediation could be used simultaneously without compromise.

Virulence did not differ between treatments, but was positively associated with siderophore production

As we found a large variation in siderophore production, which is a known virulence factor in P. aeruginosa (64), we tested whether virulence, quantified using the G.
mellonella infection assay, differed as a consequence of pyoverdine production, total siderophore production or treatment. Firstly, we tested whether G. mellonella larvae alive at the final time check (42 hours) had been injected with populations producing less total siderophore and pyoverdine than larvae that died before this point, and found that they had (total siderophores: χ² = 6.11, d.f. = 1, p = 0.013; pyoverdine: χ² = 6.98, d.f. = 1, p = 0.004). Next, we tested whether increased siderophore and pyoverdine production resulted in increased virulence. We found a significant positive association between virulence (mean time to death per population) and both total siderophore and pyoverdine production (total siderophores: F1,22 = 8.9, p = 0.007).

Finally, virulence was compared between treatments using survival curves (Fig. 4C). Virulence did not significantly differ as a function of treatment, with the credible intervals for liming, shaking and their interaction all crossing 1. The absence of a significant treatment effect on virulence is consistent with the finding that the treatments did not significantly affect siderophore production. Finding virulence not to be significantly different between structured (static) and unstructured (shaking) environments contrasts with findings by Granato and co-workers (92), who found that pyoverdine-mediated virulence in P. aeruginosa was greater when grown in solid media than in liquid. The lack of changes detected in siderophore production and virulence between the experimental treatments might be due to the more subtle (and arguably more realistic) conditions under which spatial structure was varied in our study, as well as the presence of a resident microbial community.

Figure 4. Virulence of P. aeruginosa populations evolved in metal-contaminated aquatic communities as a function of (A) mean total siderophore production and (B) mean pyoverdine production. Virulence was quantified using the Galleria mellonella infection model (n = 20 per replicate) and is given as the mean time to death. Pyoverdine and total siderophore production were measured in standardised fluorescence units per OD600. Individual circles show the mean production by 24 clones from each replicate. Colours and shapes represent different treatments: grey and □ = static, no lime; blue and + = static, limed; black and △ = shaken, no lime; and red and ✕ = shaken, limed. Panel C shows the change in survival probability of larvae over time within each treatment; these do not significantly differ from one another. Shaded areas represent 95% confidence intervals.

Antibiotic resistance evolution

As metal pollution has been shown to co-select for antimicrobial resistance (38), we tested whether lime addition altered P. aeruginosa resistance to the clinically relevant antibiotics apramycin (15 µg/mL), cefotaxime (50 µg/mL) and trimethoprim (60 µg/mL) after evolution in metal-contaminated river sediments. Increased resistance was observed in all treatments (Fig. 5), with neither lime nor shaking affecting resistance to any of the antibiotics tested (apramycin: χ² = 2.35, p = 0.50, d.f. = 3; cefotaxime: χ² = 2.98, p = 0.40, d.f. = 3; trimethoprim: χ² = 5.25, p = 0.16, d.f. = 3; Fig. 5).
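A minimal sketch of this kind of treatment comparison (the resistant-colony counts below are hypothetical, not the study's data; scipy is assumed to be available):

```python
# Sketch of the Kruskal-Wallis comparison described above: the number of
# resistant colonies (out of 24) per replicate, compared across the four
# treatments. All counts are made-up illustrative values.
from scipy.stats import kruskal

static_nolime = [20, 22, 24, 21, 23, 24]
static_limed  = [24, 23, 22, 24, 21, 24]
shaken_nolime = [24, 24, 10, 23, 24, 22]   # one low-resistance replicate
shaken_limed  = [23, 24, 24, 22, 24, 23]

h, p = kruskal(static_nolime, static_limed, shaken_nolime, shaken_limed)
print(f"H = {h:.2f}, p = {p:.2f}")   # no significant treatment effect expected
```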
Of note, one sample from the shaken, non-limed treatment consistently had the lowest resistance to all three antibiotics, with no isolates from this population being resistant to cefotaxime or trimethoprim, and fewer than 50% being resistant to apramycin. Our observation of rapid evolution of antibiotic resistance in the other replicates and treatments supports existing evidence that metal contamination can pose an important co-selective pressure for resistance (44, 49, 93), including in P. aeruginosa (42). That resistance did not differ significantly between treatments in our experiment demonstrates that liming to pH ~7 is not effective at remediating this co-selective effect, and neither was the loss of spatial structure via shaking. A plausible reason for this is that liming reduces metal bioavailability by precipitating ions from solution into the solid phase. This would mean that cells in the sediment (the vast majority of the population) would still be exposed to metals which, although at a lower bioavailability, can still be a cause of co-selection (48). This is supported by P. aeruginosa evolving greater copper tolerance than the ancestral strain even in the limed treatments. Although we did not determine the mechanistic basis of co-selection, we note that cross-resistance, co-resistance and co-regulation mechanisms have all been reported for Pseudomonads, and the altering of cellular targets is a mechanism commonly used by P. aeruginosa to tolerate metals, trimethoprim and beta-lactam antibiotics such as cefotaxime (42, 94). We are aware of a single study testing the effects of liming on antimicrobial resistance (Ramos, 1987). This study found that liming decreased the susceptibility of Rhizobium species from soil to multiple antibiotics, and hypothesised that this was due to a greater production of natural antibiotics at near-neutral pH selecting for resistance. Although we note that increasing soil pH will generally decrease the bioavailability of any metals present, the authors stated that metal effects would not be operative in their study, suggesting no metal contamination was present.

Figure 5. Resistance of evolved P. aeruginosa clones to the apramycin (15 µg/mL), cefotaxime (50 µg/mL) and trimethoprim (60 µg/mL) antibiotics. Clones were tested after two weeks of evolution in microcosms containing metal-contaminated river water and sediment while embedded in the resident microbial community. Circles show individual replicates; those with a red outline are from the same sample, which is the least resistant to all three antibiotics.

Conclusion

P. aeruginosa populations evolved metal resistance after two weeks, and liming reduced this effect. However, liming and spatial structure (shaking) were observed to have little effect on P. aeruginosa pathogenic traits. Despite finding a positive association between siderophore production and virulence, neither siderophore production nor virulence systematically differed between treatments, suggesting that liming does not alter the effect of metals on siderophore-mediated virulence in P. aeruginosa. This finding also implies that concurrent use of liming and P. aeruginosa-assisted phytoremediation techniques is possible in scenarios where this bacterium can persist in a natural community. Moreover, we found that P. aeruginosa rapidly evolved resistance to three clinically relevant antibiotics regardless of treatment. We therefore
We therefore 563 show that a common metal remediation method did not reduce metal pollution-based 564 co-selection for virulence or antibiotic resistance. Importantly, these findings further 565
2022-09-23T13:29:23.917Z
2022-09-20T00:00:00.000
{ "year": 2022, "sha1": "918825b053419d8eeab9751c1cd349bea586c2db", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2022/09/22/2022.09.20.508257.full.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "918825b053419d8eeab9751c1cd349bea586c2db", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
118824924
pes2o/s2orc
v3-fos-license
Overcoming the Sign Problem at Finite Temperature: Quantum Tensor Network for the Orbital $e_g$ Model on an Infinite Square Lattice

The variational tensor network renormalization approach to two-dimensional (2D) quantum systems at finite temperature is applied for the first time to a model suffering the notorious quantum Monte Carlo sign problem: the orbital $e_g$ model with spatially highly anisotropic orbital interactions. Coarse-graining of the tensor network along the inverse temperature $\beta$ yields a numerically tractable 2D tensor network representing the Gibbs state. Its bond dimension $D$, which limits the amount of entanglement, is a natural refinement parameter. Increasing $D$, we obtain a converged order parameter and its linear susceptibility close to the critical point. They confirm the existence of a finite order parameter below the critical temperature $T_c$, provide a numerically exact estimate of $T_c$, and give the critical exponents within $1\%$ of the 2D Ising universality class.

I. INTRODUCTION

Frustration in quantum spin systems occurs by competing exchange interactions and often leads to disordered spin liquids [1,2]. This is in contrast to Ising spins on a square lattice, where periodically distributed partial frustration in the form of exchange interactions with different signs does not suppress a phase transition at finite temperature T_c [3], while complete frustration gives a disordered classical phase [4]. Frustration may also be generated by a different mechanism, when Ising-like interactions for different pseudospin components compete: on a square lattice in the two-dimensional (2D) compass model [5-8], or on the honeycomb lattice in the Kitaev model [9]. While the short-range spin liquid is realized in the Kitaev model [10], pseudospin nematic order stabilizes below T_c in the 2D compass model [11,12]. In such cases entanglement plays an important role [13] and advanced methods of quantum many-body theory have to be applied.

In this article we investigate the phase transition at T_c in the 2D orbital e_g model. A better understanding of the signatures of this phase transition provides a theoretical challenge. We present a very accurate estimate of T_c and critical exponents in the 2D Ising universality class. These results could be achieved thanks to remarkable recent progress in tensor networks, namely the formulation of a finite-temperature algorithm using a projected entangled-pair operator (PEPO) [45].

The paper is organized as follows. Sec. II gives a brief overview of tensor network methods. Sec. III introduces the simulated model. Sec. IV introduces the 2D finite-temperature tensor network method used to simulate the model. Numerical results are presented in Sec. V. Sec. VI summarizes the paper. Appendix A gives a detailed description of the convergence analysis which enabled us to obtain trustworthy results for the model. Technical details of the simulations are given in Appendix B. Finally, Appendix C gives additional results for the low-temperature regime of the model.

II. TENSOR NETWORKS

Since the discovery of the density matrix renormalization group (DMRG) [46,47], which was later shown to optimize the matrix product state (MPS) variational ansatz [48], quantum tensor networks have proved to be an indispensable tool to study strongly correlated quantum systems [49]. The MPS ansatz was later generalized to the 2D projected entangled pair state (PEPS) [50,72] and supplemented with the multiscale entanglement renormalization ansatz (MERA) [51].
The networks do not suffer from the notorious sign problem [52], and in the doped case fermionic PEPS provided better variational energies for the t-J model [53] and the Hubbard model [54] than the best available variational Monte Carlo results. A combination of different tensor networks, supplemented with other sign-error-free methods, seems to have finally settled the controversy on the ground state of the underdoped Hubbard model [55]. The networks, both MPS [56-58] and PEPS [59-61], also made some major breakthroughs in the search for topological order. This is where, as in the e_g model [40], geometric frustration often prohibits traditional quantum Monte Carlo.

Thermal states of quantum Hamiltonians have been explored much less than their ground states. In one dimension they can be represented by an MPS ansatz prepared with an accurate imaginary time evolution [62,63]. A similar approach can be applied to 2D models [64,65], where the PEPS manifold is a compact representation for Gibbs states [66], but the accurate evolution proved to be more challenging. Alternative direct contractions of the 3D partition function were proposed [67] but, due to the local tensor update, they are expected to converge more slowly with an increasing refinement parameter. Even a small improvement towards a full update can accelerate the convergence significantly [68]. In order to avoid these problems, in the pioneering work [45] two of us introduced an algorithm to optimize variationally a projected entangled-pair operator (PEPO) representing the Gibbs state e^{-βH} of a 2D lattice system (β ≡ 1/T). Its first challenging benchmark applications include the quantum compass [12] and Hubbard [69] models, where it provided accuracy comparable to the best conventional methods.

This was not quite unexpected. Just like for the ground-state PEPS, the accuracy of the thermal PEPO is limited by its finite bond dimension D, i.e., the size of the tensor indices connecting nearest-neighbor lattice sites. This size limits the entanglement within the ground/thermal state. However, by its very definition the Gibbs state is the mixed state that maximizes the entropy for a given average energy. Since this maximal entropy is actually the entropy of entanglement with the rest of the universe, then, thanks to the monogamy of entanglement, the Gibbs state also minimizes its internal entanglement. Among all states with the same average energy it is the one most suited to be represented by a tensor network.

Encouraged by the benchmark tests, in this work we apply the algorithm for the first time to a model that evades treatment by quantum Monte Carlo [40,41]. Numerical convergence and self-consistency alone allow us to make definitive statements on the physics of the model, demonstrating the power of this method.
III. THE e_g ORBITAL MODEL

The quantum e_g model on an infinite square lattice is defined by the Hamiltonian

H = J Σ_j ( τ^a_j τ^a_{j+e_a} + τ^b_j τ^b_{j+e_b} ).   (1)

Here j labels lattice sites, e_a (e_b) are unit vectors along the a (b) axis, and τ^α_j are orbital operators represented by Pauli matrices:

τ^{a(b)}_j = (1/4) ( -σ^z_j ± √3 σ^x_j ).   (2)

The coupling in the orbital space depends on the spatial orientation of the bond. In what follows J = 1. At low temperature a spontaneous breaking of symmetry takes place and the system orders according to the strongest interaction ∝ (3/16) σ^x_i σ^x_j [14]. This symmetry breaking implies a finite real order parameter

m = ⟨ σ^x_j ⟩.   (3)

Unlike the 2D compass model [11], the model (1) is not tractable by Monte Carlo [41], but the order parameter suggests the 2D Ising universality class for the finite-temperature transition, which is confirmed by our simulations.

FIG. 1. A route towards a tractable 2D PEPO network: (a) a small time step U(dβ) as a PEPO network with a bond dimension 4; (b) the operator e^{-βH/2} ≡ U(β) as a product of N small steps U(dβ)^N; contraction of (b) along each column gives (c) a 2D network with a huge bond dimension 4^N, where each bond line is inserted with (d) an orthogonal projection of dimension D made of two isometries; next each isometry is absorbed into its (e) nearest tensor, truncating the dimension of its bond index from 4^N down to D. This leads to the network U(β) depicted in (f) with a bond dimension D.

IV. THE ALGORITHM AT T > 0

The algorithm was described in all technical detail elsewhere [12]. Its aim is to represent matrix elements of the operator ρ = e^{-βH/2} by the 2D tensor network in Fig. 1. Here we show only a small 4 × 4 unit of an infinite square lattice, and each geometrical shape (here a green ball) represents a tensor. There is one tensor at every lattice site. Each line sticking out of a tensor represents one index. A (black) line connecting two tensors represents a tensor contraction through the connecting index. There is one bond index along every nearest-neighbor bond. It has a finite bond dimension D. The dashed bond lines connect the 4 × 4 unit with the rest of the lattice. The open (red) vertical indices number the orbital basis states. Those pointing up/down number bra/ket states.

The desired 2D network in Fig. 1(f), known as a PEPO, can be contracted efficiently to obtain local expectation values. A finite D is sufficient to represent Gibbs states with their limited entanglement. On the other hand, the 2D operator e^{-βH/2} ≡ U(β) can be naturally represented by a 3D network, the third dimension being the imaginary time β. The evolution is split into N small time steps (dβ ≪ 1), U(β) = U(dβ)^N. With a Suzuki-Trotter decomposition, each step can be represented by the 2D layer in Fig. 1(a). In the e_g model, its bond indices have dimension 4. The product of N steps is the 3D network in Fig. 1(b). Here we show only three layers; the remaining N − 3 are represented by the vertical dashed lines.

The 3D network is too hard to treat directly. Formally, it can be compressed to a 2D network by contracting along each vertical column first. The resulting 2D network in Fig. 1(c) arises at the price of a huge bond dimension 4^N. Fortunately, we know that just a tiny D-dimensional subspace of the 4^N dimensions is enough to accommodate all correlations. Therefore, it is justified to insert into every bond line a D-dimensional projection made of two isometries. There are two independent projections along the axes a and b, see Fig. 1(d).
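As a minimal numerical illustration of this truncation idea (a toy numpy sketch, not the PEPO algorithm itself; a random matrix stands in for the huge bond of Fig. 1(c)):

```python
# Toy illustration: truncating a bond index to dimension D by keeping the
# leading singular vectors of the matrix across that bond. For thermal
# states the singular spectrum decays quickly, so a small D suffices;
# a random matrix, as used here, is the worst case.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((64, 64))    # stand-in for a large bond matrix

D = 8                                # target bond dimension
U, s, Vh = np.linalg.svd(M, full_matrices=False)
P = U[:, :D] @ U[:, :D].conj().T     # orthogonal projection built from two isometries

truncation_error = np.linalg.norm(M - P @ M) / np.linalg.norm(M)
print(f"relative truncation error at D={D}: {truncation_error:.3f}")
```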
After the insertion, every isometry is absorbed into its nearest tensor, truncating its bond index down to a tractable size D, see Fig. 1(e). The outcome is the desired PEPO U(β) in Fig. 1(f), and the Gibbs state is e^{-βH} = U†(β) U(β).

Now the problem is how to handle the huge isometries from 4^N to D. Fortunately, by a divide-and-conquer strategy, each of them can be split into a hierarchy of smaller isometries connected into a tree tensor network [12]. It is possible to optimize the smaller isometries one by one to obtain the most accurate projection available for a given D. The cost of the algorithm is polynomial in D and only logarithmic in the number of steps N, allowing for dβ small enough to make the Suzuki-Trotter decomposition numerically exact at very little expense.

V. NUMERICAL RESULTS

For each T < T_c the order parameter m, Eq. (3), was converged in D in the symmetry-broken phase, see Fig. 2. For each D it was fitted with a power law, m(T) ∝ (T_c − T)^β, yielding T_c = 0.35661 and β = 0.125, respectively. For more details see Appendix A.

In the symmetric phase above T_c, we calculated the magnetic susceptibility using the linear approximation

χ(T) = ∂m/∂h |_{h=0}.   (5)

Here h is an infinitesimal symmetry-breaking field, entering through a term h Σ_i τ^x_i added to the Hamiltonian (1). The derivative was approximated accurately by a finite difference between h = 10^-6 and h = 0. More details on the numerical calculation of χ(T) are given in Appendix B, see Fig. 7 and Table I. The susceptibility was converged in D (Fig. 4) and fitted with a power law, χ(T) ∝ (T − T_c)^{−γ}, with error bars deduced from the scatter of the data for D ≥ 7 in Fig. 3(b) multiplied by a factor of 3; see also Fig. 5(b).

It is worthwhile to compare the above estimate (7) with the 2D Ising model [70] with interaction (1/4) σ^z_i σ^z_j. More details on the m(0) simulation are given in Appendix C, see Fig. 8. The value in Eq. (9) was obtained by the present method and agrees with the ground-state MERA calculations [14]. This shows that the quantum fluctuation effects in the e_g orbital model (1) are very weak indeed at T = 0 [36], while at T > 0 the fluctuations are activated and reduce significantly the value of the critical temperature, down to T_c ≈ 0.3566, see Eq. (7). Indeed, quantum fluctuations play a role here, but they are not as significant as for the 2D SU(2)-symmetric Heisenberg antiferromagnet [39]. Yet, the entanglement between the orbital operators is here much reduced from that in the 2D compass model [45], and therefore such an accurate estimate of T_c (7) is possible.

VI. SUMMARY

Being a paradigmatic frustrated system, the orbital e_g model evades treatment by quantum Monte Carlo, but it proves to be accurately tractable by our thermal tensor network. The notorious sign problem, often inescapable for quantum Monte Carlo, is not an issue for our method. Instead, the relevant issue is whether the entanglement in a thermal state can be accommodated within a bond dimension that is small enough to fit into a classical computer. This criterion is satisfied by the thermal state of the e_g model, and a four-digit estimate of the critical temperature and a better than 1% accuracy of the critical exponents could be achieved. Since the Gibbs state is the least entangled one among all excited states with the same average energy, it is potentially the easiest target for a suitable tensor network.

ACKNOWLEDGMENTS

We thank Philippe Corboz for insightful discussions. We kindly acknowledge support by Narodowe Centrum Nauki (National Science Centre, Poland).

Appendix A: Convergence analysis

The bond dimension D (see Fig. 1) has to be large enough to accommodate the entanglement in the thermal state.
Furthermore, an environmental bond dimension M, which is used in the contraction of the effective 2D tensor network depicted in Fig. 1(f) (see Ref. [12] for details), has to be large enough to accommodate long-range correlations. In general, these requirements cannot be satisfied at the critical temperature T_c, but the phase transition can be approached from both sides closely enough to fit the critical power laws. In this appendix we demonstrate that we are indeed able to approach T_c closely enough to obtain stable and converged fits. All results presented here, which were obtained with M = 72, are converged in M.

Another potential source of errors are Trotter errors. They are not a significant issue for our approach, as its cost scales at most logarithmically with the inverse Trotter time step 1/dβ. Our results were obtained with dβ ≤ 0.001 and are converged in dβ.

The convergence of the critical exponents, β for the magnetization m(T) and γ for the susceptibility χ(T), is shown in Figs. 5 and 6, together with the 2D Ising model exponents, β_Ising = 1/8 and γ_Ising = 7/4. For D ≥ 7 we see that the exponents approach the Ising values as T_lim approaches T_c. For T_lim sufficiently close to T_c they no longer depend significantly on the range of T, depending instead primarily on D. In this regime all fitted exponents fall within 1% of the 2D Ising universality class, drifting towards β_Ising or γ_Ising with increasing D. The obtained behavior of the exponents indicates the 2D Ising universality class of the transition.

The data collected in Figs. 5(b) and 6(b) demonstrate similar convergence behavior of the fitted T_c as for the exponents. For D ≥ 7 the fitted T_c approaches T_c = 0.3566 as T_lim approaches the critical point. For T_lim sufficiently close to T_c, the fitted T_c begins to depend primarily on D rather than on T_lim. Reaching this regime, where the fits become stable with respect to T_lim, justifies taking into account only their D dependence to obtain the final T_c estimate, Eq. (7). We remark that our estimate of T_c is based on two independent estimates, coming either from the χ(T) or the m(T) fits, which agree up to five digits for the largest D.

Appendix B: Technical details of the simulations

In our simulations we use the algorithm described in detail in Ref. [12]. In particular, we use corner matrix renormalization (CMR) to contract approximately the tensor networks representing thermal states [71,72]. To reach convergence of the observables m and χ, approximately 10 iterations of the optimization loop were necessary. The isometries at the beginning of the loop were initialized by a local truncation scheme based on the higher-order singular value decomposition. The CMR procedure made ∼1000 iterations in the whole loop. The further away from the phase transition, the fewer CMR iterations were necessary to reach convergence.

The linear susceptibility χ(T) defined by Eq. (5) was calculated from a finite difference of the order parameter, δm, corresponding to a finite difference of the symmetry-breaking field, δh = 10^-6:

χ ≈ δm / δh,

where δm = m(h = δh) − m(h = 0). Fig. 7 shows that χ(T) is already converged in δh for δh = 10^-6. A more accurate benchmark of the δh convergence is given by Table I, showing that decreasing δh further results in changes of the fitted γ and T_c that are negligible compared to their dependence on D or the range of T. All simulations were done in Matlab with an extensive use of the Ncon procedure [73].
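The finite-difference step above amounts to simple arithmetic; a minimal sketch with made-up numbers:

```python
# Sketch of the finite-difference susceptibility described above:
# chi(T) ~ [m(h = dh) - m(h = 0)] / dh, with dh = 1e-6.
# The order-parameter values are made-up numbers for illustration.
dh = 1e-6
m_h = 3.2e-5     # order parameter with the tiny symmetry-breaking field
m_0 = 0.0        # order parameter without the field (symmetric phase)

chi = (m_h - m_0) / dh
print(f"chi = {chi:.1f}")   # here 32.0
```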
To give an idea of the actual time and computer resources needed to generate the data: the most challenging data points, nearest to the phase transition and with the largest bond dimensions D = 11 and M = 72, required 1-2 days on a desktop.

Appendix C: Simulation of the low temperature phase

The entanglement in the low-T phase is small enough to converge the curve m(T) in D already for D = 4, see Fig. 8. Thanks to a short correlation length at low temperature, the calculations are much less demanding numerically than close to the critical point. Because of that we were able to generate the data shown in Fig. 8 during one day using a laptop.
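As an illustration of the power-law fits described in Appendix A, a minimal sketch on synthetic data (scipy is assumed; this is not the production Matlab code, and the synthetic points simply reuse the fitted values T_c = 0.3566 and β = 0.125 quoted above):

```python
# Sketch of the critical power-law fit used in Appendix A:
# m(T) = a * (Tc - T)**beta for T < Tc, here on synthetic data
# generated with Tc = 0.3566 and beta = 0.125 plus a little noise.
import numpy as np
from scipy.optimize import curve_fit

def m_law(T, a, Tc, beta):
    return a * np.clip(Tc - T, 0, None) ** beta

rng = np.random.default_rng(1)
T = np.linspace(0.30, 0.355, 20)
m = m_law(T, 1.0, 0.3566, 0.125) * (1 + 0.002 * rng.standard_normal(T.size))

popt, _ = curve_fit(m_law, T, m, p0=(1.0, 0.357, 0.13))
print("a, Tc, beta =", popt.round(4))
```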
2017-07-18T18:29:38.000Z
2017-03-10T00:00:00.000
{ "year": 2017, "sha1": "c39b7a2e335df47f0c2ef56353a42451d7f0fb34", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1703.03586", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f2f2829d7f40d5ec5aab7b5f121a5daddbb27012", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
32313195
pes2o/s2orc
v3-fos-license
Smart Feedback and the Challenges of Virtualisation

The use of audio feedback is becoming more prevalent and it would be possible to use avatars for this purpose. When audio feedback is recorded by a human tutor, the recording contains not only the text of the feedback, but also additional information associated with the intonation and manner of delivery of the voice. Experiments were conducted to investigate students' responses to the use of audio in comparison with other forms of feedback. Students were generally positive about audio feedback; results also indicated that the conveyed emotion or intent is significant and that it is perceived by the student as an important part of the feedback. We also explore this in the context of strategies for the deployment of virtual agents in the provision of feedback.

Introduction

The development of intelligent agents, affective computing and virtual spaces for training and education, together with the convergence of media platforms, is allowing the development of smart educational environments. Automated systems for providing advice and feedback could, where appropriate, provide rapid support for students in their learning. This encourages and facilitates existing identified good practice, but does not place an unrealistic burden on the tutor. One of the challenges in deploying such systems is to take full advantage of the new technologies while retaining the benefits of existing tried and tested methods. A strategy that has emerged recently and has been successfully introduced into many courses is the use of recorded audio as feedback. However, the reasons for this success are not entirely clear. In this paper we explore some factors in the use of audio feedback, including student responses to audio feedback compared to other forms and the significance of tone of voice, in order to better understand students' perceptions of this mode of feedback. This in turn allows us to consider requirements for the provision of audio in smart educational environments.

Lessons have been delivered remotely to children in isolated communities in this way since the 1940s, although recently the use of satellite broadband technology for this purpose has become more prevalent. The children in these programmes live in remote communities and rely on this communication for both their formal education and for socializing with their fellow pupils. The system has been shown to be at least as effective, if not more so, than face-to-face teaching [1]. The main issues with these schemes appear to have been reluctance on the part of schools to engage with the material [2], preferring to do things their own way, rather than specific issues with the characteristics of the [medium]. Allport and Cantril [3] point out that [the voice must take] the place of visual aids and supply the personality of the tea[cher]. This view is supported by Lehman [4], who considered the role of emotion in distance education and the importance of presence, and concluded that "a more complete understanding of emotion as a component of cognition and behavior and of the role of emotion in creating a sense of presence in teaching and learning can help instruct us in effective teaching, instructional design" and related practice. In order to effectively provide a context for this work, we explore the nature and importance of student feedback, the use of voice in feedback and emotion analysis, and what can currently be achieved in terms of expressing emotion in artificial voices.
A flexible and useful model of the role of feedback in learning is presented by Nicol and Macfarlane-Dick [5], in which the learning process is considered to comprise both internal and external feedback cycles that are followed in an iterative manner. There is a great deal of published work on the importance of feedback in the learning cycle, and a number of heuristics for assessing the quality of feedback have emerged from identified good practice, some of which are:

- Timeliness
- Useful for improving future performance
- Personal
- Understandable
- Puts the grade into context
- Encourages teacher and peer dialogue
- Encourages positive motivation and self-esteem
- Facilitates self-assessment

Gibbs [6] explored the problem of increased workload for staff in providing feedback of appropriate quality to large cohorts of students. Previous studies have concluded that, with appropriate tools and workflow, the provision of audio feedback can reduce the time taken to provide feedback when compared to written feedback.

Speech contains information not only in what is said, but also in the manner in which it is said, and the potential ability of smart environments to analyse for emotion and stress cues has implications for privacy, in addition to potentially leading to more responsive systems. The merging of emotion and computing is an example of affective computing, which was first described by Picard [7]; it describes the potential for emotions to be both analysed and expressed by computational devices. Emotion is difficult to define and difficult to measure, which makes it an interesting challenge [8,9]. Linnenbrink [10] explores how emotions play an integral role in education and brings together a wide range of theories and models to explore the integration of affect, motivation and cognition. It is clear that there are many challenges, and this is a relatively new area of research. Robison et al. [11] developed an automated system to investigate the consequences of affective feedback in intelligent tutoring systems. The system was text based, but it did identify the importance of identifying appropriate [affective responses].

Previous studies [12,13,14,15,16] have found that the use of audio feedback had a wide range of benefits for both students and tutors. The students appreciated the feedback for a wide range of reasons, including the additional detail often provided, the tone of voice in which comments are made, and the feeling that they were being exposed to a thinking process. Kapas et al. [17] differentiate between different studies and consider Emic and Etic markers, which refer respectively to those voice parameters that can be identified by a human as characteristic of a given emotion, and those that can be identified by analysis but not by another human. With audio feedback, the user's interpretation of emotion and intent is based on their cultural framework, their experience and the human-identifiable markers. Issues such as the number of identifiable emotional states, and how these differ ethnographically, depend on the parameters chosen and the model for emotion adopted [8]. Some research has focussed on considering a limited range of emotions to suit the relevant purpose, which makes recognition more accurate [18]. Generating emotion-based speech is less complicated, but it still presents considerable challenges.
An example is Papous, the Virtual Storyteller [19], in which the use of emotion tags allows a virtual storyteller to express a range of emotions. The authors concluded that the voice was more synthetic than they had hoped for; that is, it did not sound like a human voice. Another strategy for audio feedback would be to use combinations of pre-recorded phrases, as is often done for public transport announcements. The use of pre-recorded phrases would limit the potential richness and individualisation of the feedback, but would have the advantage of sounding natural. Their use in audio systems might be similar to the use of feedback banks [20]. Tao et al. [21] summarise a wide range of speech synthesis strategies and conclude that continued work is necessary to improve synthetic speech quality.

Work Undertaken

Our studies investigated responses to pre-recorded audio feedback, in terms of emotional perception and content (although these factors are not independent). Our studies take an emic approach, where we are interested in the perceptions of the students and not in any automated analysis of emotion. Three studies were carried out to obtain qualitative data on human-voice audio feedback, together with a pilot study to understand the implications of the use of virtual audio feedback. In the first study, forty students were asked for their views on the use of audio feedback in two pieces of formative coursework (towards a technical report) in a final-year undergraduate I.T. module. In the second, eighty students from the same course and two independent tutors were asked to identify emotion and intent in the voice used for audio feedback in two pieces of formative coursework. The third study was in respect of summative audio feedback on a multimedia artefact for fourteen final-year multimedia computing students. The students were asked the same questions as in the second survey. In each study, the audio files were recorded on a Zoom H2 recorder and compressed and processed using the batch facility in Audacity.

The purpose of the first study was to determine whether the use of audio feedback was appropriate for the task. The factors being considered were:

- Was it simple for the lecturer to produce the feedback?
- Were there any benefits for the lecturer in using audio feedback?
- Did the students find audio feedback as useful as written feedback?

Producing the audio feedback was straightforward, once a workflow had been established. It was also possible to provide more feedback in a given amount of time using this method. Figure 1 shows the structure of the assignment for the first two studies. The students submit two 500-word drafts before submitting a final 3000-word consultancy report. This allows them to make mistakes early on and learn from them prior to any summative work. It also allows them to develop a clear understanding of expectations and of the quality required to achieve a good grade. It is important to note that the audio feedback was generated in real time, and that the audio files provided to the students were not edited or produced in any way other than basic noise reduction and compression as part of the batch processing in Audacity. One student with profound hearing loss was given their feedback as a text file.

After receiving audio feedback for their first formative assignment, the students were asked whether they wanted the same approach to be used for their second formative submission or whether they would prefer text-based feedback. All of the forty students chose to receive audio files and felt that they were useful and appropriate; one asked that it also be provided as text. At this stage, students were not asked for any other information.
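The batch noise-reduction and compression step mentioned above was done in Audacity; a comparable step could also be scripted. The sketch below assumes the pydub library (with ffmpeg installed) and a hypothetical folder name; it is an illustration, not the workflow the authors actually used:

```python
# Hypothetical scripted equivalent of the Audacity batch step described
# above: normalise each recording's level and export a compressed copy.
# Assumes pydub and ffmpeg are available; folder name is made up.
from pathlib import Path
from pydub import AudioSegment, effects

for wav in Path("feedback_recordings").glob("*.wav"):
    audio = AudioSegment.from_wav(wav)
    audio = effects.normalize(audio)   # even out recording levels
    audio.export(wav.with_suffix(".mp3"), format="mp3", bitrate="96k")
```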
After receiving audio feedback for their first formative assignment, the students were asked whether they wanted the same approach to be used for their second formative submission or whether they would prefer text-based feedback. All of the forty students chose to receive audio files and felt that they were useful and appropriate; the one student with profound hearing loss remained the exception, and it was agreed that their feedback could be provided as text. At this stage, students were not asked for any other information. After their second assignment, the students were asked two questions, and were also asked to provide further responses if they had any additional comments. Among the questions asked was: Was the audio feedback useful? All of the students felt that the feedback had been provided earlier than previous written feedback and that it was easier to understand, a typical student comment being 'we can tell what the tutor really likes by the tone in their voice when talking about a certain attribute'. Students were not generally concerned that the recordings had been made in real time and contained pauses and additional noise, although one student reported that their file was very noisy; in this case, the file was sent again. These results are in line with findings from other institutions [12,13,14,15,16]. In the second study, 80 students and the two independent tutors were asked to identify emotion and intent in audio feedback for two assignments, and they were also invited to comment more generally on the delivery of the feedback. Fifty-four students responded positively to the format of the feedback, of which 22 responded directly to the questions about emotion and intent. One student asked if they could be provided with text-based feedback, and two files had to be compressed again and resent to students as a result of noise generated in the batch conversion process. The questions asked were:
- When you listen to the feedback, does my tone of voice help you with understanding what I mean?
- Would it be better if the feedback was written?
- Would it be better if I tried to keep my voice more formal?
- How would you describe my tone of voice?
- Do you think that feedback by voice allows you to understand more than text alone?
Responses indicated that:
- Students felt that the audio feedback contained more detail than written feedback.
- An informal tone of voice was the most appropriate.
- Receiving audio feedback provided a similar experience to receiving one-to-one physical feedback from the tutor.
- The tone of voice helped with understanding of the content.
- Audio files should not be too long, as it is more difficult to rewind to a section.
The independent tutors felt that the feedback sounded consistently positive and supportive, and they supported the idea of providing feedback in this way. The third study used students from a different subject area, namely multimedia technology. Whilst the previous studies had involved formative feedback on written work, the third study used summative feedback on a YouTube video recording of an individual project. Thirteen of the fourteen students surveyed felt that the tone of voice was important in understanding the feedback. All of the students felt that audio feedback helped them understand more than text alone. Two students would have liked to receive additional text feedback. Students mostly preferred an informal voice to a more formal one, but two students felt that a more formal tone would have been appropriate.
One student commented that receiving comments this way 'also gives a feeling as if I am getting direct feedback from a tutor'. It is interesting to note that the comments received were very similar to those of the second survey, and that the nature of the comments was subject independent. For the next stage of this work, we wished to explore the effect of using an artificially generated voice, perhaps with an avatar-based interface, for providing feedback. Issues here would include the students' familiarity with the voice and the extent to which appropriate emotions could be embodied in it. A small pilot study was conducted with 10 students, who were given audio feedback provided via an artificial speaker. In order to create this effectively, the audio feedback was provided by the lecturer and transcribed before being played through a text to speech engine. The students had all received audio feedback in the lecturer's voice for an earlier piece of coursework and had responded positively to its use. They were asked if the machine-generated audio feedback was as useful, and whether it was preferable to written feedback. The response was unanimous; they felt that the audio feedback via the text to speech engine was not as useful as that using the lecturer's voice. Some students asked if they could receive the feedback as text in preference to the text to speech engine. The pilot study indicated that the emotion and sense of presence could only be provided by the voice of the lecturer, and not by the artificial speaker. It is difficult to know, without further study, the role that expectation plays in student perception, as these students had become accustomed to receiving audio feedback from their tutor. It is important to note that this was a qualitative study; we were not attempting to obtain statistical data based on a detailed questionnaire, but rather to tease out insights as to the effectiveness of audio feedback. An example was the unanimous perception among the students that they had received feedback earlier when it was provided in audio form. This was not actually true, and the perception was probably due to the students being more ready to engage with the feedback in audio form than they had been when it was provided in text form. It appears that students often ignored, or failed to remember, text-based feedback, whereas the audio feedback had a greater impact on them. Of course, this could be a short-term effect, due to the novelty of the method, but only time will tell. Of course, there are always caveats. Students sometimes tell their tutors what they want to hear, and this might have skewed the results. Although our study was concerned with emotion in verbal feedback, the overall conclusion (that students preferred a friendly, cheerful voice and felt that this was appropriate) does not explore the potentially complex changes in emotional state that a student might experience when listening to feedback [10], nor provide any deep understanding of how to leverage these for optimal motivation and engagement.
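Before turning to implications: the pilot's pipeline (the lecturer records, the recording is transcribed, and the transcript is synthesized) can be sketched in a few lines. pyttsx3 is an illustrative choice of engine; the paper does not name the text-to-speech system used, and the transcript text below is invented.

```python
# Sketch of the pilot's pipeline: the lecturer's recorded feedback is
# transcribed, and the transcript is rendered by an offline text-to-speech
# engine. pyttsx3 is an illustrative choice (the paper does not name the
# engine used), and the transcript text is invented.
import pyttsx3

transcript = (
    "Your report is well structured, but the evaluation section needs "
    "more detail on how the survey data were analysed."
)

engine = pyttsx3.init()
engine.setProperty("rate", 170)  # slightly slower than default, for clarity
engine.save_to_file(transcript, "feedback_tts.wav")
engine.runAndWait()  # blocks until the audio file has been written
```

Even with the speaking rate tuned, such output carries none of the lecturer's prosody, which is consistent with the pilot students' unanimous preference for the human recordings.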
Discussion of Implications

Our studies show that the use of the recorded voice for feedback provides a richer experience for the recipient, as more information can be extracted from listening than is possible with the written word alone. The same words spoken with a positive, supportive tone of voice are more motivating than they would be if the recipient were reading them from a screen. However, this is a two-edged sword, as unconscious, negative nuances in the voice of the tutor might also be picked up by the student. People are very good at tuning in to such subtleties, and this places an onus on the provider of feedback to try to avoid intonation that might demotivate the recipient. The other side of this coin is that the student will not be able to read the visual cues that are an important part of face-to-face conversation, which makes the quality of the aural cues even more important. Recording verbal feedback in real time does not allow the tutor as much opportunity for reflection as when providing written feedback, and this might cause them to use their natural mode of speech, thereby revealing emotional content that they might otherwise have hidden in the interest of motivating the student. It is often said that one should emphasise the positive rather than picking out the faults, but this strategy could be undermined in the above circumstances. Värlander [22] notes that emotional reactions to feedback cannot be turned off automatically, and may last for days. In such situations, a learner may be unreceptive to further input; emotional content, whether written or verbal, can be taken as criticism of the individual rather than of their work, and can arouse feelings of failure or inadequacy in the student that can persist for a long time. This emphasizes the need for care when presenting feedback. The problem can arise in written feedback, particularly when this is given in a terse style; for example, it is often noted that emails and text messages can unintentionally appear abrupt and sometimes offensive. However, with verbal feedback, the range of expressible emotions is much greater, as there is clearly more room for subtle, nuanced expression of emotion in this form of communication than in the written form. The very advantage of rapidly produced verbal feedback recordings, i.e. the impression for the student of a personal dialogue with their tutor, can also be a danger, as any perceived negative nuances will also be seen as coming directly from the tutor. Another possible issue with recorded verbal feedback is that, when speaking, professionals will tend to use the common, shared language, idioms and vocabulary of their profession. This is often the case even when they are discussing subjects not related to their discipline, as noted in the work on cognitive discourse analysis by Tenbrink et al [23]. With written feedback, tutors might moderate their language level, but with verbal feedback, they are more likely to speak in the manner that comes naturally to them. Of course, one of the things that the students are supposed to be learning is the language of their chosen field of study, so perhaps this is not always a bad thing. However, tutors operate across two domains and will be using not just language specific to their specialist subject areas but also that of education itself. Evidence from sources such as the National Student Survey suggests that students often struggle with educational jargon and do not understand concepts such as 'feedback', 'reflective approaches', 'paradigms' and 'heuristics'. It is therefore doubly important for tutors to use language appropriate to their audience. It would clearly be desirable for virtual agents to be able to provide audio feedback. According to Ivanovic [24], a lot of evidence has been gathered to suggest that virtual agents induce positive feelings in humans during interaction, if the agents are capable of displaying emotions.
Our results indicated that with audio feedback the role of emotion was critical; however, no students expressed a desire to hear a range of emotions. Cafaro et al [25] examined how interpersonal first impressions worked when one of the participants was a virtual agent that exhibited non-verbal cues. They found that it took an average of only 12.5 seconds for people to form an impression of the virtual agents; in other words, their natural reactions to the virtual agents were similar to those they would have exhibited when encountering another human. In the context of feedback, therefore, it would be important that the text-to-speech virtual avatar could accurately express the emotions implicit in the associated text (and, of course, that the latter was appropriate in terms of student motivation in the first place). Although there has been a lot of research into creating avatars that can express human-like emotions, state of the art virtual agent systems still do not allow a wide range of emotions to be accurately expressed. For example, Lee et al [26] attempted to develop an avatar capable of conveying Ekman's six classic emotional states, i.e. anger, disgust, fear, happiness, sadness and surprise, via facial features. Their avatar managed to accurately reproduce happiness and sadness, but had mixed results with the other four states. This emphasizes the difficulty of incorporating emotion into avatar-based systems. However, our studies revealed a general consensus among our students that a cheerful, informal tone was preferred. This limited range of emotion would be easier to implement with a virtual agent than a system with a wide range of emotional expressions. Even if this problem were solved, there would still be the linguistic problem of automatically and accurately interpreting the emotional content of written text, so that the avatar could respond appropriately. Given that producing tutor-generated verbal feedback can be quick and effective (speaking the feedback does not take longer than typing it), it seems that such systems would not be appropriate for feedback provision; indeed, one of the most positive features of verbal feedback for our students was the perception of personal contact with their tutor. Another issue for a virtual agent would be generating the content of the feedback. In most cases this involves high-level cognitive activity on the part of the tutor, which is beyond the capabilities of current virtual agents. However, certain elements of assessment feedback do lend themselves to automation. For example, it is possible to automatically analyse documents for structure and general use of language, or to seek key words and phrases. It is also possible to automate assessment of documentation and style in computer programs submitted for assessment, to automatically test the functionality of such programs against predetermined test suites [27], or to use a model, which may involve AI techniques, to allow analysis of a structured response [28]. Assessment of some mathematics exercises can also be automated. Kumar [29] considered the feasibility of automated tutors that could help students learn, and considered two different purposes: those that assess and those that learn. The important distinguishing feature is the provision of feedback. The feedback may be immediate, or provided on demand when the problem is solved.
Kumar pointed out that if an answer is incorrect, then ideally the tutor can point out why it is incorrect and how this may be fixed. Where such examples are based on logic and rules, this is simpler to code. It is possible to provide some more general feedback from rule-based systems, although this does require significant upfront work on the part of the tutors. For example, combinations of predetermined phrases can be generated in response to combinations of answers to multiple-choice questions (a minimal sketch of this idea is given below), but these are rather limited applications. Current virtual agent systems do not have the sophistication to produce generalised feedback in the manner of a human tutor. Furthermore, although feedback using such systems can be very fast, which is appreciated by students, the loss of the impression that the tutor is spending time engaging with the work might reduce the impact of a virtual tutor. As we have found, students like to hear the familiar voice of their tutor; this makes the feedback feel more personal to them and perhaps, therefore, makes them more likely to act on it. Programmed Learning approaches [30] traditionally use a linear approach, and it would be possible to apply them in this context, but the feedback is often very limited in its scope, with the core concept being one of progressing only when a response is correct.

Conclusions and Future Work

The provision of audio feedback seems to be valued by students for its timeliness and for its clarity in terms of meaning. Such feedback is viewed by students as more personal and immediate, and gives the impression that the lecturer is engaging with, and interested in, the students' work. The method is also advantageous for the tutor, as such feedback can be recorded quickly, without too much concern for production values. The intent is to provide personalised, supportive and informative content for the student, and not to produce broadcast-quality material. The caveat is that the tutor should maintain an empathetic, supportive tone throughout in order to engage the student. It is important to gain an understanding of how this might translate to artificial voices in the virtual world. Our pilot study with the text to speech system revealed that not only did students prefer feedback in the lecturer's voice (which might be expected), but they also preferred written feedback to the artificial voice. The text to speech system does not provide feedback more quickly, or save the tutor any effort, as the text from which the voice is generated still has to be produced, so at this stage there seems to be little point in pursuing this method. Benefits might accrue if such a system could be implemented with a rule-based approach using a virtual agent, to generate the feedback automatically, but this is currently only possible in a limited number of areas. We have not explored the role that expectation plays in the response to feedback. If students were submitting to a virtual environment expecting automated feedback, they might respond very differently to tone and have no expectations of a personal approach. There was also an interesting suggestion from one student that the recorded audio feedback is not only personal, but that it seems fair, because every student is getting a similar share of the lecturer's time; students did not always feel that this was the case with face-to-face dialogue.
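As a rough illustration of the rule-based idea mentioned above (predetermined phrases triggered by combinations of multiple-choice answers), consider the following sketch; the questions, rules and phrasing are entirely hypothetical.

```python
# Illustrative rule-based feedback: predetermined phrases are selected by
# matching patterns over a student's multiple-choice answers. The questions,
# rules and phrasing are entirely hypothetical.

RULES = [
    # (predicate over the answers, canned feedback phrase)
    (lambda a: a["q1"] == "b" and a["q2"] == "b",
     "You have the core definitions right; move on to the worked example."),
    (lambda a: a["q1"] != "b",
     "Question 1 suggests a gap in the basic terminology; reread section 2."),
    (lambda a: a["q3"] == "d",
     "Your answer to question 3 shows you can apply the method to new data."),
]

def generate_feedback(answers: dict) -> str:
    phrases = [phrase for matches, phrase in RULES if matches(answers)]
    return " ".join(phrases) or "No specific feedback rule matched."

print(generate_feedback({"q1": "b", "q2": "b", "q3": "d"}))
```

A rule table like this is quick to evaluate but, as noted above, it only suits narrowly structured assessments.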
Another possible strategy would be to enable students to obtain their own feedback by answering a series of questions from a virtual tutor. Each question would depend on the student's answers to previous questions, thereby providing more personalised feedback and encouraging them to take a more reflective attitude to their work. Nicol and Macfarlane-Dick [5] considered elements internal to the student and how they are linked by paths of internal feedback, as shown in Figure 2, which is adapted from [5]. They do state that feedback might be provided from a range of sources, including computer-generated feedback. The Virtual Mirror approach encourages students to reflect on their understanding of their own knowledge, goals and learning outcomes by facilitating articulation of these processes; it does not provide feedback on the students' work. We are not suggesting modification of the model proposed in [5], but rather its deployment in the development of a reflective strategy. Lei et al [31] explored the use of agents that collect the self-reflections of learners in simulation-based e-learning. Although this was a text-based approach, it allowed the use of a simple natural language processing technology to provide a path through questions developed using a semantic network approach. The problems encountered included the use of slang, and the conversation database was updated to take this into account.

Figure 2. Virtual Mirror (Adapted from [5])

A planned future extension to this work is to employ screen capture software to produce video feedback, in which scrolling through an essay or computer program is augmented with voiceover feedback. Another planned extension arises from the observation that the students surveyed in all cases came from diverse cultural backgrounds. We intend to explore whether there are any differences in the interpretation of the emotional cues by different cultures.
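Returning to the question-led strategy suggested above: it amounts to a small branching dialogue in which each answer selects the next question. A toy sketch, with an invented question tree, might look as follows.

```python
# Toy version of the question-led feedback dialogue sketched above: which
# question comes next depends on the student's previous answer, prompting
# reflection rather than delivering a verdict. The question tree is invented.

TREE = {
    "start": ("Did your results match your stated aims? (yes/no)",
              {"yes": "evidence", "no": "differ"}),
    "evidence": ("What evidence most strongly supports that conclusion?", {}),
    "differ": ("What do you think caused the difference? (method/data)",
               {"method": "method", "data": "evidence"}),
    "method": ("Which step of the method would you change next time?", {}),
}

def run_dialogue(ask):
    node = "start"
    while node:
        question, branches = TREE[node]
        reply = ask(question).strip().lower()
        node = branches.get(reply)  # questions with no branches end the dialogue

# Example run with canned answers standing in for real student input:
canned = iter(["no", "method", "the sampling step"])
run_dialogue(lambda q: (print(q), next(canned))[1])
```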
2018-01-14T04:48:47.511Z
2015-06-16T00:00:00.000
{ "year": 2015, "sha1": "682dd479140f6597be816cce372727451739821c", "oa_license": "CCBY", "oa_url": "http://eudl.eu/pdf/10.4108/fiee.1.2.e6", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "682dd479140f6597be816cce372727451739821c", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
14651535
pes2o/s2orc
v3-fos-license
A Language-Independent Approach to Keyphrase Extraction and Evaluation

We present Likey, a language-independent keyphrase extraction method based on statistical analysis and the use of a reference corpus. Likey has a very light-weight pre-processing phase and no parameters to be tuned. Thus, it is not restricted to any single language or language family. We test Likey, with exactly the same configuration, on 11 European languages. Furthermore, we present an automatic evaluation method based on Wikipedia intra-linking.

Introduction

Keyphrase generation is an approach to collect the main topics of a document into a list of phrases. The methods for automatic keyphrase generation can be divided into two groups: keyphrase assignment and keyphrase extraction (Frank et al., 1999). In keyphrase assignment, all potential keyphrases appear in a predefined vocabulary and the task is to classify documents into different keyphrase classes. In keyphrase extraction, keyphrases are supposed to be available in the processed documents themselves, and the aim is to extract these most meaningful words and phrases from the documents. Most of the traditional methods for keyphrase extraction are highly dependent on the language used, and the need for preprocessing is extensive, e.g. including part-of-speech tagging, stemming, and the use of stop word lists and other language-dependent filters.

Related Work

In statistical keyphrase extraction, many variations on term frequency counts have been proposed in the literature, including relative frequencies (Damerau, 1993), collection frequency (Hulth, 2003), and term frequency-inverse document frequency (tf.idf) (Salton and Buckley, 1988), among others. Additional features beyond frequency that have been experimented with include the relative position of the first occurrence of the term (Frank et al., 1999), the importance of the sentence in which the term occurs (HaCohen-Kerner, 2003), and widely studied part-of-speech tag patterns, e.g. Hulth (2003). Matsuo and Ishizuka (2004) present a keyword extraction method using word co-occurrence statistical information. Most of the presented methods need a reference corpus or a training corpus to produce keyphrases. The reference corpus acts as a sample of general language, whereas the training corpus is used to tune the parameters of the system. Statistical keyphrase extraction methods without reference corpora have also been proposed, e.g. (Matsuo and Ishizuka, 2004; Bracewell et al., 2005). The latter study was carried out on a bilingual corpus.

Reference Corpora

The reference corpus of a natural language processing system acts as a sample of general language. The corpus should be as large as possible to get sufficiently many examples of language use. In our study, we used the Europarl corpus, which consists of transcriptions of European Parliament speeches in eleven European languages, including four Romance languages (Spanish, French, Italian and Portuguese), five Germanic languages (Danish, German, English, Dutch and Swedish), Finnish and Greek (Koehn, 2005). The number of words in the corpora is between 23 million in Finnish and 38 million in French, while the number of word types differs from 98 thousand in English to 563 thousand in Finnish.

The Likey Method

We present a keyphrase extraction method, Likey, that is an extension of Damerau's method (Honkela et al., 2007). In Damerau's (1993) method, terms are ranked according to the likelihood ratio and the top m terms are used as index terms.
Both single words and bigrams are considered to be terms. Likey produces keyphrases using the relative ranks of n-gram frequencies. It is a simple language-independent method: the only language-specific component is a reference corpus in the corresponding language. Likey keyphrases may be single words as well as longer phrases. The preprocessing phase of Likey consists of extraction of the main text body, without captions of figures and tables, and removal of special characters (except for some hyphens and commas). Numbers are replaced with <NUM> tags. An integer rank value is assigned to each phrase according to its frequency of occurrence, where the most frequent phrase has rank value one and phrases with the same frequency are assigned the same rank. Rank values rank_a and rank_r are calculated for each phrase from the analysed text and the reference corpus, respectively. The rank order is calculated separately for each phrase length n; thus we get ranks from unity to max_rank for each n. This way, n-gram frequencies for n ≥ 2 are scaled to follow approximately the same distribution as 1-grams in the corpus. The ratio of ranks,

ratio = rank_a / rank_r, (1)

is used to compare the phrases. In highly inflective languages, such as Finnish, and languages with frequent word concatenation, such as German, many of the phrases occurring in the analysed document do not occur in the reference corpus at all. In that case, their ratio value is related to the maximum rank value, according to Eq. 2:

ratio = rank_a / (max_rank_r + 1), (2)

where max_rank_r is the maximum rank in the reference corpus. The ratios are sorted in increasing order and the phrases with the lowest ratios are selected as the extracted keyphrases. Phrases occurring only once in the document cannot be selected as keyphrases.

Evaluation

The most straightforward way to evaluate extracted keyphrases is to first decide which phrases are appropriate to the document and then calculate how many of the extracted keyphrases belong to the set of appropriate phrases, e.g. by using precision and recall measures. There are two widely used approaches for defining the appropriate phrases for a document. The first is to use human evaluators to rate the extracted keyphrases. The other approach is to analyse documents that have author-provided keyword lists. Each such document has a list of keyphrases which are easy to accept as correct. However, automated keyphrase extraction methods are usually poor at predicting author-provided keyphrases, since many of the provided phrases do not occur in the document at all but are rather superordinate concepts.

Multilingual Approach

In our framework, there are keyphrases in 11 languages to be evaluated. Due to the many problems related to human evaluation in such a context, we needed a new way of evaluating the results of our language-independent keyphrase extraction method. We took our evaluation data from Wikipedia, a free multilingual online encyclopedia. We present a novel way to use Wikipedia articles in the evaluation of a multilingual keyphrase extraction method. The Wikipedia corpus has lately been used as a resource for automatic keyword extraction for English (Mihalcea and Csomai, 2007), as well as for many other tasks. We suppose that those articles which are linked from the article at hand and which link back to the article are potential keyphrases of the article. For example, a Wikipedia article about some concept may link to its higher-level concept. Likewise, the higher-level concept may list all concepts belonging to the group.
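Before turning to the evaluation data: the scoring of Eqs. (1) and (2) can be sketched compactly. The example below is reduced to unigrams for brevity (Likey ranks each phrase length n separately, and also discards phrases occurring only once), and the token lists are toy data.

```python
# Compact sketch of the Likey scoring of Eqs. (1) and (2), reduced to
# unigrams for brevity (Likey ranks each phrase length n separately, and
# also discards phrases occurring only once). The token lists are toy data.
from collections import Counter

def ranks(freqs):
    """Map each phrase to its frequency rank (most frequent = 1; ties share)."""
    out, rank, last = {}, 0, None
    for i, (phrase, f) in enumerate(freqs.most_common(), start=1):
        if f != last:
            rank, last = i, f
        out[phrase] = rank
    return out

def likey_scores(doc_tokens, ref_tokens):
    doc_rank = ranks(Counter(doc_tokens))
    ref_rank = ranks(Counter(ref_tokens))
    max_rank_r = max(ref_rank.values())
    # rank_a / rank_r, falling back to rank_a / (max_rank_r + 1) for phrases
    # absent from the reference corpus (Eq. 2); lower scores are better.
    return sorted(
        (doc_rank[p] / ref_rank.get(p, max_rank_r + 1), p) for p in doc_rank
    )

doc = "cell membrane cell nucleus cell division the the of".split()
ref = "the of the of and to in the of cell".split()
for score, phrase in likey_scores(doc, ref)[:3]:
    print(f"{phrase}: {score:.3f}")
```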
Evaluation Data

Finding Wikipedia articles of adequate extent in all the languages is quite challenging, basically due to the generally rather short articles in Greek, Finnish and Danish. We gathered 10 articles that have a sufficient amount of content in each of the 11 Europarl languages. These 110 selected Wikipedia articles were collected in March 2008, and their English names are Beer, Cell (biology), Che Guevara, Leonardo da Vinci, Linux, Paul the Apostle, Sun, Thailand, Vietnam War, and Wolfgang Amadeus Mozart. The average lengths of the articles in Finnish, Dutch and Swedish are below 2,000 words, the lengths of the articles in Portuguese, Greek and Danish are around 3,000 words, and the rest are between 5,000 and 7,000 words. Normalised lengths would change the order of the languages slightly. The 67 links extracted from the English Wikipedia article Cell include phrases such as adenosine triphosphate, amino acid, anabolism, archaea, bacteria, binary fission, cell division, cell envelope, cell membrane, and cell nucleus. The extracted links serve as evaluation keyphrases for the article.

Results

In our study, we extracted keyphrases of length n = 1...4 words. Phrases longer than four words did not occur in the keyphrase lists in our preliminary tests. As a baseline, keyphrases were extracted from the same material with the state-of-the-art keyphrase extraction method tf.idf. Tf.idf (Salton and Buckley, 1988) is another simple and non-parameterized language-independent method that can be used for keyphrase extraction. For tf.idf, we split the Europarl reference corpora into 'documents' of 100 sentences and used the same preprocessing as for Likey. To remove uninteresting tf.idf-produced phrases like 'of the cell', a Likey-like post-processing step was tried, and it gave slightly better results. Thus the post-processing is used for all the reported tf.idf results. Generally, Likey produces longer phrases than tf.idf. Each keyphrase list characterises the topic quite well, and most of the extracted keyphrases recur in every language. Both methods extracted the French word 're', which is frequently used in the article as an acronym for réticulum endoplasmique; the same term in Dutch was extracted by tf.idf in the form 'endoplasmatisch reticulum er'. We compared our Likey keyphrase extraction method to the baseline method tf.idf by calculating precision and recall measures against the Wikipedia-based evaluation keyphrases for both methods. For the first evaluation round, we extracted 60 keyphrases from each document; for the second evaluation round, we extracted the number of keyphrases available in the evaluation keyphrase list for the document. The precision and recall values of both Likey and tf.idf evaluated with Wikipedia intra-links are comparatively low (Table 1).

Table 1: Average precisions and recalls for Likey, tf.idf and tf.idf with post-processing (p). 'N keyphrases' refers to the number of evaluation keyphrases available for each article.

The precisions and recalls obtained in the first evaluation differed significantly between languages. In Figure 1, the precision and recall of Likey and of tf.idf with post-processing are given for each language. Among the 11 European languages, English and German performed best according to precision (Likey: 23.0% and 22.8%, respectively), but not as well according to recall, where Dutch and Greek performed best (Likey: 33.4% and 31.8%, respectively).
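The evaluation computation itself reduces to set overlap between the extracted keyphrases and the article's bidirectional links. A minimal sketch, with invented phrase sets:

```python
# Sketch of the intra-link evaluation: extracted keyphrases are scored by
# overlap with the set of phrases that link both to and from the Wikipedia
# article. The phrase sets below are invented for illustration.

def precision_recall(extracted, gold):
    hits = len(set(extracted) & set(gold))
    precision = hits / len(extracted) if extracted else 0.0
    recall = hits / len(gold) if gold else 0.0
    return precision, recall

gold = {"amino acid", "cell division", "cell membrane", "cell nucleus"}
extracted = {"cell membrane", "cell nucleus", "eukaryote", "organelle"}

p, r = precision_recall(extracted, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.50
```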
Conclusions and Discussion

In this paper, we have introduced Likey, a statistical keyphrase extraction method that is able to analyse texts independently of the language in question. In the experiments, we focused on European languages, among which Greek and Finnish differ considerably from the Romance and Germanic languages. Regardless of these differences, the method gave comparable results for each language. The method is independent of the language being analysed: it is possible to extract keyphrases from text in a previously unknown language, provided that a suitable reference corpus is available. The method includes only lightweight preprocessing, and no auxiliary language-dependent methods such as part-of-speech tagging are required. No particular parameter tuning is needed either. A web-based demonstration of Likey is available at http://cog.hut.fi/likeydemo/, along with more detailed information on the method. The system highlights the keyphrases of a document written in one of the eleven languages. Future research includes an extension of Likey in which unsupervised detection of morphologically motivated intra-word boundaries (Creutz, 2006) is used. This extension could also handle languages that have no white space between words. We also plan to apply the method within statistical machine translation. A methodological comparison of keyphrase-based dimension reduction and e.g. PCA will also be conducted.
2014-07-01T00:00:00.000Z
2008-08-01T00:00:00.000
{ "year": 2008, "sha1": "a86a96b11436845b937d5b715a31a92098096852", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "a86a96b11436845b937d5b715a31a92098096852", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
202024231
pes2o/s2orc
v3-fos-license
Relationship between population density and viral infection: A role for personality?

Abstract

Conspecific density and animal personality (consistent among-individual differences in behavior) may both play an important role in disease ecology. Nevertheless, the two factors have rarely been studied together, although doing so may provide insightful information for understanding pathogen transmission dynamics. In this study, we investigated how both personality and density affect viral infections, both directly and indirectly, using the multimammate mouse (Mastomys natalensis) and Morogoro arenavirus (MORV) as a model system. Using a replicated semi-natural experiment, we found a positive correlation between MORV antibody presence and density, suggesting that MORV infection is density-dependent. Surprisingly, slower explorers were more likely to have antibodies against MORV than highly explorative individuals. However, exploration was positively correlated with density, which may suggest a negative, indirect effect of density on MORV infection. We have shown here that, in order to better understand disease ecology, both personality and density should be taken into account.

The benefits of behavioral traits such as exploration, boldness, and activity may come with potential fitness costs if they increase the probability or rate at which individuals encounter predators (Boon et al. 2008; Jones and Godin 2010) and/or pathogens (Barber and Dingemanse 2010; Boyer et al. 2010). The magnitude of these fitness costs is predicted to co-vary with prey personality and predator foraging strategies or modes of pathogen transmission. For example, more exploratory chipmunks Tamias sibiricus have higher parasite loads than less exploratory individuals because they are more active and cover a larger area, which increases their encounter rate with parasites (Boyer et al. 2010). Alternatively, aggressive interactions may increase the transmission of some viruses via infectious saliva in bite wounds, as with Seoul virus (Glass et al. 1988; Klein et al. 2004). Aggressive behaviors may also correlate with other personality traits, such as boldness, thus forming a behavioral syndrome (Sih et al. 2004). Bold deer mice Peromyscus maniculatus, for instance, are 3 times more likely to be infected with Sin Nombre virus than shy deer mice, presumably because they engage more frequently in aggressive interactions that are predicted to increase the probability of virus transmission (Dizney and Dearing 2013). Similar relationships have been found in feral domestic cats Felis catus between boldness and the prevalence of Feline Immunodeficiency Virus, another virus transmitted via saliva (Natoli et al. 2005). Alternatively, if pathogens are shed into the environment via feces or urine, other personality traits such as exploration or activity may increase the likelihood of encountering contaminated environments and hence infection (Hughes et al. 2012). Here, we use the multimammate mouse Mastomys natalensis as a model organism to investigate the relationship between 2 personality traits (exploration and activity), reproductive age, and infection with Morogoro virus (MORV). Viral RNA can be found in the blood of infected individuals up to 7 days after infection, after which it declines rapidly, but individuals continue to shed virus particles in their excretions for up to around 40 days after infection (Borremans et al. 2015b; Mariën et al. 2017). Still, it is unknown how long these excretions stay infectious in the environment.
Transmission of MORV is mainly horizontal (Borremans et al. 2011) and is believed to occur via exposure to virus particles excreted in feces, urine, and saliva (Borremans et al. 2015b), and thus potentially via direct contacts (e.g., grooming, licking, and mating) or through indirect exposure to virus particles in the environment. Infection appears to be acute, followed by lifelong immunity, although a small proportion of animals seems to become chronically infected (Mariën et al. 2017). We hypothesized that exploration and activity are drivers of MORV transmission in M. natalensis, since the virus can potentially be transmitted via direct and indirect contacts. Mating in M. natalensis is believed to occur via a scramble competition mating system, in which males search competitively for females (Kennis et al. 2008), possibly in combination with a dominance hierarchy. Male reproductive success in this species is correlated with weight, but is also highly heterogeneous, with a relatively small percentage (17-40%) of males recorded as fathering all offspring in a population (Kennis et al. 2008). Furthermore, territoriality is low during the breeding season and both males and females have overlapping home ranges. This means that highly active or exploratory individuals of both sexes are more likely to enter the home ranges of other individuals, which could lead to a higher probability of encountering MORV-infected individuals and excretions. To test whether activity or exploration might play a role in the transmission of MORV in M. natalensis populations, we used field-based measures of activity, in combination with a series of behavioral trials to characterize exploration, and quantified the relationship between each individual's personality traits and their MORV infection status. We hypothesized that exploration and activity would increase exposure to MORV. Specifically, we predicted that MORV-specific antibody prevalence should be higher in more exploratory and active individuals. In addition, we predicted that juveniles would be more exploratory than adults, as they are in greater need of gathering information about their environment (Hughes 1997; Biondi et al. 2013), but that adults would be more active than juveniles, because of their larger home ranges and a potential need to cover a larger area when searching for mates (Kennis et al. 2008).

Study site and species

Mastomys natalensis is the most common indigenous rodent in sub-Saharan Africa and a well-studied agricultural pest species (Leirs et al. 1994). The species' reproductive cycle is strongly related to seasonal rainfall patterns, and populations can reach high densities in habitats where food is abundant (Leirs et al. 1994; Leirs et al. 1997). The analysis of movement patterns during a long-term field study has shown that male home ranges decrease and those of females increase during periods of high resource availability and population density. During these periods, home ranges overlap greatly, indicating a low level of territoriality and reduced spatial activity. Home range sizes of both sexes are similar during the breeding season. We conducted fieldwork on the campus of the Sokoine University of Agriculture (SUA; Morogoro, Tanzania) between 29 July and 18 October 2013 (dry season, the tail end of the breeding period). We trapped animals on 6 grids of 1 ha (100 traps in a 10 × 10 arrangement, 10 m between traps) in agricultural fields. Grids were spaced at least 700 m apart for spatial independence.
Within a trapping session, we implemented capture-mark-recapture trapping for 3 consecutive nights every 2 weeks for each grid, using Sherman LFA live traps (Sherman Live Trap Co., Tallahassee, FL) baited with a mix of peanut butter and maize flour. Traps were set in the evening and checked in the early morning, and captured rodents were transported to the nearby SUA Pest Management Center for behavioral tests and blood sampling (details below). Rodents were released in the evening at their site of capture, after which we rebaited and re-set all traps. We conducted a total of 6 trapping sessions for all grids except 1, for which only 4 sessions were completed. We used toe clipping to uniquely mark individuals at their first capture (Borremans et al. 2015a), and we recorded the weight, sex, and reproductive age (following Leirs et al. 1994) of individuals at each capture. We considered mice to be juvenile if signs of sexual activity could not be observed (scrotal testes in males; perforated vagina or pregnancy in females). In order to minimize any potential effects of stress, we recorded the behavior of each individual (see below for details) before blood sampling and toe clipping. Blood samples were taken from the retro-orbital sinus and preserved on pre-punched filter paper (approximately 15 µL/punch; Serobuvard, LDA 22, Zoopole, France). Saliva was collected by placing a small slip of filter paper into the mouth of the animal for approximately 20 s. If the animal urinated, a urine sample was collected on filter paper. Samples on filter paper were dried and stored in the dark at ambient temperature (<28 °C) for 2 months, after which they were preserved at −20 °C as suggested by Borremans (2014). All experimental procedures were approved by the University of Antwerp Ethical Committee for Animal Experimentation (LA1100135), adhered to EEC Council Directive 2010/63/EU, and followed the Animal Ethics guidelines of the Research Policy of Sokoine University of Agriculture.

Behavioral trials

We conducted behavioral trials in the morning in 75 (L) × 55 (W) × 44 (H) cm semi-translucent arenas, the walls of which were covered with red plastic (Figure 1). We conducted trials under low-level natural daylight, which the mice should have perceived as darkness due to the red plastic sheets coating the walls, and recorded all trials using a digital video camera installed above each arena. Sixteen rectangles (19 × 13 cm) were marked on the floor of the arena to facilitate the automatic extraction of behavioral data (Figure 1). At the start of each trial, we placed a trap containing an individual at one end of the arena, with the trap opening facing the inside of the arena. The behavioral trial started when the trap was manually opened. Each behavioral trial consisted of 2 tests to quantify an individual's exploratory behavior. The first was an open field (OF) test, which measured each individual's reaction to a novel environment (Archer 1973). The OF test assumes that movement within the experimental arena is an index of exploration, as animals move around to investigate their surroundings (Dingemanse et al. 2002). After 5 min, the second test, a novel object (NO) test, began when we introduced a novel object (a blue plastic box) into the arena, on the side opposite the trap opening. In combination, these tests measure an individual's exploration of a novel environment and of a novel object (Réale et al. 2007). NO tests ran for 5 min, after which the animals were removed from the arena.
The experimenter was only present at the start of the OF test, to open the trap, and at the beginning of the NO test, for the introduction of the novel object. To remove scent and dirt, we cleaned the experimental arenas and novel objects after every trial using 70% ethanol. Individuals were released at their point of capture following the completion of all behavioral tests and were held for a maximum of 5 h. Consecutive tests on the same individual were separated by a minimum of 11 days (21 ± 9 days, mean ± SE).

Video analysis

We developed an image processing algorithm in R 3.0.2 (R Core Team 2013) to automatically extract behavioral data from the video files (code available on request): (i) locomotion, measured as the total number of times the animal changed squares, calculated separately for the OF and NO tests (see Figure 1), and (ii) entrance latency, the time (in seconds) an animal took to leave the trap in the OF test, and after the introduction of the NO. If an animal did not leave the trap after 5 min in either test, we recorded 300 s. Animals were not forced to leave the trap, as this would induce fear and/or anxiety behavior instead of exploration (Misslin and Cigrang 1986).

Detection and quantification of MORV RNA and antibodies against MORV

We analyzed blood, saliva, and urine samples at the University of Antwerp for MORV-specific IgG antibodies, using the immunofluorescence assay protocols described in Günther et al.

Statistical analysis

Individual exploratory behavior

We conducted 295 behavioral tests on 122 individuals (N_male = 42, N_female = 80). All individuals were recorded at least twice (N_recorded twice = 82, N_three times = 30, N_four times = 9, N_five times = 1), which allowed us to estimate the repeatability of the behavioral responses measured in the behavioral tests (Réale et al. 2007). We used a principal component analysis (PCA) to reduce the number of behavioral variables from the OF and NO tests, and applied the Kaiser-Guttman criterion (eigenvalue > 1; Kaiser 1991; Peres-Neto et al. 2005) when selecting the number of components to retain. We used a linear mixed model (LMM) with maximum likelihood (Pinheiro and Bates 2000; Crawley 2012) to determine the effect of independent variables on the component (PCA) scores. We used sex (male/female), reproductive age (adult/juvenile), and a binomial variable describing whether it was the first time an individual had been caught and recorded (1 or 2, hereafter referred to as first recording) as fixed effects, together with the 3-way interaction between all the fixed effects. Grid and M. natalensis identity (ID) were included as random effects to correct for repeated-measures effects, and to estimate the between- and within-individual variance required to calculate repeatability (Nakagawa and Schielzeth 2010; Wolak et al. 2012). To find the model that best fit our data, we removed statistically nonsignificant interactions and fixed effects from the model using a backward stepwise procedure (with P = 0.05 as the level for rejecting a fixed effect) implemented in the R package lmerTest (version 2.0; Kuznetsova et al. 2014). We used a likelihood ratio test (LRT) to determine the significance of the random effects, by comparing the final LMM with a linear model (LM) without ID or grid as a random effect; a P-value < 0.05 indicates that a significant amount of variance can be ascribed to between-individual variance (Martin and Réale 2008).
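The analysis above was run in R with lmerTest; the variance decomposition behind the repeatability estimate can be sketched equivalently in Python with statsmodels. The file and column names below are assumptions, and the grid random effect is omitted for brevity.

```python
# Sketch of the repeatability estimate: fit a mixed model with individual ID
# as a random intercept, then take between-individual variance over the total.
# The original analysis used R/lmerTest; this statsmodels version is only
# illustrative, the file and column names are assumptions, and the grid
# random effect is omitted for brevity.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("behaviour_scores.csv")  # hypothetical table of PC1 scores

model = smf.mixedlm("pc1 ~ age + sex + first_recording", df, groups=df["id"])
fit = model.fit(reml=False)  # maximum likelihood, as in the study

between = float(fit.cov_re.iloc[0, 0])  # variance among individuals (ID)
within = fit.scale                      # residual, within-individual variance
print(f"R = {between / (between + within):.2f}")
```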
Although we used the OF and NO tests to quantify exploration behavior, a single exploration value was needed for further analysis in the generalized linear model (GLM). We therefore used the best linear unbiased predictors (BLUPs) from the final LMM to generate a single exploration value (an individual index of personality) per individual. BLUPs provide estimates of the random effects (ID) independent of the other terms within the model, and are standardized to a mean of zero (Kruuk 2004; Martin and Réale 2008). They are less sensitive to extreme values within the data and are a more appropriate estimate of personality type than the mean of all measurements (Pinheiro and Bates 2000).

Trap diversity

We used the live-trapping data, and specifically trap diversity (the total number of unique trap locations in which an individual was trapped), to estimate individual activity in the field (Boyer et al. 2010). To test which factors affected activity, we ran a GLM with a Poisson error distribution, with activity as the dependent variable, and sex, reproductive age, trappability (the total number of times an individual was trapped), and personality type (BLUP) as independent variables, together with a 2-way interaction between sex and reproductive age (Crawley 2012).

MORV infection status

We captured 776 different individuals (from 1,133 captures) on all grids during 108 trapping nights throughout the whole study period. We screened all individuals for MORV antibodies at least once during each recapture session in which they were encountered. All individuals that were recaptured during different trapping sessions (N = 220) were screened for MORV RNA at least twice. More details about the individuals' initial infection state can be found in Mariën et al. (2017). We tested how MORV antibody status and MORV RNA status (binary response for each test: positive or negative) in the full dataset varied as a function of sex, reproductive age, and their interaction, using separate generalized linear mixed models (GLMMs) with binomial error distributions. We included capture grid as a random effect to control for variation in prevalence among the grids. The prevalences of MORV antibodies and MORV RNA, and their 95% confidence intervals, were calculated manually. To test for relationships between MORV infection status (antibodies, binary response) and personality type, we constructed a new GLM (with a binomial error distribution; Crawley 2012) with sex, reproductive age, trap diversity, and personality type (BLUP) as independent variables, using the reduced behavioral dataset. All statistical analyses were executed using R software 3.0.2 (R Development Core Team 2013).

Individual exploratory behavior

The PCA reduced the number of exploratory variables to 2 components with an eigenvalue > 1 (Table 1) which, combined, explained 86.80% of the total variance. The first component (PC1) explained 56.98% of the variance and was positively correlated with locomotion in both the OF and NO tests, and negatively with the latency measurements from both tests. The second component (PC2) explained 29.82% of the total variance. PC2 was positively correlated with locomotion during the NO test, but negatively with locomotion in the OF test (Table 1). We chose to retain only PC1 in our study, as it explained the majority of the variance and because it is strongly correlated with movement during the OF and NO tests, which is an index of exploration (Dingemanse et al. 2002). Individual exploration types (BLUPs) were calculated from this component.
Hereafter, we will refer to PC1 as exploration behavior (PC1), and the individual indices of exploration (BLUPs) will be referred to as personality type. The LMM on exploration behavior (PC1) revealed a significant effect of reproductive age (Table 2), with juveniles being significantly more explorative than adults (coefficient ± SE = 0.564 ± 0.210; t_114 = 2.687, P = 0.008; Figure 2A). There were no differences between the sexes, no effect of recording order (first vs. later recordings), and no significant interaction terms (Table 2). There were no differences in exploration behavior (PC1) between the 6 grids (LRT χ² = 0.00; P = 1), but M. natalensis ID explained a significant proportion of the variance in exploration behavior (LRT χ² = 15.63; P < 0.001), and there were consistent differences in exploration behavior (PC1) through time between individuals, with a repeatability of R = 0.30 (95% confidence interval 0.21-0.36).

Trap diversity

Trap diversity (activity) was significantly positively correlated with the total number of times an individual was caught (trappability; coefficient ± SE = 0.106 ± 0.023, z_119 = 4.596, P < 0.001). Independent of trappability, adult individuals were trapped in significantly more different traps than juveniles were (coefficient ± SE = −0.275 ± 0.122, z_119 = −2.251, P = 0.024; Figure 2B), but there were no significant differences between the sexes (P > 0.8) and no significant interaction between sex and age (P > 0.8). In addition, there was no effect of personality type (BLUP) on trap diversity (P > 0.5), and thus no statistical evidence for a behavioral syndrome between activity and exploration in M. natalensis.

Figure 2. Juveniles are significantly more exploratory than adults, but less active (lower trap diversity); MORV-specific antibody prevalence is significantly higher in adults than in juveniles.

Discussion

It has been hypothesized that consistent individual differences in exploratory behavior may influence parasite or pathogen infection status, but this relationship has been investigated for only a limited range of disease agents (Barber and Dingemanse 2010). In this study, we have provided evidence that M. natalensis expresses consistent individual differences, or personality types, in exploration behavior, with an overall repeatability of 30%. Contrary to our expectations, we found no relationship between individuals' MORV infection status and their exploration or activity level. Exploration is an information-gathering behavior used for purposes such as assessing predation risk and investigating new food resources (Hughes 1997; Tebbich et al. 2009; Reader 2015). As predicted, juveniles were, on average, more exploratory than adults. Such a decline in exploration with age has been found in several other taxa, for example brown rats (Rattus norvegicus; Ray and Hansen 2005), corvids (Miller et al. 2015), and chimango caracaras (Milvago chimango; Biondi et al. 2013), and has been attributed to individuals' need to gather information about their environment early in life (Reader 2015). Alternatively, because exploratory behavior can attract predators (Rödel et al. 2015), highly explorative individuals could be predated before reaching adulthood, and hence adults may behave more carefully than juveniles due to experience (Rödel et al. 2015). It is also possible that juveniles are less efficient at gathering information and must therefore spend more time exploring their environment than adults to acquire the same amount of information (Biondi et al. 2013).
Although adult M. natalensis were less exploratory than juveniles, they were more active in their natural environments (i.e., they visited a greater variety of traps), independent of the number of times they were trapped. These activity patterns in adults possibly stem from the timing of our study during the breeding season. On the one hand, female home ranges increase during this period, presumably to gather more food. Males, on the other hand, are highly active in order to increase their reproductive success in the species' scramble mating competition (Kennis et al. 2008). Nonetheless, we found no statistical evidence for a behavioral syndrome between activity and exploration. The absence of a behavioral syndrome between these 2 traits has been found in other species (Patterson and Schulte-Hostedde 2011; Carter et al. 2013; but see Boyer et al. 2010; Kekäläinen et al. 2014), and supports the results of a meta-analysis showing that the average strength of the correlation between activity and exploration is weak (Garamszegi et al. 2012) and can depend on a range of environmental factors (e.g., predation pressure; Dingemanse et al. 2007). Exploration and activity may have potential fitness costs if they increase individuals' encounter rates with pathogens. Most individuals infected with MORV shed infectious particles acutely in their urine, feces, and saliva up to approximately 40 days after infection (Borremans et al. 2015b), although some individuals might become chronically infected (Mariën et al. 2017). More exploratory or active individuals may therefore have a higher probability of contacting infectious excretions and becoming infected. As antibodies indicate past infection and remain present in the host even after the virus is cleared (Mills et al. 2007; Günther et al. 2009; Borremans et al. 2015a), the higher antibody prevalence that we observed in adult M. natalensis is the result of cumulative opportunities to encounter the virus, as previously observed and discussed by Borremans et al. (2011), and also for other arenaviruses and host species (Demby et al. 2001; Mills et al. 2007). Nevertheless, we found no direct link between MORV infection status and exploration or activity. This may suggest that virus particles shed in the excretions of recently infected individuals are not as infectious as previously thought (e.g., compared with Lassa virus; Fichet-Calvet and Rogers 2009) and that MORV transmission may occur more commonly through direct contact (e.g., social interactions and mating) with infected conspecifics (Borremans et al. 2011). Our lack of significant results may also stem from our low sample size of MORV antibody-positive individuals, resulting in low statistical power. If MORV transmission is strongly linked to direct contact with infected conspecifics rather than to infected environments, then MORV RNA prevalence should increase when social contact between conspecifics increases (Drewe and Perkins 2014). We found that MORV RNA prevalence, a clear indication of recent infection, is significantly higher in adults than in juveniles; similar patterns have been reported for Lassa virus, another arenavirus (Fichet-Calvet et al. 2008). Furthermore, we showed that adults are significantly more active than juveniles, which is likely to increase their probability of encountering infectious individuals (Kennis et al. 2008). Combined, these results suggest that direct contacts between individuals may be important for the transmission of MORV.
If this is the case, there are multiple, non-mutually exclusive behaviors that would be expected to increase transmission of MORV. Aggressive behaviors, for example, increase transmission in other disease systems (e.g., hantavirus in R. norvegicus, Klein et al. 2004, and P. maniculatus, Dizney and Dearing 2013), but seem unlikely in M. natalensis due to its low levels of aggression (Veenstra 1958; Perrin et al. 2001). Alternatively, MORV could be transmitted during mating, as has been found for the Machupo arenavirus (Webb et al. 1975). While transmission during mating may indeed happen for MORV, ongoing transmission in sexually inactive juvenile M. natalensis (Borremans et al. 2011), as indicated by high RNA prevalence, suggests that this is not the major mode of transmission during the study period. This, however, does not preclude the possibility that transmission during mating is the main mode of transmission during the low-density breeding season, when animals are almost exclusively sexually mature. Social contacts and position within the social network may be more important for virus transmission through direct contacts (Godfrey 2013; Drewe and Perkins 2014). Individuals with a large number of contacts, for instance, are expected to play a key role in acquiring and transmitting the virus (Lloyd-Smith et al. 2005; White et al. 2017). Strong heterogeneity in social contacts has indeed been found for M. natalensis (Borremans et al. 2016) but has not yet been linked to infection status. A more detailed study examining the relationships between sociability, social networks, personality, and MORV infection status could provide us with a greater understanding of MORV ecology and transmission dynamics. It has been suggested that different personality types could vary in disease susceptibility and/or transmission (Barber and Dingemanse 2010; Hawley et al. 2011; Barron et al. 2015; Ezenwa et al. 2016). While research has focused on the role of personality in disease transmission in tick (Boyer et al. 2010; Patterson and Schulte-Hostedde 2011; Bajer et al. 2015), trematode (Koprivnikar et al. 2012; Seaman and Briffa 2015), and malarial (Dunn et al. 2011; Garamszegi et al. 2015; Garcia-Longoria et al. 2015) disease systems, more studies are acknowledging the importance of personality in viral models (Natoli et al. 2005; Dizney and Dearing 2013; Araujo et al. 2016). Our study provides the first evidence for the existence of personality types in M. natalensis, a significant pest species in sub-Saharan Africa (Leirs 1995) and a reservoir host and vector for several important zoonotic infections (Frame et al. 1970; Isaäcson 1975; Günther et al. 2009; Katakweba et al. 2012). We found that juveniles were typically more exploratory than adults under laboratory conditions, but also less active in the field. Nevertheless, we found no link between individuals' exploratory behavior or activity and their MORV infection status, which may suggest that environmental transmission of MORV is not as prominent as we hypothesized. Together, our results may indicate that exploration and activity do not increase an individual's likelihood of coming into contact with the virus, suggesting that variation in viral infection between individuals is not affected by between-individual variation in exploration and activity.
2018-11-01T18:46:31.941Z
2017-09-12T00:00:00.000
{ "year": 2019, "sha1": "71708d97b3c96d709e916049a72c6251f7ad8ece", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.5541", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1e816d0adf05d203404e9eab5ce71846a395cbda", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
214641100
pes2o/s2orc
v3-fos-license
Phonon-mediated dimensional crossover in bilayer CrI3 In bilayer CrI3, experimental and theoretical studies suggest that the magnetic order is closely related to the layer stacking configuration. In this work, we study the effect of dynamical lattice distortions, induced by non-linear phonon coupling, on the magnetic order of the bilayer system. We use density functional theory to determine the phonon properties and group theory to obtain the allowed phonon-phonon interactions. We find that the bilayer structure possesses low-frequency Raman modes that can be non-linearly activated upon the coherent photo-excitation of a suitable infrared phonon mode. This transient lattice modification, in turn, inverts the sign of the interlayer spin interaction for parameters accessible in experiments, indicating a low-frequency light-induced antiferromagnet-to-ferromagnet transition. The control of ordered states of matter such as magnetism, superconductivity, or charge and spin density waves is one of the most sought-after effects in the field. In equilibrium, this can be achieved by turning the knobs provided by temperature, strain, pressure, or chemical composition. However, the nature of these methods limits the possibility to integrate the materials into devices for technological applications due to undesirably slow control and non-reversibility. In recent years, a new approach has emerged which allows in-situ manipulation: driving systems out of equilibrium by irradiating them with light. Recent experiments have demonstrated the existence of Floquet states in topological insulators [31,32], the possibility to transiently enhance superconductivity [33-35], the existence of light-induced anomalous Hall states in graphene [36], light-induced metastable charge-density-wave states in 1T-TaS2 [37], optical-pulse-induced metastable metallic phases hidden in charge-ordered insulating phases [38,39], and metastable ferroelectric phases in titanates [40]. Finding suitable platforms to realize non-equilibrium transitions represents the first main challenge. Recently, interest in the van der Waals bulk ferromagnet chromium triiodide (CrI3) [41,42] has been renewed with the discovery that it is stable in its monolayer form, where the chromium atoms arrange in a hexagonal lattice and the iodine atoms order in an edge-sharing octahedral cage around each chromium atom, as shown in Fig. 1(a-b). Monolayer CrI3 presents out-of-plane magnetization stabilized by anisotropies [43] and a Curie temperature T ~ 45 K [44]. The origin of the anisotropies is still a subject of intense theoretical and experimental investigations [45-47]. In bulk form, CrI3 exhibits a structural phase transition near T = 210-220 K. This structural transition is accompanied by an anomaly in the magnetic susceptibility, but no magnetic ordering [42]. At T = 61 K, CrI3 exhibits a transition from paramagnet to ferromagnet [42], with an easy axis perpendicular to the 2D planes. There is evidence that suggests CrI3 is a Mott insulator with a band gap close to 1.2 eV [41,42]. Recent experiments have measured very large tunneling magnetoresistance [48,49], suggesting potential applications in spintronics devices. Experiments have determined that bilayer CrI3 (b-CrI3) presents an antiferromagnetic (AFM) ground state [44,49-52], with monoclinic crystal structure (see Fig. 1(c-d)).
Single-spin microscopy [53] and polarization-resolved Raman spectroscopy [54] measurements have established a strong connection between the magnetic order and the stacking configuration in few-layer CrI3. Furthermore, it has been shown that the magnetic order can be controlled in equilibrium by doping [55] and applying pressure [56] in b-CrI3 samples. These experimental results have been followed by theoretical studies investigating the origin of the AFM order and its link to the lattice configuration [57-59] and the mechanism behind the AFM-FM transition in doped b-CrI3 [60]. In this Letter, we leverage these theoretical and experimental results, and consider the possibility to dynamically tune the magnetic order in b-CrI3 using low-frequency light to coherently drive suitable phonon modes. We start with a group theory analysis to determine the feasibility of the non-linear phonon process required. Guided by these results, we perform first-principles calculations to find phonon frequencies, eigenmodes, and non-linear phonon coupling strengths. We then analyze the equations of motion for the driven phonons and their impact on the lattice structure. Finally, we determine the effect of such transient lattice deformations on the magnetic order and find the possibility to induce a sign change in the interlayer exchange interaction using experimentally accessible parameters. Group theory analysis. Recent first-principles studies indicate that there is a direct relation between the magnetic ground state and the relative stacking order between the layers [57,59,62]. The FM phase presents an AB stacking with space group R-3 (point group S6), while the AFM ground state is accompanied by an AB' stacking with space group C2/m and point group C2h. In Fig. 1(d), we show the AB' stacking configuration (omitting the I atoms for clarity), which corresponds to an AA stacking shifted by 1/3 in lattice vector units. In bulk CrI3, the high-temperature phase belongs to the space group C2/m, while the low-temperature phase to the space group R-3. Both structures are related by a relative shift of the layers, leaving each individual layer unaltered. Since experiments find an AFM order in the ground state [44], in our analysis we assume the configuration corresponding to the C2h point group. The conventional unit cell is indicated by the solid black lines in Fig. 1(d), which contains 8 Cr atoms and 24 I atoms. For our calculations, we work with the primitive unit cell, which contains N = 16 atoms. The unit cell transformation and the C2h point group character table are listed in the Supplemental Material [63]. The total number of phonon modes is then 3N = 48. We have 3 acoustic modes and 3N - 3 = 45 optical modes. We obtain that the equivalence representation is given by Γ_equiv = 5Ag ⊕ 3Bg ⊕ 3Au ⊕ 5Bu. In the C2h point group, the representation of the vector is Γ_vec = Au ⊕ 2Bu, which leads to the lattice vibration representation Γ_latt.vib. = Γ_equiv ⊗ Γ_vec = 13Ag ⊕ 11Bg ⊕ 11Au ⊕ 13Bu. From the symmetry of the generating functions (see the character table in the Supplemental Material [63]), 24 modes are Raman active (13 with the totally symmetric Ag representation and 11 with the Bg representation) and 24 modes are infrared active [64]. As we have discussed, based on DFT studies [57,58], the magnetic ground state is correlated with the relative position of the stacked layers. Therefore, we posit that a Raman mode involving a relative shift between the layers might influence the magnetic order.
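As a cross-check of this counting, the minimal sketch below applies the standard reduction formula n(Γ) = (1/h) Σ_c N_c χ_Γ(c)* χ(c) with the textbook C2h character table; the only inputs are the Γ_equiv quoted above and the vector representation. This is illustrative code, not part of the original calculation.

```python
# Hedged sketch: verifying the C2h lattice-vibration decomposition.
# Class order: E, C2, i, sigma_h (one element each, group order h = 4).
import numpy as np

chars = {  # standard C2h irreducible-representation characters
    "Ag": np.array([1,  1,  1,  1]),
    "Bg": np.array([1, -1,  1, -1]),
    "Au": np.array([1,  1, -1, -1]),
    "Bu": np.array([1, -1, -1,  1]),
}

# Gamma_equiv = 5Ag + 3Bg + 3Au + 5Bu (16 atoms, from the text)
chi_equiv = 5*chars["Ag"] + 3*chars["Bg"] + 3*chars["Au"] + 5*chars["Bu"]
# Vector representation in C2h: z ~ Au, (x, y) ~ Bu  ->  Au + 2Bu
chi_vec = chars["Au"] + 2*chars["Bu"]

chi_latt = chi_equiv * chi_vec            # direct product = pointwise product
for name, chi in chars.items():
    n = int(round(np.dot(chi, chi_latt) / 4))
    print(name, n)                        # expect Ag:13, Bg:11, Au:11, Bu:13
```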
To test whether such a mode is allowed by symmetry, we construct the projection operators [65,66]

P^(Γn) = (l_n/h) Σ_α [D^(Γn)(C_α)*]_kl P(C_α),

where D^(Γn)_kl(C_α) is the irreducible matrix representation of element C_α, h is the order of the group, and l_n is the dimension of the irreducible representation. Finally, P(C_α) are 3N × 3N matrices that form the displacement representation. Applying the projection operators P^(Ag) and P^(Bg) to random displacements of the atoms, we find that modes with one layer uniformly displaced in the [110] direction while the other in the opposite in-plane direction are allowed by symmetry and belong to the totally symmetric Ag representation (Fig. 2(c)). Similarly, modes where one layer is displaced in the [001] direction and the other one in the opposite out-of-plane direction belong to the Ag representation (see Fig. 2(b)). On the other hand, counter-displacements of the layers along the perpendicular in-plane direction belong to the Bg representation (see Fig. 2(d)). We will show that these Raman modes can be effectively manipulated via indirect coupling with light to control the magnetic order. Phonon frequencies. Once we determine that these modes are allowed by symmetry, we calculate the phonon frequencies using density functional perturbation theory (DFPT) and finite-difference methods as implemented in QUANTUM ESPRESSO [67,68] and VASP [69,70], respectively. We find excellent agreement among all the approaches considered. The details of the calculations are shown in the Supplemental Material [63]. In Fig. 2(a), we plot the full set of frequencies of the Γ-point phonons. We find that the three low-frequency modes (apart from the three omitted zero-frequency acoustic modes) are Raman active, and correspond to relative displacements between the layers in different directions, in agreement with the group theory results. The lowest-frequency mode, Ω = 0.460 THz, belongs to the Ag representation, and the real-space displacement is shown in Fig. 2(c). The next phonon mode is very close in frequency, Ω = 0.467 THz; however, it belongs to the Bg representation (Fig. 2(d)). The mode with frequency Ω = 0.959 THz belongs to the Ag representation and corresponds to a relative displacement perpendicular to the layers, as shown in Fig. 2(b). Non-linear phonon processes have been proposed for transient modification of the symmetries of the system, which can be accompanied by changes in the ground-state properties [3,4,7-9,12-15]. Now, we derive the non-linear phonon potential resulting from coupling between infrared (Q_IR) and Raman (Q_R) active modes in b-CrI3. In an invariant polynomial under the operations of a given group, coupling between two modes is allowed only if it contains the totally symmetric representation [65,66]. We consider the coherent light-induced excitation of a non-degenerate infrared active mode with representation Bu. In principle, this IR mode is allowed to couple non-linearly to all Ag and Bg Raman modes in the C2h point group. However, the low frequency of the soft modes involving relative motion between the layers, compared with the rest of the Raman phonon modes, allows us to focus on these modes, as we will show. Up to cubic order, the non-linear potential including the three low-frequency phonon modes is given by

V[Q_IR, Q_R] = (Ω_IR²/2) Q_IR² + Σ_i (Ω_R,i²/2) Q_R,i² + Σ_i γ_i Q_IR² Q_R,i,    (1)

where the cubic coefficients γ_i vanish by symmetry for the Bg mode. The numerical values of the coefficients are obtained using first-principles calculations. In the Supplemental Material [63] we outline the procedure we used following Ref. [4]; there, we plot the energy surfaces obtained by varying the corresponding phonon mode amplitudes and display the numerical values of the coefficients obtained by fitting Eq. (1).
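The projection-operator idea can be demonstrated on a stripped-down model. In the sketch below, each layer is reduced to a single rigid in-plane shift coordinate, and the 2x2 matrices encoding how the C2h elements act on (u_top, u_bottom) are assumed toy actions, not matrices derived from the actual b-CrI3 structure; the point is only how projection selects the Ag versus Bg interlayer shear.

```python
# Hedged sketch of P^(G) = (1/h) sum_g chi_G(g) * P(g) on a toy 2-coordinate space.
import numpy as np

I2 = np.eye(2)
swap = np.array([[0., 1.], [1., 0.]])      # exchange the two layers
swap_flip = -swap                          # exchange layers and flip sign

# Assumed toy actions of {E, C2, i, sigma_h} on (u_top, u_bottom):
ops_perp = {"E": I2, "C2": swap_flip, "i": swap_flip, "sh": I2}   # u perpendicular to C2 axis
ops_par  = {"E": I2, "C2": swap,      "i": swap_flip, "sh": -I2}  # u parallel to C2 axis

chi = {"Ag": {"E": 1, "C2": 1,  "i": 1, "sh": 1},
       "Bg": {"E": 1, "C2": -1, "i": 1, "sh": -1}}

zeta = np.array([0.7, -0.2])               # arbitrary starting displacement
for label, ops in (("perp", ops_perp), ("par", ops_par)):
    for irrep in ("Ag", "Bg"):
        P = sum(chi[irrep][g] * M for g, M in ops.items()) / len(ops)
        print(label, irrep, np.round(P @ zeta, 3))
# Expected: the counter-displaced (u, -u) shear survives as Ag in one in-plane
# direction and as Bg in the perpendicular one; the other projections vanish.
```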
Under an external drive with frequency Ω, the potential acquires the time-dependent term [71,72]

V_drive(t) = -Z* E_0 F(t) cos(Ωt) Q_IR,

where E_0 is the electric field amplitude, and Z* is the mode effective charge vector [71,73]. F(t) = exp{-t²/(2τ²)} is the Gaussian laser profile, with variance τ². Assuming that damping can be neglected, the general differential equations governing the dynamics of one infrared mode coupled to m Raman modes are obtained from the relations Q̈_IR = -∂V/∂Q_IR and Q̈_R^(i) = -∂V/∂Q_R^(i), which correspond to a set of m + 1 coupled differential equations that we solve numerically in the general case. The index i = 1, ..., m runs over the Raman modes. In the absence of coupling with the Raman modes, the IR mode dynamics are described by

Q̈_IR + Ω_IR² Q_IR = Z* E_0 F(t) cos(Ωt).

In the resonant case Ω = Ω_IR and in the impulsive limit, Q_IR(t) ∝ Z* E_0 sin(Ω_IR t) [4]. The amplitude of the excited IR mode scales linearly with the electric field and the mode effective charge. Now we add coupling with one Ag Raman mode. The potential in Eq. (1) then contains the single cubic term γ Q_IR² Q_R. The cubic term γ is responsible for ionic Raman scattering (IRS) [72,74]. Within this mechanism, the infrared active mode is used to drive Raman scattering processes through anharmonic terms in the potential, and leads to coherent oscillations around a new displaced equilibrium position. Theoretical works have also proposed this cubic non-linear coupling mechanism to tune magnetic order in RTiO3 [16,17], to investigate light-induced dynamical symmetry breaking [4], and to modulate the structure of YBa2Cu3O6+x and related effects in the magnetic order [18]. On the experimental side, the response of YBa2Cu3O6+x to optical pulses has been investigated [75], and experimental detection of possible light-induced superconductivity has been reported [35]. From the equilibrium condition ∂V[Q_IR, Q_R]/∂Q_R = 0, we find that the potential is minimized when [7] Q_R = -γ Q_IR²/Ω_R². Therefore, we obtain larger displaced equilibrium positions for low-frequency Raman modes. This argument allows us to limit our discussion to the three low-frequency Raman modes shown in Fig. 2(b-d). Considering the cubic term as a perturbation, we find that the time dependence of the Raman mode is given by the driven-oscillator response

Q_R(t) = -(γ/Ω_R) ∫_0^t dt' sin[Ω_R(t - t')] Q_IR²(t').

In the resonant limit Ω_R = 2Ω_IR, the 2Ω_IR component of Q_IR² drives the Raman mode resonantly and its amplitude grows with the pulse duration. Additional constraints for the IR mode selection arise from current experimental capabilities for strong THz pulse generation. Strong fields of up to 100 MV cm⁻¹ have been achieved in the literature in the range 15-50 THz [76,77]. Now we investigate the non-linear dynamics of the three Raman phonon modes of interest. We consider laser pulses incident along the z- and y-directions, as shown in the insets of Fig. 3(c) and (d). The IR mode with frequency Ω_IR = 7.19 THz (see Fig. 2(a), top grey dot) is parallel to the z-direction with a Born effective charge Z* four times larger than any other IR mode parallel to the z-direction with lower frequency. On the other hand, the IR mode with frequency Ω_IR = 6.104 THz is parallel to the y-direction with a Born effective charge approximately two orders of magnitude larger than any other mode with non-zero overlap with the laser incident in the y-direction. Therefore, considering only one IR mode is justified in our case. The numerical solutions for Q_IR and Q_R^(i) are shown in Fig. 3(a-b) for an experimentally accessible electric field E_0 = 4 MV/cm, with τ = 0.2 ps, incident in the z-direction.
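The coupled equations of motion above are straightforward to integrate numerically. The sketch below does so for one IR mode and one Ag Raman mode; the coupling constant, effective charge, and field values are illustrative placeholders, not the fitted first-principles coefficients, and the equations follow the cubic-coupling form reconstructed in the text.

```python
# Hedged sketch: driven IR mode coupled to one Raman mode,
#   Qdd_IR + W_IR^2 Q_IR + 2 g Q_IR Q_R = Z* E(t)
#   Qdd_R  + W_R^2  Q_R  + g Q_IR^2     = 0
import numpy as np
from scipy.integrate import solve_ivp

W_IR, W_R = 2*np.pi*7.19, 2*np.pi*0.46   # THz -> rad/ps
g, Zeff, E0, tau = 0.5, 1.0, 4.0, 0.2    # toy units (placeholders)

def drive(t):
    return Zeff * E0 * np.exp(-t**2 / (2*tau**2)) * np.cos(W_IR * t)

def rhs(t, y):
    q_ir, v_ir, q_r, v_r = y
    return [v_ir,
            -W_IR**2 * q_ir - 2*g*q_ir*q_r + drive(t),
            v_r,
            -W_R**2 * q_r - g*q_ir**2]

sol = solve_ivp(rhs, (-2.0, 20.0), [0, 0, 0, 0], max_step=1e-3)
q_r = sol.y[2]
print("time-averaged Raman displacement:", q_r[sol.t > 5].mean())
# Expectation: Q_R oscillates around -g*<Q_IR^2>/W_R^2, a displaced minimum.
```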
The totally symmetric Ag modes oscillate around new displaced equilibrium positions, while the Bg mode does not develop such a displacement, owing to the absence of non-linear coupling with the infrared mode Q_IR up to cubic order. In Fig. 3(c) ((d)), we plot the averaged displacements as a function of electric field amplitude for a laser incident in the z-direction (y-direction). Assuming an electric field amplitude E_0 = 3.5 MV/cm incident in the y-direction, we obtain an averaged Raman amplitude (in Å√amu) which corresponds to a real-space displacement of 2.26% of the in-plane lattice parameter a = b = 6.85 Å. Next, we will show that the relative displacements accessible using non-linear phonon processes are large enough to induce a change in the exchange interactions. Effective spin interaction. Recently, a combined study employing group theory and ferromagnetic resonance measurements [46] proposed that CrI3 is described by the Heisenberg-Kitaev Hamiltonian H = H_intra + H_inter, where the intralayer sector is given by [78,79]

H_intra = Σ_⟨ij⟩ [ J s_i · s_j + K s_i^γ s_j^γ + Γ (s_i^α s_j^β + s_i^β s_j^α) ],

where the first term corresponds to the isotropic Heisenberg interaction, the second term is the Kitaev interaction, which introduces a bond-dependent anisotropy [78], and the Γ term corresponds to off-diagonal exchange [79]. In Ref. [46], the intralayer interaction constants for bulk CrI3 were determined experimentally to be J = -0.2 meV, K = -5.2 meV, and Γ = -67.5 µeV. The Heisenberg-Kitaev Hamiltonian H_intra has been extensively studied. The phase diagram has been determined [79], the spin-wave spectrum has been shown to carry nontrivial Chern numbers [80], and the magnon contribution to thermal conductivity has been determined [81]. The interlayer Hamiltonian has been assumed to be dominated by nearest-neighbor Heisenberg interactions, H_inter = Σ_⟨ij⟩∈int. J_⊥ s_i · s_j, with J_⊥ = 0.03 meV in Ref. [46] and J_⊥ = 0.59 meV in Ref. [45], as extracted from ferromagnetic resonance and inelastic neutron scattering measurements in bulk CrI3, respectively. Although both experiments propose different intralayer spin models, both find that the interlayer energy scale is much smaller than the intralayer scales. We map the interlayer Hamiltonian onto a Heisenberg model of the form H_inter = (1/2) Σ_ij∈int. J_ij s_i · s_j, and determine J_ij from first principles (generalized gradient approximation with Hubbard U = 1 eV, fixed to reproduce the b-CrI3 critical temperature T_C = 45 K) using a Green's function approach and the magnetic force theorem (for a detailed explanation of the method, see Ref. [82]). The coupling between the spins and the phonons enters through the interatomic-distance dependence of the exchange constants [83]. Under a lattice deformation, and for small deviations from the equilibrium position, the exchange interaction is given by

J(t) = J_0 + δJ δ̂ · u(t),

where J_0 corresponds to the equilibrium interaction, δJ is the strength of the first-order correction in the direction δ̂, and u(t) is the real-space phonon displacement. Given that the infrared phonon frequencies we propose to use (Ω_IR ≈ 7.19, 6.1 THz) are much larger than the relevant spin interactions (≲ 1 meV), to leading order, Floquet theory indicates that the effective interlayer exchange interaction becomes J_eff = J_0 + δJ δ̂ · u_R, where u_R is the time-averaged Raman mode displacement. Therefore, in order to determine the effect of the non-linear phonon displacements, we compute the effective exchange interactions in b-CrI3 for layers displaced with respect to each other in the direction of the relevant Raman modes. For displacement amplitudes (in Å√amu) within reach of those induced by laser irradiation, two of the interlayer exchange interactions change sign.
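The sign-flip condition follows directly from the linearized form J_eff = J_0 + δJ δ̂·u_R. The sketch below evaluates it; J_0 and δJ are placeholder numbers (the paper extracts the actual values from DFT with the magnetic force theorem).

```python
# Hedged sketch of the Floquet-averaged exchange J_eff = J0 + dJ * u_R.
J0 = 0.03        # meV, equilibrium interlayer coupling (placeholder, AFM-like)
dJ = -0.5        # meV per unit displacement along delta-hat (placeholder)

def j_eff(u_avg):
    """Effective interlayer exchange for a time-averaged displacement u_avg."""
    return J0 + dJ * u_avg

u_flip = -J0 / dJ
print(f"J_eff changes sign at u_R = {u_flip:.3f} (units set by 1/dJ)")
for u in (0.0, 0.03, 0.08):
    print(u, j_eff(u))
```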
Therefore, non-linear phononics offers a viable scheme to tune the exchange interactions in strongly correlated systems such as b-CrI3. Conclusions. In this work, we studied b-CrI3 driven with low-frequency light pulses. We found that coherently driving an infrared mode can activate Raman modes involving relative displacements between the layers, which oscillate around new shifted equilibrium positions. The transient lattice distortions affect the exchange interactions and can lead to a change in sign of the interlayer interaction for a range of parameters accessible in experimental setups, offering the opportunity to change the magnetic order in a system via a drive with minimal heating effects. Similar results should be possible for other layered materials with weak interlayer bonds. In this Supplemental Material, we discuss additional group theory aspects of bilayer and monolayer CrI3. Also, we describe in detail the first-principles calculations of the phonon frequencies and the non-linear phonon couplings. Γ-POINT PHONON FREQUENCIES AND MACROSCOPIC DIELECTRIC TENSOR In this section we list all the phonon frequencies at the Γ point, the macroscopic dielectric tensor, and the Born charges. We use five different approaches, which show good agreement among all of them. Table III shows converged phonon frequencies at the Γ point of the CrI3 bilayer with the C2/m space group. The different approaches we employ are described below: qe1: QUANTUM ESPRESSO [67,68] calculation using density functional perturbation theory (DFPT). GGA-PAW potentials were employed with Ecut = 55 Ry and Ecutrho = 490 Ry. A k-grid sampling of 12x12x1 and a Van der Waals correction of type grimme-d2 were used. In another approach, LDA-PAW potentials were employed with ENCUT = 600 eV, a k-grid sampling of 12x12x1, and no VdW correction. Born effective charges. The effective charge tensors Z*_κ,ij (in units of the electron electric charge e and in cartesian axes) of atom κ are listed below. The Born effective charge of mode α is defined as [71,73]

Z*_α,i = Σ_κ,j Z*_κ,ij e_α,κ,j / √m_κ,

where α labels the phonon mode, i the direction in cartesian coordinates, κ labels the atoms in the unit cell, m_κ is the mass of atom κ, and e_α,κ,j corresponds to the dynamical matrix eigenvector α, atom κ, in the direction j, normalized as Σ_κ,j (e_α,κ,j)* e_β,κ,j = δ_αβ. NON-LINEAR COEFFICIENTS In this section, we calculate the non-linear coefficients for the energy potential shown in the main text (Eq. (1)). To determine these coefficients, we follow the procedure described in Ref. [4]. The displacement of atom κ in the unit cell, direction j, in terms of the normal mode amplitude Q_α, is given by

u_κ,j = Σ_α (e_α,κ,j / √m_κ) Q_α,

where m_κ is the mass of atom κ, and e_α,κ,j is a dynamical matrix eigenvector normalized as Σ_κ,j (e_α,κ,j)* e_β,κ,j = δ_αβ. The coefficients are listed in Table VI.
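The two sums above (mode effective charge and normal-mode displacement) are simple contractions over atoms and directions. The sketch below evaluates the mode effective charge with random placeholder Born-charge tensors and eigenvectors; the real inputs would come from DFPT.

```python
# Hedged sketch of Z*_alpha,i = sum_{kappa,j} Z[kappa,i,j] e[alpha,kappa,j] / sqrt(m_kappa).
import numpy as np

rng = np.random.default_rng(2)
n_atoms, n_modes = 4, 12
Z = rng.normal(size=(n_atoms, 3, 3))           # Born effective charges (toy)
eigvec = rng.normal(size=(n_modes, n_atoms, 3))
eigvec /= np.linalg.norm(eigvec.reshape(n_modes, -1), axis=1)[:, None, None]
masses = rng.uniform(50, 130, size=n_atoms)    # atomic masses in amu (toy)

mode_charge = np.einsum("kij,akj,k->ai", Z, eigvec, 1/np.sqrt(masses))
print(mode_charge.shape)   # (n_modes, 3): one effective-charge vector per mode
```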
Now we perform a group theory analysis on monolayer and bilayer CrI3. Our goal is to determine the properties of the Γ-point phonons such as irreducible representations, lattice displacements, Raman and infrared activity, and non-linear phonon coupling. For this, we employ GTPack [66,84], ISOTROPY [85], and the Bilbao Crystallographic Server [64]. Infrared and Raman active modes. Ab-initio studies of the Raman spectrum of monolayer CrI3 have postulated that the space group is R-3m (No. 166) [86]. However, more recent Raman experiments have identified the structure to belong to the P-31m (D_3d^1) double space group. In Fig. 6 we show a diagram with the point group operations identified in the monolayer CrI3 lattice. The point group is found to be D3d. The character table for the point group D3d is shown in panel (b) [65]. We start by determining the infrared and Raman active modes in this system. For this, we first calculate the equivalence representation Γ_equiv, keeping in mind that there are 8 atoms per unit cell, two Cr atoms and six I atoms. Γ_equiv is given in Table VII:

          E    2C3   3C2    i    2S6   3σd
Γ_equiv   8     2     2     0     0     2

Using the decomposition theorem [65] we find Γ_equiv = 2A1g ⊕ Eg ⊕ A1u ⊕ A2u ⊕ Eu. In this point group, the representation of the vector is Γ_vec = Eu ⊕ A2u. Then, the representation of the lattice vibrations is Γ_latt.vib. = Γ_equiv ⊗ Γ_vec = 2A1g ⊕ 2A2g ⊕ 4Eg ⊕ A1u ⊕ 3A2u ⊕ 4Eu. From the character table, we can conclude that monolayer CrI3 has six Raman active modes, with representations A1g and Eg. Two frequencies are non-degenerate and four are doubly-degenerate. The two A1g modes correspond to "breathing" modes, where the lattice expands and contracts preserving all the symmetries. One of them is in-plane and the other one is out-of-plane. Modes that transform as A2u and Eu are infrared active. For reference, the D3d character table reads:

D3d   E    2C3   3C2    i    2S6   3σd   linear     quadratic
A1g   1     1     1     1     1     1               x²+y², z²
A2g   1     1    -1     1     1    -1    Iz
Eg    2    -1     0     2    -1     0    (Ix, Iy)   (x²-y², xy), (yz, zx)
A1u   1     1     1    -1    -1    -1
A2u   1     1    -1    -1    -1     1    z
Eu    2    -1     0    -2     1     0    (x, y)

These results can be corroborated with the Bilbao Crystallographic Server. The vibration eigenvectors for a given representation Γ can be obtained using projection operators P^(Γ) in the displacement representation as [65] Q_Γ = P^(Γ) ζ, where ζ = (x_1, y_1, z_1, ..., x_N, y_N, z_N) is an arbitrary vector of dimension 3N, and N is the number of atoms in the unit cell. Fig. 7 shows the monolayer CrI3 vibrational modes, projected onto the xy-plane for clarity. BILAYER CHROMIUM TRIIODIDE GROUP THEORY ASPECTS. In this section, we show explicitly the character table for the relevant point group, and the transformation from the conventional to the primitive unit cell. The character table for the point group C2h is

C2h   E    C2     i    σh
Ag    1     1     1     1
Bg    1    -1     1    -1
Au    1     1    -1    -1
Bu    1    -1    -1     1
2020-03-26T01:00:59.555Z
2020-03-24T00:00:00.000
{ "year": 2020, "sha1": "4da074aab15b5561073206e168e0f9e80969a7fa", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2003.11158", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b86584567192eff5c7f37e243ea2a874dd3260c3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
62896586
pes2o/s2orc
v3-fos-license
Application of artificial neural networks to estimate soil organic carbon in a high-organic-matter Mollisol Soil organic carbon (SOC) has a key role in the global carbon (C) cycle. The complex relationships among the components of the C cycle make the modelling of SOC variation difficult. Artificial neural networks (ANN) are models capable of determining interrelationships based on available information. The objective was to develop and evaluate models based on the ANN technique to estimate the SOC in Mollisols of the Southeastern Buenos Aires Province, Argentina (SEBA). Data from three long-term experiments were used. Management and meteorological variables were selected as input. Management information included numerical variables (initial SOC (SOCI), number of years from the beginning of the experiment (Year), proportion of soybean in the crop sequence (Prop soybean), crop yields (Yield), and proportion of cropping in the crop rotation (Prop agri)) and categorical variables (Crop, Tillage). In addition, two meteorological inputs (minimum (Tmin) and mean (Tmed) air temperature) were selected. The ANNs were adequate to estimate SOC in the upper 0.20 m of Mollisols of the SEBA. The model with the best performance included six management variables (SOCI, Year, Prop soybean, Tillage, Yield, Prop agri) and one meteorological variable (Tmin), all of them easily available and with a low level of uncertainty. Soil organic C changes related to soil use in the SEBA could be satisfactorily estimated using an ANN developed with simple and easily available input variables. The artificial neural network technique appears as a valuable tool to develop robust models to help predict SOC changes. Introduction Soil organic carbon (SOC) is both a source and a sink of atmospheric carbon dioxide and plays a key role in the global carbon (C) cycle. Besides, its content impacts soil nutrient supply and soil water storage capacity and, therefore, crop yields. In addition, it is one of the soil components most sensitive to land use (Quiroga and Studdert 2015). However, the relationships among the components of the C cycle and the factors that determine their fluxes are very complex and, therefore, difficult to study and predict (Parton et al. 1987;Smith et al. 1997).
Empirical and stochastic models have been developed to describe complex interactions (Parton et al. 1987;Hansen et al. 1991;Franko et al. 1995;Liang et al. 2008;Kemanian and Stöckle 2010). However, their results tend to be over-simplified since they cannot take into account all the critical factors and non-linear relationships that influence C dynamics. On the other hand, some models are complex and/or require very detailed information that is not usually available or is difficult to estimate (e.g. the Century model) (Levine and Kimes 1998), which makes them unfeasible for generalized use. Some researchers have appealed to the artificial neural network (ANN) technique to overcome some limitations of other modeling techniques. Artificial neural networks allow describing complex interrelationships based on simple available information. The technique has been applied to estimate either properties or processes that define soil status variables and, among them, to characterize SOC dynamics in different environments (Levine and Kimes 1998;Ingleby and Crowe 2001;Somaratne et al. 2005). In Argentina, some estimations of SOC in soils of the Pampas and Chaco were satisfactorily performed (Álvarez 2008;Álvarez et al. 2009, 2012). Despite the high and stable SOC content of the soils of the Southeastern Buenos Aires Province, Argentina (SEBA), the progressive increase of cropping in the last decades has led to a sharp SOC loss (Sainz Rozas et al. 2011;Reussi Calvo et al. 2014). The sustainable use of these soils requires knowledge of the impact of management practices on SOC dynamics, so as to be able to use the soil while preserving its health. Some simulation models have been locally calibrated and validated with acceptable results (Studdert et al. 2011;Moreno et al. 2016), but they were not developed for the SEBA conditions. On the other hand, some preliminary attempts were made to estimate and interpret the variation of SOC in soils of the SEBA under conventional tillage using ANN, with promising results (Moreno et al. 2014a, 2014b). We hypothesized that ANN models developed using available local information will satisfactorily estimate SOC changes in loam, high-organic-matter-content soils under different cropping systems. The objective of this work was to develop and evaluate ANN models to estimate SOC content changes in soils of the SEBA. Experimental site Data from three long-term soil management experiments carried out in the experimental field of the Unidad Integrada Balcarce, Balcarce, Buenos Aires Province, Argentina (37º 45' S, 58º 18' W, 138 m above sea level) between 1976 and 2012 were used. The experiments were set on a soil complex of Typic Argiudoll (Soil Survey Staff 2014) (Mar del Plata series (INTA 1979)) and Petrocalcic Argiudoll (Soil Survey Staff 2014) (Balcarce series, with petrocalcic horizon below 0.7 m depth (INTA 1979)). Clay, silt, sand and soil organic matter concentrations of the soil complex surface layer (0-20 cm depth) are 232, 343, 425, and 63.0 g kg⁻¹, respectively, and the texture class is loam (INTA 1979). Cation exchange capacity, base saturation and pH are 24.0 cmolc kg⁻¹, 74.1% and 6.1, respectively. Bulk density varies between 1.1 and 1.25 Mg m⁻³. The slope is less than 2% and, therefore, soil water erosion was considered negligible. Climate is mesothermal sub-humid to humid (according to Thornthwaite) or temperate-humid without a dry season (according to Köppen).
The median annual rainfall is 939 mm yr⁻¹ and the annual mean daily temperature is 13.9 °C (Agri-Weather Station, Unidad Integrada Balcarce, located ~1000 m away from the experiments). Experiment description Information from three long-term experiments carried out with a randomized complete block design and a split-plot treatment arrangement was used: 1) "Continuous Cropping": carried out between 1984 and 1995 with 16 crop sequences including wheat (Triticum aestivum L.), soybean (Glycine max (L.) Merr.), maize (Zea mays L.), and sunflower (Helianthus annuus L.) under conventional tillage (CT, moldboard plow, disk harrow, and field cultivator) and with and without N (WN and WON, respectively). This experiment is more thoroughly described in Studdert and Echeverría (2000). 2) "Crop-pasture Rotations": carried out between 1976 and 2006 with different combinations of periods under cropping (wheat, soybean, maize, sunflower, potato (Solanum tuberosum L.), and oat (Avena sativa L.) and vetch (Vicia sativa L.) or red clover (Trifolium pratense L.) as green manures) with and without N (WN and WON, respectively), and periods under grass-based pastures. Between 1976 and 1993 the tillage system was CT, and between 1994 and 2006 both CT and no-tillage (NT) were used. More information about this experiment between 1976 and 1993 can be found in Studdert et al. (1997). The phase between 1994 and 2003 has been described in Eiza et al. (2005). Between 2004 and 2006, treatments and tillage systems were the same as described by Eiza et al. (2005). 3) "Tillage systems": carried out from 1997 with the sequence maize, sunflower, wheat, under two tillage systems (CT and NT) and with and without N (WN and WON, respectively). More information about this experiment can be found in Diovisalvi et al. (2008). Soil organic C concentration at 0-0.20 m depth in the fall of most of the years of each experiment (Moreno et al. 2016) had been determined through wet combustion with maintenance of the reaction temperature (120 °C) for 90 min (a variant of the Walkley-Black method, Schlichting et al. 1995). Concentration of SOC was converted into stock (Mg C ha⁻¹) using bulk density determined or estimated as described by Studdert et al. (2011). Furthermore, crop productivity data were available as grain yield at commercial humidity content (14.0% for wheat, 14.5% for maize, 13.5% for soybean and 11.0% for sunflower), as tuber yield for potato, and as dry matter of aboveground biomass for oat and vetch (Moreno et al. 2016). Yields for grass-based pastures, expressed as dry matter of aboveground biomass, were estimated according to Agnusdei et al. (2001). ANN-based models An ANN is a parallel processing structure constituted by units (neurons) organized in layers that emulate biological neurons (Haykin 2001). ANNs have the capacity of identifying complex relationships from input information (different input variables, x1 … xn, Figure 1) through the approximation of any mathematical function along a training procedure to yield a desired output. Besides, ANNs are capable of storing knowledge about the relationships among input variables and about their proper functioning, which can be made available through different analysis techniques (Braga et al. 2007). An ANN is characterized by its structure or architecture, the training algorithm and the activation functions (Braga et al. 2007), and it is imperative to define them to develop an ANN-based model. A schematic representation of an artificial neuron (basic unit in an ANN model) is shown in Figure 1.
The multilayer perceptron network (MLP) is one of the most commonly used feed-forward ANN types. A MLP network consists of one input layer, one or more hidden layers and one output layer. The strength of the connection between two neurons in adjacent layers is represented by what is known as a 'synaptic weight'. The additive junction (Σ) represents the addition of the input signals weighted by their respective synaptic weights (wk). Then, the activation function (φ) limits the amplitude of the output of the neuron. The bias bk increases or decreases the input to the activation function, assigning positive or negative values. According to the bias (positive or negative), the relationship between the induced field or activation potential (vk) and the output (yk) is transformed. Mathematically, an ANN can be described by the equations:

yk = φ(vk)    (1)
vk = uk + bk, with uk = Σi wki xi    (2)

where yk is the output of the neuron; φ is the activation function; xi is the i-th input variable; wki is the synaptic weight of neuron k for the i-th input variable, and bk is the bias. The artificial neuron computes its output (yk) according to Equation (1). In Equation (2), vk indicates the weighted inputs (uk) affected by the bias (bk). (A numerical sketch of these equations is given after the input-variable list below.) The size of the network is linked to the nature of the problem to be solved and the number of patterns or training pairs of inputs (x) - outputs (y) (Rogers and Dowla 1994). Then, the dimensionality of the models tends to be much higher in more complex problems (Maier and Dandy 2000). In addition, network architecture determines the number of connection weights (free parameters) and the way information flows through the network (Maier and Dandy 2000). For a single-hidden-layer MLP, the number of free parameters (N) is defined by:

N = m(n + 1) + x(m + 1)

where n is the number of inputs, m is the number of neurons in the hidden layer and x is the number of outputs. Development of ANN-based models Multilayer perceptron models with a unique hidden layer were developed to estimate SOC in the soil upper 0.20 m. It has been shown that only one hidden layer is required to approximate any continuous function (Cybenko 1989). In this study we developed MLP network models with one hidden layer and one output layer. Therefore, the size of each network was defined by the number of input variables and the number of neurons in the hidden layer. To go through the mechanism of model development we pre-selected 16 input variables (three categorical variables and 13 quantitative variables) based on availability of information and potential relationships with SOC stock variation: -Nitrogen fertilization (WN or WON) (categorical). -Crop: preceding crop to soil sampling for SOC content determination (categorical). -Year: number of years since the beginning of the experiment up to soil sampling for SOC content determination for each treatment (quantitative). -Yield: average grain yield of all the crops in the sequence (kg grain ha⁻¹) since the beginning of the experiment up to the year before soil sampling for SOC content determination (quantitative). -C Input: average input of C by the crop sequence (Mg ha⁻¹) since the beginning of the experiment up to the crop preceding soil sampling for SOC content determination (quantitative). To calculate C input, wheat, soybean, sunflower and maize grain yields, potato tuber yield, and oat and vetch aboveground dry matter production were used.
The calculation of residue input mass for wheat, soybean, sunflower, maize, and potato was done using the grain or tuber yield, harvest indexes (HI), and the below- (root biomass + rhizodeposition) / aboveground biomass (RB/TAB) relationship used by Studdert et al. (2011). For oat and vetch, RB/TAB was assumed to be the same as for wheat (Studdert et al. 2011). Pasture aboveground dry matter production was estimated as reported by Agnusdei et al. (2001) for similar pastures. Pasture RB/TAB was estimated according to Bélanger et al. (1992). Carbon content of plant tissues was assumed as 0.43 kg C kg⁻¹ (Sánchez et al. 1996). -Tmax: mean annual maximum air temperature (°C) (quantitative). -Tmed: mean annual mean air temperature (°C) (quantitative). Values for each meteorological variable were the result of the summation (Pp) or average (Tmin, Tmax, Tmed) of data over the 12 months previous to each soil sampling for SOC content determination. Total data was split into training, test and validation groups in the proportion 60:20:20. The training group (n = 1083) was used during model training. The validation group (n = 359) was used for cross-validation (Maier and Dandy 2000) and the test group (n = 359) was used to evaluate the final performance of each model (Haykin 2001). Data for each group were randomly selected and the distributions of frequencies among groups were homogeneous (Kruskal-Wallis test, p > 0.05) (data not shown). To define which of the 16 pre-selected variables would be used as the best input variables, we relied on Spearman correlation analysis between observed SOC in the upper 0.20 m and each one of them (Table 1) (Moreno et al. 2014b). Even though this work was done for only one soil type and climatic condition, many of the selected variables turned out to be the same as those selected by other authors who developed ANN-based models for a broader range of environmental conditions of Argentina (Álvarez 2008;Álvarez et al. 2009, 2012). Artificial neural network-based models were performed including different combinations of management and meteorological variables, and trained to estimate SOC stock. The methods followed to arrange the inputs in each combination were based on a priori knowledge of the system being modelled and on correlation analyses. The models defined were organized in three subsets as follows: * Subset 1: ANN-based models with two management input variables. * Subset 2: ANN-based models with three management input variables, combining the two management variables most correlated with SOC (SOCI and Year, Table 1) and one of the other management variables (a total of five basic models). Fifteen additional models were defined including either each or both meteorological variables most correlated with SOC stock (Tmin and Tmed, Table 1). In summary, Subset 2 included 20 models. * Subset 3: ANN-based models with more than three management input variables, resulting from the combination of the three selected management input variables showing the highest correlation with SOC (SOCI, Year, and Prop soybean) with one (four-variable models), two (five-variable models) or three (six-variable models) of the other selected input variables. Most models in this subset were defined without meteorological variables, but some of them were also defined including either each or both meteorological variables most correlated with SOC stock (Tmin and Tmed, Table 1). The total number of models defined in Subset 3 was nine. To solve estimation problems, a supervised training has to be carried out, for which input variables and target observed outputs are provided to the ANN.
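As flagged earlier, here is a minimal numerical sketch of Eqs. (1)-(2) and the free-parameter count for a single-hidden-layer MLP. The weights, sizes, and inputs are random placeholders, not the trained networks of this study.

```python
# Hedged sketch: forward pass y_k = phi(v_k), v_k = sum_i w_ki x_i + b_k.
import numpy as np

def forward(X, W1, b1, W2, b2):
    v_hidden = X @ W1 + b1            # induced fields of the hidden layer
    y_hidden = np.tanh(v_hidden)      # hyperbolic-tangent activation
    return y_hidden @ W2 + b2         # linear output layer

n, m, x = 7, 6, 1                     # e.g. 7 inputs, 6 hidden neurons, SOC output
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(n, m)), np.ones(m)   # biases initially set to 1
W2, b2 = rng.normal(size=(m, x)), np.ones(x)

n_free = m * (n + 1) + x * (m + 1)
print("free parameters:", n_free)     # 6*8 + 1*7 = 55
X = rng.normal(size=(5, n))           # 5 normalized input patterns (toy)
print(forward(X, W1, b1, W2, b2).ravel())
```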
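The 60:20:20 split and the Spearman-based input screening can also be sketched in a few lines; the data below are synthetic placeholders standing in for the experimental records.

```python
# Hedged sketch: random 60:20:20 split and Spearman screening of candidates.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 1801                                  # ~1083 + 359 + 359 records
X = rng.normal(size=(n, 5))               # candidate input variables (toy)
soc = 2.0*X[:, 0] - 0.8*X[:, 2] + rng.normal(scale=1.0, size=n)

idx = rng.permutation(n)
n_tr, n_va = int(0.6*n), int(0.2*n)
train, valid, test = idx[:n_tr], idx[n_tr:n_tr+n_va], idx[n_tr+n_va:]

# rank candidate inputs by |Spearman rho| against observed SOC (training data)
for k in range(X.shape[1]):
    rho, p = spearmanr(X[train, k], soc[train])
    print(f"variable {k}: rho = {rho:+.2f}, p = {p:.3g}")
```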
Training or learning of an ANN with a defined structure is achieved by adjusting the weights of the neurons through an iterative algorithm that minimizes the error between the predicted and the target outputs. This process is equivalent to parameter adjustment in conventional statistical model fitting. Bias values were initially set to 1 and the final value for each ANN was determined in the training process. In this work, the selection of ANN architectures was based on the application of a selected algorithm integrated in the Intelligent Problem Solver (IPS) of the Neural Network module of the Statistica software (Statsoft 2009). The inputs and the outputs of the data sets were automatically normalized to improve the performance of the ANN models. The maximal number of neurons was fixed in relation to the number of training examples. The Automated Network Search (ANS) of the software was set to retain the five models with the lowest cross-validation error (over 200 ANN for each combination of input variables it was asked to train) and then, the ANN with the best performance for each combination was chosen and evaluated. Two types of transformed sigmoid activation functions (i.e. logistic and hyperbolic tangent) were applied in the hidden layer, and linear ones in the output layer. The sigmoid response allows a network to map a non-linear process and is recommended to avoid saturation and convergence problems in approximation tasks. Evaluation of ANN model performance The performance of the ANN was evaluated on the test data group using several standard statistical performance evaluation criteria based on the difference between observed and simulated SOC stock values. Those statistical indicators were: mean of the differences between observed and simulated values (bias error, BE, Mg C ha⁻¹), mean of those differences relative to the observed values (bias relative error, BRE, %), and root mean square error (RMSE, expressed as stock, Mg C ha⁻¹) (Fox 1981). The ANN models were sorted (in increasing order) by each of the mentioned error types, ranked from the lowest to the highest, and assigned a ranking number according to each of the three sorts. A final hierarchical overall ranking of performance was calculated as the average of the three ranking numbers achieved by each ANN-based model over the three sorts. This procedure enabled the determination of the ANN-based models with the best and the worst performance. Model performance was also evaluated through simple regression analyses between observed and simulated SOC stock values. The joint hypothesis of equality of intercept and slope of each simple linear regression to 0 and 1, respectively, was evaluated through F tests. All statistical analyses were performed with the R statistical package (R Core Team 2015). Results and discussion 3.1. Description of models A total of 57 ANN-based models were developed (28 in Subset 1 (Table 2), 20 in Subset 2 (Table 3), and nine in Subset 3 (Table 4)) to estimate SOC stock, including between two and eight input variables and a maximum of ten neurons in the hidden layer. Most of the models had an adequate structure, without problems during training, given the large number of training data (n = 1083). The models with Crop as an input variable resulted in a higher number of free parameters, since this categorical variable presented 10 input options (i.e. ten different crops).
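The evaluation criteria are simple to compute; the sketch below implements BE, BRE, and RMSE plus the observed-vs-simulated regression check on toy data, sized like the test group in the text.

```python
# Hedged sketch of the evaluation criteria (toy observed/simulated values).
import numpy as np

def be(obs, sim):
    return np.mean(obs - sim)                        # bias error, Mg C ha-1

def bre(obs, sim):
    return 100.0 * np.mean((obs - sim) / obs)        # bias relative error, %

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))        # Mg C ha-1

rng = np.random.default_rng(1)
obs = rng.uniform(60, 120, size=359)                 # test-group size in the text
sim = obs + rng.normal(scale=5.0, size=obs.size)     # ~5 Mg C ha-1 error (toy)

print(f"BE = {be(obs, sim):.2f}, BRE = {bre(obs, sim):.2f}%, RMSE = {rmse(obs, sim):.2f}")
slope, intercept = np.polyfit(obs, sim, 1)           # slope ~1, intercept ~0 expected
print(f"slope = {slope:.3f}, intercept = {intercept:.2f}")
```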
Artificial neural networks with a large structure (i.e. a high number of input variables and/or of neurons in the hidden layer) could present training problems. However, Rogers and Dowla (1994) indicated that if the number of weights (or free parameters) does not exceed the number of examples for training, such training problems would not be expected to occur. Model performance According to the evaluation on the test data group, linear regression analyses between observed and simulated SOC stock values were all significant (p < 0.05) (Tables 5, 6, 7). The joint hypothesis of equality of intercept and slope of each simple linear regression to 0 and 1, respectively, was not rejected (p > 0.05) in any case (Tables 5, 6, 7). However, R² ranged only between 0.1 and 0.6 (Tables 5, 6, 7). Other authors reported higher R² values when estimating SOC concentrations with ANN for several soil types of Argentina (Álvarez et al. 2011, 2012;Berhongaray et al. 2013). The low R² obtained in this work could be associated with the large variability in observed SOC stocks among experiment replications. Studdert et al. (1997) reported significant differences (p < 0.01) for observed SOC stocks among blocks in the "Crop-pasture rotations" experiment, with 50% of standard deviations ranging between 1.8 Mg C ha⁻¹ and 5.2 Mg C ha⁻¹, and an average standard deviation of 3.4 Mg C ha⁻¹. Likewise, Studdert and Echeverría (2000) also reported significant differences among blocks. Root mean square error, BRE, and BE values obtained on the test data group when contrasting observed vs. simulated SOC stocks are presented in Figure 2. Most of the 57 ANN-based models defined showed acceptable results (Smith et al. 1997). In general, RMSE ranged between 4.97 and 7.39 Mg C ha⁻¹ and did not differ from those reported by Álvarez et al. (2009), and some models yielded better indicators than those reported by Álvarez et al. (2011). Bias relative errors ranged between 4.69 and 7.26% and BE ranged between -0.39 and 0.49 Mg C ha⁻¹. Other authors (Levine and Kimes 1998;Somaratne et al. 2005) reported even lower errors, but they used both management variables and chemical properties as input variables. Error variability among ANN-based models including only two management variables (Subset 1, Table 2, Figure 2) was high. On the other hand, error variability among ANN-based models of Subsets 2 (three management variables, Table 3, Figure 2) and 3 (more than three management variables, Table 4, Figure 2) was lower than that of Subset 1 and similar between them. In all cases, the inclusion of the selected meteorological input variables (i.e. Tmin and/or Tmed, Tables 2, 3, 4) improved model performance by reducing errors (Figure 2; the ANN are described in Tables 2, 3, and 4 for model Subsets 1, 2, and 3, respectively). Therefore, SOC stock could be satisfactorily estimated with ANN models including only three management input variables and selected meteorological input variables. Table 8 shows the 10 ANN models with the lowest (best models, first 10 hierarchical positions) and the highest (worst models, last ten hierarchical positions) average of individual ranking positions by RMSE, BRE, and BE. Root mean square errors of the ten best models ranged between 4.97 and 5.36 Mg C ha⁻¹, BRE ranged between 4.70 and 5.09%, and BE ranged between -0.01 and 0.33 Mg C ha⁻¹. Models with the best and the worst performances Only one of the ten best models belongs to Subset 1 (ANN 4, Tables 2, 8).
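The hierarchical ranking procedure can be reproduced in a few lines; the sketch below uses the model labels from the text only as names, with placeholder error values.

```python
# Hedged sketch of the hierarchical ranking: average of the three ranking
# numbers (by RMSE, BRE, and |BE|, increasing order). Values are placeholders.
import numpy as np

models = ["ANN 4", "ANN 17", "ANN 55"]
rmse_v = np.array([5.30, 6.80, 4.97])
bre_v  = np.array([5.00, 6.90, 4.70])
be_v   = np.abs(np.array([0.20, -0.39, 0.01]))

def ranks(v):                      # 1 = best (lowest error)
    return np.argsort(np.argsort(v)) + 1

overall = (ranks(rmse_v) + ranks(bre_v) + ranks(be_v)) / 3.0
for name, score in sorted(zip(models, overall), key=lambda t: t[1]):
    print(name, score)             # ANN 55 comes out first with these numbers
```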
Table 8. Best and worst positions within the hierarchical ranking of the trained artificial neural network (ANN) models on the basis of the average of the ranking positions (increasing order) sorting by three statistical indicators. RMSE: root mean square error (Mg C ha⁻¹); BRE: bias relative error (%); BE: bias error (Mg C ha⁻¹). The ANN are described in Tables 2, 3, and 4. Studdert et al. (2011) and Moreno et al. (2016) reported that the performance of the RothC (Jenkinson et al. 1987) and AMG (Andriulo et al. 1999) models, respectively, to simulate SOC stock showed some differences between nitrogen fertilization levels and/or tillage systems. Therefore, we also evaluated the best (Figure 3) and worst (Figure 4) ANN-based model performances through RMSE and BE discriminated by agronomic management (i.e. separately for each tillage system level (regardless of nitrogen fertilization level) and for each nitrogen fertilization level (regardless of tillage system level)). According to RMSE, the best ANN estimated SOC stock better (lower RMSE) under NT and WN. However, the dispersion of BE was a little higher, and some ANN-based models showed no difference between tillage system levels or between nitrogen fertilization levels, while some others showed a trend inverse to that of RMSE. Anyway, the best ANN model (ANN 55, Table 8) did not show differences between the levels of either management practice, and, despite the differences, the RMSE were all within acceptable levels (Smith et al. 1997). The ANN-based model with the best performance (ANN 55, Table 4, Figure 3) was developed based on all (six) management variables combined with Tmin (Table 4). On the other hand, the worst performance was achieved by the ANN-based model with only two management variables (SOCI and Tillage) (ANN 17, Tables 2, 8, Figure 4). The differences in statistical indicators between the best and the worst ANN-based models (Figure 2) were 1.8 Mg C ha⁻¹, 0.44 Mg C ha⁻¹, and 1.9 percentage points in RMSE, BE and BRE, respectively. Taking into account the complexity of the processes and interactions involved in SOC formation and degradation in relation to soil use, those differences can be considered negligible (Smith et al. 1997). However, even though small, the improvement in performance of an ANN-based model including six management variables and one meteorological variable (ANN 55, Table 4) could be assumed to better represent the factors that define surface SOC dynamics in Mollisols of the SEBA. Besides, the input variables used by ANN 55 (Table 4) do not imply additional complications for potential users since they are easily available everywhere. The distribution of simulated and observed SOC stock over time is a visual tool that can help to interpret model performance. Figure 5 shows the evolution of observed SOC stock values and those estimated with the best (ANN 55, Tables 4, 8) (Figure 5a) and the worst (ANN 17, Tables 2, 8) (Figure 5b) ANN-based models. Figure 6 shows the evolution of observed SOC stock values and of those estimated with the best and worst models, discriminated by tillage system and nitrogen fertilization levels. Whichever the fertilization treatment, both ANN (the best (ANN 55) and the worst (ANN 17)) showed better performance over time under NT than under CT (Table 9). Soil organic C stocks estimated with ANN 55 (Table 4) showed the best match with observed values, especially up to 18 years since the beginning of the experiments.
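Discriminating errors by management group, as in Figures 3-4, amounts to a grouped RMSE; a minimal sketch on toy data follows.

```python
# Hedged sketch: RMSE discriminated by tillage system and N fertilization (toy data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "tillage": rng.choice(["CT", "NT"], size=359),
    "nitrogen": rng.choice(["WN", "WON"], size=359),
    "obs": rng.uniform(60, 120, size=359),
})
df["sim"] = df["obs"] + rng.normal(scale=5.0, size=len(df))

def rmse(g):
    return np.sqrt(np.mean((g["obs"] - g["sim"]) ** 2))

print(df.groupby("tillage").apply(rmse))
print(df.groupby("nitrogen").apply(rmse))
```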
ANN 55 showed a better estimation of the observed changes and variability of SOC stock (Figures 3, 5a, 6). This may be attributed to the number of input variables involved, which made ANN 55 more representative of the variables influencing SOC dynamics. Anyway, the input variables in ANN 55 are very few in relation to the high number of factors driving SOC variation. Other ANN models developed in Argentina to predict SOC variations based on other input variables showed different statistical indicators than those achieved in this work. Álvarez (2008) used the average C input, silt plus clay content and air temperature as input variables and reported an RMSE of 4.7 Mg C ha⁻¹ (similar to that achieved with ANN 55, Figures 2, 3). However, the R² reported by Álvarez (2008) (R² = 0.93) was much higher than that shown by ANN 55 (R² = 0.58, Table 7). Likewise, Álvarez et al. (2011) also developed ANN models based on crop type, average grain yield and precipitation to predict gains and losses of SOC under different cropping systems. They obtained better statistical indicators (R² = 0.85 and RMSE = 0.63) than those obtained with our ANN 55, although lower than those reported by Álvarez (2008). On the other hand, the ANN with the worst performance (ANN 17, Tables 2, 8) did not match observed SOC changes over time (Figures 5b, 6). Conclusions Artificial-neural-network-based models were adequate to estimate SOC in the upper 0.20 m of Mollisols of the SEBA. All the ANN-based models trained could be used in the SEBA under different management situations. The model with the best performance (ANN 55) was developed including six management variables (SOCI, Year, Prop soybean, Tillage, Yield, Prop agri) and one meteorological variable (Tmin) as input variables, all of them easily available and with a very low level of uncertainty. For our Mollisols, the composition of the ANN-based models with better performances (top average hierarchical ranking order) showed that management variables were predominant over the meteorological ones. The number of input variables used is still manageable and does not imply serious difficulties for users under the environmental and management conditions of the SEBA. However, future studies based on knowledge extraction from ANN should allow improved interpretation of these results and support the use of the ANN technique to develop models using simple and easily available local information. Acknowledgements The information shown in this work integrates the Master of Science Thesis (Facultad de Ciencias Agrarias, Universidad Nacional de Mar del Plata) of the first author. Table 9. Statistical indicators of the ANN models with the best (ANN 55, Table 4) and the worst (ANN 17,
2018-12-20T22:02:34.488Z
2017-11-15T00:00:00.000
{ "year": 2017, "sha1": "004819fb9fb7bd9d20e019e7439bb56c07f06b8b", "oa_license": "CCBYNC", "oa_url": "https://digital.cic.gba.gob.ar/bitstream/11746/8154/1/Application%20of%20artificial%20neural.pdf-PDFA.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "004819fb9fb7bd9d20e019e7439bb56c07f06b8b", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Geology" ] }
56158949
pes2o/s2orc
v3-fos-license
Recent progress in applying lattice QCD to kaon physics Standard lattice calculations in kaon physics are based on the evaluation of matrix elements of local operators between two single-hadron states or a single-hadron state and the vacuum. Recent progress in lattice QCD has gone beyond these standard observables. I will review the status and prospects of lattice kaon physics with an emphasis on non-leptonic $K\to\pi\pi$ decay and long-distance processes including $K^0$-$\overline{K^0}$ mixing and rare kaon decays. Introduction Since the discovery of kaons, kaon physics has played a key role in the building of the Standard Model. The main mission for lattice QCD in kaon physics is to evaluate the low-energy hadronic effects to test the Standard Model parameters or to constrain new physics. Lattice QCD has been successful for the calculations of observables such as the ratio of kaon and pion decay constants f_{K±}/f_{π±}, the K → πℓν semileptonic form factor f_+(0) and the neutral kaon mixing parameter B_K. We refer to these observables as "standard". Their relevant hadronic matrix elements have only one local operator insertion. The initial and final states involve at most one stable hadron. Besides, the spatial momenta carried by initial/final-state particles are much smaller than the ultraviolet lattice cutoff 1/a, with a the lattice spacing. These standard observables can be computed with high statistical precision and controlled systematic errors using lattice QCD simulations. Many interesting observables in kaon physics, however, are not "standard". One example is the calculation of K → ππ decay, where the final state involves multiple hadrons. Another example is the evaluation of the long-distance contributions to flavor-changing processes such as the calculation of the real and imaginary parts of K⁰-K̄⁰ mixing amplitudes, which are related to the K_L-K_S mass difference ∆M_K and the indirect CP-violating parameter ε. Rare kaon decays including K → πνν̄ and K → πℓ⁺ℓ⁻ also belong to this category. As these transitions proceed via the second-order weak interaction, the calculations would involve the construction of 4-point correlation functions and the treatment of nonlocal matrix elements with two effective operator insertions. To tackle such quantities, one needs to develop new techniques. In this report, I will first summarize the lattice QCD calculation of standard observables, including f_{K±}/f_{π±}, f_+(0), and the inclusive τ → s decay. All these quantities are related to the determination of |V_us|. The experimental measurements of kaon semileptonic decays yield

|V_us| f_+(0) = 0.2165(4).

Lattice inputs of f_+(0) and f_{K±}/f_{π±}, together with the experimental data, give a precise determination of the CKM matrix elements

|V_us| = 0.2231(7), |V_us|/|V_ud| = 0.2313(7).    (3)

In the Standard Model, the CKM matrix is unitary. The most stringent test of CKM unitarity is given by the first-row condition |V_u|² ≡ |V_ud|² + |V_us|² + |V_ub|² = 1. Using the results for |V_us| and |V_us|/|V_ud| given in Eq. (3), one finds that |V_u|² = 0.9798(82), which has a 2.5 σ deviation from CKM unitarity. Currently the most precise determination of |V_ud| = 0.97420(21) is from superallowed nuclear β decay [4,5]. Using |V_us| from K_ℓ3 decay and |V_ud| from nuclear β decay sharpens the unitarity test with a much smaller uncertainty. However, the deviation is still around 2.4 σ, as shown in the second line of Eq. (5). If using |V_us|/|V_ud| from leptonic decays and |V_ud| from nuclear β decay, then the result confirms CKM unitarity; see the third line of Eq. (5).
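The first-row sum with Gaussian error propagation is easy to check numerically; the sketch below reproduces the Kℓ3 + nuclear β decay combination quoted in the text, with a PDG-sized |V_ub| assumed (it contributes negligibly).

```python
# Hedged sketch: |V_ud|^2 + |V_us|^2 + |V_ub|^2 with naive error propagation.
import math

V_us, dV_us = 0.2231, 0.0007    # from Kl3 + lattice f+(0), as quoted above
V_ud, dV_ud = 0.97420, 0.00021  # from superallowed nuclear beta decay
V_ub = 0.004                    # PDG-sized value (assumed); error negligible here

Vu2 = V_ud**2 + V_us**2 + V_ub**2
dVu2 = math.hypot(2*V_ud*dV_ud, 2*V_us*dV_us)
print(f"|V_u|^2 = {Vu2:.4f} +/- {dVu2:.4f}")   # compare with 0.9988(5) in the text
sigma = (1 - Vu2) / dVu2
print(f"deviation from unitarity: {sigma:.1f} sigma")
```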
To clarify the 2.x $\sigma$ deviation in the unitarity test, it is important to reduce the uncertainty of the lattice QCD determination of $f_+(0)$. One of the recent updates for $f_+(0)$ comes from the Fermilab Lattice-MILC collaboration: HISQ fermions on 2+1+1-flavor MILC configurations are used, and preliminary results are shown in Fig. 1. Compared to their report last year [6], more lattice ensembles are used in the analysis. Employing 4 ensembles at the physical pion mass and 2 ultra-fine lattice spacings allows them to reduce the statistical error to 0.14%. At $a = 0.12$ fm and $m_l/m_s = 0.1$, they use three different volumes; the three volumes together with one-loop chiral perturbation theory (ChPT) [7] allow for a good estimate of the finite-volume effects. After chiral and continuum extrapolation, the total uncertainty is expected to be reduced to 0.2%, close to the current experimental uncertainty [2].

Figure 1. The calculation is performed at 5 lattice spacings, 0.15, 0.12, 0.09, 0.06 and 0.042 fm, including 4 ensembles with physical pion mass. Open green symbols correspond to different volumes for $a = 0.12$ fm and $m_l = 0.1\,m_s$. The solid magenta line is the (preliminary) interpolation in the light-quark mass, keeping the strange-quark mass $m_s$ equal to its physical value and turning off all discretization effects. The magenta diamond is the corresponding interpolation at the physical point. Data at the same light-quark mass but different lattice spacings are offset horizontally.

Another lattice calculation was recently reported by the JLQCD collaboration [8,9]. In their calculation, chiral symmetry is exactly preserved by using the overlap quark action, which enables a direct comparison of the lattice data with ChPT and hence a determination of the relevant low-energy constants within NNLO ChPT. Reasonable agreement between the lattice results for the slope $df_+(q^2)/dq^2$ at $q^2 = 0$ and experiment is observed, although the error is still large due to the high cost of overlap fermions.

τ inclusive decay and V_us

The average of $V_{us}$ was summarized and updated in Spring 2017 by the Heavy Flavor Averaging group (HFLAV) [10]; see the left panel of Fig. 2, where $f_+(0)$ and $f_{K^\pm}/f_{\pi^\pm}$ take their values from PDG 2016 [11]. The result from leptonic decays is consistent with CKM unitarity, while the one using $K_{\ell 3}$ decays shows a ~2 $\sigma$ deviation. The largest discrepancy occurs for the $\tau\to s$ inclusive decay, where a 3.2 $\sigma$ deviation from CKM unitarity is observed. To explore the discrepancy, the main quantity of interest is the ratio of decay rates $R_s$ involving $\tau\to s\text{-hadrons}\,\nu_\tau$, where "s-hadrons" indicates that the final-state hadrons carry net strangeness. According to the optical theorem, the imaginary part of the hadronic vacuum polarization (HVP) functions can be related to $dR_s/ds$ [12], where $s$ is the invariant mass squared of the final-state hadrons, $S_{EW}$ is a known short-distance electroweak correction [13], and $\Pi^{(J)}(s)$ are the HVP functions with angular momentum $J = 0, 1$. Once ${\rm Im}\,\Pi^{(J)}(s)$ is known, Eq. (7) can be used to determine $V_{us}$.
Since ${\rm Im}\,\Pi^{(J)}(s)$ is generically non-perturbative at small $s$, the conventional approach is to use the dispersion relation (finite-energy sum rule) [12]
$\int_0^{s_0} ds\, W(s)\, {\rm Im}\,\Pi(s) = -\frac{1}{2\pi i}\oint_{|s|=s_0} ds\, W(s)\, \Pi(s)$,
where ${\rm Im}\,\Pi(s)$ on the left-hand side can be related to $dR_s/ds$ and $V_{us}$, while the integral on the right-hand side can be determined using QCD perturbation theory (pQCD) and the operator product expansion (OPE). The parameter $s_0$ should be sufficiently large for good convergence of pQCD and validity of the OPE; $W(s)$ is a weight function. If there is no pole inside the contour, the integral along the branch cut is equal to the integral on the circle, and $V_{us}$ can be determined. A difficulty here is that the estimate of higher-dimensional OPE terms relies on some assumptions and thus carries potentially large systematic effects. Using this conventional approach results in the low value of $V_{us}$ shown in the left panel of Fig. 2 [14].

An improvement proposed by Ref. [15] is to use different $s_0$ and weight functions $W(s)$ and to study the dependence on $s_0$ and $W(s)$. Through the fit, not only $V_{us}$ but also the OPE effective condensates are fit to the experimental measurements (and also to lattice QCD data). With this improvement, the 3.2 $\sigma$ deviation is reduced to the 1-2 $\sigma$ level, depending on whether the BaBar or the 2014 HFAG result is used as input. These results are plotted in the right panel of Fig. 2, denoted "τ FB FESR, HLMZ17".

Another new approach, proposed by H. Ohki et al. [16], is to let $s_0\to\infty$ and use a weight function containing pole structures, in which the $N$ different poles $Q_k^2$ are spanned with a spacing $\Delta = 0.2(N-1)$ GeV$^2$ around a center point called $C$. Once $W(s)$ is given, the contour integral equals the sum of the residues at the poles, which can be determined using lattice HVPs; the value of $V_{us}$ then follows. The strategy for choosing the $Q_k^2$ is that they should not be too large, in order to suppress the contribution from pQCD and the OPE at $s > m_\tau^2$, and not too small, to avoid large statistical errors from the lattice HVPs.

Figure 2. Left: HFLAV summary of $V_{us}$. For the $\tau\to s$ inclusive decay, there is a 3.2 $\sigma$ deviation from CKM unitarity. Right: New implementations for the $\tau\to s$ inclusive decay. The red circle data points, denoted "τ FB FESR, HLMZ17", show the improvement from fitting the OPE effective condensates to the experimental measurements and lattice QCD data [15]. The green square data points show the improvement using $W(s)$ in Eq. (10) and lattice HVPs for the residues of the integral [16]. Both improvements shed light on the resolution of the puzzle from the $\tau\to s$ inclusive decay.

The realistic calculation is performed using $N_f = 2+1$ Möbius domain wall fermions at near-physical pion mass, with lattice spacings $a^{-1} = 1.73$ and 2.36 GeV and lattice volume $V = (5\,{\rm fm})^3$. The corresponding results are summarized in the right panel of Fig. 2 by the green square data points (the filled squares are generated using $\tau\to K\nu_\tau$ data for the K pole, while the open squares use $K_{\mu 2}$ data as input). With different $N$ and $C$, the lattice calculation gives consistent results, and all the data points are systematically larger than the conventional value of $V_{us}$. At $N = 4$ and $C = 0.7$ GeV$^2$, $V_{us}$ is obtained as 0.2245(16) using $K_{\mu 2}$ input for the K pole (a consistent value, with an uncertainty of (21), is found using $\tau\to K\nu_\tau$ input for the K pole). Both improvements, Refs. [15] and [16], shed light on the resolution of the puzzle from the $\tau\to s$ inclusive decay.
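The residue bookkeeping behind the pole-based weight function can be checked with a toy contour integral. In the Python sketch below, both the rational stand-in for $\Pi(s)$ and the pole positions $Q_k^2$ are hypothetical, chosen only so that every singularity is known in closed form; in the real analysis the residues would instead be supplied by lattice HVP values at the Euclidean points $-Q_k^2$, and the cut side by τ spectral data.

import numpy as np

Q2 = np.array([0.5, 0.9, 1.3, 1.7])   # hypothetical Euclidean pole positions Q_k^2
s_phys = 1.5                          # stand-in singularity of Pi on the physical axis

def Pi(s):
    return 1.0 / (s - s_phys)         # toy "HVP": a simple pole with residue 1

def W(s):
    return 1.0 / np.prod(s + Q2)      # weight function with poles at s = -Q_k^2

# (1/2*pi*i) times the closed contour integral of W*Pi over |s| = s0, numerically
s0, theta = 3.0, np.linspace(0.0, 2 * np.pi, 20001)
vals = np.array([W(x) * Pi(x) * x for x in s0 * np.exp(1j * theta)])
contour = np.trapz(vals, theta) / (2 * np.pi)

# the same quantity from residues: poles of W (values Pi(-Q_k^2)) plus the pole of Pi
res_W = sum(Pi(-q) / np.prod(np.delete(Q2, k) - q) for k, q in enumerate(Q2))
res_Pi = W(s_phys)
print(contour.real, res_W + res_Pi)   # the two agree

Numerically the contour value and the residue sum coincide, which is exactly the identity that lets lattice values of $\Pi(-Q_k^2)$ replace the OPE side of the sum rule.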
Neutral-kaon mixing parameter B_K in the Standard Model and beyond

The parameter $B_K$ is related to the CP-violating part of $K^0$-$\overline{K^0}$ mixing and is thus short-distance dominated. Using the OPE, the effective Hamiltonian $H^{\Delta S=2}_{\rm eff}$ can be written as a product of the Wilson coefficient $C(\mu)$ and the $\Delta S = 2$ local operator $Q^{\Delta S=2}(\mu)$, with $G_F$ the Fermi constant and $M_W$ the W-boson mass. It is conventional to use the parameter $\epsilon$ as a measure of indirect CP violation, defined through the $K_L$ and $K_S$ decay amplitudes into the final state of total isospin zero. The contribution from $H^{\Delta S=2}_{\rm eff}$ is the dominant contribution to $\epsilon$. Here the angle $\phi_\epsilon \equiv \arctan(-2\Delta M_K/\Delta\Gamma_K) \approx 43.52(5)^\circ$ [11], with $\Delta M_K = M_{K_L} - M_{K_S}$ and $\Delta\Gamma_K = \Gamma_{K_L} - \Gamma_{K_S}$. $M^{LD}_{\bar 00}$ indicates the long-distance contribution to the mixing amplitude, and $A_0$ is the $K^0\to(\pi\pi)_{I=0}$ amplitude. Both $M^{LD}_{\bar 00}$ and $A_0$ make only few-percent contributions to $\epsilon$; the progress in lattice QCD calculations of these two quantities is discussed later. Here we focus on $M^{SD}_{\bar 00}$.

Within the Standard Model there is only one $\Delta S = 2$ operator, with $V-A$ structure,
$Q_1 = (\bar s_\alpha \gamma_\mu (1-\gamma_5) d_\alpha)(\bar s_\beta \gamma^\mu (1-\gamma_5) d_\beta)$,
where the subscripts $\alpha$ and $\beta$ denote color indices. For beyond-Standard-Model theories, 4 other operators are possible. The neutral-kaon mixing parameter $B_K$ and the $B_i$ in the $\overline{\rm MS}$ scheme are defined by normalizing the $K^0$-$\overline{K^0}$ matrix elements of these operators, where $\mu$ is the renormalization scale, $f_K$ the kaon decay constant and $M_K$ the kaon mass. Given the anomalous dimension $\gamma(g)$, the renormalization-group-independent parameter $\hat B_K$ is related to $B_K(\mu)$ by the standard next-to-leading-order running factor (reconstructed here in the usual conventions),
$\hat B_K = \left(\frac{\bar g^2(\mu)}{4\pi}\right)^{-\gamma_0/(2\beta_0)}\left[1 + \frac{\bar g^2(\mu)}{(4\pi)^2}\,\frac{\beta_1\gamma_0 - \beta_0\gamma_1}{2\beta_0^2}\right] B_K(\mu)$.

For the Standard Model $\hat B_K$, the lattice calculation has reached a precision of 1.3% for 2+1 flavors. For beyond-Standard-Model $B_i(\mu)$ at the $\overline{\rm MS}$ scale $\mu = 3$ GeV, the uncertainties of the 2+1-flavor lattice results are about 2-5% [1]. The results for $B_i(\mu)$ from various groups are summarized by FLAG [1] in the left panel of Fig. 3. There are clear discrepancies in $B_4$ and $B_5$ between different groups. To resolve these discrepancies, the RBC-UKQCD collaboration undertook a study using both RI-MOM and RI-SMOM renormalization [17-19]. The calculation is performed using $N_f = 2+1$ domain wall fermions at $M_\pi \approx 300$ MeV and two lattice spacings, $a = 0.08$ and 0.11 fm. The corresponding results are shown by the data points below the legend "RBC-UKQCD '16" in the right panel of Fig. 3, where the four red data points use the RI-MOM scheme, the two green ones use the RI-SMOM scheme, and the orange one uses 1-loop lattice perturbation theory. Including the new RBC-UKQCD updates, all RI-MOM results are compatible among different groups; however, these results are systematically smaller than those from RI-SMOM renormalization and 1-loop lattice perturbation theory. For the RI-SMOM calculation in particular, both the $(\slashed q, \slashed q)$ and $(\gamma^\mu, \gamma^\mu)$ schemes are used, and consistent results (the two green data points) are obtained after conversion to the $\overline{\rm MS}$ scheme. For $B_5$ the situation is very similar. According to the study of Ref. [20], RI-SMOM renormalization is expected to have smaller infrared contamination than RI-MOM due to the use of non-exceptional momenta. The studies [17-19] confirm this expectation and suggest that, for $B_4$ and $B_5$, RI-SMOM renormalization should be used to fully control the infrared contamination. In Ref. [21] an update is reported on the RBC-UKQCD measurement of $B_i(\mu)$, simulated using $N_f = 2+1$ domain wall fermions at the physical quark masses.

K → ππ decay and direct CP violation

CP violation was first observed in neutral kaon decays.
Under a CP transformation, the $K^0$ state is related to the $\overline{K^0}$ state. CP eigenstates $K^0_\pm$ can be defined as orthogonal combinations of $K^0$ and $\overline{K^0}$, with $K^0_+$/$K^0_-$ the CP-even/odd state. The physical states observed in experiment are the weak eigenstates $K_S$ and $K_L$: $K_S$ decays into two pions and $K_L$ into three pions. Neglecting CP violation, $K_S$ equals the CP-even state and $K_L$ the CP-odd state. In 1964, BNL discovered that $K_L$ is able to decay into two pions, indicating the violation of CP symmetry; this discovery led to the Nobel prize in 1980. Since $K_L$ and $K_S$ are not CP eigenstates, one can write them as mixtures of the CP eigenstates, with the parameter $\bar\epsilon$ measuring the strength of the mixing.

For $K_L\to\pi\pi$ decay there are two contributions to the CP violation. The first arises when the CP-even component of $K_L$ decays into two pions. This is called indirect CP violation and is described by a parameter $\epsilon$, in many cases written as $\epsilon_K$; $\epsilon$ receives its dominant contribution from $\bar\epsilon$ and a small contribution from $A_0$, by its definition given in Eq. (13). The second contribution comes from the CP-odd component of $K_L$, which decays into two pions directly. This is called direct CP violation and is denoted $\epsilon'$. The experiments measure the decay amplitudes of $K_L\to\pi\pi$ and $K_S\to\pi\pi$ and use their ratios, $\eta_{+-} = A(K_L\to\pi^+\pi^-)/A(K_S\to\pi^+\pi^-)$ and $\eta_{00} = A(K_L\to\pi^0\pi^0)/A(K_S\to\pi^0\pi^0)$, to determine the parameters $\epsilon$ and $\epsilon'$. Using the experimental measurements of $\eta_{+-}$ and $\eta_{00}$ as input, PDG quotes [11] $|\epsilon|$ at the order of $10^{-3}$, with $|\epsilon'|$ about 1000 times smaller. Due to its small size, direct CP violation $\epsilon'$ is very sensitive to new physics.

For theoretical simplicity, it is convenient to study the decay amplitudes in the definite isospin channels, $A[K\to(\pi\pi)_I] = A_I e^{i\delta_I}$ for $I = 0, 2$, where $\delta_I$ is the strong phase from $\pi\pi$ scattering. If CP symmetry were exact, both amplitudes $A_2$ and $A_0$ would be real. To obtain the CP violation, one must determine both the real and imaginary parts of $A_2$ and $A_0$. The indirect CP violation has only a small dependence on $A_0$, as shown by Eq. (24); $\epsilon'$, by contrast, is sensitive to both the real and imaginary parts of $A_2$ and $A_0$.

The target for the lattice QCD calculation is to determine $A_2$ and $A_0$ from first principles. The weak Hamiltonian for $K\to\pi\pi$ decay is given by a series of $\Delta S = 1$ local four-quark operators [22]. Here $\tau = -\frac{V_{td}V^*_{ts}}{V_{ud}V^*_{us}} = 1.543 + 0.635i$ is a ratio of CKM matrix elements, and $z_i(\mu)$ and $y_i(\mu)$ are known perturbative Wilson coefficients that summarize the short-distance effects. The 10 local four-quark operators $Q_i$ can be matched to three types of diagrams in the full theory, shown in Fig. 4: current-current, QCD penguin, and electroweak penguin ($Q_7$-$Q_{10}$).

The most recent updated results for the amplitude $A_2$ are given by the RBC-UKQCD collaboration [23], using two ensembles, both at physical pion mass but with different lattice spacings; the parameters are given in Table 1. After continuum extrapolation, results are obtained for $A_2$ together with the $\pi\pi$ scattering phase shift $\delta_2$, which is calculated using Lüscher's formula [24] and is consistent with the phenomenological curve from Ref. [25].

Table 1. Ensembles used in the recent lattice calculation of $A_2$ by the RBC-UKQCD collaboration [23].
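As an aside on orders of magnitude, the $\eta$ parametrization introduced above can be checked numerically. In the sketch below the complex input values are illustrative placeholders of the right size (a superweak-like common phase of 43.5° is assumed), not PDG inputs; the point is only the round trip through $\epsilon = (2\eta_{+-} + \eta_{00})/3$ and $\epsilon' = (\eta_{+-} - \eta_{00})/3$:

import cmath, math

phi = math.radians(43.5)
eps_in  = 2.228e-3 * cmath.exp(1j * phi)   # placeholder epsilon
epsp_in = 3.7e-6  * cmath.exp(1j * phi)    # placeholder epsilon'

eta_pm = eps_in + epsp_in                  # eta_{+-} = epsilon + epsilon'
eta_00 = eps_in - 2 * epsp_in              # eta_{00} = epsilon - 2 epsilon'

eps  = (2 * eta_pm + eta_00) / 3           # invert the relations
epsp = (eta_pm - eta_00) / 3
print(abs(eps), abs(epsp), abs(epsp) / abs(eps))   # ~2.2e-3, ~3.7e-6, ratio ~1.7e-3

The output illustrates the hierarchy quoted from the PDG: $|\epsilon|$ at the $10^{-3}$ level and $|\epsilon'|$ roughly a thousand times smaller.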
In addition to the determination of $A_2$ and $\delta_2$, another outcome of Ref. [23] is the resolution of the puzzle of the $\Delta I = 1/2$ rule. According to experimental measurement, the size of $A_0$ is about 22.5 times larger than that of $A_2$; why the amplitudes in the two isospin channels are so different has been a puzzle since 1955 [26]. The Wilson coefficients account for only a factor of 2. The lattice calculation shows that ${\rm Re}[A_2]$ is dominated by the diagrams $C_1$ and $C_2$ in the left panel of Fig. 5, where $C_1$ is color diagonal and $C_2$ color mixed. $C_2$ is $1/N_c$-suppressed relative to $C_1$, with $C_2$ equal to $C_1/3$ at leading order in QCD perturbation theory. However, the lattice results in the right panel of Fig. 5 show that $C_2 \approx -0.7\,C_1$, indicating very strong non-perturbative effects. As ${\rm Re}[A_2]$ is proportional to $C_1 + C_2$, the opposite signs of $C_1$ and $C_2$ lead to a significant cancellation between the two terms. For ${\rm Re}[A_0]$, the opposite signs instead lead to an enhancement, as ${\rm Re}[A_0]$ receives an important contribution from $2C_1 - C_2$. When the complete contribution to ${\rm Re}[A_0]$ is considered, including the disconnected diagrams, ${\rm Re}[A_0]$ is enhanced further. In total, the hadronic matrix elements, including the contributions from $C_1$, $C_2$ and other diagrams, contribute another factor of ~10. The cancellation between $C_1$ and $C_2$ was first observed in an earlier study [27] and is confirmed by the latest calculation of $A_2$ [23]. The puzzle of the $\Delta I = 1/2$ rule is thus now resolved from first principles. There has also been a recent study of the $\Delta I = 1/2$ rule through the scaling with the number of colors [28].

The more demanding calculation is the $K\to\pi\pi$ decay in the isospin $I = 0$ channel. The latest calculation is performed at the physical kinematics, $M_\pi = 143.1(2.0)$ MeV and $M_K = 490(2.2)$ MeV, using a $32^3\times 64$ lattice volume and a lattice spacing $a = 0.14$ fm [29]. G-parity boundary conditions are used, and the lattice volume is chosen such that the kaon mass matches the ground-state $\pi\pi$ energy. Using Lüscher's formula [24], the $I = 0$ $\pi\pi$ scattering phase shift is found to be $\delta_0 = 23.8(4.9)(1.2)^\circ$, smaller than the value $\delta_0 = 38.0(1.3)^\circ$ obtained by combining experimental data with the Roy equations [30,31]. This discrepancy remains a puzzle and needs to be understood in future studies. Using the lattice results for both $A_0$ and $A_2$, the direct CP violation $\epsilon'$ can be determined; the result shows a 2.1 $\sigma$ deviation from the experimental value ${\rm Re}(\epsilon'/\epsilon) = 1.66(23)\times 10^{-3}$ [32]. As the uncertainties of the lattice results are larger than those of the experimental measurement, more accurate lattice calculations are required to establish whether new physics information can be found in this deviation.

It is reported by C. Kelly [33] that the statistics of the previous RBC-UKQCD calculation have been increased to 584 configurations. In the lattice calculation, the largest contribution to ${\rm Im}[A_0]$ comes from the $Q_6$ operator; Fig. 6 shows the fit used to obtain the matrix element $\langle\pi\pi|Q_6|K\rangle$. When the statistics increase from 216 to 584 configurations, the uncertainty decreases as expected while the central values remain consistent. The aim of the RBC-UKQCD $K\to\pi\pi$ program is to reduce the dominant statistical error on ${\rm Re}(\epsilon'/\epsilon)$ in Eq. (33) by a factor of 2 within the next year. Besides the effort to increase statistics, there are also efforts to improve various systematic effects; for example, a σ field has been added to the calculation to account for σ → ππ effects in the $I = 0$ $\pi\pi$ scattering channel.
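The sensitivity of $\epsilon'$ to all four amplitude components can be made concrete with the commonly used approximate relation ${\rm Re}(\epsilon'/\epsilon) \approx \frac{\omega}{\sqrt{2}\,|\epsilon|}\left[\frac{{\rm Im}\,A_2}{{\rm Re}\,A_2} - \frac{{\rm Im}\,A_0}{{\rm Re}\,A_0}\right]$, with $\omega = {\rm Re}\,A_2/{\rm Re}\,A_0$. This is my reconstruction of the standard phenomenological formula (assuming phases arranged so that $\epsilon'/\epsilon$ is approximately real); the imaginary parts below are order-of-magnitude placeholders, not the published lattice numbers.

import math

# experimental real parts and |epsilon|, as commonly quoted
re_A0, re_A2 = 3.320e-7, 1.479e-8      # GeV
eps = 2.228e-3

# placeholder imaginary parts, order of magnitude only (hypothetical)
im_A0, im_A2 = -2.0e-11, -7.0e-13      # GeV

omega = re_A2 / re_A0                  # ~1/22.5, the Delta I = 1/2 ratio
re_epsp_over_eps = (omega / (math.sqrt(2) * eps)) * (im_A2 / re_A2 - im_A0 / re_A0)
print(f"Re(eps'/eps) ~ {re_epsp_over_eps:.2e}")    # ~1.8e-4 with these placeholders

With inputs of this size the result lands at the $10^{-4}$ level, which makes plain why a factor-of-2 reduction of the statistical error matters for confronting the $1.66(23)\times 10^{-3}$ experimental value.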
In Ref. [34] N. H. Christ reports on including electromagnetism in $K\to\pi\pi$ decay. The $\Delta I = 1/2$ rule may make the effects of electromagnetism on $A_2$ ~20 times larger than a naive $O(\alpha_e)$ estimate, due to mixing with $A_0$. Such effects will become important if the target of future calculations is to determine $\epsilon'$ with a precision of ~10%. In Ref. [35] M. Bruno presents a non-perturbative calculation of the Wilson coefficients, even including the W boson, using the technique of step scaling. Although the currently available lattice spacings restrict the calculation to unphysically light W bosons with $M_W \sim 2$ GeV, it opens a new direction for future non-perturbative determinations of the Wilson coefficients with controlled uncertainties. In addition to the RBC-UKQCD efforts on $K\to\pi\pi$ decay, N. Ishizuka et al. are running a parallel program using the improved Wilson fermion action [36]. As a first step to verify the feasibility of calculations with the Wilson fermion action, they consider the decay amplitudes at an unphysical quark mass, $M_K \sim 2M_\pi$. A large enhancement of the ratio $A_0/A_2$ is found at these unphysical quark masses.

Long-distance contributions to flavor-changing processes: ∆M_K and ε

Both the $K_L$-$K_S$ mass difference $\Delta M_K$ and the indirect CP-violating parameter $\epsilon$ are related to the mixing of $K^0$ and $\overline{K^0}$. Such mixing is caused by the weak interaction, since the strangeness of $K^0$ and $\overline{K^0}$ differs by 2. The time evolution of the $K^0$-$\overline{K^0}$ system is given by a Schrödinger-like equation with effective Hamiltonian $M - \frac{i}{2}\Gamma$, where $M$ is the mass matrix and $\Gamma$ the decay width matrix. These $2\times 2$ matrices are calculated to second order in the weak interaction,
$M_{ij} = M_K\,\delta_{ij} + \langle i|H_W|j\rangle + \mathcal{P}\sum_n \frac{\langle i|H_W|n\rangle\langle n|H_W|j\rangle}{M_K - E_n}$,
where the indices $i$ and $j$ take the values $0$ and $\bar 0$, $H_W$ is the $\Delta S = 1$ weak effective Hamiltonian, and $\mathcal{P}$ indicates that the principal part should be taken when an integral with a vanishing energy denominator is encountered. The mass matrix can be diagonalized; neglecting the effects of CP violation, the mass difference is given by the real part of $M_{\bar 00}$ through $\Delta M_K = 2\,{\rm Re}\,M_{\bar 00}$. The parameter $\epsilon$ is related to the imaginary part of $M_{\bar 00}$ and is given explicitly in terms of the short-distance and long-distance parts of ${\rm Im}[M_{\bar 00}]$ in Eq. (14).

Both $\Delta M_K$ and $\epsilon$ arise from an amplitude in which two W bosons and internal up-type quarks form a loop, as shown in Fig. 7. The loop integral is proportional to the internal quark mass squared, $m_q^2$ for $q = u, c, t$. As $\Delta M_K$ is related to ${\rm Re}[M_{\bar 00}]$, it is associated with the CP-conserving part of the $K^0$-$\overline{K^0}$ mixing amplitude. Although the top quark loop is enhanced by $m_t^2$, it suffers a significant suppression from the CKM factor $\lambda_t$, where $\lambda_q = V_{qd}V^*_{qs}$. Because $|{\rm Re}[\lambda_t^2]|$ is tiny compared to $|{\rm Re}[\lambda_c^2]|$, the contributions to $\Delta M_K$ are dominated by the charm-charm quark loop. Being sensitive to the charm quark mass, the $K_L$-$K_S$ mass difference historically led to the prediction of the charm quark fifty years ago [37-39]. The parameter $\epsilon$, on the other hand, is related to the CP-violating part of $K^0$-$\overline{K^0}$ mixing. There the charm quark contribution is significantly suppressed, as ${\rm Im}[\lambda_c^2] \ll {\rm Re}[\lambda_c^2]$, and the top-top, top-charm and charm-charm loops compete in size. As it contains an important top-top loop contribution, $\epsilon$ is sensitive to the Standard Model parameters $\lambda_t$ and $V_{cb}$.

As a follow-up to Refs. [41,42], a recent calculation of $\Delta M_K$ is performed on a 2+1-flavor $32^3\times 64$ Möbius domain wall lattice with the Iwasaki + DSDR gauge action. A near-physical pion mass $M_\pi = 170$ MeV and kaon mass $M_K = 492$ MeV are used. Since the calculation is performed at a coarse lattice spacing, $a^{-1} = 1.38$ GeV, the charm quark mass $m_c^{\overline{\rm MS}}(3\,{\rm GeV}) = 750$ MeV is unphysically light. The calculation includes all the contractions from Type 1 to Type 4 shown in Fig. 8.
Based on 120 configurations, the preliminary lattice result is $\Delta M_K = 3.85(46)\times 10^{-12}$ MeV, consistent with the experimental value $\Delta M_K = 3.483(6)\times 10^{-12}$ MeV [11]. However, since the calculation uses unphysical kinematics, this agreement could easily be fortuitous. Note that in the calculation of $\Delta M_K$ the loop integral involves a double Glashow-Iliopoulos-Maiani (GIM) cancellation [38], so there is no short-distance divergence. On the other hand, the double GIM subtraction makes $\Delta M_K$ depend significantly on the charm quark mass; it is therefore important to carry out the calculation at the physical charm quark mass.

Figure 7. $K^0$-$\overline{K^0}$ mixing in the full theory. $\Delta M_K$ is related to the CP-conserving part of $K^0$-$\overline{K^0}$ mixing and is thus long-distance dominated; the process is described by two $\Delta S = 1$ operators. $\epsilon$ is related to the CP-violating part of $K^0$-$\overline{K^0}$ mixing and is thus short-distance dominated; the dominant contribution is described by a single $\Delta S = 2$ operator, and the relevant hadronic matrix element can be converted to $B_K$. The remaining long-distance contribution below the scale of the charm quark mass has been calculated in Ref. [40].

A new RBC-UKQCD project, reported by C. Sachrajda [43], uses both physical pion and charm quark masses. The computation of $\Delta M_K$ is performed on a $64^3\times 128$ lattice with the Iwasaki gauge action and Möbius domain wall fermions at an inverse lattice spacing of 2.359(7) GeV. Various techniques, such as all-to-all propagators and all-mode averaging, are used to reduce the statistical uncertainty. Based on 59 configurations, the preliminary result $\Delta M_K = 5.5(1.7)\times 10^{-12}$ MeV is consistent with the experimental value. The project plans to collect 160 measurements in total.

The status and prospects of the determination of $\epsilon$ are updated by W. Lee in Refs. [44,45]. The estimate of $\epsilon$ is made using the FLAG value for $B_K$, the angle-only-fit results for the Wolfenstein parameters, and the CKM matrix element $V_{cb}$ from exclusive or inclusive decays. Here the exclusive $V_{cb}$ is determined using the experimental measurements of $\bar B\to D^*\ell\bar\nu$ and $\bar B\to D\ell\bar\nu$ together with the lattice QCD calculation of the corresponding hadronic matrix elements [10,46-48]; the inclusive $V_{cb}$ is determined using the inclusive decay process $\bar B\to X_c\ell\bar\nu$ and QCD sum rules [49]. When the exclusive $V_{cb}$ is used as input, there is a 3.3 $\sigma$ deviation between the Standard Model value and the experimental measurement $\epsilon_{\rm exp} = 2.228(11)\times 10^{-3}$. Moreover, $V_{cb}$ dominates the current 10% Standard Model uncertainty for $\epsilon$, so an accurate determination of $V_{cb}$ is important. It is also important to compute precisely the long-distance contribution to $\epsilon$, whose size is expected to be a few percent but remains not well understood.
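The quoted n-σ tensions are plain Gaussian pulls between independent determinations; a two-line helper makes the bookkeeping explicit. The SM central value and its ~10% uncertainty below are hypothetical placeholders, chosen only to land near the quoted 3.3 σ, since the preliminary numbers are not reproduced in the text.

import math

def tension(x1, dx1, x2, dx2):
    """Gaussian pull between two independent determinations."""
    return abs(x1 - x2) / math.hypot(dx1, dx2)

eps_exp = (2.228e-3, 0.011e-3)   # experimental epsilon_K
eps_sm  = (1.70e-3, 0.16e-3)     # hypothetical SM value with ~10% error
print(f"{tension(*eps_exp, *eps_sm):.1f} sigma")   # ~3.3 sigma

The asymmetry of the two error bars is the main message: the experimental uncertainty is negligible here, so the pull is controlled almost entirely by the SM error, which in turn is dominated by $V_{cb}$.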
To calculate the long-distance contribution to $\epsilon$, it is better to write the GIM cancellation by subtracting the charm quark propagator [40,41]: using the unitarity relation $\lambda_u + \lambda_c + \lambda_t = 0$, one writes $\sum_{q=u,c,t}\lambda_q S_q = \lambda_u(S_u - S_c) + \lambda_t(S_t - S_c)$, with $S_q$ the internal quark propagator. By doing so, the double GIM subtraction results in three terms in the effective Hamiltonian, with coefficients $\lambda_u^2$, $\lambda_u\lambda_t$ and $\lambda_t^2$, respectively. The $\lambda_u^2$ term is irrelevant for $\epsilon$; the $\lambda_t^2$ term is purely short-distance dominated. Therefore the only interesting term for the lattice QCD calculation is the $\lambda_u\lambda_t$ term. In the lattice calculation of the $\lambda_u\lambda_t$ contribution, the top quark field must be integrated out, leaving a QCD penguin operator, shown in Fig. 9. This QCD penguin operator can be neglected in the calculation of $\Delta M_K$, as it carries a suppression factor of $\lambda_t/\lambda_u$, but it is important for $\epsilon$. The QCD penguin operator together with the current-current operator forms a new Type 5 diagram. Without the top quark in the lattice calculation there is only a single GIM subtraction, and as a consequence the loop integral is logarithmically divergent. This divergence is cut off by an unphysical lattice scale, the inverse lattice spacing $1/a$. One can define a bilocal operator in the RI-SMOM scheme by subtracting the unphysical short-distance contribution, and then match the bilocal operator in the RI-SMOM scheme to the one in the $\overline{\rm MS}$ scheme using perturbation theory. More details on the short-distance correction can be found in Refs. [40,50,51].

The calculation of $\epsilon$ is performed on a $24^3\times 64$ lattice with domain wall fermions and the Iwasaki gauge action [40]. The inverse lattice spacing is $a^{-1} = 1.78$ GeV, the pion mass is 339 MeV and the kaon mass 592 MeV. It uses 200 configurations and includes all Type 1-5 diagrams. Table 2 gives the results for RI scales $\mu_{RI}$ ranging from 1.54 to 2.56 GeV; the $\mu_{RI}$ dependence is accounted for as a systematic uncertainty. At $\mu_{RI} = 2.11$ GeV, the long-distance contribution to $\epsilon$ is about 5% of the experimental value $\epsilon_{\rm exp} = 2.228(11)\times 10^{-3}$. To estimate the long-distance contribution accurately, the calculation needs to be performed at physical kinematics.

Figure 10. Examples of W-W and Z-exchange diagrams for $K^+\to\pi^+\nu\bar\nu$ decay.

Long-distance contributions to flavor-changing processes: rare kaon decays

Rare kaon decays have attracted increasing interest during the past few decades. As flavor-changing neutral current processes, these decays are highly suppressed in the Standard Model and thus provide ideal probes for the observation of new physics effects. In this review, I discuss the lattice QCD calculations of two classes of rare kaon decays: $K\to\pi\nu\bar\nu$ and $K\to\pi\ell^+\ell^-$ [50-57].

The $K^+\to\pi^+\nu\bar\nu$ decay is interesting because it receives its largest contribution from the top quark loop and is thus theoretically very clean. The required hadronic matrix elements can be obtained from leading-order semileptonic K decays, such as $K^+\to\pi^0 e^+\nu$, via an isospin rotation. The remaining long-distance contributions below the charm scale are expected to be a few percent. Though small, including the long-distance contribution estimated in Ref. [58] enhances the branching ratio ${\rm Br}(K^+\to\pi^+\nu\bar\nu)$ by 6%, which is comparable to the 8% total Standard Model uncertainty [59]. The currently known branching-ratio measurement [60] is a combined result based on the 7 events collected by BNL E787 [61-64] and its successor E949 [60,65]. Its central value is almost twice the Standard Model prediction [59], ${\rm Br}(K^+\to\pi^+\nu\bar\nu)_{\rm SM} = (9.11\pm 0.72)\times 10^{-11}$, but with a 60-70% uncertainty it is still consistent with the Standard Model. The new experiment, NA62 at CERN [66], aims at an observation of O(100) events and a 10%-precision measurement of ${\rm Br}(K^+\to\pi^+\nu\bar\nu)$. The status reported at the Flavor Physics and CP Violation workshop (FPCP 2017) is that the detector installation was completed in September 2016; 5% of the 2016 data has been analyzed, with no event found yet, and O(1) events are expected in the full 2016 data. Considering that the Standard Model predictions will soon be confronted with the new experiment, a lattice QCD calculation of the long-distance contribution to $K^+\to\pi^+\nu\bar\nu$ is timely.

Figure 11. Low-lying intermediate states contributing to $K^+\to\pi^+\nu\bar\nu$. As these states are related to exponentially growing unphysical contributions and potentially large finite-volume effects, the hadronic matrix elements for these low-lying intermediate states must be calculated from the relevant 2-point and 3-point functions.
There are two classes of diagrams contributing to $K^+\to\pi^+\nu\bar\nu$ decay, called W-W and Z-exchange diagrams. In the W-W diagrams the second-order weak transition proceeds through the exchange of two W bosons, while in the Z-exchange diagrams the decay occurs through the exchange of one W boson and one Z boson; examples of both classes are illustrated in Fig. 10. In a lattice QCD calculation the W and Z bosons have been integrated out, leaving two effective four-fermion local operators. The matrix element of the time-integrated bilocal operator is evaluated in Euclidean space. This matrix element can be related to the second-order amplitude of interest by inserting a sum over intermediate states and performing the integration over Euclidean time; schematically, the integration produces
$\sum_n \frac{\langle f|H_A|n\rangle\langle n|H_B|K\rangle}{M_K - E_n}\left(e^{(M_K - E_n)T} - 1\right)$,
plus the term with $H_A$ and $H_B$ interchanged, where $H_{A,B}(t)$ stand for the two four-fermion operators with the spatial variables integrated over space. The unphysical $e^{(E_K - E_n)T}$ terms vanish at large $T$ for intermediate states more energetic than the kaon; however, they grow exponentially with increasing integration range if $E_n < E_K$ and must be removed from the lattice calculation. When the intermediate state involves multiple particles, the branch-cut integral of the infinite volume is replaced by a discrete state summation in the finite volume, which can cause potentially large finite-volume effects when $E_n\to M_K$; these need to be corrected following Ref. [67]. To deal with the exponentially growing terms and the finite-volume effects, the matrix elements for the low-lying intermediate states must be calculated. These states include the leptonic $\ell^+\nu$, the semileptonic $\pi^0\ell^+\nu$, the single pion, and the isospin $I = 2$ $\pi^+\pi^0$ scattering state, as summarized in Fig. 11. The study of the long-distance contribution to $K^+\to\pi^+\nu\bar\nu$ decay therefore involves not only the calculation of a 4-point function but also all the relevant 2-point and 3-point functions for the low-lying intermediate states.
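The removal of the exponentially growing terms can be illustrated with a toy spectral model. In the Python sketch below, the spectrum and matrix elements are hypothetical numbers in lattice units, and the symmetrization over the two operator orderings is glossed over; the point is only that subtracting the measured low-lying exponentials leaves a quantity that converges as the integration range $T$ grows.

import numpy as np

MK = 0.50                                  # kaon "mass" (hypothetical, lattice units)
En = np.array([0.30, 0.45, 0.80, 1.20])    # intermediate-state energies
cn = np.array([0.02, 0.05, 0.10, 0.07])    # products <f|H_A|n><n|H_B|K>

def integrated(T):
    # sum_n c_n/(MK-En) * (e^{(MK-En)T} - 1): the time-integrated bilocal amplitude
    return np.sum(cn / (MK - En) * np.expm1((MK - En) * T))

def subtracted(T):
    # remove the exponentially growing pieces from states with En < MK,
    # using their separately measured c_n and En
    grow = En < MK
    return integrated(T) - np.sum(
        cn[grow] / (MK - En[grow]) * np.exp((MK - En[grow]) * T))

for T in (5.0, 10.0, 20.0):
    print(T, integrated(T), subtracted(T))
# integrated(T) blows up with T; subtracted(T) converges to the target below
print("target:", np.sum(cn / (En - MK)))

In an actual calculation the $c_n$ and $E_n$ for the states in Fig. 11 come from the dedicated 2-point and 3-point functions, which is precisely why those auxiliary measurements are needed.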
As the top quark contribution to the decay is completely short-distance dominated, one only needs to focus on the charm quark contribution. The first calculation is performed using the $16^3\times 32$, $N_f = 2+1$ domain wall fermion ensemble with $a^{-1} = 1.729(28)$ GeV [51]. This ensemble has pion and kaon masses of $M_\pi \sim 421$ MeV and $M_K \sim 563$ MeV, and the $\overline{\rm MS}$ charm quark mass is $m_c^{\overline{\rm MS}}(2\,{\rm GeV}) \sim 863$ MeV. Both the W-W and Z-exchange diagrams are logarithmically divergent and cut off by the unphysical scale $1/a$; as in the computation of $\epsilon$, a short-distance correction must be performed here [50]. The lattice results are shown in Fig. 12, where $P_c$ gives the complete charm quark contribution to the $K^+\to\pi^+\nu\bar\nu$ decay. The results from the W-W and Z-exchange diagrams, and their total, are shown in the left, center and right panels. The gray bands show the bilocal matrix element including the unphysical lattice artifacts; the red circles indicate the RI-renormalized bilocal contribution; the blue diamonds give the total charm contribution $P_c$; and the green squares show the difference between the lattice and perturbative results, $P_c - P_c^{\rm PT}$.

The exploratory lattice calculation with unphysical charm, down and up quark masses finds a small $P_c - P_c^{\rm PT}$, which results from a large cancellation between the W-W and Z-exchange amplitudes. It is important to determine whether such a large cancellation persists at physical quark masses.

Figure 13. Dependence of the form factor for the decay $K^+\to\pi^+\ell^+\ell^-$ upon $z = q^2/M_K^2$. The lattice data are fit to the linear form $V_+(z) = a_+ + b_+ z$.

Unlike the $K^+\to\pi^+\nu\bar\nu$ decay, the CP-conserving decays $K^+\to\pi^+\ell^+\ell^-$ and $K_S\to\pi^0\ell^+\ell^-$ receive their dominant long-distance contribution from the γ-exchange diagram. Although the loop integral in the γ-exchange diagram is quadratically ultraviolet divergent by power counting, electromagnetic gauge invariance reduces the divergence to logarithmic, and the GIM cancellation further renders it ultraviolet finite. In the γ-exchange process, the hadronic part of the amplitudes for the $K^+$ and $K_S$ decays can be written in terms of the electromagnetic transition form factors $V_{+,S}(z)$ [68], with $p_{K,\pi}$ the kaon/pion momenta, $q = p_K - p_\pi$, $z = q^2/M_K^2$ and $r_\pi = M_\pi/M_K$. The target for the lattice QCD calculation is to extract $V_{+,S}(z)$ from the bilocal hadronic matrix elements by building the relevant 4-point correlation functions. The strategy adopted in Ref. [53] is to use the conserved vector current to protect electromagnetic gauge invariance and to keep the charm quark as an active flavor to maintain the GIM cancellation.

The first exploratory calculation of the $K^+\to\pi^+\ell^+\ell^-$ decay [57] is performed on a $24^3\times 64$ lattice with domain wall fermions and the Iwasaki gauge action, with inverse lattice spacing $a^{-1} = 1.78$ GeV, a pion mass $M_\pi \sim 430$ MeV, a kaon mass $M_K \sim 625$ MeV and an $\overline{\rm MS}$ charm quark mass $m_c^{\overline{\rm MS}}(2\,{\rm GeV}) \sim 533$ MeV. Three momentum transfers are used, and a linear fit form $V_+(z) = a_+ + b_+ z$ is used to determine the momentum dependence of the form factor; the lattice data points together with the fit curve are shown in Fig. 13. Using 128 configurations, the results for $a_+$ and $b_+$ are $a_+ = 1.6(7)$, $b_+ = 0.7(8)$.
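The weighted linear fit itself is elementary. Here is a sketch with hypothetical data at three momentum transfers (the $z$ values, centrals and errors are made up for illustration; they are not the published data points):

import numpy as np

# hypothetical lattice data at three momentum transfers
z  = np.array([0.05, 0.15, 0.30])    # z = q^2 / M_K^2
V  = np.array([1.65, 1.72, 1.83])    # V_+(z) central values
dV = np.array([0.70, 0.65, 0.75])    # statistical errors

# weighted linear least squares for V_+(z) = a_+ + b_+ z
w = 1.0 / dV**2
A = np.vstack([np.ones_like(z), z]).T
cov = np.linalg.inv(A.T @ (w[:, None] * A))
a_plus, b_plus = cov @ A.T @ (w * V)
da, db = np.sqrt(np.diag(cov))
print(f"a+ = {a_plus:.2f} +/- {da:.2f},  b+ = {b_plus:.2f} +/- {db:.2f}")

With errors of the size quoted in Eq. (44), the fitted parameters come out with large uncertainties, which is why the sign question discussed next cannot yet be settled from the lattice side.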
The phenomenological study [69] decomposes the form factor into a linear form plus the unitarity ππ-loop correction, $V_+(z) = a_+ + b_+ z + V_+^{\pi\pi}(z)$, where $V_+^{\pi\pi}(z)$ is determined using chiral perturbation theory together with some model assumptions, such as the vector meson dominance model. The experimental measurements of the branching ratio together with the fit form (45) produce much more precise results [70,71]: $a_+ = -0.58(2)$ and $b_+ = -0.78(7)$ from the fit to the $K^+\to\pi^+ e^+ e^-$ spectra, and $a_+ = -0.58(4)$ using the $K^+\to\pi^+\mu^+\mu^-$ data. Note that these results carry the opposite signs to the lattice results in Eq. (44). Since the lattice calculation is performed at unphysical quark masses, a direct comparison of the results in Eqs. (44) and (46) is not very meaningful. On the other hand, since the experimental data only yield the square of the form factor and do not determine the sign of $V_+(z)$, the signs of $a_+$ and $b_+$ are completely fixed by the input $V_+^{\pi\pi}(z)$. However, it is found that the polynomial contribution (linear in $z$) dominates over the unitarity loop correction: for the $K^+\to\pi^+\mu^+\mu^-$ decay, the fit forms with and without the $V_+^{\pi\pi}$ correction produce almost the same fit curves. For the $K^+\to\pi^+ e^+ e^-$ decay, the fit curves differ at small $z$, where experimental data are not available [68]. It is therefore questionable to use $V_+^{\pi\pi}(z)$ to determine the signs of $a_+$ and $b_+$. It is important to perform a lattice QCD calculation at the physical quark masses, examine the phenomenological fit ansatz (45), and confirm the signs of $a_+$ and $b_+$.

A calculation at the physical point is highly demanding on computer resources: the physical pion mass requires a large lattice volume, while the physical charm quark mass requires an ultra-fine lattice spacing, in order to control both finite-volume effects and lattice artifacts. One solution is to improve the quark action to reduce the lattice artifacts for the charm quark; in Ref. [72] M. Tomii performs an exploratory study of the dispersion relation and the unphysical pole for Möbius domain wall fermions and seeks a way to improve the action. Another solution is to integrate out the charm quark field using perturbation theory, in which case the lattice calculation only requires the physical pion mass and a rather coarse lattice spacing. This would save computer resources quite significantly, but a drawback is that, with no GIM cancellation, the internal up quark loop becomes logarithmically divergent. In Ref. [73] A. Lawson discusses the renormalization that treats this short-distance divergence in the three-flavor theory.

Besides the CP-conserving $K^+\to\pi^+\ell^+\ell^-$ decay, it is also interesting to study the CP-violating $K_L$ decays. The $K_L$ decay amplitudes receive three major contributions: 1) a short-distance dominated direct CP violation; 2) a long-distance dominated, indirect CP-violating contribution through the $K_S$ component of $K_L$, $K_L\to \epsilon K_S\to\pi^0\ell^+\ell^-$; and 3) a CP-conserving component which proceeds through two-photon exchange. The total CP-violating contributions to the $K_L$ decay branching ratios, including 1), 2) and their interference, are given by [74,75]
${\rm Br}(K_L\to\pi^0 e^+e^-)_{\rm CPV} = 10^{-12}\times\left[15.7\,|a_S|^2 \pm 6.2\,|a_S|\left(\frac{{\rm Im}\,\lambda_t}{10^{-4}}\right) + 2.4\left(\frac{{\rm Im}\,\lambda_t}{10^{-4}}\right)^2\right]$,
${\rm Br}(K_L\to\pi^0\mu^+\mu^-)_{\rm CPV} = 10^{-12}\times\left[3.7\,|a_S|^2 \pm 1.6\,|a_S|\left(\frac{{\rm Im}\,\lambda_t}{10^{-4}}\right) + 1.0\left(\frac{{\rm Im}\,\lambda_t}{10^{-4}}\right)^2\right]$,
where ${\rm Im}\,\lambda_t \approx 1.35\times 10^{-4}$. The parameter $a_S$ is given by the $K_S$ transition form factor at zero momentum transfer, $a_S = V_S(0)$, and is a quantity of size O(1). The ± sign arises because only the magnitude of $a_S$ is determined from experiment; even a determination of the sign of $a_S$ from lattice QCD is therefore desirable.

Conclusion

The worldwide lattice QCD community has developed a successful kaon physics program; it even inspires the idea of constructing a CKM unitarity triangle purely from kaon physics [76]. Standard quantities such as $f_{K^\pm}/f_{\pi^\pm}$, $f_+(0)$ and $\hat B_K$ are computed with a precision of ~1 percent or much better, as shown in Table 3; in these cases lattice QCD calculations play an important role in precision flavor physics. With the development of lattice QCD techniques, it is also time to explore the non-standard quantities. Here I have reported the recent progress of the calculations of $K\to\pi\pi$ decay, the long-distance contributions to $\Delta M_K$ and $\epsilon$, as well as rare kaon decays. Lattice QCD is now capable of first-principles calculations of these non-standard quantities, some of them even at physical kinematics. We can foresee that, with new techniques and the new generation of supercomputers, today's non-standard observables will become standard in the near future.

Table 3. Summary of the FLAG averages of $f_{K^\pm}/f_{\pi^\pm}$, $f_+(0)$ and $\hat B_K$.
Genetic ablation of GINIP-expressing primary sensory neurons strongly impairs formalin-evoked pain

Primary sensory neurons are heterogeneous by a myriad of molecular criteria, but the functional significance of this remarkable heterogeneity is just emerging. We previously described GINIP+ neurons as a new subpopulation of non-peptidergic C-fibers encompassing the free-nerve-ending cutaneous MRGPRD+ neurons and C-LTMRs. Using our recently generated ginip mouse model, we were able to selectively ablate GINIP+ neurons and assess their functional role in somatosensation. We found that ablation of GINIP+ neurons affected neither the molecular contents nor the central projections of the spared neurons. GINIP-DTR mice exhibited impaired sensation to gentle mechanical stimuli applied to their hairy skin and had normal responses to noxious mechanical stimuli applied to their glabrous skin, under both acute and injury-induced conditions. Importantly, loss of GINIP+ neurons significantly altered formalin-evoked first pain and drastically suppressed the second pain response. Given that MRGPRD+ neurons have been shown to be dispensable for formalin-evoked pain, our study suggests that C-LTMRs play a critical role in the modulation of formalin-evoked pain.

Deciphering the functional specialization of molecularly defined subpopulations of neurons is one of the most challenging issues in today's neurobiology. Dorsal root ganglia (DRG) neurons represent a powerful model system to address this fundamental question. These neurons are highly heterogeneous by a myriad of morphological, anatomical and molecular criteria; the functional significance of this remarkable diversity is under intense investigation within the sensory biology community. For example, genetic ablation of MRGPRD+ neurons led to a selective deficit in noxious mechanical pain sensitivity without affecting noxious heat or cold sensation 1. Pharmacological ablation of TRPV1 central projections selectively abolished noxious heat but not cold or mechanical sensitivity 1. Interestingly, combined ablation of both subsets of neurons yielded an additive phenotype with no additional behavioral deficit 1. In line with these findings, developmental ablation of Nav1.8-expressing neurons altered multiple sensory modalities, including an almost complete absence of the second phase of formalin-evoked pain, demonstrating for the first time that primary sensory neurons play an important role in sensing and transducing formalin-evoked pain 2. Following this study, attempts to identify the specific subpopulation of neurons specialized in sensing and transducing formalin-evoked pain were unsuccessful. Indeed, it has been shown that ablation of MRGPRD- and TRPV1-expressing neurons, which together represent the vast majority of nociceptors, had no effect on formalin-evoked pain 3, suggesting that formalin-evoked pain can be triggered by a small subset of the neurons ablated in the Abrahamsen et al. study 2. We and others have shown that the low-threshold mechanoreceptors Aβ, Aδ and C-LTMRs and the MRGPRB4+ neurons express neither MRGPRD nor TRPV1 in mice [4-7], implying that these populations of neurons are likely involved in sensing and transducing formalin-evoked pain. Here we used our recently engineered versatile ginip mouse model, which allows an inducible and tissue-specific ablation of GINIP-expressing neurons.
We show that injection of diphtheria toxin selectively ablates MRGPRD+ neurons and C-LTMRs with no effect on Aβ and Aδ LTMRs or MRGPRB4+ neurons. Very interestingly, ablation of GINIP+ neurons significantly affected formalin-evoked first pain and strongly altered the second pain. As our genetic ablation approach selectively targets MRGPRD+ neurons and C-LTMRs, and knowing that MRGPRD+ neurons are dispensable for formalin-evoked pain, our results suggest that C-LTMRs play a critical role in formalin-evoked pain. Furthermore, in line with the selective ablation of C-LTMRs and the sparing of the hairy-skin-innervating Aβ and Aδ LTMRs, GINIP-DTR mice displayed a partial but significant defect in the detection of touch-evoked sensation. Surprisingly, in contrast to MRGPRD-DTR mice, dual ablation of C-LTMRs and MRGPRD+ neurons had no effect on acute and injury-induced mechanical sensitivity, suggesting that C-LTMR and MRGPRD fibers may antagonize each other in sensing mechanical stimuli.

Results

Tissue-specific and inducible ablation of GINIP-expressing neurons. In a recent study 6, we generated a versatile mouse model that allows ginip gene global inactivation and an inducible, tissue-specific ablation of GINIP-expressing neurons (Fig. 1A). To gain insights into the in vivo functional specialization of GINIP-expressing neurons, we crossed GINIP flx/+ mice with mice expressing the CRE recombinase from the Nav1.8 locus 2,8. GINIP flx/+;Nav1.8 cre/+ mice (hereafter GINIP-DTR mice) were indistinguishable from their WT littermates. Double-labeling experiments using anti-GINIP and anti-hDTR antibodies showed overlap between GINIP and hDTR expression in GINIP-DTR mice but not in wild type (hereafter GINIP+/+ mice) or in GINIP flx/+ mice (Fig. 1B). These data demonstrate that CRE recombination occurs with high fidelity and specifically targets neurons that drive expression of hDTR from the ginip locus. Diphtheria toxin (DT) injection had no effect on GINIP+ neurons in GINIP+/+ mice and led to a selective and specific ablation of all GINIP+ neurons in GINIP-DTR mice, without affecting the neighboring neurons expressing TrkA (Fig. 1C). To further characterize the selective ablation of GINIP-expressing neurons in GINIP-DTR mice, we performed a thorough quantitative and qualitative analysis of L4 DRGs using the pan-neuronal marker SCG10 in combination with a variety of DRG neuronal markers (Fig. 2A).

(Figure legends.) (B) Expression of hDTR is restricted to GINIP+ neurons. Double immunostaining using goat anti-hDTR (red) and rat anti-GINIP (green) antibodies on DRG sections from GINIP-DTR, GINIP fl/+ and GINIP+/+ littermates. hDTR expression is restricted to GINIP+ neurons, only in GINIP-DTR mice. Scale bar: 100 μm. (C) Injection of DT induced selective ablation of GINIP+ neurons only in GINIP-DTR mice. Double immunostaining using rabbit anti-TrkA (red) and rat anti-GINIP (green) antibodies shows a selective loss of GINIP+ neurons in GINIP-DTR mice without affecting TrkA+ neurons. In-situ hybridization on DRG sections using antisense probes for genes that are known to be expressed in GINIP+ neurons (red). Each in situ hybridization is followed by immunostaining using rat anti-GINIP (green) to confirm the ablation of GINIP+ neurons in GINIP-DTR mice. Scale bar: 100 μm. (C) In-situ hybridization on DRG sections using antisense probes for genes that are known to be excluded from GINIP+ neurons (red).
Each in situ hybridization is followed by immunostaining using rat anti-GINIP (green) to confirm the ablation of GINIP+ neurons in GINIP-DTR mice. Scale bar: 100 μm.

Consistent with the previously described percentage of GINIP-expressing neurons in L4 ganglia, we found a 36% decrease in the total number of DRG neurons in GINIP-DTR mice (8367 ± 541 for GINIP+/+ mice and 5360 ± 784 for GINIP-DTR mice, n = 3) (Fig. 2A). Accordingly, the total number of Ret+ neurons decreased by 60% in GINIP-DTR mice (3316 ± 446 for GINIP+/+ mice and 1326 ± 192 for GINIP-DTR mice, n = 3), whereas quantification of TrkA+ neurons showed no difference between GINIP-DTR and GINIP+/+ mice (Fig. 2A). Consistently, molecular markers that are expressed in GINIP+ neurons, such as GFRα2, MrgprD, TH, Tafa4, TRPA1 low-expressors, MrgprA3, Gα14 and the small-diameter Ret+ neurons, were massively or completely absent in DT-injected GINIP-DTR mice (Fig. 2B), whereas those that are excluded from GINIP+ neurons, such as TrkA, TrkB, TrkC, the subsets of Ret+ neurons expressing GFRα1 and GFRα3, Piezo2, CGRP and MrgprB4, were unaffected (Figs 2C and 3A). In line with these data, the dorsal horn spinal projection of CGRP afferents, most of which express TrkA, occurs normally in DT-injected GINIP-DTR mice, whereas there was a massive decrease of IB4 afferent projections in lamina II of the dorsal horn of the spinal cord (Fig. 3B). Interestingly, the IB4 afferents innervating the most lateral part of the spinal cord, known to express MRGPRB4 4, are present in both genotypes (Fig. 3B). Finally, the laminar organization of the dorsal horn appears normal, as the distribution of PKCγ+ interneurons remains intact in GINIP-DTR mice (Fig. 3B). Very importantly, GINIP+ neurons in the brain were not affected by DT injection in GINIP-DTR mice, as ginip transcripts are detected in GINIP+/+ as well as in GINIP-DTR brain slices (Fig. 3C). Altogether, these data show that our mouse model allows a highly controlled, tissue-specific and inducible ablation of GINIP+ neurons, and suggest that the spared neurons undergo no changes at either the molecular or the anatomical level, thus opening the possibility to unravel the functional specialization of GINIP-expressing neurons in somatosensation in adult mice.
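As a sanity check on the counting, the quoted percentages follow directly from the totals above. A minimal Python sketch (the SEM propagation assumes independent errors, which is my simplification):

import math

def fractional_loss(n_ctrl, sem_ctrl, n_abl, sem_abl):
    """Percent loss, with SEMs combined in quadrature on the ratio."""
    ratio = n_abl / n_ctrl
    dratio = ratio * math.hypot(sem_ctrl / n_ctrl, sem_abl / n_abl)
    return 100 * (1 - ratio), 100 * dratio

print(fractional_loss(8367, 541, 5360, 784))   # total neurons: ~36% loss
print(fractional_loss(3316, 446, 1326, 192))   # Ret+ neurons:  ~60% loss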
GINIP-expressing neurons are dispensable for temperature sensation. To gain insights into the functional role of GINIP+ neurons in somatosensation, we subjected GINIP-DTR mice to a large battery of somatosensory tests under acute and tissue- or nerve-injury conditions. GINIP-DTR mice have a normal body weight and behave normally in the open-field and rotarod tests, demonstrating that loss of GINIP+ neurons has no impact on motor activity or anxiety-like behaviors (Fig. 4A and B). We then subjected both genotypes to a variety of thermal tests, including the hot and cold plates and the thermal gradient test. In these paradigms, GINIP-DTR mice behaved the same way as their GINIP+/+ littermates, suggesting that GINIP+ neurons are dispensable for the detection of temperature (Fig. 4C,D,E).

Ablation of GINIP-expressing neurons causes a slight alteration of gentle touch sensation but not of noxious or injury-induced mechanical sensitivity. We next tested the consequences of GINIP+ neuron ablation on mechanosensation. Given that C-LTMRs massively innervate the hairy part of the skin, we used the tape response assay to test how GINIP-DTR mice would react to a gentle mechanical stimulus applied to their hairy skin. In this assay, both genotypes had the same latency to the first response; however, GINIP-DTR mice exhibited significantly fewer attempts to remove the tape from their back in comparison to control mice (Fig. 5A; GINIP+/+ 58.3 ± 7.2 bouts, n = 9, and GINIP-DTR 36.3 ± 4.6 bouts, n = 11). In a previous study, Cavanaugh and colleagues demonstrated that MRGPRD+ neurons play a critical role in acute and inflammation-induced mechanical pain 1. Given that GINIP+ neurons encompass MRGPRD+ neurons and C-LTMRs 6, we analyzed the mechanical sensitivity of GINIP-DTR mice under acute, inflammatory and nerve-injury conditions using the von Frey test. We found no differences in acute mechanical thresholds between GINIP+/+ and GINIP-DTR mice before and after DT injection (Fig. 5B). We also found that Complete Freund's Adjuvant (CFA)- and chronic constriction nerve injury (CCI)-induced mechanical sensitivity of GINIP-DTR mice was similar to that of their GINIP+/+ littermates (Fig. 5C and D). These data demonstrate that dual ablation of C-LTMRs and MRGPRD+ neurons does not recapitulate the acute and CFA-induced mechanical hyposensitivity caused by selective ablation of MRGPRD+ neurons alone, and suggest that C-LTMRs and MRGPRD+ neurons might play antagonistic roles in the modulation of acute and inflammation-induced mechanical sensitivity.

GINIP-expressing neurons are required for formalin-evoked pain hypersensitivity. The formalin test is a widely used chemical test in pain research; however, the molecular mechanisms and the neuronal subpopulations underlying the nocifensive behavior triggered by formalin are largely unknown. Recent studies strongly suggested that a yet-to-be-identified small subset of DRG neurons is required for formalin-evoked pain 2,3. In GINIP-DTR mice, intraplantar injection of 10 μl of 2% formalin triggered a significant decrease in the formalin-evoked pain response during the first phase and a nearly complete absence of the second-phase pain response (Fig. 5E). Of note, ablation of MRGPRD+ neurons alone, or together with TRPV1+ neurons, had no effect on formalin-evoked pain 3. Combined ablation of MRGPRD+ neurons and C-LTMRs led to a drastic deficit in formalin-evoked second-phase pain hypersensitivity, indicating that GINIP-expressing neurons, most likely the C-LTMRs, are required for this pain process. This result is consistent with our previous finding that loss of TAFA4, a C-LTMR-enriched chemokine-like protein, led to enhanced formalin-evoked pain specifically during the second phase 9.

Discussion

In this study, we used a genetic approach to selectively ablate GINIP+ neurons, which encompass two distinct subpopulations of cutaneous primary sensory neurons: MRGPRD+ neurons and C-LTMRs. We show that ablation of GINIP+ neurons affected neither the molecular contents nor the central projections of the spared neurons in GINIP-DTR mice, suggesting no detectable compensatory molecular or anatomical plasticity due to the lack of GINIP-expressing neurons and opening the possibility to unravel the functional specialization of GINIP+ neurons. MRGPRD+ neurons have been described to play a critical role in mechanical pain 1,3, whereas C-LTMRs serve a dual function: they sense gentle touch under normal conditions 10 and contribute to mechanical pain under pathological conditions 9.
GINIP-DTR mice had normal acute and inflammation-induced mechanical sensitivity, exhibited a slight abnormality in sensing gentle touch and nerve-injury-induced mechanical pain, and displayed a drastic alteration of formalin-evoked pain. The formalin test is a valid, reliable and tonic model of continuous pain 11; however, the neuronal subpopulations underlying the nocifensive behavior triggered by formalin are largely unknown. Genetic ablation of Nav1.8-expressing neurons completely abolished the second phase of formalin-evoked pain 2, demonstrating that DRG neurons contribute largely to the prototypical biphasic pain response evoked by formalin injection. A follow-up study by Shields and colleagues showed that MRGPRD+ and TRPV1+ neurons, both of which were largely eliminated in Nav1.8-DTA mice, were dispensable for formalin-evoked pain 3, demonstrating that formalin-evoked nocifensive behavior requires a small population of primary sensory neurons. Here, we show that ablation of GINIP+ neurons led to a significant decrease in the first phase and a nearly complete abolition of the second phase of formalin-evoked pain. Given that GINIP is expressed in MRGPRD+ neurons and in C-LTMRs, and that this protein is totally excluded from TRPV1+, MRGPRB4+, Aβ and Aδ low-threshold mechanoreceptors 6, our results suggest that C-LTMRs likely represent the subpopulation of neurons that contributes to the modulation of formalin-evoked pain.

How could a population of neurons known to exclusively innervate the hairy skin modulate pain evoked by an inflammatory agent injected into the glabrous skin? The most plausible explanation lies in the type of response that formalin injection triggers in mice. Upon formalin injection, mice vigorously shake their paw; they also grab it and intensely lick it from all sides, sometimes up to the tibial area. This behavior, which consists of strong and repetitive innocuous mechanical stimuli applied to both the glabrous and the hairy skin of the hind paw, will activate low-threshold mechanosensory neurons, including C-LTMRs. The next question is how these shaking, licking and biting behaviors modulate formalin-evoked pain. The answer is depicted in our working model shown in Fig. 6, which is largely inspired by a recent short review by Arcourt and Lechner 12. In this model, we propose that C-LTMRs contact a first inhibitory interneuron, which is connected to a second inhibitory interneuron, itself connected to an excitatory interneuron. With such a model, in WT mice, mechanical activation of C-LTMRs leads to the release of glutamate and TAFA4, which exert opposing actions: glutamate activates the first inhibitory interneuron, which represses the inhibitory tone of the second inhibitory interneuron onto the excitatory interneuron, thus promoting formalin-evoked pain; TAFA4, on the other hand, limits the glutamate-mediated activation of the first inhibitory interneuron, thereby controlling the intensity of formalin-evoked pain. Accordingly, in TAFA4 knock-out mice, the TAFA4 modulatory effect is no longer exerted, leading to exacerbated formalin-evoked second pain 9. In GINIP-DTR mice, loss of C-LTMRs fails to activate the first inhibitory interneuron, freeing the second inhibitory interneuron to silence the excitatory interneuron, thus decreasing the first pain and preventing the onset of the second phase of formalin-evoked pain.
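The disinhibition logic of this working model can be written down as a toy steady-state rate model. Everything below is hypothetical, my own illustration of the circuit described above rather than a physiological simulation: threshold-linear units, arbitrary drive levels, and a tonically active second interneuron.

import numpy as np

relu = lambda x: np.maximum(x, 0.0)   # threshold-linear unit (hypothetical)

def pain_output(c_ltmr_drive, tafa4, nociceptor_drive):
    # inhibitory interneuron 1: driven by C-LTMR glutamate, damped by TAFA4
    inh1 = relu(c_ltmr_drive - tafa4)
    # inhibitory interneuron 2: tonically active, repressed by interneuron 1
    inh2 = relu(1.0 - inh1)
    # excitatory interneuron: relays nociceptor input unless interneuron 2 silences it
    return relu(nociceptor_drive - inh2)

print("WT:        ", pain_output(1.0, 0.3, 1.5))  # moderate formalin-evoked pain
print("TAFA4 KO:  ", pain_output(1.0, 0.0, 1.5))  # exacerbated (no TAFA4 damping)
print("GINIP-DTR: ", pain_output(0.0, 0.0, 1.5))  # suppressed second phase

The three printed values reproduce the ordering the model predicts: TAFA4-KO > WT > GINIP-DTR, matching the exacerbated second phase in TAFA4-null mice and its near-abolition after C-LTMR loss.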
Further investigations aimed at confirming and consolidating this putative working model are warranted. Our behavioral studies also showed that GINIP-DTR mice exhibit slightly impaired gentle touch sensation, further consolidating the role of C-LTMRs in light touch sensation. They also revealed that GINIP-DTR mice display normal acute and injury-induced mechanical sensitivity.

(Figure 5 legend, continued.) (C) No difference in CFA-induced mechanical hypersensitivity between GINIP-DTR mice and GINIP+/+ littermates (n = 10 and 11, respectively). (D) GINIP-DTR mice and GINIP+/+ littermates developed a clear CCI-induced mechanical hypersensitivity during the first two weeks post injury, with no significant difference (two-way RM ANOVA, t = 0.188, p = 0.854) between genotypes (n = 9 and 8, respectively). (E) Impaired formalin-evoked pain in DT-injected GINIP-DTR and GINIP+/+ littermate mice (n = 11 and 12, respectively). The GINIP-DTR response to formalin-evoked pain is drastically altered, with a moderate first phase (p = 0.006) and an almost complete abolition of the second phase (p < 0.001), compared to the biphasic GINIP+/+ response.

Impressively, this latter phenotype is opposite to that described by Cavanaugh and colleagues, who showed that selective ablation of MRGPRD+ neurons caused strong mechanical hyposensitivity under acute and CFA-induced inflammatory conditions 1, suggesting that C-LTMRs and MRGPRD+ neurons play antagonistic roles in modulating acute and injury-induced mechanical sensitivity. Support for this hypothesis can be found in different studies: Zhang and colleagues 13 showed that ablation of MRGPRD+ neurons reduced the firing of superficial dorsal horn nociceptive-specific neurons in response to graded mechanical stimulation, and Lu and Perl 14 identified a neural circuit in the substantia gelatinosa in which innocuous impulses activating C-LTMRs suppress nociceptive inputs. Based on our model, we can postulate that under inflammatory and nerve-injury conditions, C-LTMR-derived TAFA4 becomes dominant over glutamate, reducing glutamate-mediated activation of the first inhibitory interneuron, leading to disinhibition of excitatory interneuron 2 and increased mechanical hypersensitivity. In the absence of MRGPRD+ neurons and C-LTMRs, the gate is open for the violet primary sensory neuron to activate excitatory interneuron 2, explaining the opposite phenotypes of MRGPRD-DTR and GINIP-DTR mice.

In conclusion, although we ablated two distinct subpopulations of neurons, our study strongly suggests that C-LTMRs are likely the subpopulation of neurons responsible for the modulation of formalin-evoked pain, and it consolidates a previous study suggesting that C-LTMRs negatively modulate inputs from nociceptors through an excitatory drive onto GABAergic interneurons in lamina II. Our study also encourages finding the best genetic approach to selectively eliminate C-LTMRs.

Materials and Methods

Mice. Mice were maintained under standard housing conditions (23 °C, 40% humidity, 12-h light cycles, and free access to food and water). GINIP flx/+ mice were previously generated in the laboratory 6. Special efforts were made to minimize the number as well as the stress and suffering of the mice used in this study.
All protocols are in agreement with European Union and national recommendations for animal experimentation and have been approved by "le ministère de l'éducation nationale, de l'enseignement supérieur et de la recherche" under the reference number APAFIS#1537-2015070217242262v6. Diphtheria toxin (20 μg/kg) was injected i.p. on two days separated by 72 h. Behavioral tests were performed 2 to 4 weeks after the initial DT injection.

[Figure 6 legend excerpt: In line with this, TAFA4 null mice display an exaggerated response to formalin during the second phase. Our model also provides a rational explanation of how C-LTMRs regulate noxious mechanical information flow from nociceptors, as previously described by Lu and Perl. In GINIP-DTR mice (B), loss of C-LTMRs opens the gate for inhibitory interneuron 2 to exert a strong inhibitory tone on excitatory interneuron 3, leading to abolition of formalin-evoked pain. Our model predicts that loss of C-LTMRs would exacerbate acute and injury-induced mechanical sensitivity. Cavanaugh et al. showed that loss of MRGPRD + neurons led to acute and inflammation-induced mechanical hyposensitivity. In this study, mice lacking MRGPRD + neurons and C-LTMRs exhibited normal mechanical sensitivity, suggesting that the hyposensitivity due to loss of MRGPRD + neurons is counterbalanced by the hypersensitivity due to loss of C-LTMRs. A definite answer to this hypothesis will be provided by the selective ablation of C-LTMRs.]

In situ hybridization and immunofluorescence. In situ hybridization and immunofluorescence were carried out following standard protocols 15 . To obtain adult tissues, animals were deeply anesthetized with a mix of ketamine/xylazine and then transcardially perfused with an ice-cold solution of 4% paraformaldehyde in PBS. Then, DRGs and spinal cord were dissected and post-fixed overnight in the same fixative at 4 °C. Tissues were then transferred into a 30% (w/v) sucrose solution for cryoprotection before being frozen 24 h later and stored at −80 °C. Samples were sectioned at 12 μm (DRG sections) or 16 μm (spinal cord sections) using a standard cryostat (Leica).

Cell counts and statistical analysis. We adopted a strategy that has been previously validated for DRG cell counts 16 . Briefly, 12 μm serial sections of thoracic DRG were distributed on 6 slides, which were probed with different markers including the pan-neuronal marker SCG10. This approach allowed us to refer all counts to the total number of neurons (SCG10 +). For each genotype, lumbar (L4) DRG were counted in three independent animals. All cell counts were conducted by an individual who was blind to mouse genotypes. Statistical significance was set to p < 0.05 and assessed using one-way ANOVA followed by unpaired t-tests.

Behavioral assays. All behavior analyses were conducted on littermate males aged 8-10 weeks. Animals were acclimated for one hour to their testing environment prior to all experiments, which were performed at room temperature (~22 °C). Experimenters were blind to the genotype of the mice during testing. The number of tested animals is indicated in the figure legends section.
Statistical significance was set to p < 0.05 and assessed using one-way ANOVA followed by unpaired t-tests (for the open-field and tape tests), two-way ANOVA followed by post-hoc Bonferroni t-tests (for the gradient assay), or two-way repeated measures ANOVA followed by post-hoc Bonferroni t-tests (for the rotarod, hot and cold tests, formalin test, and the CFA and CCI pain models) using SigmaPlot 12.5 software. All error bars represent the standard error of the mean (SEM). The gradient, thermal plate, open-field and von Frey apparatuses were from Bioseb Instruments, France.

Open-field test. The open-field test is commonly used to assess locomotor, exploratory and anxiety-like behavior. It consists of an empty, brightly lit square arena (40 × 40 × 35 cm) surrounded by walls to prevent the animal from escaping. The animals were individually placed in the center of the arena, their behavior was recorded with a video camera over a 5 min period, and the time spent in the corners versus the center of the arena was measured.

Rotarod test. A rotarod apparatus (LSI Letica Scientific Instruments) was used to explore coordinated locomotor and balance function in mice. Mice were placed on a rod that slowly accelerated from 4 rpm to 44 rpm over 5 min, and the latency to fall off during this period was recorded. The test was performed on four consecutive days. Each day, the animals were tested three times, separated by at least a 5 min rest period.

Temperature gradient assay. The temperature gradient assay was performed as described previously 17 . Briefly, mice were individually video tracked for 90 min in four separate arenas of the thermal gradient apparatus (Bioseb). A controlled and stable temperature gradient of 14 °C to 55 °C was maintained using two Peltier heating/cooling devices positioned at each end of the aluminum floor. Each arena was virtually divided into 15 zones of equal size (8 cm), each with a distinct and stable temperature. Floor temperature was measured with an infrared thermometer (Bioseb). The tracking was performed using a video camera controlled by the software provided by the manufacturer.

Hot plate test. To assess heat sensitivity, mice were placed individually on a metal surface maintained at 48, 50 or 52 °C, and the latency to a nociceptive response (licking or shaking of the hind paws, or jumping) was measured. To prevent tissue damage, mice were removed from the plate immediately after a nociceptive response, or cut-offs of 90 s, 60 s and 45 s were applied, respectively. Each mouse was tested three times with a 5 min interval between tests. The withdrawal time corresponds to the mean of the three measures.

Cold plate test. To test cold sensitivity, mice were placed individually on a metal surface maintained at 22, 10, 4 or 0 °C. The rearing time of the mice was monitored for one minute. Each mouse was exposed three times to each temperature, with a minimum 5 min rest period between trials and one hour separating the testing periods at different temperatures.

Tape response assay. This test was performed as described by Ranade and colleagues 18 . Briefly, a 3 cm piece of tape was gently applied to the back of the mouse. Mice were then observed for 5 minutes and the total number of responses to the tape was counted. A response was scored when the mouse stopped moving and bit or scratched the piece of tape, or showed a visible "wet dog shake" motion in an attempt to remove the foreign object from its back.

Formalin test. Mice were housed individually in Plexiglass chambers 20 min before injection.
Following intraplantar injection of 10 μl of a 2% formalin solution (Fisher Scientific) into the left hind paw, the time spent shaking, licking or lifting the injected paw was monitored for 60 min and analyzed at 5 min intervals.

Von Frey test of mechanical threshold. Mice were placed in plastic chambers on a wire mesh grid and stimulated with von Frey filaments (Bioseb) using the up-down method 19 , starting with the 1 g filament and using the 2.0 g filament as the cutoff value. Baseline measures of untreated or DT-treated WT or GINIP-DTR mice were performed on separate lots.

Complete Freund's Adjuvant (CFA)-induced mechanical allodynia. We made an intraplantar injection of 10 μl of a 1:1 saline/CFA (Sigma, St. Louis, MO, USA) emulsion with a 30-gauge needle and measured mechanical thresholds one, three and seven days after the injection using von Frey filaments and the up-down method.

Unilateral peripheral mononeuropathy. For the chronic constriction of the sciatic nerve (CCI) model, unilateral peripheral mononeuropathy was induced in mice anaesthetized with ketamine (40 mg/kg i.p.) and xylazine (5 mg/kg i.p.) by tying three chromic gut (4-0) ligatures loosely (with about 1 mm spacing) around the common sciatic nerve 20 . The nerve was constricted to a barely discernible degree, so that circulation through the epineural vasculature was not interrupted 21 . For the chronic constriction model, mechanical allodynia was assessed before the surgery, three and seven days post-surgery, and then once a week, using the up-down von Frey filament method.
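As an illustration of the up-down procedure described above, the following sketch implements the filament-stepping logic in Python. The filament set below is an assumed example, and the threshold is estimated as the mean force at response reversals, a deliberate simplification of the Dixon/Chaplan lookup-table formula used in the cited method 19 .

```python
# Simplified sketch of the up-down von Frey procedure: withdrawal -> next
# weaker filament, no withdrawal -> next stronger, capped at the 2.0 g cutoff.
# FILAMENTS is an assumed example set; the 1 g start and 2 g cutoff follow the text.
import statistics

FILAMENTS = [0.04, 0.07, 0.16, 0.4, 0.6, 1.0, 1.4, 2.0]  # grams

def up_down_threshold(withdraws, start=1.0, max_trials=9):
    """Estimate a 50% withdrawal threshold.

    withdraws: callable taking a force in grams and returning True if the
    mouse withdraws its paw on that trial.
    """
    idx = FILAMENTS.index(start)
    outcomes = []                              # (force, withdrew) per trial
    for _ in range(max_trials):
        force = FILAMENTS[idx]
        withdrew = withdraws(force)
        outcomes.append((force, withdrew))
        idx = max(idx - 1, 0) if withdrew else min(idx + 1, len(FILAMENTS) - 1)
    # Simplified estimator: mean force at response reversals.
    reversals = [outcomes[i][0] for i in range(1, len(outcomes))
                 if outcomes[i][1] != outcomes[i - 1][1]]
    return statistics.mean(reversals) if reversals else outcomes[-1][0]

# Example with a deterministic toy animal whose true threshold is 0.6 g:
print(up_down_threshold(lambda f: f >= 0.6))   # estimate near 0.5 g
```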
Characterization and imaging of surgical specimens of invasive breast cancer and normal breast tissues with the application of Raman spectral mapping: A feasibility study and comparison with randomized single-point detection method

A mapping technique was used in the present study to explore the biological and imaging characteristics of invasive breast cancer and normal breast tissues in Raman examination data and to construct a diagnostic model for breast cancer. Raman examination data reflect the biochemical and molecular characteristics of the target tissues. A total of 45 specimens from patients with breast cancer who underwent surgery and 25 adjacent normal breast tissue specimens were included in the present study. Using the specimens, a total of 53 sets of mapping data and 2,597 pieces of Raman spectral data were obtained. The collected spectra were corrected and fitted, the Raman spectra were analyzed by robust statistical methods, and a diagnostic model was constructed using the k-Nearest Neighbor (KNN) method. The KNN classification method was applied to analyze the characteristics of the mapping test application. The percentage of outliers in the mapping data for malignant and normal breast tissues was 12.7 and 6.6%, respectively. The percentage of outlier data in the conventional single-point detection data for malignant and normal breast tissues was 24.5 and 26.0%, respectively. Analysis using a t-test identified a significant difference in the number of outliers between mapping and single-point detection for malignant (t=−6.169; P<0.001) and normal breast tissues (t=−8.873; P<0.001). Based on the mapping data, the accuracy, sensitivity and specificity for breast cancer detection by the diagnostic model constructed using the KNN method were 99.56, 96.6 and 98.48%, respectively. The positive and negative predictive values of this model were 99.56 and 89.04%, respectively. The data obtained by mapping technology demonstrated improved stability and contained fewer outliers compared with single-point detection. The diagnostic model constructed using the mapping data demonstrated excellent diagnostic performance and good correspondence with pathological results. The findings of the present study demonstrated the feasibility of applying the diagnostic model for intraoperative real-time imaging in patients with breast cancer. This study provides a foundation for Raman spectroscopy-based diagnostic imaging at the molecular level.

Introduction

Breast-conserving surgery (BCS) for patients with breast cancer is gaining popularity due to the comparable survival outcomes of BCS and modified radical mastectomy (MRM) demonstrated by several studies (1). Currently, BCS accounts for ~60% of all breast cancer surgeries in the USA compared with ~20% in China (2,3). The most important aspect of BCS is complete resection of the lesion area to ensure that the surgical margin is negative, as a positive surgical margin is associated with a higher local recurrence rate. At present, there is a lack of rapid and accurate diagnostic methods for breast cancer, and pathological diagnosis is the most common method used in clinical settings (4). However, histopathological examination of paraffin sections is a time- and resource-intensive procedure. In addition, it does not meet the requirements of rapid diagnosis and may result in a higher rate of secondary surgery (5).
Intraoperative frozen section pathology is relatively fast; however, its diagnostic accuracy is limited due to the small amount of tissue obtained (6). Thus, the development of a rapid, convenient and accurate diagnostic method is required to improve the outcomes of BCS and to minimize the chances of secondary surgery. Raman spectra are generated by inelastic scattering of photons by molecules. Each molecule has a set of energy levels, and the scattered spectrum depends on the energy transfer between the molecule and the photons. By analyzing the relative strength and position of each characteristic peak in the Raman scattering spectrum, information on the biochemical composition of the sample can be obtained (7). Raman spectroscopy is often used for the detection and imaging of crystals (8,9). Raman spectroscopy diagnosis at the molecular level offers the advantage of possible detection of lesions prior to the development of overt morphological changes; therefore, it may facilitate identification of the tumor edge and help reduce the local recurrence rate (10). Raman spectra can provide molecular vibration and rotation information (11,12). Raman spectroscopy is a simple, fast and sensitive method that can directly analyze tissues without any preprocessing and offers the potential for rapid diagnosis of the pathological features of tissues during surgery (13). Raman spectroscopy is promising for clinical application and has become a research hotspot for the rapid diagnosis of breast tumors (14)(15)(16)(17). Raman spectroscopy identifies molecular vibrations and so allows breast cancer tissue to be distinguished from surrounding normal tissue using mathematical processing and computer modeling (18). The majority of recent studies have generated data based on random single-point detection or targeted random-point detection (12,19). Some researchers have used surface-enhanced Raman spectroscopy and other surface enhancement methods to strengthen the signal and obtain improved results (20). For unprocessed fresh breast cancer tissues, the heterogeneity of the biochemical components in the tissues and the blindness of the detection location lead to large differences in the Raman data obtained from the detected tissues; such results are difficult to analyze and to use for constructing diagnostic models (14,21), even with mathematical methods such as linear regression, partial least squares and artificial neural networks. During the test, a laser beam is emitted from the microscope of the Raman instrument. Since the laser detection point is very small, the beam may focus on the nucleus, the cytoplasm, an organelle or the interstitium during the detection process, thus not achieving the ideal result (21). The peak assignments and the differences in Raman spectra between cancer and normal tissues have been explored and described by numerous studies (14,15,22,23). Raman data from single-point detection may be used to distinguish benign from malignant breast tissue; however, they can only provide chemical information about the tissue. Since the obtained tissue often contains a number of different components with similar spectral characteristics, it is difficult to provide reliable information about the tissue structure (15). The Raman spectroscopy mapping method can provide information pertaining to the chemical composition as well as the spatial organization of the tissue by helping identify specific areas.
The analysis results obtained using mapping display the morphological structure inside the tissue and the spatial distribution of the relative content of biochemical components, such as DNA and actin (24,25). Yu et al (25) successfully identified normal cells and cancer cells using Raman mapping spectra, which indicated the feasibility of using the mapping technique for the detection of breast cancer tissues. Breast tissue is inherently heterogeneous. The fingerprint characteristics of Raman spectra also enable the analysis of Raman spectral data of breast cancer tissues to provide in-depth information on the malignant transformation process of breast tissues. Several research groups have investigated the effectiveness of Raman imaging in clinical diagnosis (13,24). However, Raman spectroscopy has not been used to image breast tissue. Our earlier studies showed that the single-point Raman detection technique could distinguish normal breast tissue, breast cancer tissue and benign breast tissue (15). However, due to the heterogeneity of tissues and detection methods, the obtained data varied greatly and showed poor regularity. In order to improve the stability of the Raman data obtained in the present study, Raman spectral mapping detection was used to minimize the influence of tissue structure and human factors (for example, slight movement of the microscope lens after fixation) on the results. This technique was used to construct a breast cancer diagnostic model, conduct imaging analysis and create pseudocolor Raman maps. A map similar to the pathological hematoxylin and eosin (H&E) images was obtained, demonstrating the reliability of the Raman spectroscopy method for tissue imaging. The results of the present study may inform future studies investigating real-time imaging of incision margin lesions.

Materials and methods

Patient information. A total of 45 breast cancer tissue specimens from patients who underwent MRM or simple mastectomy at the First Hospital of Jilin University (Changchun, China) between July 2015 and January 2016 were used in the present study. Of these, 22 tissue specimens were subjected to mapping and 23 were subjected to random single-point Raman spectroscopy. A total of 25 samples of adjacent normal breast tissues were also collected. Mapping and random single-point detection were performed in 15 and 10 normal breast specimens, respectively. All patients were female with a median age of 52 years (age range, 32-63 years). The only inclusion criterion for the present study was invasive breast cancer. Patients who refused to participate in the trial were excluded. Following further examination by preoperative biopsy and postoperative pathology, the breast tissues were confirmed as invasive ductal carcinoma; however, breast cancer stage was not assessed. All patients agreed to participate in the study and provided written informed consent. The study protocol was approved by the Ethics Committee of the First Hospital of Jilin University (approval no. 2013-168).

Specimen collection. Breast cancer tissue and adjacent normal breast tissue (as far away as possible from the lesion, ≥5 cm) from the same patients were collected, and the adipose tissues were removed. The specimens were immediately frozen at -25 to -20˚C. Subsequently, two contiguous sections were sliced using a freezing microtome (cat. no. CM3050S; Leica Microsystems GmbH). Following this, the specimens were stained with H&E for further examination. H&E staining was performed as follows.
First, the specimens were immersed in 10% neutral buffered formalin for 12 h at room temperature and then dehydrated in 75, 80, 95 (I), 95 (II), 100 (I) and 100% (II) ethanol for 1 h each. Next, the specimens were washed in xylene for 1 h twice, paraffin-embedded for 4 h, and then cut into 3-5 µm-thick sections. H&E staining was performed at 22˚C with hematoxylin staining for 5 min; the sections were then washed with running water for 1 min, dipped in 0.1% HCl for 10 sec and counterstained with eosin for 1 min. Stained sections were then dehydrated in 75, 80, 95 (I), 95 (II), 100 (I) and 100% (II) ethanol for 1 min each. Finally, the specimens were dipped in xylene for 1 min three times. Dried sections were sealed with neutral gum. In the process of making pathological sections, one of the two adjacent sections was stained with H&E for routine histopathological analysis by two experienced breast pathologists, and the other section was transported in liquid nitrogen for Raman spectroscopy without any further processing. To ensure that the detection area was malignant, the H&E-stained section was used as a guide to determine the detection area of the frozen section.

Raman spectroscopy. Raman spectroscopy was performed using a confocal Raman system (HORIBA; http://www.horiba.com/en_en/) at the State Key Laboratory of Supramolecular Structures and Materials of Jilin University (Changchun, China). It is a combination of a Raman spectrometer and a standard optical microscope, with the optical microscope at the bottom for image acquisition and the Raman spectrometer at the top. The optical microscope is used to capture images of the area being examined, and the laser beam excited by the instrument is focused through the optical microscope as a tiny spot of light with a diameter of 1.5 µm. The Raman signal in the area where the spot is located passes through the microscope back to the Raman spectrometer to obtain the Raman spectral information of the tissue. For the single-point test, a 633 nm helium-neon laser was used in Duoscan mode, and the selected tissue area was scanned point by point. The Raman signal generated during the test was detected by a Synapse thermoelectrically cooled charge-coupled device camera (Horiba Jobin Yvon SAS) with a spatial resolution of 3 λ. The power of the laser reaching the tissue surface was 20 mW. No photodamage was observed in the samples after the mapping data acquisition. Rayleigh scattered light was filtered using a 4-notch filter (Horiba Jobin Yvon). The scanning range was 400-3,000 cm-1, the integration time was 20 sec and the number of integrations was 1. The test tissue was kept moist with saline to effectively reduce the spectral background and photodegradation. Prior to the Raman spectroscopy test, images of the H&E-stained sections of the breast tissue were captured using a light microscope (Olympus Corporation) at x10 and x50 magnification. The optical images were obtained at the same position as the corresponding contiguous frozen section. The wavenumber calibration setting was referenced to the vibration frequency of the silicon wafer at 520.7 cm-1, and these parameters remained unchanged during all measurement processes (Fig. 1).

Data collection. From the H&E-stained sections, representative regions of malignant and normal cells were selected for Raman spectroscopy on frozen contiguous sections.
During the collection of single-point Raman spectra of tissues, 20-30 spectra were obtained from different locations in each sample to ensure representative sampling and the collection of varying signals. During collection of the mapping spectra, each section was scanned in 1-3 regions, and a Raman map of each region was obtained by point-by-point detection; a total of 7x7 points were scanned per region.

Data processing. A total of 53 sets of mapping data, comprising 2,597 Raman spectra, were obtained (each set of mapping data comprised 49 spectra). Among these, 34 sets were from malignant tissues, with a total of 1,666 spectra, and 19 sets were from normal breast tissue, with a total of 931 spectra. A total of 1,280 Raman spectra were obtained from single-point detection, including 720 from malignant tissues and 560 from benign tissues. The spectra were subjected to baseline correction by fitting and subtracting a third-order polynomial using NGSLabSpec version 5.58.25 software (Horiba, Ltd.). The spectral data were then smoothed using a 15-point adjacent averaging algorithm. Average spectra of the Raman data were calculated using Matlab version 7.9.0 software (The MathWorks, Inc.).

k-Nearest Neighbor (KNN) method. The KNN algorithm, implemented as a custom program in MATLAB R2009b software (https://ww2.mathworks.cn/products/matlab.html), was used to classify the Raman data (26). By finding the k closest neighbors of the Raman characteristic peaks of the breast tissue and assigning the average properties of these neighbors to the sample, the characteristics of breast tissue in different lesions can be analyzed.

Mapping imaging method. A total of 24 sets (70.6%) of malignant data were used to construct the diagnostic model and mapping imaging, and 10 sets (29.4%) were used to test the model. The diagnostic model was likewise constructed and tested using 13 sets (68.4%) and 6 sets (31.6%) of normal breast tissue data, respectively. For the single-point test results, 504 and 392 spectra (70%) from the malignant and normal breast tissue groups, respectively, were used as training sets for construction of the models; 216 and 168 spectra (30%), respectively, were used as test sets for testing the models and analysis. t-tests (SPSS 20.0) were applied to compare the two sets of data. Raman spectroscopy images were analyzed by the KNN classification method and processed by the diagnostic model to explore the advantages and feasibility of practical application.

Results

Mapping is more stable compared with single-point detection due to fewer outliers. Mapping data are more conducive to building a model with high discrimination efficiency. Table I presents the number of outliers in the measured data of each group; a greater outlier number indicates lower stability. The mapping data for malignant breast tissues consisted of 24 sets, with 149 pieces of outlier data (mean, 6.2 pieces/group; Table I). For conventional single-point detection of malignant tissues, there were 140 pieces of outlier data [average, 12.7 (range, 8-17) pieces/group; Table I]. A significant difference was observed in the number of outliers between mapping and single-point detection data in the malignant group (t=-6.169; P<0.001). A significant difference was also observed between the two detection methods in the normal group (t=-8.873; P<0.001; Table II).
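The preprocessing and classification pipeline described in the Data processing and KNN sections above can be sketched in a few lines of Python. The polynomial order (3) and k value below are assumptions standing in for the paper's NGSLabSpec/MATLAB settings, and the data are random placeholders.

```python
# Sketch of the described pipeline: polynomial baseline subtraction,
# 15-point adjacent averaging, then KNN classification with a ~70/30 split.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def preprocess(wavenumbers, intensities, poly_order=3, window=15):
    """Baseline-correct and smooth one Raman spectrum."""
    # Fit and subtract a low-order polynomial baseline.
    baseline = np.polyval(np.polyfit(wavenumbers, intensities, poly_order),
                          wavenumbers)
    corrected = intensities - baseline
    # 15-point adjacent averaging (moving mean).
    kernel = np.ones(window) / window
    return np.convolve(corrected, kernel, mode="same")

# Placeholder data: rows are spectra, labels 0 = normal, 1 = malignant.
rng = np.random.default_rng(0)
wn = np.linspace(400, 3000, 500)                 # scanning range in cm-1
spectra = rng.normal(size=(100, wn.size))
labels = rng.integers(0, 2, size=100)

X = np.array([preprocess(wn, s) for s in spectra])
clf = KNeighborsClassifier(n_neighbors=5).fit(X[:70], labels[:70])  # ~70% train
print("test accuracy:", clf.score(X[70:], labels[70:]))             # ~30% test
```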
Clinicians and researchers have been exploring ways to reduce the rate of secondary surgery through preoperative examination (29). There are many methods to evaluate the surgical margin in breast-conserving surgery, including clinical observation and palpation, in vivo or specimen imaging examination, and pathological evaluation. A recent meta-analysis showed that the sensitivity and specificity of intraoperative ultrasound examination of the surgical margin of the tumor were 59 and 81%, respectively, and the sensitivity and specificity of specimen X-ray examination were 53 and 84%, respectively, which did not significantly reduce the incidence of secondary surgery (30). Thus, it is imperative to introduce a new technique for the detection of the surgical margin in BCS. Surgeons require a quick, comprehensive test to determine whether there are any residual cancer cells at the surgical margin. Indocyanine green and microwave technologies are ineffective at reducing the rate of secondary surgery following BCS (31,32). At present, the most popular methods for the determination of the surgical margin or tissue properties include Raman spectroscopy and real-time imaging of tumor cells with fluorescent nanoparticles (15,33). In the present study, a mapping technique of Raman spectroscopy was used to minimize the influence of tissue heterogeneity and inter-observer variability on the results. The detection method of the mapping technology involves fixed-interval, point-by-point scanning per unit area; therefore, the results are more objective and stable. However, mapping is relatively time-consuming. H&E-stained malignant and normal breast tissue sections were used to identify areas with the most abundant cells; areas with a size of 10x10 µm were selected, and the corresponding positions on the frozen sections were identified for detection. In the mapping data set, the average number of outliers in malignant tissues was 6.2 pieces/group, which accounted for 12.7% of the total training set data; in the conventional single-point detection, the average number of outliers in the malignant group was 12 pieces/group, which accounted for 24.5% of all data. A significant difference in outlier numbers was observed between the two techniques. The data obtained in the present study demonstrated that the mapping method was more stable compared with single-point detection and contained fewer outliers. Mapping data are more conducive to the construction of diagnostic models (25). Robust statistics is a statistical approach that minimizes the effect of extreme results on the mean and standard deviation estimates (21). An in-depth discussion on the application of robust statistics was provided in our previous study (12). However, robust statistics is not appropriate if the data are too dispersed (21). In the present study, the number of outliers was small and the constructed model could obtain effective spectral results, which was conducive to model construction. In the present study, the KNN method was used to construct a diagnostic model based on robust statistics. The diagnostic accuracy for malignant tumors was 99.56%, and the negative predictive value (NPV) was 89.04%, which was significantly higher compared with the single-point test results. The diagnostic model was also superior to the results reported in previous studies (16,17) with respect to accuracy and specificity; this further verified the advantages of the mapping method for the construction of breast cancer diagnostic models.
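To illustrate the kind of robust outlier screening discussed above, the following minimal sketch flags extreme spectra using the median and the median absolute deviation (MAD) instead of the mean and standard deviation. The 3-MAD cutoff is a common convention and an assumption of ours, not a parameter reported by the study.

```python
# Robust outlier flagging for a set of peak intensities: the median and MAD
# are insensitive to extreme values, unlike the mean and standard deviation.
import numpy as np

def flag_outliers(peak_intensities, cutoff=3.0):
    """Return a boolean mask marking values far from the robust center."""
    x = np.asarray(peak_intensities, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) * 1.4826   # scaled to ~sigma for normal data
    return np.abs(x - med) > cutoff * mad

print(flag_outliers([1.0, 1.1, 0.9, 1.05, 5.0]))  # only the last value is flagged
```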
The present study differed from previous studies (17,25) in that the training data used for model construction were not used to test the model; instead, new data were used, which avoided the falsely high accuracy that arises when the same data are used for training and testing. The present study identified new possibilities for sensitive detection of tumor margins at the molecular level using mapping. Although limited reports have been published on the use of this method for the detection of breast cancer tissue, it has recently become a new research hotspot (15,17). In the present study, a diagnostic model was used to evaluate the images. An imaging method based on classification by a diagnostic model must be built on a good classification model. Behl et al (13) performed Raman spectroscopy imaging of the oral mucosa. The data were baseline-corrected and then analyzed by the K-means cluster analysis method; tumor, stroma and inflammatory cells were assigned different colors (red, green and blue), and these regions were distinctly depicted on Raman maps of tumor sections (13). Daniel et al (34) used a combination of principal component analysis and K classification to image the oral mucosa and classify the different principal components. In the present study, a breast cancer diagnosis model was constructed using the KNN method, and the Raman map obtained by imaging the detected area could clearly depict the tumor margins.

[Figure legend: (A) H&E-stained section (Fig. 2A, x50) and (B) the mapping image. In (B), the large blue area corresponds to the nuclei of the pathological section, whereas the green area corresponds to the tumor stroma. Blue represents the malignant areas identified using the Raman spectroscopy diagnostic model built by the KNN method, and green represents the non-malignant areas.]

This correspondence demonstrated the application value of this method for real-time imaging of tumor margins. In the future, real-time margin imaging based on software-determined properties may be used to guide surgical resection. In conclusion, confocal micro-Raman spectroscopy was used to detect the features of breast cancer and normal breast tissue by point-by-point scanning, and the results of the present study demonstrated that it was possible to obtain more stable spectral data compared with the data obtained using previous methods. Based on the spectral data, the KNN method was used to construct the diagnostic model software. The diagnostic accuracy was significantly higher compared with that of the model built using the single-point detection method, which demonstrated the advantage of the mapping method for data acquisition. The image obtained using the imaging method based on the diagnostic model classification closely corresponded to the pathological sections, which revealed the feasibility of the application of mapping for intraoperative imaging. The present study provides a foundation for the diagnostic use of Raman spectra at the molecular level.
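The pseudocolor imaging step described above can be sketched as follows: each of the 7x7 points in a mapped region is classified and colored (blue for malignant, green for non-malignant, matching the color scheme in the figure legend). The `classify` callable stands in for the trained KNN model and is an assumption here.

```python
# Render a 7x7 mapping scan as a pseudocolor Raman map from per-point
# classifications; `classify` is a placeholder for the trained KNN model.
import numpy as np
import matplotlib.pyplot as plt

def render_map(spectra_grid, classify):
    """spectra_grid: (7, 7, n_wavenumbers) array of preprocessed spectra."""
    rows, cols, _ = spectra_grid.shape
    rgb = np.zeros((rows, cols, 3))
    for i in range(rows):
        for j in range(cols):
            malignant = classify(spectra_grid[i, j])
            rgb[i, j] = (0.0, 0.0, 1.0) if malignant else (0.0, 0.8, 0.0)
    plt.imshow(rgb, interpolation="nearest")
    plt.title("Pseudocolor Raman map (7x7 points)")
    plt.axis("off")
    plt.show()
```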
A Systematic Approach to Providing COVID-19 Vaccinations in the Community by Student Pharmacists

Doctor of Pharmacy (PharmD) students and faculty at University of California, San Diego Skaggs School of Pharmacy and Pharmaceutical Sciences (SSPPS) were highly motivated to support local and regional COVID-19 vaccination efforts, which began in January 2021. A system was created to streamline requests for SSPPS volunteers, maximize opportunities for student learning and engagement, and ensure adherence to pharmacy practice standards and laws in the process of assisting with vaccination efforts in the community. An existing model for approving student-organized events was modified to fit the additional needs of COVID-19 vaccination efforts by SSPPS students and faculty. For each event, students completed a standardized form containing event details including location, date, time, pharmacist preceptors, and duties. All requests were screened by designated SSPPS faculty to ensure student safety, availability, and feasibility. After each event, students and faculty completed a unique online form designed to track volunteer hours. Students received course credit for volunteering and completing a standardized self-reflection. Comments from students' reflections (n = 74) were analyzed to identify common challenges. Between 11 January 2021 and 31 May 2021, SSPPS faculty and students volunteered for 245 shifts, totaling 1346 h. Students encountered several logistical challenges, such as the availability of vaccines. The system utilized allowed SSPPS students and faculty to play an integral role in COVID-19 vaccination efforts throughout the region.

Introduction

Responding to the coronavirus disease 2019 (COVID-19) pandemic, in December 2020 the United States (U.S.) Food and Drug Administration (FDA) issued an emergency use authorization for the Pfizer-BioNTech COVID-19 vaccine in individuals aged 16 years and above [1]. Shortly after, vaccines from Moderna and Janssen were also authorized for emergency use in December 2020 and February 2021, respectively [2,3]. With over 330,000 probable deaths due to COVID-19 in the U.S. by the end of 2020, there was an urgent need to quickly distribute the vaccine to eligible individuals [4]. Subsequently, there was high demand for trained volunteers to assist with vaccination efforts nationwide. University of California, San Diego Skaggs School of Pharmacy and Pharmaceutical Sciences (SSPPS) is the only pharmacy school in California's second most populous county. SSPPS Doctor of Pharmacy (PharmD) students and faculty were highly motivated to support local and regional COVID-19 vaccination efforts, which began in January 2021. Due to our school's small student body (approximately 260 students), coupled with logistical challenges in procuring authorized vaccines, we decided to support numerous partner organizations with COVID-19 immunization efforts, as opposed to operating our own immunization clinics. Various organizations, ranging from community pharmacies to large health systems, were enthusiastic to receive support from SSPPS. A systematic process ("the System") was created to streamline external requests for SSPPS volunteers, maximize opportunities for student learning and engagement, and ensure compliance with pharmacy practice standards and laws in the process of assisting with vaccination efforts. Further, the System tracked vaccine activities across all SSPPS volunteers and helped facilitate a variety of volunteer opportunities in diverse settings.
Here, we describe the System employed at SSPPS, including benefits, challenges, and key factors for success. In doing so, we aim to help similar academic institutions that wish to utilize pharmacy students to support critical public health measures, such as vaccination programs, while also providing quality learning experiences and avenues for community engagement.

Materials and Methods

The System was built on an existing framework designed to approve student activities within the SSPPS curriculum. Following guidance from the Accreditation Council for Pharmacy Education (ACPE) Accreditation Standards, the SSPPS PharmD curriculum requires that student pharmacists complete Introductory Pharmacy Practice Experiences (IPPEs) during the Pre-Advanced Pharmacy Practice Experience (Pre-APPE) curriculum. This includes at least 50 h of patient care-related community service and/or outreach activities (e.g., vaccination clinics). In addition, students are required to participate in activities within the Co-Curricular (CC) course, a one-unit mandatory course each year that aims to supplement the curriculum with a focus on selected educational domains. Most activities are planned by student organizations or individual students and overseen by a faculty team responsible for ensuring all activities meet intended learning objectives under the educational domain(s). For any new CC or IPPE activity, students complete a mandatory, online request form with information about the activity including location, date, proposed learning objectives, name(s) of pharmacist preceptor(s), type of training requirements, and additional legal and clinical requirements if the activity involves direct patient care. The online request form was created using a third-party survey tool and maintained by the faculty team. All students have access to the request form via a hyperlink that is posted within the course materials for both the IPPE and CC courses. The faculty team provides oversight for how students can safely and efficiently sponsor their activity and is responsible for final activity approval. Approved activities are shared on a school-wide calendar, where students may sign up to participate. Students receive IPPE and/or CC course credit by participating in an activity and completing a self-reflection that requires review and approval by the activity preceptor(s). A visual schematic of this process is shown in Figure 1. We adapted this existing framework for IPPE and CC activities into the System that would encompass COVID-19 vaccine efforts. The same online activity request form was utilized, with additional information requested specifically for COVID-19 vaccination activities. This included information on current public health guidance for masking, social distancing, disinfecting, contact tracing, and viral testing. The Director of IPPE assumed the primary responsibility of reviewing this information with students, preceptors, and community partners. Additional modifications included weekly email updates directly to students that contained updated information about COVID-19 vaccines and pharmacy regulations. For example, the California State Board of Pharmacy modified existing law to allow a licensed pharmacist to oversee additional pharmacy interns when participating solely in immunization efforts [5]. This allowed more students to sign up for activities without an additional pharmacist preceptor, further expanding our capacity to provide immunization services.
In addition to weekly emails, the faculty team also added this new information into the online request form for students. The System also included oversight by faculty to ensure events were safe and complied with pharmacy laws and regulations; this also extended to external organizations. Faculty were able to clarify areas of concern, request modifications, and ultimately decline an event that was not following acceptable practice standards. Additionally, the Director of IPPE was "on call" at any time for students who had urgent needs at their vaccination activity. Finally, the System maintained the existing structure that places responsibility on student organizers for submitting all required information for activity approval. This helped reduce the amount of faculty and staff effort needed to find new vaccine opportunities for students.

[Figure 1. Overview of the activity approval process: (1) community organizations are connected to student organization leaders; (2) students complete an online activity request form; (3) the activity request form is sent to the faculty team for review; (4) approved activity details are posted on a school-wide calendar; (5) student volunteers sign up for the selected activity; (6) student volunteers complete self-reflections following the activity; (7) preceptors approve self-reflections and students receive course credit; (a) the faculty team works with community organizations to ensure appropriate learning environments.]

To assess the quality of student experiences during COVID-19 vaccine activities, we reviewed their self-reflection responses to the following prompt: "List one challenge related to this activity, and strategies to overcome this challenge. Describe how you designed and implemented (or would do so in the future) solutions to this challenge."
Two independent reviewers (MB and KB) examined the responses from pharmacy students and identified statements that aligned with the World Health Organization's Behavioural and Social Drivers (BeSD) Framework of vaccine uptake, specifically the domain of Practical Issues, which includes the subdomains of Availability, Ease of Access, and Service Quality [6]. The two reviewers coded student reflections to the Practical Issues domain of the BeSD Framework and selected statements that represented one of the subdomains. Any disagreements were resolved through discussion and re-review of the student reflections. This preliminary qualitative analysis is part of a larger study on thematic content analysis, which is currently underway. Partial results of this analysis are presented here. This study was exempted from full review by the institutional review board of the University of California, San Diego.

Vaccination Efforts

SSPPS successfully partnered with 8 distinct organizations, including University of California affiliated health systems, large chain pharmacies, independent pharmacies, and non-profit community organizations. Overall, between 11 January 2021 and 31 May 2021, SSPPS faculty and students completed 245 volunteer shifts totaling 1346 h. This included 96 SSPPS students contributing a total of 1222 h (Table 1). Students from the graduating class of 2024 (2024 cohort) provided approximately 68% of the total vaccine effort; the 2023, 2022, and 2021 cohorts provided 26%, 5%, and 2%, respectively, of the total community vaccine effort from SSPPS students. Additionally, seven faculty members contributed approximately 124 h during the same period.

Student Self-Reflections

We collected students' perceptions of their experiences by asking questions about the challenges they faced and how they overcame them. Students identified non-optimal workflow issues, which resulted in vaccine administration delays. For example, one student highlighted that doses of the COVID-19 vaccine were not prepared at the same rate as they were administered, which led to a delay in vaccine administration. This also resulted in other providers having to assist with preparing doses:

"We had to wait for syringes to be prefilled, and vaccinating faster than the loading the syringes. Some health care providers closed down their station to help with filling syringes before reopening their station to vaccinate". -Pharmacy Student B (class of 2024)

During their time as volunteers, SSPPS students provided consultations and education to patients receiving the COVID-19 vaccine. However, there was a shortage of printed educational information to assist with this task. One student identified a lack of COVID-19-related printed materials as a challenge during their vaccination experience:

"I almost had trouble keeping up with the vaccinators because I spent a considerable amount of time educating patients and helping them sign up [ . . . ] I wish we could have provided patients with pamphlets on the side effects, V-safe, and instructions on signing up for their second dose, as it was a bit confusing to navigate the website. If we provided patients with those documents, I would have had more time to address patient concerns, check-in with them during the 15 min wait, and chat with them. Many patients had issues signing up, so I had to figure it out and walk them through it".
-Pharmacy Student C (class of 2024)

One student commented that, in the absence of available COVID-19 vaccines, they were still able to administer another appropriate vaccine. They also supported COVID-19 testing efforts:

"The challenge related to this activity was that the COVID vaccines were not available. However, I was still able to vaccinate elderly individuals with the shingles shot. Although, I wasn't able to immunize the community with COVID vaccine I still felt like I was making the community healthier as a whole by vaccinating individuals against shingles. And I was also involved in checking patient in for their covid testing".

Discussion

The System was successful in supporting COVID-19 vaccination efforts in our community. This was possible due to many key factors. First, only minor modifications were needed to the existing system to incorporate COVID-19 immunization activities. Students, faculty, and preceptors did not have to learn a new process as they organized new vaccine activities. Second, strong collaborations were essential in creating and planning activities. For example, shortly after vaccine authorization by the FDA, there was a plethora of opportunities for SSPPS to support vaccine efforts through existing partnerships, such as experiential training agreements and our collaborative relationship with UC San Diego Health (UCSDH), where many students complete experiential rotations and many faculty hold practice sites. In fact, UCSDH requested SSPPS support to help vaccinate the community at a large-scale clinic [7], and past collaborations along with personal connections were vital to ensuring SSPPS students and faculty were included in the scheduling and recruiting process, as well as in troubleshooting issues. One such issue was ensuring UCSDH organizers were aware that SSPPS students were trained in vaccine administration, so they could be assigned to appropriate roles. In other examples of leveraging experiential training agreements, SSPPS students supported vaccinations at local independent pharmacies to help meet community demand and collaborated with large pharmacies that were immunizing high-risk populations, such as older adults in long-term care facilities. Overall, strong internal and external collaborations between SSPPS and partner organizations allowed for quick communication, resolution of issues, and the timely administration of COVID-19 vaccines. Another key factor for the success of the System was the engagement, motivation, and mobilization of SSPPS pharmacy students. Incoming SSPPS students are taught about professionalism and their role as members of the healthcare team. They also complete the American Pharmacists Association (APhA) Pharmacy-Based Immunization Delivery course at the start of their first year, enabling the entire student body to provide vaccines. These factors, along with an unprecedented need for immunizers in the local community, made students enthusiastic about supporting vaccine efforts. The level of involvement between class cohorts followed a predictable pattern. First-year students, trained to provide immunizations just 3 months previously, were involved in most of the vaccine effort. Other class cohorts, presumably with more responsibilities in school, work, and extracurricular events, made up a smaller portion of the overall student effort. Further, it was apparent that faculty did not have the bandwidth to organize every potential event.
Allowing student organizations to take a lead role, with faculty oversight, provided an efficient system in which students assumed most logistical responsibilities, and this was consistent with how other activities in the IPPE and/or CC courses were organized. Based on a review of the self-reflections, we can conclude that students encountered several challenges during COVID-19 activities that were associated with the Practical Issues domain of the BeSD Framework. These included logistical issues related to Availability, Ease of Access, and Service Quality. For example, the difficulty of preparing doses efficiently during vaccination clinics was noted, which has also been identified as a challenge at similar COVID-19 vaccination clinics [8]. SSPPS students provided evidence that they were able to overcome many of these challenges, such as by utilizing their training to assist in preparing doses or by providing additional vaccines to appropriate individuals. This provides some evidence that SSPPS students can apply their didactic training to identify immunization gaps and administer appropriate vaccines, even in settings that are focused on one vaccine (i.e., COVID-19). This also reinforces keeping immunization training towards the beginning of the curriculum, giving students ample opportunities to apply it. Our System was not without challenges. Rapidly changing information on vaccines and public health guidelines, plus modifications to pharmacy laws and regulations, made updating our student organizers and participants in a timely manner difficult. Several of these policies, such as increasing the ratio of pharmacists to interns when engaged solely in immunization-related activities, affected the logistics of our vaccine events. Student organizers rearranged student volunteers and preceptors to maximize immunizers based on these new rules. Additionally, at mass vaccination sites the requirement to notify a patient's primary care provider after receipt of a COVID-19 vaccine was waived if certain conditions were met. These regulatory changes increased the burden on the SSPPS faculty responsible for updating documents, forms, and other procedures pertinent to the System. Having a dedicated faculty member familiar with immunization practices and regulations (in our case, the Director of IPPEs) largely responsible for this aspect was beneficial in our System. However, the opportunity cost of dedicating a significant amount of time to ensuring all information was updated and accurate may have prevented the performance of higher functional duties of a clinical pharmacist, such as patient care, mentorship, and education. Another challenge was understanding the extent to which students were participating in COVID-19 vaccination outside of events approved via the System. For example, many students provided vaccinations in their internship positions at community pharmacies or local health systems. Any activities completed during internships are not counted towards course credit, and therefore students would not submit these activities through the System. Further, students in the 2021 cohort provided vaccinations as part of their advanced pharmacy practice experiences (APPEs), which also were not captured by the System. Other students volunteered on their own time, with other organizations, and did not report the hours for class credit. Therefore, our total SSPPS efforts quantified here are likely underreported. Future iterations of the System should include ways to capture the full scope of efforts by all students.
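One way such a future iteration could capture the full scope of effort is to consolidate hour logs from multiple sources into a single tally. The sketch below is purely hypothetical: the CSV layout, file names, and column names are assumptions of ours, not artifacts of the actual System.

```python
# Hypothetical consolidation of volunteer-hour exports (System events,
# internships, APPEs, self-reported hours) into totals per class cohort.
import csv
from collections import defaultdict

def total_hours_by_cohort(paths):
    """Sum the 'hours' column per 'cohort' across one or more CSV exports."""
    totals = defaultdict(float)
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["cohort"]] += float(row["hours"])
    return dict(totals)

# e.g. total_hours_by_cohort(["system_events.csv", "internship_hours.csv"])
```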
The ability to be flexible and adaptive has been cited as a key factor for the success of other COVID-19 vaccination efforts with pharmacy involvement [9]. We also discovered this was vitally important. Initially, some scheduled events were canceled due to a lack of vaccine availability. This information often became available at the last minute, and communication to participating volunteers needed to be triaged quickly. To facilitate this, our System included recording the contact information of all student organizers and volunteers and designating responsible persons for communicating time-sensitive information. Despite last-minute changes or cancellations, students and preceptors continued to volunteer at future events. The amount of data from the students' reflections provides us with an opportunity to perform thematic content analysis, which will help inform key stakeholders on how to improve the vaccination experience. In this manuscript, we provided a preliminary summary of our qualitative investigation focusing on the Practical Issues domain of the BeSD Framework [6]. Work is currently underway to formalize the qualitative assessment of these reflections using existing theoretical frameworks. This may result in improvements to the logistics of vaccine administration, education, and the patient experience. Other institutions may consider adopting a similar approach using their existing infrastructure to support vaccination efforts with partner organizations in their communities. By maintaining an approval mechanism for activities, leveraging existing partnerships, and fostering student engagement, this structure has the potential to serve many patients while providing quality learning experiences.

Conclusions

The System employed by SSPPS was efficient, flexible, and built on existing processes. Key factors for success included utilizing an existing activity approval system, leveraging strong collaborations with partner organizations, and robust engagement of student pharmacists. Further, student pharmacists overcame challenges encountered with vaccine activity logistics. Ultimately, the System contributed to the ability of SSPPS student pharmacists to maximize support for COVID-19 vaccination efforts.
The Effects of Horseback Riding Simulator Exercise on Postural Balance of Chronic Stroke Patients

[Purpose] The aim of this study was to examine the effects of horseback riding simulator exercise on the postural balance of chronic stroke patients. [Subjects] A total of 67 stroke patients were assigned either to a horseback riding simulator exercise group (HEG, n=34) or a mat exercise group (MEG, n=33). [Methods] The subjects exercised three times per week for 8 weeks. Static balance ability was determined by eyes open balance (EOB) and eyes closed balance (ECB), which were measured using a Kinesthetic Ability Trainer Balance system. Dynamic balance was evaluated using the Berg balance scale (BBS). [Results] EOB and ECB significantly decreased and BBS significantly increased after the intervention in the HEG and the MEG, and ECB decreased and BBS increased significantly more in the HEG than in the MEG. [Conclusion] Horseback riding simulator exercise is more effective than mat exercise for improving the ECB and BBS of stroke patients.

INTRODUCTION

Patients with postural adjustment disorder have impaired balance and posture. In these patients, abnormal delivery of sensory information greatly influences postural sway and muscle activity, causing disability in activities of daily living and interfering with treatment interventions 1) . Diverse rehabilitation programs are available for functional improvement and balance recovery in hemiplegic patients. Among these, horseback riding is a treatment that focuses on three areas: physical rehabilitation, psychological treatment, and the development of social skills. The treatment uses kinesiological movement, different walking patterns such as walk, trot, and riding trot, and nature-friendly characteristics. Physical therapy and psychological treatment approaches are employed to correct the horseback rider's posture and to improve the function of the tendons, ligaments, the cardiopulmonary system, and blood circulation 2) . However, horseback riding therapy itself presents some difficulties as a treatment due to location, costs, and risks. One solution to these difficulties is the horseback riding simulator, which allows simulated horseback riding in a fixed space, so that this exercise is possible at any time. A horseback riding simulator reproduces the rhythmic movement of a horse, assisting the development of equilibrium, flexibility, and whole-body muscle strength while improving the balance and postural adjustment of both normal and disabled people 3,4) . Studies of proprioceptive neuromuscular facilitation, Bobath, sling, and ball exercises have examined improvements in the postural balance of stroke patients, but research on the effects of horseback riding simulators is lacking. The aim of the present study was, therefore, to examine the potential benefits of exercise using a horseback riding simulator on the postural balance of chronic stroke patients.

SUBJECTS AND METHODS

This experiment included 67 stroke patients hospitalized at D hospital in Daejeon, Korea. All subjects read and signed consent forms, in accordance with the ethical standards of the Declaration of Helsinki. The possibility of natural recovery was minimized by selecting only patients whose onset of stroke was at least 6 months prior to the experiment. No patient had diabetes, heart disease, or orthopedic problems, and their Korean Mini-Mental State Examination score was 24 or higher. The subjects were able to walk independently for more than 15 minutes.
All subjects listened to an explanation of the purpose of this study and the exercise method and voluntarily participated in the experiment. They were able to maintain a standing position independently for more than 30 seconds and could walk indoors continuously for more than 30 m independently. Also, they had no problems with walking due to orthopedic surgery or impairment, had a Modified Ashworth Scale spasticity grade of 2 or less, and had lower extremity muscle strength of Fair (F) or higher on the Manual Muscle Test. The subjects were divided into a horseback riding simulator exercise group (HEG, n=34) and a mat exercise group (MEG, n=33). Stroke was the result of cerebral infarction in 15 subjects and cerebral hemorrhage in 19 subjects in the HEG, and cerebral infarction in 16 subjects and cerebral hemorrhage in 17 subjects in the MEG. The onset of stroke was between 7 and 12 months prior to the experiment in 19 patients, and more than 15 months prior in 15 subjects in the HEG. In the MEG, the onset of stroke was between 7 and 12 months prior to the experiment in 20 patients and more than 13 months prior in 13 patients. Both groups received exercise treatment three times per week for 8 weeks and received ordinary physical therapy six times per week. A physical therapist with more than 10 years of clinical experience administered the exercise programs. The HEG used a horseback riding simulator (FORTIS, Korea) with a shape and size similar to those of a real horse. The simulator had 100 different exercise programs. Since the subjects were patients, course number 71 was selected because it did not include abrupt rhythms and had a comfortable up-and-down and forward-and-backward rhythmic movement (up-and-down speed, 52 m/min; forward-and-backward speed, 39 m/min; 90 to 100 up-and-down rhythmic movements; 90 to 100 forward-and-backward rhythmic movements; rhythm speed, 65 m/min). Course number 74 was also selected because it had a large up-and-down and forward-and-backward rhythmic movement, which has a good exercise effect on the neck, shoulders, trunk, abdomen, thighs, and legs (up-and-down speed, 73 m/min; forward-and-backward speed, 40 m/min; 95 to 105 up-and-down rhythmic movements; 95 to 105 forward-and-backward rhythmic movements; rhythm speed, 98 m/min). The exercise was administered for a total of 35 minutes, three times per week. Each course was administered for 15 minutes, and the subjects rested for five minutes after each course. The exercise speed was set at a medium speed (50%), which was not fast compared to the designated rhythmic speeds of the horseback riding simulator. The risk of falling was minimized by equipping the subjects with an automatic stop device. The MEG performed trunk stabilization exercises using a mat for 35 minutes (Table 1). The trunk stabilization exercises were performed using the lumbar spinal stabilization exercise methods developed by Norris 5) and Richardson and Jull 6). The exercise session lasted 35 minutes in total, and warm-up and cool-down exercises were performed for five minutes at the beginning and the end of the session, respectively. Programs 1 through 3 were repeated 10 times per set, and a total of three sets were completed.
Static balance was measured using a Kinesthetic Ability Trainer (KAT) Balance system (KAT 2000, Breg Inc., USA), and the center of pressure sway was recorded. The moving platform of the KAT Balance System is supported at the central point on a small pivot. The stability of the plate is regulated by an air pressure cushion between the platform and the floor. When the cushion is inflated, the platform is stabilized, and when the cushion is deflated, the platform becomes very unstable. A tilt sensor installed in front of the platform records the degree of tilt of the platform from the reference point on a computer at a rate of 18.2 times per second. In this experiment, the distance between the center of the plate and the subjects' center of pressure sway was measured in each test to calculate the Balance Index (BI). The BI is an index of a subject's ability to maintain his or her body close to the central point of the platform, so a low BI score indicates a better balance sense. The subjects maintained a distance of 5-6 cm between their heels and stood comfortably on the moving platform. Eyes open balance (EOB) and eyes closed balance (ECB) were measured three times for 30 seconds each time, and the average values were calculated. Dynamic balance prior to and after the intervention was evaluated using the Berg balance scale (BBS). The data were statistically processed using SPSS 12.0 for Windows. The paired t-test was used to examine within-group changes after the intervention, and the independent t-test was employed to examine between-group changes. The significance level was chosen as α=0.05.

RESULTS

According to the results of this study, EOB and ECB significantly decreased and BBS significantly increased after the intervention in both the HEG and the MEG (p<0.05). [Table 1 excerpt, trunk stabilization exercise using the upper extremities: a square support is placed below the knee joints in a supine position so that the hip and knee joints are at 90°; the therapist holds both of the patient's hands while the patient raises the trunk and the head.] In the comparison of the groups after the intervention, ECB decreased and BBS increased significantly more in the HEG than in the MEG (p<0.05). There was no significant difference in EOB between the groups (p>0.05) (Table 2).

DISCUSSION

Factors that are related to stroke patients' gait include their direction perception, standing balance, and voluntary adjustment of the affected-side lower extremity, as well as their sense of joint position and the existence or non-existence of joint contracture 7). Lee and Jeong 8) reported that when 20 normal adult female students performed horseback riding simulator exercises, the muscle strengths of their thighs and waists were greatly improved. Therefore, since horseback riding simulator exercises improve the muscle strengths of the thigh and lumbar regions, as well as trunk stabilization, the aim of the present study was to examine the effects of horseback riding simulator exercises on the balance ability of stroke patients. Devienne and Guezennec 9), in a study of 20 normal females, noted that horseback riding exercise increased the strength of their knee flexor and quadriceps femoris muscles. Quint and Toomey 10) observed that horseback riding significantly increased the range of forward and backward passive tilt in a study of 13 children with cerebral palsy.
Choo 11) conducted a horseback riding program for children with cerebral palsy for 3 months and reported positive influences on their static and dynamic equilibrium. Back et al. 12) noted that 40 subjects who exercised on a horseback riding simulator had greater increases in the strength of the biceps brachii, transverse abdominis, abdominal oblique, and adductor longus muscles than when they jogged. Lee and Jeong 13) reported that horseback riding simulator exercise increased muscle strengths in the femoral and lumbar areas of 20 normal female students. Cho et al. 14) noted that horseback riding simulator exercise was effective at improving postural balance ability and proprioceptive sense in a study of 30 normal adults. Kuczyński and Słonka 15) conducted horseback riding simulator exercise with 25 children with cerebral palsy for 12 weeks and observed a significant improvement in their left and right balance ability. The balance ability test administered in the present study showed that balance ability had improved after the intervention in both the HEG and the MEG, which indicates that both horseback riding and mat exercises are effective at improving postural balance. However, greater improvements after the exercise were observed in the HEG than in the MEG in ECB and BBS. This finding is consistent with the results reported by Hammer et al. 6), who conducted horseback riding simulator exercise for 11 multiple sclerosis patients and observed improvements in their BBS. Similarly, Beinotti et al. 7) conducted horseback riding exercise for 20 stroke patients and observed enhancement of their gait and balance abilities. Cho et al. 4) reported that horseback riding simulator exercises were effective at improving postural balance ability and proprioceptive sense, while Nashner and Peter 18) reported that proprioceptive sense-improving exercises elicited a larger improvement in postural sway when the eyes were closed. In the present study, ECB showed significant differences between the groups after the intervention, indicating that the HEG's proprioceptive sense improved more than the MEG's, whereas EOB did not show a significant difference. When horseback riding simulator exercise is performed by chronic stroke patients, increases in the strength of the major muscles of the upper and lower extremities and trunk, and improvements in proprioceptive sense, are observed, indicating that the subjects become better able to maintain their balance and equilibrium. Horseback riding rehabilitation is one of many diverse treatment methods for patients with neurological damage and utilizes the gait of a horse. The horse's rhythmic movement stimulates the patients and helps them improve posture and balance, and it has treatment effects that include reduced muscle tone, improved trunk adjustment, and enhanced equilibrium responses and autonomic reflexes. However, horses are costly, and horseback riding is a sport that is not easily accessible to most ordinary people. In addition, stroke patients are less likely to have an opportunity to do this sport. Horseback riding rehabilitation also lacks recognition as a form of rehabilitation, dedicated facilities, and professional therapists. However, a horseback riding simulator exercise that mimics the movements of a horse can be used as a therapeutic substitute for functional improvement of patients' balance ability.
The limitations of the present study are that the number of subjects was too small to allow generalization of the results, and that no follow-up evaluation was conducted to determine the long-term effects of horseback riding simulator exercise. In addition, muscle strength, rigidity, senses, and gait ability were not examined prior to and after the exercise. Therefore, further research is needed on the effects of horseback riding exercise on the muscle activities of the trunk and the lower extremities of stroke patients.
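The balance analysis described above in Subjects and Methods (Balance Index scores averaged over three 30-second trials, paired t-tests for within-group change, and an independent t-test for the between-group comparison at α=0.05) can be sketched as follows. This is a minimal illustration with hypothetical placeholder scores rather than the study's code or data, and comparing pre-to-post change scores between groups is one reasonable reading of the between-group test.

```python
# Minimal sketch of the statistical workflow described in Methods.
# All scores are hypothetical placeholders (lower BI = better balance);
# this is an illustration, not the study's code or data.
import numpy as np
from scipy import stats

ALPHA = 0.05

def mean_bi(trials):
    """Average the Balance Index (BI) over the three 30-second trials."""
    return float(np.mean(trials))

# Example: one subject's BI averaged over three hypothetical trials.
print("example mean BI:", mean_bi([3.2, 3.0, 3.1]))

# Hypothetical per-subject ECB scores (each already a three-trial mean).
heg_pre  = np.array([3.1, 2.8, 3.5, 3.0, 2.9])
heg_post = np.array([2.2, 2.0, 2.6, 2.1, 2.3])
meg_pre  = np.array([3.0, 2.9, 3.4, 3.2, 2.8])
meg_post = np.array([2.7, 2.6, 3.0, 2.8, 2.6])

# Within-group change after the intervention: paired t-tests.
_, p_heg = stats.ttest_rel(heg_pre, heg_post)
_, p_meg = stats.ttest_rel(meg_pre, meg_post)

# Between-group comparison of the pre-to-post change: independent t-test.
_, p_btw = stats.ttest_ind(heg_post - heg_pre, meg_post - meg_pre)

for label, p in [("HEG within-group", p_heg),
                 ("MEG within-group", p_meg),
                 ("between-group", p_btw)]:
    verdict = "significant" if p < ALPHA else "not significant"
    print(f"{label}: p = {p:.3f} ({verdict} at alpha = {ALPHA})")
```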
Women in the 2019 hepatitis C cascade of care: findings from the British Columbia Hepatitis Testers cohort study Background Women living with hepatitis C virus (HCV) are rarely addressed in research and may be overrepresented within key populations requiring additional support to access HCV care and treatment. We constructed the HCV care cascade among people diagnosed with HCV in British Columbia, Canada, as of 2019 to compare progress in care and treatment and to assess sex/gender gaps in HCV treatment access. Methods The BC Hepatitis Testers Cohort includes 1.7 million people who tested for HCV, HIV, reported cases of hepatitis B, and active tuberculosis in BC from 2000 to 2019. Test results were linked to medical visits, hospitalizations, cancers, prescription drugs, and mortality data. Six HCV care cascade stages were identified: (1) antibody diagnosed; (2) RNA tested; (3) RNA positive; (4) genotyped; (5) initiated treatment; and (6) achieved sustained virologic response (SVR). HCV care cascade results were assessed for women, and an 'inverse' cascade was created to assess gaps, including not being RNA tested, genotyped, or treatment initiated, stratified by sex. Results In 2019, 52,638 people with known sex were anti-HCV positive in BC; 37% (19,522) were women. Confirmatory RNA tests were received by 86% (16,797/19,522) of anti-HCV positive women and 83% (27,353/33,116) of men. Among people who had been genotyped, 68% (6756/10,008) of women and 67% (12,640/18,828) of men initiated treatment, with 94% (5023/5364) of women and 92% (9147/9897) of men achieving SVR. Among the 3252 women and 6188 men not yet treated, higher proportions of women compared to men were born after 1975 (30% vs. 21%), had a mental health diagnosis (42% vs. 34%), and had used injection drugs (50% vs. 45%). Among 1619 women and 2780 men who had used injection drugs and were not yet treated, higher proportions of women than men used stimulants (64% vs. 57%) and opiates (67% vs. 60%). Conclusions Women and men appear to be equally engaged in the HCV care cascade; however, women with concurrent social and health conditions are being left behind. Treatment access may be improved with approaches that meet the needs of younger women, those with mental health diagnoses, and women who use drugs. Supplementary Information The online version contains supplementary material available at 10.1186/s12905-021-01470-7.

Introduction

The treatment experiences and needs of women living with hepatitis C virus (HCV) are frequently overlooked in research, yet there are relevant clinical differences between men and women related to HCV infection and disease progression. Female sex is a significant predictor of spontaneous clearance among people with acute HCV infection and a factor in liver disease progression among those living with chronic HCV [1]. Rates of liver fibrosis and cirrhosis progression appear to be slower in younger women (< 50 years) compared to men; however, this difference disappears in older women (> 50 years), possibly due to hormonal changes in menopause [1,2]. For women who have reproductive potential, HCV in pregnancy is a concern. Though pregnancy does not exacerbate HCV disease progression, HCV infection can contribute to adverse perinatal outcomes [3][4][5][6].
In addition, vertical HCV transmission affects 4-7% of infants born to women living with chronic HCV and up to 11% of infants born to women with HIV-HCV co-infection [7]. There are also gendered differences that underscore intersectional barriers faced by some women living with HCV. A cohort study in Ontario, Canada, highlighted that women's immigration status and lower socioeconomic status were more likely than men's to negatively affect HCV treatment uptake [2]. Other studies have reported barriers to women's access to HCV treatment, including advanced age, rurality, injection drug use, and involvement in sex work [8][9][10][11][12]. The introduction of novel direct-acting antiviral (DAA) therapies for chronic HCV infection has dramatically changed the HCV treatment landscape. In contrast with the arduous and moderately effective interferon-based treatment, DAA therapies are a well-tolerated and highly effective cure, with > 95% of patients achieving sustained virological response (SVR) in just 8-12 weeks [13]. Reduced clinical barriers to HCV cure have inspired the possibility of eliminating HCV globally by 2030, which will require diagnosing > 80% of those living with HCV and treating 85% of those diagnosed with chronic infection [14]. To achieve this, new HCV infections and deaths related to HCV need to be addressed alongside DAA treatments through scaled-up harm reduction and linkage to liver care. A powerful way to monitor progress toward HCV elimination goals is to evaluate the HCV care cascade at the population level by assessing progress through the RNA testing, genotype testing, treatment initiation, and SVR stages. In British Columbia (BC), Canada, integrated population-level laboratory testing and health administration data have made this possible. A 2018 analysis demonstrated that women comprised 37% of the approximately 53,000 people living with HCV in BC and that similar proportions of men and women progressed through the stages of care [15]. However, little is understood regarding factors that influence women's access to HCV care at the population level. These factors may compound existing barriers and create, or negatively contribute to, risk environments where women are "hardly reached" by health/social services and at greater risk for adverse health outcomes [16]. Monitoring of HCV diagnosis and care among women is thus critical both to achieve HCV elimination goals in BC and to ensure that women receive timely and equitable access. The objectives of this study were to: (a) construct the population-level HCV care cascade in BC stratified by sex from 2000 to 2019; (b) evaluate progress through the stages of the 2019 HCV care cascade for women and men living with HCV in BC; and (c) characterize progress and highlight gaps in the HCV care cascade experienced by women living with HCV in BC.

Methods

This study draws on population data from BC, Canada, where all residents are registered for publicly funded health insurance via the Medical Services Plan (MSP). MSP is a single-payer system covering healthcare provided by fee-for-service practitioners including general practices, private laboratories, and other providers. Laboratory HCV testing for the entire province is centralized at the BC Centre for Disease Control Public Health Laboratory (BCCDC PHL), except for 5% of tests that are performed at regional labs, which send specimens that test positive to BCCDC PHL for confirmation.
All prescriptions dispensed in BC are recorded within a central payer-agnostic system called PharmaNet. HCV therapies are publicly funded in BC through the PharmaCare Limited Coverage Drug Program. Interferon-based combination therapies (Interferon/Ribavirin) for HCV treatment became available in 2000, and the more efficacious Pegylated interferon/Ribavirin therapy became available in May 2003 [17]. DAA treatments were available in BC in 2014 and became publicly funded in early 2015, though eligibility for public coverage was restricted to priority patients with fibrosis stage 2 (F2) or above (Metavir or equivalent) or extrahepatic manifestations. In March 2017, eligibility for public coverage expanded to people with comorbidities including HIV or hepatitis B (HBV) co-infection, diabetes, chronic kidney disease, co-existent liver disease, and women who were planning to become pregnant in the next 12 months [18]. Remaining restrictions for publicly funded DAA treatment were removed in BC in April 2018. HCV testing and treatment in BC is provided in various healthcare settings including primary, community, and specialized clinics. It is important to note that prior to January 2020, new HCV antibody positive tests required a follow-up EDTA blood sample for HCV RNA nucleic acid testing (NAT). As of January 2020, persons who are positive for anti-HCV antibodies will automatically be tested for HCV RNA by NAT if: (1) they are first-time antibody positive or (2) they have not been tested by NAT before. HCV genotype testing is required to prescribe HCV treatment in BC. This analysis uses data from the British Columbia Hepatitis Testers Cohort (BC-HTC) study. We have previously published on the BC-HTC construction and data linkage [19]. Briefly, BC-HTC includes all BC residents who ever tested for HCV or HIV, or were diagnosed with HBV, HCV, HIV, or active tuberculosis (TB) in BC between 1990 and 2015, linked with data on medical visits, hospitalizations, cancers, prescription drugs, and deaths. The laboratory, prescription, and mortality data were updated to 31 December 2019 to facilitate creation and assessment of the 2019 HCV care cascade (Additional file 1: Table S1). In this study, we refer to 'women' as people who were assigned female sex at birth. Although 'woman' also implies gender identity, this was not determinable in this study. BC-HTC data are de-identified and analyzed anonymously; thus, informed consent was not required. Institutional ethics approval was provided by the University of British Columbia Research Ethics Board (H14-01649) and all research was carried out in accordance with relevant guidelines and regulations.

Cascade of HCV care

Operational definitions for the six stages of the HCV cascade of care are described in Additional file 1: Table S2. The stages were defined as: (a) HCV diagnosed; (b) HCV RNA tested; (c) HCV RNA positive; (d) genotyped; (e) initiated antiviral treatment; and (f) sustained viral response (SVR); a minimal illustration of these stage definitions is sketched below. We applied these definitions to the data to estimate the number and proportion of women in each stage by the end of the year from 2000 to 2019. Focusing on the year 2019, we also applied these definitions to compare the number and proportion of both men and women at each stage. Next, we evaluated the 2019 cascade stages by demographic characteristics and comorbidity profiles of women who were diagnosed with HCV.
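The operational stage definitions above can be read as a simple 'furthest stage reached' classification over a person's linked records. The following minimal sketch illustrates that reading with hypothetical field names; it is not BC-HTC code, and the example record is invented for illustration.

```python
# Minimal sketch of assigning a person's furthest 2019 HCV care cascade
# stage from linked records. Field names and the example are hypothetical.
STAGES = [
    "anti-HCV diagnosed",
    "RNA tested",
    "RNA positive",
    "genotyped",
    "treatment initiated",
    "SVR achieved",
]

def furthest_stage(person: dict) -> str:
    """Return the furthest cascade stage reached by the end of 2019."""
    reached = "anti-HCV diagnosed"  # everyone in the cascade is diagnosed
    if person.get("rna_tested"):
        reached = "RNA tested"
    if person.get("rna_positive"):
        reached = "RNA positive"
    if person.get("genotyped"):
        reached = "genotyped"
    if person.get("treatment_initiated"):
        reached = "treatment initiated"
    if person.get("svr_achieved"):
        reached = "SVR achieved"
    return reached

# Hypothetical example: an RNA-positive, genotyped person not yet treated.
print(furthest_stage({"rna_tested": True, "rna_positive": True, "genotyped": True}))
# -> genotyped
```

Counting the people whose furthest stage is at or beyond each stage then yields the per-stage numerators and denominators reported in the cascade.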
Finally, to get a clearer understanding of gaps and leakage in the HCV care cascade among women compared to men, we report on the inverse 2019 HCV care cascade: the number and proportions of women and men who were diagnosed anti-HCV positive but did not advance to the HCV RNA testing, genotype testing, or treatment initiation stages.

Estimate of viraemia

The estimate of HCV RNA positive women in BC in 2019 was based on: (1) the number of untreated women whose last HCV RNA test on record is positive; (2) 75% [20] of those who were positive by antibody testing and had no HCV RNA or genotype testing done, as about 25% of antibody-positive people clear infection spontaneously; (3) 75% [21] of the untested and undiagnosed estimate; and (4) those treated women determined not to have achieved SVR (the SVR rate calculated for treated women with an available RNA test after treatment was used to estimate how many treated women with no available RNA test after treatment would fail to achieve SVR) [15]. A sketch of this arithmetic is given at the end of the Methods.

Demographic characteristics and comorbidity profiles

Demographic characteristics included birth cohort, ethnicity, social and material deprivation [22], and urbanicity. Ethnicity was derived using Onomap software, which identifies ethnicity using name network cultural/linguistic clustering techniques [23][24][25]. Onomap has been previously validated and used in demographic and health research [23,24]. Onomap is prone to misclassifying people with anglicized names and those with mixed ethnicities [26]; however, our internal validation demonstrated that Onomap's sensitivity and specificity relative to self-identified ethnicity were 93% and 98.6% for South Asian people, respectively, and 66.7% and 99.5% for East Asian people, respectively. Ethnic groups were therefore classified as South Asian, East Asian, and Other BC Residents. Comorbidity indicators were derived from MSP data containing physician fee-for-service billing and diagnostic codes, and hospitalization data for mental health diagnoses, problematic alcohol and drug use, cirrhosis, and decompensated cirrhosis (Additional file 1: Table S3). Characteristics and comorbidities of people diagnosed HCV antibody-positive were stratified by sex, as were the proportions of women and men at each stage of the HCV care cascade. Chi-squared tests were carried out to compare categorical variables between women and men. All analyses were conducted using SAS/STAT software version 9.4 and R version 3.4.3.

Role of the funding source

The BC Centre for Disease Control supported construction of the BC-HTC to inform policy and programs related to HCV in BC. The study's funders had no role in study design, data analysis, data interpretation, or writing of the article.

Patient and public involvement

In Spring 2020, study investigators engaged with a community-based HIV/HCV organization in BC and a group of women with lived experience (WWLE) of the HCV care cascade to prioritize lines of inquiry. Over the next eight weeks, through a collaborative, consensus-based process, we reviewed results with the group of WWLE to take into account their perspectives and feedback and to ensure findings were interpreted in ways that were destigmatizing and relevant to communities. This work culminated in two open-access 90-minute webinars in Summer 2020 that focused on the WWLE's reflections and policy recommendations in response to the study results [27].
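The viraemia estimate described above combines four components with the stated 25% spontaneous clearance assumption. The following minimal sketch illustrates that arithmetic; the counts are hypothetical placeholders, not study data, and the function is not the study's code.

```python
# Minimal sketch of the four-component 2019 viraemia estimate described
# above. All inputs are hypothetical placeholders, not study data.
SPONTANEOUS_CLEARANCE = 0.25  # ~25% of antibody-positive people clear HCV

def estimate_rna_positive(untreated_last_rna_positive: int,
                          ab_pos_never_rna_or_genotype_tested: int,
                          undiagnosed_estimate: int,
                          treated_known_non_svr: int,
                          treated_no_post_treatment_rna: int,
                          svr_rate: float) -> float:
    """Sum the components of the estimated number of RNA-positive women."""
    return (
        untreated_last_rna_positive                                          # (1)
        + (1 - SPONTANEOUS_CLEARANCE) * ab_pos_never_rna_or_genotype_tested  # (2)
        + (1 - SPONTANEOUS_CLEARANCE) * undiagnosed_estimate                 # (3)
        + treated_known_non_svr                                              # (4a) observed
        + (1 - svr_rate) * treated_no_post_treatment_rna                     # (4b) imputed
    )

# Hypothetical inputs, for illustration only.
print(round(estimate_rna_positive(4000, 2700, 3000, 340, 1400, svr_rate=0.94)))
```

Component (4) is split into women observed not to achieve SVR and an imputed share of treated women with no post-treatment RNA test, using the SVR rate observed among those who were tested.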
Results

Characteristics of women in each stage of the 2019 HCV care cascade are presented in Table 1. Women born within the 1945-1964 birth cohort represented the highest proportion of all anti-HCV positive women (51.5%) and of those in each stage of the care cascade thereafter, including 56.2% of those genotyped, 63.5% of those who initiated HCV treatment, and 66.5% of those who achieved SVR. Younger women born between 1965 and 1974 and in 1975 or later made up 21.3% and 22.2% of all anti-HCV positive women, respectively, yet smaller proportions of those women progressed through the care cascade, making up just 17.7% and 14% of women who initiated HCV treatment, and 16.5% and 12.4% of women who achieved SVR, respectively. Women with East Asian or South Asian ethnicity comprised 5.4% and 4.5% of anti-HCV positive women in BC, with proportions increasing slightly as the women progressed through the care cascade stages, reaching 5.6% and 5.9% of women who initiated HCV treatment and 6.1% and 5.3% of women who achieved SVR, respectively. Women within the most materially deprived quintile (Q5) made up a consistent proportion in each cascade stage, including 27.6% of those anti-HCV diagnosed, 27.8% of those RNA tested, 26.9% of those genotyped, 24.6% of those who initiated treatment, and 23.7% of those who achieved SVR. The proportions of women in the cascade stages who were in the most socially deprived quintile (Q5) increased from 22% of those anti-HCV diagnosed and RNA tested to 35.9% of those genotyped, 32.9% of those who initiated treatment, and 32.3% of those who achieved SVR. Women with a history of injection drug use made up 37.2% of women who were anti-HCV positive, 38.7% of women who were RNA tested, and 37.4% of women who were genotype tested; yet they made up just 31.4% of women who initiated treatment and 28.6% of women who achieved SVR. Women with a history of injection drug use who had used opioids made up about two-thirds of women in each stage of the HCV care cascade. Women with a history of injection drug use who had used stimulants made up 59.5% of anti-HCV positive women, 60.6% of those RNA tested, 59.8% of those genotyped, and 56.6% of those initiated on treatment. The proportions of women in each stage of the cascade who had been diagnosed with a major mental health disorder were fairly consistent, including 37.8% of those anti-HCV positive, 39.6% of those RNA tested, 39.4% of those genotyped, 38.1% of those who started treatment, and 36.6% of those who achieved SVR. Women with HCV/HBV co-infection made up 4.6% of women who were anti-HCV positive, 4.3% of those RNA tested and genotyped, 4.4% of those treatment initiated, and 4.6% of those who achieved SVR. Women with HCV/HIV co-infection made up 2.9% of anti-HCV positive women, 3.1% of those who were RNA tested, and 3.3% of those genotyped. Higher proportions of anti-HCV positive women than men who had not been RNA tested were within the most deprived material quintiles (p = 0.034), whereas lower proportions of women than men who had not been RNA tested were within the most deprived social quintiles (p = 0.013). Increasing proportions of women and men who used injection drugs were left behind across the HCV cascade of care stages. Higher proportions of women than men who had injected drugs had not been RNA tested (28.2% vs. 24.1%, respectively) (p < 0.001), genotyped (43.9% vs. 37.1%, respectively) (p < 0.001), or treated (49.8% vs. 44.9%, respectively) (p < 0.001).
Corresponding disparities were observed among people with a history of injection drug use who had used opioids, with 59.5% of women compared to 53.2% of men not being genotyped (p < 0.034), and 66.8% of women compared to 59.6% of men not initiating treatment (p < 0.001). Among people with a history of injection drug use who had used stimulants, 58.6% of women compared to 54.3% of men had not been genotyped, and 64% of women compared to 56.8% of men had not initiated treatment (p < 0.001). Higher proportions of anti-HCV diagnosed women with a mental health diagnosis compared to men with a mental health diagnosis had not been RNA tested (27% vs. 19.1%, respectively) (p < 0.001), genotyped (36.1% vs. 29.5%, respectively) (p < 0.001), or initiated on treatment (42.2% vs. 34%, respectively) (p < 0.001).

Discussion

Using population-based HCV care cascade monitoring data, this study has described women within and outside the HCV care cascade in BC, Canada. We observed steady progress across the care cascade among women living with HCV, with a substantial increase in treatment uptake after the introduction of DAAs in 2015 and expanded coverage starting in 2017. This increase also led to a reduction in the estimated prevalence of viraemic women in the province, from 0.4% in 2000 to 0.2% in 2019 [15]. In 2019, nearly equal proportions of women and men progressed through the HCV care cascade. These results should reassure public health programs and treatment providers that significant progress is being made toward eliminating HCV infection in BC. We also identified key groups of women being left behind in the care cascade: specifically, younger women were less likely to progress across the cascade stages compared to men of the same age, which may impact reproductive outcomes. Similarly, women with problematic substance use were less likely to receive treatment. These findings highlighted opportunities to adapt programming and clinical care plans to accommodate women's needs, as HCV risk environments and barriers to treatment frequently intersect with sex- and gender-based realities. This study demonstrated that in 2019, just over half of women diagnosed anti-HCV positive in BC were born between 1945 and 1964 (baby boomers) and that this birth cohort represented an increasing proportion of women in subsequent HCV care cascade stages. Women in the 1965-1974 birth cohort comprised a significant proportion of women who were RNA positive and of those treated for HCV. These findings support previous research demonstrating that though the overall rate of HCV infection in the 1945-1974 birth cohort is declining, this population still makes up the majority of prevalent HCV infections in BC and Canada and of those in need of HCV treatment [14,28,29]. Baby boomers have thus been identified as a priority population in Canada's HCV elimination targets, and national testing guidelines recommend one-time HCV screening of all Canadians born between 1945 and 1974 [14,30]. Most HCV infections among baby boomers result from past exposure in medical settings or past injection drug use; however, they may be less likely to seek out testing and treatment due to a lack of HCV awareness, difficulty recalling past exposures, or stigma related to substance use [31]. In the inverse HCV care cascade, women and men born between 1945 and 1964 made up similarly high proportions of those not RNA tested, genotyped, or treatment
initiated. Women born between 1965 and 1974 made up about one quarter of women not RNA tested, genotyped, or treatment initiated. As previously discussed, for older women living with HCV, the risk of accelerated liver fibrosis progression becomes a concern. Most younger women living with chronic HCV experience slower liver disease progression, including cirrhosis and hepatocellular carcinoma [32], but some biological studies suggest that post-menopausal women may lose the putative protective effect of estrogen on the liver due to a decline of estrogen levels in the post-menopausal period [1,33]. Older women who have unknowingly been living with HCV for decades, and those who are aware of their HCV diagnosis but have not yet engaged in the HCV care cascade, may be at risk for advanced liver disease. Promising interventions aimed at increasing HCV screening and linkage to HCV care among baby boomers have leveraged the utility of electronic health records by adding HCV status to routine patient maintenance reminders for healthcare providers, followed by coordinated linkage to HCV treatment [31]. Similar approaches that also work to reduce the stigma associated with HCV infection may serve to identify older women in BC who are unaware of their HCV status and encourage engagement in the care cascade [11]. Younger women born after 1975 comprised 22.2% of anti-HCV positive women, yet made up successively lower proportions of women in each HCV care cascade stage in 2019. Conversely, in the inverse HCV care cascade, these women comprised higher proportions than men among those not RNA tested, genotyped, or initiated on treatment. This finding parallels a previous study using population laboratory surveillance data in BC that demonstrated a significant increase over time in the proportion of newly diagnosed HCV positive women within an age range of reproductive potential who were lost to follow-up for RNA and/or genotype testing, from 10.2% in 2008 to 24.3% in 2019 [34]. Similarly, a large cohort study involving Veterans Administration data in the United States found that younger women had significantly lower odds of receiving DAA treatment than younger men [35]. As mentioned above, there is a risk of vertical transmission among younger women living with HCV who become pregnant. A number of population-based studies in the US have indicated that rising maternal and pediatric HCV prevalence is likely related to concomitant increasing opioid use among women of reproductive potential [3,[36][37][38]. In the BC Hepatitis Testers Cohort, 61% of women with chronic HCV infection who were born after 1975 had histories of injection drug use and opiate use, among whom 50% had not been treated for HCV as of 2019. Treating women before or between pregnancies is therefore essential; yet, considering the gendered realities faced by women living with HCV, they must be assured that they will receive individual and family support as part of their HCV care plan. Younger women with past or current substance use may avoid or delay both prenatal and HCV care because of potential stigma within healthcare towards people who use substances or the possibility of their children being apprehended due to child welfare concerns [39].
Younger women may also be managing competing health, social, and economic priorities and feel they must delay treatment [40]. Outside of pregnancy, HCV infection is a concern for women's health [41]. Because liver disease typically progresses more slowly in younger women, healthcare providers may mistakenly not prioritize HCV treatment for younger women with chronic HCV infection. Awareness of the potential long- and short-term extrahepatic manifestations of HCV infection and potential improvements in quality of life should be emphasized to both care providers and younger women living with HCV. The majority of anti-HCV positive women in each stage of the HCV care cascade were within the most severe quintiles for material and social deprivation. In the inverse HCV care cascade, similarly high proportions of anti-HCV positive women and men who were not RNA tested, genotyped, or initiated on treatment had severe material deprivation. Likewise, high and similar proportions of anti-HCV positive women and men who had not been RNA tested, genotyped, or initiated on treatment had severe social deprivation. Though HCV care and treatment in BC is publicly available to all living with HCV through universal healthcare, poverty and social isolation intersect with the multifaceted issues faced by women with HCV. Women living with HCV have reported navigating gender-based violence, racism in the healthcare system, and immigration processes while juggling work, childcare, and other competing priorities [10,42,43]. These situations can create complex obstacles to women's HCV care, wherein some groups of women become among those who are "hardly reached" by treatment providers [15,16]. It is important to note that although HCV positive women who have recently immigrated, who are Indigenous or Black, who are involved in sex work, or who are unstably housed were not identifiable in our study, the experiences and healthcare needs of these key groups of women have been previously highlighted in research and must not be overlooked moving forward [10,43,44]. Awareness of barriers and expansion of specialized, women-centered approaches, such as culturally-safe HCV outreach and peer-support programming, are therefore essential [45]. Women in BC with a history of injection drug use made up 37.2% of women who were HCV antibody diagnosed, 38.7% of those RNA tested, and 37.4% of those genotype tested; yet they made up just 31.4% of women who initiated treatment and 28.6% of women who achieved SVR. In the inverse HCV care cascade, somewhat higher proportions of women compared to men who had not been RNA tested, genotyped, or initiated on treatment had injected drugs. The proportions of both women and men living with HCV and a history of injection drug use and opioid or stimulant use steadily increased across the inverse cascade stages. Nevertheless, disproportionately higher numbers of anti-HCV positive women who had used opioids or stimulants were left behind in the care cascade. These findings correspond to studies based in the United States that have reported a high frequency of opiate and stimulant use among women at risk of or living with HCV [46]. In other BC population-based analyses, uninterrupted opioid agonist therapy (OAT) was associated with a higher likelihood of HCV treatment uptake among people who inject drugs after adjusting for sex, yet stimulant use disorder was negatively associated with treatment uptake [47,48].
Research has also demonstrated that gendered power dynamics contribute to higher HCV exposure risk for women, such as being second on the needle, requiring help to inject, and needing to negotiate harm reduction with a risk of violence [49][50][51]. Women with lived experience of HCV have highlighted that intersecting experiences of sexism, racism, and discrimination toward women who use injection drugs create significant barriers to accessing healthcare, including addiction treatment [27,43]. Involving HCV-affected women who use drugs in the design and delivery of HCV screening, treatment, and harm reduction programming will result in innovative solutions that address these barriers and lead to more women engaging in the HCV care cascade and experiencing improved wellbeing beyond achieving SVR. Overall, this study demonstrated that in 2019, 37.8% of women and 28.8% of men who were diagnosed anti-HCV positive in BC had had a mental health diagnosis. Anti-HCV positive women with a mental health disorder made up about 40% of women within each stage of the HCV care cascade and increasing proportions of women in each inverse HCV care cascade stage. Higher proportions of women compared to men who had not been RNA tested, genotyped, or initiated on treatment had received a mental health diagnosis. National self-reported data suggest that women in Canada are more likely than men to have had past and recent major depression and generalized anxiety [52] and more likely to perceive that their mental health care needs are not met [53]. Intervention research based in the United States and Australia has reported that patients with severe mental health diagnoses who received HCV care integrated with mental health care had a higher likelihood of achieving SVR [54]. Few of the study participants were women, however, and therefore the relevance and effectiveness of such interventions for women who have mental illness and are living with HCV are unclear. In addition, mental health disorders are frequently concurrent with problematic substance use, requiring specialized care and harm reduction. Women-centred HCV interventions that are trauma-informed, culturally safe, and work within peer-support frameworks may better meet the needs of women diagnosed with mental health disorders [55]. We found that the proportion of women in each stage of the 2019 HCV care cascade living with HCV-HBV co-infection was relatively constant at about 4.5%. In the inverse cascade, proportions of women and men with HCV-HBV co-infection who were not RNA tested were somewhat higher than the proportions who were not genotyped or initiated on treatment, highlighting that those who received RNA testing were likely to progress through subsequent HCV care cascade stages. Somewhat higher proportions of men compared to women in the inverse cascade stages were living with HCV-HIV co-infection, likely reflecting the higher burden of HIV infection among men in BC. Proportions of women and men living with HCV who had cirrhosis and decompensated cirrhosis were similar. It is important to note that although problematic alcohol use was more prevalent among men living with HCV, over 25% of women who were not HCV RNA tested, genotyped, or treated had problematic alcohol use. Accelerated liver disease progression among these women is of grave concern, especially among those unaware of their HCV diagnosis or treatment options.
Continued focus on providing HCV treatment to women living with significant comorbidities is needed, specifically with enhanced models that address relational and contextual barriers to engaging in healthcare among women with HCV and HBV or HIV co-infection.

Limitations

Although this study is based on comprehensive data to characterize the HCV cascade of care in BC, there are limitations that impact the measurement of each stage. The model to estimate the number of people who were undiagnosed HCV antibody-positive was based on 2012 BC and Canadian data [21,56]. BC residents have historically tested for HCV more than residents of other provinces, with testing volumes increasing in recent years, especially after the STOP-HIV initiative began in BC, suggesting that our estimate of the proportion who are undiagnosed may be lower than the national average. Further, the national mandate to test all baby boomers for HCV has increased the number of people born between 1945 and 1965 living with HCV infection who have been diagnosed; subsequently, HCV positivity is declining in this age group. Simultaneously, the number of new/incident cases of HCV has fallen in BC over the past decade, mortality among people with chronic HCV is higher compared to people without HCV, and uptake of curative DAA treatments is increasing [57]. This study may therefore overestimate the number of undiagnosed and prevalent cases of HCV in BC; however, the estimated fraction of undiagnosed people in our cohort was similar to what Hamadeh et al. (2020) reported in population model estimates of chronic HCV infection in the province (33.3%) [58]. In addition, BC-HTC data do not contain information about gender identity, and therefore we cannot comment on the HCV care cascade experienced by people classified as female sex assigned at birth but who do not identify as women. We recognize that transgender men and other gender-diverse people may experience unique barriers to HCV screening and linkage to HCV care. Future work should focus on the specific HCV care needs of this key population. Though we validated Onomap for use in the BC population, it is not able to identify all people, in particular: those who would describe themselves as having a mixed ethnicity; people whose surnames are not specific to ethnic groups; and people who adopt surnames of another ethnic group. Onomap does not identify people with Indigenous ethnicity. Due to legislated forced assimilation in Canada, many Indigenous peoples' names were changed to biblical or other European names [59]. Thus, there is some misclassification of various ethnic groups through this methodology. We used diagnostic codes in administrative datasets to assess history of mental illness and substance use. This raises several issues: bias towards underestimating prevalence in those less engaged in healthcare, and potential misclassification related to the sensitivity and specificity of these measures. Potentially lower linkage rates in some key groups would result in less representation, especially among people who are homeless, street-involved, and incarcerated [19].

Conclusions

This study has shown that women are progressing similarly to men across the HCV care cascade stages. However, gaps remain for some groups of women, particularly baby boomers and younger women, women experiencing poverty and social isolation, women with problematic substance use, and women with mental health disorders.
Though access to HCV testing and treatment has expanded dramatically with DAAs, systemic barriers to testing and treatment in BC, especially within primary care and community-based health and social services [60], disproportionately impact marginalized populations. Programming that is peer-based and specifically reaches out to support women to engage or re-engage with the HCV care cascade could help BC reach HCV elimination targets, as well as achieve equity of health care access and outcomes. Such programming must understand and address the overlapping challenges faced by women living with HCV, as they are frequently gendered and exacerbate barriers to engaging in any form of healthcare.
Development of a program for tele-rehabilitation of COPD patients across sectors: co-innovation in a network. Introduction The aim of the Telekat project is to prevent re-admissions of patients with chronic obstructive pulmonary disease (COPD) by developing a preventive program of tele-rehabilitation across sectors for COPD patients. The development of the program is based on a co-innovation process between COPD patients, relatives, healthcare professionals and representatives from private firms and universities. This paper discusses the obstacles that arise in the co-innovation process of developing an integrated technique for tele-rehabilitation of COPD patients. Theory Network and innovation theory. Methods The case study method was applied. A triangulation of data collection techniques was used: documents, observations (123 hours), qualitative interviews (n=32) and action research. Findings Obstacles were identified in the network context; these obstacles included the mindset of the healthcare professionals, inter-professional relations, views of technology as a tool, and competing visions for the goals of tele-rehabilitation. Conclusion We have identified obstacles that emerge in the co-innovation process when developing a programme for tele-rehabilitation of COPD patients in an inter-organizational context. Action research was carried out and may have helped to facilitate the co-innovation process. Patients with chronic obstructive pulmonary disease (COPD) pose a serious public health problem. It is estimated that 210 million people have COPD worldwide, and that more than three million people died of COPD in 2005, equal to 5% of all deaths globally that year [3]. Patients with severe and very severe COPD have a readmission rate of 63% during a mean follow-up of 1.1 years, with physical inactivity among the most significant predictors of readmission [4]. According to the global strategy for diagnosing, managing and preventing COPD, stable COPD is managed using a combination of interventions such as smoking cessation, pharmacological therapy, education, pulmonary rehabilitation, nutritional interventions, vaccinations, oxygen therapy and surgery [5]. However, the question remains as to the most effective means of delivering and coordinating multidisciplinary care for COPD patients along the disease continuum and across the healthcare system [6]. Reviews of disease management programs for COPD patients show programs that are heterogeneous in terms of interventions, outcome measures and study design. However, quality of life is improved, and triple intervention programs have resulted in a lower probability of at least one hospital admission compared to usual care. The reviews also conclude that there is a need for more research on chronic disease management programs in patients with COPD across primary and secondary care [7,8]. Studies of home tele-monitoring of chronic diseases, including COPD, indicate home tele-monitoring to be a promising approach that empowers patients, positively influences their attitudes and behavior, and potentially improves their medical condition [9,10]. In the research and innovation project called 'Telehomecare, chronic patients and the integrated healthcare system' (the Telekat project), we have taken up the challenge of combining co-innovation, disease management and technology in order to develop a tele-rehabilitation program for Danish COPD patients.
'Tele-rehabilitation' can be defined as rehabilitation between the patient's home and healthcare professionals with the support of communication and information technology. In the Telekat project, the patient groups are those with severe or very severe COPD. The aims of the project are (1) to prevent re-admission of COPD patients by promoting home-based tele-rehabilitation; and (2) to develop and test a preventive program of rehabilitation for people with COPD across sectors. The development of the program of tele-rehabilitation across sectors is based on a co-innovation process that involves COPD patients, their relatives, healthcare professionals and representatives from private firms and universities. Relatively little research has been conducted to explore co-innovation processes in complex healthcare networks that are constructed as innovation alliances [11]. There is limited systematic research on the development of system preparedness for participating in an innovation process and anticipating the impact of an innovation [12]. Findings from a study of the design and implementation phases of a telehomecare system identified several types of controversies that emerge as a part of the interorganizational and inter-professional agenda. These controversies involved competing claims of jurisdiction over knowledge technologies or differences in network visions [13]. The research has focused mainly on the adoption phase of innovations rather than the earlier phases of idea development, conceptualization, obstacles and legitimatization in the innovation process whereby new services and practices are established in the healthcare sector [14,15]. In order to expand our understanding of co-innovation in a network, this paper focuses on identifying potential obstacles that might arise when developing an integrated program for tele-rehabilitation of COPD patients. The level of analysis is the various actors involved in the Telekat network: the COPD patients, relatives, healthcare professionals and university and private technology providers. Presentation of the Telekat case study We begin by introducing the context and parties in the case study, followed by a presentation of the design of the co-innovation process. Finally, we describe how the tele-rehabilitation process developed despite obstacles in the co-innovation process. Presentation of context and parties In 2007, the Danish healthcare reform transferred responsibility for rehabilitation from the hospitals to the municipalities. Today, Danish patients with severe and very severe COPD are offered rehabilitation when the clinical symptoms limit their functional level and quality of life. The rehabilitation includes physical training, instruction in the disease, nutrition guidance, pulmonary physiotherapy, assistance to stop smoking, etc. The rehabilitation typically takes place as an instructional course administered by the municipality or hospital. The course, of six weeks' duration, is held away from the patient's home. Clinical experience shows that COPD patients attend the rehabilitation course several times to prevent further worsening of the disease. The Telekat project has attempted to develop a tele-rehabilitation program which takes place in the patients' own homes and in collaboration with various healthcare professionals, such as district nurses, general practitioners (GP), nurses and doctors at a healthcare centre and hospital. 
Rehabilitation can thus become a part of everyday life and eventually help break an often downward spiral of decreasing well-being for the person suffering from COPD. The following parties were involved in the co-innovation process: • The healthcare center aims to develop and implement rehabilitation programs for patients with a chronic disease, such as COPD patients. The center has had more than 700 COPD patients for rehabilitation since early 2007. • The Pulmonary Medical Clinic at a university hospital is the regional competence centre for COPD patients with severe and very severe illness. Specialized nurses and doctors see the patients regularly at the outpatient clinic. • District nursing takes care of those patients with chronic diseases who need monitoring, counseling and special assistance, such as administering medication in their homes. The Danish healthcare reform has changed the district nurse's role so that their work also has a preventive focus. • The GP is the patient's doctor. The GP must coordinate patient care and treatment across sectors and advise the patients on rehabilitation options. • The firms are specialized in IT and telehealth solutions and operate in national and international markets. • The COPD patients and relatives: the COPD patients suffer from severe or very severe illness and have attended courses in rehabilitation at the healthcare center. • The universities have research experience within user-driven innovation and telehealth. None of the healthcare professionals, COPD patients or relatives had experience with tele-rehabilitation technology. Design of the co-innovation process The Telekat project began in January 2008 and ends in June 2011. The project is divided into four phases, and this paper focuses on phases I and II. Phase I (January-June 2008): Design phase. Findings and results from phases III and IV are now being prepared for publication. The co-innovation process has been centered on two forums: a user panel and a network laboratory. Table 1 shows an overview of the aims of the forums, their members and the numbers of workshops during phases I and II. In preparation for the first workshop in the user panel, researchers had conducted qualitative studies in the homes of the COPD patients in order to identify their expressed and unarticulated needs [16] in connection with rehabilitation technologies. The data were then presented and integrated into ideas and concepts in the user panel. Concurrently, prior to the first workshop in the network laboratory, researchers conducted participant-observation (see Methods), following healthcare professionals at work in order to identify professional issues concerning rehabilitation of COPD patients. The observations were presented and integrated into the work of the network laboratory. Alongside these workshops, working groups were set up to deal with specific technological and clinical issues. Researchers facilitated the co-innovation process via action research (see Methods) in order to create collective reflections, empower the participants to generate new ideas and establish synergy between the forums. Emerging themes in the development of the program In the development of the program, our field observations revealed several key issues in connection with the co-innovation process between the parties. The COPD patients wanted to move rehabilitation activities to their homes but still have the possibility of being in contact with healthcare professionals.
They expressed the desire to learn more about their own disease while carrying out their daily routines at home, and they wanted to learn more about monitoring their own symptoms. The healthcare professionals, researchers and firms expressed a vision of being able to empower the COPD patients in managing their own disease so that they could avoid readmissions. The healthcare professionals wanted to be able to use each other's competences across sectors for the benefit of the patients, to share data, and to give the patients more responsibility and quality of life by having them carry out rehabilitation activities in their own homes; this would improve their physical and mental condition. In the process of developing the concept of tele-rehabilitation through the workshops, the following themes emerged:

• The Telekat tele-rehabilitation program

The program consisted of the following operations. A telehealth monitor box is installed in the patient's home. Using wireless technology, the telehealth monitor can collect data about the patient's blood pressure, pulse, weight, oxygen level, lung function, etc. and transmit the data via the Internet to a web-based portal or directly into the patient's electronic healthcare record. Healthcare professionals, such as district nurses, GPs, and nurses, doctors and physiotherapists at the healthcare centre or hospital, can assess the patient's data, monitor the patient's disease and training inputs, and provide advice to the patient. The patients and relatives can also view the data on the web portal, and they can decide with whom they want to share their data (see Figure 1; an illustrative sketch of this consent-controlled data flow is given after the Conclusion). The patient has the equipment placed in the home for four months. The patient receives an individual training program from a physiotherapist and may carry out home-based exercises. A tele-rehabilitation team consisting of healthcare professionals from primary and secondary care meets virtually to coordinate and discuss the individual rehabilitation program for the COPD patients.

Theoretical framework

A combination of theories on networks and innovation constitutes the conceptual framework for this study. Classic organizational theories tend to overlook network issues, paying attention only to the parties carrying out their respective shares of the combined processes and tasks. Network theory opens up the boundaries of the organizations and helps explain network dynamics and processes. Theories of innovation in networks can elucidate the dynamics, interactions and creative processes that take place between parties when developing new services and concepts. A network is defined as "the basic social form that permits inter-organizational interactions of exchange, concerted action, and joint production. Networks are unbounded or bounded clusters of organizations that, by definition, are non-hierarchical collectives of legally separate units" [17, p. 46]. The network literature reveals different models of networks [18,19]. The Telekat network can be characterized as systemic: it contains different parties with unequal capabilities working together in a value chain in an inter-organizational field to solve a joint task, in this case the tele-rehabilitation of COPD patients. Any network consists of five elements: parties, processes, vision, architecture and culture. The parties are the resources of the network. A crucial element in relations between the network parties is trust.
Network processes are centered on coordination, exchange of information and joint problem-solving between the organizations. The vision of the network is a joint vision, in this case the effective tele-rehabilitation of COPD patients. The network architecture shapes the structural framework for collaboration. The formal and informal culture in the network constitutes the norms and values for interaction between the parties. Competencies in the network are attached to the parties' 'home' organizations, such as the mental models and attitudes of the parties or their knowledge and skills. The innovation literature distinguishes between incremental and radical innovations. Incremental innovation consists of small steps whereby services or workflows are improved [20]. In contrast, a radical innovation implements a fundamentally new idea. Creating co-innovation between multi-organizational networks involves two types of change: creating an initial network and managing change within an established network. Change processes in networks are complex and not well understood in the literature [21]. Building a new network entails establishing new relations between parties, building new roles, establishing a new vision in the system, etc. Changing an existing network must account for relationships between organizations within the whole system. The multiple and complex relationships in a 'fusion of networks' produce emergent phenomena which are difficult to explain just by knowing the parties. Hence, it is difficult to predict how the networks will react over time [21]. In the Telekat project, we have focused on creating radical process innovation in a co-innovation process. The co-innovators are the network parties: the COPD patients, relatives, and public and private organizations. The literature distinguishes types of co-innovation according to their level of analysis [21; 22, pp. 3-4]. There can be co-innovation (1) between departments within a firm, (2) between firms in a horizontal and vertical dimension, including public and private organizations, and (3) at a meso- and macro-level, where co-innovation is a co-evolving process between technical and institutional innovations in a long-term perspective. This article focuses on the second type of co-innovation, the horizontal and vertical dimension. In this type, there are multiple levels of interaction involving a network of users and public and private organizations. The ambition of the Telekat project is to create a radical innovation that includes robust changes of actors' perceptions and changes in the existing network composition. These changes involve actors, their positions and access rules [23, p. 226]. In the case of new initiatives, such as tele-rehabilitation in a new network domain, uncertainty about the boundaries of each other's domains, or fear of losing one's own domain, can hinder collaboration and interaction. Formalizing actors' competencies and domains in rules can bring certainty, and actors can design new domain agreements on issues such as responsibility [22, pp. 212-232]. In this perspective, it is essential to identify the obstacles in a co-innovation process.

Case study

The case study method [24] was chosen as the overall research strategy for this study. The case study was used to elucidate the co-innovation process of an integrated program for tele-rehabilitation of COPD patients in its operational context.
The case study approach makes it possible to study the obstacles that emerge in the co-innovation process as they unfold. The study included an ongoing process analysis during the design and clinical testing phases of the program of tele-rehabilitation. The theoretical framework informing the process analysis was based on network and innovation theory as a means of understanding the factors that facilitate or impede the co-innovation process.

Action research

Action research can be defined as an umbrella for value-based research in which knowledge contributes to collective actions that change existing situations and mindsets. Action research can also be defined as research that contributes to the empowerment of processes [25]. Doing action research means going beyond the traditional expert role and seeing oneself as a co-creator of democratic and change-oriented knowledge in cooperation with the other parties. The aim of the Telekat project was to facilitate the co-innovative process of developing an integrative program of rehabilitation of COPD patients across sectors using tele-rehabilitation technology. Many parties with different interests participated in the process, and in order to identify the obstacles and facilitate user dialogue, action research was carried out. Interventions were made when discussions reached a deadlock or became too personal. To avoid bias in the use of action research, discussions [26] were carried out with research colleagues, and field notes were written before each intervention was carried out.

Data collection techniques

A triangulation of data collection techniques was used in order to provide multiple sources of evidence [24] in the case study. The sources are documents, participant observation and qualitative interviews.

Documents

In order to obtain basic knowledge about the context of the case, different documents such as public reports, rehabilitation plans, minutes from meetings and homepages were studied in the initial phase. Documents related to the project, such as minutes from meetings in working groups and workshops, were studied from phases I and II.

Qualitative interviews

Qualitative interviews [27] were conducted in order to identify the motivations of participants and the perceived obstacles they faced within the activity of the Telekat network. The respondents selected for interviews came from the following groups involved in the network:
• Representatives from district nursing, the hospital, the healthcare center, GPs and firms;
• Managerial staff from the pulmonary medical ward at the hospital, district nursing and the healthcare center;
• Principal participants from IT and administration in the municipality and region.
Table 2 provides a description of the interviewed respondents in phases I and II. A total of 32 interviews were conducted. All respondents gave their oral consent to participate in the interviews. The interviews were conducted as semi-structured interviews lasting 1-1.5 hours. The interviews were recorded and transcribed. Transcriptions of all interviews were carried out by one person. The same two researchers conducted all interviews.

Focus group interviews

By the end of the co-innovation process in phases I and II, focus group interviews had been carried out with the user panel. The aim of the focus group interviews was to validate observations and issues from the interviews. The respondents in the focus group interviews gave their oral consent to participate in the interviews. Patients in the user panel gave their written consent to participate in the interviews. The focus group interviews were conducted as semi-structured interviews and lasted 1.5 hours. The interviews were recorded and transcribed. Transcriptions of all interviews were carried out by one person. The same researchers conducted all focus group interviews.

Participatory observations

Throughout the innovation process in phases I and II, participatory observations [28,29] were carried out. The aim of the observations was three-fold. First, we sought to observe interactions and discussions among the participants while developing the program of tele-rehabilitation across sectors. Second, we wanted to pose questions about observed obstacles in order to obtain an understanding of participants' motivations in the project. Third, we sought to observe how the concept of tele-rehabilitation was tested in clinical practice. The observations were conducted at meetings in working groups, at workshops and in the network laboratory, all of which were forums where participants took part in the co-innovation process. The clinical observations took place while accompanying nurses and doctors at work in the hospital, in patients' homes and at the healthcare center. Observation checklists were used and field notes were taken. Three researchers carried out the observations, and a total of 123 hours were spent on observations during phases I and II.

Data analysis methods

All the transcribed interviews were coded with NVivo 8.0 software and analyzed using methods inspired by Kvale and Brinkmann (2009). The data were analyzed using a combination of deductive and inductive strategies. The code tree was formed on the basis of central definitions and concepts from the theoretical framework (in vitro nodes) and from the interviews (in vivo nodes). When formulating the concepts from the respondents, 10 qualitative interviews were studied and coded on the basis of a first impression. These interviews covered two district nurses, one nurse from the healthcare center, one GP, one hospital doctor and nurse, one hospital manager, one manager of district nursing, and one employee and one manager from two firms. The next step was a rough coding, followed by more refined coding after a review of the coded material and adjustments. This step sought to identify topics and patterns, and the interpretation was widened to include a framework of understanding beyond the respondents. This phase included an in-depth interpretation held up against common-sense understanding; the interviews were analyzed with a view to inferring motivations and underlying perceptions. The process was carried out in dialog with research colleagues. There are certain sources of bias in the application of a computer program for data analysis. First, computer coding entails a decontextualization of the data. Second, the software was developed on the basis of grounded theory, an inductive approach, whereas a combined coding strategy was deployed in the Telekat project. Third, the application of the software gives the researcher a 'feeling of being distant' from the data. Throughout the project, all data collected through phases I and II were validated in collaboration with research colleagues, through an ongoing dialog with healthcare professionals and through the triangulation of data sources.

Limitations of the research design

In relation to conducting a case study, one of the recurring discussions concerns its generalizability. In order to optimize the generalizability of case studies, the case study literature [24,30] tends to recommend strategic case selection or analytical generalization. Here we can simply point out that in the Telekat project, analytical generalization has been applied by using a theoretical framework. A triangulation of data collection and analysis supports the process of analytical generalization. In this way, obstacles in the co-innovation process can be singled out. The researchers' involvement in the case study makes it important to maintain distance when integrating data with theory in order to prevent the process from becoming theoretically tautological.

Ethical approval

Ethical approval was obtained from the local Ethics Committee (August 27, 2008/N-20080049).

Table 3 presents a thematic listing of the obstacles identified in the co-innovation process.

Management of healthcare accords

The healthcare professionals have work routines that require them to organize their working plans six weeks in advance. This means that meetings and workshops in the innovation process had to be planned at least two months in advance in order to respect the daily work routines. A nurse at the hospital stated: "It gives us discontinuity in the creative process, and if we get some new ideas and want an extra meeting, we have to wait until the next working schedule has been planned".

Lack of learning culture

Management and employees within district nursing and the healthcare center state that they do not have, or do not take, the time for reflection and joint discussions about the innovation process. As a district nurse explained: "If I had discussed the ideas from workshops about the concepts of tele-rehabilitation with my colleagues, I probably would have brought more new aspects into the innovation process".

The mindset of the healthcare professionals

Concern about sharing responsibility between healthcare professionals and COPD patients

The healthcare professionals expressed concern about how responsibility should be shared between themselves and the COPD patients, and about how patients would react when their measured values were beyond the acceptable range. Observations showed that the professionals raised questions such as: "Will the patients expect us to follow the measured values all the time?" "Will the patients be able to react in time if the values are out of range?" In order to learn more about this problem, action research was carried out with the goal of having the healthcare professionals reflect on how responsibility for the patients' condition could be most effectively shared between professionals and patients.

To think 'out of the box'

During interviews, the clinicians expressed the view that it is difficult to work in a creative mode and to think in utopias. They expressed the view that they were not used to working creatively in interdisciplinary groups across sectors for the purpose of developing a joint concept for tele-rehabilitation. In order to facilitate a creative innovative process, action research was used to empower the healthcare professionals to generate new ideas.

Viewing patients as co-innovators

In the interviews, healthcare professionals stated that they found it difficult to collaborate with the COPD patients in order to innovate a new concept for tele-rehabilitation. The healthcare professionals saw themselves as the experts on the COPD patients' needs. Observations from workshops showed that the healthcare professionals responded with reservations when confronted with ideas from patients compared with ideas from firms, researchers or healthcare professionals. A GP stated: "How do COPD patients know what their tele-rehabilitation needs are?"

Specialist versus generalist

Nurses at the hospital and healthcare center expressed doubt that the district nurses had the necessary competence to counsel COPD patients on rehabilitation. A hospital nurse stated: "How can a district nurse have the knowledge to guide a patient on rehabilitation activities? They are generalists in homecare". Action research was carried out in order to stimulate the group of healthcare professionals to reflect on what level of knowledge was necessary in order to guide a COPD patient during tele-rehabilitation.

Using technology to work preventively

All groups of healthcare professionals expressed the view that they found it difficult to combine preventive rehabilitation with technology. They raised questions such as: "What can we use all the measured values for?" "Will the COPD patients become more worried about their illness?" "How will the patients' quality of life be affected by measuring the values?" Action research was conducted among the healthcare professionals in order to create joint reflections on how the measured values could become an issue for counseling the COPD patients in their rehabilitation activities in their everyday lives, e.g., for monitoring the development of their symptoms.

Technology creates information overload

The GPs were concerned that the tele-rehabilitation equipment would cause an information overload in the GPs' electronic patient records. The GPs asked: "What happens if we do not pay attention to measurements that are out of range?" "Can you design some intelligent software to help us with decision-making?" Observations showed that the GPs were worried about potential information overload in the patients' records. However, the firms assured the GPs that they could insert 'intelligence' into the software so that the GPs would not have to fear a situation in which they neglected to see a key danger signal in the measurements.

Business versus healthcare visions

The firms have visions for product and concept development arising from their market strategies. They place priority on developing software and hardware that can sell on national and international markets, independent of the specific organization of healthcare systems in other regions or countries. A representative of one firm explained: "We have to create concepts that fit both the national and international market on telehealth".

Discussion

We have explored and identified obstacles that needed to be overcome in the initial phases of a co-innovation process. In the network context, work contracts are an inherent obstacle that can conflict with the planning of an innovation process in the public sector. These work responsibility conflicts can be overcome if management is flexible and has the possibility (resources) to integrate the creative activities with the daily work. A lack of learning culture (knowledge sharing between colleagues) in the organizations can be a major obstacle to overcome in order to ensure a culture of readiness for taking part in innovation processes. To catalyze the mindset of the healthcare professionals toward thinking 'out of the box' and recognizing patients as co-innovators, action research was carried out to empower the healthcare professionals and the patients. The innovation process was designed so that the patients' ideas became a direct part of the process.
The intent was to avoid a situation in which the program for tele-rehabilitation was developed at the expense of the healthcare professionals' authority. We cannot identify studies that have focused on this issue, and further research is needed. An issue that reached a deadlock was how to share responsibility between healthcare professionals and COPD patients in facilitating tele-rehabilitation. We observed that the healthcare professionals exhibited varying perspectives on COPD rehabilitation and on how to share responsibility. In order to resolve this dilemma, action research was carried out so as to create collective reflection in the sense-making process. This step was important, as the intervention served as a springboard for a joint understanding and concept to be tried out in clinical practice. Weick et al. (2005) state that the process of sense-making unfolds as a sequence in which people are concerned with identity in a social context and are engaged in ongoing circumstances from which they extract cues, make sense retrospectively, and continue to enact the ongoing process [31]. Creating co-innovation between multiple organizations and professionals is complex (see the theoretical framework), and the discussion of knowledge sharing between specialists versus generalists emerged as an obstacle. This issue is seen in a similar study of the development of a telehomecare solution in an inter-organizational field [13]. Action research was used as an approach to overcome the obstacle of seeing technology merely as a tool: action research can help participants in an innovation process to see the potential of the technology, create utopias and achieve a better adoption of the technology in clinical practice [32]. In the innovation alliance (the Telekat network), the competing business versus healthcare visions were an inherent obstacle due to the different mandates, goals, tasks, competences and cultures among the parties. Action research was carried out in order to encourage the parties to see beyond their own immediate mandates and professional concepts. Lundin et al. (2008) confirm that doing action research in a network context raises issues such as the local versus the global aspect; in our study, this was relevant to the national market versus the international market [33]. Action research is subject to constant debate concerning difficulties of generalization due to the role of the intervening researcher [30]. In order to deal with this critique, we have documented our observations and interventions as field notes, and carried out collective reflection in the process of problem identification, data gathering and joint diagnosis of the problem before taking action in the Telekat network. Though we have experienced some difficulties in carrying out action research, such as avoiding 'lecturing' the employees in dialogues and avoiding conflicts of power between management and employees, we regard action research as an important tool for facilitating co-innovation processes in a network containing multiple organizations and new technologies. Action research was used to facilitate interlocking interactions in the innovation process or to raise questions in discussions that reached a deadlock. Researchers in a Swedish study argue that an action researcher creates new relationships and actor conceptions and becomes an active creator of the discourse, thus shaping the collaboration in an inter-organizational network [34].
Further research is needed in order to gain more knowledge of the obstacles to the co-innovation process. The project seems to have overcome the initial obstacles and reached the point of co-innovation. The tele-rehabilitation program is now being tested in clinical practice and seems to show promising results in helping patients to avoid readmission, fragmentation and the potential discontinuities related to distance treatment of COPD patients [35]. Bonney et al. (2007) confirm that co-innovation in a network is possible when the parties create a shared vision, consistent structures and processes, opportunities for mutual benefits, and co-operation. A successful tele-rehabilitation program both relies on, and can generate, relations of trust and commitment [36].

Conclusion

We have identified obstacles that emerge in the co-innovation process when developing a program for tele-rehabilitation of COPD patients in an inter-organizational context. Obstacles are identified in the network context; in the mindset of the healthcare professionals; in inter-professional relations; in seeing technology as a tool; and, finally, in competing visions. Action research was carried out and may have had a mediating role in helping the co-innovation process to succeed.
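As an illustration of the consent-controlled data flow described in 'The Telekat tele-rehabilitation program' above, the following is a minimal Python sketch. It is not part of the Telekat system; all class names, fields and role labels are hypothetical, chosen only to mirror the described flow in which the patient decides with whom measurements on the web portal are shared.

```python
from dataclasses import dataclass, field

@dataclass
class Measurement:
    # One reading collected by the home telehealth monitor box
    kind: str      # e.g. "blood_pressure", "pulse", "weight", "oxygen_level"
    value: float
    unit: str

@dataclass
class PatientRecord:
    # Hypothetical record mirroring the described flow: the patient grants
    # access roles; professionals see data only with the patient's consent.
    patient_id: str
    shared_with: set = field(default_factory=set)     # e.g. {"district_nurse", "gp"}
    measurements: list = field(default_factory=list)

    def add(self, m: Measurement) -> None:
        self.measurements.append(m)

    def view(self, role: str) -> list:
        # The patient always sees their own data; others need a grant
        if role == "patient" or role in self.shared_with:
            return list(self.measurements)
        return []

record = PatientRecord("copd-001", shared_with={"district_nurse", "gp"})
record.add(Measurement("oxygen_level", 93.0, "% SpO2"))
print(record.view("district_nurse"))   # granted: sees the reading
print(record.view("physiotherapist"))  # not granted: sees nothing
```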
SUPERLATIVE ALGORITHM FOR REDUCTION OF ACTIVE POWER LOSS

This paper proposes the Spinner Dolphin Algorithm (SDA) for solving the optimal reactive power problem. Echolocation is the biological sonar used by the spinner dolphin; it is also used by a few other kinds of animals for direction-finding and for hunting in diverse environments. This ability of the spinner dolphin is imitated in this paper to develop a new procedure for solving the optimal reactive power problem. The SDA takes advantage of the governing rules of echolocation and outperforms many robust optimization methods, yielding excellent results with small computational effort. In order to evaluate the efficiency of the proposed algorithm, it has been tested on the standard IEEE 57- and 118-bus systems and compared with other published algorithms. Simulation results show that the SDA is superior to the other algorithms in reducing the real power loss while keeping voltage profiles within their limits.

Introduction

The optimal reactive power problem minimizes the real power loss and the bus voltage deviation. Various mathematical techniques such as the gradient method [1,2], the Newton method [3] and linear programming [4-7] have been adopted to solve the optimal reactive power dispatch problem. Both the gradient and Newton methods have difficulty managing inequality constraints. If linear programming is applied, the input-output function has to be expressed as a set of linear functions, which mostly leads to loss of accuracy. The problem of voltage stability and collapse plays a major role in power system planning and operation [8]. Global optimization has received extensive research attention, and a great number of methods have been applied to solve this problem. Evolutionary algorithms such as the genetic algorithm have already been proposed to solve the reactive power flow problem [9,10]. An evolutionary algorithm is a heuristic approach used for minimization problems involving nonlinear and non-differentiable continuous-space functions. In [11], a genetic algorithm has been used to solve the optimal reactive power flow problem. In [12], a hybrid differential evolution algorithm is proposed to improve the voltage stability index. In [13], a biogeography-based algorithm is projected to solve the reactive power dispatch problem. In [14], a fuzzy-based method is used to solve the optimal reactive power scheduling problem. In [15], an improved evolutionary programming is used to solve the optimal reactive power dispatch problem. In [16], the optimal reactive power flow problem is solved by integrating a genetic algorithm with a nonlinear interior-point method. In [17], a pattern algorithm is used to solve an ac-dc optimal reactive power flow model with generator capability limits. In [18], F. Capitanescu proposes a two-step approach to evaluate reactive power reserves with respect to operating constraints and voltage stability. In [19], a programming-based approach is used to solve the optimal reactive power dispatch problem. In [20], A. Kargarian et al. present a probabilistic algorithm for optimal reactive power provision in hybrid electricity markets with uncertain loads. The Spinner Dolphin Algorithm (SDA) is a new optimization technique and is used here to solve the reactive power problem. This method mimics the strategy used by dolphins in their hunting procedure. Dolphins produce a type of sound [21], used as sonar, to trace the target; in doing so, the dolphin adjusts its sonar according to the target and its position.
Problem Formulation

The optimal power flow problem is treated as a general minimization problem with constraints and can be written mathematically in the following form:

$$\min f(x,u) \quad \text{subject to} \quad g(x,u)=0, \;\; h(x,u)\le 0$$

where f(x,u) is the objective function, and g(x,u) and h(x,u) are, respectively, the sets of equality and inequality constraints. x is the vector of state variables and u is the vector of control variables. The state variables are the load-bus (PQ-bus) voltages and angles, the generator reactive powers and the slack active generator power:

$$x = \left(P_{g1}, \theta_2, \ldots, \theta_N, V_{L1}, \ldots, V_{LNL}, Q_{g1}, \ldots, Q_{gng}\right)^{T}$$

The control variables are the generator bus voltages, the shunt capacitors/reactors and the transformer tap settings:

$$u = \left(V_{g1}, \ldots, V_{gng}, T_1, \ldots, T_{nt}, Q_{c1}, \ldots, Q_{cnc}\right)^{T}$$

where ng, nt and nc are the number of generators, the number of tap transformers and the number of shunt compensators, respectively.

Active Power Loss

The objective of the reactive power dispatch is to minimize the active power loss in the transmission network, which can be described as follows:

$$F_1 = P_{loss} = \sum_{k=1}^{N_{br}} g_k \left(V_i^2 + V_j^2 - 2 V_i V_j \cos\theta_{ij}\right)$$

where g_k is the conductance of the branch between nodes i and j, and N_br is the total number of transmission lines in the power system. P_d is the total active power demand, P_gi is the active power generation of unit i, and P_gslack is the active power generation of the slack bus. (A numerical sketch of this loss computation is given after the Conclusion.)

Voltage Profile Improvement

For minimizing the voltage deviation at PQ buses, the objective function becomes:

$$F_2 = P_{loss} + \omega_v \times VD$$

where ω_v is a weighting factor for the voltage deviation, and VD is the voltage deviation given by:

$$VD = \sum_{i=1}^{N_{PQ}} \left| V_i - 1 \right|$$

Equality Constraint

The equality constraint g(x,u) of the optimal reactive power problem is represented by the power balance equation, where the total power generation must cover the total power demand and the power losses:

$$\sum_{i=1}^{ng} P_{gi} = P_d + P_{loss}$$

This equation is solved by running the Newton-Raphson load flow method, calculating the active power of the slack bus to determine the active power loss.

Inequality Constraints

The inequality constraints h(x,u) reflect the limits on components in the power system as well as the limits created to ensure system security. Upper and lower bounds on the active power of the slack bus and the reactive powers of the generators:

$$P_{gslack}^{min} \le P_{gslack} \le P_{gslack}^{max}, \qquad Q_{gi}^{min} \le Q_{gi} \le Q_{gi}^{max}$$

Upper and lower bounds on the bus voltage magnitudes:

$$V_i^{min} \le V_i \le V_i^{max}, \quad i \in N$$

Upper and lower bounds on the transformer tap ratios:

$$T_i^{min} \le T_i \le T_i^{max}, \quad i \in N_T$$

Upper and lower bounds on the reactive powers of the compensators:

$$Q_c^{min} \le Q_c \le Q_c^{max}, \quad c \in N_C$$

where N is the total number of buses, N_T is the total number of transformers and N_C is the total number of shunt reactive compensators.

Spinner Dolphin in the Natural World

The word "echolocation" was introduced by Griffin [22] to describe the ability of flying bats to locate obstacles and prey by listening to echoes returning from the high-frequency clicks that they emit. The best-studied echolocation among marine mammals is that of the dolphins [23].
A dolphin is able to generate sounds in the form of clicks. The repetition rate of these clicks is higher than that of the sounds used for communication, and it differs between species. As soon as the sound strikes an object, some of the energy of the sound wave is reflected back towards the dolphin. As soon as an echo is received, the dolphin generates another click. The time lapse between click and echo enables the dolphin to appraise the distance from the object, and the varying strength of the signal as it is received on the two sides of the dolphin's head enables it to evaluate the direction. By incessantly emitting clicks and receiving echoes in this way, the dolphin can track objects and home in on them [25]. The clicks are directional and, during echolocation, frequently occur in a short sequence; the click rate increases when the dolphin closes in on an object of interest [24]. Although bats also utilize echolocation, they differ from dolphins in their sonar scheme. Bats use their sonar at short ranges of around 3-4 m, whereas dolphins can sense their targets at ranges of more than a hundred meters. Many bats hunt insects that dart rapidly to and fro, making their movement very different from the escape behaviour of a fish chased by a dolphin. The speed of sound in air is about one-fifth of that in water; thus, the information transfer rate during the sonar transmission of bats is much lower than that of dolphins.

Spinner Dolphin Echolocation Process

Spinner dolphins initially investigate all around the search space to discover the prey. The moment a dolphin approaches the target, the animal restricts its search and incrementally increases its clicks in order to concentrate on the location. The method simulates dolphin echolocation by restraining its exploration relative to the distance from the target. Before starting, the search space should be ordered using the following rule:

Search space ordering: For every variable to be optimized during the procedure, sort the alternatives of the search space in ascending or descending order. If the alternatives include more than one characteristic, perform the ordering according to the most significant one. Using this technique, for variable j, a vector A_j of length LA_j is formed which contains all probable alternatives for the jth variable. Putting these vectors next to each other, as the columns of a matrix, the Matrix Alternatives of size MA × NV is produced, in which MA is max(LA_j), j = 1:NV, with NV being the number of variables. Furthermore, a curve according to which the convergence factor must change during the optimization procedure should be assigned. Here, the convergence factor is the predefined probability PP of choosing the best location, where PP_1 is the convergence factor of the first loop (in which the answers are selected randomly) and Loop_i is the number of the current loop (see the sketch following the Conclusion).

The procedure of the Spinner Dolphin Algorithm (SDA) distributes each location's fitness over the neighbouring alternatives as an accumulative fitness, where AF_(A+k)j is the accumulative fitness of the (A+k)th alternative to be chosen for the jth variable, R_e is the effective radius within which the accumulative fitness of alternative A's neighbours is affected by its fitness, and Fitness(i) is the fitness of location i. It should be added that for alternatives close to the edges (where A+k is not valid; A+k < 0 or A+k > LA_j), the AF is calculated using a reflective characteristic. In order to distribute the probability more evenly in the search space, a small value ε is added to all the arrays as AF = AF + ε.
Here, ε should be selected according to the way the fitness is defined; preferably, it should be less than the minimum value achieved for the fitness. Find the best location of this loop and name it "the best location". Find the alternatives allocated to the variables of the best location, and let their AF be equal to zero; that is:

for j = 1 : Number of variables
  for i = 1 : Number of alternatives
    if i = the best location(j), then AF_ij = 0

For variable j (j = 1 to NV), compute the probability of choosing alternative i (i = 1 to LA_j) according to the following relationship:

$$P_{ij} = \frac{AF_{ij}}{\sum_{i=1}^{LA_j} AF_{ij}}$$

Allocate a probability equal to PP to the alternatives chosen for all variables of the best location, and dedicate the rest of the probability to the other alternatives according to the following formula:

for j = 1 : Number of variables
  for i = 1 : Number of alternatives
    if i = the best location(j), then P_ij = PP
    else P_ij = (1 - PP) · P_ij

Compute the next-step locations according to the probabilities assigned to each alternative. Repeat steps ii-vi as many times as the number of loops.

The Spinner Dolphin Algorithm (SDA) for solving the optimal reactive power problem proceeds as follows:
Step a. Initiate the description of the problem and choose the positions of the dolphins randomly.
Step b. Compute the fitness for every location.
Step c. Compute the accumulative fitness by devoting the calculated fitness to the alternatives chosen for every dimension and their neighbours according to the dolphin rules, and find the best location.
Step d. Assign to the best location a probability equal to the predefined probability value of the current loop, and distribute the rest of the probability among the other alternatives according to the accumulated fitnesses.
Step e. Choose the next-loop locations according to the assigned probabilities.
Step f. If the terminating criterion is reached, stop; otherwise, go to Step b.

Simulation Results

The Spinner Dolphin Algorithm (SDA) was first tested on the standard IEEE 57-bus power system. The reactive power compensation buses are 18, 25 and 53. Buses 2, 3, 6, 8, 9 and 12 are PV buses, and bus 1 is selected as the slack bus. The system variable limits are given in Table 1. The preliminary conditions for the IEEE 57-bus power system are as follows: P_load = 12.120 p.u., Q_load = 3.068 p.u. The total initial generation and power losses are: ΣP_G = 12.474 p.u., ΣQ_G = 3.3166 p.u., P_loss = 0.25870 p.u., Q_loss = -1.2072 p.u. Table 2 shows the various system control variables, i.e. generator bus voltages, shunt capacitances and transformer tap settings, obtained after optimization; all are within the acceptable limits. Table 3 compares the optimum results obtained by the proposed method with other optimization techniques. These results indicate the robustness of the proposed approach in providing a better optimal solution for the IEEE 57-bus system. The SDA was then applied to the IEEE 118-bus system, whose variable limits are given in Table 4, with a change step of 0.01. The statistical comparison of 50 trial runs is listed in Table 5, and the results clearly show the better performance of the proposed Spinner Dolphin Algorithm (SDA) in reducing the real power loss.

Conclusion

In this paper, the Spinner Dolphin Algorithm (SDA) successfully solved the optimal reactive power problem. The novel SDA approach yields excellent results with small computational effort. In order to evaluate the efficiency of the proposed SDA, it was tested on the standard IEEE 57- and 118-bus systems and compared with other published algorithms.
Simulation results show that the Spinner Dolphin Algorithm (SDA) is superior to the other algorithms in reducing the real power loss, and in particular the voltage profiles remain within their limits.
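To make the formulation above concrete, here is a minimal Python sketch, not the authors' implementation. The loss function follows the branch-loss formula given in the problem formulation; the form of the convergence curve (and its power exponent) is an assumption borrowed from the dolphin echolocation literature, since the paper does not reproduce that formula.

```python
import math

def active_power_loss(branches, V, theta):
    """Objective F = sum over branches of g_k*(Vi^2 + Vj^2 - 2*Vi*Vj*cos(theta_i - theta_j)).

    branches: iterable of (i, j, g) tuples with branch conductance g (p.u.)
    V, theta: bus voltage magnitudes (p.u.) and angles (rad), indexed by bus number
    """
    return sum(
        g * (V[i] ** 2 + V[j] ** 2 - 2.0 * V[i] * V[j] * math.cos(theta[i] - theta[j]))
        for i, j, g in branches
    )

def convergence_pp(loop, loops_number, pp1, power=1.0):
    """Assumed convergence curve from the dolphin echolocation literature:
    PP rises from PP1 (near-random search) toward 1 as the loop counter grows."""
    return pp1 + (1.0 - pp1) * (loop ** power - 1.0) / (loops_number ** power - 1.0)

# Tiny two-bus example: one branch with conductance 0.05 p.u.
branches = [(0, 1, 0.05)]
V = [1.0, 0.98]
theta = [0.0, -0.02]
print(active_power_loss(branches, V, theta))                        # small positive loss
print(convergence_pp(1, 100, 0.1), convergence_pp(100, 100, 0.1))   # 0.1 -> 1.0
```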
Integrative geochronology calibrates the Middle and Late Stone Ages of Ethiopia's Afar Rift

Significance

Understanding the evolution, dispersals, behaviors, and ecologies of early African Homo sapiens requires accurate geochronological placement of fossils and artifacts. We introduce open-air occurrences of such remains in sediments of the Middle Awash study area in Ethiopia. We describe the stratigraphic and depositional contexts of our discoveries and demonstrate the effectiveness of recently developed uranium-series dating of ostrich eggshell at validating and bridging across more traditional radioisotopic methods (14C and 40Ar/39Ar). Homo sapiens fossils and associated Middle Stone Age artifacts are placed at >158 and ∼96 ka. Later Stone Age occurrences are dated to ∼21 to 24 ka and ∼31 to 32 ka, firmly dating the upper portion of one of the longest records of human evolution.

The Halibee member of the Upper Dawaitoli Formation of Ethiopia's Middle Awash study area features a wealth of Middle and Later Stone Age (MSA and LSA) paleoanthropological resources in a succession of Pleistocene sediments. We introduce these artifacts and fossils, and determine their chronostratigraphic placement via a combination of established radioisotopic methods and a recently developed dating method applied to ostrich eggshell (OES). We apply the recently developed 230Th/U burial dating of OES to bridge the temporal gap between radiocarbon (14C) and 40Ar/39Ar ages for the MSA and provide 14C ages to constrain the younger LSA archaeology and fauna to ∼24 to 21.4 ka. Paired 14C and 230Th/U burial ages of OES agree at ∼31 ka for an older LSA locality, validating the newer method, and in turn supporting its application to stratigraphically underlying MSA occurrences previously constrained only by a maximum 40Ar/39Ar age. Associated fauna, flora, and Homo sapiens fossils are thereby now fixed between 106 ± 20 ka and 96.4 ± 1.6 ka (all errors 2σ). Additional 40Ar/39Ar results on an underlying tuff refine its age to 158.1 ± 11.0 ka, providing a more precise minimum age for MSA lithic artifacts, fauna, and H. sapiens fossils recovered ∼9 m below it. These results demonstrate how chronological control can be obtained in tectonically active and stratigraphically complex settings to precisely calibrate crucial evidence of technological, environmental, and evolutionary changes during the African Middle and Late Pleistocene.

geochronology | Middle Stone Age | Late Stone Age | Middle Awash | Ethiopia

Accurately dating the emergence of Homo sapiens and associated technologies in Africa is an enduring challenge in geochronology and a persistent source of frustration for paleoanthropologists (1-3). Most of Africa's Middle Stone Age (MSA) lies beyond the ∼50-ka range of 14C dating, and even Later Stone Age (LSA) occurrences often lack associated charcoal or bone suitable for this technique. Furthermore, crucial Eurasian finds originally dated by 14C have recently required large revisions, highlighting the importance of improved sample preparation even for well-established dating methods (4-6). The 40Ar/39Ar dating of potassium-rich Pleistocene volcanic minerals has yielded solid calibrations based on association and correlation (7-12). However, even when present, many such rocks are contaminated by detrital minerals or lack datable fractions, further limiting application of the technique.
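To make the ∼50-ka radiocarbon ceiling mentioned above concrete, here is a back-of-the-envelope decay calculation: a minimal Python sketch assuming the conventional 5,730-year 14C half-life, not part of the study's methods, showing how little 14C survives at such ages.

```python
import math

HALF_LIFE = 5730.0                    # conventional 14C half-life, years
MEAN_LIFE = HALF_LIFE / math.log(2)   # ~8,267 years

def fraction_remaining(age_years):
    """Fraction of the original 14C still present after age_years."""
    return math.exp(-age_years / MEAN_LIFE)

for age in (10_000, 30_000, 50_000):
    print(f"{age:>6} years: {fraction_remaining(age):.4%} of original 14C remains")
# At ~50 ka only ~0.24% remains, so trace contamination easily
# overwhelms the signal; this is the practical dating ceiling.
```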
Efforts to overcome these geochronological barriers have often led to adoption of less-reliable techniques such as electron spin resonance, thermoluminescence, or other trapped-charge techniques to obtain varying age estimates for important fossils of emergent H. sapiens (refs. 1 and 2; ref. 13 contra ref. 14; ref. 15 contra ref. 16). Such approaches, often pursued when more tested and cross-validated methods are unavailable, require in situ measurements, may involve questionable assumptions, and/or yield less-precise ages than many radioisotopic decay-based methods (1,17). Even for more established techniques, methodological developments continue to require revision of earlier chronologies (e.g., ref. 18). Here we integrate the results of detailed stratigraphic and geomorphological field studies with satellite imagery, tephra chemistry, and multiple radioisotopic chronometers including the recently developed 230Th/U burial dating of ostrich eggshell (OES) (19) to calibrate a suite of stratigraphically superimposed fossil and artifact assemblages from the Middle Awash study area, Afar Rift, Ethiopia. This provides the temporal scale against which ongoing and future studies of key fossils and artifacts will be measured and provides a model approach for other occurrences with similar geochronological challenges and resources. We predict that further applications of this
We have recovered hominid fossils from 14 separate time-successive horizons, cataloged 33,803 vertebrate fossils from 431 localities, established 69 excavations within 311 archaeological localities, and procured ∼2,000 geological samples. These resources are now broadly chronologically calibrated across the last 6 My, demonstrating that sedimentation through time in this terrestrial depository was obviously episodic rather than the continuous sequence promoted by the previous project (22). The Middle Awash study area is geographically divided into artifact-and fossil-bearing areas assigned local Afar names. Localities within them are abbreviated with three-letter prefixes (e.g., HAL-A1). Thus, the term "Halibee" applies equally to a village, a wadi, and to the Halibee member geological unit (23,24). Middle Awash deposits contain an array of archaeological occurrences ranging from Oldowan to LSA, but research has historically focused on rich and extensive Acheulean occurrences (21). Establishment of chronostratigraphic and spatial control over the area's MSA and LSA resources proved more difficult, requiring extensive field and laboratory research and methodological advancements of the last two decades. For example, the area's best-known MSA assemblages were first recorded in the 1970s and became the focus of intense work in the Aduma area during the 1990s (3,21,25). Precise geochronological placement proved elusive because various methods were only able to calibrate the younger Aduma MSA at roughly 80 to 100 ka, whereas the underlying Aduma MSA assemblages (ADU-A1) were not dated. The LSA of the Middle Awash was first recorded in "Holocene" beds near the Namey Koma hills in the headwaters of the Messalou wadi, a tributary of the Awash River ( Fig. 1) (21). Clark et al. (26) confirmed LSA occurrences in the Oulen Dorwa (OUD) basin in stratified sections that remained undated until now. Results presented here provide ages for the Middle Awash MSA and LSA that reveal occupation of the Afar Rift floor before and during the Last Glacial Maximum (∼26 to 19 ka) (27), contemporaneous with human occupation recently documented from southwestern Ethiopia (28) to central Tanzania (29). As a result of our ongoing research, MSA and LSA occurrences have now been documented as widely distributed across the Middle Awash study area. The richest, least disturbed, and most spatially and stratigraphically extensive of them lie within a deep stratified section best exposed in largely unvegetated erosional topographies in catchments of the Kada and Ounda Halibee western tributaries of the Awash River (Fig. 1). The LSA is embedded near the top of the stratigraphic succession in sediments that overlie strata containing the progressively older MSA and Acheulean assemblages (SI Appendix, Figs. S1-S7). Adjacent, penecontemporaneous deposits north of the Middle Awash study area present analogous challenges to calibrating their paleoanthropological resources (30,31). These include limitations of radioisotopic dating, rapid facies changes, and syndepositional faulting in a dynamic geomorphic setting. Careful establishment of faults, slumps, inverted channels, erosional features, useful marker horizons, and volcanic strata related to fossils and artifacts are keys to successful stratigraphic placement, and this work is underpinned by strong field logistics and iterative laboratory studies. 
In keeping with the Middle Awash project's long-term scientific, development, and resource management goals (32), the challenges of creating an adequate chronostratigraphic framework for the Pleistocene artifacts and fossils were met by the integrative approach to chronostratigraphy presented here.

Results

Dawaitoli Formation. The steep gradient of the modern Awash River north of its Messalou confluence (Fig. 1) results in active downcutting, a northerly narrowing riverine forest and floodplain, and active headward erosion into the primarily fluvial sediments comprising the flanking Plio-Pleistocene Dawaitoli Formation (24). Today, the main Awash tributaries (Messalou on the east and Wallia, Halibee, and Talalak on the west; Fig. 1) further expose extensive sediments for a north-to-south distance of ∼25 km in the northern sector of the Middle Awash study area. The Dawaitoli Formation's youngest strata lie ∼90 m above and west of the modern Awash River in the Halibee area. The overall succession dips slightly to the west, its sediments ranging from Holocene to Pleistocene, sampling the last ∼700 ka. Understanding the complex and dynamic erosional/depositional interfaces that operated here during the Pleistocene was key to the Halibee chronological framework presented below. Fluvial deposition and erosion through time were largely controlled by the base level of downstream depocenters as the northward-flowing axial Awash River was tectonically dropped and/or volcanically dammed by the ongoing extension-related tectono-volcanic activity of the Afar Rift. These erratic tectonic and geomorphological conditions pertain even today in the Middle Awash, with tributaries such as the Kada Halibee seasonal stream featuring rapid headward erosion that broadly exposes the Dawaitoli Formation's sediment stack. In contrast, lower gradient stretches of the Awash River in the southern sector of the Middle Awash exhibit broad floodplains, lakes, swamps, and riverine forest and currently feature active sedimentation (Fig. 1). The utility of these modern analog settings in the inference of Pleistocene depositional and occupational conditions is discussed below. Artifacts and fossils of the Dawaitoli Formation were embedded in this dynamic setting in which sedimentation was influenced by abrupt tectonic and consequent geomorphological changes. The result was a time-transgressive, shifting interface between erosion and deposition, all with climatic variation proceeding independently. Modern erosion exposes the Formation's deposits on a broad scale (Fig. 1). The current lack of vegetation cover allows excellent visualization and even remote tracing of stratigraphically superimposed lithological units via satellite imagery. Combining these data with ground-truth investigation has revealed recurrent cut-and-fill features, including inverted paleotopography evidenced by inverted fluvial channels that parallel those documented for Utah, Egypt, Australia, and China (see refs. 33 and 34 for reviews) and even Mars (35,36) (Fig. 1 and SI Appendix, Figs. S8-S10).

Halibee Member Stratigraphy. The MSA and LSA occurrences described below derive from the Halibee geological member, informally recognized here, and were deposited during the late Middle and Late Pleistocene. The member is informally divided (from the base upward) into four sedimentary beds: Chai Baro (MSA), Faro Daba (MSA), Wallia (LSA), and OUD (LSA) (Fig. 2).
Outcrops of the first three beds lie in the Halibee (HAL) region, ∼15 km west of counterparts in the small OUD sedimentary basin perched east of Namey Koma in the headwaters of the Messalou wadi (Fig. 1). Halibee member sediments are primarily associated with the Pleistocene paleo-Awash River and its tributaries. Sediments range from fine-grained silts and clays to sands, interbedded tephra, and massive cobble conglomerates derived from the steep Afar Rift escarpment and shoulder to the west. Lithological heterogeneity reflects intense tectonic and fluvial interactions near the western rift margin, as with the penecontemporaneous upper Busidima Formation exposed in the Hadar and Gona study areas to the north (30,31). A widespread, usually calcite-cemented, variably indurated, thick (∼3.0 to 5.0 m) cobble conglomerate (the Didamela Cobble Conglomerate, DMCC) (SI Appendix, Figs. S6 and S7) at the base of the member is a valuable marker horizon. It floors the Chai Baro beds and is traceable for ∼8.5 km north to south at roughly the 565 m (southern) to 575 m (northern) elevation. Sourced from the adjacent western rift escarpment and emplaced by high-energy fluvial conditions in a marginal half-graben, the emplacement mechanisms for such thick, laterally extensive conglomerates remain contentious (30,37,38). However, their resistance to erosion and often >10 km lateral traceability make some of them useful stratigraphic markers. Lower depositional energy sediments were abruptly emplaced upon the DMCC. The lowest ∼5 m of the Chai Baro beds' silts and silty clays are richly fossiliferous and contain MSA artifacts. Superimposed on these are ∼10 to 15 m of mostly sterile silts, silty clays, and subordinate fine sands, followed by a crucial tephra marker horizon here named the Didale Glass Shard Tuff (DGST). Now precisely dated (below), the DGST is widely present and traceable across the Halibee region. It also serves as a key to understanding the extensive cut-and-fill depositional regime in which the overlying Halibee member MSA and LSA assemblages were emplaced.

[Fig. 2 caption: Inset imagery is panchromatic and multispectral Worldview-2 imagery captured normal to the Faro Daba landscape. Yellow arrows mark the courses of sinuous Pleistocene paleochannels currently topographically inverting due to local erosion. Fossiliferous ∼100-ka Faro Daba beds were emplaced north and east of these channels, directly atop darker surfaces of the DMCC that had been locally reexposed by paleo-erosion that removed the older Chai Baro beds. The ∼158-ka DGST is the most reflective sediment in the lower right corner of the insets. See SI Appendix for details and illustrative ground-truth photographs.]

Following additional relatively low-energy deposition, paleo-erosion locally removed substantial thicknesses of the Chai Baro beds across a wide swath of the Halibee region. This erosional downcutting sometimes reached and reexposed the resistant DMCC and is today best evidenced as inverted channels (SI Appendix, Figs. S8-S11). Such channeling contributed to the undulating paleotopography atop the DMCC. In an area near the Faro Daba village, one such channel is obvious in satellite imagery as a meandering feature that laterally truncates the DGST and contains large rip-up clasts of DGST in its bed load (Fig. 1 and SI Appendix, Fig. S8). Today these channels are emergent in eroding landscapes of the Halibee member because modern erosion into softer flanking silts leaves the channels topographically inverted.
Such paleochannels are today recognizable in the field and satellite imagery by their sand-to-gravel lithologies, and because they now provide better anchorage/drainage for modern Acacia bushes than do the soft silty sediments more rapidly eroding from their flanks (Fig. 1, Insets). After the local erosional removal of Chai Baro sediments, the base level for the local paleo-Awash catchment was again relatively elevated. Deposition in the Halibee region then commenced as the Faro Daba beds were emplaced abruptly and unconformably across the undulating upper surface of the DMCC in places where Chai Baro sediments had been removed. The low-energy sedimentation of the Faro Daba beds' fine-grained overbank and floodplain silts first filled low topography atop the DMCC and eventually submerged it. Abundant fossil wood, burned tree trunks, and carbonate rhizoliths evince relatively dense vegetation spatially and stratigraphically tightly associated with vertebrate fossils and MSA lithics in the several meters of silts and silty clays atop the DMCC. These basal Faro Daba sediments are overlain by a basaltic tuff and a succession of up to ∼15 to 20 m of predominantly silty clays and some fine sands. Atop these are the silty clays, silts, and sands of the now-dated LSA-bearing Wallia beds. The younger, uppermost Halibee member's OUD beds east of the Awash also bear newly dated LSA occurrences. Below we introduce sedimentological and paleoanthropological contexts and provide tephrochemical and radioisotopic results that constrain the four time-successive sets of paleoanthropological occurrences in the Halibee member. Halibee's Chai Baro Beds. The richest fossil-bearing deposits in the Chai Baro beds are exposed sinuously and nearly continuously in the Kada and Ounda Halibee catchments over a north-to-south distance of ∼8 km. They are recognized by their silt, fine sand, carbonate components, and buff color that contrasts with the more clay-rich, largely sterile dark brown silty clays with sand lenses that succeed them. The aforementioned DGST is the most prominent and widespread tuff in the Halibee member. Ranging from 0.5 to 1.6 m thick, it is identifiable by its fresh, coarse cuspate and bubble-wall glass shards and occasional pumice clasts. Exposures vary from apparent primary fall deposits to those that have undergone minor reworking. Frequently overlying the DGST is a 5- to 30-cm-thick, dark gray, very fine-grained, bedded and unconsolidated vitric ash, here referred to as the Bartikimber Vitric Tuff (BRVT). Unlike the DGST, which has compositionally uniform glass, the BRVT glass preserves a wide linear compositional primary array with secondary compositionally distinct clusters. A compositionally bimodal mafic-felsic tuff is intermittently exposed above the DGST, and less than a meter above the BRVT when present (Fig. 2 and SI Appendix, Figs. S21 and S22, and Table S1). The DGST was the first Halibee member tephra to yield ⁴⁰Ar/³⁹Ar results, an age of 148 ± 34 ka (all errors 2σ or 95% CI, unless otherwise stated) (11). Given the large uncertainty, we resampled the DGST in 2015 (sample MA15-07) from a primary, coarse-grained, fining-upward airfall deposit hosting primary anorthoclase, pumice, and abundant glass shards. Secondary minerals (calcite, anhydrite, and gypsum) were present, as were clays and minor lithics that likely washed or settled in over time. Two size fractions of anorthoclase crystals revealed a population of xenocrystic grains concentrated in the coarse fraction (Fig. 3).
Of 44 grains from MA15-07, 32 juvenile grains were combined to yield an inverse isochron age of 159.4 ± 11.6 ka (SI Appendix). Whereas the previously published age results from the DGST are less precise than those presented here, we feel that accuracy in such cases is maximized by inclusion of all valid measurements concordant within uncertainty. Thus, we combine all valid data to yield a single weighted mean age, using the inverse variance as the weighting factor, as is standard practice when combining results with disparate precision. Combined with the previous results, the weighted mean age of the DGST is now 158.1 ± 11.0 ka. This provides a minimum age for underlying artifacts and fossils in the Chai Baro beds and a maximum age for those in the Faro Daba beds. The Chai Baro H. sapiens and other vertebrate fossils and MSA artifacts in locality HAL-VP-5 that lie ∼9 m below the DGST are therefore most probably substantially older than the Upper Herto occurrences ∼60 km to the south (10) because the intervening sediment (Fig. 2 and SI Appendix, Fig. S2) is predominantly dark brown silty clays associated with slow or standing water deposition. How much older than Herto remains an open question because more precise chronological placement of Chai Baro has proven elusive given the lack of associated tephra or OES. The use of stratigraphic thicknesses to estimate age in tectonically active and fluviatile zones featuring widely and rapidly varying sedimentation rates is common in modern paleoanthropology despite known challenges in such geologic settings, with only rare, extremely well-dated exceptions (39). We follow ref. 40 in considering simple thickness-based approaches to be ill-founded and potentially misleading in most terrestrial applications. We therefore await further field acquisition of dateable samples to resolve the antiquity of the fossiliferous basal part of the Chai Baro beds. Abundant vertebrate fossils have been collected from the base of the Chai Baro beds. Lithic artifacts are less abundant than those seen in the younger Faro Daba beds. Excavations have not yet been conducted, pending improved geochronological placement. However, field observations and limited controlled surface collection of one occurrence during paleontological extraction allow initial characterization of the Chai Baro MSA. It shares virtually all technological and raw material attributes with the overlying and younger Faro Daba MSA. Chai Baro flaked stone artifacts include Levallois cores, flakes, and points; retouched points; scrapers; and very large heavy-duty tools (sharpened cobbles). The wide variety of raw material includes lavas, obsidian, and siliceous rocks. Although exposed across fewer square kilometers than the Faro Daba MSA, conjoining lithic sets eroding from fine-grained, rhizolith-rich silts of the Chai Baro beds indicate primary depositional contexts. Available excavatable occurrences are predicted to provide excellent integrity and resolution, sensu ref. 41 (SI Appendix, Fig. S12). Fossils from the Chai Baro beds are often well-preserved and sample a rich and diverse terrestrial fauna. Mammalian size range spans from ubiquitous and abundant small mammals up to rhinocerotids, including diverse bovids as well as numerous Papio and cercopithecin individuals (SI Appendix, Fig. S12). The abundant primates include four hominid specimens: a dentition, a femoral fragment, and two partial crania that likely predate the ∼160-ka Herto crania.
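For readers wanting to reproduce the combination step described above, the following minimal Python sketch applies the inverse-variance weighting to the two published summary ages. The paper pools the individual grain measurements rather than the summary values, so this sketch recovers the reported 158.1 ± 11.0 ka result only to within rounding.

```python
import numpy as np

# Inverse-variance weighted mean, as described for combining the DGST
# 40Ar/39Ar results. Inputs are the two published summary ages with their
# 2-sigma errors (MA09-04 and MA15-07).
ages = np.array([148.0, 159.4])   # ka
errors = np.array([34.0, 11.6])   # 2-sigma uncertainties, ka

weights = 1.0 / errors**2
mean = np.sum(weights * ages) / np.sum(weights)
err = np.sqrt(1.0 / np.sum(weights))  # 2-sigma error of the weighted mean

print(f"weighted mean age: {mean:.1f} ± {err:.1f} ka")
# -> ~158.2 ± 11.0 ka, consistent with the reported 158.1 ± 11.0 ka
```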
Halibee's Faro Daba Beds. The MSA-bearing deposits of the Faro Daba beds were first erroneously called "Issie" (21); incorrect spatial placements on published maps also misidentified the HAL and Wallia tributaries, geographic errors repeated by Negash et al. (42). The Faro Daba beds outcrop most widely atop the undulating platform of the DMCC in the area west of the confluence of the Kada Halibee tributary and the Awash River (Fig. 1). In contrast to the Chai Baro MSA exposed further north, and by fortuitous geomorphological circumstances, Faro Daba's fossiliferous overbank silts and silty clays are widely and horizontally exposed atop the resistant DMCC platform. Geomorphologically, the indurated cobble conglomerate serves as a physical shelf that protects the soft, immediately overlying Faro Daba beds from headward erosion. This has resulted in a low rolling topography that provides uniquely wide windows into the paleolandscape because of the abundant embedded fossils and artifacts concentrated in these relatively soft sediments. The Faro Daba fossil-bearing sediments across this ∼1.5-km² surface contain abundant large rhizoliths at their basal contact with the DMCC and a prominent basaltic tuff marker horizon that usually lies ∼1 to 5 m above the DMCC top (Fig. 2). The paleoanthropological resources of the Faro Daba beds are concentrated above and below this Afcaro Basaltic Tuff (AFBT). It typically presents as a dark gray to black, unconsolidated to moderately indurated, 5- to 150-cm-thick, fine-sand-sized scoriaceous deposit that locally shows evidence of reworking and overthickening in paleochannels. Based on the analysis of five separate samples, the AFBT displays one dominant and one subordinate basaltic glass compositional mode (SI Appendix, Figs. S21 and S22, and Table S1). Abundant fossil wood, burned tree trunks, and rhizoliths evince a biotically rich and temporally brief interval during which the included MSA artifacts and vertebrate fossils were deposited. As with the older occurrences of the Chai Baro beds, the Faro Daba paleoanthropological resources are concentrated in these lower silts and silty clays of the Faro Daba beds, followed by largely unfossiliferous overlying darker silty clays. Prior work utilized ⁴⁰Ar/³⁹Ar and ¹⁴C dating in a series of attempts to constrain the ages of the Faro Daba beds. The Faro Daba remains were emplaced after local erosional removal of the sub-DGST Chai Baro package, and thereby postdate the maximum ⁴⁰Ar/³⁹Ar age of 158.1 ± 11.0 ka for the DGST. Attempts to date the AFBT and the artifacts and fossils interbedded with it have repeatedly failed due to low K concentrations combined with extremely high atmospheric ⁴⁰Ar concentrations (11). A minimum age of >54 ka (accelerator mass spectrometry ¹⁴C-dead) was obtained from charcoal in an in situ burned tree stump sample (SI Appendix, Fig. S7C) overlying the AFBT by ∼4 m (11). This result helped to confirm the stratigraphic relationships and erosional processes described above, but tighter age constraints were required. Our original application of ⁴⁰Ar/³⁹Ar to obsidian debitage provided maximum ages for fossils and artifacts at Herto (10), inspiring parallel work on the Faro Daba MSA. We first subjected 21 surface-collected, DGPS (differential Global Positioning System)-controlled obsidian debitage samples to combined X-ray fluorescence analysis, electron probe microanalysis, and ⁴⁰Ar/³⁹Ar age determinations.
A chemically and chronologically distinct juvenile population dating to 106 ± 20 ka was identified, thereby constraining the extrusion age of the obsidian and providing a maximum age for the Faro Daba MSA (11). Building on these results, in 2015 we sampled additional diagnostically MSA surface and in situ obsidian artifacts from the Faro Daba MSA-bearing outcrops to assess their chemical compositions and link them with the previously dated obsidians. Of the 17 additional pieces presented here, all were either techno-typologically diagnostically MSA or recovered in situ. Of the 17 pieces, 3 demonstrated major/minor oxide and trace element concentrations that geochemically match the ∼106-ka-dated obsidian (SI Appendix, Fig. S27): two bifacial points from HAL-A2 and a retouched flake from the archaeological excavation at HAL-A25 (both Faro Daba beds) (Dataset S1). These obsidians derive from the unknown geological source that was represented by three pieces (MA04-28K, 28O, and 28P) in ref. 11, later characterized as Type 10 in ref. 42. These results exclude the possibility of contamination and conclusively link the ∼106-ka obsidian dates to in situ Faro Daba artifact assemblages, establishing this obsidian extrusion age as the maximum age limit for them. However, the Faro Daba beds' upper age limit remained constrained only by the infinite ¹⁴C age and was therefore insufficiently precise for placing the Faro Daba remains. Acquiring a precise minimum age for the Faro Daba MSA required the application of the recently developed ²³⁰Th/U OES burial dating technique (19). We recovered OES fragments (MA15-09) from ∼5 m above the AFBT (Fig. 2). Numerous OES fragments were collected at a single location, eroding from within a medium to light grayish brown silty clay, likely from a single egg or a larger eggshell fragment, parts of which were excavated in situ. Five fragments were analyzed using laser ablation inductively coupled plasma mass spectrometry (ICP-MS) to characterize the distribution of U and Th in the OES fragments and by solution multicollection ICP-MS to produce ²³⁰Th/U ages on two subsamples of each OES fragment, yielding ten ²³⁰Th/U ages ranging from ∼97 to 91 ka (SI Appendix). Three samples produced burial ages consistent with a single-stage model of U uptake upon burial (19), yielding a weighted mean age of 96.4 ± 1.6 ka (Fig. 3). Good agreement of the ²³⁰Th/U ages among multiple OES fragments indicates that their mean ²³⁰Th/U age provides a firm minimum age for their hosting stratum. Indeed, close agreement between ²³⁰Th/U burial ages and ¹⁴C ages of younger eggshells observed in this study (see Halibee's Wallia Beds) and a prior one (19) shows that OES ²³⁰Th/U ages may closely date their host strata. Collectively, these relations, along with preservation of stratigraphic order between the mean OES age and the mean age of the underlying obsidian, indicate that MSA artifacts and fossils from the Faro Daba beds are now constrained to between ∼96 ka and ∼106 ka. The lower ∼5 to 10 m of the Faro Daba beds comprise richly fossiliferous silty clays immediately below and above the AFBT. Lithic artifacts accompany the fauna, most concentrated in the average ∼5-m interval between the tuff and the top of the underlying DMCC.
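The resulting age bracket can be checked with a few lines of arithmetic. The sketch below simply verifies that the two independent chronometers preserve stratigraphic order within their combined 2σ uncertainty; it mirrors the reasoning in the text rather than any published algorithm.

```python
# The Faro Daba MSA is bracketed by a maximum age (obsidian extrusion,
# 106 ± 20 ka) and a minimum age (OES burial, 96.4 ± 1.6 ka), both 2-sigma.
max_age, max_err = 106.0, 20.0  # ka, obsidian (older bound)
min_age, min_err = 96.4, 1.6    # ka, OES (younger bound)

diff = max_age - min_age                     # expected to be >= 0
diff_err = (max_err**2 + min_err**2) ** 0.5  # uncertainties in quadrature

consistent = diff >= -diff_err  # order holds within combined 2-sigma
print(f"bracket: ~{min_age}-{max_age} ka; difference = {diff:.1f} ± "
      f"{diff_err:.1f} ka; stratigraphic order preserved: {consistent}")
```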
Archaeological research began with reconnaissance transects, followed by collection of artifacts derived from sieving operations conducted during fossil recovery, DGPS-controlled surface collections, and, finally, 238 m² across five excavations. Surface and excavated artifacts from the Faro Daba beds closely match one another, with Levallois cores and products that include points (SI Appendix, Figs. S13-S15), technological blades, scrapers, perforators, and other retouched tool types, as well as heavy-duty tools that include core axes and pick forms. Conjoining sets of lithics, often involving many flakes and small debitage, are often found freshly exposed atop the soft eroding silts. Flakes account for more than half of the total artifact count from the excavations. The pooled lithic assemblage is obviously part of the MSA technocomplex in all aspects, including a wide range of raw materials, mostly volcanic rocks probably procured locally from readily accessible clasts in the DMCC, along with imported obsidians and siliceous rocks. Both surface and subsurface artifacts are usually in direct association with faunal remains, primarily rodents and bovids. A large assemblage of fossil vertebrates has been collected from three major paleontological localities representing the Faro Daba beds (SI Appendix, Fig. S16). Because of the striking abundance, integrity, and resolution of the archaeological occurrences sandwiched between the top of the DMCC and the base of the AFBT, initial collection of fauna was restricted to fossils found above the tuff. As in the older Chai Baro silts, the Faro Daba fauna is taxonomically diverse, but proportionally distinct. Rodents are common in the collections, and the bovids range up to eland (Taurotragus) size. Primates are remarkably abundant, and many partial skeletons of the dominant cercopithecines and colobines were recovered. Unlike the Chai Baro assemblages, papionins are absent, perhaps indicating Faro Daba's closer proximity to a riverine forest. Included among the primates are 13 cataloged H. sapiens specimens comprising a mix of postcranial and craniodental elements. The most complete of these is a partial skeleton, with mandible and cranial vault, of a very large and robust adult, presumed male individual that eroded from sediments above the AFBT but below the dated OES (sample MA15-09) horizon. Halibee's Wallia Beds. The Wallia beds comprise the uppermost sediments in the Kada Halibee catchment. Erosion and absence of vegetation allow aerial and satellite imagery to be used as effective mapping tools by which the extent of the Wallia beds can be readily and accurately assessed at large scale (Fig. 1). To date, the Wallia beds are the least investigated Dawaitoli Formation deposits. Limited foot transects across these extensive outcrops included observation and photography of stone tools and fossils eroding from deposits associated with two sampled tephra described below. The Wallia beds are exposed from at least the southern end of the Wallia catchment northward to the Talalak river, a distance of ∼20 km. Combining our preliminary surveys, tephrochronological results, and imagery, we estimate that many years of intensive foot survey will be required for adequate paleoanthropological survey coverage of these deposits across the ∼50-km² exposures. Such reconnaissance is predicted to identify dozens (if not hundreds) of LSA occurrences.
Imagery also indicates that these sediments (or their chronological and geomorphological equivalents) stretch far north of the Talalak, to near the Gona wadi, where they also include LSA occurrences (30). Here we briefly introduce the Halibee Wallia beds and their contents. The LSA-associated fossils and artifacts in the Wallia beds unconformably overlie the Faro Daba MSA and are embedded in light gray to reddish brown silts and medium brown silty clays measuring a total thickness of ∼8 m. These sediments are interbedded with two tephra above a pebble sandstone/conglomerate eroded into underlying dark silts and clays. These tephra are fine-grained, partially reworked, crystal-poor, gray vitric tuffs ranging from 5 to 60 cm in thickness. The lower tuff (MA15-11) was not analyzed, and the upper tuff is represented by three samples spanning a ∼5.5-km north-to-south transect that includes MA15-10. This tuff, the Seegeri Vitric Tuff (SEVT), is compositionally uniform and readily distinguishable from other Halibee member felsic tephra. An additional tuff that is represented by a single isolated sample (MA04-16) has not yet been related to the others but represents a third tephra occurrence in the Wallia beds. Fragments of a whole, in situ, collapsed ostrich egg (MA15-12; Fig. 2) were extracted from gray, bedded to partially laminated silty clay ∼2 to 3 m above sediments hosting LSA archaeology at locality HAL-A27 in the Wallia beds. Vitric tuff MA15-11 is present between this OES and the archaeology and also outcrops at HAL-A26, 1.1 m beneath the SEVT (MA15-10) and atop an analogous LSA assemblage. [Fig. 3 caption: Inverse isochron for anorthoclase grains from MA15-07 after excluding xenocrysts; the final weighted mean age of the DGST (inset age) includes both data from MA15-07 (this study) and from MA09-04 (11). For details, see SI Appendix and Dataset S1.] This OES find provided an opportunity to further evaluate the fidelity of ²³⁰Th/U burial dating by comparing ¹⁴C and ²³⁰Th/U burial ages determined on the same eggshell (Fig. 3 and SI Appendix). The ¹⁴C age dates eggshell mineralization, and the ²³⁰Th/U burial age dates initial contact of the eggshell with U-bearing soil water. Overlap between the two results indicates OES burial and U uptake ensued shortly after eggshell mineralization, i.e., within the analytical uncertainties of the two techniques. The Wallia LSA is therefore dated to ∼31 ka based on the mutually consistent ¹⁴C and ²³⁰Th/U burial OES ages. The two archaeological localities thus far established in the Wallia beds were created to document the presence of LSA lithics associated with the geological samples described above. Few to no vertebrate fossils have been found associated with the abundant evidence of flaked and ground stone technology belonging to the LSA technocomplex (SI Appendix, Figs. S17-S19). Obsidian is the single dominant raw material type, with siliceous rocks represented by only a few pieces. Blades dominate the assemblages, but bladelets and convergent and oval-shaped flakes are also present. Sizes of flakes and blades are generally much larger than in the younger OUD LSA assemblage (below). Crested blades attest to sequential blade removal from prismatic/pyramidal cores, and some blades and Levallois products were retouched into end scrapers and points. Ground stone implements include hand stones and fragments of lower grindstones.
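The concordance argument above reduces to an interval-overlap test. Because the specific age values for MA15-12 are not preserved in this text, the numbers in the sketch below are placeholders near the reported ∼31 ka; only the overlap logic is the point.

```python
# Concordance check between two chronometers on the same eggshell, as done
# for the Wallia OES (MA15-12). Ages and errors below are placeholders.
c14_age, c14_err = 31.0, 0.4  # cal ka BP, 2-sigma (placeholder)
thu_age, thu_err = 31.3, 0.6  # ka, 2-sigma (placeholder)

lo = max(c14_age - c14_err, thu_age - thu_err)
hi = min(c14_age + c14_err, thu_age + thu_err)

if lo <= hi:
    print(f"2σ intervals overlap over {lo:.1f}-{hi:.1f} ka: ages concordant")
else:
    print("2σ intervals do not overlap: ages discordant")
```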
Messalou's OUD LSA. The OUD area east of the modern Awash River (Fig. 1) contains a set of LSA occupations whose importance was recognized in the mid-1970s (21). Additional reconnaissance, excavation, and sampling of Locality OUD-A1 in 2018 led to the establishment of a local measured section (Fig. 2) that includes two tuffs and archaeological remains, including OES and a variety of flaked and ground stone tools attributable to the LSA. The OUD beds comprise interbedded fluvial and lacustrine basin-fill tuffaceous sediments deposited unconformably upon eroded Pliocene basement rocks. These filled the base of a small basin created by basalts forming the Namey Koma volcanic edifice in the uppermost catchment of the Messalou drainage east of the modern Awash (Fig. 1). The LSA-bearing light brownish gray tuffaceous silty clay beds are massive, variably consolidated, and represent laterally continuous, gently sloping erosional surfaces with abundant artifacts, unlike the ledge-forming, reworked tuff and unconsolidated gray silt below. The uppermost light brown silty clay contains no stone tools. The tuffaceous lake margin sediments bearing densely concentrated LSA artifacts at OUD-A1 are intercalated with two unconsolidated, laterally discontinuous, fine- to coarse-grained reworked vitric tuffs (MA18-11 and 18-13; Fig. 2). These tuffs are ∼30 cm thick and separated by ∼1.7 m. Our hypothesis that these might correlate with the two Wallia beds tephra described above was falsified by laboratory analysis showing the glass components to be compositionally distinct and to preserve abbreviated linear compositional arrays (SI Appendix, Table S2 and Figs. S21 and S22). MA18-13 is also multimodal, preserving a minor mode compositionally equivalent to the underlying MA18-11 tuff, indicating incorporation of MA18-11 shards during deposition/reworking (Dataset S1). OES fragments from the archaeologically rich silty clay layer were surface-collected and ¹⁴C-dated to between 23.9 and 21.4 cal ka BP, thus ∼10 ka younger than the Wallia LSA (Fig. 3 and Dataset S1). We intensively surveyed and then performed DGPS-controlled collection of diagnostic surface artifacts at OUD-A1, followed by a 35-m² excavation. Bone was absent from both, but OES was well-preserved. Lithic raw material quality and diversity are high, including siliceous rocks, obsidian, and fine-grained basalt extracted from cobbles and boulders exposed in adjacent slopes of older basement sediments. The assemblage is microlith-dominated. Assemblage analysis is currently underway, but field observations allow introductory characterization of the assemblage (SI Appendix, Figs. S18 and S19). Cores are mostly single- (and less commonly opposed-) platform, usually pyramidal, and less frequently prismatic. Most flakes are laminar, in the bladelet category. Platform preparation was by core-tablet removals, usually on elongate pebbles/nodules/blocks/blanks that are locally available in the adjacent basement sediments. Simple (unretouched) microliths were the primary targets, although some abrupt backing was applied. True geometric forms are absent, but perforators are present. Ground stone artifacts include hand stones and lower grindstones, the latter mostly fragmentary. The presence of OES beads and preforms among both surface-exposed and excavated materials indicates the local manufacture of these items.
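Returning to the tephra correlation test above: a common way to quantify this kind of glass-composition comparison in tephrochronology is the similarity coefficient of Borchardt and colleagues, sketched below with hypothetical oxide values. The project's actual test relied on the analyses reported in SI Appendix, Table S2.

```python
import numpy as np

# Similarity coefficient (after Borchardt and colleagues), widely used in
# tephrochronology to compare glass major-element compositions. The oxide
# values below are hypothetical.
oxides = ["SiO2", "TiO2", "Al2O3", "FeO", "CaO", "K2O"]
tuff_a = np.array([72.5, 0.30, 12.1, 3.1, 0.9, 4.4])  # wt%, hypothetical
tuff_b = np.array([70.8, 0.45, 12.6, 3.8, 1.3, 4.1])  # wt%, hypothetical

sc = np.mean(np.minimum(tuff_a, tuff_b) / np.maximum(tuff_a, tuff_b))
print(f"similarity coefficient: {sc:.3f}")
# Values near 1.0 suggest a correlation; compositionally distinct tuffs,
# like the OUD and Wallia tephra compared above, fall well short of that.
```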
Discussion Establishment of the chronostratigraphic framework presented here is of pivotal importance because accurate and precise temporal placement is foundational to solving many research questions involving the origin and evolution of African H. sapiens. The production of such knowledge will require the effective and precise coupling of recently acquired lake sediment cores with both adjacent and distant geological, archaeological, and paleontological datasets (20). The Halibee member's sedimentary record contains paleoanthropological resources emplaced episodically within a single depository over ∼200 ka. Its dense and laterally extensive MSA and LSA archaeology and associated faunal occurrences are now emerging across large outcrops that allow sampling of biological and cultural landscapes at ages and scales rarely represented even in eastern Africa. The open-air conditions under which these remains were deposited are distinct from many contemporaneous occurrences recorded in caves or rock shelters. In situ collections of archaeological and fossil material are valuable for research questions that require high-precision spatial and microstratigraphic data. However, such precise and accurate placements from limited excavations into large open-air occurrences often generate inadequate samples with which to test various hypotheses about evolution, innovation, and adaptations (17). This is where exploration followed by controlled surface collection of specimens "in stratigraphic context" is a valuable approach (see ref. 45). For example, by combining surface transects at broad scale, surface collection of assemblages at different spatial and stratigraphic resolutions, and then targeted excavations, our initial findings in the OUD beds illustrate the power of this strategy. There, the remarkable coherence of independently dated samples from a single actively deflating stratum provides a precise age constraint on a substantial collection of artifacts. The LSA there is now firmly placed chronostratigraphically and can be characterized typo-technologically. However, understanding the geological and ecological contexts in which this assemblage of artifacts or others like it accumulated is crucial. It is here that the utility of actualistic data becomes apparent. Fluvial processes and fluviatile ecologies associated with the modern Awash River provide important frameworks for interpreting Halibee's Pleistocene sediments, flora, and fauna (46). For example, recurrent deposition of primarily silty sediments embedding the stratigraphically restricted fossil and artifact occurrences atop cemented, topographically undulating cobble and pebble conglomerates of the Halibee member indicates seasonal overbank fluvial sedimentation initiated by a relatively elevated base level. Inverted channels with flow directions similar to modern drainage at Halibee further reveal the Pleistocene landscapes during LSA and MSA times. The Faro Daba beds provide additional evidence from which the MSA landscape may be inferred. Abundant rhizoliths and burned tree root/trunk systems in silty overbank sediments indicate water sufficient to support dense vegetation in proximity to the paleo-Awash and its tributaries here at ∼100 ka. The cemented underlying DMCC cobble conglomerate would have impeded drainage after seasonal flooding, creating shrinking seasonal ponds flanking the paleo-Awash River and attracting a wide variety of fauna. Analogs are observed in the modern Awash River's annual flood cycle. Pleistocene and Holocene wadi paleochannels now emergent as sinuous inverted sands and pebbles across the Halibee succession represent tributaries that once drained the catchment west of the Halibee area depository, as they do today.
These wadis would have provided humans and fauna solid footing and therefore natural corridors to ephemeral ponds and marshes, and thereby to the more permanent water, shade, lithic raw material, and biotic resources of the larger paleo-Awash river to the immediate east. After the richly fossiliferous and artifact-bearing Pleistocene Halibee bed silts representing the seasonally flooded paleolandscape were deposited, these tributary channels, both active and inverted, were eventually submerged by progressive sedimentation in expanding seasonal swamps and marshes, with annual sedimentation leveling the topography. The sterility of the overlying finer dark brown clay-rich deposits indicates that the underlying fossil-rich silts were deposited during temporal windows that were relatively short-lived compared to the span of the Halibee member. We predict that continued integration of actualistic investigations in the Middle Awash with the geological and paleobiological evidence from archaeological occurrences will generate multiple testable hypotheses about Pleistocene occupation of the Halibee area. Broader Implications and Potentials The paleoanthropological resources of the Afar introduced and calibrated above join a growing body of multidisciplinary evidence with which to investigate current issues in human evolution. For example, there are serious ongoing debates among paleolithic archaeologists about the reality of purported transitions between named archaeological technocomplexes such as the Acheulean, MSA, and LSA (47-50). Indeed, from eastern to western Africa, understanding of the nature of the relationship between the MSA and LSA remains incomplete, with little consensus on issues ranging from timing to geography to technology. Understanding the temporal and spatial variation in technologies (51-53), subsistence (54), mobility (55, 56), and potential ecosystem modifications (57) of Middle and Late Pleistocene human populations is best accomplished via comprehensive research on stratified, calibrated sequences of time-successive, geographically limited archaeological occurrences associated with skeletal remains. The rapidly expanding nexus of MSA localities in the Afar thereby creates additional opportunities for progress in testing the modes and tempos of biological and cultural change and the causes of observed variation (58-61). In the biological realm, the last several decades have witnessed persistent efforts to match oceanic and lacustrine proxies of global climate change with evolutionary events. Many earlier efforts ignored at least some of the six fundamental problems involved in any such enterprise, practicing it at incompatible scales, with deficient datasets, and falsely equating correlation with causation (62). A recent contribution entitled "Rethinking the ecological drivers of hominin evolution" (20) re-reviewed these attendant problems and called for a "new phase of paleoanthropological research" (p. 797) that abandons the "pattern matching paradigm" (p. 797) in favor of placing greater emphasis on "theory-driven prediction" (p. 803). Such meta-analyses have their value in paleoanthropology, but they will never substitute for the rare assemblages that combine high ecological and behavioral integrity with negligible time-averaging.
The recent acquisition of calibrated, high-resolution terrestrial lake sediments recording environmental variables has already enhanced knowledge of climate change and the attendant complexities of tectonism in eastern Africa (63-66). However, revealing how these dynamics relate to the evolution, dispersal, and behaviors of H. sapiens through time requires more than drill cores and will increasingly depend on outcrops of evidence-bearing sediments that accumulated more episodically and whose contents require sustained extraction, comprehensive analysis, and secure age calibration. The Halibee member occurrences described above meet these strict criteria. Conclusions The African MSA witnessed the emergence of anatomically modern humans and their expansions to Eurasia (66). Understanding the biology of these people and their descendants requires chronologically placed fossils. Ethiopia now contains a succession of five dated sets of human skeletal remains in MSA contexts: the Omo I partial skeleton at >212 ka (67) and four sets of remains from the Middle Awash: the Chai Baro fossils at >158 ka (see above), the Herto fossils at ∼156 to 160 ka (10), the Faro Daba fossils at ∼100 ka (see above), and the upper Aduma fossils at <100 ka (3). We anticipate that LSA-associated human remains will follow. The relative and chronometric placements established above for the sequence of occupations in the Halibee area combine with the high ecological and archaeological integrity of these assemblages to establish the study area's potential for advancing paleoanthropological knowledge of technological, geological, biological, and environmental changes in a single basin during a period widely associated with the ultimate emergence and dispersal of modern humans. A recent summary (66) concluded that "interdisciplinary analysis … will undoubtedly reveal new surprises about the roots of modern human ancestry" (p. 235). The Halibee member's contributions to understanding the anatomical, behavioral, and ecological aspects of this ancestry are evident. Establishment of a sound chronostratigraphic framework via independent, cross-verifying chronometers is foundational to ongoing paleoanthropological research. Our results demonstrate the power of sustained field and laboratory work and further confirm that the deep sedimentary stack of Ethiopia's Afar remains central to understanding the origins and evolution of our species. Methods Field and laboratory methods employed to generate the results described above are standard in geoscience and paleoanthropology. Detailed descriptions and illustrations of these methods are presented in SI Appendix. Data Availability. All study data are included in the article and/or supporting information.
Lyophilization provides long-term stability for a lipid nanoparticle-formulated, nucleoside-modified mRNA vaccine Lipid nanoparticle (LNP)-formulated nucleoside-modified mRNA vaccines have proven to be very successful in the fight against the coronavirus disease 2019 (COVID-19) pandemic. They are effective, safe, and can be produced in large quantities. However, the long-term storage of mRNA-LNP vaccines without freezing is still a challenge. Here, we demonstrate that nucleoside-modified mRNA-LNPs can be lyophilized, and the physicochemical properties of the lyophilized material do not significantly change for 12 weeks after storage at room temperature and for at least 24 weeks after storage at 4°C. Importantly, we show in comparative mouse studies that lyophilized firefly luciferase-encoding mRNA-LNPs maintain their high expression, and no decrease in the immunogenicity of a lyophilized influenza virus hemagglutinin-encoding mRNA-LNP vaccine was observed after 12 weeks of storage at room temperature or for at least 24 weeks after storage at 4°C. Our studies offer a potential solution to overcome the long-term storage-related limitations of nucleoside-modified mRNA-LNP vaccines. INTRODUCTION Lipid nanoparticle (LNP)-formulated nucleoside-modified messenger RNA (mRNA) vaccines developed by Moderna and Pfizer-BioNTech demonstrated safety and very high (>90%) efficacy and are at the forefront of the battle against the coronavirus disease 2019 (COVID-19) pandemic. 1-3 Currently, the most critical limitation of this novel vaccine platform is the requirement of a special cold-chain system for long-term storage. While most conventional vaccines can be stored at 2°C-8°C in a refrigerator for at least 6 months, mRNA-LNP vaccines need to be stored frozen, presenting a considerable obstacle to vaccine distribution in countries with poor infrastructure. Lyophilization (freeze-drying) is commonly used in the pharmaceutical industry to increase the stability and shelf life of various products by removing the water from drug formulations. 4,5 In a freeze-dried form, mRNA-LNP vaccines could be conveniently shipped worldwide without the need for cooling or freezing. However, lyophilization of LNPs is less than straightforward. While the process is readily applied to true solutions, LNPs are much more complex; carefully assembled using well-defined processes, 6 these nanostructured particles are made from specific types of lipids at certain ratios. 7 Physicochemical parameters, such as particle size, polydispersity, and proper payload encapsulation, are critical to biological performance and must be retained during the lyophilization process itself and subsequent storage. Careful selection of lyophilization buffers, cycle process parameters, and temperatures is of the utmost importance to ensure they are preserved. Recent studies have shown that LNPs containing small interfering (si)RNA or mRNA can be successfully lyophilized. 8-10 Tekmira Pharmaceuticals developed an LNP for treatment of Zaire Ebola virus (ZEBOV) infection containing siRNA targeting VP24, VP35, and L polymerase proteins. 11,12 After demonstrating complete protection of non-human primates (NHPs) in an otherwise lethal model of ZEBOV, a reformulated, lyophilized version (TKM-100802) was assessed in a phase 1 clinical trial in 2014 (NCT02041715). 13 While Tekmira reported equivalent efficacy with the wet and lyophilized formats of their siRNA-LNP, not all studies have had the same conclusion. Ball et al.
found that siRNA-LNPs can be lyophilized, but they show significantly lower efficacy (gene silencing in cell culture) after reconstitution with water. 8 Two recent studies demonstrated that the mRNA-LNP platform can also be lyophilized. 9,10 Zhao et al. generated lyophilized firefly luciferase-encoding mRNA-LNPs and demonstrated that the reconstituted material maintains the mRNA expression efficiency in mice as observed with in vivo bioluminescence imaging studies. 9 Hong et al. developed a lyophilized severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) mRNA-LNP vaccine formulation and showed that the reconstituted vaccine can induce strong immune responses in mice. 10 Importantly, none of these publications provides information on the stability of lyophilized mRNA-LNP formulations over time. Here, we describe a very efficient lyophilization procedure that can be used to produce nucleoside-modified mRNA-LNPs as a freeze-dried cake. Lyophilized mRNA-LNPs were generated and stored at −80°C, −20°C, 4°C, 25°C (room temperature), and 42°C for 4, 12, or 24 weeks. We demonstrate that the physicochemical properties of mRNA-LNPs do not significantly change after storage at room temperature for 12 weeks, or at 4°C for at least 24 weeks, and then reconstitution with water. Using the same storage conditions, we show that firefly luciferase-encoding mRNA-LNPs do not lose their high translatability as measured by in vivo bioluminescence imaging studies in mice. Most importantly, we demonstrate in comparative mouse immunization studies that a nucleoside-modified mRNA-LNP influenza virus vaccine does not lose potency after 12 weeks of storage at room temperature or for at least 24 weeks at 4°C as a lyophilized product. We believe that this report represents a major advance in the field of mRNA-LNP vaccine development, as it offers a potential solution for the suboptimal long-term storage temperature requirements of these potent new-generation vaccines. Production of lyophilized mRNA-LNP formulations LNPs comprising the ionizable lipid (6Z,16Z)-12-((Z)-dec-4-en-1-yl)docosa-6,16-dien-11-yl 5-(dimethylamino)pentanoate, 1,2-distearoyl-sn-glycero-3-phosphocholine (DSPC), cholesterol, and PEG2000-C-DMA at a molar ratio of 50:10:38.5:1.5 were formulated using a modified version of our proprietary T-mixer manufacturing process [US 9,005,654]. This involves mixing lipids dissolved in ethanol with a low-pH, aqueous solution of nucleic acid in a T-shaped mixing chamber. LNPs form spontaneously as the ethanol concentration drops below the level required to support lipid solubility. Particles are then rapidly stabilized by further dilution with an aqueous buffer in the collection reservoir. The process is robust, highly scalable, and has been used to encapsulate a variety of nucleic acids, including siRNA and mRNA. 6,12,14 Here, we separately encapsulated two different mRNAs encoding either firefly luciferase (Luc) or hemagglutinin from the A/Puerto Rico/8/1934 influenza virus strain (PR8 HA) in LNPs, subsequently exchanging the carrier buffer to 5 mM Tris pH 8, containing 10% sucrose and 10% maltose (w/v). LNP formulations were then freeze-dried using a VirTis Genesis Pilot Lyophilizer. The lyophilization cycle consisted of a freezing step at −45°C, a primary drying step at −25°C and 20 mTorr, and a secondary drying step at 30°C and 20 mTorr.
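The published cycle setpoints can be summarized as a simple recipe. The sketch below encodes the shelf temperatures and pressures given here and the step durations given in the Methods (3 h freeze, 84 h primary dry, 5 h secondary dry); shelf-ramp rates and other instrument-specific details are not reported, so this is a descriptive summary rather than a VirTis instrument program.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LyoStep:
    name: str
    shelf_temp_c: float
    pressure_mtorr: Optional[float]  # None = ambient pressure (freezing step)
    hours: float

# Setpoints from the text; durations from the Methods section.
CYCLE = [
    LyoStep("freeze", -45.0, None, 3.0),
    LyoStep("primary dry", -25.0, 20.0, 84.0),
    LyoStep("secondary dry", 30.0, 20.0, 5.0),
]

print(f"total cycle time: {sum(s.hours for s in CYCLE):g} h")  # 92 h
for s in CYCLE:
    pressure = "ambient" if s.pressure_mtorr is None else f"{s.pressure_mtorr:g} mTorr"
    print(f"{s.name:>13}: {s.shelf_temp_c:+.0f} °C, {pressure}, {s.hours:g} h")
```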
At the completion of the cycle, samples were brought to atmospheric pressure, backfilled with ultrapure nitrogen gas, stoppered, and then transferred to various storage temperatures (−80°C, −20°C, 4°C, 25°C, 42°C) for stability monitoring. All samples were noted to have a dense, white, freeze-dried cake structure. Aliquots from the same batch of mRNA-LNP were stored frozen at −80°C and served as a benchmark control in this evaluation. Physicochemical characterization of lyophilized mRNA-LNPs At set time points (0, 4, 12, and 24 weeks after production), lyophilized mRNA-LNPs were removed from storage. The lyophilized vials were inspected for cake appearance, which can be indicative of physicochemical changes that may impact product quality and biological performance. 15 All lyophilized vials contained a uniform and elegant cake (Figure S1), showing no signs of cake collapse, shrinkage, or cracked texture. This cake was quickly reconstituted by the addition of nuclease-free water to a target concentration of 0.5 mg/mL total mRNA. After the addition of water, vials were gently inverted several times and quickly acquired a clear, opalescent appearance with no visible solids (Figure S2). We previously assessed the stability of this LNP composition in a frozen format. These studies found no change in key quality attributes when stored for 1 year at −80°C (Table S1). To generate a similar dataset with this set of lyophilized products, we first analyzed the properties of the wet Luc and PR8 HA mRNA-LNPs before and after freeze-thaw and post-reconstitution of the lyophilized samples at release, marking the week 0 time point (Table S2). Hereafter, the frozen and reconstituted samples in the time course study were analyzed using stability-indicating assays for total RNA content, mRNA purity, percentage of RNA encapsulation, lipid identity, lipid content, mean particle size, and polydispersity. We utilized dynamic light scattering (DLS) to characterize particle size and polydispersity (size distribution). Frozen LNPs stored at −80°C showed no particle size change over time (Figure 1A). Similarly, lyophilized LNPs stored at 4°C and below also maintained particle-size integrity for at least 24 weeks after production. In contrast, an increase in z-average diameter was measured in lyophilized samples stored at elevated temperatures. Interestingly, despite exhibiting size growth, these samples maintained a narrow size distribution, where the polydispersity index was <0.10 (Figure 1B). We also noted that particle-size increase reached a plateau following 4-week storage of lyophilized LNPs at 42°C. Encapsulation efficiency was measured by the RiboGreen assay, which relies on a dye that fluoresces upon binding to single-stranded mRNA. Dye accessibility is low with intact LNPs, so only unencapsulated mRNA is detected. To determine the total mRNA concentration, entrapped mRNA is released by addition of a detergent (Triton X-100) to lyse the LNPs. The ratio of fluorescence intensity before and after addition of Triton allows for the calculation of the proportion of encapsulated mRNA payload, typically >90% in stable formulations. There was no significant change in encapsulation efficiency of mRNA-LNPs stored under most conditions, including lyophilized mRNA-LNPs stored for 24 weeks at room temperature (Figure 1C). Only at 42°C storage did the lyophilized PR8 HA mRNA-LNP product show a steady decline in encapsulation efficiency after the first 12 weeks, followed by an increase between 12 and 24 weeks.
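The RiboGreen logic just described (the equation is also given explicitly in the Methods below) is easy to express in code. In the sketch below, all fluorescence readings and standard-curve points are made up for illustration; only the calculation follows the paper.

```python
import numpy as np

# RiboGreen encapsulation calculation: fluorescence without detergent
# reports unencapsulated mRNA; with Triton X-100 it reports total mRNA.
std_conc = np.array([0.0, 25.0, 50.0, 100.0])     # RNA standards, ng/mL
std_fluor = np.array([2.0, 130.0, 262.0, 518.0])  # arbitrary units (made up)

slope, intercept = np.polyfit(std_conc, std_fluor, 1)  # linear standard curve

f_minus_triton = 38.0   # unencapsulated signal only (hypothetical)
f_plus_triton = 455.0   # total signal after lysis (hypothetical)

encapsulation = (f_plus_triton - f_minus_triton) / f_plus_triton * 100.0
total_rna = (f_plus_triton - intercept) / slope

print(f"encapsulation efficiency: {encapsulation:.1f}%")  # ~91.6%
print(f"total RNA concentration:  {total_rna:.1f} ng/mL")
```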
While there is no binding interference of RiboGreen with unencapsulated single nucleotides, the increased encapsulation observed at 24 weeks may be due to binding of the dye to degraded segments of the mRNA payload. This hypothesis correlates well with the measured total mRNA concentration. A steady decline in total mRNA content over time, including the 24-week time point, was reported for the lyophilized product stored at 42°C. For all other storage formats and conditions, no significant change in total mRNA concentration was reported over this time period (Figure 1D). The integrity of nucleoside-modified mRNA-LNPs was assessed by capillary electrophoresis. No notable changes in mRNA integrity were observed for the −80°C frozen product, as well as the lyophilized product stored at either −20°C or −80°C, for at least 24 weeks (Figures 1E and S3). Above subzero temperatures, mRNA chemical degradation was observed in a temperature-dependent manner. For lyophilized mRNA-LNPs stored at 4°C for 24 weeks, there was an approximate 10%-15% decrease in RNA integrity. Meanwhile, for samples stored at 25°C, approximately 30% reduction in mRNA integrity was reported. It is important to note that no further mRNA degradation was observed for the PR8 HA mRNA-LNPs between 12 and 24 weeks for both 4°C and 25°C storage conditions. The greatest loss of mRNA integrity (approximately 70%) was reported for the lyophilized samples stored at 42°C. All four lipid components were analyzed by ultra-high-performance liquid chromatography (UHPLC) and remained stable for at least 24 weeks, regardless of the LNP format and storage temperature (Figures S4 and S5). Importantly, the lipid composition maintained the target molar ratio of 1.5:50:38.5:10 (PEG lipid:ionizable lipid:cholesterol:DSPC). To further demonstrate the benefits of lyophilization, we conducted a comparative study of non-lyophilized and lyophilized samples stored at different temperatures for 4 weeks. Since the datasets for Luc and PR8 HA mRNA-LNPs were very comparable in the main stability arm, we conducted this direct comparison with Luc mRNA-LNPs only. In this short-term stability study, non-lyophilized LNPs were stored as a wet formulation at −80°C, −20°C, 4°C, 25°C, and 42°C for 4 weeks. In comparison with the lyophilized LNPs, more changes in particle characteristics were reported for the non-lyophilized counterpart by 4 weeks. Particle size of non-lyophilized samples increased at all storage temperatures, which was not observed with lyophilized samples stored below 42°C. At 42°C, lower mRNA integrity was reported for the non-lyophilized sample than for the lyophilized sample. Lower RNA encapsulation was also reported for the non-lyophilized sample stored at −20°C. Overall, these results demonstrate that lyophilized samples provide improved stability over non-lyophilized samples (Table S3). Additionally, an in-use stability study was performed at room temperature with the Luc mRNA-LNPs. Both frozen and lyophilized samples were removed from −80°C storage, thawed, and reconstituted (if applicable) and then held at 25°C for up to 24 h. All attributes were comparable to those of the samples at time 0 (Table 1). These results support in-use stability of both −80°C frozen and lyophilized/reconstituted samples for at least 24 h at 25°C, which exceeds the current in-use stability instructions of approved mRNA-based COVID-19 vaccines. 16
In vivo activity of frozen and reconstituted lyophilized Luc mRNA-LNPs The translatability of Luc mRNA-LNPs was evaluated in mice by in vivo imaging studies. As most vaccines are given intramuscularly (IM) or intradermally (ID), mRNA-LNPs were tested after IM and ID injections (Figures 2, S6, and S7). Animals were injected IM with Luc mRNA-LNPs, and protein production from the frozen and reconstituted lyophilized products stored for 0, 4, and 24 weeks was examined (Figures 2A and S6). Lyophilization results in some decrease in activity of Luc mRNA-LNPs compared with the frozen formulations (Figures 2A, 2B, and S6A). Lyophilized mRNA-LNPs stored at room temperature (or lower temperatures) displayed a high level of protein production at week 4 (Figures 2A, 2C, and S6B). Storing mRNA-LNPs at 42°C results in a substantial drop in activity compared with storage under other conditions by week 4 (Figures 2A, 2C, and S6B). Impressively, lyophilized mRNA-LNPs stored at 4°C remain stable for at least 24 weeks (Figures 2A, 2D, and S6C). A decrease in protein production from Luc mRNA-LNPs was found after storage at room temperature for 24 weeks (Figures 2A, 2D, and S6C). Storing mRNA-LNPs at 42°C for 24 weeks results in a substantial further drop in activity compared with week 4 (Figures 2A, 2D, S6B, and S6C). We obtained very similar results after evaluating protein production from ID-administered Luc mRNA-LNPs (Figures 2E-2H and S7). In summary, lyophilized Luc mRNA-LNPs remained stable at room temperature for at least 4 weeks and at 4°C for at least 24 weeks. [Figure 2 caption: In vivo imaging studies with Luc mRNA-LNPs. Frozen Luc mRNA-LNPs were stored at −80°C throughout the study, and lyophilized Luc mRNA-LNPs were stored at −80°C, −20°C, 4°C, 25°C, or 42°C for 0, 4, or 24 weeks prior to reconstitution.] Immunogenicity of frozen and reconstituted lyophilized PR8 HA mRNA-LNP influenza vaccines To investigate the performance of the reconstituted lyophilized PR8 HA mRNA-LNP vaccines stored at various temperatures, mice were immunized IM or ID with a single dose of vaccine, and HA inhibition (HAI) titers in sera of these vaccinated mice were determined 4 weeks later (Figure 3). HAI titers are a simple readout and clearly reflect the quality and magnitude of immune responses induced by PR8 HA mRNA-LNP vaccines. As immunogenicity is a critical parameter, an additional time point (week 12) was added to the PR8 HA study protocol to provide more detailed information. As in previous experiments, frozen mRNA-LNPs served as a benchmark control in this evaluation. When administered IM, no significant difference between the activity of frozen and freshly reconstituted lyophilized formulations was found (Figure 3A). Despite some decrease in mRNA integrity (Figure 1E), the immunogenicity of the lyophilized vaccines did not decrease after 12 weeks of storage at room temperature (Figures 3B and 3C). Impressively, no drop in PR8 HAI activity and only a modest decrease in immunogenicity were found after 24 weeks of storage at 4°C and at room temperature, respectively (Figure 3D). In accordance with previous measurements, storing PR8 HA mRNA-LNPs at 42°C resulted in a significant drop in activity by week 4 (Figure 3B), a substantial further decrease by week 12 (Figure 3C), and very low to no activity by week 24 (Figure 3D). Similar results were obtained after ID immunizations with PR8 HA mRNA-LNP vaccines (Figures 3E-3H).
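The statistical treatment named in the Figure 3 legend, one-way ANOVA with Bonferroni's multiple comparisons on log-transformed titers, can be sketched as follows. The titer values are synthetic, and the Bonferroni step is approximated here with pairwise t-tests at a corrected alpha rather than the pooled-variance procedure a statistics package would apply.

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Synthetic HAI titers; titers are two-fold dilution series, hence log2.
rng = np.random.default_rng(1)
groups = {
    "frozen": 2.0 ** rng.integers(6, 10, size=10) * 10,   # high titers
    "lyo_4C": 2.0 ** rng.integers(6, 10, size=10) * 10,   # high titers
    "lyo_42C": 2.0 ** rng.integers(1, 4, size=10) * 10,   # degraded group
}
logged = {k: np.log2(v) for k, v in groups.items()}

f_stat, p_anova = stats.f_oneway(*logged.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.2e}")

pairs = list(combinations(logged, 2))
alpha = 0.05 / len(pairs)  # Bonferroni correction
for a, b in pairs:
    t, p = stats.ttest_ind(logged[a], logged[b])
    verdict = "significant" if p < alpha else "not significant"
    print(f"{a} vs {b}: p = {p:.2e} ({verdict} at corrected alpha)")
```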
In summary, lyophilized PR8 mRNA-LNP vaccines did not lose activity after storage at room temperature for 12 weeks and lost some activity by 24 weeks. Storing the lyophilized vaccines at 4°C for at least 24 weeks did not result in any loss of immunogenicity. DISCUSSION The emergence of SARS-CoV-2 has motivated a global effort to develop a protective vaccine. mRNA is an attractive vaccine modality, owing to its flexibility in antigen design and the speed of development and production. However, effective mRNA vaccines require both efficient mRNA delivery to cells and a high level of antigen expression to induce robust immune responses coupled with durable protective immunity. Specialized delivery technologies such as LNPs are required to realize their full potential. Despite real-world evidence demonstrating the advantages of this promising novel platform, the instability of mRNA-LNP vaccines and the need for frozen storage remain major limitations. Here, we investigated lyophilization to achieve long-term pharmaceutical stability in these formulations. Lyophilization, or freeze-drying, is one of the most common methods used for long-term preservation of drug products, including colloidal nanoparticle suspensions. 4,5 Physical instability can be characterized as aggregation or fusion of the nanoparticles, manifesting as an increase in particle size or polydispersity. Chemical instability is most often observed as degradation of the mRNA payload and/or lipid components. Either form of instability would pose challenges for storage in an aqueous buffer as a wet formulation. Extrinsic parameters, such as storage-buffer pH and temperature, may further impact stability. As a result, lyophilization buffers, cycle times, and temperature are important parameters for preserving the physicochemical parameters of LNPs. Nucleoside-modified mRNA-LNPs were produced as a freeze-dried cake through lyophilization. For a quick assessment of the success of this process, macroscopic analysis was performed by visually inspecting the lyophilized product for cake appearance. Observations such as cake collapse, shrinkage, or cracked texture may indicate potential changes in LNP characteristics. The lyophilized vials in this study presented a uniform and elegant cake that was rapidly restored to its original state after resuspension in nuclease-free water. The reconstituted samples acquired a clear, opalescent appearance with no visible solids. Particle size is an important stability parameter and can influence pharmacokinetics, distribution, safety, and efficacy. For frozen and lyophilized LNPs stored at 4°C (and lower temperatures), particle size stability was reported for at least 24 weeks. Room-temperature storage of the lyophilized vials provided at least 4 to 12 weeks of particle size stability. Under the most accelerated storage condition of 42°C, the lyophilized product exhibited a large increase in particle size between 0 and 4 weeks, but remained stable for at least 24 weeks. Under these conditions, the maximum particle diameter was less than 150 nm. Importantly, this particle size is still within the range where LNPs have been reported to elicit robust immune responses in animals, including NHPs. 17 As mRNA is an active drug substance, contributing to the immunogenicity and efficacy of the successful mRNA-LNP vaccines, it is important to monitor its encapsulation efficiency, integrity, and content. With intact and stable LNPs, the encapsulation efficiency of an mRNA payload is typically >90%.
Here, we found no change in this parameter under most storage conditions, including lyophilized mRNA-LNPs stored for 24 weeks at room temperature. Moreover, no change was reported in total mRNA concentration in lyophilized LNPs stored at room temperature and below during the course of this study. While currently there are no published criteria on the acceptable limits of RNA integrity and its threshold with respect to vaccine efficacy, it is critical to prevent mRNA degradation to ensure biological performance. RNA integrity represents the most temperature-sensitive, stability-limiting parameter. Remarkably, no notable changes in Luc and PR8 HA mRNA integrity were observed for LNP products stored at subzero temperatures for at least 24 weeks. [Figure 3 caption: Immunogenicity of PR8 HA mRNA-LNP vaccines. Frozen PR8 HA mRNA-LNPs were stored at −80°C throughout the study, and lyophilized PR8 HA mRNA-LNPs were stored at −80°C, −20°C, 4°C, 25°C, or 42°C for 0, 4, 12, or 24 weeks prior to reconstitution. (A-H) Mice were (A-D) IM- or (E-H) ID-injected with 10 μg PR8 HA mRNA-LNPs (frozen or reconstituted lyophilized products), serum was collected 4 weeks post-immunization, and PR8 HAI titers were determined. n = 10 mice per group. Error bars are SEM. Each symbol represents one animal. HAI titers below the limit of detection are shown equal to 1 on the graph. Statistical analysis: one-way ANOVA with Bonferroni's multiple comparisons test on log-transformed data was performed; *p ≤ 0.05, **p ≤ 0.01, ***p ≤ 0.001, ****p ≤ 0.0001.] mRNA chemical degradation was observed in a temperature-dependent pattern, resulting in an approximately 10%-15% decrease in RNA integrity for lyophilized mRNA-LNPs stored for 24 weeks at 4°C and an approximately 30% decrease in RNA integrity for lyophilized mRNA-LNPs stored for 24 weeks at room temperature. Impressively, the 4°C lyophilized samples maintained the same in vivo potency as the frozen samples. Meanwhile, only slight decreases in immunogenicity were observed with lyophilized PR8 HA mRNA-LNP samples stored at room temperature for 24 weeks. It is important to note that no further mRNA degradation was observed for the PR8 HA mRNA-LNP samples between 12 and 24 weeks for both 4°C and room temperature storage conditions. The greatest loss of mRNA integrity (approximately 70%) was reported for the lyophilized samples stored at 42°C; however, this degree of degradation did not completely abrogate in vivo potency. As each lipid component in the LNP has specific functions during particle formation, stabilization, and biological performance, it is critical to maintain the stability of lipid components to ensure a pharmacologically active drug product. The amine group of the ionizable lipid is positively charged at acidic pH, promoting encapsulation of the negatively charged mRNA payload during particle formation. Following cellular uptake of the LNP, it further drives endosomal fusion and cytoplasmic release of payload. The PEG-conjugated lipid controls particle size during formation and prevents particle aggregation by sterically stabilizing the LNP. DSPC and cholesterol are often referred to as structural lipids, with concentrations chosen to optimize particle size, encapsulation, and stability. In aggregate, the LNP serves to protect the delicate RNA molecule from serum nucleases during transit to target cells and promotes uptake and delivery. All four lipids maintained their integrity under all storage conditions tested in this study.
These trends are impressive, as some ionizable lipids and DSPC are susceptible to temperature- and pH-dependent hydrolysis, which was not observed here. 18 All frozen and lyophilized LNPs maintained the target molar ratio of 1.5:50:38.5:10 (PEG lipid:ionizable lipid:cholesterol:DSPC).

Overall, these results are very encouraging, as other groups that evaluated the stability of lipid-based nanoparticles encapsulating Luc mRNA reported a significantly lower bioluminescence signal in vivo after storing the lyophilized product for 1 week at −80 °C. 9 Although the presence of a cryoprotectant stabilized particle size, they speculated that the nanostructure of these mRNA formulations was altered during the lyophilization and reconstitution process, thereby affecting their delivery efficiency in vivo. In our studies, we were able to maintain key physicochemical attributes of our lyophilized mRNA-LNP product and demonstrate high in vivo translation.

A preliminary shelf life of at least 24 weeks at 4 °C offers increased flexibility over the current options. Both authorized COVID-19 mRNA vaccines require frozen storage in the presence of sucrose. 18 SpikeVax is stable for up to 6 months at −15 °C to −20 °C, whereas Comirnaty is stable at −60 °C to −80 °C for up to 6 months or at −15 °C to −25 °C for 2 weeks. 19 Moreover, in-use stability assessment of our mRNA-LNPs showed no change in physicochemical characteristics at room temperature for at least 24 h. This provides a significant advantage, as it maximizes the use of drug product in a single vial and enables efficient administration over this 24-h period. Currently, SpikeVax is reported to have up to 12 h of stability at 25 °C, whereas Comirnaty has up to 2 h of stability at 25 °C or 6 h of stability after dilution with 0.9% saline for injection. 16

The urgency of the COVID-19 pandemic demanded the rapid identification and development of a protective vaccine. Although Moderna and Pfizer/BioNTech quickly developed very effective nucleoside-modified mRNA-LNP vaccines, 1-3 some critical aspects of vaccine stability have yet to be addressed. In the few reports published on lyophilized mRNA-LNPs, there has been no discussion of the key quality attributes of these products after long-term storage or of the biological impact of long-term storage. We believe that this report represents an important advancement in the field of mRNA-LNP vaccine research, as our dataset provides a better understanding of the physicochemical characteristics and in vivo activity (translatability and immunogenicity) of this new-generation platform. The lyophilization approach represents a compelling opportunity for improving the thermostability of mRNA-LNP vaccines and will be critical in facilitating rapid global distribution of these vaccines in the future.

Ethics statement

The investigators faithfully adhered to the "Guide for the Care and Use of Laboratory Animals" by the Committee on Care of Laboratory Animal Resources Commission on Life Sciences, National Research Council. Mouse studies were conducted under protocols approved by the Institutional Animal Care and Use Committee (IACUC) of the University of Pennsylvania. All animals were housed and cared for according to local, state, and federal policies in an Association for Assessment and Accreditation of Laboratory Animal Care International (AAALAC)-accredited facility.

Production of mRNA-LNPs

mRNAs were produced from linearized plasmids encoding codon-optimized firefly Luc or HA of A/Puerto Rico/8/1934 influenza virus as described. 20
Briefly, mRNAs were transcribed to contain 101-nt-long poly(A) tails. m1Ψ-5′-triphosphate (TriLink) instead of uridine 5′-triphosphate (UTP) was used to generate modified nucleoside-containing mRNA. Capping of the in-vitro-transcribed mRNAs was performed co-transcriptionally using the trinucleotide cap1 analog CleanCap (TriLink). mRNA was purified by cellulose purification, as described. 21 All mRNAs were analyzed by agarose gel electrophoresis and were stored frozen at −20 °C.

Lyophilization process

Lyophilization was performed in a glass chamber of a Pilot Freeze Dryer (SP Scientific VirTis Genesis 35L Pilot Lyophilizer). Samples were frozen at −45 °C for 3 h, followed by a primary dry cycle at −25 °C/20 mTorr for 84 h. During the secondary dry cycle, samples were warmed to 30 °C/20 mTorr and held for 5 h. Vials were backfilled with nitrogen, capped, and transferred to various storage temperatures (−80 °C, −20 °C, 4 °C, 25 °C, 42 °C) for stability assessments.

LNP characterization

Frozen and lyophilized vials were removed from storage at set time points (e.g., weeks 0, 4, 12, and 24) and equilibrated to room temperature. Lyophilized samples were reconstituted by quick addition of 500 µL nuclease-free water and gently mixed. LNPs were diluted to 0.8-1.6 ng/µL total mRNA in phosphate-buffered saline (PBS), pH 7.4, and transferred into a polystyrene cuvette to measure particle size and polydispersity by DLS (Malvern Nano ZS Zetasizer), using a refractive index (RI) of 1.590 and an absorption of 0.010 in PBS at 25 °C, with a viscosity of 0.9073 centipoise (cP) and an RI of 1.332. Measurements were made with 10-s run durations, with the number of runs automatically determined. Each measurement had a fixed position of 4.65 mm in the cuvette with automatic attenuation selection. Diameters are reported as z-average.

RNA encapsulation efficiency and concentration were determined by the Quant-iT RiboGreen Assay (Life Technologies). Quantification of RNA in the LNP formulation was conducted using a standard curve generated from a dilution series of the corresponding RNA stock (either Luc or PR8 mRNA). Both standards and samples were diluted with 1× Tris-EDTA (TE) buffer, pH 8.0. Samples were targeted to reach 0.1 ng/µL in the final sample in the polystyrene cuvette. Fluorescence was measured using a spectrofluorophotometer (Varian Cary Eclipse) set at 500-nm excitation and 525-nm emission. The standard curve was calculated by linear regression analysis of the fluorescence intensity plotted against the concentration of the standard. RNA encapsulation of LNP samples was determined by comparing the signal of the RNA-binding fluorescent dye RiboGreen in the absence and presence of a detergent (0.1% Triton X-100). In the absence of detergent, the signal comes only from accessible (unencapsulated) RNA. In the presence of detergent, the LNP is disrupted so that the measured signal comes from the total RNA (both encapsulated and non-encapsulated). The encapsulation percentage is calculated using the following equation: Encapsulation efficiency (%) = ([Fluorescence]total − [Fluorescence]unencapsulated) / [Fluorescence]total × 100%.
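To make the RiboGreen arithmetic concrete, here is a minimal Python sketch of the standard-curve regression and the encapsulation-efficiency equation above. The fluorescence readings and concentrations are hypothetical placeholders, not values from this study.

```python
import numpy as np

# Hypothetical RiboGreen standard curve: fluorescence (a.u.) measured for a
# dilution series of the RNA stock (concentrations in ng/mL).
std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
std_fluor = np.array([2.0, 130.0, 255.0, 510.0, 1015.0])

# Linear regression of fluorescence against concentration, as described.
slope, intercept = np.polyfit(std_conc, std_fluor, 1)

def rna_concentration(fluorescence):
    """Interpolate total RNA concentration from the standard curve."""
    return (fluorescence - intercept) / slope

# Hypothetical sample readings without detergent (accessible RNA only) and
# with 0.1% Triton X-100 (total RNA after LNP disruption).
f_unencapsulated = 45.0
f_total = 980.0

# Encapsulation efficiency per the equation in the text.
encapsulation_pct = (f_total - f_unencapsulated) / f_total * 100.0

print(f"Total RNA: {rna_concentration(f_total):.1f} ng/mL")
print(f"Encapsulation efficiency: {encapsulation_pct:.1f}%")
```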
RNA integrity was measured by capillary electrophoresis on the Agilent 5200 Fragment Analyzer, using the Agilent HS RNA Kit (DNF-472-1000). At each time point, LNP samples were treated with Triton X-100 to disrupt the particles, diluted to 0.0025 mg/mL, mixed with the marker diluent, and then heat denatured at 70 °C for 2 min. The unformulated RNA payloads were treated in exactly the same manner. The Fragment Analyzer injected the sample at 7 kV for 150 s, with separation at 8 kV for 45 min. Data from each run were analyzed using PROSize 3.0 software (Agilent Technologies). RNA integrity of the formulated mRNA-LNPs is represented as the percentage relative to the unformulated mRNA standard assayed within the same run.

Lipid content was determined by UHPLC using the Thermo Scientific Vanquish UHPLC system with a CAD detector. The UHPLC method uses an Ace Ultracore superC18 column (100 × 2.

Eight-week-old female BALB/c mice (Charles River Laboratories) were utilized for this study. Lyophilized mRNA-LNPs were reconstituted by the addition of nuclease-free water to a target concentration of 0.5 mg/mL total mRNA. Reconstituted mRNA-LNPs were filtered using a 13-mm, 0.2-µm syringe filter (Pall Acrodisc). mRNA-LNPs were diluted with sterile PBS (Corning) and administered via the IM or ID routes using a 3/10-cc 29½ G insulin syringe (Covidien) and 40- or 30-µL injection volumes, respectively.

Blood collection

Mice were isoflurane-anesthetized, and blood was collected through the retro-orbital route. Serum was separated from blood by centrifugation at 10,000 × g for 5 min. Separated serum was stored at −20 °C until used.

Bioluminescence imaging studies

Bioluminescence imaging was performed with an In Vivo Imaging System (IVIS) Spectrum imaging system (Caliper Life Sciences). Mice were administered D-luciferin (Regis Technologies) at a dose of 150 mg/kg intraperitoneally. Mice were anesthetized after receiving D-luciferin in a chamber with 3% isoflurane (Piramal Healthcare Limited) and placed on the imaging platform while being maintained on 2% isoflurane via a nose cone. Mice were imaged at 5 min post-administration of D-luciferin using an exposure time of 5-60 s to ensure that the signal acquired was within the effective detection range (above noise levels and below the charge-coupled device [CCD] saturation limit). Bioluminescence values were quantified by measuring photon flux (photons/second) in the region of interest.
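As a worked illustration of the relative RNA-integrity metric described under LNP characterization above, the following sketch assumes hypothetical intact-peak fractions for a formulated sample and the unformulated standard from the same run.

```python
# Hypothetical peak fractions from capillary electrophoresis traces: the
# share of total signal in the intact (full-length) mRNA peak for a
# formulated LNP sample and for the unformulated mRNA standard.
intact_fraction_lnp = 0.62
intact_fraction_standard = 0.88

# Relative RNA integrity, as defined in the text: the formulated sample
# expressed as a percentage of the unformulated standard from the same run.
relative_integrity = intact_fraction_lnp / intact_fraction_standard * 100.0
print(f"RNA integrity: {relative_integrity:.0f}% of unformulated standard")
```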
Identification of Potential Donors for Higher β-Glucan Content in Oats (Avena sativa L.) Germplasm

The common oat (Avena sativa) is a cereal grain species considered to be the most economical and the richest source of soluble dietary fiber. The soluble fiber content of oats is the main reason for the crop's valuable effects. In the present study, the estimation of β-glucan content was carried out on a total of 95 oat genotypes procured from NBPGR, New Delhi. The isolation and estimation of β-glucan content was done using the alkaline extraction method. The β-glucan content in the studied genotypes ranged from 0.43% to 6.90%. Only 4 germplasm lines had significantly higher β-glucan than the standard check OL 10 (5.79%). These were the exotic germplasm lines EC 537851, EC 246158, EC 528874, and EC 372463. These could serve as donors for higher β-glucan content in oat breeding programmes.

Introduction

Avena sativa L. is a cereal grain species which is prominently cultivated for consumption as a human food as well as animal feed (Daou and Zhang, 2012). Oat usage in human foods has increased as information on its beneficial nutritional properties has come to light. This wonder cereal grain is referred to as a 'super grain' because of its impressive health benefits (Smulders et al., 2016). These beneficial effects are attributed to the soluble fiber content of the oat grain. β-glucan, a class of polysaccharides, is the major component of soluble fiber in oats. β-glucan is a hemicellulose which makes up about 75 percent of the endosperm cell walls of the oat grain (Miller et al., 1995). In barley and oats, β-glucan consists of mixed-linkage (1,3)(1,4)-β-D-glucose units (Tohamy et al., 2003). Scientific research has consistently emphasized the importance of oat β-glucan in the human diet. Oat β-glucan is directly related to health improvements such as lowering blood pressure, lowering bad cholesterol, and improving diabetes management and immune response (Keenan et al., 2002; Braaten et al., 1994; Jenkins et al., 2002; Estrada et al., 1997; Kaur et al., 2019). The oat crop is gaining economic interest because of its health benefits. As a result, substantial breeding efforts have focused on increasing β-glucan content. Therefore, identification of genotypes with high β-glucan content can be useful for enhancing the β-glucan content of local germplasm lines through different breeding strategies (Ahmad et al., 2014). In this study, we estimated the β-glucan content of oat genotypes and screened them for higher β-glucan content, so that they could be used in breeding programs for oat improvement.

Materials and Methods

A total of 95 genotypes of hexaploid oats were used for the present research. These genotypes included germplasm lines of exotic and indigenous origin: 65 genotypes were of exotic origin and the remaining 30 were indigenous collections.
All of these were made available by the National Bureau of Plant Genetic Resources, New Delhi.

Estimation of β-glucan content

The extraction and estimation of β-glucan content is technically demanding; nevertheless, methods have continued to be developed over the years. There are several methods for isolating β-glucan, i.e., acid extraction, alkaline extraction, and enzymatic extraction (Daou and Zhang, 2012). In our study, we used the alkaline extraction method outlined by Wood et al. (1977). The reagents used in this method were sodium carbonate, sodium bicarbonate, 2 mol/L HCl, and isopropanol (Fig. 1).

Fig. 1. The extraction procedure of β-glucan (Wood et al., 1977).

Results and Discussion

Twenty-eight genotypes were found to have β-glucan content in the lower range, i.e., from 0.43% to 2.0% (Table 1). The frequency distribution of the genotypes is shown in Fig. 2. These results were within the range of previously reported studies.

Fig. 2. Frequency distribution of individuals for β-glucan content.

Our study yielded a total of 24 promising genotypes which could be useful for future oat breeding programs. These are the lines with higher β-glucan content, and they can serve as potential donors in oat improvement programs for developing varieties with higher β-glucan values. These selected genotypes can be further characterized using molecular approaches to identify the genetic regions controlling the trait.
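As an illustration of the screening logic, a minimal Python sketch follows. The per-genotype values are hypothetical placeholders (only the four EC line names, the check OL 10 at 5.79%, and the 0.43-6.90% range come from the study), and numpy's histogram stands in for the frequency-distribution tally of Fig. 2.

```python
import numpy as np

# Hypothetical β-glucan values (%) for a handful of genotypes; the study
# screened 95 genotypes against the standard check OL 10 (5.79%).
beta_glucan = {
    "EC 537851": 6.90, "EC 246158": 6.45, "EC 528874": 6.21,
    "EC 372463": 6.02, "IC 0001": 1.85, "IC 0002": 3.40,
}
check_value = 5.79  # standard check OL 10

# Genotypes exceeding the check are candidate donors.
donors = {g: v for g, v in beta_glucan.items() if v > check_value}
print("Candidate donors:", donors)

# Frequency distribution over 1%-wide classes (cf. Fig. 2).
values = np.array(list(beta_glucan.values()))
counts, edges = np.histogram(values, bins=np.arange(0, 8, 1))
for lo, n in zip(edges[:-1], counts):
    print(f"{lo:.0f}-{lo + 1:.0f} %: {n} genotype(s)")
```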
Design and application of an analytical system for vertical foam structure

The purpose of this paper was to observe and analyze the structure of vertical foam. Firstly, the characterization of horizontal foam structure using a binocular stereo microscope and a particle analyzer is introduced. Secondly, a foam receiver and a monocular stereo microscope with a universal support were designed to observe vertical foam, and the analytical system was assembled from the particle analyzer and the microscope. Thirdly, this analytical system was applied to analyze the vertical structure of fire-fighting foam. Further study should focus on foam structure evolution under heat radiation.

Introduction

Foams are formed by trapping pockets of gas in a liquid or solid. Fire-fighting foam can be used to extinguish a fire by cooling the environment, coating the fuel, and preventing the combustible's contact with oxygen. A foam's fire-control ability is related to macroscopic parameters including foam type, expansion, drainage time, and production process [1]. It is also related to microscopic parameters including structure evolution and membrane thickness. The foam's structure can be captured by a camera device [2]. Building on the study of horizontal foam structure, an analysis system for vertical foam is established in this paper.

Binocular stereo microscope

A stereo microscope is an optical microscope variant designed for low-magnification observation of a sample, typically using light reflected from the surface of an object rather than transmitted through it. For fire-fighting foam with high transmittance, light can both be reflected from the foam's surface and transmitted through the foam. The SZ760T2LED stereo microscope's maximum magnification can reach 110× with the help of a 2× eyepiece. The working distance is 110 mm and the pupillary distance is 54-76 mm. Video of horizontal foam structure evolution can then be obtained using a camera device (2048 × 1536). Frames captured from the video can be further analyzed using a particle analyzer.

Particle analyzer for horizontal foam structure

A particle analyzer can be used to determine the size range, the average size, and the distribution of the particles in a liquid sample [3]. A foam image can be captured from the binocular stereo microscope's video, and the foam bubbles can then be recognized and analyzed statistically. The parameters describing foam bubbles should include not only the average bubble diameter but also the diameters' coefficient of variation [4].

Table 1. Foam bubble parameters in the particle analyzer.
- Circumference: the linear distance around the foam bubble's boundary.
- Area: the quantity that expresses the extent of a two-dimensional figure.
- Aspect ratio: the ratio of the short axis to the long axis (always <1).
- Long axis: the longest line segment that passes through the center of the circular or polygonal foam bubble.
- Short axis: the shortest line segment that passes through the center of the circular or polygonal foam bubble.
- Diameter of foam: any straight line segment that passes through the center of the circular or polygonal foam bubble; for a non-circular bubble, the diameter is the average diameter of the equivalent circle.
- Coefficient of variation: the ratio of the standard deviation to the mean of the bubbles' average diameter; a standardized measure of dispersion of a frequency distribution.
- Foam number per unit area: the number of foam bubbles per unit area in the foam image; this number changes as the foam evolves.

Based on the above results, the analytical system for foam structure is composed of a stereo microscope and a particle analyzer, and a special foam receiver with a vertical observation window had to be designed. A sketch of the bubble-parameter calculations follows at the end of this paper.

Foam receiver with vertical observation window

Considering that this receiver might be used under heat radiation, the vertical transparent glass had to be chosen carefully, and quartz glass was selected (its physical parameters include high electrical insulation even in a high-temperature environment). The foam receiver's height was chosen to be suitable, and cushioning was added to the drainage receiver for buffering. The foam receiver consists of six kinds of parts: four sides of vertical transparent glass, four slopes, one spiral joint for the drainage receiver, four supports, one base, and one drainage receiver with cushioning. The other parts, including the slopes, base, and supports, were produced using stainless steel. To obtain clear images, a monocular stereo microscope with a universal support was introduced (2048 × 1536). The vertical analyzer system was built up from the foam receiver and the monocular microscope. The height of the universal support is 400 mm.

Application of the vertical analyzer system

There are different types of fire-fighting foam agents, such as AFFF and P [5]. AFFF is an aqueous film-forming foam extinguishing agent, which can control fire by both foam bubbles and an aqueous film. P is a protein agent, derived from animal hooves or hair. After generation, the vertical foam structure could be observed using the vertical analyzer system [6], and the differences could be compared intuitively, as shown in Table 4. In these data, the coefficients of variation were quite variable. The lower the coefficient of variation, the more stable the foam structure. P was more stable than AFFF, which means P had a longer drainage time according to the foam bubble parameters [7]. P's stability is caused by the protein scaffold, whose structure can keep the shape of the foam bubble. P therefore had a longer burnback time compared with AFFF and other fire-fighting foams.

Conclusion

In this paper, a vertical analyzer system was built up, composed of a foam receiver and a monocular microscope. This system could be used to compare the vertical structure parameters of AFFF and P foams. In the future, foam structure evolution under heat radiation will be studied systematically. This work contributes to fire-fighting foam mechanism research and to improving foams' extinguishing effect [8].
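To make the bubble parameters of Table 1 concrete, here is a minimal Python sketch that derives the equivalent-circle diameter, aspect ratio, coefficient of variation, and bubble count per unit area from hypothetical per-bubble measurements. It illustrates the definitions only; it is not the particle analyzer's actual implementation, and the frame size is an assumption.

```python
import math
import statistics

# Hypothetical per-bubble measurements from one video frame:
# (area in mm^2, long axis in mm, short axis in mm).
bubbles = [(0.80, 1.10, 0.92), (0.55, 0.95, 0.74), (1.20, 1.40, 1.09)]

diameters = []
for area, long_axis, short_axis in bubbles:
    aspect_ratio = short_axis / long_axis           # < 1 by definition
    eq_diameter = 2 * math.sqrt(area / math.pi)     # equivalent-circle diameter
    diameters.append(eq_diameter)
    print(f"aspect ratio={aspect_ratio:.2f}, diameter={eq_diameter:.2f} mm")

mean_d = statistics.mean(diameters)
cv = statistics.stdev(diameters) / mean_d  # coefficient of variation
print(f"mean diameter={mean_d:.2f} mm, CV={cv:.2f}")

# Bubble count per unit area for an assumed 10 mm x 10 mm frame.
frame_area = 10 * 10  # mm^2
print(f"bubbles per mm^2: {len(bubbles) / frame_area:.3f}")
```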
'Won't Somebody Please Think of the Children?' Hate Speech, Harm, and Childhood

Some authors claim that hate speech plays a key role in perpetuating unjust social hierarchy. One prima facie plausible hypothesis about how this occurs is that hate speech has a pernicious influence on the attitudes of children. Here I argue that this hypothesis has an important part to play in the formulation of an especially robust case for general legal prohibitions on hate speech. If our account of the mechanism via which hate speech effects its harms is built around claims about hate speech's influence on children, then we will be better placed to acquire evidence that demonstrates the processes posited in our account, and better placed to ascribe responsibility for these harms to individuals who engage in hate speech. I briefly suggest some policy implications that come with developing an account of the harm of hate speech along these lines.

…that many of them agree that communication is an important factor. And this shouldn't be surprising. Philosophical inquiry often focuses our attention on the subtleties of language, and there is a temptation in this mode of inquiry to assign language a position at the explanatory center of everything. In short, plenty of philosophers are fellow travellers with progressives who propose that communication plays a major role in sustaining de facto social hierarchies.

Although we should treat this view seriously, we should be wary of any over-confident or hyperbolic characterization of the causal role that communication plays in de facto social hierarchy. The forces that underpin social hierarchies are of course enormously complex. It is hard enough to explain how the major economic elements of a social order function: trade, jobs, housing, and the organisation of business and government. Institutional practices (e.g. in policing, the courts, schools, and workplaces) further complicate the picture, and there are also private domains in which social arrangements are largely shaped by informal norms and customs. With these other factors in view, the cautious position would be to say that social hierarchies aren't sustained by communicative behaviors as such, but by an interlocking network of policies, institutions, and material conditions, which advantage some people and groups over others. But then, given that sort of picture, there is room for doubt about whether speech is making any distinctive and significant causal contribution to social hierarchy. After all, it may be that when people say racist things, for instance, this is largely an epiphenomenal symptom of a deeper racist social order, whose etiology lies in other material and institutional forces. 9 And in response to this it isn't enough to simply assert that speech is doing the work. Our account becomes speculative past the point of credibility if it suggests that words can create social realities ex nihilo. We need hypotheses to specify how, exactly, speech might be an operative factor in these causal systems, and then credible evidence to substantiate those hypotheses.

Among the different kinds of speech that might be seen as contributing to social hierarchy, one that is often singled out for attention is speech that overtly expresses contempt or disdain towards people on the basis of their social group, e.g.
speech that essentializes groups with negative traits ('all Muslims are terrorists'), or uses slurs or dehumanizing terms to convey a view of the group as disgusting, evil, or in some other way of lesser status. Like many others, I will narrow my inquiry to this class of communicative conduct. And I will use the term 'hate speech' to refer to it. 10

In narrowing my inquiry like this, I don't mean to deny that other forms of communication, apart from overt hate speech - like everyday chatter in the home and workplace, or the stereotyping of groups in the mainstream media - might play an important causal role in sustaining social hierarchies. 11 Indeed, given that these other forms of communication are in certain ways more ubiquitous and less avoidable than hate speech, and sometimes more subtle in encoding identity-prejudicial views, they may sustain de facto social hierarchies in ways that overt hate speech doesn't or couldn't replicate. Nevertheless, much of the work that has been done by philosophers and legal theorists around this topic focuses on the effect of more overt forms of identity-prejudicial communication, and that is where I will focus too. More specifically, I want to examine the hypothesis that hate speech contributes to identity-based social hierarchies by influencing children to support or accept those hierarchies. This hypothesis isn't entirely novel (see §II.A). What I am trying to do here is to build on lines of inquiry that are suggested in the work of other authors, by identifying certain merits in this kind of hypothesis that aren't fully recognized in the literature on this topic. I should acknowledge, at the outset, that imploring others to 'think of the children' can sometimes just be cheap, emotive bluster. 12

[10] In this definition I am following James Weinstein and Ivan Hare. 'In its purest form', they say, 'hate speech is simply expression which articulates hatred for another individual or group, usually based on a characteristic (such as race) which is perceived to be shared by members of the target group'; see 'General introduction: free speech, democracy, and the suppression of extreme speech past and present', in I. Hare and J. Weinstein (eds.), Extreme Speech and Democracy (Oxford: Oxford University Press, 2009), pp. 1-7, 4. Some authors define 'hate speech' in a way that also emphasizes the feelings that certain speech characteristically elicits, and not just the feelings it expresses; e.g. Rae Langton, 'The authority of hate speech', forthcoming in Oxford Studies in Philosophy of Law, Vol. 3. Alexander Brown has recently argued that, on the understanding of the term 'hate speech' that is acquiring popular currency beyond legal discourse, hatred needn't be involved in hate speech in any respect, either in the attitudes it expresses or elicits; 'What is hate speech? Part 1: the myth of hate', Law and Philosophy 36(4) (2017): 419-68. Nevertheless, I take it that the class of communicative acts picked out by the definition that I have given merits attention in its own right, in part because its members are the paradigmatic instances of the kinds of speech that are regulated by the kinds of laws customarily identified as 'hate speech regulations'; for an overview of these, see Ivan Hare and James Weinstein (eds.), Extreme Speech and Democracy.
But while it is important to be mindful of this concern, we shouldn't dismiss a prima facie plausible hypothesis about the role of communicative factors in social hierarchy just because of its superficial similarity with moralistic rhetoric. It is reasonable to 'think of the children' in a discussion about the harm of hate speech, as long as we proceed cautiously.

The merits of focusing on hate speech's influence on children don't really come into play if our question is just whether some instances of hate speech are harmful to particular individuals. The answer to that is uncontroversial. Token instances of speech that expresses contempt towards people on the basis of their social group can be used to harass, threaten, and incite violence. We don't need to be specially convinced that these instances of hate speech are harmful, or that there is an in-principle justification for legally restricting them. 13 There is room for debate about what the right regulatory approach is in this area, e.g. whether we should have customized restrictions on hate-speech-as-harassment, or rely on generic anti-harassment laws. But notwithstanding these open questions, the real controversy over hate speech - and the controversy to which my discussion in this paper is addressed - is not about whether hate speech is harmful in specific instances, for people whom it is used to directly and personally attack. The controversy, rather, is about whether all instances of hate speech are implicated in harming others, in a way that would give us an in-principle justification for what I will call BANS, i.e. general legal prohibitions on hate speech, which apply irrespective of whether the targeted speech is being used to harass, incite violence, or in any other direct way threaten or harm people. 14

It is true that in focusing on the case for BANS we are setting a high bar for opponents of hate speech. One could argue that we have grounds for thinking hate speech makes some contribution to social hierarchy - one which justifies some form of legal response, like anti-discrimination laws that disallow hate speech in workplaces - while at the same time believing we lack the evidence we would need in order to assert that all hate speech is harmful in a manner that would justify BANS. Still, the question I intend to explore here is what it would take to satisfy that more demanding standard of justification. And this is part of what makes the focus on children relevant.

[12] Indeed, such rhetoric can be used to stir moral panics; see Marjorie Heins, Not in Front of the Children: 'Indecency', Censorship, and the Innocence of Youth (New Brunswick: Rutgers University Press, 2007). This is lampooned in episodes of The Simpsons where the shallowly pious Helen Lovejoy cries 'won't somebody please think of the children'.

[13] Granted, this hasn't always been true. One contribution of critical race scholarship on hate speech has been to create a wider recognition of the threatening and harassing power of hate speech; see in particular the seminal collection Mari J. Matsuda, Charles R. Lawrence, Richard Delgado, and Kimberlé Williams Crenshaw, Words That Wound: Critical Race Theory, Assaultive Speech, and the First Amendment (Boulder: Westview Press, 1993).
If we are aiming to develop an evidentially-supported defense of the thesis that hate speech plays a significant causal role in sustaining social hierarchies - one with the potential to underwrite an in-principle justification for BANS - the hypothesis most likely to realize this aim is one focusing on hate speech's influence on children. Or so I will argue.

The rest of the paper is organized as follows. In §II I survey some of the existing work on hate speech's influence on children, and I discuss three conditions that an account of hate speech's harm needs to meet in order to provide an in-principle justification for BANS. First, it should explain how all instances of hate speech make a contribution to the postulated harm. Second, it should be the kind of account for which it is possible in principle to acquire evidence that substantiates the key claims about how this contribution occurs. And third, it should explain how the person who engages in hate speech, i.e. the 'hate speaker', can justifiably be ascribed responsibility for the harm. In §III I discuss the advantages of focusing on hate speech's influence on children when formulating an account of its harm, in a way that links up with these three conditions. I conclude in §IV by sketching some of the policy implications that may follow if we account for hate speech's harm in a way that emphasizes its influence on children.

II. WHAT IS NEEDED IN AN ACCOUNT OF HATE SPEECH'S HARM?

In scholarly writing that examines identity-prejudicial communication and the case for its regulation, we find a number of passing comments on the negative impact that such communication can have on children. 15 There are only a few authors, though, who explicitly claim that hate speech contributes to social hierarchy specifically through its influence on children.

A. Existing Work on Hate Speech and Childhood

Delgado and Stefancic make this claim in Understanding Words that Wound, a text that digests the key ideas in discussions about hate speech from critical race theory. They devote a chapter to hate speech's bad influence on children, arguing that 'much of the blame' for feelings of inferiority among minority groups 'rests with the words and names children are exposed to while growing up', and that since children have fewer coping mechanisms than adults, they are 'particularly susceptible to the wounds words can inflict'. 16 But the evidence cited to support all this is not entirely convincing. For instance, when Delgado and Stefancic say that much of the blame for feelings of racial inferiority lies with the words and names children are exposed to while growing up, they cite Delgado's claim that minority children 'question their competence, intelligence, and worth' primarily because 'they constantly hear racist messages'. 17 The supporting citations for this come from classic social scientific texts on racial prejudice from the mid-20th century, by Robert Redfield, Gordon Allport, and Mary Goodman, and the evidence in these texts just indicates that racist communicative practices are one factor among others in generating racial stigma, not something to which 'much of the blame' can be assigned. 18

Cortese is another scholar who devotes particular attention to the effect of hate speech on children. Drawing on Piaget's developmental psychology, Cortese claims that progress in the child's cognitive development consists in the expansion of her ability to construct a 'world-image' through sympathetically identifying with other people's perspectives. 19 On this picture, in-group and out-group associations are 'hardwired' in our cognition, as a consequence of this developmental process. And Cortese's contention, then, is that hate speech impairs the processes involved in the child's 'socioemotional' development, in a way that ultimately leads to identity-prejudicial attitudes in adulthood, which in turn contributes to the perpetuation of identity-based social hierarchy. 20 Again, though, the evidence used to support these claims about the influence of hate speech is unpersuasive. The empirical studies that Cortese cites indicate that prejudicial attitudes manifest in children's thoughts at alarmingly young ages. But evidence of these effects doesn't substantiate his key claim that hate speech is responsible for them. Cortese's attempt to link questions about hate speech's effects…

[15] In his 1962 Presidential address to the American Political Science Association, Charles S. Hyneman derided the marketplace of ideas ethos in First Amendment theory, and said he couldn't see 'why there is so little support… for governmental action designed to lessen or prevent the indoctrination of children' into racist views; see 'Free speech: at what price?', The American Political Science Review 56(4) (1962): 847-52, 849. More recently, in arguing that pornography subordinates women, Rae Langton suggests that a key issue is whether pornography 'is authoritative for… the fifty percent of boys who 'think it is okay for a man to rape a woman if he is sexually aroused by her''; see 'Speech acts and unspeakable acts', Philosophy & Public Affairs 22(4) (1993): 293-330, 311-12, my emphases. She also discusses pornography's influence on children by examining the findings of the 2013 Report of the UK Office of the Children's Commissioner into adolescents' views about consent; see 'Is pornography like the law?', in M. Mikkola (ed.), Beyond Speech: Pornography and Analytic Feminist Philosophy (Oxford: Oxford University Press, 2017), pp. 23-38. Jeremy Waldron is another influential scholar who alludes to the impact of identity-prejudicial speech on children. He opens his book on the subject with the following story. 'A man out walking with his seven-year-old son and his ten-year-old daughter turns a corner on a city street in New Jersey and is confronted with a sign. It says: 'Muslims and 9/11! Don't serve them, don't speak to them, and don't let them in'. The daughter says, 'What does it mean, papa?' Her father, who is a Muslim… doesn't know what to say'; see The Harm in Hate Speech, p. 1. For Waldron, such episodes reveal hate speech's raison d'être, which is to ensure that 'for the father walking with his children… there will be no knowing when they will be confronted by one of these signs'; see ibid: 3.

[17] Richard Delgado, 'Words that wound: a tort action for racial insults, epithets, and name-calling', Harvard Civil Rights-Civil Liberties Law Review 17(1) (1982): 133-82, 146.

[18] For instance, the text from Allport that they cite says that 'plural causation is the primary lesson we wish to teach', and that economic exploitation and social structure are important contributors to prejudice, and also that it is 'a serious error to ascribe prejudice and discrimination to any single taproot'; Gordon W. Allport, The Nature of Prejudice (Reading, Massachusetts: Addison-Wesley, 1954).

B. Causing and Contributing to Harm

As well as the limitations in the evidence they cite, the accounts discussed above are partly orthogonal to our purposes, since they are partly concerned with individual-level harmful effects of particular instances of hate speech, which, as explained in §I, will not be the focus of our inquiry here. But having said that, the authors cited above are right to develop an account of hate speech's harm in a way that is responsive to empirical research in this area. And for reasons that I will present and discuss in §III, these authors are also right to focus on hate speech's impact on children. It is worth doing a little more background theoretical work, however, to clarify exactly what is needed from an account of hate speech's harm, if it is going to be able to provide an in-principle justification for BANS. Having clarified these criteria, we can use them to identify some of the advantages of emphasizing hate speech's effect on children.

In this I will be treating the harm principle as a necessary (but not sufficient) condition on the permissible prohibition of speech. The state cannot use the coercive apparatus of the law to prohibit speech if its aim is to edify the speaker, or penalize bad speech per se, or send a message condemning certain speech. If the state is going to prohibit hate speech, it owes us (and the speaker) a rationale based on the ultimate aim of preventing harms to others. 22 How might one substantiate the claim, then, that all instances of hate speech are harmful in a way that suffices to justify BANS - even hate speech that isn't used to threaten, harass, or incite violence, and thus isn't directly responsible for any particular harm being done to any particular individual? The kind of claim one must defend here is one that says all instances of public hate speech make a contribution to some general state of affairs that is the cause of concrete harms to particular individuals. Feinberg proposes a theoretical framework to distinguish this kind of environmentally-mediated harm from harm that is directly inflicted by particular acts.

[21] In a similar vein to the above, Brown surveys a range of claims made about the harms of hate speech, including in critical race theory, and finds that in several cases authors cite empirical studies which they say show that hate speech harms its targets, when in fact the cited studies show that 'discriminatory treatment' is the relevant causal factor, and don't license the inference that hate speech in particular (as a specific form of discriminatory treatment) is responsible for the relevant harms; see Hate Speech Law: A Philosophical Examination (New York: Routledge, 2015), pp. 56-58. Further critical discussion can be found in Heinze (see Hate Speech and Democratic Citizenship, pp. 125-29) of how legal theoretic work in this area represents the findings of empirical research on the effects of hate speech.

[22] Sunstein suggests that the prohibition of hate speech might be defended in terms of its 'expressive' function, the idea being that such laws send a message condemning the attitudes of the hate speaker; Cass R. Sunstein, 'On the expressive function of the law', University of Pennsylvania Law Review 144(5) (1996): 2021-53. In assuming the harm principle I am ruling out any such justification for BANS.
Where a 'private harm principle' only allows prohibitions on directly harmful acts, he says, a 'public harm principle' would permit prohibitions on acts whose restriction is necessary 'to prevent impairment of institutional practices and regulatory systems that are in the public interest'. 23 Tax evasion and contempt of court are Feinberg's paradigmatic examples of acts whose harmfulness is more aptly characterized by way of this public harm principle. Even though isolated instances of tax evasion don't directly harm anyone, they are still genuinely harmful, he says, 'insofar as they weaken public institutions in whose health we all have a stake'. 24

Joshua Cohen describes how environmentally-mediated harms may be effected by speech in particular. Speech, he says, 'may help to constitute a degraded, sickening, embarrassing, humiliating… or demeaning environment', and when this is the case, although we cannot 'trace particular harmful or injurious consequences to particular acts of expression that… constitute the unfavorable environment', we can judge that individual speech acts are contributing to the social environment's degradation, and that specifiable harms are resulting from this. 25 This is the kind of account of hate speech's harm that one needs to provide in order to defend BANS: one on which all instances of hate speech are contributing to a social environment that harms people in targeted groups, or that weakens institutions protecting their interests.

The type of environmentally-mediated harms that I am especially concerned with, as I explained in §I, are those constituted by occupying a subordinate position in an identity-based social hierarchy, e.g. those that come with systematic disadvantages in resources, labor conditions, or social opportunities. Another class of environmentally-mediated harms that one could focus on would be those fostered by a 'climate of hatred' towards a group, 'associated with an increased chance of acts of discrimination, violence, [and] damage to property'. 26 Granted, the two kinds of harms may be causally interrelated, insofar as climates of hatred can be produced by de facto social hierarchies, and can reinforce those hierarchies in turn. 27 But there is a particular set of complexities associated with harms borne of a climate of hatred that I don't want my account to inherit - specifically, complexities in how we conceive of the relations of causation and responsibility that obtain between a speaker fueling a climate of hatred, and someone under that climate performing an act of violence or discrimination against a particular victim. 28 By contrast, the harms generated by social hierarchy per se are harms for which, typically, there is no actor whose conduct is the proximate cause of the harm - in other words, they are harms that are necessarily conceived of as structural rather than agential. 29 The kind of account of hate speech's harm whose prospects I want to focus on, then, is one where the hate speaker is culpable for causally contributing to a social order in which such structural harms are effected.

[26] See Brown, Hate Speech Law, p. 67.

[27] E.g. see Cecilia L. Ridgeway's account of how, in a social order where one group gains a positional advantage over another, recognition of the initially accidental disparity will be transformed over time into a set of beliefs about the inferiority of the disadvantaged group; 'The emergence of status beliefs: from structural inequality to legitimizing ideology', in J. T. Jost and B. Major (eds.), The Psychology of Legitimacy: Emerging Perspectives on Ideology, Justice, and Inter-group Relations (Cambridge: Cambridge University Press, 2001), pp. 257-77.

[28] Specifically, as Brown says, to substantiate this kind of rationale we won't just need an evidentially-supported account of how hate speech contributes to the degraded social environment, but also evidence that the 'climate of hatred' makes proximately harmful acts (e.g. violence) against members of target groups likely and imminent. There are reasons to doubt that the climate-to-act causal pathways operate so straightforwardly, and practical difficulties in conducting studies that would demonstrate these pathways if they were in effect; Brown, Hate Speech Law, 68-70.

[29] In structurally harmful social hierarchies, as Iris Marion Young puts it, 'in most cases it is not possible to trace which specific actions of which specific agents cause which specific parts of the structural… outcomes'; Political Responsibility and Structural Injustice (Kansas: The University of Kansas, 2003), p. 7.

The idea isn't that the harms are ultimately done to some amorphous entity, e.g. to society per se, or to a social group abstractly conceived. The environmentally-mediated harms regulated by a public harm principle still redound to individuals. What's distinctive about them (and prevents their regulation under a private harm principle) is that there is no direct causal link between perpetrator and victim. Individual actors contribute to system-effects, and people's interests are wrongfully set back by those system-effects, but in a way such that we typically cannot attribute specific effects to specific actors.

C. Evidential Support

A harm-based rationale for BANS should be backed by evidence that supports its claims about the causal processes through which hate speech contributes to this kind of social order. Even if it isn't the legal theorist's job to supply the data, she should not be indifferent to whether and how evidence may be adduced in support of her conjectures. After all, as discussed in §I, there are other credible hypotheses about the principal causal forces behind de facto social hierarchy. It is easy to imagine hate speech as the culprit, because it is the conspicuous facade of identity-prejudice. It is also an expressive practice that is often the province of low-status speakers, who are less skilled than elites in finessing their expression to avoid the infringement of mainstream expressive customs. We can see that deep, structural changes in employment and social mobility would have a major impact on identity-based social hierarchies. But these reforms are hard to achieve. Restricting hate speech is easier, partly due to the limited social capital of the people BANS typically penalize. But BANS would be patently illegitimate if they were essentially an exercise in expedient scapegoating. For all these reasons, advocates of BANS should not be content with a plausible-sounding just-so story about how hate speech is playing a role in the perpetuation of social hierarchy. They should want their story to be backed up by evidence. 30

There are ways of conceptualizing the harm that hate speech inflicts that can notionally sidestep this demand for empirical support. Some authors appeal to conceptions of harm on which hate speech doesn't cause, but rather constitutes, a harm to its targets.
While there is a case to be made for this approach, it also has limitations, insofar as it makes the truth of claims about hate speech's harm primarily hinge on esoteric normative and social-theoretic theses, which lie outside the sphere of empirical arbitration. Complex problems of legitimacy arise if our justification for BANS appeals to an understanding of harm-infliction which rests on philosophical conjectures that many in the polity reject (or would reject, if the question arose). By contrast, if our claims about the harmfulness of hate speech are based on the results of the application of widely-accepted methods for assessing the impact of different factors on people's welfare and interests (defined in terms that are standard to mature social scientific disciplines), then we will have an especially robust justification for BANS - the kind that anyone should recognize as legitimate, in principle, on pain of irrationality or general scepticism. The call for evidential support, then, in debates about BANS, is at least in part about seeing whether this decisive type of justification for BANS is in the offing. This need not be motivated by the prioritization of expressive liberty above all other considerations, as some authors suggest. 31

One might argue that the burden of proof should be reversed, such that opponents of BANS have to provide evidence for the view that hate speech doesn't harm or endanger its targets. 32 This sort of precautionary approach gains prima facie plausibility from the historical record of cases where hate speech seems to have helped to fuel genocidal movements. If we have reason to think that hate speech can contribute to catastrophic harms, in contexts where identity-prejudice gives way to murderous atrocities, then we arguably also have reason to think it can contribute to the routine harms associated with identity-based hierarchies in relatively stable societies. 33 On the other hand, there are several difficulties with the appeal to precautionary justifications in this area. Precautionary laws don't eliminate risk as such, they just trade one set of risks (the risks of legal inaction) for another set. 34 And in a liberal legal system in particular, there will be a presumptive opposition to precautionary laws that infringe against basic civil rights, insofar as such laws themselves run the risk of allowing specious assertions about imminent dangers to erode the rule of law and usher in authoritarianism. 35 At any rate, whatever the best general defense of precautionary principles might amount to, a preventive rationale - that adverts to the imminent aim of redressing the empirically demonstrated harms of hate speech - gives us a stronger case for BANS than a precautionary rationale. Once again, the aim here is to explore whether that kind of particularly robust justification for BANS is in the offing.

[30] There is a considerable, but disciplinarily disparate, body of empirical research investigating the effects of hate speech, and it is unclear what (if any) sort of confident conclusions can be drawn from it about hate speech's distinctive role in perpetuating identity-based social hierarchies. One important recent cluster of papers on this topic comes from a collaborative project between law and political theory on the effects of hate speech and its regulation in Australia; see Katharine Gelber and Luke

[31] E.g. Waldron, The Harm in Hate Speech, 148.

[32] Brown indicates some sympathy for a precautionary approach, e.g. he says an authority may impose restrictions on certain instances of hate speech, 'because having identified the possibility… that a proportion of the individuals targeted by hate speech will not participate in the formation of public opinion, and bearing in mind the conditions of uncertainty that surround these outcomes, it errs on the side of precaution'; Brown, Hate Speech Law, 199.

[33] For discussion of hate speech's role in genocidal social movements, see Lynne Tirrell, 'Genocidal language games', in I. Maitra and M. K. McGowan (eds.), Speech and Harm: Controversies over Free Speech (Oxford: Oxford University Press, 2012), pp. 174-221. For discussion of how, even in stable democratic societies, hate speech may contribute to a kind of 'slow-burn' incitement of anti-democratic movements against marginalized groups, see e.g. Tsesis, 'Dignity and speech'. Whether we can learn something about hate speech's likely impact in stable democracies, from observing its involvement in genocidal movements elsewhere, is a complex question in its own right. One of Heinze's core theses in Hate Speech and Democratic Citizenship is that we cannot make ready inferences across this divide.

[35] Just as historical cases can be cited to emphasize the risks of legal inaction, they can also be cited to identify risks associated with infringing civil rights for the sake of addressing first-order risks, e.g. see Geoffrey R. Stone, Perilous Times: Free Speech in Wartime from the Sedition Act of 1798 to the War on Terrorism (New York: W. W. Norton, 2004).

D. Responsibility for Harm

If one is seeking to defend BANS, the story one tells about the harm done by hate speech needs to be one on which the hate speaker can be ascribed responsibility for the harm. An example will help to convey what I have in mind. Suppose someone were to argue as follows. Racial hierarchies, in-groups and out-groups, us and them: our penchant for social taxonomies and rankings reflects the structure of language itself. Identity-prejudice is not due to particular speech acts, but to the underlying grammars and vocabularies that frame all verbal communication. Language prefigures the discriminations that define social cognition, opening us up to some people and closing us off to others. 36

This picture identifies speech as responsible for causally contributing to de facto social hierarchy, but it does so in a way which suggests that social hierarchy cannot be meaningfully combatted by trying to single out and counteract the effect of any particular speech acts. The problem with this picture is not the fact that it sees speech as contributing to harms that are structural, indirect, or environmentally mediated. As I explained in §II.B, this is precisely the type of account of hate speech's harm that we are looking to develop. The problem here, rather, in the characterization of the mechanism through which communication is harmful, is that it doesn't enable us to discriminate between instances of communication that are contributing to harm, and instances whose effects are benign or positive.

[36] Although I'm not attributing this sketch of a position to anyone, the kind of linguistic constructivism that underpins it appears in some of Charles Taylor's work; e.g. see 'Theories of meaning', in Human Agency and Language: Philosophical Papers 1 (Cambridge: Cambridge University Press, 1985), pp. 248-92, 263.
On this picture all language-users collaborate in sustaining social hierarchy, and there is little any of us can do to resist this. In order to underwrite a credible justification for BANS, an account of hate speech's contribution to the structural harms of social hierarchy cannot have this form. If we are going to punish individual hate speakers, we need reason to think that they are responsible for making a distinctive contribution to the harms of social hierarchy, one that is different from - and more important than - the contribution made by the listener or by the citizenry at large. There is little justificatory payoff in an account that ultimately depicts hate speakers as flotsam and jetsam, drifting around in a sea of deeper forces that are the real drivers of social inequality.

We also need our account to support the notion that, where the hate speaker contributes to harmful outcomes by influencing other people, he can still be rightfully conceived of as responsible for contributing to the relevant harms. At the same time, however, the account we give must not have the upshot that in any context in which B is influenced by A into φ-ing, A can be deemed responsible for contributing to whatever ensues from B's φ-ing. If our account of responsibility for contributory harm was that inclusive - in societies like ours, which have complex, multidirectional networks of crosscutting influences - it would too easily break down into an implausible picture, on which everyone who speaks in public ends up being partly responsible for contributing to a vast range of downstream consequences. In short, in seeking evidence of the hate speaker's contribution to a harmful social order, we need to be able to understand the speaker's responsibility for that contribution in a way such that he is at least partly accountable for actions performed by the people he influences, but without making it the case that responsibility for harms which result from influencing others comes too cheaply.

III. HATE SPEECH AND CHILDREN: EVIDENCE AND RESPONSIBILITY

I have argued that an account of hate speech's harm must explain how all instances of hate speech contribute to a harmful state of affairs, and be amenable to empirical confirmation of its claims about these effects, and show how hate speakers can be understood as responsible for the relevant harms. In this section I explain the advantages of focusing on children in developing an account of hate speech's harms that has the potential to satisfy these conditions.

A. The Legitimation and Normalization Hypothesis

As I explained at the outset, in §I, our hypothesis about how hate speech contributes to unjust de facto social hierarchies, and their resultant harms for members of targeted groups, must not downplay historical and material factors. Our account becomes speculative past the point of credibility if it suggests that words can conjure up social realities ex nihilo. With this constraint in mind, the most credible type of hypothesis about hate speech's contribution to social hierarchy is one which posits that hate speech legitimates and normalizes social hierarchies. Several authors appeal to something like this in their discussion of the relationship between speech and social hierarchy. In Matsuda's ground-breaking work on hate speech, she says the power of racist groups 'derives from their offering legitimation and justification for otherwise socially unacceptable emotions of hate, fear, and aggression'. 37
Parekh says identity-based social hierarchy is 'legitimized by a wider moral climate which is built up and sustained by… gratuitously disparaging and offensive remarks'. 38 Among MacKinnon's charges against pornography, she says that it 'authorizes and legitimizes' sexual abuse. 39 And on Langton's view, though pornographers lack formal authority, they legitimate discrimination against women by representing women's subordinate social position as 'ordinary and normal'. 40 If we are employing this critical vocabulary in order to describe hate speech's contribution to a harmful social structure, then an initial version of our hypothesis might be stated as follows.

The LAN (i.e. Legitimation and Normalization) Hypothesis: Hate speech causally contributes to the harms of de facto social hierarchies by legitimating and normalizing systematic material and institutional inequalities that track social identity categories.

What exactly does it mean to say that hate speech legitimates and normalizes material and institutional inequalities? In particular, how are social facts about what is legitimate affected by hate speech, if most hate speech comes from people who lack any formal authority to impose norms for others? One possibility, proposed by Ishani Maitra, is that the hate speaker can acquire 'licensed authority'. Like the person who takes charge in a chaotic social situation and finds that others fall into line behind his leadership, the hate speaker can assume a kind of de facto authority, due to his contingent situational influence rather than any recognized positional authority. 41 It is unclear, though, whether it would make sense to say that hate speech 'legitimates' social hierarchy, if the hate speaker's 'authority' to influence the social facts about what is legitimate is reliant upon other people's voluntarily acceding to his leadership. 42

40 Rae Langton, 'Subordination, silence, and pornography's authority', in R. C. Post (ed.), Censorship and Silencing: Practices of Cultural Regulation (Los Angeles: Getty Research Institute for the History of Art and the Humanities, 1998), pp. 261-84, 269. Langton's work in this area is perhaps best known for the way that it uses Austin's speech act theory to explicate the claims that pornography subordinates and silences women. But another integral element of her work is its development of the concepts of legitimation and normalization, and the relations between them. As well as the above source, these elements are at work in her 2015 Locke Lectures on 'Accommodating Injustice' (see philosophy.ox.ac.uk/john-locke-lectures), and in other works of hers cited here, including 'Speech acts and unspeakable acts', 'Is pornography like the law?', and 'The authority of hate speech'. One aspect of Langton's development of these concepts is her emphasis on epistemic authority. The makers of pornography don't accidentally succeed in normalizing a picture of women as objects, and in causing people's beliefs to reflect that picture. Rather, she argues, pornography shapes the world in such a way that makes it (partly) true that women are what pornography represents them as; pornographers normalize and legitimate women's subordination, by expressing the epistemic authority they have as architects of a patriarchal social order, and authoritatively transmitting knowledge of their design to others. This aspect of her view is the focus in 'Speaker's freedom and maker's knowledge', in Sexual Solipsism, pp.
about what is legitimate can be altered by regular, low-status speakers, based on the idea that speakers can verbally enact 'conversational exercitives' which (in the first instance) alter what is proper and improper conduct within the particular conversation in which they are performed; e.g. see 'Oppressive speech'. But it is a further question whether this kind of account can be extended to explain how low-status speakers can alter legitimacy facts in a further-reaching way, which affects the whole hierarchical ordering of a society.

In light of this concern, we can see why it makes sense to conceive of legitimation and normalization as complementary processes. We are all subject to powerful de facto social norms that enjoin us to act in accordance with whatever practices and behaviors we understand to be descriptively normal in the context where we are acting. 43 The conjecture, then, will be that when a person engages in public hate speech, even if they do not possess any formal political authority, they can represent the subordinate status of their targets as being (descriptively) normal, and in so doing they can give identity-based de facto social hierarchies the appearance of (normative) legitimacy.

To be clear, in stating that the LAN Hypothesis is a credible one, I am not leaping to the conclusion that it is true. It remains for the hypothesis to be assessed in light of relevant data, and as I will explain in the next section, certain challenges are likely to arise in trying to acquire evidence that demonstrates the specific process that the hypothesis describes. Still, the hypothesis has two important features to recommend it. First, it assigns a distinct role to hate speech in sustaining de facto social hierarchy, but - crucially - without denying the primacy of the historical-material forces that underpin racial social structures, patriarchy, and other identity-based social hierarchies. I expect that almost no-one who is seriously engaged in these debates actually believes that hate speech summons racism or heteronormativity into existence out of thin air. But it is easy to add rhetorical flourishes in describing the causal powers of communication and, in so doing, downplay historical-material factors. It is easy to say things like 'words create the hierarchies and people fill them', 44 which are meant to highlight the harmful potential of communication, but which seemingly do this by attributing magical powers to speech. The LAN Hypothesis doesn't make the influence of hate speech a rival explanatory hypothesis to one that emphasizes the historical-material bases of social hierarchy. Rather, it posits that hate speech plays a key role in cementing the conditions that historical-material forces set in place.

The second advantageous feature of the LAN Hypothesis is that it posits the operation of a phenomenon that is widely recognized and which has been empirically observed, i.e. the phenomenon of normalization. Whether the phenomenon is actually in effect in this particular context - whether hate speech does in fact normalize identity-based social hierarchies - is a further empirical question.

42 Given this account of what is occurring, a more fitting characterization might be to say that the social hierarchy is being mutually enacted by speaker and audience together. For further discussion of these kinds of cases, see Saray Ayala and Nadya Vasilyeva, 'Responsibility for silence', forthcoming in Journal of Social Philosophy.
But the phenomenon of normalization itself isn't merely some speculative or conjectural critical concept. It is an empirically observed phenomenon, of interest to researchers in a number of empirical disciplines, including psychology and sociology. 45

Given that I defined hate speech in terms of disdain and contempt, one might note that the mere fact of someone expressing 'disdain' for a group wouldn't necessarily represent that group's subordination as either descriptively normal or normatively legitimate - at least, not for all values of 'disdain'. If we are proposing that hate speech legitimates social hierarchy, then, we need to interpret 'disdain' and 'contempt' for x as meaning something like 'seeing x as worthy of a subordinate position'. 46

All liberal democracies subscribe to some form of the doctrine that 'human beings are born free and equal in dignity and rights'. 47 We encode this in our legal systems in various ways, and in most liberal countries this enjoys majoritarian support. Any form of discriminatory, identity-based mistreatment should be straightforwardly recognizable as unjust in this kind of formally egalitarian social milieu. But hate speech impairs the recognition of this by making salient to its audience a representation of its targets as second-class beings, who are quite rightly assigned a subordinate social position. And all instances of public hate speech make a contribution to the salience of these derogatory group representations. In this way, so a proponent of the LAN Hypothesis would suggest, hate speech helps people to see the disadvantages faced by the target group as normal, natural, and legitimate, and it thus deters efforts at reforming the wider structural hierarchies that generate these disadvantages.

45 For starters, there is a large body of research on how norm-abiding behavior can be sustained by people attending to environmental cues that indicate other people's norm-adherence. One of the seminal papers on this topic is Robert B. Cialdini, Raymond R. Reno, and Carl A. Kallgren, 'A focus theory of normative conduct: recycling the concept of norms to reduce littering in public places', Journal of Personality and Social Psychology 58 (6)

B. Evidential Support, Children, and the LAN Hypothesis

As discussed in §II.C, a harm-based rationale for BANS should be backed by evidence that supports its claims about the causal processes through which hate speech contributes to the harms of social hierarchy. The natural question, then, is what evidence can be adduced in support of the LAN Hypothesis? However, for reasons that will become evident shortly, we actually need to delve into the adjacent - more theoretical - question, of what evidence could be unearthed and adduced in support of this hypothesis, if it were in fact true.

I argued that the LAN Hypothesis is credible because it is compatible with the highly plausible assumption that material and institutional factors have causal primacy in the creation and perpetuation of de facto social hierarchies. The LAN Hypothesis sees hate speech's role as bolstering those hierarchies by shaping people's attitudes in a way that favors them. But this picture of how the causal factors work together creates difficulties if we are trying to evidentially demonstrate hate speech's effects. If hate speech's influence follows on from (and interacts with) other more fundamental causal forces, then its distinctive effects will, in the normal run of cases, be difficult to isolate and detect.
Consider the everyday bigot, A, who mostly keeps her prejudiced views to herself, but is regularly exposed to hate speech in her daily life. We want to see whether there is any evidence that this exposure contributes to A's view of the racial inequalities in her society as normal and legitimate, as the LAN Hypothesis claims. But there is an important rival hypothesis in the background. A's entire life has been spent in a society ordered by innumerable forms of racial inequality. White people dominate the upper ranks in politics, business, law, academia, the arts, and the military. Among the various ways in which they are socially outranked by white people, black people generally achieve worse outcomes in education and in other proxies of intellectual ability. The complex historical and institutional forces that explain these patterns are beyond A's comprehension, and the persistence of this social order confers upon it the appearance of naturalness. By applying simplistic explanatory heuristics, A comes to believe that the best explanation for the inequalities that she observes and experiences in her society is one that attributes some kind of general inferiority in intellectual capacities to black people.

We can generalize from the uncertainty that this rival hypothesis creates. Any study that aims to gauge the influence of hate speech on adult subjects will have to examine individuals for whom this rival hypothesis would be a prima facie plausible explanation of how they came to regard identity-based social hierarchies as normal and legitimate. In order to control for the factors that are emphasized in this rival explanation, while gauging the influence of hate speech, we would have to screen out the conditioning influence of a whole life spent in the shadow of inequality. It may be relatively easy to devise studies to test whether behavioral manifestations of identity-prejudice can be activated through exposure to hate speech. But the evidencing of this effect wouldn't demonstrate that hate speech is essentially involved in the formation of identity-prejudice. When examining adult subjects who regard racial inequalities as normal and legitimate, there are many other factors besides hate speech that could plausibly be causally responsible for this, such that it is going to be hard to acquire clear evidential support for any hypothesis that purports to isolate the distinctive contribution of hate speech. The confounding factors to be controlled for are too many, and too much enmeshed with people's everyday experiences of living in societies like ours, to simply screen off.

But now: consider a modified version of the LAN Hypothesis, which adverts to hate speech's influence on children in particular.

The CLAN (i.e. Childhood Legitimation and Normalization) Hypothesis: Hate speech causally contributes to the harms of de facto social hierarchies by influencing children's attitudes in a way that legitimates and normalizes systematic material and institutional inequalities that track social identity categories.

Studies that could find evidence in support of the CLAN Hypothesis will be easier to devise and execute, in comparison to the LAN Hypothesis. It is verging on impossible to find adult experimental subjects who have been insulated from the wider social conditions that might lead one to think of de facto social hierarchies as normal, simply due to living in a society in which they are normal.
It will be easier to find children who have been insulated from their society's overall conditions like this. Especially at pre-school ages, some children live relatively cloistered lives, in which they don't see the material and institutional elements of identity-based social inequality, or indeed, in which they don't even encounter people from other social groups. Obviously this isn't true of all children. But it is true of some children, and they may in principle become subjects for studies aiming to isolate the influence of hate speech on people's attitudes.

For example, suppose we take a four-year-old, C4, living in an ethnically homogenous community, and under appropriately controlled conditions, we expose her to examples of hate speech against a social group, G, that she and her family have no interaction with and know little or nothing about. Suppose we then find, in follow-up tests weeks or months later, that C4 starts manifesting a pattern of negative attitudes towards members of G, e.g. she shows less distress at the mistreatment of Gs compared to members of other groups. In trying to explain this finding, there would be no reason to wonder whether C4's anti-G attitudes could be explained in terms of her attempts to independently interpret the patterns of power and disadvantage that she has been observing in a social system where Gs are structurally subordinated. Rather, we would attribute the change in C4's anti-G attitudes to the influence of hate speech, because there would be no other good explanation as to what altered her attitudes. If an accumulation of this kind of evidence were to indicate that children take on identity-prejudicial attitudes as a result of exposure to hate speech, this would provide evidence to support the hypothesis that hate speech makes a distinct causal contribution to the prevalence of attitudes, either conscious or unconscious, about the legitimacy and normality of identity-based social hierarchies.

The point that I am making here isn't premised on the implausible claim that in general, or in typical cases, hate speech is the main factor (or the only factor) which influences children towards accepting the normality or legitimacy of identity-based inequality. For one thing, some children will absorb identity-prejudice through exposure to relatively subtle manifestations of it in their family members, in words and deeds. And we should also allow that in most cases in which hate speech does play a key role in inculcating prejudice in a particular child, the child's attitudes, as she matures, will typically be reinforced by other factors. More generally, it seems plausible to suppose that for (most) children who acquire identity-prejudicial attitudes, hate speech will just be one contributing factor among others in this. My point here is about what kinds of things it is possible to learn from observing hate speech's effects on particular child subjects, which it is not possible (to all practical purposes) to learn from observing hate speech's effects on particular adult subjects. Individual child subjects are sometimes insulated in a way that adults cannot be from the other confounding causal factors that can influence people toward accepting the normality or legitimacy of identity-based inequalities. Because of this it will be easier with children than with adults to acquire evidence of any distinct influence that hate speech does have in normalizing and legitimating identity-based social inequalities. 48
C. Assigning Responsibility to the Hate Speaker

Here is a second reason why the modified CLAN Hypothesis is a better foundation on which to build a harm-based justification for BANS. Set aside my main contention in the previous section, for the sake of argument, and suppose we acquired evidence that provided a similar degree of empirical support for both the LAN and CLAN Hypotheses. With regard to the CLAN Hypothesis, we would have no reason to refrain from ascribing the hate speaker responsibility for the identity-prejudicial attitudes inculcated by her speech. If an adult influences a child's attitudes, the adult bears responsibility for this influence - both moral culpability, i.e. liability to be blamed, and legal responsibility, i.e. accountability for resultant harm - if anyone does. Of course there are complications and caveats around this, but in general, children are not responsible for being influenced by adults towards attitudes that result in harmful outcomes.

By contrast, when it comes to the LAN Hypothesis, the problem of how responsibility should be ascribed is much more complicated. And this is because adults are, outside of rare cases like brainwashing, responsible - that is to say, culpable, and where practical stakes are involved, accountable - for their attitudes, even when those attitudes causally stem from the influence of other people. If person A's communication has an influence on B's attitudes, and if B is a responsible agent, then in the normal run of cases, B - and not A - is culpable and accountable for bad consequences that result from B's attitudes. 49 Again, this is not the case with children. Children don't bear this kind of general responsibility for their mental lives and how they respond to the influence of others: at least, not in the same range of cases, nor to the same degree as adults. This is what we standardly suppose, at any rate, both in our informal ethical blaming practices, and as a matter of legal doctrine. 50

All of this is consistent with the point from §II.B, that A's contribution to an aggregate harm, x, suffices in principle to justify us in holding A legally accountable for x. The point being made here is that there is an exception to this general thesis about when we can ascribe responsibility to people whose conduct contributes to aggregate harms.

48 We can imagine cases in which a child is exposed to speech that expresses contempt for groups that don't occupy a subordinate position in a de facto hierarchy, e.g. in a hypothetical egalitarian social order where no groups occupy such a position, or in speech that is contemptuous towards privileged groups. In such cases, the speech's influence on the child wouldn't contribute to the kind of structural harms that I have been emphasizing. But it may still contribute to harmful outcomes, e.g. by influencing the child towards performing harmful acts, or by undermining the child's own self-respect. The fact that we are focusing on hate speech's contribution to structural harms, and how this might be involved in making a case for BANS, is consistent with thinking we might have other kinds of harm-based justifications for regulating hate speech in these other kinds of cases.
If the mechanism via which A makes her contribution to x is through communicative acts that influence another responsible agent, B, to behave in ways that lead to x, then A's contribution to x isn't sufficient to justify holding A accountable for x. 51

One might worry that judgements about causation and culpability are being run together in what I'm saying. Our interest in the causal origins of social hierarchy isn't only about who can be blamed. We also want to understand the causal processes at work, independently of who can be held accountable for them. Still, the disparities in responsibility between adults and children aren't only relevant here with regard to questions of assigning blame. They also have a bearing on how we characterize the causal character of these modes of influence. Imagine a case in which a teacher is indoctrinating his class of primary-school-aged children. Given the cognitive disparities, the children will have limited ability to resist the teacher's influence, or to influence his attitudes in turn. Because of this it is plausible to characterize the inculcation of attitudes in these children as a matter of a certain agent, the teacher, acting upon a group of patients, the children. By contrast, in a situation where a community of agents with broadly comparable cognitive abilities are engaged in ongoing communication with each other in a multidirectional network of cross-cutting influences, it generally isn't plausible to characterize this as a process of certain agents acting upon certain patients. The correspondence between this and the two contrasting versions of our hypotheses should be clear. The CLAN Hypothesis represents the harmful effects of hate speech in a manner such that if the hypothesis were substantiated, the hate speaker could straightforwardly be ascribed responsibility for causally contributing to the relevant harms; the LAN Hypothesis does not.

49 The idea here is that if you're a normal adult, 'you have a mind of your own', and are thus normally responsible for how you respond to other people's influence; see Thomas Nagel, 'Personal rights and public space', Philosophy & Public Affairs 24(2) (1995): 83-107, 96. This idea is integral to several influential accounts of the grounds of the right to free speech (e.g. Thomas Scanlon, 'A theory of freedom of expression', Philosophy & Public Affairs 1(2) (1972): 204-26); it is fully compatible with recognizing exceptional cases, e.g. of provocation, in which normal adults have diminished responsibility for how they are influenced by others (see L. W. Sumner, 'Incitement and the regulation of hate speech in Canada: a philosophical analysis', in I. Hare and J. Weinstein (eds), Extreme Speech and Democracy (Oxford: Oxford University Press, 2009), pp. 204-20, 215ff); and there are reasons to think its structuring role in free speech theory can be retained despite the general limitations in our control over our cognition (see Robert Mark Simpson, 'Intellectual agency and responsibility for belief in free speech theory', Legal Theory 19(3) (2013): 307-30).

50 Here is how the point was stated in a landmark contemporary U.S. Supreme Court case addressing issues around the capital punishment of adolescents. 'Developments in psychology and brain science continue to show fundamental differences between juvenile and adult minds. For example, parts of the brain involved in behavior control continue to mature through late adolescence… [Juveniles'] actions are less likely to be evidence of "irretrievably depraved character" than are the actions of adults.' See Roper v. Simmons, 543 U.S. 551 (2005), at 570.
Given how the LAN Hypothesis represents hate speech's contribution to the socially mediated harm, an attempt to pin responsibility for this harm onto the hate speaker will lead to an implausible over-attribution of responsibility, i.e. to nearly everyone involved in the wider social ecosystem, or else it will involve some sort of ad hoc confinement of responsibility to the hate speaker alone. These sorts of distinctions matter in the critical analysis of social hierarchy. There are differences between acts of subordination performed by particular agents, and processes of subordination that are structural - differences in the underlying causal mechanisms, and in how they can be counteracted. 52 In the way that they try to counteract identity-based social hierarchy, advocates of BANS aren't just recognizing and resisting structural injustices. They are trying to pinpoint and counteract a specific contribution to structural injustice, for which hate speech is allegedly responsible. If we are seeking to vindicate that critical project, the CLAN Hypothesis presents us with a more viable characterization than the LAN Hypothesis of the causal processes involved in the perpetuation of unjust social hierarchy and of hate speech's role in that perpetuation.

51 Evan Simpson presents an analysis of the case for regulating hate speech which emphasizes the responsibilities of listeners. On his view we can (or should be able to) expect listeners to be reasonable in how they respond to the influence of hate speakers, and, roughly, this consideration problematizes most kinds of legal regulation of hate speech; see 'Responsibilities for hateful speech', Legal Theory 12(2) (2006): 157-77.

IV. SUMMARY AND POLICY IMPLICATIONS

Many progressives believe that communicative factors contribute to de facto social hierarchy, and that hate speech plays an important role in this, in a way that can justify BANS, at least in principle. In order to substantiate these convictions and defend the restriction of hate speech - even instances of it that aren't used to threaten or harass particular people - we need evidence that shows how all hate speech contributes to the harms of social hierarchy, in a way such that hate speakers bear some responsibility for the harms. The most promising hypothesis, in seeking such evidence, is the CLAN Hypothesis: hate speech influences children's attitudes in a way that legitimates and normalizes identity-based inequalities. Hate speech may influence adults too, but it will be easier to acquire evidence of the mechanisms of influence, and to hold hate speakers responsible for the outcomes, if we focus on its effect on children.

In arguing for these merits of the CLAN Hypothesis, I don't mean to suggest that no other material, institutional, or communicative factors are involved in the inculcation of prejudicial attitudes in children besides the influence of hate speech. Our question was whether there is any distinctive contribution that hate speech might be making to the structural harms of identity-based social hierarchy, alongside whatever other causal factors are involved in this. If hate speech is in fact making such a contribution, and if we are looking for evidence of this in order to provide an especially robust justification for BANS, then the CLAN Hypothesis warrants particular attention. The restriction of hate speech is an established part of most liberal democratic legal systems outside the U.S.
Those who want to see this evidentially vindicated should be pursuing collaborative inquiry with empirical researchers. If hate speech does contribute to social hierarchy - all hate speech, even those instances of it that aren't used to harass, etc. - then systematic evidence of this should be attainable. Cortese and Delgado and Stefancic gesture in this direction, but there are limitations in the evidence they cite. Nevertheless, examining hate speech's influence on children, as these authors do, is a promising approach. There may be other kinds of arguments for BANS, built around the aim of preventing harm to children, besides the one that I have been exploring here. 53 However, the most credible case for restricting hate speech will be one that simultaneously substantiates the claim that hate speech contributes to identity-based social hierarchies without implausibly downplaying the causal primacy of material and institutional factors in underwriting social structures. In order to develop and substantiate that case, we should be asking whether there is evidence that exposure to hate speech impacts children's attitudes in a way that legitimates and normalizes identity-based inequality.

For all I have said, one could still defend a hard-line free-speech thesis, which would see BANS as illegitimate even if we did have an evidentially backed account of hate speech's harmful influence on children. I won't try to offer an assessment of this view of free speech here, except to register one point. If our rationale for restricting hate speech adverts to its influence on children, this allows us to sidestep some prominent free-speech-based objections to BANS. Consider the views of authors like Weinstein and Heinze, that freedom to engage in hate speech is entailed by an essential condition of democratic legitimacy; 54 or consider Baker's view, that 'the state only respects people's autonomy if it allows people… to express their own values', regardless of 'how this expressive content harms other people'. 55 These claims trade on the notion that it is illegitimate to impinge upon people's autonomy by trying to control what ideas they're exposed to. But this wouldn't always be a reason to oppose content-based restrictions on speech whose underlying justification was to limit what kinds of messages children are exposed to. In short, the kind of justification for restricting hate speech that I've been proposing is better placed than some other justifications to address some free-speech-based objections.

I will conclude by briefly considering what the policy implications might be if our primary justification for regulating hate speech is one that adverts to its malign influence on children. One set of issues will be about the policies governing social institutions involved in children's care, education, and socialization. We rightly expect these institutions to shield children from hate speech, by adults and by other children, although at higher levels we also expect social sciences and humanities education to enable children to intelligently reckon with the social reality of the attitudes animating hate speech. Beyond this, schools may be the only institutions with any hope of countering the influence of hate speech by parents to children in the home, and in a society committed to reforming identity-based social hierarchies, this is a pro tanto reason to think that overtly anti-discrimination values should structure not only the institutional culture of schools, but also certain parts of the curriculum.
So far as that is the case, it creates complications in religious educational institutions, in which various kinds of identity-based prejudices - of a sexist, homophobic, or aggressively religiously chauvinistic kind - are more likely to be unofficially tolerated or inculcated as part of children's religious instruction. A serious commitment to shielding children from the influence of hate speech shouldn't suddenly lapse when hate speech occurs under the banner of religious instruction. But equally, regulatory bodies policing these requirements should strive for nuance and contextual sensitivity in deciding exactly where the avowals of a devout conscience shade into hate speech.

The remaining question is how children's exposure to hate speech can be limited in public spaces generally, outside of institutional contexts. In regard to this question it is useful to distinguish two ways of imposing general legal restrictions on a given type of public expression. Consider the difference between how Holocaust denial laws function in some European states, and how laws function in many states to regulate the public broadcast of adults-only entertainment, including violent and sexually explicit cinema. In both cases the legal restraints are general, in that they apply to instances of the relevant communicative acts irrespective of whether, in any instance, they are being used to harass or in some other way target particular individuals. But obviously there is an important difference. In properly regulated contexts, where measures have been taken to constrain children's access, it is permissible to broadcast adults-only entertainment, whereas in jurisdictions with Holocaust denial laws, there is no cordoned-off public arena where Holocaust denial is permitted. It is prohibited regardless of whether it is expressed to random people on the street or to like-minded allies in a clubhouse for extremists.

If our case for restricting hate speech is linked to evidence of its influence on children, then the policies enacted by BANS might bear more of a resemblance to regulations governing adult entertainment than to prohibitions on Holocaust denial. The primary aim of the intervention would be to limit hate speech's influence on children. Of course it wouldn't cancel out hate speech's influence entirely. Children occupy public spaces in various ways, and in different ways at different ages. The way that children encounter extremist ideas online is variable, and as long as the internet remains relatively free and open, children will sometimes encounter hate speech on it, whether by stumbling across it accidentally, or looking for it out of a curiosity towards the hidden and illicit. But still, in contexts where hate speech is protected, children do encounter it in public spaces, and there is at least a pro tanto case for limiting their exposure to such encounters. Most legal systems still impose some age-based regulations on the distribution of adult entertainment, in a way that reflects the commonplace view that such material has a negative influence on children. And while the stakes might be higher (or at any rate different) with hate speech, there is reason to think the aim of removing hate speech from public spaces where children will be susceptible to its influence is most effectively pursued by a regulatory approach that borrows from this policy area. This suggestion is radical in that it proposes something different to the prevailing approaches in anti-hate speech law.
But it is also mild in the sense that it's likely to be more agreeable to those who think that these prevailing approaches run unacceptably close to being outright prohibitions on certain forbidden opinions.
Systematic revision of the genus Peronia Fleming, 1822 (Gastropoda, Euthyneura, Pulmonata, Onchidiidae)

Abstract

The genus Peronia Fleming, 1822 includes all the onchidiid slugs with dorsal gills. Its taxonomy is revised for the first time based on a large collection of fresh material from the entire Indo-West Pacific, from South Africa to Hawaii. Nine species are supported by mitochondrial (COI and 16S) and nuclear (ITS2 and 28S) sequences as well as comparative anatomy. All types available were examined and the nomenclatural status of each existing name in the genus is addressed. Of 31 Peronia species-group names available, 27 are regarded as invalid (twenty-one synonyms, sixteen of which are new, five nomina dubia, and one homonym), and four as valid: Peronia peronii (Cuvier, 1804), Peronia verruculata (Cuvier, 1830), Peronia platei (Hoffmann, 1928), and Peronia madagascariensis (Labbé, 1934a). Five new species names are created: P. griffithsi Dayrat & Goulding, sp. nov., P. okinawensis Dayrat & Goulding, sp. nov., P. setoensis Dayrat & Goulding, sp. nov., P. sydneyensis Dayrat & Goulding, sp. nov., and P. willani Dayrat & Goulding, sp. nov. Peronia species are cryptic externally but can be distinguished using internal characters, with the exception of P. platei and P. setoensis. The anatomy of most species is described in detail here for the first time. All the secondary literature is commented on and historical specimens from museum collections were also examined to better establish species distributions. The genus Peronia includes two species that are widespread across the Indo-West Pacific (P. verruculata and P. peronii) as well as endemic species: P. okinawensis and P. setoensis are endemic to Japan, and P. willani is endemic to the Northern Territory, Australia. Many new geographical records are provided, as well as a key to the species using morphological traits.

Introduction

Onchidiid slugs live in the intertidal, worldwide, except at the poles. Their larvae are released in sea water and, in that sense, onchidiids are truly marine. As adult slugs, however, they breathe air through a lung and die if they are immersed in water for too long. The slugs of the genus Peronia Fleming, 1822a are found across the entire tropical and subtropical Indo-West Pacific, from South Africa to Hawaii. They primarily inhabit rocky shores and coral rubble, can occasionally be found on muddy sand, but are typically not found inside mangrove forests. The genus Peronia includes all onchidiid slugs with a dorsal notum bearing ramified appendages, or dorsal gills, which are most easily seen when animals are relaxed. Dorsal gills tend to be retracted when live animals are crawling at low tide, and they can be hard to see on specimens preserved without relaxation. In fact, Cuvier did not mention dorsal gills in the original description of Onchidium peronii Cuvier, 1804, the first Peronia species ever recognized. Dorsal gills were first illustrated by Savigny (1817: pl. 2, fig. 3.5) on a plate of gastropods from the Red Sea in the famous Description de l'Egypte, and first described by Audouin (1826: 19) in the explanation of Savigny's plate. Dorsal gills are either present or absent on the dorsal notum of onchidiid slugs, and all slugs with dorsal gills belong to the genus Peronia (Dayrat et al.: 1861).
For the past sixty years or so, authors have accepted only two valid Peronia species names for two species broadly distributed across the Indo-West Pacific (e.g., Solem 1959: 38-39; Marcus and Marcus 1970: 213-214; Britton 1984: 183): P. peronii (Cuvier, 1804) and P. verruculata (Cuvier, 1830). However, the differences between P. peronii and P. verruculata have remained unclear, to say the least, and both names have been used arbitrarily. More importantly, 31 species-group names are available for onchidiids with dorsal gills and their exact application has never been addressed. Indeed, the taxonomy of the genus Peronia is so challenging that people have avoided it for decades, and Labbé (1934a) is the last author who created species names for onchidiids with dorsal gills, except for the recent Peronia persiae Maniei et al., 2020a, regarded in the present work as a synonym of P. verruculata.

The taxonomy of the genus Peronia is comprehensively revised here for the first time. The goals of the present revision are to determine how many Peronia species there are, where they are distributed, how they are related, how they can be identified, how many of the available species names are valid, and to create new names if needed. All the available types of all onchidiid species were re-examined in the context of our revision of the whole family (Dayrat 2009; Dayrat et al. 2016, 2018, 2019a; Goulding et al. 2018a, b, c), which served as a basis to establish a complete list of all the species names available in the genus Peronia. For the sake of clarity, important features (especially intestinal loops) of the types of Peronia nominal species are illustrated here. In many cases, lectotypes are designated in order to clarify the application of species names. Fresh material was collected across the entire Indo-West Pacific, from South Africa to Japan, Hawaii, and eastern Australia. Special attention was paid to collecting fresh material from type localities. Specimens from which DNA could be extracted were also obtained from museum collections (the first author visited many collections around the world). Old museum specimens from which DNA could not be extracted were also examined, especially in cases of interesting geographical records or when specimens were included in important onchidiid studies (Semper 1880-1885; Plate 1893). Because they are notoriously cryptic, Peronia species were first delineated using DNA sequences. Then, the anatomy of the specimens was examined in order to determine diagnostic characters for each species as well as individual variation. As in our previous revisions (Dayrat et al. 2016, 2018, 2019a; Goulding et al. 2018a, b, c), both mitochondrial and nuclear DNA sequences were used for species delineation and relationships.

Nine Peronia species are recognized here, five of which are new to science: P. griffithsi Dayrat & Goulding, sp. nov., P. madagascariensis (Labbé, 1934a), P. okinawensis Dayrat & Goulding, sp. nov., P. peronii (Cuvier, 1804), P. platei (Hoffmann, 1928), P. setoensis Dayrat & Goulding, sp. nov., P. sydneyensis Dayrat & Goulding, sp. nov., P. verruculata (Cuvier, 1830), and P. willani Dayrat & Goulding, sp. nov. Both P. madagascariensis and P. platei were only known from the original descriptions and are described anatomically in detail for the first time. Amazingly, the best anatomical description of P. peronii so far is Cuvier's (1804) original description, but many traits are described and illustrated here for the first time.
Finally, the anatomy of all mitochondrial units of P. verruculata is described in detail for the first time from numerous localities, although some anatomical information was scattered in the literature for three of them (units #1, #3, and #4). These nine species cannot be distinguished externally, except for the very large individuals of P. peronii (longer than 100 mm). However, details of the internal anatomy can help separate species, except for P. platei and P. setoensis which are both cryptic externally and internally. Geographic distribution varies greatly among Peronia species. Three species are broadly distributed across the Indo-West Pacific, from the western Indian Ocean to the West Pacific: P. griffithsi, P. peronii, and P. verruculata. The six other species are characterized by much narrower geographic ranges. Three species are even endemic: Peronia okinawensis and P. setoensis are endemic to Japan, and P. willani is endemic to the Northern Territory, Australia.

Of the 31 Peronia species names available, four are valid and 27 are invalid: 21 synonyms (16 of which are new), five nomina dubia, and one junior secondary homonym. The large number of available names in Peronia is explained by a combination of three main factors. First, Peronia slugs have often been collected, because they are common across the Indo-West Pacific and because they mostly live in the rocky intertidal, which is more easily accessible than mangrove forests where most other onchidiids are found. Second, earlier zoologists created new species names without examining the types of existing nominal species and without proper knowledge of individual variation, which resulted in many names being added unnecessarily. Third, Peronia is a genus for which molecular data were critically needed, because species are externally cryptic; also, species could hardly be delineated just based on their internal anatomy because they differ only with respect to minute anatomical details. The fact that five new species names are needed in Peronia even though there already are 31 available names shows that a comprehensive revision was desperately needed.

Nomenclature

Establishing a complete list of available names for a taxon often requires an enormous amount of time but it is the keystone of any taxonomic revision, because otherwise it would be impossible to address the nomenclatural status of available names and to determine how many new species names are needed. All available type specimens were re-examined beyond the taxon of interest (Peronia) because species names often are incorrectly classified when they are first created. For instance, Onchidium durum was originally created for slugs with a smooth notum, but the types of O. durum clearly bear dorsal gills. Ignoring O. durum because it was created for slugs with a smooth notum would have led to an incomplete list of available Peronia species names. Several species names had to be transferred to Peronia, because they refer to slugs with dorsal gills, regardless of whether species were originally described with dorsal gills or not. When type specimens are not located, one needs to go through original species descriptions very carefully, and still beyond the taxon of interest. Reciprocally, not all species names ever classified in Peronia belong to Peronia: for instance, several specific names originally combined with Peronia refer to Onchidella species. Finally, many species names of doubtful application need to be commented upon.
In total, 51 species-group names had to be considered for the revision of Peronia. Of these, only 31 are available Peronia species names (Table 1). Indeed, ten of those names must be classified in other genera, and are commented on in the general discussion.

Table 1. Alphabetic list of the 51 existing species-group names of which the nomenclatural status is addressed in the present work. Details can be found in the text: comments on the four valid Peronia species names, their synonyms, and the junior homonym are in the species remarks; comments on the fifteen nomina dubia and the ten names that must be classified in other genera are in the general discussion.

Table 2. DNA extraction numbers and GenBank accession numbers for all the specimens included in the present study. The letter H next to an extraction number indicates the holotype. Sequences marked with an asterisk (*) are from our former publications (Dayrat et al. 2016, 2018, 2019a; Goulding et al. 2018a, b, c). In addition, 11 COI sequences also marked with an asterisk (*) were obtained from GenBank (GB) and BOLD: four sequences from China (Sun et al. 2014), two from Singapore, two from Japan (Takagi et al. 2019), one from the Persian Gulf (unpublished), one from Gujarat, western India (unpublished), and one from Iran.

Vouchers used in Dayrat et al. (2011). Ten of our Peronia specimens were tentatively identified by Dayrat et al. (2011) at a time when nothing was known about the onchidiid species diversity in general and most especially in the genus Peronia. Most of those ten specimens were merely referred to with numbers (e.g., Peronia sp. 1). In order to avoid any confusion, those specimens are all included here so that correct species names are provided. One of them (NHMUK 20050628) was identified as Peronia sp. 6 from Sulawesi, Indonesia.

Types of existing species-group names. All type specimens available for all onchidiid species-group names have been examined in the context of the revision of the entire family. Comments on many onchidiid types can be found in our previous revisions (Dayrat et al. 2016, 2018, 2019a; Goulding et al. 2018a, b, c). In total, 118 type specimens (holotypes, lectotypes, paralectotypes, syntypes, etc.) are commented on here for the first time. Fifteen of those 118 type specimens are commented on in the general discussion because they are types of nomina dubia which may or may not refer to Peronia slugs. All the other (103) types are commented on in species descriptions because they are the types of 25 species-group names which must be classified in Peronia and which are not nomina dubia (Table 1). There are only two Peronia species names for which types could not be located: Scaphis lata Labbé, 1934a, and Paraperonia jousseaumei Labbé, 1934a. Finally, 14 lectotypes are designated here in order to clarify the application of 14 species names, usually because syntypes belong to different species or come from very distant localities.

Many type specimens were not labeled as types and were found within the general collections. In most cases, it was easy to determine that specimens were types because the information on the labels would match perfectly to that of the original descriptions. However, finding Labbé's types was challenging, with the exception of the holotype, by monotypy, of Onchidium astridae Labbé, 1934b, preserved in Bruxelles (RBINS I.G.9223/MT.3822): it was not marked as a holotype, but the name Onchidium astridae is on the label, and the locality and collector information matches.
The types of all the other Peronia species (and one subspecies) described by Labbé are preserved at the MNHN (the monograph in which those new taxa were described was almost exclusively based on material from the MNHN). The major issue with this material is that Labbé did not write any of his new species names on any of the labels. To be fair and fully accurate, there are actually three jars for which a specific name was written in pencil and in tiny letters on labels: one jar contains the type material of Onchidium durum (MNHN-IM-2000-33698), and two other jars contain part of the type material of Paraperonia gondwanae (MNHN-IM-2000-33683, MNHN-IM-2000-). Eleven years ago, Dayrat (2009) considered that identifying the types of Labbé's onchidiid species names in the MNHN collection would be too risky (because specimens could be erroneously interpreted as types). However, after Virginie Héros (who is in charge of the Mollusk type collection at the MNHN) correctly remarked that it should still be possible to find some of Labbé's types, an excel file was generated including all the old onchidiid material preserved at the MNHN and all the material cited in Labbé's monograph. By comparing various information (localities, names of the collectors, collecting dates, specimen sizes), it then became clear that many specimens could be identified as types with great confidence, even though they were not labeled as types and Labbé's species names were not indicated on the labels. For instance, originally, no jar clearly labeled as the type material of Scaphis carbonaria was found at the MNHN. However, of the old jars found at the MNHN with specimens from New Caledonia, only one matches perfectly the information provided in Labbé's original description of S. carbonaria: an individual collected in 1880 by Réveillère (with an identification as Peronia). Other jars with one or more specimens from New Caledonia were collected by Fisher in 1878 and by François in 1894. Therefore, it is extremely likely that the specimen collected by Réveillère in 1880 is the holotype, by monotypy, of Scaphis carbonaria (MNHN-IM-2000-33708). In many cases, however, identifying the types happened to be much more challenging because there were several jars with the same locality, the same collector, and the same collecting date. In order to avoid any future confusion, Labbé's types are commented on in great detail in species descriptions. There are only two of Labbé's species for which no type material could be confidently traced back at the MNHN: Scaphis lata Labbé, 1934a, and Paraperonia jousseaumei Labbé, 1934a.
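The cross-referencing just described is essentially a join between two small record tables. The Python sketch below is my own hypothetical illustration of that logic; the column names, the second jar entry ("MNHN-jar-2"), and the use of pandas are assumptions for illustration only (the actual comparison was done by hand in a spreadsheet).

```python
# Hypothetical sketch of the cross-referencing described above: uncatalogued
# MNHN jars are matched against the specimen records cited in Labbé's
# monograph on locality, collector, collecting year, and specimen count.
# Column names and the 'MNHN-jar-2' entry are invented for illustration.
import pandas as pd

jars = pd.DataFrame([
    {"jar": "MNHN-IM-2000-33708", "locality": "New Caledonia",
     "collector": "Réveillère", "year": 1880, "n_specimens": 1},
    {"jar": "MNHN-jar-2", "locality": "New Caledonia",
     "collector": "Fisher", "year": 1878, "n_specimens": 3},
])

cited = pd.DataFrame([
    # Record reconstructed from Labbé's original description of S. carbonaria.
    {"name": "Scaphis carbonaria", "locality": "New Caledonia",
     "collector": "Réveillère", "year": 1880, "n_specimens": 1},
])

# A jar is a candidate type lot only when all four fields agree; a unique
# match (exactly one jar per cited record) is a confident identification.
candidates = cited.merge(jars, on=["locality", "collector", "year", "n_specimens"])
print(candidates[["name", "jar"]])  # -> Scaphis carbonaria / MNHN-IM-2000-33708
```

Jars that match several cited records on all four fields (the difficult cases mentioned above) would simply produce multiple rows here, flagging them for closer manual scrutiny.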
Finally, the type material of Peronia persiae, recently described by Maniei et al. (2020a), was not borrowed for examination. Regardless, there is no doubt that P. persiae is a junior synonym of both P. verruculata (Cuvier, 1830) and P. gondwanae (Table 1), because all the COI and 16S sequences published for P. persiae cluster within the mitochondrial unit #4 of P. verruculata.

Additional material examined (historical museum collections). In addition to the 189 specimens included in the molecular analyses (not including the eleven outgroups) and the 118 type specimens of existing nominal species, 297 old specimens were obtained from museum collections from which no DNA could be extracted. Those specimens correspond to a total of 60 jars. One jar contains 161 specimens. All other jars contain fewer than 15 specimens. These old museum specimens are not included in the anatomical species descriptions, except for the description of Peronia verruculata from the Red Sea. Instead, these additional specimens are commented on in the species remarks. The additional specimens were especially useful to provide geographic records from places which could not be visited, such as the Chagos Archipelago, Nicobar Islands, Persian Gulf, and Socotra. Identifying Peronia species using only anatomical traits is challenging but possible (see below). Finally, some of the historical specimens from museum collections were studied by previous authors, and their reexamination allowed us to confirm or reject many identifications from the literature.

Anatomical preparations and descriptions

Size (length/width) is indicated in millimeters (mm) for each specimen. Both the external morphology and the internal anatomy were studied. All anatomical observations were made under a dissecting microscope and drawn with a camera lucida. Radulae and male reproductive organs were prepared for scanning electron microscopy (Zeiss SIGMA field emission scanning electron microscope). Radulae were cleaned in 10% NaOH for a week, rinsed in distilled water, briefly cleaned in an ultrasonic water bath (less than a minute), sputter-coated with gold-palladium, and examined by SEM. Soft parts (penis, accessory penial gland, etc.) were dehydrated in ethanol and critical point dried before coating. Anatomical species descriptions are based on those 179 Peronia individuals for which sequences were generated for the present study as well as on the available type material for species with existing names (see below). To avoid unnecessary repetition, the description of anatomical features that are virtually identical between Peronia species (e.g., nervous system, heart, and stomach) is not repeated for each species. However, all the characters that are useful for species comparison (e.g., intestinal loops and male apparatus) are described for every species. Special attention has been given to illustrating the holotype and the type locality of each new species. Species are described in phylogenetic order. The detailed description of Peronia verruculata is based on the mitochondrial unit #1, by far the most widespread (from Peninsular Malaysia to the West Pacific) and most abundant (55 specimens in our study), but variations in the other units are precisely reported and figure captions indicate the unit to which each illustrated individual belongs.

Types of intestinal loops

In onchidiids, types of intestinal loops are defined based on the pattern of the intestine on the dorsal aspect of the digestive gland (with the digestive gland still in place). Plate (1893) first distinguished four types of intestinal loops (types I to IV) and Labbé (1934a) later added a type V. Only types I and V are found in Peronia. Hoffmann (1928: 51, pl. 3, fig. 11) noted before Labbé that intestinal loops of type V differ from other types, and he referred to them as type Ia. Labbé's terminology (type V) is preferred because past authors have adopted it and because a type V is very different from a type I. The different types of intestinal loops and their individual variation are best revealed by coloring sections of the intestine differently (Dayrat et al. 2019b, c, d): a clockwise intestinal loop is colored in blue, a counterclockwise intestinal loop is colored in yellow, and a transitional loop between them is colored in green (Fig. 1).
The intestine first appears dorsally on the right side. In intestinal loops of type I, the intestine starts by forming a clockwise (blue) loop which does not make a complete circle. As a result, the transitional (green) loop is oriented to the right (Fig. 1A-F). In two species with intestinal loops of type I (P. okinawensis and P. peronii), the transitional loop is oriented between 12 and 3 o'clock (Fig. 1D-F). In the three other species with intestinal loops of type I (P. sydneyensis, P. verruculata, and P. willani), the transitional loop is oriented between 3 and 6 o'clock (Fig. 1A-C). In intestinal loops of type V, the intestine immediately starts by forming a counterclockwise (yellow) loop, which is oriented between 10 and 11 o'clock (Fig. 1G-I). Four Peronia species are characterized by intestinal loops of type V: P. griffithsi, P. madagascariensis, P. platei, and P. setoensis.

Three independent sets of phylogenetic analyses were performed: 1) Maximum Likelihood and Bayesian analyses with concatenated mitochondrial COI and 16S sequences; 2) Maximum Parsimony analyses with concatenated nuclear ITS2 and 28S sequences; 3) Maximum Parsimony analyses with ITS2 haplotype sequences. Maximum Parsimony analyses were conducted in PAUP v 4.0 (Swofford 2002) with gaps coded as a fifth character state, and 100 bootstrap replicates conducted using a full heuristic search. Prior to Maximum Likelihood and Bayesian phylogenetic analyses, the best-fitting evolutionary model was selected for each locus separately using the Model Selection option from Topali v2.5 (Milne et al. 2004): a GTR + G model was independently selected for COI and 16S. Maximum Likelihood analyses were performed using PhyML (Guindon and Gascuel 2003) as implemented in Topali. Node support was evaluated using bootstrapping with 100 replicates. Bayesian analyses were performed using MrBayes v3.1.2 (Ronquist and Huelsenbeck 2003) as implemented in Topali, with five simultaneous runs of 1.5 × 10^6 generations each, a sample frequency of 100, and a burn-in of 25% (and posterior probabilities were also calculated). Topali did not detect any issue with respect to convergence. All analyses were run several times and yielded the same result. In addition, genetic distances between COI sequences were calculated in MEGA 7 as uncorrected p-distances. COI sequences were also translated into amino acid sequences in MEGA using the invertebrate mitochondrial genetic code to check for the presence of stop codons (no stop codon was found).
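For readers who want to reproduce the two sequence checks described above outside MEGA, a minimal Python sketch using Biopython is given below. The function names and the toy sequences are mine, not from the study; NCBI translation table 5 is the invertebrate mitochondrial code appropriate for COI.

```python
# Minimal sketch (not the MEGA 7 workflow used in the study) of the checks
# described above: uncorrected p-distances between aligned COI sequences,
# and translation with the invertebrate mitochondrial code (NCBI table 5)
# to screen for internal stop codons, which would suggest a pseudogene.
from Bio.Seq import Seq

def p_distance(seq1: str, seq2: str) -> float:
    """Proportion of differing sites, skipping gaps and ambiguous 'N's."""
    assert len(seq1) == len(seq2), "sequences must be aligned"
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a not in "-N" and b not in "-N"]
    return sum(a != b for a, b in pairs) / len(pairs)

def has_internal_stop(coding_seq: str) -> bool:
    """True if the translation contains a '*' before the final codon."""
    protein = str(Seq(coding_seq).translate(table=5))  # invertebrate mito code
    return "*" in protein.rstrip("*")

def barcode_gap(intra: list[float], inter: list[float]) -> bool:
    """A barcode gap exists when the largest within-unit distance is
    smaller than the smallest distance to any other unit (cf. Table 3)."""
    return max(intra) < min(inter)

# Toy fragments (real COI barcodes are ~658 bp):
print(p_distance("ATGGCACTT", "ATGGCTCTA"))  # 2 differences / 9 sites ≈ 0.22
print(has_internal_stop("ATGGCACTT"))        # False
```

Applied pairwise across all sequences of two units, the minimum and maximum of these values give exactly the ranges reported in Table 3 and visualized in Fig. 5.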
Clade C is consistently recovered but moderately supported (-, -, 90, 87). The monophyly of each species recognized here is strongly supported in all analyses, except for the special case of P. sydneyensis (see below, species delineation). Within four species, some least-inclusive units are supported by the mitochondrial markers but not by comparative anatomy and nuclear markers (Figs 2-4): two units within P. peronii (one unit from Mauritius and the other from the West Pacific); two units within P. platei (one unit from Hawaii and the other from Papua New Guinea); two units within P. griffithsi (one unit from Mauritius and the other from Kei Islands and Papua New Guinea); and three units of P. verruculata from South-East Asia and the West Pacific (units #1, #2, and #3). Two least-inclusive mitochondrial units within P. verruculata from the western Indian Ocean (units #4 and #5) are also monophyletic in nuclear analyses (Figs 2-4) but are anatomically cryptic (see below). Note that populations of P. verruculata from the Red Sea are not represented in molecular analyses (see below, species delineation). In mitochondrial analyses (Fig. 2), P. sydneyensis and P. willani together form the strongly supported clade E, and the monophyly of each species is also strongly supported. In nuclear analyses (Figs 3, 4), they also form a strongly supported clade, but P. sydneyensis is paraphyletic with respect to P. willani. Both species are close geographically (P. sydneyensis is distributed in New South Wales, Queensland, and New Caledonia, and P. willani is distributed in the Northern Territory) and may be the result of a recent divergence. The paraphyly in nuclear analyses most likely is the result of incomplete lineage sorting (see below, species delineation).

Pairwise genetic divergences

Pairwise genetic distances were calculated for a total of 13 units (Fig. 5, Table 3): the five mitochondrial units within P. verruculata as well as the eight other species. A barcode gap is found in all cases, apart from the mitochondrial unit #1 of P. verruculata.

Figure 5. Diagram to help visualize the data on pairwise genetic distances between COI sequences within and between species and mitochondrial units (P. verruculata) in Peronia (see Table 3). Ranges of minimum to maximum distances are indicated (in percentages). For instance, within P. willani, individual sequences are between 0 and 1.9% divergent; individual sequences between P. willani and the other species or units are minimally 4.3% and maximally 16.8% divergent. The colors are the same as those used in Figs 2-4, 6.

Table 3. Pairwise genetic distances between mitochondrial COI sequences in Peronia. Ranges of minimum to maximum distances are indicated (in percentages). For instance, the intra-specific divergences within P. madagascariensis are between 0 and 0.6%, while the inter-specific divergences between P. griffithsi and P. madagascariensis are between 9.3 and 11.3%.
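The barcode-gap criterion used here reduces to a simple comparison: the largest intraspecific distance must be smaller than the smallest distance to any other species or unit. Below is a minimal sketch of that check, assuming pairwise p-distances have already been computed (e.g., with the script above); all sequence ids and values are illustrative.

```python
# Minimal barcode-gap check (illustrative only). 'distances' maps an unordered
# pair of sequence ids to their p-distance; 'labels' maps each sequence id to
# a species or mitochondrial unit name. Assumes at least two sequences of the
# focal taxon and at least one comparison to another taxon.
def barcode_gap(distances, labels, focal):
    """Return (gap_exists, max_intra, min_inter) for the focal species/unit."""
    intra, inter = [], []
    for pair, d in distances.items():
        a, b = tuple(pair)
        if labels[a] == focal and labels[b] == focal:
            intra.append(d)
        elif focal in (labels[a], labels[b]):
            inter.append(d)
    return max(intra) < min(inter), max(intra), min(inter)

# Toy data mirroring the P. willani figures quoted in the Figure 5 caption:
# 1.9% maximum within the species, 4.3% minimum to any other species or unit.
toy_distances = {
    frozenset({"willani_1", "willani_2"}): 0.019,
    frozenset({"willani_1", "verruculata_1"}): 0.043,
    frozenset({"willani_2", "verruculata_1"}): 0.168,
}
toy_labels = {"willani_1": "P. willani", "willani_2": "P. willani",
              "verruculata_1": "P. verruculata"}
print(barcode_gap(toy_distances, toy_labels, "P. willani"))
# (True, 0.019, 0.043) -> a barcode gap exists for P. willani
```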
Comparative anatomy

All Peronia slugs are characterized by dorsal gills, which are not found in other onchidiids. They are also all characterized by a unique combination of internal traits: they are the only onchidiid slugs with intestinal loops of type I or V, an accessory penial gland, and no rectal gland. The fact that any slug with this combination of traits belongs to a Peronia species is helpful to identify specimens with dorsal gills retracted inside the notum. There are no external differences between Peronia species. In the field, it is not possible to reliably identify any of them, especially because sympatric species are often found together at the exact same sites. Individuals of very large size (longer than 100 mm) are only found in P. peronii, but smaller individuals are impossible to distinguish externally from other species. Also, tall papillae over the entire notum seem to be mostly found in P. peronii and P. madagascariensis, but that may be due to the fact that slugs of both species are the largest, and it remains difficult to define exactly what a tall papilla is because papilla size is highly variable.

Internal differences help identify some species reliably, but not all (Table 4). Internal differences are almost exclusively based on combinations of traits: no Peronia species is characterized by any unique, distinctive feature, except for P. peronii (characterized by a spine of the accessory penial gland longer than 3 mm) and P. sydneyensis (characterized by strong protuberances on the spine of the accessory penial gland), and it remains difficult to identify Peronia species anatomically. For instance, where they overlap geographically (Queensland and New Caledonia), P. verruculata and P. sydneyensis can only be distinguished based on the length of the spine of the accessory penial gland, the presence of strong protuberances near the tip of that spine, and the length of the penial hooks, all traits that are hardly accessible to a non-expert. However, only two Peronia species are cryptic both externally and internally: P. setoensis and P. platei, which are not sister taxa (Figs 2-4) and do not overlap geographically, at least based upon current data (Fig. 6). Finally, the mitochondrial units of P. verruculata cannot be reliably distinguished anatomically.

Figure 6. Geographical distribution of the Peronia species. A Distribution of all Peronia species except for P. peronii. B Distribution of P. peronii. The colors are the same as those used in Figs 2-5. Colored areas correspond to hypothetical geographical ranges based on confirmed records only. Distinct colors are used for each unit of P. verruculata. The distribution of P. verruculata in the Indo-West Pacific is actually continuous; however, because it is unclear which units are present in regions from which we have no fresh material of P. verruculata (red areas), no unit of P. verruculata is shown there. For P. peronii, black dots correspond to material identified based on anatomical characters and blue dots correspond to material with DNA sequences. Details on species distribution can be found in each species description.

Types of intestinal loops are useful for the identification of Peronia species (Fig. 1): species are characterized by intestinal loops of type V (P. griffithsi, P. madagascariensis, P. platei, and P. setoensis), type I with a transitional loop oriented between 12 and 3 o'clock (P. okinawensis and P. peronii), or type I with a transitional loop oriented between 3 and 6 o'clock (P. sydneyensis, P. verruculata, and P. willani). Exceptions exist but are remarkably rare: only one individual of P. sydneyensis was found with a transitional loop slightly outside the range of that species (at 2 o'clock). Types of intestinal loops, however, can only be used in combination with other traits for the purpose of species identification.
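To make the combination-of-traits logic concrete, the following toy sketch encodes a few of the diagnostic combinations just described (intestinal loop type, transitional-loop orientation, spine length, protuberances on the spine). The thresholds are those quoted in the text and in Table 4; the function and category names are illustrative and do not constitute a validated identification key.

```python
# Toy trait-combination key for Peronia (illustrative only; see Table 4 for
# the full set of diagnostic traits). Loop categories: "V", "I_12_3" (type I,
# transitional loop between 12 and 3 o'clock), "I_3_6" (between 3 and 6).
SPECIES_BY_LOOP = {
    "V": {"P. griffithsi", "P. madagascariensis", "P. platei", "P. setoensis"},
    "I_12_3": {"P. okinawensis", "P. peronii"},
    "I_3_6": {"P. sydneyensis", "P. verruculata", "P. willani"},
}

def candidate_species(loop, spine_mm=None, strong_protuberances=False):
    """Narrow down candidate species from a combination of traits."""
    candidates = set(SPECIES_BY_LOOP[loop])
    if spine_mm is not None and spine_mm >= 3.0:
        # A spine of the accessory penial gland of at least 3 mm is unique
        # to P. peronii.
        candidates &= {"P. peronii"}
    if strong_protuberances:
        # Strong protuberances on the spine are unique to P. sydneyensis.
        candidates &= {"P. sydneyensis"}
    return candidates

print(candidate_species("I_12_3", spine_mm=4.5))               # {'P. peronii'}
print(candidate_species("I_3_6", strong_protuberances=True))   # {'P. sydneyensis'}
print(candidate_species("V"))   # four candidates remain; more traits needed
```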
The insertion of the retractor muscle of the penis is not very useful in identification because it mostly matches the distribution of the respective intestinal loop types (Table 4). An insertion near the heart is only found in the two species with intestinal loops of type I and transitional loops oriented between 12 and 3 o'clock (P. peronii and P. okinawensis). Within each species, all individuals share the same insertion of the retractor muscle (either near the heart or at the posterior end of the visceral cavity). However, in P. griffithsi, which is widely distributed from the West Pacific to Mauritius, individuals are characterized by either insertion. In P. peronii, the retractor muscle can exceptionally be vestigial (with no clear insertion). The length of the muscular sac of the accessory penial gland varies depending on the size of animals, but it is useful to help identify some species. Indeed, only two species (P. peronii and P. willani) are characterized by a muscular sac which is longer than 20 mm (Table 4). The length of the spine of the accessory penial gland is helpful to distinguish closely related species which, otherwise, are very similar anatomically: P. peronii (at least 3 mm) and P. okinawensis (less than 2.3 mm); P. setoensis (more than 0.9 mm) and P. griffithsi (less than 0.62 mm); and P. sydneyensis (less than 1 mm) and P. willani (more than 1.5 mm). The diameter of the spine at its base can be used in exactly the same way. The length of the penial hooks also differs between species: the longest hooks are found in P. madagascariensis (up to 100 μm), the shortest in P. setoensis and P. griffithsi (less than 25 μm).

Species delineation

The delineation of Peronia species is straightforward. They are all supported by independent data sets: they are reciprocally monophyletic with both mitochondrial and nuclear markers, and their monophyly is strongly supported; they are all separated by a large barcode gap; and they are each characterized by a unique combination of anatomical traits (with the exception of P. setoensis and P. platei, which are cryptic). Only two species need special attention: P. sydneyensis and P. verruculata. The paraphyly of P. sydneyensis with respect to P. willani in nuclear analyses most likely is the result of incomplete lineage sorting, because lineage sorting progresses more rapidly for mitochondrial alleles than for nuclear alleles (Funk and Omland 2003). Also, P. sydneyensis and P. willani are clearly distinct anatomically: in particular, P. sydneyensis is characterized by unique, strong protuberances near the tip of the spine of the accessory penial gland. Therefore, P. sydneyensis and P. willani are regarded as two recent but well-delineated species. Despite some genetic structure, Peronia verruculata is regarded as a single species for various reasons. In mitochondrial analyses, P. verruculata is split into five least-inclusive mitochondrial units whose relationships are essentially unresolved due to low support (Fig. 2). In nuclear analyses, the mitochondrial units #1, #2, and #3 are not monophyletic (Figs 3, 4). Therefore, they should not be recognized as distinct taxa. In nuclear analyses, units #4 and #5 are monophyletic (Figs 3, 4). However, the mitochondrial units #1, #2, and #3 do not form a monophyletic group with respect to units #4 and #5.
Recognizing mitochondrial units #4 and #5 each as a separate taxon would mean that mitochondrial units #1, #2, and #3 would also have to be recognized as separate taxa, which is unwarranted for the reasons given above. All mitochondrial units of P. verruculata are cryptic anatomically (their anatomical traits display overlapping variation), while P. verruculata is clearly distinct from other Peronia species (Table 4). Finally, it would seem premature to recognize units #4 and #5 as independent lineages because our geographical sampling of P. verruculata is not continuous (Fig. 6). Future samples from southern India (including Sri Lanka) or the Arabian Sea (the coasts of Yemen, Oman, and Somalia) might show that the individuals of the western mitochondrial units #4 and #5 can still interbreed, exactly like units #1, #2, and #3. We therefore refrain from naming those five mitochondrial units within P. verruculata. They are merely regarded as mitochondrial units that indicate some genetic structure, but the current data do not suggest that they should be recognized as distinct taxa. Note that taxon names are already available for the mitochondrial units #1, #4, and #5 of P. verruculata (Table 1). Finally, note that Peronia verruculata was described from the Red Sea, from which no fresh material could be obtained. However, the specimens examined from the Red Sea are anatomically indistinguishable from the specimens of the five mitochondrial units of P. verruculata (Table 4). Therefore, at this stage, there is no reason to think that the populations from the Red Sea belong to a distinct species, and the name P. verruculata can apply to the whole species, from the Red Sea and South Africa all the way to the West Pacific (Japan, New Caledonia, and Queensland). At any rate, there are plenty of available names that can be used in the future if it were to be demonstrated that the populations from the Red Sea belong to a distinct species (see remarks on P. verruculata).

Species distribution

Geographic distribution is discussed in detail with each species description. The map of species distributions only illustrates the records that are regarded as correct (Fig. 6). Most of those correct records correspond to the specimens included in our molecular analyses. However, they also include types as well as historical museum specimens and records from the literature which could be positively identified using anatomical traits (e.g., intestinal loops, length of the spine of the accessory penial gland). The secondary literature was read with great attention, especially in cases where it could provide geographical records not included in our material. Every record found in the literature is commented on (in the species remarks). Records from the literature are certainly not taken for granted because the secondary literature is plagued with two major issues. First, past authors did not always take the time to examine type specimens. For instance, Labbé (1934a) did not examine the types of Onchidium verruculatum and Onchidium peronii which are preserved at the Paris Museum (he did not list them in the material examined for these species), even though his study was almost exclusively based on material from that institution. Second, because there was no proper knowledge of intraspecific character variation, nobody knew which characters could help distinguish species.
For instance, Hoffmann's (1928: 73) record of Peronia verruculata from Hawaii was never questioned, but Peronia slugs from Hawaii are all characterized by intestinal loops of type V, which means that they cannot belong to P. verruculata (which is characterized by intestinal loops of type I).

Peronia Fleming, 1822a

Eudrastus Gistel, 1848: x.
Paraperonia Labbé, 1934a: 196.
Scaphis Labbé, 1934a: 203.
Lessonia Labbé, 1934a: 213 [junior homonym of Lessonia Swainson, 1832, replaced by Lessonina Starobogatov, 1976].
Quoya Labbé, 1934a: 216.
Lessonina Starobogatov, 1976: 211.
Quoyella Starobogatov, 1976: 211 [unnecessary replacement name for Quoya Labbé, 1934a].

Type species. Eudrastus: Onchidium tonganum Quoy & Gaimard, 1832, by subsequent designation (Baker 1938: 86). Scaphis: Onchidium astridae, by subsequent designation (Starobogatov 1976: 211).

Etymology. Eudrastus: Likely, although for unclear reasons, from the Greek εὖ, eu, for true, and δραστέoς, drasteos, a verbal adjective which means to be done. Paraperonia: From the Greek παρα, para, meaning beside, and Peronia. Scaphis: After the Greek ἡ σκᾰφίs, which means small boat (Labbé, 1934a: 202). Quoya: After the French naturalist Jean René Constant Quoy [1790-1869], a member of two circumnavigations, from 1817 to 1820 with captain Freycinet and from 1826 to 1829 with captain Dumont d'Urville. With Joseph Paul Gaimard [1793-1858], Quoy described several species of onchidiids based on their collections in the southern seas. Quoyella has the same etymology. Lessonina: After the French naturalist René Primevère Lesson [1794-1849], a member of a circumnavigation from 1822 to 1825 with captain Duperrey. Lesson described several species of onchidiids based on his collections in the southern seas, such as the type species of Lessonina, Onchidium ferrugineum, which he collected in West Papua, Indonesia. Labbé's invalid name Lessonia was also dedicated to Lesson.

Gender. Onchis: Masculine. Férussac did not specify the gender of Onchis, which he did not combine with any specific name, and even the binomen Onchis peronii, which Férussac did not use per se, would not help in that respect. Because Onchis is derived from the masculine Greek noun ὁ ὂγκος, it is considered to be of masculine gender. Peronia: Feminine. No gender was specified by Fleming, and the combination Peronia peronii does not help to determine it. Because no gender was originally specified or indicated and because Peronia ends in -a, it is treated as a name of feminine gender (ICZN 1999: Article 30.2.4). Indeed, Peronia mauritiana, an early combination used by Blainville (1824: 281), shows that Peronia has always been treated as a name of feminine gender. Eudrastus: Masculine. No gender was originally specified or indicated. Eudrastus ends in a word derived from a word of variable gender (a verbal adjective) and should be treated as masculine (ICZN 1999: Article 30.1.4.2). Paraperonia: Feminine. Gender of Peronia. Scaphis: Feminine. The gender was not specified by Labbé, but his original combinations S. atra, S. carbonaria, S. lata, and S. punctata indicate that he treated Scaphis as a name of feminine gender, which is correct since Scaphis is derived from the feminine Greek noun ἡ σκᾰφίs. Quoya: Feminine. The gender was not specified by Labbé, but his original combination Q. indica indicates that he treated Quoya as a name of feminine gender, which is assumed to be the gender of Quoyella as well. Lessonina: Feminine. The gender was not specified by Starobogatov, and no gender was specified for Lessonia by Labbé.
Labbé's original combination Lessonia ferruginea indicates that he treated Lessonia as a name of feminine gender, which is assumed to be the gender of Lessonina as well.

Diagnosis. Body not flattened. Dorsal gills present. Dorsal eyes present. No retractable, central papilla present. Eyes at tip of short ocular tentacles. Male opening below right ocular tentacle and to its left. Foot wide. Pneumostome median, on ventral hyponotum. Intestinal loops of types I or V. Rectal gland absent. Accessory penial gland present, with muscular sac. Penis with hooks.

Remarks. Phylogenetic analyses show that all species of slugs with dorsal gills belong to the same clade (Figs 2-4). Seven generic names apply to that clade (excluding spelling mistakes, unjustified emendations, replaced names, and Peronia Blainville, 1824, a junior homonym of Peronia Fleming, 1822a). Note that the species name of a type species can be valid (such as Peronia peronii), synonymous (such as Onchidium tonganum, junior synonym of P. peronii), or even a nomen dubium (such as Quoya indica). Remarks on the nomenclatural history of the genus Peronia follow a chronological order.

Cuvier (1804) described the first Peronia species as Onchidium peronii but did not mention the presence of dorsal gills. Nor did he illustrate them. He only described a mantle covered by small warts subdivided in even smaller warts. Dorsal gills are actually present on the dorsum of the type specimen of Onchidium peronii from Timor, but they are retracted, as most often seen in preserved specimens. Cuvier (1804: 41) also confessed that he would have believed O. peronii to be terrestrial, due to its pulmonary cavity "similar to that of reptiles," but that he regarded it as marine because Péron was certain to have collected it in seawater. Still, Cuvier (1804: 41) referred his new species to the Onchidium of Buchannan. Buchannan (1800) wrote that slugs live in Bengal on leaves of Typha reeds and are "very nearly allied" to Limax, suggesting that they are terrestrial, although he did not mention the presence of a pulmonary cavity and did not clearly state whether the slugs were terrestrial or not. At any rate, authors, such as Blainville (1817: 440), considered that Buchannan's (1800) O. typhae was not a marine species. Dorsal gills were actually first illustrated by Savigny (1817: pl. 2, fig. 3.5) in the Description de l'Egypte for slugs from the Red Sea; for a collation, see Baring (1838) and Sherborn (1897). However, gills remained completely unnoticed because the explanation of Savigny's plate was published nearly ten years later by Audouin (1826: 18-20).

Onchis is not etymologically rigorous. The latinization of ὂγκος is oncos or oncus, as in the English word oncology. The Greek letter κ is "c" in Latin, while χ becomes "ch." That Férussac used onchis instead of oncos is not surprising, as naturalists often took liberties with the latinization of Greek words. A famous example is the word taxonomy, created as taxonomie by De Candolle (1813: 19) from the Greek words taxis (arrangement, order) and nomos (law, rule): taxis should have stayed as taxi- to form taxinomie, taxinomy, exactly like in the English word taxidermy (from taxis and dermis, skin). However, the Code does not require taxon names to be etymologically correct. Therefore, the intentional spelling change of Onchis to Oncus by Agassiz (1846: 259; 1848: 748) is an unjustified emendation because Onchis is not the result of "inadvertent error, such as a lapsus calami or a copyist's or printer's error" (ICZN 1999: Article 32.5.1), and therefore Onchis must not be corrected.
The emendation of Onchidium into Oncidium by Agassiz (1846: 259; 1848: 748) is also unjustified for the same reason.

The generic name Peronia first appeared in two different venues, both published by Fleming (1822). One venue is Fleming's (1822a: 574) article "Mollusca" in the fifth volume (second part) of the Supplement to the fourth, fifth, and sixth editions of the Encyclopaedia Britannica, published in May 1822 (as clearly indicated in a memorandum at the end of the sixth volume of the Supplement), even though the Supplement was only completed in 1824 (date on the title page). The other venue is Fleming's (1822b: 463) Philosophy of Zoology which, according to Feuer and Smith (1972: 55), was published no earlier than May 1822 and no later than June 1822. The mention of Peronia in the Supplement is considered here to be the earliest one because it was published in May 1822.

Peronia Fleming, 1822a is an objective junior synonym of Onchis Férussac, 1822, because Férussac's Onchis was published prior to Fleming's Peronia and both generic names share the same type species (Onchidium peronii). However, to the best of our knowledge, Onchis has only been used twice in a binomen, and both times before 1899: by Stimpson (1855) for Onchis fruticosa, a species name that has remained unnoticed until now, and by Mörch (1863) for Onchis (Peronella) armadilla Mörch, 1863, i.e., Onchidella armadilla (Mörch, 1863). Reversal of precedence applies here (ICZN 1999: Article 23.9). Onchis, the senior synonym, "has not been used as a valid name after 1899" (ICZN 1999: Article 23.9.1.1) and Peronia, the junior synonym, "has been used for a particular taxon, as its presumed valid name, in at least 25 works, published by at least 10 authors in the immediately preceding 50 years and encompassing a span of not less than 10 years" (ICZN 1999: Article 23.9.1.2). A chronological list of 25 works meeting the criteria of ICZN Article 23.9.1.2 is provided here, all of which mention Peronia, Peronia verruculata, or Peronia peronii as valid names: Marcus and Marcus (1970), Starobogatov (1976), Biskupiak and Ireland (1985), Faulkner (1987), Pietra (1990), Arimoto et al. (1993), Davies-Coleman and Garson (1998), and Xu et al. (2018), among others. Onchis Férussac, 1822, the objective senior synonym, is regarded as a nomen oblitum, and Peronia Fleming, 1822a, the objective junior synonym, is regarded as a nomen protectum (ICZN 1999: Article 23.9.1.2).

Fleming (1822a: 571, 574) classified Onchidium (with only the type species O. typhae) in a group of slugs that "reside constantly on the land," and transferred O. peronii to Peronia, a genus for marine slugs that have "their residence constantly in water" and look like Onchidium. However, Fleming (1822a: 574) expressed doubts that Peronia slugs are air-breathing, as Cuvier (1804) claimed in the original description of O. peronii: "This genus, which we have named in honor of M. Peron, was referred by Cuvier to the Onchidium of Buchanan (…) and the species termed O. Peronii. It was found creeping upon marine rocks, under water, at the Mauritius, by M. Peron. M. Cuvier conjectures that it breathes free air, and has accordingly inserted it among the Pulmones aquatique [Pulmonés aquatiques, i.e., aquatic pulmonates]. Some doubts, however, may reasonably be entertained about the truth of this supposition. It would certainly be an unexpected occurrence to find a marine gasteropodous mollusca obliged to come to the surface at intervals to respire. It will probably be found that it is truly branchiferous."
It was Audouin (1826) who later demonstrated that both Cuvier and Fleming were correct, because Peronia peronii can breathe through both its pulmonary cavity and its dorsal gills.

Blainville (1824: 280) created the generic name Peronia without being aware that Fleming (1822a, b) had already created exactly the same name two years before. Indeed, that Blainville (1824: 258) wrote "our genus Péronie" clearly suggests that he thought he was the author of Peronia. Also, most past authors attributed the authorship of Peronia to Blainville instead of Fleming (e.g., Stoliczka 1869: 100; Plate 1893: 102; Labbé 1934a: 189). Peronia Blainville, 1824 is a junior homonym of Peronia Fleming, 1822a and thus cannot be used as a valid name (ICZN 1999: Article 52.2). However, Peronia Blainville is also a junior objective synonym of Peronia Fleming, because they "both denote nominal taxa with name-bearing types whose own names are themselves objectively synonymous" (ICZN 1999: "objective synonym" in the glossary). Indeed, O. peronii, the type species of Peronia Fleming, and P. mauritiana, the type species of Peronia Blainville, are objective synonyms because they share the same lectotype, i.e., the specimen from Mauritius which Cuvier (1804: pl. 6) illustrated (see below, the comments on the type material of O. peronii and P. mauritiana).

When he created the generic name Peronia, Blainville (1824: 280, 281) cited only one species name, Peronia mauritiana, a junior objective synonym of Onchidium peronii. Blainville (1824: 281) also claimed that he knew four or five other species of marine onchidiids from the southern hemisphere, without naming them, but Blainville (1826: 523) listed them two years later (Table 1): Peronia laevis, a junior objective synonym of Marmaronchis vaigiensis; Peronia semituberculata, a junior objective synonym of Onchidium planatum, itself a nomen dubium which may or may not refer to an onchidiid species; and Peronia oniscoides, which all authors ignored except for Labbé (1934a: 243) and which clearly does not refer to a Peronia species (see general discussion). In addition, Blainville (1826: 523) also pointed out that Onchidium celticum, a name which Cuvier used for small marine slugs from the coast of Brittany, France, could also refer to a Peronia; Onchidium celticum remained a nomen nudum until 1832, when it was described by Audouin and Milne-Edwards (1832: 118).

Like Cuvier (1804), Férussac (1822), and Fleming (1822a, b), Blainville (1824) did not mention the existence of dorsal gills. Dorsal gills were first described by Audouin (1826: 18-20) in the explanation of a plate by Savigny (1817: pl. 2) from the Description de l'Egypte. Savigny's (1817: pl. 2, figs 3.1-3.8) plate displays eight drawings of two onchidiid slugs from the Red Sea, one of them clearly representing a dorsal gill (Savigny 1817: pl. 2, fig. 3.5). According to Audouin (1826: 19), it was Cuvier himself who identified those two slugs as Onchidium peronii, although Cuvier (1830) later changed his mind and created the new name Onchidium verruculatum for them. More importantly, Audouin (1826: 19) described in great detail the "small vascular branches" at the posterior end of the dorsum, or "tubercles" that work as "true gills." And Audouin (1826: 19) even made this clever statement: "The Onchidie thus would have at the same time a pulmonary apparatus and a branchial apparatus; and that structure is in perfect agreement with what we know of the habits of that mollusk: Péron says that it is aquatic; on the contrary M.
Cuvier, without the authority of this observer, would have believed it to be terrestrial. (...) We think that the Onchidie, at least the species illustrated here, enjoys the capacity to breathe under water thanks to the help of those ramified tubercles which cover the posterior end of its body, without the necessity of coming up to the surface; which is relatively difficult for an animal that slowly crawls at the bottom underwater. As for the pulmonary opening, it indicates that the onchidie breathes air as well; and we must suppose that several times in its life it finds itself in the condition to do so." Audouin thus assumed that those slugs were truly aquatic.

Because Peronia was originally used as a genus for all marine onchidiids by both Fleming (1822a, b) and Blainville (1824, 1826), several Peronia species names already existed by 1830: Peronia mauritiana, P. peronii, P. oniscoides, P. semituberculata, and P. laevis (see above). Of those names, only the two objective synonyms P. mauritiana and P. peronii refer to true Peronia slugs, i.e., slugs with dorsal gills (Table 1). Cuvier (1830: 46) did not see the need for a genus assignment for marine onchidiid species and still only recognized Onchidium, but other naturalists started transferring species names from Onchidium to Peronia. Lesson (1833: pl. 19) transferred his own Onchidium ferrugineum Lesson, 1831a to Peronia, and clearly specified that he agreed with Blainville that marine onchidiids should be classified in a distinct genus. Dorsal gills are very clearly described by Lesson (1831a: 128-130; 1831b: 300-302; 1832: 36-37, fig. 32; 1833: pl. 19) in O. ferrugineum, but they were not the reason why he transferred it from Onchidium to Peronia. Shortly after that, Oken (1834a) also transferred six Onchidium species names by Quoy and Gaimard (1832-1833) to Peronia (P. cinerea, P. incisa, P. nigricans, P. patelloides, P. punctata, and P. tongana), with no justification but most likely because he also adopted the idea that marine onchidiids should not be classified in Onchidium.

The name Eudrastus was created by Gistel (1848: x) as a replacement name for "Peronia (Quoy, Isis 1834. 287.)." Gistel refers here to a report (Oken 1834a: 283-310) on Quoy and Gaimard's (1832-1833) contribution to the Voyage de découvertes de l'Astrolabe published in Isis, the encyclopedic journal edited by Lorenz Oken from 1817 to 1848. This report was most likely written by Oken himself, as was often the case (Kertesz 1986), which would explain that the six onchidiid specific names mentioned (tongana, incisa, patelloides, nigricans, punctata, cinerea) are combined with Peronia instead of Onchidium, the generic name originally used by Quoy and Gaimard (1832-1833). Regardless of who authored that Isis report, Gistel (1848) did create the new generic name Eudrastus for those six species. Baker (1938: 86) subsequently designated Onchidium tonganum Quoy & Gaimard, 1832 (Peronia tongana in Isis) as the type species of Eudrastus. Onchidium tonganum is regarded here as a junior subjective synonym of Peronia peronii, so Eudrastus is a junior subjective synonym of Peronia. Britton (1984: 182-183) suggested that Eudrastus should be regarded as a junior synonym of Peronia because it seemed to be based on "unimportant characters."
John Edwards Gray (1847: 179) attributed the authorship of Peronia to Blainville (with an erroneous date of 1825) but, most importantly, gave its modern definition to Peronia by restricting it to six species of slugs with "radiating processes" on the back (JE Gray 1850: 117): P. alderi, P. ferruginea, P. mauritiana, P. peronii, P. punctata, and P. tongana. All those names refer to true Peronia slugs with dorsal gills. JE Gray (1850: 117) restricted Onchidium to Buchannan's O. typhae and included all the other marine species without dorsal gills in a new genus Onchidella. JE Gray's (1850) clarity only lasted for a few years. Indeed, Adams and Adams (1855: 234) pointed out that Peronia slugs differ from Onchidium and Onchidella because of "arbusculiform and other appendages of the mantle, which have sometimes been mistaken for gills." Because they did not believe that gills were distinct from other dorsal papillae, Adams and Adams (1855: 234) classified in Peronia some names that belong to both Peronia (P. ferruginea, P. mauritiana, P. peronii, P. punctata, P. tongana) and to Onchidella (O. celtica, O. indolens, O. marginata, and O. parthenopeia).

JE Gray's (1850) classification was adopted by Keferstein (1865a) but, until Labbé's (1934a) work, all subsequent authors ignored the genus Peronia and simply used the genus Onchidium for slugs with and without dorsal gills (Stoliczka 1869; Semper 1880-1885; Plate 1893; Bretnall 1919; Hoffmann 1928). Stoliczka (1869: 100-102), who was the first one to re-examine live slugs of O. typhae since Buchannan (1800), firmly argued that slugs with "dorsal tufts" were anatomically so similar to Onchidium and Onchidella that only one name, Onchidium, was needed. Stoliczka (1869: 98) also clarified that O. typhae is not a terrestrial species but that, instead, it lives in "damp places, generally close to tanks or ditches, especially those which are supplied during high tide with brackish water." Stoliczka's (1869) strong influence can be seen in Semper's (1880-1885) study of the onchidiids from the Philippines (and other parts of the Indo-West Pacific) in which all onchidiids are in Onchidium, with the exception of a single species in his new genus Onchidina Semper, 1882; for a collation of Semper's work, see Johnson (1969). Plate (1893) adopted a classification with five genera, but the four species of slugs with dorsal gills recognized by Plate are classified in Onchidium with thirteen species of slugs without dorsal gills. Hoffmann (1928) adopted a classification with six genera, six species of slugs with dorsal gills being classified in Onchidium with 34 species without dorsal gills.

Then, suddenly, in 1934, the number of onchidiid taxon names for slugs with dorsal gills dramatically increased. Based on the onchidiid collection at the Paris Museum, Labbé (1934a) created fourteen new species-group names for slugs with dorsal gills (all but one name are species names) and four new generic names: Lessonia (later replaced by Lessonina), Paraperonia, Quoya, and Scaphis. Below, the nomenclatural status of Labbé's generic names is justified first (they all are junior synonyms of Peronia), followed by opinions in the secondary literature.

The generic name Paraperonia was created by Labbé (1934a: 196) for four species similar to Peronia but with intestinal loops of type V (instead of type I). The type species is Paraperonia gondwanae, by subsequent designation (Starobogatov 1976: 211).
Labbé's description of Paraperonia gondwanae was based on 38 individuals with intestinal loops of types I and V which belong to different species. The application of the name P. gondwanae is clarified through the designation of a lectotype (see P. verruculata): Paraperonia gondwanae is a junior synonym of Peronia verruculata, and Paraperonia is a junior synonym of Peronia.

The generic name Scaphis was created by Labbé (1934a: 203) for nine species similar to Peronia but supposedly with an oblique, almost vertical hyponotum. The type species is Onchidium astridae, by subsequent designation (Starobogatov 1976: 211). Onchidium astridae is a junior synonym of Peronia verruculata, and Scaphis is a junior synonym of Peronia.

Lessonia Labbé, 1934a is objectively invalid because it is the junior homonym of Lessonia Swainson, 1832 [Aves]. Starobogatov (1976: 211) replaced it by Lessonina. Labbé (1934a: 213-216, figs 48-50) described Lessonia based on a single species, Onchidium ferrugineum Lesson, 1831a, of which he examined no other material than the four syntypes (MNHN-IM-2000-22951). The examination of the three remaining syntypes (one syntype was lost by or after Labbé) revealed that the lectotype (Goulding et al. 2018b: 75) belongs to a Peronia species and that the two paralectotypes belong to Wallaconchis ater (Lesson, 1831a). Both Lesson's original description of Onchidium ferrugineum and Labbé's re-description of Lessonina ferruginea are a confusing combination of traits that characterize species from two distinct genera. For instance, the dorsal gills mentioned by both authors are characteristic of Peronia, while the absence of an accessory penial gland mentioned by Labbé (even though there is a penial gland in the lectotype) is characteristic of Wallaconchis. Thanks to the designation of a lectotype with dorsal gills, the name Onchidium ferrugineum clearly applies to a Peronia species, and Lessonina becomes a junior synonym of Peronia.

Starobogatov (1976: 211) created Quoyella as a replacement name for Quoya Labbé, 1934a, which he treated as a junior homonym of "Quoya Deshayes, 1843" [Mollusca, Gastropoda, Planaxidae]. In the second edition of Lamarck's Histoire naturelle des animaux sans vertèbres, Deshayes indicates that he originally thought of creating a new genus Quoya but that, after all, he decided not to (Deshayes and Milne-Edwards 1845: 236). Deshayes still used the binomen "Planaxis decollata Quoy" (Deshayes and Milne-Edwards 1845: 238). However, in the Explication des planches of his Traité élémentaire de conchyliologie, Deshayes (1853: 50) used Quoya for two valid species names: Quoya decollata and Quoya grateloupi. Regardless, according to Gray (1847: 138), the generic name Quoya by Deshayes is an incorrect subsequent spelling of his own Quoyia JE Gray, 1839. As an incorrect subsequent spelling, Quoya Deshayes is not available (ICZN 1999: Article 33.3) and, as a result, Quoyella is an unnecessary replacement name. Ironically, Gray (1847: 138) indicated that he originally found the generic name Quoyia in a manuscript by Deshayes in 1830 ("Quoyia, Desh. MSS. 1830; Gray, 1839 (...) Quoya, Desh. 1843"). According to Baker (1938: 87), Quoya Agassiz, 1862 [Coelenterata] is another homonym of Quoya Labbé, 1934a. However, the spelling of that generic name is not Quoya but Quoyia (Agassiz, 1862: 173). So, Quoyia Agassiz, 1862 is a junior homonym of Quoyia Gray, 1839, but Quoya Labbé, 1934a is not a junior homonym of Quoyia.
Quoya indica, type species of Quoya by monotypy, is regarded here as a nomen dubium even though it applies to a species with dorsal gills and thus belongs to Peronia (see general discussion).

Nothing is ever simple in onchidiid taxonomy. Indeed, Labbé (1935a, b) also described what he called "microgills" in Elophilus Labbé, 1935a, a name preoccupied by Elophilus Meigen, 1803 (Diptera) and replaced by Labbella Starobogatov, 1970. Labbé's (1935a, b) microgills consolidated the old idea of a gradual continuum between regular dorsal papillae and dorsal gills. So, for instance, Marcus and Marcus (1960: 875) argued that one cannot say for sure whether a papilla is a dorsal gill or not. However, Dayrat et al. (2016, 2019d) demonstrated that there are no gills at all (not even microgills) on the notum of the type material of the type species of Labbella, which actually belongs to Onchidium stuxbergi (Westerlund, 1883). Labbella is a junior synonym of Onchidium. Contrary to regular papillae, dorsal gills are distinctively branched, which is striking if specimens are fully relaxed before preservation but otherwise difficult to see. Finally, note that Labbé (1935b: 320) claimed that he observed rudimentary eyes on dorsal gills, which, to our knowledge, has never been confirmed.

Labbé (1934a: 187, 188) rightly recognized the importance of dorsal gills for classification, and he separated all five genera of slugs with dorsal gills from all other onchidiids. According to Labbé, onchidiids deserved their own order, the Silicodermatae, composed of two suborders: Dendrobranchiatae (onchidiids with dorsal gills) and Abranchiatae (onchidiids without dorsal gills). Our phylogenetic analyses clearly demonstrate that all species of slugs with dorsal gills belong to a single clade, and that only one generic name (Peronia) is necessary (Figs 2-4). However, the species of slugs with no dorsal gills do not form a natural group (Figs 2-4). In other words, the absence of dorsal gills is a plesiomorphic trait for the onchidiids and the presence of dorsal gills is a synapomorphy for the genus Peronia.

Labbé's (1934a: 187) distinction between the tribes Peroniidae (Peronia and Paraperonia) and Scaphidae (Scaphis, Lessonina, Quoya) based on the orientation of the hyponotum (horizontal versus oblique) is meaningless. This trait obviously varies depending on preservation, and Labbé exclusively studied preserved material from the collections of the MNHN without access to live animals. Labbé's (1934a: 187) distinction between Peronia and Paraperonia based on the intestinal types (type I in Peronia and type V in Paraperonia) is unwarranted because the Peronia species with intestinal loops of type V are not more closely related to each other than to the species with intestinal loops of type I (Table 4, Figs 2-4). Also, Labbé often made mistakes with respect to intestinal types: for instance, the type material of Paraperonia gondwanae includes individuals with loops of both types I and V, even though Labbé described it as a species with loops of type V. Labbé asserted that the position of the pneumostome and the size of the muscular sac differ between Peronia and Paraperonia. However, the position of the pneumostome varies between individuals and is not consistently on the right side of the median axis in the species he classified as Paraperonia. Labbé's (1934a: 187) distinction between Scaphis, Quoya, and Lessonina is also unwarranted. Again, the position of the pneumostome (on the right of a median line in Scaphis according to Labbé) varies between individuals.
Labbé's re-description of Lessonina ferruginea (the type species of Lessonina, by monotypy) was based on individuals of two different species (see above). The male opening of the lectotype, which bears dorsal gills, is on the left of the right ocular tentacle, exactly as in all Peronia species, while the male opening of the two paralectotypes, which belong to Wallaconchis ater, is under the right ocular tentacle (Goulding et al. 2018b: 75). Labbé (1934a: 216, fig. 51) described a double male opening in Quoya indica (the openings of the penis and of the accessory penial gland being supposedly separated), but this could not be confirmed in the type material. Regardless, male openings occasionally appear separated due to preservation (when the vestibule is everted), and that is by no means a trait of generic value.

Authors later completely rejected the idea that the presence or absence of dorsal gills could be of any use in onchidiid classification (e.g., Marcus and Marcus 1960; Starobogatov 1976). Britton (1984: 180) even asserted that "the division of the group into two subordinate taxa based on this character is no longer admissible." As for the status of Labbé's (1934a, 1935a) generic names for slugs with dorsal gills, authors were not in agreement. Marcus and Marcus (1970: 213) regarded Peronia and Paraperonia "at most as subgenera." Starobogatov (1976) regarded all names as valid: Lessonina, Paraperonia, Peronia, Quoyella (unnecessary replacement name for Quoya), Scaphis, and Labbella (supposedly with micro-gills). Britton (1984: 182-183) suggested that Paraperonia, Eudrastus, and Scaphis should be regarded as junior synonyms of Peronia because they seemed to be based on "unimportant characters," but treated Labbella (supposedly with micro-gills), Lessonina, and Quoyella (for Quoya) as valid. In a recent review of the application of onchidiid generic names, Dayrat et al. (2019d: 1861) made it clear that all slugs with dorsal gills belong to one clade and that Eudrastus, Lessonina, Onchis, Paraperonia, Peronia, Quoyella (for Quoya), and Scaphis all refer to that clade. Note that the application of Lessonina was fully clarified when a lectotype was designated for its type species Onchidium ferrugineum (Goulding et al. 2018b: 75).

Peronia peronii (Cuvier, 1804)

Although Cuvier's (1804: pl. 6) illustrations are truly remarkable, they are flipped at 180° because, for instance, the heart and the male anterior parts are on the left. Something must have happened during the engraving or the printing. Hoffmann (1928: 71) referred to Mauritius as the "Typ-Lokalität" of Onchidium peronii but did not formally designate a lectotype for O. peronii. In case of syntypes, "the place of origin of the lectotype becomes the type locality of the nominal species-group taxon, despite any previously published statement of the type locality" (ICZN 1999: Article 76.2). The original description of Onchidium peronii was based on two specimens collected by Péron: the lectotype from Mauritius, of which the internal anatomy was illustrated in detail by Cuvier (1804: pl. 6), could not be located and is likely lost; the paralectotype from Timor (MNHN-IM-2000-22938) was very briefly mentioned by Cuvier (1804: 39), who merely wrote that another specimen was brought from Timor by Péron and that Onchidium peronii is present "at the two extreme ends of the Indian Ocean." The paralectotype (60/40 mm) is well preserved even though dorsal papillae with eyes cannot be counted because their color faded.
It is obvious that Cuvier did not actually use it for his detailed anatomical description and illustrations on plate 6, because it was never opened prior to the present study, except for a tiny cut near the lung. It was carefully opened on its side to draw a dorsal view of its intestinal loops of type I (Fig. 9A) and to measure the length (4.5 mm) of the spine of the accessory penial gland (by transparency, so that the male copulatory apparatus was not dissected). Blainville also mentioned the name Peronia mauritiana in his Manuel de Malacologie et de Conchyliologie (Blainville 1825: 490) and in the article "Péronie" of the Dictionnaire des Sciences Naturelles (Blainville 1826: 523). The illustration published by Blainville (1827: pl. 46, fig. 7) in the Atlas of the Manuel differs from that published by Cuvier (1804: pl. 6, fig. 1). The specimen used by Blainville for that illustration could not be located, which does not matter much since it does not have any name-bearing function. However, it also means that, because there are two species of Peronia in Mauritius, Blainville's (1827: pl. 46, fig. 7) illustration cannot be positively identified at the species level.

It is unclear how many specimens Quoy and Gaimard (1832: 210-211, pl. 15, figs 17, 18) examined for the original description of Onchidium tonganum. They may have examined more than one individual. Regardless, it is clear that Onchidium tonganum applies to a Peronia species because the notum of the lectotype bears gills, which were also illustrated in the original description. Its notum also bears fifteen dorsal papillae with eyes, but others probably faded. The lectotype was dissected prior to the present study. The accessory penial gland and the penial apparatus are missing (pieces of the deferent duct remain). The intestinal loops are of type I with a transitional loop between 2 and 3 o'clock (Fig. 9B). Quoy and Gaimard (1832: 216) briefly mentioned the presence of O. tonganum in Manokwari, West Papua, Indonesia, but that record could not be confirmed (although P. peronii is known to be present there because Manokwari is the type locality of O. punctatum).

Lectotype and paralectotypes (Onchidium punctatum). Indonesia • lectotype, hereby designated, 70/60 mm; "dans le port de Dorey" [in the port of Dorey] (locality from the original description); MNHN-IM-2000-22966. The old label of the lectotype does not mention "Dorey" (the locality is only given in the original description), but it clearly indicates that the lectotype was part of the type series of Onchidium punctatum. The lectotype bears dorsal gills, as illustrated by Quoy and Gaimard (1832: pl. 15, figs 27, 28). It was dissected prior to the present study, likely by Labbé (1934a: 203-204), and its penis is missing, but its intestinal loops are of type I with a transitional loop at 3 o'clock (Fig. 9C). Its spine of the accessory penial gland, still in place in the animal, is 3.7 mm long. A second jar was found with two paralectotypes (MNHN-IM-2000-33701). An old label for that second jar says "Onchidium piquetée, Q G. MM Quoy Gaimard, 1829" with no locality data. The name "Peronia" was added on the label. The number "51" also appears on another old label, which corresponds to an unknown numbering system. There also is a more recent label saying "Peronia picta QG, M. Quoy et Gaimard, 1829." Quoy and Gaimard did not describe any onchidiid species with the specific name picta. However, the French vernacular name of Onchidium punctatum in Quoy and Gaimard's (1832: 215) original description is "Onchidium piquetée." So, it is likely that these two additional specimens were part of the type series of Onchidium punctatum.
Both paralectotypes (35/25 and 32/30 mm) bear dorsal gills. The largest paralectotype was dissected prior to the present study, possibly by Labbé (1934a: 203, 204), and its penis is missing, but its accessory penial gland remains. The small paralectotype was not dissected. Labbé (1934a: 203) listed three individuals from Port-Dorey which he (implicitly) regarded as part of the original series of Onchidium punctatum. Labbé gave the measurements for only two individuals: "a" (35/25 mm), likely the largest paralectotype; "b" (77/56 mm), likely the lectotype. In addition, in his re-description of Scaphis punctata, Labbé (1934a: 204-205) mentioned two individuals identified as Peronia and collected by Quoy and Gaimard in 1829, from an unknown locality. Those two individuals are likely within another jar found at the MNHN with the old number "48" and a label saying "Peronia M. Quoy et Gaimard 1829." There is no reason to consider that those two unidentified individuals from the collection (Goulding et al. 2018b: 63), or one specimen collected by Raffray in 1878 (with numbers "22" and "75" on the label), are part of the type series.

Lectotype and paralectotypes (Paraperonia fidjiensis). Fiji • lectotype, hereby designated, 60/50 mm; 1876; Filhol leg.; MNHN-IM-2000-33692. No jar clearly labeled as the type material of Paraperonia fidjiensis was found at the MNHN, but the lectotype could be traced; the six paralectotypes could not be found at the MNHN. Labbé (1934a: 197-198, figs 9-11) described Paraperonia fidjiensis based on seven individuals from Fiji ("Iles Fidji") collected by Filhol (Henri Filhol [1843-1902]) in 1876 and with the following sizes: 75/50 mm for the six "a" individuals and 70/50 mm for a seventh "b" individual. Two jars of material collected in Fiji by Filhol in 1876 were found at the MNHN. The first jar, labeled as "Peronia [written over Oncidium] I. Fidji Mr. Filhol n°11 1876" and "71," contains a single Peronia specimen which, given its size (60/50 mm), very likely is part of the type series of P. fidjiensis, and which is designated as the lectotype (MNHN-IM-2000-33692). Its radula and all reproductive parts are missing. Its intestinal loops are clearly of type I, with a transitional loop at ~ 1 o'clock (Fig. 9E). The second jar, labeled as "Oncidiella I. Fidji Mr. Filhol n°11 1876" and "101," contains four poorly-preserved specimens which do not even appear to belong to Peronia, with a size (less than 30 mm) not compatible with the original description of P. fidjiensis, and which, therefore, cannot be regarded as part of the type series.

Additional material examined (historical museum collections). Chagos Archipelago • 1 specimen, 95/65 mm; Ye Ye, Peros Banhos atoll; 24 Feb 1996; M Spalding leg. (from N Yonow's personal collection); exposed on shallow reef flat on rocks; MNHN-IM-2014-7992.

GenBank sequence. One COI sequence was obtained from GenBank (LC390402) for an individual identified as Peronia sp. and collected from Okinawa, Japan (Takagi et al. 2019), which is the northernmost confirmed locality for Peronia peronii.

Distribution (Fig. 6). Given that our fresh molecular samples of P. peronii from the West Pacific (Guam, Papua New Guinea) are conspecific with those from Mauritius, it is assumed here that all individuals with a long spine of the accessory penial gland belong to the same species. Strictly speaking, however, the presence of P.
peronii from places like Zanzibar, the Maldives, Nicobar Islands, West Papua, Timor, Palau, New Caledonia, and Tonga would still need to be validated with fresh material. Interestingly, but for unclear reasons, Peronia peronii seems to be recorded only from relatively small islands, the largest ones being Timor, New Caledonia, and Fiji. Even in Papua New Guinea, it was found on small islands close to the mainland but not on the mainland. Peronia peronii seems to be transported across vast distances, from the western Pacific Ocean to the western Indian Ocean, but does not seem to settle on the coasts of large land masses. We did not find it in any of the many localities we visited in the Philippines, Vietnam, Malaysia, Borneo, Sulawesi, Halmahera, Sumatra, etc. It is possible that we occasionally missed it in a few places (obviously we missed it in Timor and New Caledonia, where it is present), but it is unlikely that we missed it everywhere.

Habitat (Fig. 7). Live slugs of Peronia peronii are found in the rocky intertidal, like most other Peronia slugs. Many of our specimens were collected at night or just before sunrise, suggesting that P. peronii is, at least partly, a nocturnal species. This could explain why we missed it at some localities where we only collected during the day. Peronia peronii is not rare, but it is definitely not as common as some other species. The fact that collecting it at night seems necessary, at least in some localities, might explain why collections of P. peronii are not as abundant as collections of P. verruculata.

Color and morphology of live animals (Fig. 8). No picture of live animals was available for individuals from the West Pacific (Guam and Papua New Guinea). The description of the color of live animals is based on the Mauritius individuals. The dorsal notum is brown, with a greenish hue, light to dark, mottled with darker and lighter areas. The color of the dorsal papillae varies, as does that of the background itself. The ventral surface (foot and hyponotum) is yellowish-greenish and can change rapidly in any given individual. The ocular tentacles are brown-grey, like the head. The dorsal notum of live animals is covered by dozens of papillae of various sizes. Dorsal papillae can be particularly tall (easily up to 4 mm), even in preserved specimens, and are evenly distributed over the entire notum. Preserved, they are difficult to distinguish from retracted dorsal gills in the posterior half of the notum. Some papillae bear black dorsal eyes at their tip. The number of papillae with dorsal eyes is variable (15-20). The longest animals are 140 mm long in Mauritius and 115 mm long in the West Pacific.

Digestive system (Figs 9-12). Examples of radular formulae are presented in Table 5. The median cusp of the rachidian teeth is approximately 75 μm long. The hook of the lateral teeth is approximately 160-200 μm long. The intestinal loops are of type I, with a transitional loop oriented between 12 and 3 o'clock.

Reproductive system (Figs 13-16). In the anterior (male) parts, the muscular sac of the accessory penial gland is at least 30 mm long in specimens from Mauritius and at least 25 mm long in specimens from the West Pacific (Guam and Papua New Guinea). Note that, in some additional museum specimens, the muscular sac was only 20 mm long, and even, exceptionally, 17 mm long (see remarks below). The hollow spine of the accessory penial gland is at least 3 mm long, both in specimens from Mauritius and in specimens from the West Pacific (Guam and Papua New Guinea; e.g., MNHN-IM-2013-12500).
Its diameter at the conical base is approximately 400 μm in specimens from Mauritius and between 400 and 500 μm in specimens from the West Pacific (Guam and Papua New Guinea). Its diameter at the tip measures 160-170 μm in specimens from the West Pacific and 180-200 μm in specimens from Mauritius. Note that, in some additional museum specimens, the spine was only 3 mm long (see remarks below). The retractor muscle is shorter or longer than the penial sheath and inserts near the heart. Exceptionally, the retractor muscle can even be vestigial ([5472] MNHN-IM-2013-14052). Inside the penial sheath, the penis is a narrow, elongated, soft, hollow tube. Its distal end bears conical hooks which are less than 50 μm long.

Diagnostic features (Table 4). Peronia peronii is the only Peronia species which is easy to identify anatomically. Indeed, it is characterized by a very long spine (at least 3 mm) of the accessory penial gland, which is distinctive and easily accessible (one just needs to pull on the flagellum of the penial gland or even, in some cases, measure the spine by transparency). The two longest spines were found in the lectotype of P. fidjiensis (MNHN-IM-2000-33692) from Fiji (5 mm) and in an old historical specimen (ANSP 304860) from the Maldives (4.8 mm). Peronia peronii is additionally characterized by a unique combination of anatomical traits: muscular sac longer than 20 mm, intestinal loops of type I (with a transitional loop oriented between 12 and 3 o'clock), and retractor muscle inserting near the heart. Also, no individual larger than 80 mm was found in any other Peronia species so far. Animal size can be useful when several Peronia species are found at the same site. For instance, the two individuals of P. verruculata (unit #1) found at station PM 12 (near Madang, Papua New Guinea) are 35 and 38 mm long, while the individual of P. peronii from the same station is 80 mm long. The type I of its intestinal loops (with a transitional loop oriented between 12 and 3 o'clock) is only shared with P. okinawensis, a species endemic to Japan to which it is most closely related.

Remarks. Synonymies. There is no doubt that Cuvier's (1804) Onchidium peronii applies to the species described here, based on animal size alone. According to Cuvier, the lectotype from Mauritius measured approximately 140 mm long, and our molecular data show that all individuals of that size from Mauritius belong to a single species (Table 4). Cuvier's (1804: pl. 6) detailed anatomical description and drawings are exclusively based on the lectotype (he did not dissect the paralectotype from Timor). Cuvier (1804: 48, pl. 6, fig. 8) described the spine of the accessory penial gland as a "very sharp, brown spike" but unfortunately did not provide its length. However, Cuvier's (1804: pl. 6, fig. 4) illustration of the intestinal loops is identical to some of our Mauritius individuals here: intestinal loops of type I with a transitional loop at 3 o'clock. The paralectotype of Onchidium peronii from Timor (MNHN-IM-2000-22938) is only briefly mentioned by Cuvier in the original description. The length (4.5 mm) of the spine of its accessory penial gland (checked for the present study) indicates that it also belongs to P. peronii. Its intestinal loops are also identical to those of the lectotype (Fig. 9A). Peronia mauritiana is a junior objective synonym of Onchidium peronii because they share the same name-bearing type.
The lectotype was dissected prior to the present study and most of the male copulatory parts are missing (only the deferent duct remains). As a result, the length of the spine of the accessory penial gland, which is diagnostic of P. peronii, cannot be checked. Labbé (1934a: 191) listed the lectotype in the material he examined for his re-description of P. tongana, but he did not point out that it was part of the type material of O. tonganum and he did not describe it anatomically. It is possible but not certain that Labbé dissected the lectotype. At any rate, its intestinal loops are of type I with a transitional loop between 2 and 3 o'clock (Fig. 9B). Both the length (100 mm) of the lectotype as well as its intestinal loops indicate that Peronia tongana is a junior synonym of P. peronii (Table 4). Onchidium punctatum is regarded here as a junior synonym of P. peronii because the length (3.7 mm) of the spine of the accessory penial gland of the lectotype (MNHN-IM-2000-22966) is only compatible with P. peronii (Table 4). The length of the lectotype (70 mm, preserved) is also far more compatible with P. peronii than with P. verruculata, another species found in West Papua. Our many individuals of P. verruculata are all less than 60 mm long (alive), except a single individual from New Caledonia (73 mm alive). Given their small size, the two paralectotypes (MNHN-IM-2000-33701) likely belong to P. verruculata (unit #1) instead of P. peronii, which would not be surprising at all because the type locality of O. ferrugineum (a junior synonym of P. verruculata) is the same as that of O. punctatum (Manokwari, West Papua, Indonesia). At the end of the description of O. punctatum, Quoy & Gaimard (1832: 216) mention in passing that they also found Onchidium tonganum in Port Dorey (i.e., Manokwari, West Papua, Indonesia) and they even point out that local inhabitants know how to distinguish both species. Both O. punctatum and O. tonganum are regarded here as junior synonyms of P. peronii. However, it remains true that there are two sympatric Peronia species in West Papua, P. verruculata and P. peronii, which can be distinguished in the field based on animal length (except, of course, for individuals measuring less than 60 mm long). Bergh (1884a: 129-142, pl. IV, figs 25-27, pl. V, figs 1-27, pl. VI, figs 5-18, 20, 21) described Onchidium melanopneumon from a single individual (65/40 mm) from Fiji. This specimen was completely dissected by Bergh and is now empty (NHMUK 1888.5.30.39). Onchidium melanopneumon applies to a Peronia species due to the presence of dorsal gills, and the length (4 mm) of the spine of the accessory penial gland indicates that it applies to P. peronii (Table 4). Its intestinal loops (Bergh 1884a: pl. V, fig. 27) are also similar to those found in P. peronii, although the transitional loop is slightly past the 3 o'clock limit. As a result, O. melanopneumon is regarded as a junior synonym of P. peronii. Bergh (1884b: 263; 1885: 176) briefly mentioned O. melanopneumon again in a comparative study on the affinities of onchidiids. Labbé (1934a: 197-198, figs 9-11) described Paraperonia fidjiensis based on seven individuals from Fiji, one of which could be found and is designated as the lectotype (MNHN-IM-2000-33692). Because all reproductive parts are missing, the length of the spine of the accessory penial gland cannot be checked. However, according to Labbé (1934a: 197, fig. 10), the spine of the accessory penial gland is 5 mm long, which is only compatible with P.
peronii (Table 4), and is the longest spine known in P. peronii. The intestinal loops of the lectotype of P. fidjiensis are clearly of type I, with a transitional loop oriented at ~ 1 o'clock (Fig. 9E), even though Labbé (1934a: 197) erroneously described them as type V, which is a mistake he often made. Given the length of the lectotype (60 mm) and, most importantly, the length of the spine of the accessory penial gland, P. fidjiensis is regarded as a junior synonym of P. peronii. In the seventh volume of the second edition of Lamarck's Histoire naturelle des animaux sans vertèbres, which was revised by Deshayes and Milne-Edwards (1836), P. mauritiana is proposed as a synonym of Onchidium peronii. However, as a reference for P. mauritiana, the authors mentioned the illustration published by Blainville (1827: pl. 46, fig. 7) in the Atlas of his Manuel, which differs from that published by Cuvier (1804: pl. 6, fig. 1) and may or may not refer to Peronia mauritiana. Adams and Adams (1855: 235) merely listed Peronia mauritiana, P. peronii, P. punctata, and P. tongana as Peronia species names. Note that for P. peronii, they refer to Savigny's illustrations of individuals from the Red Sea misidentified as P. peronii by Audouin instead of Cuvier's original description of P. peronii, which means that Adams and Adams refer to P. verruculata instead of P. peronii (see remarks on P. verruculata). Adams and Adams (1855: pl. LXXXI, fig. 3) also reproduced the original illustration of O. tonganum by Quoy and Gaimard (1832: pl. 15, fig. 17). The illustration (fig. 2) mentioned by Bergh was actually published after Cuvier's death in the Disciples' edition of the Règne Animal, which was accompanied by beautiful illustrations; the authorship for the mollusks should be attributed to Deshayes (1836-1845). The record of Onchidium peronii from Natal, South Africa (Krauss 1848: 72) likely is a record of P. madagascariensis, the only Peronia species known in South Africa so far (see remarks on P. madagascariensis). However, P. verruculata (unit #5) could also be present in northeastern South Africa because its southernmost known locality is in Maputo, Mozambique (ca. 26°S). This record by Krauss was mentioned again by a few authors (Sturany 1898: 73; Collinge 1910: 171; Connolly 1912: 224-225; Connolly 1939: 454). The records of Onchidium peronii from Mozambique by Martens (1879: 735) in Ibo Island (ca. 12°21'S) and Inhambane (ca. 23°52'S) are within the geographical range of both P. verruculata (unit #5) and P. madagascariensis (Fig. 6). It is not possible to know to what species Martens was referring; this record by Martens was mentioned twice by Connolly (1912: 225; 1939: 454). Semper (1880: 258-260, pl. XIX, figs 2, 9, pl. XXII, figs 1, 2, 10) referred to huge onchidiid slugs (from 50 to 105 mm, preserved) as Quoy and Gaimard's (1832) Onchidium tonganum and merely suggested, with a question mark, that O. peronii could refer to the same species. Semper (1880: 258) listed five geographical records for O. tonganum: Tonga and West Papua (as Port Dorey), from Quoy and Gaimard (1832); Mauritius, based on some material from the Vienna and Kiel museums; Samoa, based on some material from the Museum Godeffroy; and Bohol, Philippines, based on his own collections. Semper (1880: 258) indicated that the specimens he examined were from 50 to 105 mm long, preserved, and that the smallest individual was found in Mauritius. His anatomical description perfectly matches the anatomy of P. peronii.
In particular, a spine of an accessory penial gland measuring 4 mm long is only compatible with P. peronii (Table 4). However, he did not clearly indicate whether he observed a long spine in every specimen. It cannot be excluded that he measured the length of the accessory penial gland spine only in a specimen from Mauritius. Therefore, the records of P. peronii in Bohol and in Samoa are regarded here as questionable, even though it is very possible that P. peronii lives in Samoa, given that it is so close to Tonga (800 km) and Fiji (1100 km). Semper (1882: 290) thought that Cuvier's (1804) original description of P. peronii was problematic because his drawing of the dorsal notum did not match the internal anatomy. Because Semper was convinced that Cuvier used specimens that did not belong to the same species, he thought that the name P. peronii should not be used. Plate (1893: 172-173) disagreed with Semper even without examining the type material of P. peronii. It is demonstrated here that the two type specimens described by Cuvier as P. peronii both belong to the same species (see above). Semper (1882: 268) was undecided about the nomenclatural status of what he called Onchidium mauritianum (then a new combination), which he listed as one of the names for which a "closer inspection of the originals" was needed. Like most authors, he cited Blainville's (1827: pl. 46, fig. 7) illustration (which is not part of Blainville's original description) as a reference without realizing that it may or may not correspond to P. mauritiana, a junior objective synonym of P. peronii. Semper (1882: 289) was also undecided about the status of Onchidium punctatum, for which he erroneously thought that the type locality was unknown. He suggested that it might refer to the same species as Onchidium tumidum, which is not possible because O. tumidum was recently transferred to Paromoionchis (Dayrat et al. 2019a). Tapparone Canefri (1883: 214) listed all of Semper's geographic records for Peronia tongana with no new material or anatomical observations (see above). Tapparone Canefri (1883: 214) also regarded Peronia punctata as a valid species name, but with no other reference or material than the original description by Quoy and Gaimard (1832). Tapparone Canefri's suggestion that Peronia punctata could refer to the same species as Onchidium tumidum must be rejected because O. tumidum was recently transferred to Paromoionchis (Dayrat et al. 2019a). Smith (1884: 92) mentioned Onchidium (Peronia) punctatum from Albany Island and Thursday Island, in the Torres Strait, without any description. This is likely a record of Peronia verruculata (unit #1), the only species thought to be present in the Torres Strait, although our study does not include any fresh material from the Torres Strait and P. peronii could also live there (Fig. 6). Note that Thursday Island also happens to be the type locality of Scaphis viridis, a junior synonym of Peronia verruculata. Bergh (1884a: 142-148, pl. VI, fig. 19, pl. VII, figs 1-6) described as Onchidium tonganum a specimen from the collections of the Copenhagen Museum which was collected in the Nicobar Islands during the Galathea Expedition (station 305). That specimen (85/55 mm), dissected by Bergh, is in a jar (NHMD 613753) with a second specimen (70/50 mm) which is still entire and not dissected by Bergh. 
Both specimens were re-examined for the present study, although Bergh's measurement of the penial gland spine in the largest specimen (4.25 mm) could not be checked because internal organs are missing. Given the specimen sizes, their intestinal loops (type I with a transitional loop at 3 o'clock in the second specimen), and the spine of their accessory penial glands (4.25 mm in the largest specimen according to Bergh, and 4 mm in the second specimen), those two specimens belong to P. peronii. Plate (1893: 172-173, pl. 12, figs 85, 87, 91) re-described Onchidium peronii based on at least one specimen from Mauritius for which he did not provide any size. However, given the length of the spine of the accessory penial gland (7 mm long), there is no doubt that he examined P. peronii. It is possible that he included a part of the duct of the accessory penial gland in that measurement because the longest spine observed in the present study was 5 mm, in the lectotype of P. fidjiensis (MNHN-IM-2000-33692). According to Plate, the retractor muscle inserts near the central nervous system, which does fit within the variation observed here (Table 4). Plate listed several synonyms: Peronia mauritiana, Onchidium tonganum, O. melanopneumon, and possibly (with a question mark) P. corpulenta. Note that Plate (1893: 172) rightly regarded O. melanopneumon as a junior synonym of O. peronii but for a weak reason (a similar pigmentation of the lung). These synonymies are all accepted here, except for P. corpulenta which is regarded as a nomen dubium (see general discussion). Plate (1893) did not comment on O. punctatum. Godwin-Austen (1895: 443) listed Onchidium mauritianum in Little Nicobar. It is impossible to know what species was referred to. However, P. peronii (of which P. mauritiana is a junior synonym) is known to be present in the Nicobar Islands. Onchidium punctatum is one of the eight onchidiid species mentioned by Hedley (1909: 369) from Queensland, Australia, without any reference to any material. It is impossible to know what species Hedley refers to. Our data show that there are two Peronia species in Queensland (Fig. 6). The record of Onchidium peronii from Durban, Natal, South Africa by Collinge (1910: 171) likely is a record of P. madagascariensis, the only Peronia species known in South Africa so far (see remarks on P. madagascariensis). However, P. verruculata (unit #5) could also be present in northeastern South Africa because its southernmost known locality is in Maputo, Mozambique (ca. 26°S). This record was mentioned again by Connolly (1912: 225; 1939: 454). According to Connolly (1912: 224-225), Onchidium peronii is a valid name and Peronia mauritiana (misspelled mauritziana) and O. tonganum are its synonyms. The references listed by Connolly are all commented on above. Let us briefly emphasize here, however, that the localities of Onchidium peronii mentioned by Connolly in South Africa and Mozambique are problematic. Connolly (1939: 454) later admitted that "it is open to question (...) whether the true O. peronii Cuv. really exists in South Africa." Connolly (1939: 453), who did not cite Labbé's work, considered that Peronia was a subgenus of Onchidium and should include onchidiid slugs with dorsal gills. Vayssière (1912: 125-129) recorded seven individuals of Peronia peronii shipped to him from Moucha Islands (Djibouti) by Charles Gravier and Félix Pierre Jousseaume, two of the people who also collected many specimens studied by Labbé.
Vayssière mostly focused on the description of the radula, which is not useful to identify species. Vayssière reported a wide range of animal sizes (from 10 to 80 mm long and from 6 to 60 mm wide). Thus, it is very possible that he examined more than one species. Instead of P. peronii, which has never been positively recorded from Djibouti, Vayssière likely examined P. verruculata, P. madagascariensis, or both (Fig. 6). His specimens of large size most likely were P. madagascariensis because P. verruculata individuals rarely are longer than 60 mm (Table 4). Note that the number of rows of teeth and the number of teeth per half row mentioned by Vayssière (95 to 100 rows on average) are higher than what was observed here, although they are more compatible with P. madagascariensis than P. verruculata (Table 5), acknowledging that radular formulae are expected to vary. It is not possible to determine to what species Odhner (1919: 42) was referring solely based on his brief, external description of Onchidium peronii from Toliara, Madagascar. However, his material, dissected here, clearly belongs to P. peronii: a single individual (65/50 mm) is characterized by intestinal loops of type I with a transitional loop at 3 o'clock, a spine of the accessory penial gland 3 mm long, and a muscular sac 25 mm long (SMNH 180381). Bretnall (1919) uncritically took for granted every species record ever published, without considering that species often are misidentified. Bretnall (1919) accepted O. peronii as a valid name, with Onchidium tonganum and Peronia mauritiana as synonyms, and P. corpulenta as a potential synonym (with a question mark). The references listed by Bretnall (1919: 311-312) for O. peronii are all commented on above already. However, Bretnall's (1919: 313) list of geographic records needs to be discussed, especially because Bretnall did not mention the key characters supporting a proper identification of P. peronii (Table 4). The presence of P. peronii in Samoa, which Bretnall obtained from Semper (see above), should not be taken for granted, even if it is quite possible. The presence of P. peronii in the Buccaneer Archipelago, northern Western Australia (16°S, 123°E), based on specimens from the Australian Museum, should not be taken for granted, even though it is quite possible. The identification of P. peronii in the Santa Cruz Islands, Solomon Islands, based on specimens from the Australian Museum, should not be taken for granted (specimens may have been misidentified), even if the Santa Cruz Islands are within the known geographical range of P. peronii (Fig. 6). Bretnall (1919: 315-316) also regarded O. melanopneumon as a valid name, for which he cited Bergh's (1884a) original description and its French summary by Joyeux-Laffuie (1885), and indicated Plate's (1893) proposed synonymy (with O. peronii) with a question mark. Bretnall (1919: 316) listed Lord Howe Island, off southeastern Australia (based on specimens from the Australian Museum), as a locality for O. melanopneumon, but without description of key characters. Thus, the presence of P. peronii in Lord Howe Island, which is 1350 km south of the southernmost known locality of P. peronii (New Caledonia), is not taken for granted here. As for Onchidium punctatum, Bretnall (1919: 316-317) followed Semper (1882: 289) and Tapparone Canefri (1883: 214) who both thought that it could be a synonym of Onchidium tumidum (see above), which is not possible because O. tumidum refers to a Paromoionchis species (Dayrat et al. 2019a).
Hoffmann (1928: 71), following most of Plate's (1893) nomenclatural decisions, accepted Peronia mauritiana, P. corpulenta, Onchidium tonganum, and O. melanopneumon as junior synonyms of O. peronii. Hoffmann, like other authors, did not mention the key anatomical characters that allow a reliable identification of P. peronii and uncritically accepted most geographical records published before him. As a result, his proposed distribution for O. peronii should not be taken for granted. For instance, the presence of P. peronii in Lord Howe Island, off southeastern Australia, obtained from Bretnall (1919), is questionable. Hoffmann (1928: 44) examined a specimen from the Nicobar Islands (NHMD 613753) which was originally mentioned by Mörch (1872a: 28; 1872b: 325; see above). Hoffmann (1928: 44-45) also provided several geographical records (Sumatra, Java, Marshall Islands, Kiribati, Fiji) for O. peronii based on material preserved at the SMNH in Stockholm. His material was re-examined and all records are confirmed. Hoffmann only dissected two individuals, one from Sumatra (SMNH 180354) and one from Kiribati (SMNH 180379). The other eighteen specimens were dissected for the present study. Eight large specimens (longer than 65 mm) examined by Hoffmann from Sumatra (SMNH 180354), Java (SMNH 180355), Kiribati (SMNH 180376, 180377, 180380, 180382, 180475), and Fiji (SMNH 180373) share the diagnostic characteristics of P. peronii: a spine of the accessory penial gland between 3 and 4 mm long, intestinal loops of type I with a transitional loop at 3 o'clock, and a muscular sac between 20 and 25 mm long (exceptionally 17 mm, SMNH 180354). Seven smaller specimens (between 15 and 37 mm long) examined by Hoffmann from the Marshall Islands (SMNH 180356), Fiji (SMNH 180374), and Kiribati (SMNH 180353, 180383, 180384) are immature: the anterior male reproductive parts are barely developed, and, if present, the spine of the accessory penial gland is still soft (SMNH 180353). Given their intestinal loops (type I with a transitional loop at 3 o'clock), they are regarded as individuals of P. peronii. In other species, individuals of that size are already fully mature. Two smaller specimens (between 28 and 37 mm long) examined by Hoffmann from Fiji (SMNH 180357, 180375) belong to P. peronii because of several characteristics (retractor muscle inserting near the heart, intestinal loops of type I with a transitional loop at 3 o'clock, a spine 3 mm long). Their muscular sacs (11 and 15 mm) are shorter than in other specimens, suggesting that they likely are not fully mature. Two specimens from Kiribati (SMNH 180378, 180478), poorly preserved, could not be confidently identified. Finally, the male reproductive parts are missing in a specimen from Kiribati dissected by Hoffmann (SMNH 180379), but its intestinal loops (type I with a transitional loop oriented between 1 and 2 o'clock) confirm that it belongs to P. peronii. Note that the locality of the specimen from Sumatra (SMNH 180354) is problematic. The label and Hoffmann's publication both say "Pulu Pasu, west coast of Sumatra," but there is no such place on the west coast of Sumatra. There are two small islands off the west coast of Sumatra called Pulau Asu (Hinako Islands) and Pulau Pasumpahan (south of Padang). There also is a small island called Pulau Pasu in the Riau Islands, but that archipelago is located north of Sumatra, in the South China Sea. So, it is unclear exactly where in Sumatra that specimen was collected.
O'Donoghue (1929: 833) reported one specimen (30/21 mm) of Peronia peronii from Port Taufiq, Suez, Egypt. A radular formula (65 × 72-1-72) is not enough to identify a Peronia species, and he most likely examined P. verruculata or P. madagascariensis (Fig. 6). Two names accepted as valid by Labbé (1934a) are regarded here as synonyms of Peronia peronii: P. tongana and P. fidjiensis. Labbé (1934a: 191) himself acknowledged that differences between P. peronii and P. tongana were weak. The traits that he mentioned (position of the pneumostome with respect to the anus, head longer than the foot) vary greatly due to preservation. Labbé (1934a: 197-198) did not compare Paraperonia fidjiensis with Peronia peronii, probably because he classified them in two distinct genera. However, there are no differences between the type material of P. fidjiensis and the type material of P. peronii. Labbé (1934a: 190) agreed with most authors that P. mauritiana and O. melanopneumon were synonyms of P. peronii. Like Plate (1893), Labbé (1934a: 190) thought that P. corpulenta was simply a potential synonym of P. peronii but in fact it is a nomen dubium (see general discussion). All references cited by Labbé for P. peronii and P. tongana have been commented on above, but the proposed distribution ranges need additional clarification. Labbé's (1934a: 190-191) re-description of P. peronii was based on one individual (100/70 mm) from Sumatra (MNHN-IM-2012-25150), one individual (90/70 mm) from the Seychelles (MNHN-IM-2012-25149), and ten individuals from the Red Sea (not found in the MNHN collections). At least one of those specimens belongs to P. peronii because of the length of the spine of the accessory penial gland, mentioned by Labbé as 6 to 7 mm. The specimens from Sumatra and the Seychelles were fully dissected by Labbé (the Sumatra individual is basically empty): the male parts are missing, and it is not possible to determine the type of intestinal loops. However, given their huge size, they most likely belong to P. peronii. The presence of P. peronii in the Red Sea is possible but, at this stage, questionable: the size mentioned by Labbé for those specimens (17/12 mm) strongly suggests that he did not examine P. peronii from the Red Sea. Those specimens from the Red Sea identified as P. peronii by Labbé could not be located at the MNHN (there are no specimens collected by Clot-Bey in the collections, and there are too many jars of specimens collected by Jousseaume to determine which jar corresponds to the species description in Labbé's monograph). Labbé's (1934a: 191-192, figs 4-7) re-description of P. tongana was based on one individual from Djibouti (Obock), one individual (85/60 mm) from the Seychelles (MNHN-IM-2012-25148), two individuals from New Ireland, and one individual from Tonga which happens to be part of the type series by Quoy and Gaimard (MNHN-IM-2000-22937) even though Labbé does not mention it. The specimen from the Seychelles was re-examined for the present study and, given its huge size, it is confirmed that it belongs to P. peronii: its intestinal loops are of type I, with a transitional loop at 3 o'clock; the male parts are missing. There are two specimens (60/50 mm) from New Ireland at the MNHN which could potentially be the two specimens mentioned by Labbé, but the collecting dates do not match. At any rate, it does not matter much since our fresh specimens demonstrate that P. peronii is present in New Ireland (Fig. 6).
The specimen from Obock could not be traced at the MNHN; there is a specimen (80/60 mm) which could possibly correspond to it, but it is a problematic specimen as it could also be a type specimen for P. gaimardi, and is now an empty notum (see below, remarks on the type material of P. gaimardi in P. verruculata). Thus, the presence of P. peronii in Djibouti is not accepted here and would need to be supported by positive evidence. Risbec (1935: 415) illustrated the eggs of an onchidiid individual from New Caledonia which he called "Oncidium tonga Q et G," clearly a spelling mistake for Onchidium tonganum Quoy & Gaimard, 1832. It is not possible to know what species Risbec was referring to because there are three Peronia species in New Caledonia (Fig. 6). White (1951: 241) reported a single specimen (53/38 mm) of Onchidium peronii from the Persian Gulf. The radular formula (88 × 88-1-88) is not enough to identify a Peronia species. White's record referred either to P. verruculata (unit #4) or P. madagascariensis (Fig. 6). In Japan, Baba (1958: 144) indicated that some specimens of Onchidium verruculatum from Tokara Islands, south of Kyushu (ca. 30°N), are very large (up to 120 mm long), suggesting that P. peronii is found there, which would be its northernmost record. Macnae and Kalk (1958: 34, 44, 128) mentioned Onchidium peronii from Inhaca Island, Mozambique (ca. 26°S). Given that no information is provided for species identification, this record is not taken for granted. Onchidium peronii was likely confused with P. verruculata (unit #5), which our material indicates is present in Inhaca, or even P. madagascariensis, known from South Africa to western India (Fig. 6). The fact that the slugs were found on sand (Macnae and Kalk 1958: 128) could suggest that they saw P. verruculata (unit #5). Solem (1959: 39) did not report any new material or localities for P. peronii. The references that he mentioned (e.g., Bretnall 1919) are already commented on above. His proposed distribution ("from the Red Sea and Mauritius to New Caledonia, Samoa, and the Marshall Islands") is not fully accurate because it is based on the assumption that people never made any mistakes when identifying P. peronii, which is unfortunately not true. Solem (1959: 38) mentioned what he thought were the three "most obvious" of the "numerous differences" between O. peronii and O. verruculatum: distribution of branchial plumes (dorsal gills) on the notum, relative position of the pneumostome and the anus, and relative width of the hyponotum and pedal sole. But those features vary among individuals and should not be used for species identification. Marcus and Marcus (1960: 877) described Peronia peronii from the Maldives based on eight specimens. Given that they report a maximum animal length of 155 mm, a long (4.5 mm) spine of the accessory penial gland, as well as a retractor muscle inserting near the heart, there is little doubt that they did examine P. peronii (Table 4). Later, Marcus and Marcus (1970: 213) added that they observed a retractor muscle inserting near the nerve ring in another of their specimens from the Maldives, which also is compatible with our present observations: a vestigial retractor muscle was even observed here in P. peronii (Table 4). Some of the material examined from historical museum collections for the present work also came from the Maldives (ANSP 304860). Webb et al. (1969: 107-112) described copulatory mechanisms in specimens they identified as O. peronii.
It is unclear where those specimens came from, possibly South Africa. At any rate, given that they illustrate a spine of the accessory penial gland which is only 1.4 mm long (Webb et al. 1969: 110, fig. 3), they did not examine individuals of P. peronii. It is not possible to determine whether Marcus and Marcus (1970: 213) examined an individual of Peronia peronii from Madagascar because they do not provide the key features that characterize it. They could have seen a large individual of P. madagascariensis instead. Britton (1984: 183) merely mentioned the fact that Marcus and Marcus (1970) accepted only two valid species names (P. peronii and P. verruculata), which is not strictly accurate because Marcus and Marcus (1970) did not address the nomenclatural status of P. tongana and did say that P. branchifera was close to P. verruculata but not that it was its synonym. Patil and Kulkarni (2013) reported Onchidium peronii from Uran City, near Mumbai, India, but it is impossible to determine what species they saw (most likely it was P. madagascariensis or the unit #4 of P. verruculata, or both). Many chemical studies have mentioned P. peronii in the past few decades. However, the name P. peronii was used arbitrarily. The individuals used for the extraction of natural products may not have been properly identified. Biskupiak and Ireland (1985) extracted peroniatriols from specimens identified as P. peronii from Guam. Peronia peronii is undeniably present in Guam. However, it is possible that P. verruculata (unit #1) could be present there as well. Pietra (1990: 145) mentioned peroniatriols in Peronia peronii from Micronesia, where more than one species may be found. Arimoto et al. (1993) did not indicate where specimens of P. peronii and O. verruculatum were collected. In Japan, where the individuals used by Arimoto et al. (1993) possibly came from, there are four Peronia species which are all cryptic externally. Pietra (2002: 290) briefly cited peroniatriols in P. peronii based on the work by Arimoto et al. (1993). Finally, the antibacterial peptide extracted from individuals of Peronia peronii from the Persian Gulf (Bitaab et al. 2015) was most likely extracted from individuals of either P. verruculata (unit #4), or P. madagascariensis, or both (Fig. 6). The same general remark applies to ecological studies: Morrisey et al. (2010: 72) listed (with no justification for species identification) the presence of Peronia peronii in mangroves of the estuary of the Mtata River (31°57'S), South Africa; most likely, Morrisey et al. (2010: 72) encountered P. madagascariensis instead. Finally, a few last words on P. peronii in phylogenetic studies. Dayrat et al. (2011: 428) and White et al. (2011: 4) identified a specimen from Guam (CASIZ 180486) as Peronia peronii, which is specimen [443] in the present study (Fig. 2). The specimen tentatively identified as Peronia cf. peronii from Mozambique (NHMUK 20060414) by Dayrat et al. (2011: 428) belongs to P. madagascariensis, which is specimen [735] in the present study (Fig. 2). The DNA sequences of the specimen from Guam were used again in several studies (e.g., Gaitán-Espitia et al. 2013; Harasewych et al. 2015). [Figure caption fragment: hermaphroditic (female) reproductive system C anterior, male, copulatory apparatus. Scale bars: 3 mm (A-C).
Abbreviations: ag accessory penial gland, dd deferent duct, ddg dorsal digestive gland, fgm female gland mass, hg hermaphroditic gland, i intestine, ms muscular sac, ov oviduct, pdg posterior digestive gland, ps penial sheath, rm retractor muscle, rs receptaculum seminis, sp spermatheca, st stomach, v vestibule.]
Peronia okinawensis
Distribution (Fig. 6). Endemic to Okinawa, Japan. Etymology. Peronia okinawensis is named after its type locality: okinawensis is a latinized adjective that agrees in gender (feminine) with the generic name (ICZN 1999: Article 31.2). Habitat. The only specimens known were found on a reef flat. Peronia okinawensis seems to be rare compared to P. verruculata (unit #1) but may be more abundant at some other sites in Okinawa. It would be interesting, in the future, to map in detail the exact sites at which the three Peronia species that are sympatric in Okinawa (P. okinawensis, P. peronii, and P. verruculata) overlap, in Okinawa and possibly in the rest of the Ryukyu Islands. Color and morphology. No picture of live animals is available. The color of preserved specimens is beige mottled with darker areas dorsally and whitish ventrally. The dorsal notum of live animals is covered by dozens of papillae of various sizes. Some papillae bear black dorsal eyes at their tip. The number of papillae with dorsal eyes is variable (8-15). The largest specimens are 27 mm long. Digestive system (Figs 17A, 18). Examples of radular formulae are presented in Table 5. The median cusp of the rachidian teeth is approximately 45 μm long. The hook of the lateral teeth is approximately 110 μm long. The intestinal loops are of type I, with a transitional loop oriented between 12 and 3 o'clock. Reproductive system (Figs 17B, C, 19, 20). In the anterior (male) parts, the muscular sac of the accessory penial gland is less than 15 mm long. The hollow spine of the accessory penial gland is narrow, elongated, and straight or slightly curved, and its shape (including at its tip) varies between individuals. Its length ranges from 1.8 mm ([696-3] UF 352288) to 2.3 mm ([696-4 H] UF 352288). Its diameter at the conical base ranges from 240 to 300 μm. Its diameter at the tip ranges from 115 to 150 μm. The retractor muscle is shorter or longer than the penial sheath and inserts near the heart. Inside the penial sheath, the penis is a narrow, elongated, soft, hollow tube. Its distal end bears conical hooks which are less than 35 μm long. Diagnostic features (Table 4). Peronia okinawensis is characterized by a unique combination of anatomical traits: muscular sac shorter than 15 mm, intestinal loops of type I (with a transitional loop oriented between 12 and 3 o'clock), retractor muscle inserting near the heart. Remarks. A new species name is needed because no existing name applies to the species described here. The specimen [696-2] was tentatively identified as Peronia cf. verruculata by Dayrat et al. (2011). This identification should be disregarded because the specimen [696-2] belongs to the species described here (Figs 2-4). Peronia okinawensis is one of the four Peronia species in Japanese waters (Fig. 6). For a comparison of their geographic range, see remarks on P. setoensis. For their identification, see the identification key as well as Table 4. It is possible that P. okinawensis is not strictly endemic to Okinawa.
Peronia madagascariensis (Labbé, 1934a)
Figs 21-25
Paraperonia madagascariensis Labbé, 1934a: 199, fig. 15. Paraperonia jousseaumei Labbé, 1934a: 198, fig. 12. Type material. The holotype of P. madagascariensis (by monotypy) was collected at Fort-Dauphin, Madagascar, by Raymond Décary (1891-1973) in 1932.
Only one old jar was found at the MNHN with a specimen collected from Fort-Dauphin (MNHN-IM-2000-33680). The information on the label (specimen collected by Décary in 1932) matches the information provided in Labbé's original description of P. madagascariensis, and even the specimen size matches. Therefore, that specimen is considered to be the holotype by monotypy of P. madagascariensis. The holotype was dissected by Labbé. The radula, the posterior (hermaphroditic) reproductive parts, and the male parts are all missing. The intestinal loops are of type V (Fig. 21A). As for Paraperonia jousseaumei, which was described from Red Sea material collected by Jousseaume, two old jars were found at the MNHN with that collecting information. One of them contains specimens that are part of the type series of P. gondwanae because the specific name "gondwanae" is written on an old label (MNHN-IM-2000-33683). The three labels of the other jar (MNHN-IM-2014-7993) say: "Peronia Mer Rouge Mr Jousseaume n°15, 1892," "Oncidium [written over "Oncidiella"] peronii Cuvier Mer Rouge M. Jousseaume n°15-1892," and, for unknown reasons, "60." This jar contains six specimens of Peronia, from 60/45 to 25/15 mm, two of which were dissected, possibly by Labbé. The intestinal loops of the two dissected specimens are of type I and thus are not in agreement with Labbé's (1934a: fig. 12) original illustration of the intestinal loops of type V in P. jousseaumei. Also, the sizes and the number of individuals do not match the original description of P. jousseaumei. Those specimens could possibly be some of the eight non-type specimens that Labbé (1934a: 190) mentioned in his re-description of Peronia peronii collected by Jousseaume from the Red Sea ("Mer Rouge") in "1852". GenBank and BOLD sequences. One COI sequence was obtained from BOLD (LGEN099-14) for an individual identified as Onchidium verruculatum and collected from Dwarka, Gujarat, on the western coast of India (ca. 22°N), which is the easternmost known locality for P. madagascariensis. A second COI sequence was obtained from GenBank (LC027608) for an individual identified as Peronia sp. and collected from the coast of Iran in the Persian Gulf. Both sequences were unpublished. Distribution (Fig. 6). From South Africa to the Red Sea and western India (ca. 22°N): South Africa, Mozambique, Madagascar (type locality of P. madagascariensis), Gulf of Oman, Iran (Strait of Hormuz), Yemen (Socotra), India (Mumbai, Gujarat), Red Sea (type locality of P. jousseaumei). All records are new except for the type locality in Madagascar. Peronia madagascariensis has, so far, not been found in Mauritius. Etymology. Peronia madagascariensis was named after its type locality, Madagascar. Peronia jousseaumei was named after Félix Pierre Jousseaume, a medical doctor and malacologist who collected many specimens from the Red Sea preserved at the MNHN, which Labbé studied for his monograph on onchidiids. Habitat. Peronia madagascariensis is found in the rocky intertidal, like most other Peronia slugs. Color and morphology. No picture of live animals was available. The color of preserved specimens is not different from that of other species (greyish brown and mottled with darker and lighter areas dorsally, and light brown greyish ventrally). The dorsal notum of live animals is covered by dozens of papillae of various sizes. In large individuals, dorsal papillae can be particularly tall (easily up to 4 mm), even in preserved specimens, and are evenly distributed over the entire notum. Preserved, they are very difficult to distinguish from retracted dorsal gills in the posterior half of the notum, but they are regular papillae with or without eyes.
Some papillae bear black dorsal eyes at their tip. The number of papillae with dorsal eyes is variable (from 12 to 18). Dorsal gills seem taller and denser than in other species. The largest specimens in our fresh material are 55 mm long but two additional museum specimens are much longer (80 mm). Digestive system (Figs 21A-D, 22). Examples of radular formulae are presented in Table 5. The median cusp of the rachidian teeth is approximately 55 μm long. The hook of the lateral teeth is approximately 100 to 130 μm long. The intestinal loops are of type V. Reproductive system (Figs 21E, 23-25). In the anterior (male) parts, the muscular sac of the accessory penial gland is less than 15 mm long. The hollow spine of the accessory penial gland is narrow, elongated, and straight or slightly curved, and its shape (including at its tip) varies between individuals. Its length ranges from 2 mm ([5502] MNHN-IM-2009-16393) to 2.4 mm ([5500] MNHN-IM-2009-16391). Its diameter at the conical base ranges from 200 to 230 μm. Its diameter at the tip ranges from 70 to 80 μm. The retractor muscle is shorter or longer than the penial sheath and inserts near the heart. Inside the penial sheath, the penis is a narrow, elongated, soft, hollow tube. Its distal end bears conical hooks which are less than 100 μm long. Diagnostic features (Table 4). Peronia madagascariensis is characterized by a unique combination of two anatomical traits: intestinal loops of type V and a spine of the accessory penial gland longer than 2 mm. Remarks. The name Paraperonia madagascariensis clearly applies to a Peronia species because of the dorsal gills on the notum of the holotype. The holotype was entirely dissected by Labbé. The radula, the posterior (hermaphroditic) reproductive parts, and the anterior copulatory apparatus are missing. The intestinal loops are of type V (Fig. 21A), as illustrated by Labbé (1934a: fig. 17). The name Peronia madagascariensis applies to the species described here because it is, according to our molecular data, the only Peronia species with intestinal loops of type V along the eastern African coast, from South Africa to the Persian Gulf and western India, including Madagascar. Note that some of our fresh material was collected only 150 km east of the type locality in southern Madagascar. Some internal characters described by Labbé (1934a: 199) could not be verified on the holotype because most internal parts are missing, but they are similar to the species described here. In particular, the length of the spine of the accessory penial gland (2 mm) is compatible with what was observed in our material. Additional, non-type specimens were found in historical museum collections which could be identified as P. madagascariensis due to the presence of intestinal loops of type V, from Oman (UF 368019), the Strait of Hormuz (NHMD 635302), and Socotra (SMF 358305). Those localities, however, are all already included within the known distribution of P. madagascariensis based on our DNA sequences, as the Strait of Hormuz is very close to the Gulf of Oman. Finally, one of the "a" paralectotypes of Labbé's (1934a: 199) Paraperonia gondwanae from Bombay (MNHN-IM-2000-33682), with intestinal loops of type V (Fig. 21B), belongs to P. madagascariensis. Note that two of those museum specimens are longer (80 mm) than our fresh material (less than 55 mm). Peronia slugs with intestinal loops of type V are without doubt present in the Red Sea.
For instance, one of the "c" paralectotypes of Labbé's (1934a: 200) Paraperonia gondwanae from Suez (MNHN-IM-2000-33683) is characterized by intestinal loops of type V (Fig. 21C), which means that it does not belong to P. verruculata (characterized by intestinal loops of type I). Paraperonia jousseaumei, with the Red Sea as type locality, is also characterized by intestinal loops of type V. Even though the type material of P. jousseaumei could not be located at the MNHN, Labbé's (1934a: fig. 12) drawing of the internal anatomy of P. jousseaumei clearly illustrates intestinal loops of type V. Given that P. madagascariensis is widespread from South Africa all the way to western India, including the Strait of Hormuz, it is accepted here that it also is distributed in the Red Sea. That, however, will still need to be confirmed with fresh material from both the Red Sea and the Gulf of Aden. If it appears that the populations of Peronia slugs with intestinal loops of type V from the Red Sea are a distinct species, then the name P. jousseaumei could apply to them and be valid. Finally, given that P. madagascariensis is present in the Strait of Hormuz, it most likely also is distributed in the rest of the Persian Gulf, which hopefully will be confirmed at some point with fresh material. Even though the names Peronia madagascariensis and Peronia jousseaumei were never used prior to the present contribution, they are not regarded as new combinations because Paraperonia has already been regarded as a synonym of Peronia by Britton (1984: 182) and because it has also been made clear that the genus Peronia included all species of slugs with dorsal gills (e.g., Dayrat et al. 2011). The specimen [703] from Oman was tentatively identified as Peronia sp. 2 by Dayrat et al. (2011) but it clearly belongs to P. madagascariensis (Fig. 2). Also, note that its COI sequence was resubmitted to GenBank because the old one (GenBank HQ660044) was inaccurate. The specimen [735] from Mozambique was tentatively identified as Peronia cf. peronii by Dayrat et al. (2011). This identification should be disregarded because the specimen [735] belongs to P. madagascariensis (Fig. 2).
Peronia platei (Hoffmann, 1928)
Etymology. Peronia platei was named after German zoologist Ludwig Hermann Plate, professor of zoology at the University of Jena and author of a monograph on onchidiids (Plate 1893). Habitat. Peronia platei is found primarily in the rocky intertidal. According to the label, specimens from Kiribati were collected on sand inside a lagoon (P. sydneyensis and P. willani are also known to be found on sand). Color and morphology of live animals (Fig. 26). No picture of live animals was available for specimens from the West Pacific. The description of the color of live animals is based on Hawaii individuals. The dorsal notum is uniformly very dark grey, almost black, including papillae. The hyponotum is light yellowish. The foot is light yellowish to orange. The ocular tentacles are grey, like the head. The dorsal notum of live animals is covered by dozens of papillae of various sizes. Some papillae bear black dorsal eyes at their tip. The number of papillae with dorsal eyes is variable (from 7 to 10). The papillae with dorsal eyes cannot be counted in specimens from Hawaii because the notum is too dark and because eye pigmentation tends to fade in preservation. The largest specimens are 30 mm long in Hawaii and 20 mm in Papua New Guinea. Digestive system (Figs 27, 28). Examples of radular formulae are presented in Table 5.
The median cusp of the rachidian teeth is approximately 30 to 35 μm long. The hook of the lateral teeth is approximately 60 to 90 μm long. The intestinal loops are of type V. Reproductive system (Figs 29-32). In the posterior (hermaphroditic) parts, the deferent duct and the oviduct are straight. In the anterior (male) parts, the muscular sac of the accessory penial gland is less than 5 mm long. The hollow spine of the accessory penial gland is narrow, elongated, and straight or slightly curved, and its shape (including at its tip) varies between individuals. Its length ranges from 0.8 to 1 mm (Hawaii and Papua New Guinea). Its diameter at the conical base ranges from 95 to 100 μm (Hawaii) and from 65 to 80 μm (Papua New Guinea). Its diameter at the tip ranges from 25 to 30 μm (Hawaii) and from 20 to 30 μm (Papua New Guinea). The retractor muscle is shorter or longer than the penial sheath and inserts at the posterior end of the visceral cavity. Inside the penial sheath, the penis is a narrow, elongated, soft, hollow tube. Its distal end bears conical hooks which are less than 60 μm long in Hawaii and less than 20 μm long in Papua New Guinea. Diagnostic features (Table 4). Peronia platei is cryptic with P. setoensis. Both species share the same combination of anatomical traits: intestinal loops of type V, retractor muscle inserting at the posterior end of the visceral cavity, a spine of the accessory penial gland from 0.8 to 1 mm long (P. platei) and from 0.9 to 1.2 mm long (P. setoensis). The diameter of the spine of the accessory penial gland at its tip is larger in P. platei (25 to 30 μm) than in P. setoensis (less than 25 μm) but that may be simply due to limited sampling. Peronia platei and P. setoensis are both distributed in the West Pacific but they are not sympatric based on current data (Fig. 6). Remarks. Onchidium platei applies to the species described here because the anatomy of the lectotype is identical to the anatomy of our material (Table 4): gills on the dorsal notum; muscular sac of the accessory penial gland less than 5 mm long; spine of the accessory penial gland 0.9 mm long (observed by transparency); intestinal loops of type V; seven dorsal papillae with eyes. Our molecular analyses show that the species described here is widespread across the West Pacific, from Papua New Guinea to Hawaii. There is no reason to think that the populations in French Polynesia (type locality of O. platei) are a distinct species. This, however, will have to be confirmed with fresh material from French Polynesia, preferably from Moorea, the type locality. All eight paralectotypes (also from Tahiti) also belong to the same species. Hoffmann's (1928: 51-53, figs 9, 10, pl. 3, figs 11, 12) original description, which is quite detailed, needs to be briefly commented on. Hoffmann mentions that dorsal gills are lacking but they are undoubtedly present in the lectotype and all paralectotypes (dorsal gills are often hard to see in preserved animals). The anatomical traits he describes agree with our observations on the type material. The intestinal loops, Hoffmann says, are of type I but slightly different from the regular type I due to the absence of a loop. Hoffmann calls it a type Ia. His illustration of it clearly represents a type V (Hoffmann 1928: pl. 3, fig. 11). The spine of the accessory penial gland is 1 mm long and the retractor muscle attaches to the posterior end of the visceral cavity. According to Hoffmann (1928: 53), O. platei is most closely related to O. tumidum Semper, 1880 and O.
nebulosum Semper, 1880 but differs from them based on the penis size. Onchidium tumidum was recently transferred to Paromoionchis (Dayrat et al. 2019a), and O. nebulosum (type locality in Palau) applies to a Peronia species but is regarded here as a nomen dubium (see general discussion). Additional specimens were found in historical museum collections which could be identified as P. platei mostly based on the intestinal loops of type V, the specimen size, and their geographic origin. Specimens from Kiribati (SMNH 106488) are especially interesting because they confirm the presence of specimens similar to P. platei far from Hawaii and Papua New Guinea, which strongly supports the assumption that P. platei is widespread across the entire West Pacific. Note that those specimens from Kiribati are not identified as P. setoensis (which is anatomically cryptic with P. platei) because P. setoensis is found in much colder waters (33°N) in Japan (Fig. 6). Labbé (1934a: 224) merely mentioned Onchidium platei as one of the valid Onchidium species names. Ruthensteiner (1997) briefly commented on the anatomy of the lung of Onchidium cf. branchiferum, based on specimens from Hawaii. Those were most likely specimens of Peronia platei, the only Peronia species found in Hawaii. Finally, note that the specimen [706] (UF 303653) was tentatively referred to as Peronia sp. 1 by Dayrat et al. (2011). No Peronia slug from Hawaii was positively demonstrated to belong to P. verruculata (unit #1), which is characterized by intestinal loops of type I. Therefore, Hoffmann's (1928: 44, 73) record of O. verruculatum from Hawaii is interpreted here as a misidentification of P. platei. Labbé (1934a: 193), Solem (1959: 39), and Marcus and Marcus (1970: 213) all assumed that P. verruculata was present in Hawaii based on Hoffmann's (1928) study, without collecting or examining any new material. Onchidella evelinae Marcus & Burch, 1965 was described based on small specimens (average length 6 mm) from Eniwetok Atoll, Marshall Islands (ca. 11°N, 162°E). The type material was deposited at the Museum of Zoology, University of Michigan, but could not be located there (personal communication from the collection manager, Dr. Taehwan Lee). Onchidella evelinae is a misidentification for one of the onchidiid species present in the Marshall Islands: it cannot refer to Onchidella slugs because an accessory penial gland is mentioned in the original description and because Onchidella is not present in the middle of the West Pacific. The Marshall Islands are within the distribution range of P. platei (Fig. 6), but a detail from the original description (the internal organs can be seen through the dorsal notum) suggests that O. evelinae does not refer to Peronia slugs because their notum is too thick for internal organs to be seen through it. Peronia peronii is also present in the Marshall Islands (Fig. 6), but, given the very small size of the specimens and that they were sexually mature, it is most unlikely that O. evelinae is a junior synonym of P. peronii (Fig. 6). The size of the spine of the accessory penial gland (1.3 mm) reported in the original description of O. evelinae is higher than what is currently known (< 1 mm) for P. platei (Table 4). Onchidella evelinae is regarded here as a new junior subjective synonym of Marmaronchis vaigiensis (Quoy & Gaimard, 1825): first, because internal organs can occasionally be seen through its thin notum (e.g., Dayrat et al. 2018: fig. 5E); second, because there are known records (Dayrat et al. 2018: fig.
9) of M. vaigiensis in Pohnpei, Micronesia (ca. 6°N, 158°E), just a few degrees west of the Marshall Islands, and it is very possible that M. vaigiensis also is in the Marshall Islands. The size of the spine of the accessory penial gland (1.3 mm) reported in the original description of O. evelinae is higher than what is currently known for M. vaigiensis (< 1 mm), but that trait does vary intra-specifically.
Peronia setoensis
Distribution (Fig. 6). Endemic to subtropical waters of Japan: Honshu, Nishimuro, near Seto Marine Biological Laboratory (33°N, type locality), Sagami Bay (35°N), and possibly Boso Peninsula, near Sagami Bay (35°N); Kyushu, Nagasaki, 32°N (Keferstein 1865a, b, as P. verruculata). Etymology. Peronia setoensis is named after its type locality, near the Seto Marine Biological Laboratory: setoensis is a latinized adjective that agrees in gender (feminine) with the generic name (ICZN 1999: Article 31.2). Habitat (Fig. 33). Peronia setoensis is found in the rocky intertidal. Few individuals are currently known, but the species may be discovered at additional localities in the future. Color and morphology of live animals (Fig. 34). The dorsal notum is greenish brown, light to dark, mottled with darker and lighter areas, occasionally with yellowish sides. The color of the dorsal papillae varies as that of the background itself. The ventral surface (foot and hyponotum) is yellowish or greyish and can change rapidly in any given individual. The ocular tentacles are brown-grey, like the head. The dorsal notum of live animals is covered by dozens of papillae of various sizes. Some papillae bear black dorsal eyes at their tip. The number of papillae with dorsal eyes is variable (from 8 to 12). The largest specimens are 20 mm long. Digestive system (Figs 35A, B, 36). Examples of radular formulae are presented in Table 5. The median cusp of the rachidian teeth is approximately 35 μm long. The hook of the lateral teeth is approximately 90 μm long. The intestinal loops are of type V. Reproductive system (Figs 35C, D, 37, 38). In the anterior (male) parts, the muscular sac of the accessory penial gland is less than 5 mm long. The hollow spine of the accessory penial gland is narrow, elongated, and straight or slightly curved, and its shape (including at its tip) varies between individuals. Its length ranges from 0.9 mm ([3754] NSMT-Mo 78987) to 1.2 mm ([3753] NSMT-Mo 78987). Its diameter at the conical base ranges from 80 to 85 μm. Its diameter at the tip ranges from 15 to 25 μm. The retractor muscle is shorter or longer than the penial sheath and inserts near the heart. Inside the penial sheath, the penis is a narrow, elongated, soft, hollow tube. Its distal end bears conical hooks which are less than 25 μm long. Diagnostic features (Table 4). Peronia setoensis is cryptic with P. platei. Both species share the same combination of anatomical traits: intestinal loops of type V, retractor muscle inserting at the posterior end of the visceral cavity, a spine of the accessory penial gland from 0.9 to 1.2 mm long (P. setoensis) and from 0.7 to 1 mm long (P. platei). Peronia setoensis and P. platei are anatomically very similar to P. griffithsi, in which, however, the spine of the accessory penial gland is slightly shorter (less than 0.62 mm long); these spine-length ranges are compared in the sketch below. All three species are distributed in the West Pacific but Peronia setoensis is adapted to much colder waters than P. platei and P. griffithsi (Fig. 6). Remarks. A new species name is needed because no existing name applies to the species described here.
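Because P. setoensis, P. platei, and P. griffithsi share the same qualitative traits, the only anatomical separator quoted here is the length of the spine of the accessory penial gland, and the quoted ranges overlap. The following Python sketch is illustrative only: the ranges are simply those quoted in this paragraph, and the names and structure are invented for the example. It makes the overlap explicit: where ranges overlap, geography (Fig. 6), not anatomy, must resolve the identification.

```python
# Illustrative sketch only: spine-length ranges for the cryptic trio,
# exactly as quoted in the text (mm). Names are invented for this example.
SPINE_RANGES_MM = {
    "Peronia griffithsi": (0.0, 0.62),  # less than 0.62 mm
    "Peronia platei": (0.7, 1.0),       # 0.7 to 1 mm
    "Peronia setoensis": (0.9, 1.2),    # 0.9 to 1.2 mm; colder Japanese waters
}

def candidate_species(spine_mm: float) -> list[str]:
    """Every species whose quoted range contains the measurement; ties are
    expected and must be resolved by geography, not anatomy."""
    return [name for name, (lo, hi) in SPINE_RANGES_MM.items() if lo <= spine_mm <= hi]

print(candidate_species(0.5))   # ['Peronia griffithsi']
print(candidate_species(0.95))  # ['Peronia platei', 'Peronia setoensis']
```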
A specimen from Sagami Bay (35°N), preserved in Stockholm (SMNH 180725), not included by Hoffmann (1928: 73) in his list of material for O. verruculatum, is identified here as P. setoensis because of its intestinal loops of type V (Table 4). This specimen indicates that P. setoensis is distributed on the eastern Pacific coast of Japan north of the type locality. Keferstein (1865b) described as P. verruculata three slugs from Nagasaki, Kyushu, Japan (ca. 32°44'N). His written description (Keferstein 1865b) was also based on an individual from Java but his figure captions clearly indicate that his drawings illustrated an individual from Nagasaki (Keferstein 1865b: pl. VI, figs 14-16): Keferstein's (1865b: pl. VI, fig. 16) drawing of the internal anatomy unmistakably illustrates intestinal loops of type V. Therefore, it is very likely that P. setoensis, the only species of Peronia slugs with intestinal loops of type V in Japan, is also distributed in Kyushu. It is unclear whether Keferstein's (1865a: pl. CII, figs 20*, 20**, pl. CV, figs 1, 2) drawings illustrate the same Nagasaki individual as the one with intestinal loops of type V (Keferstein 1865b: pl. VI, fig. 16). It cannot be excluded that Keferstein examined several species found in Japan (Fig. 6). The Java individual cannot be identified. The molecular data presented here indicate that there are four Peronia species in Japanese waters, but their geographic ranges need to be explored in greater detail (Fig. 6). Peronia setoensis is definitely (our DNA sequences) present in southern Honshu (Wakayama Prefecture) and very likely in Kyushu based on Keferstein's (1865b: pl. VI, fig. 16) drawing of intestinal loops of type V. Peronia verruculata (unit #1) is definitely (our DNA sequences) present in Wakayama Prefecture (ca. 33°N), southern Honshu, and is thus expected to be present in all Japanese waters south of Wakayama Prefecture. Also, Peronia verruculata is present in Sakurajima, Kyushu (ca. 31°N) and Okinawa (ca. 26°N) based on sequences that Takagi et al. (2019) recently published (see remarks on P. verruculata). Peronia peronii is also present in Okinawa based on COI sequences that Takagi et al. (2019) recently published (see remarks on P. peronii). And, finally, our new species P. okinawensis is only known from Okinawa so far. Besides Keferstein (1865a, b), several authors mentioned onchidiids from Japan but, in most cases, species cannot be identified based on the limited information provided. Stimpson (1855: 380) described Onchis fruticosa based on slugs with dorsal gills from Kikaijima (28°30'N), between Kyushu and Okinawa, which could potentially belong to any of the four species present in Japanese waters. As a result, Onchis fruticosa is regarded as a nomen dubium (see general discussion). Baba (1958) illustrated onchidiid slugs from three different places: Tokara Islands, just south of Kyushu (ca. 30°N); Amakusa, near Nagasaki, Kyushu (ca. 32°30'N); and Misaki, Osaka, Honshu (ca. 34°N). Baba (1958: 144) indicates that some specimens of Onchidium verruculatum from Tokara Islands were very large (up to 120 mm long), suggesting that P. peronii is found there, which would be its northernmost record (see remarks on P. peronii). The smaller specimens that Baba (1958: 144) mentions from Tokara Islands could be a combination of P. verruculata (unit #1) and possibly P. setoensis. The two species which Baba (1958: 21) seems to distinguish (as Onchidium and Onchidium verruculatum) in Misaki, near Osaka, could be P.
And, finally, the slugs crawling on mud in Amakusa, near Nagasaki, are not Peronia slugs (Baba 1958: 51) but most likely belong to Paromoionchis tumidus, a species which is present nearby, in Kumamoto Uki, as the COI sequences from the slugs of "Group I" in Takagi et al. (2019) cluster with our sequences of P. tumidus (Dayrat et al. 2019a).

Katagiri and Katagiri (2007) distinguished two Peronia species (both as Onchidium verruculatum) in the waters of the Boso Peninsula (near Sagami Bay, Honshu, ca. 35°N) based on external appearance and development. One species, called Isowamochi, is characterized by planktotrophic development, and the other, called Minneawamochi, by direct development. Most likely, these slugs belong to P. verruculata (unit #1) and P. setoensis, which are the only two Peronia species found north of 30°N. However, this assumption would have to be confirmed with fresh collections and DNA sequences. Ueshima (2007) commented that the external distinction between the two species recognized by Katagiri and Katagiri (2007) is far more subtle and problematic than suggested, and he rightly suggested that molecular data could determine the relationships between those two species and P. verruculata (erroneously said to be from the Mediterranean). Note that Ueshima's (2007) material, which covered a broad latitudinal range from the Kanagawa Prefecture (near Sagami Bay, ca. 35°N) all the way to Ishigaki Island (Okinawa, ca. 24°N), potentially included slugs from all four Peronia species found in Japan.

Peronia griffithsi

Distribution (Fig. 6). Indo-West Pacific: Mauritius (type locality), Indonesia (Kei Islands), and Papua New Guinea (New Ireland).

Etymology. Peronia griffithsi is named after Owen Griffiths, who kindly and generously hosted and guided one of us (Tricia Goulding) in Mauritius.

Habitat (Fig. 39). Peronia griffithsi is found in the rocky intertidal, like most other Peronia slugs. Our specimens from Mauritius were collected just before sunrise, suggesting that P. griffithsi is, at least partly, a nocturnal species.

Color and morphology of live animals (Figs 40, 41). No picture of live animals was available for specimens from Kavieng. The description of the color of live animals is based on Mauritius and Kei individuals. The dorsal notum is greenish brown, light to dark, mottled with darker and lighter areas. The color of the dorsal papillae varies as that of the background itself, but dorsal papillae can also be yellowish-greenish. The ventral surface (foot and hyponotum) varies from whitish to yellowish and can change rapidly in any given individual. The ocular tentacles are brown-grey, like the head. The dorsal notum of live animals is covered by dozens of papillae of various sizes. Some papillae bear black dorsal eyes at their tip. The number of papillae with dorsal eyes is variable (from 6 to 10). The largest specimens are 25 mm long.

Digestive system (Figs 42-44). Examples of radular formulae are presented in Table 5. The median cusp of the rachidian teeth is approximately 35 μm long. The hook of the lateral teeth is approximately 70 μm long. The intestinal loops are of type V.

Reproductive system (Figs 45-48). In the anterior (male) parts, the muscular sac of the accessory penial gland is less than 5 mm long.
The hollow spine of the accessory penial gland is narrow, elongated, and straight or slightly curved, and its shape (including at its tip) varies between individuals. Its length is less than 0.62 mm. The retractor muscle is shorter or longer than the penial sheath and inserts near the heart. Inside the penial sheath, the penis is a narrow, elongated, soft, hollow tube. Its distal end bears conical hooks which are less than 25 μm long.

Diagnostic features (Table 4). Peronia griffithsi is characterized by a unique combination of anatomical traits: intestinal loops of type V, muscular sac of the accessory penial gland less than 5 mm long, and spine of the accessory penial gland less than 0.62 mm long. In P. platei and P. setoensis, which are anatomically similar to P. griffithsi, the spine of the accessory penial gland is longer than 0.7 mm (P. platei) and 0.9 mm (P. setoensis).

Remarks. A new species name is needed because no existing name applies to the species described here. A large population (161 specimens) from Kei Islands, identified by Hoffmann as Onchidium verruculatum, was found in the collections of the Copenhagen Museum (NHMD 635303). Those specimens most likely belong to P. griffithsi because their intestinal loops are of type V (only a few individuals were dissected). Also, the retractor muscle of the few individuals dissected inserts near the end of the visceral cavity, as in specimens from Mauritius, suggesting that an insertion near the heart is not as common. Interestingly, Hoffmann (1928: 44) …

Peronia sydneyensis

Distribution (Fig. 6). Southern West Pacific: New South Wales (type locality) and Queensland (up to 20°S), Australia, and New Caledonia.

Etymology. Peronia sydneyensis is named after its type locality in Sydney, New South Wales, Australia: sydneyensis is a latinized adjective that agrees in gender (feminine) with the generic name (ICZN 1999: Article 31.2).

(Figure legend fragment: A, B. Abbreviations: dd, deferent duct; fgm, female gland mass; hg, hermaphroditic gland; ov, oviduct; rs, receptaculum seminis; sp, spermatheca.)

Habitat (Fig. 49). Unlike most other Peronia species, which are found in the rocky intertidal, P. sydneyensis is primarily found on muddy or coarse sand.

Color and morphology of live animals (Figs 50, 51). The dorsal notum is greenish brown, light to dark, mottled with darker and lighter areas. The color of the dorsal papillae varies as that of the background itself. The ventral surface (foot and hyponotum) varies from whitish to dark grey, including yellowish, bluish, and greenish, and can change rapidly in any given individual. The ocular tentacles are brown-grey, like the head. The dorsal notum of live animals is covered by dozens of papillae of various sizes. Some papillae bear black dorsal eyes at their tip. The number of papillae with dorsal eyes is variable (from 8 to 16). The largest specimens are 30 mm long (New South Wales), 50 mm long (Queensland), and 41 mm long (New Caledonia).

Digestive system (Figs 52-54). Examples of radular formulae are presented in Table 5. The median cusp of the rachidian teeth is approximately 40 μm long. The hook of the lateral teeth is approximately 80 μm long. The intestinal loops are of type I, with a transitional loop oriented between 3 and 6 o'clock; exceptionally, the transitional loop is oriented at 2 o'clock.

Reproductive system (Figs 55-58). In the anterior (male) parts, the muscular sac of the accessory penial gland is less than 10 mm long.
The hollow spine of the accessory penial gland is narrow, elongated, and straight or slightly curved, and its shape (including at its tip) varies between individuals. Its length ranges from 0.6 mm ([2680] MTQ) to 1 mm ([2661] MTQ). Its diameter at the conical base ranges from 90 to 100 μm. Its diameter at the tip ranges from 20 to 50 μm. The retractor muscle is shorter or longer than the penial sheath and inserts near the heart. Inside the penial sheath, the penis is a narrow, elongated, soft, hollow tube. Its distal end bears conical hooks which are less than 30 μm long.

Diagnostic features (Table 4). Peronia sydneyensis is characterized by unique and distinctive protuberances on the spine of the accessory penial gland (Fig. 58). These strong protuberances were observed in all individuals. Protuberances can also be observed (as exceptional cases) in other species, but they are always much smaller in size (Figs 37D, 104B, 105F). In addition, Peronia sydneyensis is distinct anatomically from P. willani, with which it is most closely related (Figs 2-4), and from P. verruculata, with which it overlaps geographically in Queensland and New Caledonia (Fig. 6).

Remarks. A new species name is needed because no existing name applies to the species described here. The records of Onchidium verruculatum from New South Wales (Bretnall 1919: 310; Dakin 1947: 144; Smith and Kershaw 1979: 92; Hutchings and Recher 1982: 119; Hyman 1999) are most likely records of Peronia sydneyensis, the only Peronia species known in New South Wales based on current data (Fig. 6). Some of these records (or even all of them) could be a combination of both P. sydneyensis and P. verruculata: the southernmost locality of P. verruculata (unit #1) is in MacKay, Queensland (21°22'S), but given that P. verruculata tolerates colder waters in Japan (up to at least 33°40'N), it is possible that it is also present in New South Wales. Peronia sydneyensis was collected only in Sydney (33°39'S), but it is not excluded that both species are sympatric as far south as Sydney. Additional fresh material between southern Queensland and New South Wales is needed to determine more precisely the geographic range of each species. Note that the intestinal loops of type II drawn by Hyman (1999: fig. 7B) illustrate the digestive system of a misidentified individual (most likely Paromoionchis daemelii, easily confused in the field with Peronia sydneyensis). Finally, note that the specimen [734] (AM C.459511) was tentatively referred to as Peronia sp. 3 by Dayrat et al. (2011).

Peronia willani

Distribution (Fig. 6). Endemic to Darwin, Northern Territory, Australia.

Etymology. Peronia willani is named after Richard Willan, senior curator of mollusks at the Museum and Art Gallery of the Northern Territory, Darwin, Australia, who kindly and generously helped us during our field expedition around Darwin.

Habitat (Fig. 59). Unlike most other Peronia species, which are usually found in the rocky intertidal, P. willani is primarily found on sandy mud or even directly on mud.

Color and morphology of live animals (Fig. 60). The color of the dorsal notum is highly variable, from nearly whitish to dark brown and greenish, most often mottled with darker and lighter areas. The color of the dorsal papillae varies as that of the background itself, but dorsal papillae can also be lighter (yellowish-greenish) than the background.
The ventral surface (foot and hyponotum) varies from whitish (almost transparent) to yellowish and can change rapidly in any given individual. Occasionally, a black ring is present on the hyponotum around the pedal sole. The ocular tentacles are brown-grey, like the head. The dorsal notum of live animals is covered by dozens of papillae of various sizes. Some papillae bear black dorsal eyes at their tip. The number of papillae with dorsal eyes is variable (from 10 to 25). The largest specimens are 65 mm long.

Digestive system (Figs 61A, 62). Examples of radular formulae are presented in Table 5. The median cusp of the rachidian teeth is approximately 30 μm long. The hook of the lateral teeth is approximately 100 μm long. The intestinal loops are of type I, with the transitional loop oriented between 3 and 6 o'clock.

Reproductive system (Figs 61B, C, 63, 64). In the anterior (male) parts, the muscular sac of the accessory penial gland is less than 25 mm long. The hollow spine of the accessory penial gland is narrow, elongated, and straight or slightly curved, and its shape (including at its tip) varies between individuals. Its length ranges from 1.5 mm ([1620] NTM P.57626) to 1.9 mm ([1628 H] NTM P.57625). Its diameter at the conical base ranges from 240 to 250 μm. Its diameter at the tip ranges from 80 to 100 μm. The retractor muscle is shorter or longer than the penial sheath and inserts near the heart. Inside the penial sheath, the penis is a narrow, elongated, soft, hollow tube. Its distal end bears conical hooks which are less than 37 μm long.

Diagnostic features (Table 4). Peronia willani is characterized by a unique combination of anatomical traits: intestinal loops of type I (with a transitional loop oriented between 3 and 6 o'clock), retractor muscle inserting at the posterior end of the visceral cavity, muscular sac up to 25 mm, and spine of the accessory penial gland between 1.5 and 1.9 mm long. Peronia willani is anatomically distinct from P. sydneyensis, with which it is most closely related (Figs 2-4), and from P. verruculata, from which it is close geographically (Fig. 6).

Remarks. A new species name is needed because no existing name applies to the species described here. A specimen from Darwin, Northern Territory, preserved in Stockholm (SMNH 180715) and identified as O. verruculatum by Hoffmann (1928: 73) is identified here as P. willani because of its massive (18 mm long) muscular sac (Table 4). Also, to our knowledge, P. verruculata is not present in Northern Territory (Fig. 6).

Peronia verruculata (Cuvier, 1830)

Figs 65-109

Onchidium verruculatum Cuvier, 1830: 281; Semper 1880: 255-257, pl. 22, figs 3, 4; 1882: pl. 21, fig. 1 …

Savigny's figure 3.2 looks exactly like the MNHN specimen (without the male parts outside, which were subsequently removed). Figure 3.3 illustrates a much larger individual which could not be located. No information was provided on sizes, except that the illustrations were of "natural length" (figures 3.1, 3.2) and "likely of natural length" (figure 3.3) according to Audouin (1826: 19). Given that it is unclear whether Savigny (unknowingly) illustrated one or two species, it is appropriate to designate the specimen preserved at the MNHN as the lectotype (MNHN-IM-2000-22941).
The animals illustrated by Savigny (1817) were not accompanied by any species name, but they were named and described ten years later by Audouin (1826). … Cuvier likely changed his mind and later decided that, for some reason, the specimens illustrated by Savigny were a distinct species he called Onchidium verruculatum. The lectotype is still well preserved, considering how old it is. The radula and the posterior (female) reproductive parts are still inside but only the deferent duct remains for the male copulatory parts. Its intestinal loops are of type I (Fig. 86A).

Lectotype and paralectotypes (Onchidium ferrugineum). … 26/18 mm; same collection data as for the lectotype; MNHN-IM-2000-22951. The lectotype was designated by Goulding et al. (2018: 75) to clarify the application of Onchidium ferrugineum. The two paralectotypes belong to Wallaconchis ater (Lesson, 1831a) because they lack dorsal gills, lack an accessory penial gland, and are characterized by a highly coiled penis. Labbé re-examined four specimens from the original type series but there are only three specimens left in the jar, so one specimen was lost by or after Labbé. The lectotype is well preserved. Its dorsal notum bears obvious gills. Its male opening is located below and to the left of the right ocular tentacle. Pieces of the deferent duct and of the flagellum of the accessory penial gland remain, but the muscular sac and the spine of the accessory gland are missing. The posterior (female) part of the reproductive system is still in place inside the lectotype. Its radula is missing. Its intestinal loops of type I (with a transitional loop at 4 o'clock) are illustrated here (Fig. 80A).

Lectotype (…) (figs 3.1-3.8). There is no easy explanation for the exclusion of … (Fig. 80B). Dorsal gills are present on the notum.

The holotype, clearly labeled as "Oncidium Astridae Labbé," was dissected by Labbé for the original description but is relatively well preserved. The radula, the posterior (female) reproductive parts, and the intestinal loops of type I (Fig. 80C) are still in place inside the specimen. Male parts are missing. Dorsal gills are present on the notum (partly cut by Labbé). Note that the locality on the label of the holotype is indicated as Sorong, but with a question mark.

Lectotype and paralectotypes (Peronia gaimardi). Solomon Islands • lectotype, hereby designated, 44/27 mm; Vanikoro; 1829; JRC Quoy & JP Gaimard leg.; MNHN-IM-2000-33705. • 1 paralectotype, 35/30 mm; same collection data as for the lectotype; MNHN-IM-2000-33705. The type material also includes a paralectotype from Djibouti which could not be located with certainty (see below). Originally, no jar clearly labeled as the type material of Peronia gaimardi was found at the MNHN. The original description of P. gaimardi is based on three individuals: two individuals identified as Onchidium, from "Vanikoro (Quoy and Gaimard 1829)," and one individual identified as "Oncidium Peronii," from "Obock, Récif de Clochettins (Gravier 1904)." The two specimens from Vanikoro were found at the MNHN in a jar with three labels. One old label says "Onchidium [subsequently replaced by Peronia] de Vanikoro, mm Quoy et Gaimard 1829." Another label only says "44" for unknown reasons. The two individuals from Vanikoro were originally used by Labbé to describe P. gaimardi. The lectotype was dissected by Labbé. Its radula and male apparatus are missing. The female parts are still inside the animal. Its intestinal loops of type I are illustrated here (Fig. 80E). Dorsal gills are present on the notum.
The paralectotype from Vanikoro was not dissected by Labbé. As for the paralectotype from Obock, Djibouti, it could not be traced with certainty, which does not matter given that it has no name-bearing function. Based on the original description (Labbé 1934a: 194), the paralectotype from Djibouti was collected by Gravier in 1904 at the "Récif de Clochettins, Obock," measured 80/57 mm, and had a "very flattened" body. There is a jar at the MNHN with a label saying "Oncidium Peronii, Cuv. Obock M. Gravier 1904 - A Labbé, dét [for "déterminé," i.e., identified] 1933." Another label says "F" for unknown reasons. All the information on the label matches the information provided by Labbé in the original description of P. gaimardi, and the size (80/60 mm) of the specimen perfectly matches the size of the paralectotype of P. gaimardi. That specimen is just an empty notum with dorsal gills (all internal organs are missing). However, it is extremely unclear whether that specimen was collected in 1904 from Obock (there are other specimens from Obock at the MNHN, but not collected in 1904 by Gravier), and the specimen may not even have been collected by Gravier. For those two reasons, it is not possible to know whether that specimen is the paralectotype of P. gaimardi, a non-type material used by Labbé for a re-description of Peronia tongana, or even something completely different.

(Figure legend fragment. Abbreviations: ag, accessory penial gland; dd, deferent duct; ddg, dorsal digestive gland; fgm, female gland mass; hg, hermaphroditic gland; i, intestine; ms, muscular sac; ov, oviduct; pdg, posterior digestive gland; ps, penial sheath; rm, retractor muscle; rs, receptaculum seminis; sp, spermatheca; st, stomach; v, vestibule.)

Lectotype and paralectotype (Peronia anomala). Red Sea • lectotype, hereby designated, 10/8 mm; 1893; Jousseaume leg.; MNHN-IM-2000-33678. • 1 paralectotype, 6/3 mm; same collection data as for the lectotype; MNHN-IM-2000-33678. Originally, no jar clearly labeled as the type material of Peronia anomala was found at the MNHN, but it could be traced back. The original description of P. anomala is based on two individuals (10/9 and 5/5 mm) from the Red Sea ("Mer Rouge") collected by Jousseaume in 1893. Several old jars were found at the MNHN with material collected from the Red Sea by Jousseaume. Most jars are labeled as "1892" for collecting date, one jar is labeled as "1893" (MNHN-IM-2000-33678), and another as "1823" (MNHN-IM-2000-33698). The jar with the (erroneous) collecting date of 1823 is the type series of Onchidium durum (see below). The jar with a collecting date of 1893 matches perfectly the information provided in Labbé's original description of P. anomala and even the animal sizes match (MNHN-IM-2000-33678): these two specimens are considered to be the type series of P. anomala, and the largest specimen is designated as the lectotype. Both the lectotype and the paralectotype were dissected by Labbé. The radula and female and male reproductive parts of the lectotype are missing (the lack of penis and accessory penial gland, mentioned by Labbé but likely due to the lectotype being not fully mature, cannot be checked). Dorsal gills are present on the notum. Its intestinal loops are not of type II (Labbé 1934a: 195), but of type I instead (Fig. 86B). The paralectotype is largely destroyed but bears dorsal gills on the notum.
The type material mentioned in the original description also includes a paralectotype from Mauritius which could not be located with certainty at the MNHN, a paralectotype from the Red Sea which could not be located at the MNHN, and another individual missing from one of the jars from the Red Sea (see below). Most importantly, the type specimens belong to more than one species, so a lectotype is designated to clarify the application of the name P. gondwanae. Originally, no jar clearly labeled as the type material of P. gondwanae was found at the MNHN, but most of the type material could be traced back. The original description of P. gondwanae is based on 38 individuals which Labbé, as often, listed in his article using italicized letters: a) three individuals from Bombay and one individual from the Red Sea; …

The specific name "gondwanae" was written in pencil only on two old jars at the MNHN. One jar contains four of the five "c" individuals collected from the Red Sea by Jousseaume in 1892 (MNHN-IM-2000-33683); the name "gondwanae" is written on the small label with the number "59;" the size of the four specimens (40/30 mm) matches the size provided by Labbé. Another jar contains the 13 "e" individuals collected from the Red Sea by Jousseaume in 1892 (MNHN-IM-2000-33688); this jar was found only labeled as "57 gondwanae," i.e., with no locality, collector name, or collecting year, but the number of individuals and their size (32/25 to 25/20 mm) matches the size provided by Labbé (35/25 mm). No other jar labeled as P. gondwanae was found at the MNHN, but most of the remaining type material could be traced back thanks to the matching of collector's name, collecting date, specimen sizes, and the number of old jars from any given locality at the MNHN. There are only three old jars with specimens from Bombay at the MNHN. One jar contains seven Platevindex individuals collected by Roux in 1826. The two other jars contain the three "a" individuals from Bombay: one jar contains two individuals and the other contains one individual. … Both specimens match the size provided by Labbé for the "b" individual (60/50 mm). Labbé listed a specimen from Mauritius collected by Mathieu only once in his entire work, and that specimen could be the one in either jar (i.e., MNHN-IM-2000-33686 or MNHN-IM-2000-33687). Finally, the "a" individual identified from the Red Sea could not be located.

The 29 mm long "a" individual from Bombay, dissected by Labbé, is designated here as the lectotype of Paraperonia gondwanae (MNHN-IM-2000-33681). Its radula and male parts are missing. Its intestinal loops are clearly of type I (Fig. 84A) even though Labbé described loops of type V. The 50 mm long "a" individual from Bombay was also dissected by Labbé (MNHN-IM-2000-33682). Its radula and male parts are missing but its intestinal loops are of type V (Fig. 21B), as in the original description, so it does not belong to P. verruculata but to P. madagascariensis instead. Labbé dissected only two of the 15 specimens from Suez (MNHN-IM-2000-33684): the radula and the male parts are missing from both specimens (38/32 and 35/28 mm) but their intestinal loops are both of type I (Fig. 86D), suggesting that they belong to P. verruculata, even though Labbé described loops of type V. Labbé dissected only one (40/30 mm) of the four specimens from Suez (MNHN-IM-2000-33683), acknowledging that maybe one specimen was lost: the radula and the male parts are missing, but its intestinal loops are of type V (Fig. 21C), as in the original description, suggesting that it belongs to P. madagascariensis.
Labbé dissected seven of the 13 specimens (assumed to be) from the Red Sea (MNHN-IM-2000-33688). Those specimens are all completely destroyed and extremely poorly preserved. An undissected individual (35/25 mm) from the same lot was dissected for the present study and its intestinal loops are of type I, suggesting that it belongs to P. verruculata (Fig. 86E). Finally, according to Labbé, the intestinal loops of the specimen from Mauritius (collected by Mathieu) are of type V. One specimen collected by Mathieu from Mauritius is completely empty inside (MNHN-IM-2000-33687). The loops of the other specimen are of type I (Fig. 9D), suggesting that it belongs to P. peronii (MNHN-IM-2000-33686).

Lectotype and paralectotypes (Scaphis viridis). Originally, no jar clearly labeled as the type material of Scaphis viridis was found at the MNHN. However, only one old jar was found at the MNHN with specimens collected from Thursday Island, and the collecting information on the label (specimens collected by M. Lix in 1892) matches the information provided in Labbé's original description of S. viridis (even though, according to Labbé, the specimens were collected in 1890). The sizes provided by Labbé (48/20, 47/30, and 42/25 mm) match the sizes of the three specimens here, and their notum clearly bears dorsal gills, as in the original description of S. viridis. Labbé mentioned four specimens but, given that he provided measurements for only three specimens, it is possible that he only examined three specimens. Or he examined four specimens and one is now missing. The three type specimens are largely destroyed inside (due to Labbé's dissections). The male parts and radula are missing in both paralectotypes but are still inside the lectotype. The intestinal loops of the lectotype are of type I, with a transitional loop at 5 o'clock (Fig. 80F). The three types are green (hence the specific name chosen by Labbé) but that color is clearly due to preservation.

Holotype (Scaphis carbonaria). New Caledonia • holotype, by monotypy, 40/26 mm; 1880; Réveillère leg.; MNHN-IM-2000-33708. Originally, no jar clearly labeled as the type material of Scaphis carbonaria was found at the MNHN. However, of the several old jars found at the MNHN with specimens collected from New Caledonia, only one matches perfectly the information provided in Labbé's original description. Other jars with specimens from New Caledonia were collected by Fisher in 1878 or by François in 1894. Therefore, it is extremely likely that the specimen collected by Réveillère in 1880 and identified as "Peronia" is the holotype, by monotypy, of Scaphis carbonaria. The size of the holotype (40/26 mm) matches the size provided by Labbé in the original description of S. carbonaria (36/25 mm). Its notum is not well preserved. Dorsal papillae are quite flattened (as pointed out by Labbé) and dorsal eyes cannot be seen, likely because their black color faded. However, dorsal gills are clearly present on the notum. Its intestinal loops are of type I (Fig. 80D) but its radula is missing.

(Figure legend fragment. Abbreviations: ag, accessory penial gland; dd, deferent duct; fgm, female gland mass; hg, hermaphroditic gland; ms, muscular sac; ov, oviduct; ps, penial sheath; rm, retractor muscle; rs, receptaculum seminis; sp, spermatheca; v, vestibule.)

The posterior (female) reproductive parts are still present but poorly preserved.
The copulatory parts are missing, except for the muscular sac of the accessory penial gland (approximately 10 mm long), and so the length of the spine of the accessory penial gland cannot be checked (it was not mentioned by Labbé in the original description).

The original description of S. gravieri is based on seven individuals: two individuals (10/7.5 and 8/6.5 mm) from Djibouti collected by Gravier in 1904; four individuals (32/29 and 30/25 mm) from Zanzibar collected by Grandidier (the French naturalist and explorer Alfred Grandidier) in 1865; and one individual (28/19 mm) from Mayotte collected by Ach. Vimont in 1883. One old jar was found at the MNHN with a specimen from Mayotte (MNHN-IM-2000-33695). The information on the label (specimen collected from Mayotte by Vimont in 1883) matches the information provided in Labbé's original description of S. gravieri, and the specimen size also matches. Therefore, that specimen from Mayotte is here considered to form part of the type series of S. gravieri and designated as the lectotype (MNHN-IM-2000-33695). This lectotype was dissected by Labbé: the radula and the posterior (hermaphroditic) reproductive parts are still in place but the male parts are missing. The intestinal loops are of type I with a transitional loop at 6 o'clock (Fig. 85A). Another old jar was found at the MNHN with specimens from Zanzibar (MNHN-IM-2000-33693). The information on the label (specimens collected from Zanzibar by Grandidier in 1865) matches the information provided in Labbé's original description of S. gravieri, and the specimen size also matches (Labbé likely provided the size of the largest two specimens). Therefore, those four specimens from Zanzibar are considered to form part of the type series of S. gravieri and are now paralectotypes (MNHN-IM-2000-33693). Only one paralectotype (30/28 mm) from Zanzibar was dissected by Labbé: the radula and the posterior (female) reproductive parts are still in place but the male parts are missing. The intestinal loops are of type I. The two paralectotypes from Djibouti could not be traced with certainty. There are two old jars of specimens collected by Gravier in 1904 at the MNHN. One jar is labeled with Obock as locality (not Djibouti, even though Obock is in Djibouti) and contains one Peronia specimen of which the size (80/60 mm) does not match Labbé's original description of S. gravieri. Also, that specimen from Obock is more likely to be a paralectotype of P. gaimardi or a non-type specimen used by Labbé for the re-description of Peronia tongana. The three specimens (70/60, 70/65, and 65/65 mm) of the second jar collected by Gravier in 1904 are from Djibouti (MNHN-IM-2000-33696), which matches perfectly the original description of S. gravieri by Labbé. The problem is that the specimen sizes do not match because Labbé described two individuals of only 10/7.5 and 8/6.5 mm. It is likely that Labbé meant centimeters instead of millimeters (even though he wrote "mm") because he described a muscular sac of 8 mm in the specimens from Djibouti, which is impossible in individuals that are only 8 and 10 mm long.

(Figure 96. Reproductive system, Peronia verruculata, Red Sea, spm #1 (ZMH 27472/4): A, posterior, hermaphroditic (female) reproductive system; B, anterior, male, copulatory apparatus. Scale bars: 2 mm (A), 4 mm (B). Abbreviations: ag, accessory penial gland; dd, deferent duct; fgm, female gland mass; hg, hermaphroditic gland; ms, muscular sac; ov, oviduct; p, penis; ps, penial sheath; rm, retractor muscle; rs, receptaculum seminis; sp, spermatheca.)
One of those three specimens was possibly dissected by Labbé and possibly is part of the type series of S. gravieri, but this remains questionable. In addition, a specific name was added in pencil on an old label with the number "69" but that name, which is impossible to read, seems to start with a J, and not a G. In summary, it remains unclear whether those three specimens from Djibouti can be regarded as part of the type series of S. gravieri; however, it ultimately does not matter because a lectotype is designated here.

Syntypes (Scaphis tonkinensis). The type material of Scaphis tonkinensis (ten syntypes up to 20/18 mm, according to the original description) could not be located with certainty at the MNHN. Only one old jar was found at the MNHN (MNHN-IM-2000-33700) with specimens collected from Vietnam (as "Tonkin"), and the information on the label (material collected by M. Julien in 1874) matches the information provided in Labbé's original description of S. tonkinensis. Therefore, it is possible that the jar mentioned here contains the type material of S. tonkinensis. Unfortunately, the jar only contains three pieces of unidentifiable and poorly preserved tissue (each piece measuring approximately 20/10 mm). Two pieces are likely not even part of an onchidiid slug, and it is unclear whether the third piece is part of an onchidiid dorsal notum or not. So, regardless of whether this material is regarded as part of the type material of S. tonkinensis, it is basically useless.

Syntypes (Scaphis lata). The type material of Scaphis lata (four syntypes up to 28/28 mm, from Vietnam) could not be located at the MNHN. Only one old jar was found at the MNHN (MNHN-IM-2000-33700) with specimens collected from Vietnam (as "Tonkin"), but the information on the label (specimens collected by M. Julien in 1874) does not match exactly the information provided in Labbé's original description of S. lata (specimens collected by M. Julien in 1878), and, instead, matches the information provided in Labbé's original description of S. tonkinensis (see above).

Several old jars were found at the MNHN with material collected from the Red Sea by Jousseaume. Most jars are labeled with 1892 as collecting date, one jar is labeled with 1893, and another with 1823. The jar with a collecting date of 1893 (MNHN-IM-2000-33678) contains the type series of P. anomala (see above). On the jar with the collecting date of 1823, there is another tiny label with the number "61" (for an unknown numbering system) on which O. durum is clearly written in pencil. It is one of the very few cases in which a species name is indicated for some MNHN material studied by Labbé, and there is little doubt that the specimens are the type series of O. durum, especially because the number of individuals and their sizes perfectly match Labbé's original description. Clearly, 1823 is a mistake for 1893. Most importantly, contrary to what was described by Labbé, gills are present on the dorsal notum of those individuals. All specimens are poorly preserved. They likely dried at some point and their body is hard. Three specimens were opened by Labbé and are now largely destroyed with only the digestive system inside. A lectotype is designated here in order to clarify the application of O. durum (specimens in the type series could belong to more than one species).
Its intestinal loops are not of type II (Labbé 1934a: 221): they clearly are of type I (Fig. 86C).

Holotype and paratypes (Peronia persiae). The original description of P. persiae is based on a total of 14 individuals (from 13 to 37 mm): the four types (see above) and ten other specimens from the same two localities as the types. DNA sequences (COI and 16S) are provided for 11 of those 14 individuals, including all four type specimens. However, it is unclear which GenBank sequences correspond exactly to the holotype because this information is missing in GenBank as well as in Maniei et al. (2020a: table 2). It is assumed that the holotype, called "specimen LA7" in Maniei et al. (2020a: table 1), corresponds to the individual called "voucher LaFM7S" in GenBank. Ultimately, it does not matter at all because all mitochondrial sequences of P. persiae cluster together within the unit #4 of P. verruculata: only the COI (MK993404) and the 16S (MK993392) sequences of the "voucher LaFM7S" are included in our phylogenetic analyses to represent P. persiae (Fig. 2). Finally, note that the COI and 16S GenBank accession numbers are switched for P. persiae in Maniei et al. (2020a).

… A COI sequence was obtained from GenBank (MH002570) for an individual identified as Peronia sp. and collected from Singapore. This individual as well as others referred to as "Peronia sp. 2" by Chang et al. (2018), following Dayrat et al. (2011), clearly belong to the mitochondrial unit #1 of Peronia verruculata (Fig. 2). A third COI sequence was obtained from GenBank (LC390389) for an individual identified as Peronia sp. and collected from Sakurajima, Kagoshima, Japan (Takagi et al. 2019), which is south of the northernmost known locality near the Seto Marine Biological Laboratory (see material examined). This individual as well as others from "Group V" were referred to as "Peronia sp." by Takagi et al. (2019) and clearly belong to the mitochondrial unit #1 of Peronia verruculata (Fig. 2). Four COI sequences were obtained from GenBank (JN543152, JN543153, JN543154, JN543165) for individuals from the coast of China, from Hainan (18°N) to Fujian (26°N). These individuals were referred to as "Peronia verruculata" by Sun et al. (2014) and clearly belong to the mitochondrial unit #1 of Peronia verruculata (Fig. 2). Finally, the COI (MK993404) and 16S (MK993392) sequences of the "voucher LaFM7S" represent P. persiae (Fig. 2): all published mitochondrial sequences of P. persiae cluster together within the unit #4 of P. verruculata, so only one individual is needed to represent P. persiae.

Distribution (Fig. 6). Peronia verruculata is the most widespread of all onchidiid species. Its westernmost records are from the Red Sea and southern Mozambique (26°S). Its easternmost records are in Japan, Wakayama (33°N), Vanuatu, and Queensland (21°S). It is unclear how far south it is distributed in southeastern Australia, although we did not find it in Sydney, New South Wales (see remarks below as well as remarks on P. sydneyensis). Undoubtedly, the delineation and distribution of the mitochondrial units of P. verruculata will change as new DNA sequences are added, especially from the Arabian Sea, the Red Sea, southern India, as well as southeastern Australia (see species remarks). Note that the range of P. verruculata is continuous.
Even though our molecular analyses do not include specimens of P. verruculata from places like southern India, the Persian Gulf, or the northwestern corner of the Indian Ocean (coasts of Somalia, Yemen, and Oman), P. verruculata must be present there (red areas in Fig. 6). As of today, units #1 and #2 are sympatric in southeastern Sumatra (we found them both together at our stations 78 and 82), and units #1 and #3 are sympatric in Singapore. Peronia verruculata also is very abundant and has been very often recorded in the past. However, Peronia species are externally cryptic and can be easily misidentified and confused. Here the records that are positively confirmed are distinguished from the records that cannot be confirmed. Erroneous applications of the name P. verruculata (or some of its synonyms) are also listed. All the details can be found in the species remarks (see below).

Onchidium durum was named after the hard (durum in Latin) notum of the preserved type specimens. Onchidium elberti was named after Dr. J. Elbert, who collected the holotype in 1909. Onchidium ferrugineum was named after the rusty (ferrugineum in Latin) color of the live individuals collected by Lesson, which belong to two different species: the lectotype belongs to Peronia verruculata (unit #1) and the paralectotypes to Wallaconchis ater. The dorsal notum of some individuals of W. ater can be homogenously of rusty color (e.g., Goulding et al. 2018b: fig. 36F) but individuals of P. verruculata (unit #1) are not typically of rusty color, although their notum commonly displays red patches. Lesson's (1833: pl. 19, figs 1, 2) illustrations of Peronia ferruginea in his Illustrations de Zoologie represent a Peronia slug with a dorsal notum that is homogenously of rusty color: it almost looks like an individual of Wallaconchis ater to which dorsal gills were artificially added. Peronia gaimardi was named after Joseph Paul Gaimard, who collected (with Jean René Constant Quoy) the type material in Vanikoro in 1829 during a voyage of the Astrolabe. Paraperonia gondwanae was named after its supposedly Gondwanan distribution (Red Sea, Mauritius, western India, and Torres Strait). Scaphis gravieri was named after Charles Joseph Gravier, professor of zoology (worms and crustaceans) at the MNHN, who collected two paralectotypes from Djibouti. Scaphis lata was named after the broad (lata in Latin) and circular shape of preserved type specimens. Peronia persiae was named after the Persian Gulf. Peronia savignii was named after Marie Jules César Lelorgne de Savigny (1777-1851), a French zoologist who participated in Napoleon's expedition to Egypt and published a plate of illustrations for gastropods (including onchidiids) in the Description de l'Egypte (Savigny 1817: pl. 2). Scaphis tonkinensis was named after its type locality in Tonkin, i.e., Vietnam. Onchidium verruculatum was named after the dorsal notum covered with warts (verruculatum in Latin). Scaphis viridis was named after the (artificial) green color of the preserved type specimens.

Habitat (Figs 65-69). Unit #1 is found in a large variety of habitats. It is predominantly found on rocks in the rocky intertidal (including man-made structures). It can also be found on huge and isolated boulders on a sandy beach, or in coral rubble, mixed with sand or not. The rocks on which unit #1 is found may or may not be associated with sparse mangrove trees. It is also found on sandy mud inside or near mangroves. Exceptionally, it can be found on old logs inside muddy mangroves. Unit #2 is found on coral rubble and rocks on sandy beaches.
Unit #3 is found on rocks on a beach and in the rocky intertidal. Unit #4 is found in the rocky intertidal. Unit #5 is found in the rocky intertidal as well as on mud, sandy or not. There was no habitat data on the labels of the material studied here for unit #6, but it is most likely found in the rocky intertidal, like the other units of Peronia verruculata. Peronia verruculata is extremely common across its entire distribution. In localities where they overlap geographically, the different mitochondrial units are found more or less in equal abundance (units #1 and #3 in Singapore, and units #1 and #2 in southeastern Sumatra). Peronia verruculata is commonly found during the day, even though a few individuals were also collected at night.

Color and morphology of live animals (Figs 70-77). In unit #1, live animals are not covered with mud, but they can often bear tiny pieces of various materials, such as sand and broken shells (Figs 70-73). The background color of the dorsal notum is highly variable, most often brown (light to dark), or greenish, and occasionally even black. The background is mottled with darker areas, occasionally with red areas. In most animals, the color of the dorsal papillae varies as that of the background itself. In some animals, however, the tip of the dorsal papillae (with and without dorsal eyes) can be bright yellow. The color of the foot is the same as that of the hyponotum, which varies greatly from pure white to dark blue-green. In most animals, the ventral surface is yellowish-greenish or yellowish-bluish. The ventral color (foot and hyponotum) of an individual can change rapidly, especially when disturbed. The ocular tentacles are brown-grey (variable from light to dark), like the head. The ocular tentacles are short (just a few millimeters long). Preserved specimens no longer display the colors of live animals. Colors tend to fade rapidly with preservation. The dorsal notum of live animals is covered by dozens of papillae of various sizes. Those papillae do not retract within the notum, whether animals are disturbed or not, and so the dorsal notum is never smooth. Larger papillae are not arranged in two longitudinal and lateral ridges (on either side of the median line), even though larger papillae are mostly concentrated in the central area of the dorsal notum. Some papillae bear from one to five black dorsal eyes at their tip (most papillae bear three eyes). The number of papillae with dorsal eyes is variable (from 10 to 22) and papillae in the central area of the dorsum tend to bear more eyes than those on the side. Occasionally, papillae can bear more than five eyes: a central, large papilla can bear up to eight eyes but, like other papillae, is not fully retractable within the notum. The exact number of papillae with eyes can be difficult to count because papillae are often dark, and because the eyes, which are located at the tip of the papillae, can be seen only if papillae are relaxed. Dorsal gills are present on the posterior third of the dorsal notum. Dorsal gills are most easily observed when animals are relaxed under water. When slugs are not under water, dorsal gills are retracted and hard to see. If animals were not relaxed before preservation, gills can be retracted and hard to see in preserved specimens (the best relaxation method is to immerse live specimens in a solution of magnesium chloride).
The color variation in unit #2 (Fig. 74) and unit #3 (Fig. 75) is similar to the color variation in unit #1, and specimens cannot be separated where units overlap geographically (in Singapore for units #1 and #3, and in southeastern Sumatra for units #1 and #2). The number of papillae with dorsal eyes observed in unit #2 (from 14 to 22) and in unit #3 (from 10 to 18) is within the range observed in unit #1. Slight differences may be due to a more limited sampling. In unit #4, the color of the dorsal notum is brown, mottled with darker and lighter areas (Fig. 76). The ventral surface (foot and hyponotum) is brown-greyish. The number of papillae with dorsal eyes varies from 10 to 18. In unit #5, the dorsal notum is brown, light to dark, mottled with darker areas (Fig. 77). The ventral surface (foot and hyponotum) is yellowish, greenish, or bluish, and can change rapidly in any given individual. The number of papillae with dorsal eyes varies from 10 to 20. Pictures of live animals were not available for unit #6 (Red Sea). The dorsal color of preserved specimens is beige with faded darker areas. The ventrum is beige. The number of papillae with dorsal eyes varies from 10 to 18, but the black eye color possibly faded in some of them. The largest specimens are 60 mm long in unit #1, 55 mm long in unit #2, 40 mm long in unit #3, 60 mm long in unit #4, 50 mm long in unit #5, and 40 mm long in unit #6. Exceptionally, one individual in New Caledonia was 73 mm long (unit #1).

External morphology (Fig. 78A-C). The body is not flattened. The notum is oval. The hyponotum is horizontal in live animals. The orientation of the hyponotum as well as the shape of the dorsal notum of preserved animals greatly vary depending on preservation. The width of the hyponotum relative to the total width of the ventral surface (pedal sole and hyponotum) varies among individuals but is approximately one third. In the anterior region, the left and right ocular tentacles are superior to the mouth. Eyes are located at the tip of the two ocular tentacles. Inferior to the ocular tentacles, superior to the mouth, the head bears a pair of oral lobes. The latter are smooth, with no transversal protuberance. The male opening (of the copulatory complex) is below and to the left of the right ocular tentacle (i.e., between the two ocular tentacles, but closer to the right than to the left tentacle). The anus is posterior, median, close to the edge of the pedal sole. On the right side (to the left in ventral view), a peripodial groove is present at the junction between the foot and the hyponotum, running longitudinally all the way from the head to the posterior end. The female pore, which marks the posterior end of the peripodial groove, is located a few millimeters from the anus and the pneumostome, which does not vary much among individuals. The pneumostome is median. Its position on the hyponotum relative to the notum margin and the edge of the pedal sole varies among individuals but is, on average, in the middle.

Visceral cavity and pallial complex. The anterior pedal gland is small, more or less round, and flattened, lying on the floor of the visceral cavity below the buccal mass and below a thin layer of connective tissue (it can be hard to detect). The heart, enclosed in the pericardium, is on the right side of the visceral cavity, slightly posterior to the middle. An anterior vessel supports several anterior organs such as the buccal mass, the nervous system, and the copulatory complex. The kidney is nearly symmetrical, the right and left parts being equally developed.
The kidney is intricately attached to the respiratory complex. The lung is posterior, in two more or less symmetrical parts, left and right, which are joined in the middle.

Nervous system (Fig. 78D). The circum-esophageal nerve ring is post-pharyngeal and pre-esophageal. The paired cerebral ganglia are separated by a short cerebral commissure of which the length varies among individuals. Paired pleural and pedal ganglia are also all distinct. The visceral commissure is short but distinctly present and the visceral ganglion tends to be slightly to the left. Cerebro-pleural and pleuro-pedal connectives are short, and pleural and cerebral ganglia touch each other on either side. Nerves from the cerebral ganglia innervate the buccal area and the ocular tentacles and, on the right side, the penial complex. Nerves from the pedal ganglia innervate the foot. Nerves from the pleural ganglia innervate the lateral and dorsal regions of the mantle. Nerves from the visceral ganglia innervate the visceral organs. Ganglia are commonly surrounded by almost transparent connective tissue through which they can be observed.

Digestive system (Figs 82A, 83A, 84A, B, 85-93). There are no jaws. The left and right salivary glands, heavily branched, join the buccal mass dorsally, on either side of the esophagus. The esophagus is narrow and straight, with thin internal folds. The esophagus enters the stomach anteriorly (Fig. 79). Only a portion of the posterior aspect of the stomach can be seen in dorsal view because it is partly covered by the lobes of the digestive gland. The dorsal lobe is mainly on the right. The left, lateral lobe is mainly ventral. The posterior lobe covers the posterior aspect of the stomach. The stomach is a U-shaped sac divided into four chambers (Fig. 79). The first chamber, which receives the esophagus, is delimited by thin tissue and receives the ducts of the dorsal and lateral lobes of the digestive gland. It is internally smooth (with no ridges). The second, posterior chamber, delimited by thick muscular tissue (which takes most of the space inside), receives the duct of the posterior lobe of the digestive gland. The third, funnel-shaped chamber is delimited by thin tissue with high leaflet-like ridges internally. The fourth chamber is continuous and externally similar to the third, but it bears only low, thin ridges internally. The intestine is long and narrow. Intestinal loops were checked in every specimen listed in the material examined: the intestinal loops are of type I with a transitional loop oriented between 3 and 6 o'clock (Figs 82A, 83A, 84A, B, 85, 86). There is no rectal gland. The radula is in between two large postero-lateral muscular masses (Figs 87-93). Each radular row contains a rachidian tooth and two half rows of lateral teeth of similar size and shape. Examples of radular formulae are presented in Table 5. The rachidian teeth are unicuspid (Fig. 87A): the median cusp is always present; there are no conspicuous cusps on the lateral sides of the base of the rachidian tooth. The median cusp of the rachidian teeth is approximately 40 μm long. The lateral aspect of the base of the rachidian teeth is straight. The half rows of lateral teeth form an angle of 45° with the rachidian axis. Except for the few innermost and few outermost lateral teeth, the size and shape of the lateral teeth do not vary along the half row, nor do they vary among half rows.
The lateral teeth are unicuspid with a flattened and curved hook (approximately from 80 to 120 μm long) with a rounded tip, but there is also a pointed spine on the outer lateral expansion of the base, or basal lateral spine (Fig. 87D). In most cases, the basal lateral spine cannot be observed because it is hidden below the hook of the next, outer lateral tooth. It can only be observed when the teeth are not too close (such as in the innermost and outermost regions) or when teeth are placed in an unusual position. The inner and outer lateral aspects of the hook of the lateral teeth are straight (i.e., not wavy and not with any protuberance).

The female organs are located (with some male parts) at the posterior end of the visceral cavity (Figs 82B, 83B, 84C, 94A-C, 95A, B, 96A). The hermaphroditic gland is a single mass, joining the spermoviduct through the hermaphroditic duct (which conveys the eggs and the autosperm). There is a narrow, elongated receptaculum seminis (caecum) along the hermaphroditic duct. The female gland mass contains various glands (mucus and albumen) which can hardly be separated by dissection and of which the exact connections remain uncertain. The hermaphroditic duct becomes the spermoviduct (which conveys eggs, exosperm, and autosperm). Proximally, the spermoviduct is not divided (at least externally) and is embedded within the female gland mass. Distally, the spermoviduct branches into the straight deferent duct (which conveys the autosperm up to the anterior region, running through the body wall) and the oviduct. The free oviduct conveys the eggs up to the female opening and the exosperm from the female opening up to the fertilization chamber. The large, spherical-ovate spermatheca connects to the oviduct through a short duct. The oviduct is narrow and straight. There is no vaginal gland.

The penial sheath is narrow and elongated. The penial sheath protects the penis for its entire length. The beginning of the retractor muscle marks the separation between the penial sheath (and the penis inside) and the deferent duct, which is highly coiled. The retractor muscle, which can be shorter or longer than the penial sheath, inserts at the posterior end of the visceral cavity. Inside the penial sheath, the penis is a narrow, elongated, soft, hollow tube. Its distal end bears conical hooks which are less than 50 μm long in units #1 and #2, less than 55 μm long in units #5 and #6, and less than 60 μm in units #3 and #4 (Figs 97-102). When the penis is retracted inside the penial sheath, the hooks are densely packed inside the tube-like penis; during copulation, the penis is evaginated like a glove and the hooks are outside, not as densely packed. In some individuals of unit #4, a few penial hooks are exceptionally double, or two-pronged (Fig. 100C). The accessory penial gland is a long, tube-like flagellum with a proximal dead end. The length of the flagellum of the penial gland varies among individuals. …

For instance, specimen [991] would be assigned to the mitochondrial unit #3 based on the spine of its accessory penial gland, because the spine is longer than 200 μm in unit #3 while it usually is less than 200 μm in unit #1; yet that specimen belongs to the mitochondrial unit #1 (Fig. 2). The diameter of the tip of the spine only partly overlaps between unit #1 (from 35 to 50 μm) and unit #3 (from 40 to 80 μm), but that trait is hardly practical when it comes to identification (it requires SEM).
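The partial overlap described above can be made concrete with a minimal sketch (Python; the function and variable names are assumptions for illustration only), encoding the tip-diameter ranges quoted for units #1 and #3:

```python
# Illustrative only: tip diameters of the spine of the accessory penial gland
# as quoted above (unit #1: 35-50 um; unit #3: 40-80 um). Names are assumptions.

TIP_DIAMETER_UM = {
    "unit #1": (35, 50),
    "unit #3": (40, 80),
}

def compatible_units(tip_um: float) -> list[str]:
    """Units whose published tip-diameter range contains the measurement."""
    return [u for u, (lo, hi) in TIP_DIAMETER_UM.items() if lo <= tip_um <= hi]

# Any tip between 40 and 50 um is compatible with both units, so the trait
# cannot reliably assign an individual to a mitochondrial unit; this is
# consistent with specimen [991], which looked like unit #3 but belongs to unit #1.
print(compatible_units(45))  # ['unit #1', 'unit #3']
```

This is one way to see why the text concludes that the trait is hardly practical for identification.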
The units #1 and #2 are sympatric in Sumatra (we found them both together at stations 78 and 82) but they cannot be separated because they are completely cryptic anatomically (Table 4). That is not to say that there are no anatomical differences between units of P. verruculata. On average, the diameter of the spine of the accessory penial gland tends to be larger both at the base and at the tip in unit #3. However, because the ranges of variation overlap, anatomical traits cannot be used to reliably assign individuals to any particular unit.

Peronia verruculata is close anatomically to P. sydneyensis and P. willani. They all share intestinal loops of type I with a transitional loop oriented between 3 and 6 o'clock. There are, however, important differences. The muscular sac of the accessory penial gland is significantly longer in P. willani (up to 25 mm) than in P. verruculata (up to 15 mm); the spine of the accessory penial gland is significantly shorter in P. sydneyensis (less than 1 mm) than in P. verruculata (at least 1.3 mm); and strong, hemispherical protuberances cover the spine in all individuals of P. sydneyensis and are absent in all other species. Peronia sydneyensis and P. verruculata cannot be confused even where they are sympatric (Queensland and New Caledonia), and Peronia verruculata and P. willani are not sympatric based on current data.
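For the three species sharing intestinal loops of type I, the differences listed above behave like a short dichotomous key. A minimal sketch (Python; the function name, thresholds, and structure are assumptions for illustration only, not a formal key):

```python
# Illustrative only: a rough key to the type I group based on the traits
# quoted above. Thresholds and structure are assumptions, not a formal key.

def key_type_I_group(spine_has_strong_protuberances: bool,
                     muscular_sac_mm: float,
                     spine_mm: float) -> str:
    if spine_has_strong_protuberances:
        # Strong, hemispherical protuberances cover the spine in all P. sydneyensis.
        return "Peronia sydneyensis"
    if muscular_sac_mm > 15 and spine_mm >= 1.5:
        # Muscular sac up to 25 mm and spine 1.5-1.9 mm long in P. willani.
        return "Peronia willani"
    if spine_mm >= 1.3:
        # Spine at least 1.3 mm and muscular sac up to 15 mm in P. verruculata.
        return "Peronia verruculata"
    return "undetermined"

print(key_type_I_group(False, 25, 1.7))  # Peronia willani
print(key_type_I_group(True, 8, 0.8))    # Peronia sydneyensis
```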
Based on our data, there are two Peronia species in the Red Sea, one characterized by intestinal loops of type I (with a transitional loop oriented between 3 and 6 o'clock) and the other characterized by intestinal loops of type V (see remarks on P. madagascariensis). Because the intestinal loops of the lectotype of O. verruculatum are of type I (Fig. 86A), P. verruculata applies to the species described here with intestinal loops of type I.

Table 6 (fragment). Species-group names applying to the mitochondrial units of Peronia verruculata, with their type localities:
#1: Onchidium elberti Simroth, 1920 (Sulawesi, Indonesia); Onchidium astridae (West Papua, Indonesia); Peronia gaimardi (Vanikoro, Solomon Islands); Scaphis viridis (Torres Strait, Australia); Scaphis carbonaria (New Caledonia); Scaphis tonkinensis (Vietnam); Scaphis lata (Vietnam)
#4 (Persian Gulf, Pakistan & western India): Paraperonia gondwanae (Mumbai, western India); Peronia persiae Maniei et al., 2020a (Persian Gulf, Iran)
#5 (Mozambique & Madagascar): Scaphis gravieri (Mayotte)
Red Sea: Onchidium verruculatum Cuvier, 1830 (Red Sea); Peronia savignii Récluz, 1869 (Red Sea); Peronia anomala (Red Sea); Onchidium durum (Red Sea)

The original description of Onchidium ferrugineum was published four times in different venues by Lesson, twice in 1831 (first in the Bulletin des sciences naturelles and then in the zoology section of the Coquille voyage), once in February 1832 (in the Mémorial encyclopédique), and once again in 1833 (in his Illustrations de Zoologie). According to Cretella (2010), the date of publication of the description of O. ferrugineum in the Coquille voyage is November 15, 1831. Therefore, the oldest and original description of O. ferrugineum is the one published in the Bulletin des sciences naturelles in April 1831. Neither of the two 1831 descriptions included any illustration. An illustration of the ventral view of an animal was published by Lesson (1832: 36-37, fig. 32) in the Mémorial encyclopédique. Two beautiful, colored pictures were published in Lesson's (1833: pl. 19) Illustrations de Zoologie. The type locality (of the lectotype) of Onchidium ferrugineum is Manokwari, West Papua, Indonesia, where at least three Peronia species are known to be present (Fig. 6). Based on the length of the lectotype (35 mm) and its intestinal loops of type I with a transitional loop at 4 o'clock (Fig. 80A), Onchidium ferrugineum applies to the species described here (P. verruculata) and not to P. griffithsi or P. peronii (Table 4). Unfortunately, this identification cannot be confirmed by the muscular sac or the spine of the accessory penial gland, which are missing in the lectotype. Labbé (1934a: 213-216) claimed that there is no accessory penial gland in Onchidium ferrugineum and thus did not comment on the spine and the muscular sac of the accessory penial gland of the lectotype. It is unclear whether Labbé dissected the lectotype or if he found it already dissected by Lesson (who commented on the penis of the paralectotypes and thus might have dissected the lectotype as well). Eleven dorsal papillae with eyes were counted on the lectotype, but it is possible that others faded with time. Lesson (1833: pl. 19) transferred Onchidium ferrugineum to Peronia. In the written description, Lesson (1833: unnumbered page) considered Peronia ferruginea the type of a genus which he decided to call Peronia, following Blainville, but the type species of Peronia is O. peronii, by monotypy, and the author of Peronia is Fleming (1822a, b). Oken (1834b: 269-270) reported Peronia ferruginea from Lesson's (1833: pl. 19) Illustrations de Zoologie. Van der Hoeven (1850: 786; 1856: 817) suggested, based on Lesson's (1833: pl.
19) own illustration, that Peronia ferruginea may be a nudibranch instead of an onchidiid, but there is no question that Peronia ferruginea applies to an onchidiid species. Gray (1850: 117), Adams and Adams (1855: 235), and Tapparone Canefri (1883: 214) classified O. ferrugineum in Peronia, but other authors preferred the original combination with the generic name Onchidium. Semper (1882: 268) kept O. ferrugineum in Onchidium and regarded it as a questionable name because he (erroneously) thought that its original locality was unknown. Plate (1893) did not comment on it. Bretnall (1919: 326-327) thought that O. ferrugineum referred to a species insufficiently known and merely repeated Lesson's original description. Bretnall (1919: 326) also suggested that O. ferrugineum seemed "closely related to that of M. de Blainville," i.e., Peronia mauritiana, a synonym of P. peronii. Solely based on information from the original description, Hoffmann (1928: 71, 74) regarded O. ferrugineum as a junior synonym of O. verruculatum and disagreed with Bretnall that it could refer to O. peronii. However, the application of O. ferrugineum cannot be deduced from the original description, especially because it is based on Peronia and Wallaconchis specimens (see the remarks above on the type material). Finally, Labbé (1934a: 213-216), who re-examined the type specimens of O. ferrugineum, created the generic name Lessonia (later replaced by Lessonina) for O. ferrugineum, for a genus characterized by a unique combination of traits (large and coiled penis, dorsal gills, etc.), without realizing that the types of O. ferrugineum were part of two species from two different genera.

The two syntypes of Onchidium branchiferum are from Manila, Luzon, Philippines. Anatomical traits described by Plate (insertion of the retractor muscle of the penis at the end of the visceral cavity, spine of the accessory penial gland 1 mm long) indicate that O. branchiferum applies to P. verruculata, even though they cannot be confirmed on the syntypes, in which all internal organs are either missing or destroyed (Table 4). Plate did not draw the intestinal loops but described them as being of type I (the orientation of the transitional loop is unknown). The number of radular teeth per half row (88) also matches what is known in P. verruculata (Table 5). According to Plate (1893: 184), O. branchiferum is easily recognizable because its branchial plumes are only present on the posterior end of the dorsum (posterior sixth). However, this trait does not distinguish it from other species and varies depending on preservation (gills are often retracted in preserved specimens and can only be observed if specimens were carefully relaxed before preservation). Plate (1893) did not provide any other feature supporting O. branchiferum as a distinct species, and he did not compare it with any other existing species. Onchidium branchiferum is regarded here as a new junior synonym of P. verruculata (Tables 1, 6). Hoffmann (1928: 75) listed Onchidium branchiferum as a valid name (solely based on information from the original description) but considered it to refer to a "local form" of O. verruculatum. Labbé (1934a: 194) transferred Onchidium branchiferum to Peronia and regarded P. branchifera as a valid name "out of deference to the eminent zoologist Ludwig Plate," even though he agreed with Hoffmann that P. branchifera most likely was just a local form of P. verruculata. Labbé's re-description of P. branchifera was based on a specimen (30/23 mm) collected by Ach.
Cuming in 1844 from an unknown locality in the Philippines. There are two jars preserved at the MNHN with Peronia specimens collected by Ach. Cuming in 1844. Labbé (1934a: 192-194) also re-described a specimen collected from the Philippines by Ach. Cuming in 1844 as P. verruculata. It is not possible to determine which jar corresponds to which species in his monograph because Labbé did not indicate species identifications for any of the MNHN specimens he examined. Labbé's description of a "short penial gland" indicates that he most likely examined P. verruculata (unit #1). Finally, Marcus and Marcus (1970: 213) wrote that P. branchifera was close to P. verruculata but with no explanation.

Onchidium elberti was described by Simroth (1920) from Muna Island, southeastern Sulawesi, Indonesia, where only Peronia verruculata is known to be present (Fig. 6). Internal features of the holotype (24 mm long) are fully compatible with the anatomy of P. verruculata: intestinal loops are of type I with a transitional loop oriented at 5 o'clock (Fig. 80B) and the muscular sac of the accessory penial gland is 8 mm long (Table 4). Eleven papillae with dorsal eyes were counted (which fits within the range of the species) but some may have faded with time. As a result, Onchidium elberti is regarded here as a junior synonym of Peronia verruculata (Tables 1, 6). Hoffmann (1928: 71, 75) thought Onchidium elberti was a junior synonym of O. verruculatum, based on information from Simroth's original description.

Onchidium astridae, the type species of Labbé's genus Scaphis, was originally described by Labbé within the genus Onchidium. Only one specimen is known, the holotype (20/18 mm) by monotypy, from Sorong, West Papua, Indonesia. There is no doubt that Onchidium astridae applies to a Peronia species because the dorsum of the holotype bears gills. All copulatory parts are missing and Labbé did not describe the length of the muscular sac or the length of the spine of the accessory penial gland. Labbé (1934a: 213, fig. 46) described two muscular sacs instead of just one, but that could not be confirmed here. At least three Peronia species are present in West Papua (Fig. 6). However, given the size of the holotype (20 mm long) and, importantly, its intestinal loops of type I with a transitional loop at 4 o'clock (Fig. 80C), Onchidium astridae is regarded as a junior synonym of P. verruculata (Tables 1, 4, 6). Note that the number of papillae with dorsal eyes could not be counted on the preserved holotype. According to Labbé, Onchidium astridae is close to Onchidium vaigiense and O. steenstrupi, but both names refer to Marmaronchis vaigiensis, a species which belongs to a distinct genus (Dayrat et al. 2018).

The original description of Peronia gaimardi was based on two specimens from Vanikoro, Solomon Islands, which were found at the MNHN, and one specimen from Djibouti, which could not be located. The type locality is Vanikoro, locality of the lectotype designated in the present study. Our molecular data demonstrate that Peronia verruculata (unit #1) is present in Vanikoro, but P. peronii and P. platei could also be found there (Fig. 6). Given the intestinal loops of type I (with a transitional loop at 5 o'clock) observed in the lectotype (Fig. 80F), P. gaimardi is regarded as a synonym of P. verruculata (Tables 1, 4, 6). The male parts of the lectotype are missing and Labbé's description of the copulatory apparatus is confusing because it is based indiscriminately on individuals from both Vanikoro and Djibouti.
His measurement of the spine of the accessory gland (8 mm long) is most likely a mistake. In the present study, the longest spine (5 mm long) was found in the lectotype of P. fidjiensis (a synonym of P. peronii) from Fiji. Also, the lectotype of P. gaimardi is only 44 mm long, which would make it a very small individual of P. peronii. Given the large size (80 mm long, according to Labbé) of the paralectotype from Djibouti, it most likely belongs to P. madagascariensis, a species present there and for which large specimens are known (Table 4). It would imply that Labbé mistook its intestinal loops of type V for a type I, a mistake he often made. Marcus and Marcus (1970: 214) wrote that P. gaimardi might be a junior synonym of P. verruculata based on information from the original description.

Peronia anomala, originally described from the Red Sea, is regarded as a junior synonym of P. verruculata because, contrary to what Labbé indicated in the original description, Peronia anomala is characterized by intestinal loops of type I (Fig. 86B). It is assumed in this work that there is only one species of Peronia slugs with intestinal loops of type I in the Red Sea, although fresh material from the Red Sea may show that there is more than one species. Marcus and Marcus (1960: 881) suggested that P. anomala could be a synonym of P. verruculata and that intestinal loops of both types I and II are found in P. verruculata, but intestinal loops are only of type I in P. verruculata and there are no intestinal loops of type II in Peronia. Maniei et al. (2020a: table S1) took Labbé's description for granted and considered that P. anomala was characterized by intestinal loops of type II.

The type specimens used by Labbé for the original description of Paraperonia gondwanae belong to several species, because our data show that slugs with intestinal loops of types I and V necessarily belong to distinct species. The application of the name Paraperonia gondwanae is determined by the lectotype from Bombay (MNHN-IM-2000-33681) with intestinal loops of type I (Fig. 84A). Paraperonia gondwanae applies to P. verruculata and, more specifically, to the populations of the mitochondrial unit #4 from western India and Pakistan (Fig. 6, Tables 1, 6). The paralectotypes from the Red Sea with intestinal loops of type I also belong to P. verruculata: one "e" paralectotype from the Red Sea (MNHN-IM-2000-33688), and two "d" paralectotypes from Suez (MNHN-IM-2000-33684). The paralectotypes with intestinal loops of type V belong to P. madagascariensis: one of the "a" paralectotypes from Bombay (MNHN-IM-2000-33682) and one of the "c" paralectotypes from Suez (MNHN-IM-2000-33683). The large specimen with intestinal loops of type I from Mauritius (MNHN-IM-2000-33686), which may or may not be part of the type material of P. gondwanae, likely belongs to P. peronii.

Scaphis viridis was described by Labbé based on three syntypes (four according to the original description) from Thursday Island, in the Torres Strait, Australia. The presence of P. verruculata in the Torres Strait has not been positively demonstrated with fresh material. However, P. verruculata is the only species we found in northeastern Queensland (up to Cairns, 16°S). None of the Peronia slugs we collected north of Bowen (20°S) were individuals of P. sydneyensis, which is thought to be distributed only from southern Queensland down to New South Wales (Sydney) and eastwards to New Caledonia.
More importantly, both the original description (Labbé 1934a: 207-208, figs 31-34) and the traits examined here in the lectotype confirm that S. viridis applies to P. verruculata (Table 4): intestinal loops of type I (Labbé 1934a: fig. 32), muscular sac of the accessory penial gland 14 mm long (Labbé) and 15 mm long (lectotype), spine of the accessory penial gland 1 mm long (Labbé) and 1.7 mm long (lectotype), retractor muscle attaching at the posterior end of the visceral cavity. Because those traits are only compatible with the anatomy of P. verruculata, S. viridis is regarded here as a junior synonym of P. verruculata and applies to the unit #1 (Tables 1, 6). Finally, a total of 13 dorsal papillae with eyes was observed in the lectotype; more may have faded with time. Labbé only compared S. viridis with Peronia acinosa, a nomen dubium which may or may not refer to an onchidiid species (see general discussion).

There are three Peronia species in New Caledonia, the type locality of Scaphis carbonaria (Fig. 6). DNA sequences of individuals from New Caledonia belong to two species in our molecular data set (P. verruculata and P. sydneyensis). Although our molecular data do not include any specimen of P. peronii from New Caledonia, it is present there based on the rest of its distribution (it is found all the way to Fiji and Tonga; Fig. 6) and on an old specimen from a historical museum collection (ANSP 203028). Two characters in Labbé's original description are problematic. The penis, described as "wide and short, without hooks" (Labbé 1934a: 209, our translation), is absolutely incompatible with Peronia, in which the penis is thin, elongated, and always with hooks in the distal region. The absence of dorsal eyes on the notum is also quite perplexing. The notum of the holotype is in poor condition and its dorsal eyes cannot be seen, likely because their black color faded. However, dorsal gills are clearly present on the notum and there is no doubt that S. carbonaria applies to a Peronia species. Based on the length of the muscular sac of the penial accessory gland (10 mm), S. carbonaria is not a junior synonym of P. peronii. However, its muscular sac and its intestinal loops of type I with a transitional loop oriented at 4 o'clock are compatible with both P. verruculata and P. sydneyensis (Table 4). The length of the spine helps distinguish the two species, but Labbé did not mention it and it is missing in the holotype. Therefore, strictly speaking, S. carbonaria should be regarded as a nomen dubium. However, because there are many older names available for the unit #1 of P. verruculata (Table 6), S. carbonaria can be regarded as another junior synonym of P. verruculata. It would make no sense to apply it to P. sydneyensis because several important organs (the penis, the spine of the penial accessory gland, the radula) are missing in the holotype and because Labbé's original description is problematic and incomplete.

Scaphis gravieri was described originally based on types from Mayotte, Zanzibar, and Djibouti. The application of Scaphis gravieri is now based on the lectotype from Mayotte (MNHN-IM-2000-33695), with intestinal loops of type I (Fig. 85A). Our data do not include fresh material from Mayotte, but Mayotte is located between Madagascar and Mozambique, where P. verruculata (unit #5) is present. Therefore, S. gravieri is regarded as a junior synonym of P. verruculata (Tables 1, 4, 6). Note that P.
madagascariensis, a distinct species with intestinal loops of type V, is also expected to be present in Mayotte, even though it has not been recorded there so far (Fig. 6). The presence of P. verruculata in Zanzibar (locality of some paralectotypes of S. gravieri) is possible but needs to be confirmed with fresh material. Additional, non-type specimens from Zanzibar were examined (MNHN-IM-2014-7989, MNHN-IM-2014-7990): their intestinal loops are of type I with a transitional loop at 5 o'clock. Therefore, those specimens cannot belong to P. madagascariensis (intestinal loops of type V) or P. peronii (intestinal loops of type I with a transitional loop oriented between 12 and 3 o'clock), and thus likely belong to P. verruculata. Peronia verruculata is expected to be present in Djibouti (locality of some paralectotypes of S. gravieri), but that still needs to be demonstrated with fresh material from the northwestern Indian Ocean (Somalia, Yemen, Oman) as well as from the Red Sea and the Persian Gulf.

Pieces of possibly up to three syntypes of Scaphis tonkinensis were located at the MNHN (MNHN-IM-2000-33700) but they are useless, poorly preserved, unidentifiable pieces of tissue. Determining the status of S. tonkinensis thus relies entirely on Labbé's original description. Given that P. verruculata (unit #1) is the only species known in Vietnam, and that several characters provided by Labbé (1934a: 213) match its anatomy (muscular sac 12 mm long, intestinal loops of type I), S. tonkinensis is regarded as a junior synonym of P. verruculata (Tables 1, 4, 6).

No type material could be located for Scaphis lata. Determining the status of S. lata thus relies entirely on Labbé's original description. Given that P. verruculata (unit #1) is the only species known in Vietnam, and that several characters provided by Labbé (1934a: 213) match its anatomy (muscular sac 8 mm long, intestinal loops of type I), S. lata is regarded as a junior synonym of P. verruculata. Labbé mentioned the presence of dorsal gills, and so at least some of the syntypes of S. lata were Peronia slugs. The fact that he also described intestinal loops of type II (which are absent in Peronia) means either that he made a mistake (all loops were of type I) or that some syntypes were not Peronia slugs.

Onchidium durum, originally described from the Red Sea, is regarded as a junior synonym of Peronia verruculata because, contrary to what was indicated in the original description, Onchidium durum is characterized by dorsal gills and intestinal loops of type I. It is presumed here that there is only one species of Peronia with intestinal loops of type I in the Red Sea. Labbé frequently confused types of intestinal loops; there are no well-documented cases of Peronia slugs with intestinal loops of type II.

Peronia persiae, originally described from the Persian Gulf, is regarded as a new junior subjective synonym of P. verruculata because its mitochondrial DNA sequences, represented by the GenBank "voucher LaFM7S" in our analyses, all cluster together within the unit #4 of P. verruculata (Fig. 2). An older name, P. gondwanae, already refers to the unit #4 of P. verruculata (Tables 1, 6). So, even in the hypothetical event that unit #4 would later need to be named as a distinct taxon (of subspecific or specific rank), P. persiae would still remain invalid because P. gondwanae would always take priority over it. The description of P. persiae by Maniei et al.
(2020a) is an example of the common but regrettable practice of creating new species names without a comprehensive revision, which almost inevitably leads to increasing the number of unnecessary synonyms (Dayrat 2005). Here are a few of the major methodological issues in the study by Maniei et al. (2020a). First, Maniei et al. (2020a) ignored the existence of many available Peronia species names, which is especially problematic in the case of names with type localities near the Persian Gulf (Table 1), such as Onchidium durum and Paraperonia jousseaumei with a type locality in the Red Sea, and Scaphis gravieri with a type locality in Mayotte. Second, Maniei et al. (2020a) decided to create a new name before the nomenclatural status of the other Peronia names was addressed. For instance, Maniei et al. (2020a: table S1) compared P. persiae with P. branchifera, P. ferruginea, P. gaimardi, and P. lata as if they were all valid names, but these names all refer to the unit #1 of P. verruculata (Tables 1, 6). Third, Maniei et al. (2020a) only examined specimens of P. persiae from the Persian Gulf, which means that, for comparison, they relied exclusively on the literature, which, as the present work shows, is plagued with taxonomic and anatomical errors. For instance, Maniei et al. (2020a: table S1) assumed that the intestinal loops of P. verruculata were of types I and II, but it is positively demonstrated here that the intestinal loops of P. verruculata are all of type I and that there are no loops of type II in Peronia. Fourth, apart from P. persiae, only P. verruculata and P. peronii are represented in the phylogenetic trees by Maniei et al. (2020a: figs 11, 12), exclusively based on sequences obtained from GenBank (many of which were misidentified). Most specimens in their phylogenetic trees are not even identified at the species level. Using DNA sequences to create a new species name while most species are not included in phylogenetic analyses is highly problematic.

Maniei et al. (2020b) used the same mitochondrial COI sequences as in Maniei et al. (2020a) to compare metabolites between the Peronia slugs they called P. persiae and one Peronia individual from Bangka Island, near Sumatra, Indonesia. That specimen from Bangka Island, identified as Peronia sp. 7 by Maniei et al. (2020a) and as P. verruculata by Maniei et al. (2020b), belongs to the unit #1 of P. verruculata: its COI (MK993397) and 16S (MK993396) sequences cluster within unit #1. Note that the GenBank accession numbers for COI and 16S are switched in Maniei et al.'s (2020a) Table 2. Maniei et al. (2020b) summarized their rationale for creating the name P. persiae as follows: "The ABGD test revealed that specimens of P. persiae form a separate clade (clade 2). Thus, the specimens from two localities of the Persian Gulf (Iran), i.e. Bandar Lengeh and Lavan Island, were considered as a distinct new species." Mitochondrial loci alone are not sufficient evidence to delineate species: molecular delimitation analyses can over-split species based on population structure, particularly when they are based on a single locus (Sukumaran and Knowles 2017). More importantly, very high intra-specific mitochondrial divergence has been repeatedly documented in several onchidiid genera (e.g., Goulding et al. 2018c; Dayrat et al. 2019a). Maniei et al. (2020b) argue that research on metabolites requires sound taxonomic knowledge.
That certainly is a commendable goal: indeed, any comparative work in any biological field should be based on correct taxonomy. Unfortunately, P. persiae is a junior synonym of both P. gondwanae and P. verruculata (Tables 1, 6). So, the metabolites compared between "P. persiae" and "P. verruculata" are merely intra-specific differences (within P. verruculata) due to the long geographic distance (between the Persian Gulf and Bangka Island) as well as, most likely, different diets: in fact, Maniei et al. (2020b) acknowledged in their introduction that numerous biotic and abiotic factors influence the chemical composition. Concluding anything about specific differences in metabolites among Peronia based on specimens from only two regions, one of which is represented by a single individual, is, to say the least, premature. In order to demonstrate that distinct metabolites are found in distinct species, one needs to study actually distinct species, i.e., species that have been reliably identified, and one also needs specimens of the same species from different habitats and from different locations. It is our hope that the present comprehensive taxonomic revision will help physiologists, biochemists, ecologists, etc., to identify Peronia slugs correctly.

Some comments are also needed regarding the original anatomical description of P. persiae by Maniei et al. (2020a). According to Maniei et al. (2020a: 510, fig. 6, table S1), the intestinal loops of P. persiae are of type II, but they are without doubt of type I: the transitional loop is oriented at ~ 5 o'clock, as in intestinal loops of type I (Fig. 1). The radular formulae provided by Maniei et al. (2020a: 509) fit well with what was observed here for the unit #4 of P. verruculata (Table 5), acknowledging individual variation: from 49 × 47.1.47 (in a live specimen 22 mm long) up to 71 × 87.1.87 (in a live specimen 65 mm long). The length of the spine of the accessory penial gland ("around 1.3 mm") reported by Maniei et al. (2020a: 513) is shorter than what was observed here (from 2.2 to 2.8 mm), but this trait is known to vary between individuals (Table 4). Maniei et al. (2020a: table S1) compared the shape of the tip of the spine of the accessory penial gland between species, but that trait varies greatly intra-specifically and is useless for distinguishing species. Finally, Maniei et al. (2020a: 513, fig. 8B) reported some "fork-shaped" penial hooks, which were also observed here in the unit #4 of P. verruculata (Fig. 100C).

Additional material (historical museum collections). A specimen from Tanimbar, Indonesia (WAM S26630) is identified as P. verruculata because of its accessory gland spine (1.5 mm long), its intestinal loops of type I (with a transitional loop at 3 o'clock), and its muscular sac (10 mm). Seven specimens from Zanzibar (MNHN-IM-2014-7989 and MNHN-IM-2014-7990) are also identified as P. verruculata because their internal anatomy is only compatible with that species (Table 4). Finally, specimens from the Persian Gulf (NHMD 635301) with intestinal loops of type I (with a transitional loop at 6 o'clock) demonstrate that there is more than one Peronia species in the Persian Gulf (Fig. 6). Indeed, based on our DNA sequences, P. madagascariensis (with intestinal loops of type V) is present in the Persian Gulf, and individuals with intestinal loops of type I must belong to a different species. Given that P.
verruculata is known from Pakistan and western India (unit #4), eastern Africa (unit #5), and the Red Sea, it most likely lives in the Persian Gulf too. The fresh material recently described as P. persiae by Maniei et al. (2020a) confirms with molecular data the presence of the unit #4 of P. verruculata in the Persian Gulf (Fig. 2). In addition, several historical specimens preserved at various institutions were examined for the present study. They are discussed below in the secondary literature section because they were studied by previous authors.

Secondary literature. JE Gray (1850: 117) and Adams and Adams (1855: 235) did not mention Onchidium verruculatum in their list of Peronia species names. That might seem surprising because they transferred to Peronia all slugs with "radiating processes" (Gray 1850: 117) or "arbusculiform tufts" (Adams and Adams 1855: 234) on the dorsum. […] (NHMD 635300). Those specimens are important historically because they were mentioned by several authors (see below). Given their size (35/28 to 20/15 mm), their digestive system (type I with a transitional loop oriented at 6 o'clock, in the largest individual), and the size of their accessory gland spine (1 mm in the largest individual), those specimens belong to P. verruculata, but could potentially belong to more than one mitochondrial unit (Table 4). Mörch (1872a: 28; 1872b: 325) first mentioned them as Peronia mauritiana. Semper (1880: 255) identified them as O. verruculatum. Bergh (1884a) described one of them in detail (see below). Hoffmann (1928: 44, 73) also listed them in his material examined for O. verruculatum.

Schmeltz (1874: 96) listed Peronia verruculata from Samoa in a catalog of the Museum Godeffroy. This is possibly a record of P. peronii, although P. platei could also live there (Fig. 6). Ihering (1877: 230-237, pl. IV, fig. 3) described the nervous system of Peronia verruculata but did not provide any information on the specimens he examined. It is impossible to determine what Peronia species he actually studied. Fischer and Crosse (1878: 689-690, pl. XXXI, figs 13-15) briefly described the radula of specimens they identified as Onchidium (Peronia) verruculatum from New Caledonia. There are three Peronia species in New Caledonia, and it is not possible to determine what species they examined. Semper (1880: 255-257, pl. 22, figs 3, 4; 1882: pl. 21, fig. 1) re-described O. verruculatum based on specimens from a variety of localities (Red Sea, East Coast of Africa, Nicobar, Ambon, eastern Australia, Philippines). His written description mostly focuses on traits that are not informative for species identification (e.g., number of dorsal papillae, number of dorsal eyes, radular teeth). Some of Semper's records of P. verruculata most likely are correct, given the geographic origin of the material (Fig. 6): Ambon, Philippines, and Cape York (Queensland, Australia). Some other material could be a mix of more than one species: P. madagascariensis and P. verruculata in the Red Sea and eastern Africa; P. verruculata and P. sydneyensis in MacKay, Queensland. Semper's material from Brisbane (27°S) most likely was part of P. sydneyensis (Fig. 6). Finally, Semper's specimen from Nicobar was part of some material collected during the Galathea Expedition and first reported by Mörch (1872a: 28; 1872b: 325) as Peronia mauritiana (NHMD 635300). Those specimens, re-examined for the present study, belong to P. verruculata (see above). Bergh (1884a: 148-151, pl. VII, figs 7-12, pl. VIII, fig.
14) described in detail the anatomy of an individual of O. verruculatum from Nicobar. The animal size (33/23 mm) and the size of the accessory penial gland spine (1.76 mm) match the anatomy of P. verruculata (unit #1) well. This specimen was part of a group of specimens collected during the Galathea Expedition in Sambelong, Great Nicobar, which were examined for the present study (NHMD 635300). Their size (35/28 to 20/15 mm), their digestive system (type I with a transitional loop oriented at 6 o'clock, in the largest individual), and the length of their accessory gland spine (1 mm in the largest individual) are also compatible with P. verruculata. However, those specimens could potentially belong to more than one mitochondrial unit (Fig. 6). Von Martens (1897: 126) mentioned Onchidium verruculatum from both Ambon and Timor with no description. Our molecular data indicate that Peronia verruculata does live in Ambon and Timor. However, Peronia peronii also lives in Timor and likely lives in Ambon too. Farran (1905: 358-359, pl. VI, figs 13-22) described a Peronia slug he identified as Onchidium verruculatum from the Gulf of Mannar based on one preserved specimen. Given the specimen size (31/34 mm) and the length of the spine of the penial accessory gland (2.8 mm), it is likely a record of P. verruculata, but it is unclear whether it is the unit #2 (known from the Andaman Islands) or unit #4 (known from Mumbai, western India). It could also be a record of a small, immature individual of P. peronii (which has not been recorded from southern India but could possibly be found there). Our present study does not include any specimen from Sri Lanka or the Gulf of Mannar. Onchidium verruculatum is one of the eight onchidiid species mentioned by Hedley (1909: 369) from Queensland, Australia, without any reference to any material. It is impossible to know what species Hedley referred to. Our data show that there are two Peronia species in Queensland which overlap geographically (Fig. 6). The references listed by Bretnall (1919: 310) for Onchidium verruculatum are all commented on above already. Let us say a few words about the specimens he examined himself. Bretnall's (1919: 310) records of O. verruculatum from Broken Bay, New South Wales (33°30'S), are likely records of Peronia sydneyensis, the only Peronia species known in New South Wales (Fig. 6). Bretnall's (1919: 310) records of O. verruculatum from Port Curtis, Queensland (ca. 23°30'S), could be records of P. sydneyensis but they could also include P. verruculata because the known southernmost locality of the mitochondrial unit #1 of P. verruculata is at ca. 21°S (see remarks on P. sydneyensis). The record of Onchidium verruculatum from Katsepy (Catsèpe), northwestern Madagascar, by Odhner (1919: 23) is within the geographical range of both P. verruculata (unit #5) and P. madagascariensis (Fig. 6). The voucher specimen, re-examined here (SMNH 180724), clearly belongs to P. verruculata because of its intestinal loops of type I (with a transitional loop at 6 o'clock). Hoffmann (1928: 72) listed many references for O. verruculatum, all of which (but one) are commented upon elsewhere already: comments on the references for Onchidium peronii, O. punctatum, and Peronia mauritiana can be found in our remarks on P. peronii; comments on the references for Onchidium ferrugineum and O. elberti can be found above, in our remarks on synonymies; Peronia alderi is regarded as a nomen dubium and is commented on in the general discussion.
Mörch's (1872a: 28; 1872b: 326) record of Peronia (Onchidiella) marmorata from the Nicobar Islands, which Hoffmann (1928: 72) included in his list of correct references for O. verruculatum, is commented on here: it is not possible to know to what species Mörch refers; Godwin-Austen (1895: 443) listed Mörch's record as Onchidium (Onchidiella) marmorata in a faunistic inventory of Nicobar and Andaman, without clarifying to what species that name was referring. At any rate, Lesson's (1831b) Onchidium marmoratum belongs to Marmaronchis (Dayrat et al. 2018). More importantly, Hoffmann (1928: 44) examined specimens from the collections in Stockholm and Copenhagen which he identified as O. verruculatum. Most of those specimens could be re-examined for the present study and are commented on here. Several specimens are confirmed here to belong to P. verruculata based on diagnostic anatomical traits (Table 4): the material from Karachi (SMNH 180721) belongs to the unit #4 of P. verruculata; the material from Hong Kong (SMNH 180707) and Queensland (SMNH 180712, 180713, 180714) belongs to the widespread unit #1; the material from Singapore (SMNH 180716) and the Java Sea (SMNH 180719, 180720, 180722) could belong to any of the three units (#1, #2, #3) present in the region. However, several specimens listed by Hoffmann (1928: 44, 72) clearly do not belong to P. verruculata (see remarks on each corresponding species): the specimen from Port Natal, South Africa (SMNH 180711) belongs to P. madagascariensis; the specimen from Sagami Bay, Japan (SMNH 180725) belongs to P. setoensis; and the specimen from Port Darwin, Northern Australia (SMNH 180715) belongs to P. willani. The Red Sea specimens from the Copenhagen collections listed as "Savigny leg., Mus. Marsil" belong to P. verruculata because of their intestinal loops of type I (NHMD 90791). The label in the jar says that they were obtained by the Copenhagen Museum in 1860 (journal entry) from Savigny and the museum of Marseille (erroneously spelled "Marsielle"). Given that the type material of O. verruculatum was originally illustrated by Savigny (1817), it is worth making it clear here that those specimens are not the type material of O. verruculatum (Hoffmann did not say they were). The type material of O. verruculatum is in Paris (MNHN-IM-2000-22941). The other specimens mentioned by Hoffmann (1928: 44, 73) could not be re-examined for the present study: the specimens from the Red Sea could potentially belong to P. verruculata or P. madagascariensis; the specimens from Tharangambadi (Tranquebar), southeastern India, most likely belong to P. verruculata; the specimens from New Caledonia could potentially belong to any of the three species present there; the specimens from Hawaii clearly belong to P. platei. All the references mentioned by Labbé (1934a: 192-193) for Peronia verruculata are already commented on above. Labbé (1934a: 193) blindly accepted the distribution provided by Hoffmann (1928: 44, 73), which was not accurate because, for instance, P. verruculata is not present in Hawaii (see above). Labbé (1934a: 193) mentioned intestinal loops of type II in one individual from the Red Sea, even though he did not list any material examined from the Red Sea. At any rate, those intestinal loops were most likely of type I; as mentioned above, Labbé often made that kind of mistake. For instance, Labbé (1934a: 196) described as P.
anomala a species with supposedly anomalous intestinal loops of type II, but the type material, re-examined here, clearly is characterized by loops of type I (Fig. 86B). The specimens examined by Labbé from the Philippines likely belong to P. verruculata, but the individuals from New Caledonia or New Guinea could belong to several Peronia species. Finally, so far, only P. peronii and P. griffithsi are positively known from Mauritius, and his record of P. verruculata there (as Ile de France) must not be taken for granted. The record of Onchidium (Peronia) verruculatum from Natal, South Africa (Connolly 1939: 454) likely is a record of P. madagascariensis, the only Peronia species so far known from South Africa. However, P. verruculata (unit #5) could also be found in northeastern South Africa because its southernmost known locality is in Maputo (ca. 26°S), very close to South Africa. Allan and Bell (1947: 152) and Allan (1950: 368) reported onchidiid slugs living in dead coral which they identified as Onchidium verruculatum from Moreton Bay, Brisbane, Queensland, Australia. Given its latitude (ca. 27°S), Brisbane is clearly in the range of P. sydneyensis and possibly of P. verruculata (unit #1) as well. Indeed, it is still unclear how far south P. verruculata is distributed in southeastern Australia, although we did not find it in Sydney, ca. 33°S (see remarks on P. sydneyensis). For the record of O. verruculatum from New South Wales by Dakin (1947: 144), see remarks on P. sydneyensis. Awati and Karandikar (1948) published a detailed anatomical study of a species they identified as Onchidium verruculatum based on material from the western coast of India. They mention four localities: Vengurla (ca. 15°50'N), Malvan (ca. 16°06'N), Mumbai (ca. 19°N), and Kathiawar (ca. 21°N). The illustration of the intestinal loops provided by Awati and Karandikar (1948: fig. 6) leaves no doubt about the fact that they examined individuals of P. madagascariensis, a species with intestinal loops of type V distributed from South Africa all the way to (at least) Mumbai. Whether a type V was observed by the authors in all the specimens, including those from the southernmost localities (Vengurla and Malvan), is unclear. The presence of intestinal loops of type V in all the specimens examined by Awati and Karandikar would mean that P. madagascariensis is found much farther south than Mumbai. If the authors did not notice that some intestinal loops were of type I, then they described two species under the name Onchidium verruculatum: P. madagascariensis and P. verruculata (Fig. 6). Baba (1958: 144) reported that some individuals of Onchidium verruculatum from the Tokara Islands (ca. 30°N), just south of Kyushu, were very large (up to 120 mm long), suggesting that they were P. peronii instead (see remarks on P. peronii). The smaller specimens, however, could be P. verruculata (unit #1) and possibly P. setoensis (see remarks on P. setoensis). The two species which Baba (1958: 21) seems to distinguish (as Onchidium and Onchidium verruculatum) from Misaki (ca. 34°N), near Osaka, could be P. verruculata (unit #1) and P. setoensis, which, based on our DNA sequences, are sympatric near the Seto Marine Laboratory, which is close to Osaka (Fig. 6).

According to Quoy and Gaimard (1825: 428, our translation), Onchidium planatum is "related to Onchidium peronii, with which it differs by its smaller size, its color [dirty greenish], and the shape and arrangement of the dorsal warts."
Also, the "extremely small eyes placed at the superior part of the tentacles" likely refer to the eyes at the tip of the ocular tentacles. The most striking trait of Peronia peronii, its dorsal gills, is not mentioned in the original description of O. planatum, and it is clearly indicated that the dorsal "warts" of O. planatum differ from those of P. peronii. So, based on the original description, one could say that O. planatum may or may not refer to an onchidiid species. Given that Quoy and Gaimard (1825: 429-430, pl. 66, fig. 9) were able to describe and illustrate as Onchidium secatum a slug that obviously is not an onchidiid, the name Onchidium planatum is regarded as a nomen dubium (which may or may not refer to an onchidiid). There is, at the MNHN, a specimen which is part of the type series of O. planatum (MNHN-IM-2000-33706). That specimen is accompanied by three labels: the oldest label says "Onchidium planum, Q. G. Freyc. p. 428., de Guam, MM Quoy et Gaimard, Expn Freycinet." The name "Peronia" was subsequently added on that oldest label in pencil. Because the oldest label clearly refers to Onchidium planatum described from Guam by Quoy and Gaimard (1825: 428), a recent label indicates that the specimen is a syntype of O. planatum, from Guam. The third label says "Oncidium Peronii, Guam, Quoy et Gaimard, A. Labbé, dét. 1933," suggesting that Labbé re-identified that specimen at some point as Peronia peronii even though he listed it as part of the material he examined for his re-description of Onchidium planatum. The specimen is now completely destroyed and poorly preserved: there are only two pieces of notum of which the length of 55 mm matches the original description; the oral area is totally destroyed; all internal organs are missing; no dorsal gills can be seen, but possibly because the notum is so poorly preserved; and it is unclear if a peripodial groove is present or not. Labbé (1934a: 225), who did not seem to realize that he was looking at one of the syntypes of O. planatum, mentioned several internal characters suggesting that O. planatum refers to an onchidiid species (intestinal loops of type I, accessory penial gland present with a muscular sac) although whether Labbé did see those structures or not remains an open question (Labbé often described and even drew structures that he could not have seen). The lack of dorsal eyes and gills could be due to the poor preservation, in which case a good guess would be that the (destroyed) syntype of O. planatum (MNHN-IM-2000-33706) belongs to a Peronia species. However, the fact that no dorsal gills and no dorsal eyes can be seen at all and that it is extremely unclear whether there is a peripodial groove or not seems to suggest that O. planatum may not even refer to an onchidiid species. It is very possible that the type series included more than one species. Semper (1882: 289) listed O. planatum as a problematic species name. Hoffmann (1928: 69, 84-85) thought that it was a valid Onchidium species name with Onchidella tabularis Tapparone Canefri, 1883 and Onchidium (Oncis) applanatum Simroth, 1920 as synonyms. Onchidella tabularis is a nomen dubium although it is clear that it does not refer to an Onchidella species (Dayrat et al. 2016: 37). Onchidium applanatum is a valid Platevindex species name. Platevindex applanatum was described only one jar at the MNHN with the following information (the label is not the original label): "Oncidiella, Tongatabou, Mrs. Quoy et Gaimard, 1829." 
The sizes provided by Quoy and Gaimard (13 to 15 mm long), likely for live animals, approximately match the preserved specimens, considering preservation. The large syntype was dissected prior to the present study and was found empty, with no internal organs. The small syntype, still intact, was opened for the present study. It is an immature individual with no reproductive parts. Its intestinal loops of type I, the lack of rectal gland, and the lack of dorsal gills all indicate that it belongs to a Wallaconchis species. However, because the penial apparatus could not be checked, Onchidium cinereum is regarded as a nomen dubium, even though there is only one Wallaconchis species known so far in the southwestern Pacific Ocean. Past authors transferred O. cinereum to Peronia (Oken 1834a: 287) or Onchidella (e.g., Gray 1850: 117; Adams and Adams 1855: 234), or just kept the original combination (Semper 1882: 286-287; Plate 1893: 142; Bretnall 1919: 319; Hoffmann 1928: 68, 81).

Peronia alderi JE Gray, 1850 was created by JE Gray (1850: 117) for a slug illustrated by his wife ME Gray (1850: pl. 226, fig. 3) and which Alder had apparently identified as P. punctata in a manuscript: the only information associated with that illustration says "P. Alderi. P. punctata, Alder, MSS, t. 226. f. 3." That slug clearly belongs to Peronia, based on the presence of dorsal gills. Alder and Hancock (1855: 34) briefly mentioned "Onchidium punctatum (Peronia Alderi, Gray)" in the context of a comparison between the dorsal gills in onchidiids and those in nudibranchs. However, given that no type locality is indicated and that no type specimen could be located (of which the label could have potentially indicated the type locality), Peronia alderi must be regarded as a nomen dubium (Table 1). Semper (1882: 268) transferred P. alderi to Onchidium but could not make any decision regarding its status because of insufficient data. Even though he does not seem to have examined any material, and for unclear reasons, Hoffmann (1928: 68, 72) mentioned New Guinea and the Torres Strait as records for Peronia alderi, which he regarded as a synonym of Onchidium verruculatum.

Peronia acinosa was described by Gould (1852: 291-292; 1856: pl. 21, fig. 384a), based on an unspecified number of type specimens from the Fiji Islands. Peronia acinosa may or may not refer to an onchidiid species, mostly because its long ocular tentacles lack eyes at their tip (some onchidiids illustrated by Gould distinctly have ocular tentacles with eyes at their tip), its color would be very unusual for an onchidiid (deep beryl-green dorsum and slatey violet foot), and it is "everywhere closely covered with large rounded papillae" (which are not characteristic of onchidiids). Also, the type material could not be located. Johnson (1964: 36) could not find it either. Therefore, Peronia acinosa is regarded here as a nomen dubium (Table 1). Adams and Adams (1855: 234) transferred P. acinosa to Onchidella. Bretnall (1919: 326) thought that P. acinosa was a valid name, although he admitted that data were insufficient. Hoffmann (1928: 68, 102) rejected that P. acinosa could be an Onchidella, questioned that it could be a Peronia, and proposed (with a question mark) that it could be a synonym of Onchidina australis. Given that, for instance, Gould described large rounded papillae, P. acinosa clearly does not refer to Onchidina slugs. […] the penis and deferent duct, which remain in a small vial. The female (hermaphroditic) posterior parts are still in place inside.
The digestive system was destroyed and the type of the intestinal loops could not be determined. Semper did not indicate the length of the spine of the accessory penial gland nor the length of the muscular sac, which is missing. Without any of those critical characters, it is impossible to determine the application of Onchidium nebulosum, especially given that both P. verruculata and P. peronii are known to be present in Palau and that both P. okinawensis and P. platei could potentially be there as well. As a result, O. nebulosum is regarded as a nomen dubium even though it is clear that it applies to a Peronia species (Table 1). Plate (1893: 171-172) described a "medium-sized" specimen which he identified as Onchidium nebulosum, from Pohnpei, Micronesia, which is 2600 km east of Palau. Plate's description is problematic for at least two reasons. First, the presence of dorsal gills is not mentioned, which means that it is not certain that Plate examined a Peronia individual. Second, Plate indicated a series of traits and measurements (intestinal loops of type I, muscular sac of the accessory penial gland 11 mm long, spine of the accessory penial gland 2.5 mm long, retractor muscle of the penis inserting near the heart) but those cannot be compared to the original description of Onchidium nebulosum because Semper did not mention them. Thus, there is no reason to assume that Plate examined what Semper had originally described as Onchidium nebulosum. If Plate examined a specimen with dorsal gills, then it possibly was an individual of P. okinawensis, from Okinawa, which is 3800 km west of Pohnpei. Some of the characters seem to match (Table 4). It is unlikely, however, that Plate examined a specimen of P. platei, because most characters do not match (Table 4). Finally, it is not excluded that Plate examined a Paromoionchis or a Laspionchis individual instead (with no dorsal gills). Bretnall (1919: 310-311), Hoffmann (1928: 71), and Labbé (1934a: 224) assumed that Plate's identification was correct and accepted Palau and Pohnpei as two records of O. nebulosum, but did not add any new material.

Onchidium multiradiatum Semper, 1882 refers to an onchidiid species which may or may not belong to Peronia, but it is regarded as a nomen dubium because the type locality is unknown (Table 1). Semper (1882: 269) mentioned two individuals in the original description. One syntype, 30/22 mm, was located (ZMB/Moll 39026): the male and female parts are missing and the region of the male opening is partly destroyed; the radula is still present but the type of the intestinal loops cannot be determined; dorsal gills are not obvious but are present. Plate (1893: 141) merely mentioned Onchidium multiradiatum as an available species name. Both Hoffmann (1928: 79-80) and Labbé (1934a: 225) listed Onchidium multiradiatum as a valid species name, exclusively based on Semper's information. Hoffmann also briefly compared it to Onchidium griseum, which seems to refer to a species of Paromoionchis but also is a nomen dubium because its type locality is unknown (Dayrat et al. 2019: 70).

Quoya indica, the type species, by monotypy, of the genus Quoya Labbé, 1934a, was originally described based on three specimens for which, according to Labbé, there was no information other than the locality, "Mer des Indes," i.e., the Indian Ocean. Because the type locality is too vague, Quoya indica is regarded as a nomen dubium.
Three specimens (16/8, 10/8, and 7/5 mm) were found (MNHN-IM-2000-33679), which seem to match the material used by Labbé to describe Quoya indica. The only information on the labels tells us that they are from the "Mer des Indes." Neither a collector nor a collecting date is indicated. As often with Labbé, no species identification is indicated either. Those three specimens possibly are the syntypes of Quoya indica. They have dried out and are very hard and poorly preserved. However, dorsal gills are present on the largest specimen and possibly on the smallest specimen too. Labbé (1934a: 216, fig. 51) described a double male opening (with the openings of the penis and of the accessory penial gland being separated). This could not be confirmed and is by no means a trait of generic value. Indeed, the opening of the penis and the opening of the accessory penial gland occasionally appear to be separated due to preservation (when the vestibule is everted). Internal characters could not be checked. In particular, the presence of intestinal loops of type V (not illustrated by Labbé) and of an accessory penial gland (Labbé 1934a: fig. 53) could not be confirmed. There is at the MNHN another jar (numbered "31" on an old label) with a single specimen from the "Mer des Indes." However, that specimen is identified as "Oncidium," suggesting that it is not part of the type series of Quoya indica. Instead, it possibly is the non-type specimen that Labbé (1934a: 204) identified as Scaphis punctata. It is confirmed here that Quoya indica refers to a Peronia species (dorsal gills are present on the largest possible syntype) and that, therefore, Quoya is a junior synonym of Peronia. However, because the type locality is too vague and because no internal characters could be confirmed, Peronia indica is regarded as a nomen dubium (Table 1).

The species-group name hombroni, created before 1961 as a variety name, is now of subspecific rank (ICZN 1999: Article 45.6.4). Labbé (1934a: 202, fig. 23) described Paraperonia gondwanae hombroni based on one specimen from the Torres Strait, Australia. No jar clearly labeled as the type material of P. gondwanae hombroni was found at the MNHN. However, only one old jar was found at the MNHN with material collected from the Torres Strait by M. Hombron aboard the Astrolabe, as in Labbé's original description of P. gondwanae hombroni (MNHN-IM-2000-33694). Therefore, it most likely contains the holotype, by monotypy, of P. gondwanae hombroni. Unfortunately, there is little doubt that whatever is in that jar is not an onchidiid (it seems to be an empty notum of a nudibranch). Three explanations are possible. First, this material (MNHN-IM-2000-33694) is not the holotype of P. gondwanae hombroni, even though all the collecting information matches. Second, it was originally the holotype of P. gondwanae hombroni, but that holotype was switched by mistake with something completely different. Third, the material inside the jar is the material examined by Labbé, which would mean that he completely made up the description. Given the description of dorsal gills and of an accessory penial gland by Labbé (1934a: 202), it is likely that P. gondwanae hombroni applies to a Peronia species. However, the original description is problematic because Labbé (1934a: 202) writes that the intestinal loops are of type V and sometimes of type I, which is just impossible given that he examined only one individual. Because the type of the intestinal loops is uncertain, P.
gondwanae hombroni cannot be applied reliably to any Peronia species and is regarded as a nomen dubium (Table 1).

As discussed in detail in our revision of Paromoionchis, Onchidium straelenii Labbé, 1934b is a nomen dubium (Dayrat et al. 2019: 70-72). The examination of the two syntypes used by Labbé (RBINS I.G.9223/MT.3823) revealed that Labbé's original description is erroneous regarding several important characters. In particular, Labbé (1934a: 213) subsequently transferred Onchidium straelenii to his genus Scaphis based on dorsal gills being supposedly numerous and highly ramified. However, there are no gills at all on the dorsal notum, so it is clear that Onchidium straelenii cannot be classified in Peronia. Onchidium straelenii was arbitrarily placed in the genus Onchidium but it clearly should not be classified in Onchidium because several traits, such as the lack of a rectal gland, are incompatible with Onchidium. The generic placement of Onchidium straelenii remains unclear, hence its status as a nomen dubium (Table 1).

Species delineation

Peronia species cannot be distinguished externally, except for the longest individuals of P. peronii (more than 100 mm). However, they all differ internally, apart from P. platei and P. setoensis, which cannot be distinguished (Table 4). This situation is similar to what has been observed in several other onchidiid genera: in Wallaconchis, Laspionchis, Paromoionchis, and Peronina, species cannot be distinguished externally but they all differ with respect to their copulatory apparatus (Dayrat et al. 2019a, b; Goulding et al. 2018b, c). The special difficulty in Peronia is that species differ in minute details. In other genera, species differences tend to be obvious. For instance, an accessory penial gland is present in Peronina tenera and absent in P. zulfigari (Goulding et al. 2018c). In comparison, Peronia species may only differ with respect to the length of the spine of the accessory penial gland (Table 4). This has made it very difficult for past authors to interpret anatomical differences. Peronia species diversity has been interpreted in two opposite directions, both of which were unfortunately erroneous. At one end of the spectrum, Labbé considered that every single difference justified the creation of a new taxon name. As a result, while Hoffmann (1928) accepted only six species of slugs with dorsal gills, all still classified in Onchidium along with 34 species without dorsal gills, Labbé (1934a: 187) thought that there were five genera and 21 species of slugs with dorsal gills. However, the present monographic revision shows that only one of all Labbé's new names is valid (Table 1): Peronia madagascariensis. At the other end of the spectrum, more recent authors accepted only two species, P. peronii and P. verruculata, which they could not even really distinguish (e.g., Solem 1959: 38-39; Marcus and Marcus 1970: 213-214; Britton 1984: 183). Peronia is a taxon for which the use of DNA sequences as an independent test for species delineation has been indispensable. Without DNA sequences, it would have been impossible to determine which anatomical traits differ or not among species, which is perfectly illustrated by the species diversity of Peronia in Japan. Past authors somehow sensed that there was more than one species in Japan but could not tell them apart (e.g., Baba 1958; Katagiri and Katagiri 2007; Ueshima 2007). Our data show that there are four species in Japan, two being endemic (Fig. 6).
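The kind of computation underlying such DNA-based tests can be illustrated with a minimal sketch (not the pipeline used in this revision; the sequences and specimen names below are made-up placeholders). Uncorrected pairwise p-distances computed from aligned COI sequences are the raw ingredient of threshold-based delimitation methods such as ABGD, which, as the next paragraph argues, must not be used blindly.

    # Minimal sketch (illustration only): uncorrected pairwise p-distances
    # from aligned COI sequences. Placeholder data, not real Peronia sequences.

    def p_distance(seq1: str, seq2: str) -> float:
        """Proportion of differing sites between two aligned, equal-length sequences."""
        if len(seq1) != len(seq2):
            raise ValueError("sequences must be aligned and of equal length")
        return sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)

    aligned_coi = {
        "specimen_A": "ATGGCATTACTAGGA",
        "specimen_B": "ATGGCATTACTAGGT",
        "specimen_C": "ATGACACTACTGGGA",
    }

    names = sorted(aligned_coi)
    for i, n1 in enumerate(names):
        for n2 in names[i + 1:]:
            print(f"{n1} vs {n2}: {p_distance(aligned_coi[n1], aligned_coi[n2]):.3f}")
    # Large distances flag candidate units, but a single mitochondrial locus
    # can over-split structured populations, so nuclear markers and anatomy
    # are needed as independent tests.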
After species are delineated using DNA sequences, their anatomical differences become clear: for instance, P. setoensis is the only species in Japan with intestinal loops of type V (Table 4). The present study also demonstrates that even though mitochondrial COI sequences are necessary, they must not be used blindly. Indeed, if one were to take into account only mitochondrial DNA sequences (Figs 2, 5), one might think that there are up to 16 Peronia species: P. verruculata could be split into five distinct species, and P. peronii, P. platei, and P. griffithsi could be split into two species each. However, our two other data sets (nuclear DNA sequences and comparative anatomy) strongly suggest that those merely are cases of species with high genetic structure: all individuals of P. platei, for instance, are completely indistinguishable anatomically, and nuclear ITS2 sequences do not support the existence of two distinct taxa in P. platei. The long geographic distances between sampling sites (e.g., Hawaii and Papua New Guinea for P. platei) may partly explain the high intra-specific genetic structure. In cases where mitochondrial units are sympatric (P. verruculata units #1 and #2 overlap in southeastern Sumatra, and P. verruculata units #1 and #3 overlap in Singapore), genetic distances could be explained by the fact that those mitochondrial lineages were isolated for some time before coming into contact again. The importance of investigating nuclear DNA sequences as well as comparative anatomy has been demonstrated in other onchidiid genera, especially in Paromoionchis, Wallaconchis, and Peronina (Dayrat et al. 2019a; Goulding et al. 2018b, c). Species are not externally cryptic in all onchidiid genera: the six Melayonchis species and the four Onchidium species can all be distinguished in the field on external traits (Dayrat et al. 2019c). When species can be distinguished externally, they are also unequivocally supported by both mitochondrial and nuclear sequences, i.e., DNA sequences do not support any cryptic diversity within those Melayonchis and Onchidium species. True cryptic diversity remains exceptional in onchidiids: Marmaronchis vaigiensis (Quoy & Gaimard, 1825) and M. marmoratus (Lesson, 1831) cannot be distinguished externally or internally (Dayrat et al. 2018).
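The mitochondrial screening step described above — grouping COI sequences into candidate units before testing them against nuclear markers and anatomy — can be illustrated with a short script. The sketch below is purely illustrative and assumes an aligned COI FASTA file; the file name and the distance threshold are placeholders, not values used in this revision:

```python
# Illustrative sketch: cluster aligned COI sequences into candidate units
# by uncorrected p-distance. File name and threshold are hypothetical.
from itertools import combinations

def read_fasta(path):
    """Parse a FASTA file into {name: sequence}."""
    seqs, name = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                seqs[name] = []
            elif name:
                seqs[name].append(line.upper())
    return {n: "".join(parts) for n, parts in seqs.items()}

def p_distance(a, b):
    """Uncorrected p-distance over aligned, ungapped positions."""
    pairs = [(x, y) for x, y in zip(a, b) if x in "ACGT" and y in "ACGT"]
    return sum(x != y for x, y in pairs) / len(pairs) if pairs else float("nan")

def single_linkage_units(seqs, threshold=0.03):
    """Merge sequences into one unit whenever any pair falls under the threshold."""
    units = {name: {name} for name in seqs}  # start with singletons
    for a, b in combinations(seqs, 2):
        if p_distance(seqs[a], seqs[b]) <= threshold:
            merged = units[a] | units[b]
            for member in merged:
                units[member] = merged
    return {frozenset(u) for u in units.values()}

seqs = read_fasta("peronia_coi_aligned.fasta")  # hypothetical input file
for i, unit in enumerate(sorted(single_linkage_units(seqs), key=len, reverse=True), 1):
    print(f"mitochondrial unit #{i}: {sorted(unit)}")
```

Units recovered this way are only hypotheses: as stressed above, the deep mitochondrial structure within P. platei or P. verruculata would pass such a screen and must then be confronted with nuclear sequences (e.g., ITS2) and comparative anatomy before being treated as distinct species.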
Types of intestinal loops

It is not an exaggeration to say that identifying the types of intestinal loops, originally defined by Plate (1893) and Labbé, has remained challenging for authors. A thorough re-examination of the specimens examined by Plate and Labbé in the context of the complete revision of the Onchidiidae has shown that even they were confused about intestinal types. For instance, Dayrat et al. (2019c: fig. 2) demonstrated that Plate's original definition of the type III was based on an erroneous number of dorsal loops in a specimen of Onchidium stuxbergi (Westerlund, 1883). Labbé also repeatedly made mistakes (see all the species remarks above), the most notorious being his original description of P. anomala: the specific name anomala was created to emphasize that the intestinal loops of that species were anomalous (i.e., of type II instead of type I as in most Peronia), but the intestinal loops of the type material of P. anomala are of type I (Figs 1, 86B). In that context, it is thus not too surprising that Maniei et al. (2020a) described the intestinal loops of P. persiae as of type II although they clearly are of type I, with a transitional loop at 5 o'clock (see above remarks on P. verruculata).

Hopefully, the method that Dayrat et al. (2019b: fig. 1; 2019c: fig. 2; 2019d: fig. 13) recently introduced to identify types of intestinal loops will help put an end to that confusion. This method is based on the coloration of different sections of the intestinal loops and, most importantly, takes individual variation into account (Fig. 1). It is very important to note that this method does not redefine the types of intestinal loops, it merely clarifies them, and the difference between the types I and II originally defined by Plate and Labbé is maintained. According to Plate and Labbé, type I is characterized by a transitional loop oriented to the right in dorsal view, at 3 o'clock (Dayrat et al. 2019b: fig. 1A), and type II is characterized by a transitional loop oriented to the left in dorsal view, at 9 o'clock (Dayrat et al. 2019b: fig. 1C). The reality is that the orientation of the transitional loop varies between individuals, but a left or right orientation of the transitional loop remains true (Fig. 1): in type I, the transitional loop is oriented between 12 and 6 o'clock (always to the right in dorsal view, as stipulated by Plate and Labbé); in type II, the transitional loop is oriented between 6 and 12 o'clock (always to the left in dorsal view, as stipulated by Plate and Labbé). Only the types I and V are found in Peronia (Table 4, Fig. 1): the transitional loop of type I is always oriented to the right (from 12 to 3 o'clock, or from 3 to 6 o'clock); there is no transitional loop in type V. To this day, there is no positive record (proven with an illustration) of intestinal loops of type II in Peronia. In the future, a few individuals may be shown to exceptionally possess a transitional loop oriented at 7 o'clock (which, strictly speaking, would correspond to a type II), but this has never been observed among the hundreds of Peronia specimens dissected for the present study.
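The clock-position rule above is effectively a small decision procedure and can be stated compactly in code. The following sketch merely restates that rule; the function name, the hour encoding, and the strict rejection of the boundary orientations at exactly 12 and 6 o'clock are illustrative choices, not prescriptions from the monograph:

```python
# Minimal sketch of the clock-position rule for intestinal loop types
# (function name and encoding are illustrative).

def loop_type(transitional_loop_hour=None):
    """Classify intestinal loops from the transitional loop's orientation.

    transitional_loop_hour: clock position (1-12) of the transitional loop
    in dorsal view, or None if no transitional loop is present (type V).
    """
    if transitional_loop_hour is None:
        return "V"                   # no transitional loop at all
    h = transitional_loop_hour % 12  # treat 12 o'clock as 0
    if 0 < h < 6:                    # strictly between 12 and 6: right side
        return "I"
    if 6 < h < 12:                   # strictly between 6 and 12: left side
        return "II"
    raise ValueError("12 and 6 o'clock are boundary orientations; re-examine")

assert loop_type(3) == "I"     # Plate and Labbé's canonical type I
assert loop_type(9) == "II"    # canonical type II
assert loop_type(5) == "I"     # e.g., the P. persiae loops discussed above
assert loop_type(None) == "V"
```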
Geographic distribution

The genus Peronia includes the two most widespread onchidiid species, P. verruculata and P. peronii, as well as species that are endemic to comparatively small areas, at least according to the current data (Fig. 6): P. setoensis and P. okinawensis are endemic to Japan, and P. willani is endemic to the Northern Territory. One reason may be the development mode. In Japan, on the eastern coast of Honshu, near Sagami Bay (ca. 35°N), Katagiri and Katagiri (2007) documented two Peronia species, one characterized by a planktotrophic development (called "Isowamochi") and the other characterized by a direct development (called "Minneawamochi"). Most likely, those species correspond to P. verruculata (unit #1) and P. setoensis, which are the only two Peronia species found north of 30°N in Japan (Fig. 6). Another reason may be that all species cannot compete ecologically with P. verruculata, one of the most abundant onchidiid species in the Indo-Malayan region (Paromoionchis tumidus is also extremely abundant, but it lives in mangroves, not in the rocky intertidal, although both P. verruculata and Paromoionchis tumidus often are found together on muddy sand). A third reason may be related to diversification history. The fact that several species (P. okinawensis, P. platei, P. setoensis, P. sydneyensis, P. willani) are characterized by narrow distribution ranges at the periphery of broadly-distributed species (P. griffithsi, P. peronii, P. verruculata) raises the question of whether peripatric speciation events may have occurred. Phylogenetic relationships of sister species suggest that P. okinawensis could have emerged peripatrically from P. peronii. As for the other species, it remains uncertain because relationships among clades E, F, and G are still unclear (Figs 2-4). Finally, given that they are sister species, it is most likely that P. willani and P. sydneyensis are the result of a recent allopatric speciation (the Torres Strait serving as a biogeographic barrier). In the future, it will be necessary to investigate the phylogenetic relationships of populations of P. verruculata from the regions from where no fresh material could be obtained, especially the northwestern Indian Ocean (the coasts of Somalia, Yemen, and Oman), the Persian Gulf, the Red Sea, and southern India. It will also be necessary to include fresh material from new localities for P. peronii (its distribution provided here is based on many specimens identified only based on anatomy). Dozens of new specimens of P. peronii may reveal some higher genetic structure within P. peronii, as observed in P. verruculata, given that both species are widely distributed. At the moment, the low level of genetic structure within P. peronii (compared to P. verruculata) may simply be due to the fact that our mitochondrial analyses include thirteen specimens of P. peronii while they include 102 specimens of P. verruculata. Populations of Peronia slugs also need to be investigated in southeastern Australia (southern Queensland and northern New South Wales) to determine more precisely the geographic range of P. verruculata. Also, it is possible that species that are endemic based on current data (P. okinawensis, P. setoensis, P. willani) will be found elsewhere and will thus be characterized by a wider range. Finally, it is not excluded that additional new species will be found. Ngô Xuân (Vietnam). A research permit was awarded to Benoît Dayrat in Singapore
Broad proteomics analysis of seeding-induced aggregation of α-synuclein in M83 neurons reveals remodeling of proteostasis mechanisms that might contribute to Parkinson's disease pathogenesis

Aggregation of misfolded α-synuclein (α-syn) is a key characteristic feature of Parkinson's disease (PD) and related synucleinopathies. The nature of these aggregates and their contribution to cellular dysfunction is still not clearly elucidated. We employed mass spectrometry-based total and phospho-proteomics to characterize the underlying molecular and biological changes due to α-syn aggregation using the M83 mouse primary neuronal model of PD. We identified gross changes in the proteome that coincided with the formation of large Lewy body-like α-syn aggregates in these neurons. We used protein-protein interaction (PPI)-based network analysis to identify key protein clusters modulating specific biological pathways that may be dysregulated and identified several mechanisms that regulate protein homeostasis (proteostasis). The observed changes in the proteome may include both homeostatic compensation and dysregulation due to α-syn aggregation, and a greater understanding of both processes and their role in α-syn-related proteostasis may lead to improved therapeutic options for patients with PD and related disorders.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13041-024-01099-1.

Introduction

The deposition of misfolded proteins is a characteristic pathological feature associated with dozens of human neurodegenerative diseases including Parkinson's disease (PD). PD belongs to a group of related neurodegenerative disorders called synucleinopathies, in which the primary pathology is the intracytoplasmic accumulation of α-synuclein (α-syn), typically in neurons and, in some cases, glial cells [1]. In PD, fibrillar α-syn aggregates appear in the form of Lewy bodies (LB) and Lewy neurites (LN), primarily in the substantia nigra pars compacta, resulting in the loss of dopaminergic neurons [2]. While most cases of PD are considered idiopathic, a genetic factor has been implicated in 5-10% of patients diagnosed [3]. The first genetic cause of PD identified was a point mutation (A53T) in the SNCA gene that encodes α-syn [4]. Since then, both point mutations and gene multiplications (duplication and triplication) of the SNCA gene have been reported in patients presenting with early-onset PD [5, 6]. The exact function of α-syn, a highly expressed, 14 kDa intrinsically disordered protein, however, remains poorly understood [7]. Under pathological conditions, α-syn monomers misfold, aggregate, and spread in a prion-like manner between interconnected neurons [8, 9]. The insoluble protein is thought to contribute to disease either by gain of an unknown novel toxic function or by loss of a normal endogenous function. Exogenously added misfolded α-syn can act as a template to initiate the misfolding and aggregation of endogenous α-syn in both cellular and animal models of PD [10-12]. Analysis of LBs suggests that phosphorylation of α-syn at serine 129 (pS129) is the dominant post-translational modification and is typically enriched in the detergent-insoluble fraction of PD cell and animal tissue lysates [13, 14]. Several studies have attempted to characterize other α-syn modifications and distinct protein components of LBs using mass spectrometry (MS)-based proteomics approaches to understand the process of LB formation and its contribution to disease [15-18].
In order to fully understand the underlying cell biology and pathogenesis of PD, many in vitro and in vivo models have been established capitalizing on some of the genetic causes linked to familial PD [19]. One such model is the M83 transgenic mouse line, which overexpresses the human form of A53T α-syn under the mouse PrP promoter [20]. The homozygous M83 mice start to develop dramatic motor symptoms at 8 months of age, and this coincides with the appearance of α-syn inclusions around 7 months of age. The hemizygous M83 mice also develop the same phenotype but have a much later age of onset [20]. The α-syn inclusions in the M83 mice are found in both neuronal cell bodies and processes and recapitulate many of the same biochemical and histological properties of human α-syn inclusions, including positive staining with pathologically exclusive α-syn antibodies, the presence of detectable ubiquitin modifications, positive Gallyas silver staining, detergent-insoluble high molecular mass α-syn aggregates, and α-syn fibrils that measure 10-16 nm wide [20]. Ultrastructural analysis of brain sections from M83 mice shows significant axonal degeneration and the presence of α-syn inclusions in these neurons [20]. The development of α-syn inclusions in M83 mice can be rapidly expedited with the intracerebral introduction of pathological α-syn purified from PD tissue or recombinant α-syn pre-formed fibrils (PFFs) generated in vitro [11]. M83 mice injected before the onset of symptoms, at 2-5 months, show central nervous system (CNS)-wide α-syn pathology including LBs and LNs as soon as 30 days post injection (dpi) [11]. Spreading of pathological α-syn inclusions to CNS regions distant from the site of injection is also evident, as well as a shortened lifespan [11].

Cellular processes that lead to and are disrupted by α-syn fibrillization, aggregation, and formation of LBs and LNs induced by PFF inoculation have been previously studied in vitro [10, 12]. Such in vitro models are essential to understand the biology behind disease pathogenesis and for genetic and small molecule-based drug discovery approaches looking for modulators of α-syn aggregation and pathology. It has been shown that the treatment of non-transgenic (wild-type) embryonic primary mouse neurons with PFFs generated from either full-length or truncated recombinant α-syn can recapitulate the formation of α-syn inclusions resembling those found in human synucleinopathies [12]. In these conditions, small insoluble α-syn aggregates develop starting at day 4 and continuing through day 7 post-PFF treatment, followed by the formation of LN-like structures by day 10 and eventually LB-like aggregates by day 14 and beyond [12]. Accumulation of the pathological α-syn aggregates coincides with loss of synaptic proteins and a concomitant decrease in functional connectivity of the neurons.
In-depth proteomic analysis represents a powerful approach for deciphering disease-dependent changes in model systems and tissues from human patients. Characterization of insoluble α-syn aggregates isolated from primary neurons post-PFF treatment has identified proteins that were also found in the insoluble fraction of human brain with synucleinopathy, providing confidence in the use of such in vitro models for research [16]. More recently, proteomic analysis of extracted, late-stage, insoluble α-syn inclusions from day 14 and day 21 post-PFF treatment of isolated primary neurons from non-transgenic mice was employed to further characterize LB formation in culture. This led to a more extensive understanding of the underlying cellular processes and molecular dyshomeostasis that contribute to LB formation and decline in neuronal health [21].

In this study, we used quantitative total proteomics and phospho-proteomics to extensively characterize temporal changes in the total and detergent-insoluble protein fractions isolated from neurons from M83 transgenic mice treated with recombinant α-syn PFF. We further employed protein-protein interaction-based network analysis to define the biological mechanisms altered due to α-syn aggregation to get a comprehensive understanding of specific mechanisms that can be targeted for rational drug design. Our results show broad changes in several key biological processes that may contribute to or be disrupted by the formation of α-syn aggregates over time. We specifically identified enrichment of several mechanisms regulating cellular proteostasis, including changes in several RNA binding proteins. It is our understanding that this is the first time that total proteomics and phospho-proteomics have been used to capture aggregation-specific changes in a PD primary neuronal culture model. Having a better understanding of the cellular landscape of primary neurons directly from the M83 transgenic mice will help us to improve the therapeutic relevance of the model and contribute to a synchronized translation from in vitro to in vivo work.

Dose- and time-dependent formation of α-syn aggregates by addition of recombinant α-syn PFFs in M83 mouse primary neurons

To understand the molecular and biological alterations associated with α-syn aggregation, we established a seeding-based model using primary cortical neurons from M83 transgenic mice that express the human A53T mutant (hA53T) α-syn under the mouse PrP promoter. The addition of exogenous α-syn PFFs can robustly induce aggregation of endogenous wild-type or overexpressed α-syn. Using a previously published and widely used protocol, we generated recombinant human α-syn fibrils [12]. We first evaluated the extent of aggregation of the endogenous hA53T α-syn in the M83 neurons after treatment with α-syn PFFs by staining for α-syn phosphorylated at serine 129 (pS129). Cortical neurons isolated from M83 transgenic mice were treated at days in vitro (DIV) 7 with human α-syn PFFs at 1, 0.5, 0.25, and 0.125 µg/ml concentrations, α-syn monomer at 1 µg/ml, and PBS vehicle control. After treatment for 7, 14, and 21 days, cells were fixed with methanol to remove soluble protein and immunostained for pS129-positive α-syn aggregates, microtubule-associated protein 2 (MAP2), a neuronal marker, and a nuclei stain (Fig. S1A). Small puncta of pS129-α-syn aggregates were detected in neurites after 7 days of PFF treatment (Fig. 1A).
After 14 and 21 days of PFF treatment, there was an overall increase in pS129-α-syn aggregate formation, and some aggregates were visible in the cell soma (arrow, top right, Fig. 1A). At all timepoints we observed a clear dose-dependent increase in pS129-α-syn aggregation with increasing concentrations of PFFs (Fig. 1B-D). The number of aggregates in the soma increased from 7 to 14 days after PFF treatment but did not increase further at 21 days after PFF treatment (Fig. 1E). We did not observe overt toxicity at any of the three timepoints, as we did not observe significant changes in MAP2 area (Fig. S1B). Furthermore, we observed a significant reduction in pS129-positive α-syn aggregates when we knocked down SNCA using pooled siRNAs against human SNCA (Fig. S1C and D). The hyperphosphorylated α-syn aggregates in the soma appear condensed, resemble LBs in human PD brains, and are also positive for p62 and ubiquitin, consistent with previously published literature (Fig. 1F and G) [16, 22-24]. Likewise, we did not observe a large difference in α-syn aggregation between 0.5 µg/ml and 1 µg/ml of PFFs, suggesting saturation around the 1 µg/ml PFF concentration. Thus, we observed a dose- and time-dependent increase in α-syn aggregation, with peak aggregation after 14 days of PFF treatment.

Quantitative total and phospho-proteomic analysis in total lysates from M83 neurons treated with α-syn PFFs

We wanted to better understand α-syn aggregation-mediated molecular changes to define specific disease-relevant signatures that could be used to identify novel drug targets or potential disease biomarkers. MS-based quantitative proteomics is a powerful technology that enables understanding of cellular and molecular mechanisms of disease. To this end, we used tandem mass tag MS (TMT-MS) to perform total and phospho-proteomics in M83 primary neurons treated with recombinant PFFs over time [25]. Individual 11-plex TMT-MS was used for total lysate analyses from two different timepoints: 7 days post-PFF treatment, when small α-syn aggregates were observed predominantly in the neurites, and 14 days post-PFF treatment, when we saw LB-like α-syn aggregation that did not increase with further incubation time [25, 26]. We chose 1 µg/ml PFF, with which we saw maximal pS129-α-syn soma aggregates, and compared to samples treated with PBS vehicle control (Fig. 1E). A high degree of overlap was observed between days for both proteins and phosphopeptides, with 5523 proteins and 14,227 phosphopeptides being consistently identified at both timepoints (Fig. S2A and B). Since there may be broad proteome changes due to maturity of the neuronal cultures over time, and a potential increase of glial cells during the later timepoints, comparisons were made within each individual timepoint for consistency. Principal component analysis (PCA) within each individual timepoint for both total and phospho-proteomics showed good clustering of replicates and separation by treatment (Fig. S3A-D). The total amount of aggregation increased substantially between 7 and 14 days post-PFF treatment. Some changes were observed in phosphorylated peptides 7 days post-PFF treatment. However, we observed differential expression of several total and phosphopeptides at 14 days post-PFF treatment compared to the PBS-treated group when using the cutoffs of |log2 fold change (log2FC)| > 0.5 and adjusted p-value (padj) < 0.05 (Fig. 2A-D).
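As a sketch of this thresholding step (assuming a differential-expression results table with log2FC and padj columns; the file and column names are placeholders, not the study's actual outputs):

```python
# Illustrative filtering at the cutoffs used in the text:
# |log2FC| > 0.5 and adjusted p-value (padj) < 0.05.
# Column names ("log2FC", "padj", "protein") are assumptions.
import pandas as pd

results = pd.read_csv("day14_pff_vs_pbs_total_proteome.csv")  # hypothetical file

hits = results[(results["log2FC"].abs() > 0.5) & (results["padj"] < 0.05)]
up = hits.loc[hits["log2FC"] > 0, "protein"].tolist()
down = hits.loc[hits["log2FC"] < 0, "protein"].tolist()
print(f"{len(up)} upregulated, {len(down)} downregulated")
```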
Therefore, we performed the follow-up computational analysis using data from the day 14 post-PFF treatment timepoint. First, we performed an analysis using Metascape, an online gene annotation and analysis resource [27]. Metascape analysis of all the upregulated total and phospho proteins showed affected processes related to RNA metabolism and microtubule organization (Fig. 2E), while heat shock factor protein 1 (HSF1) and actin-based processes were identified in the analysis of all downregulated proteins (Fig. 2F). The identification of RNA-related processes led us to further investigate RNA binding proteins (RBPs) in the dataset. When we specifically looked for overlap with the 1542 manually curated RBPs [28], we identified 64 and 36 RBPs among the up- and downregulated proteins, respectively (Fig. 2G and I). These proteins seem to show strong connectivity, as seen in the subnetworks generated using the String database, a database of known and predicted protein-protein interactions (Fig. 2G and I) [29]. We saw changes in pathways related to mRNA splicing, transport, and metabolism in the RBPs that were upregulated, while the downregulated RBPs showed changes in mRNA processing, mRNA metabolic process, ribonucleoprotein complex biogenesis, and translation (Fig. 2H and J). Taken together, this suggests a broad remodeling of the proteome due to aggregation that spans several pathways including the heat shock response, cytoskeletal mechanisms, and pathways regulated by RBPs.

Fig. 1 Characterization of α-syn aggregation in neurons isolated from M83 mice. A Recombinant human α-syn PFFs were added to DIV7 M83 neurons at 1.000, 0.500, 0.250, and 0.125 µg/ml concentrations, along with 1.000 µg/ml of α-syn monomer and PBS control. Cells were fixed at 7, 14, and 21 days post-treatment. Small puncta of pS129-α-syn inclusions (green) were initially detected in neurites 7 days after α-syn PFF treatment. At 14 and 21 days of PFF treatment, pS129-α-syn aggregates become more elongated and some were also found in the cell soma, shown by arrows. B-D Quantification using high-content image analysis showed significantly higher total pS129-α-syn aggregate formation at 7 (B), 14 (C), and 21 days (D) of PFF treatment in a dose-dependent manner, and no aggregation was observed with PBS or α-syn monomer treatment (N = 6 replicates). Data are mean ± SD. ****p < 0.0001 (ordinary one-way ANOVA). E Immunofluorescence quantification suggested that the number of aggregates in the cell soma was higher at 14 and 21 days after PFF treatment compared to 7 days. F-G pS129-α-syn aggregates were positive for autophagy markers p62 and ubiquitin (orange). Neurons were stained with MAP2 (purple) and nuclei with Hoechst (blue). Scale bars, 50 μm.
Network analysis to identify pathways impacted by α-syn PFF-mediated changes in proteomics and phospho-proteomics

We next performed a broader network analysis to determine the significance of the total and phospho-proteome changes observed. Protein-protein interactions (PPI) are key to the proper functioning of proteins and the regulation of diverse cellular activities. PPI data can be used to generate in silico networks where each protein is represented by a node, and each interaction is represented by an edge between two nodes. Nodes with a high degree of connectivity denote a hub. Direct interactions between seed genes represent the zero-order network, while interactions between seed genes and all nodes directly connected to the seeds represent the first-order network. These networks can be used as maps to identify specific disease-relevant clusters or changes that can elucidate the underlying molecular mechanisms misregulated during disease. We performed network analysis using the upregulated and downregulated total and phospho-proteomics at day 14 post-PFF treatment. Using an adjusted p-value < 0.05 and absolute values |logFC| > 0.5 and |logFC| > 1.0 as thresholds for total and phospho-proteomics respectively, four input lists from the PFF vs PBS comparisons were used (Fig. 2A-D; Supplementary Tables S1-S4). The proteins in the input lists were used as seeds to generate first-order PPI networks using ConsensusPathDB (CPDB), a resource which integrates public data from human, mouse, and yeast into one meta-database with over 600,000 human protein interactions reported [30]. To decipher the biological mechanisms affected by PFF-mediated α-syn aggregation, we looked for common proteins that overlapped in the four first-order networks. There were 1690 proteins that overlapped, and Metascape analysis of these showed a very strong presence of pathways regulating RNA metabolism, cell death, and cellular response to stress (Fig. 3A). We next used an internally developed network-based visual analytics prototype to build a zero-order network from the 1690 overlapping proteins at a stringent cutoff of 0.999. Analysis of the network modules showed several clusters that correspond to different proteostasis mechanisms, suggesting a broad dysregulation of proteostasis due to α-syn aggregation (Fig. 3B). The pathways identified included chaperones and heat shock proteins, the ubiquitin-proteasome pathway, mitophagy, the autophagy-lysosomal pathway, translation, the unfolded protein response, vesicle trafficking, and RNA splicing (Fig. 3B). Many of these pathways have evidence of being linked to PD pathology [31]. This is consistent with the notion that the regulation of proteostasis is essential in preventing protein aggregation and that this tightly regulated balance of protein synthesis and clearance is disrupted during aging and neurodegeneration [32]. An in-depth analysis of proteins in the different clusters may yield novel drug targets and/or biomarkers of disease progression. Our data suggest a strong contribution of an imbalance in proteostasis towards α-syn-induced pathology in M83 neurons.
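The zero- and first-order constructions used here are straightforward to reproduce. A minimal sketch, assuming a generic tab-separated edge list with confidence scores (file names, column layout, and score handling are assumptions; CPDB's actual exports differ and carry richer metadata):

```python
# Illustrative zero-/first-order PPI network construction from seed proteins.
# The edge-list file and its format are hypothetical.
import networkx as nx

def load_ppi(path, min_confidence=0.999):
    """Build a graph from 'protA<TAB>protB<TAB>confidence' lines."""
    g = nx.Graph()
    with open(path) as fh:
        for line in fh:
            a, b, conf = line.rstrip("\n").split("\t")
            if float(conf) >= min_confidence:
                g.add_edge(a, b)
    return g

def zero_order_network(ppi, seeds):
    """Only interactions among the seeds themselves."""
    return ppi.subgraph(set(seeds) & set(ppi)).copy()

def first_order_network(ppi, seeds):
    """Seeds plus every node directly connected to a seed."""
    nodes = set(seeds) & set(ppi)
    nodes |= {nbr for s in nodes for nbr in ppi.neighbors(s)}
    return ppi.subgraph(nodes).copy()

ppi = load_ppi("ppi_edges.tsv")                            # hypothetical export
seeds = [line.strip() for line in open("day14_hits.txt")]  # hypothetical list
net = first_order_network(ppi, seeds)
print(net.number_of_nodes(), "nodes,", net.number_of_edges(), "edges")
```

Module detection on such a network (e.g., connected components or community detection) is then what surfaces clusters like the proteostasis modules described above.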
Proteomics analysis of the insoluble fraction of α-syn PFF-treated M83 neurons

MS-based approaches have been extensively used to characterize the detergent-insoluble proteome in cell lysates and tissues from animal models and human patients with different neurodegenerative diseases [12, 21, 33-35]. α-syn aggregates in wild-type primary mouse neuronal cultures treated with PFFs have been shown previously to be resistant to mild detergent treatment, like LBs isolated from human PD patient brain samples [16, 21, 36].

Fig. 2 Temporal proteomic and phospho-proteomic analysis in the total lysates of M83 neurons with α-syn aggregation. A-D α-syn PFF-mediated changes in protein expression in total lysates of M83 neurons were evaluated at 7 and 14 days post-treatment using TMT-MS. Changes were plotted on a volcano plot with log fold changes on the x-axis and negative log of p-values on the y-axis. Statistically significant changes were determined based on a false discovery rate of 0.05. Red dots indicate proteins significantly decreasing, green dots indicate proteins significantly increasing, and black dots indicate proteins that did not change. On day 7, no significant changes were noted in protein expression in proteomics (A) or phospho-proteomics (C). At day 14, a total of 135 proteins in the proteomics (B) and 633 phosphopeptides in the phospho-proteomics (D) showed statistically significant changes. E-F Enrichment analysis using Metascape was performed based on the proteomics and phospho-proteomics data from the day 14 post-PFF treatment timepoint. Processes related to RNA metabolism and microtubule organization were identified based on upregulated proteins (E), and HSF1 and actin-based processes were identified in the analysis of downregulated proteins (F). G-J Overlap evaluation with 1542 manually curated RNA binding proteins led to identification of 64 upregulated and 36 downregulated proteins. Subnetworks were created using the String database based on these upregulated (G) or downregulated (I) proteins. mRNA splicing, transport, and metabolism were identified among proteins that were upregulated (H). mRNA processing, mRNA metabolic process, and ribonucleoprotein complex biogenesis and translation were identified among the proteins that were downregulated (J).

To profile disease-specific protein changes in the detergent-insoluble extracts, we performed a label-free quantitative proteomic analysis using M83 primary neuron lysates from day 14 and day 21 post-PFF treatment, when LB-like α-syn aggregates are most pronounced. Based on the protocol described previously, we performed sequential extraction with 1% Triton X-100 and 2% SDS in M83 neuron lysates treated with PFFs or PBS for 14 ± 1 and 21 ± 1 days to isolate the insoluble fraction [12]. We confirmed by western blot analysis that pS129-α-syn is enriched upon PFF treatment (Fig. 4A) in the Triton X-100-insoluble and SDS-extractable fraction. In the PBS-treated neurons, α-syn was completely extracted in the Triton X-100 fraction. We then proceeded to analyze the samples by MS to identify proteins enriched with insoluble α-syn aggregates. Using a logFC ≥ 0.25 and padj < 0.05, 92 proteins were found to be enriched in the PFF-treated insoluble fraction by total proteomic analysis (Supplementary Table S5). Similar to the total and phospho-proteome analysis performed on the total lysates, we first performed an analysis using Metascape for the 92 proteins. Interestingly, the top identified mechanisms were involved in membrane/vesicle trafficking and selective autophagy. Identified gene sets included mitophagy, mitochondrial transport, RHO GTPases activate WASP and WAVE, and RAB geranylgeranylation (Fig. 4B).
To further identify the broad molecular signature behind these changes, we built a first-order interactome using a stringent cutoff of 0.999 in CPDB. Visualization of this network containing 1328 proteins and 3621 interactions using the internally developed network-based visual analytics tool identified modules involved in proteostasis. Pathways in this network included the mitochondrial pathway, proteins involved in the ubiquitin-proteasome pathway, vesicular trafficking, and mRNA splicing (Fig. 4C). The data from the insoluble proteome suggest changes that may also affect the solubility and re-localization of broader cellular proteins besides α-syn.

Fig. 3 Network analysis for identification of pathways changing with α-syn PFF treatment in M83 neurons. A A total of 1,690 overlapping genes in the first-order networks, including proteins and phosphopeptides changing with PFF treatment at day 14 in proteomics (at the cutoff of |logFC| > 0.5) and phospho-proteomics (at the cutoff of |logFC| > 1), were identified. Metascape analysis of these genes showed enrichment of pathways regulating RNA metabolism and cell death, and cellular response to stress. B A zero-order network was generated from the 1,690 overlapping genes using the internal network visualization tool. Pathways identified based on clustering of the 1,690 genes included various proteostasis mechanisms including splicing, ubiquitin-related, proteasome, mitochondrial genes, translation, lysosome, endocytosis, unfolded protein response, vesicle trafficking, and chaperone.

Modulation of Larp1 affects α-syn aggregation in M83 primary neurons

We were intrigued to see several RBPs that were significantly changed in our phospho-proteomics dataset. Specifically, the RBPs with phosphopeptides that were downregulated upon PFF treatment seem to regulate ribonucleoprotein complex biogenesis and translation, and some of these were also associated with stress granule assembly (G3BP1, ATXN2, TIAL1, LARP1, UBAP2L) [37, 38]. Identification of Larp1 in this cluster prompted further investigation, since Larp1 is thought to be a key regulator downstream of mTOR involved in the stability and translation of 5'-terminal oligopyrimidine (TOP) motif-containing mRNAs, which encode proteins involved in the translation apparatus such as ribosomal proteins and translation factors [39]. TOP is one of the well-characterized cis-regulatory motifs for translational control, located immediately downstream of the transcriptional start sites of mRNAs. We first tested the effect of Larp1 knockdown on α-syn aggregation in the M83 seeding model. Knockdown of Larp1 significantly increased PFF-induced pS129 α-syn aggregation (Fig. 5A and B). We also tested the effect of Larp1 knockdown in another α-syn aggregation model and saw a similar effect. In this model we measured aggregation of endogenous wild-type α-syn after seeding with recombinant PFF (Fig. 5C). Interestingly, when we probed for Larp1 in the detergent-insoluble fraction, we found enrichment of Larp1 in the control (PBS-treated) conditions. This enrichment was reduced in the lysates from M83 neurons treated with PFF, suggesting a change in the solubility of Larp1 (Fig. 5D).
A recent study reported that dephosphorylated SRSF2, an RBP involved in mRNA splicing, formed higher molecular weight detergent-insoluble oligomers [40]. Consistent with this report, we identified several phosphorylation sites on Larp1 that were significantly decreased after 14 days of PFF treatment (Table 1), which may explain the decreased Larp1 observed in the insoluble fraction. This suggests that phosphorylation of Larp1 may affect its solubility and oligomerization. The exact role of such modifications on Larp1 activity needs to be carefully examined and is beyond the remit of this paper. Previous studies have shown that TOP-containing genes not only code for the classical ribosomal and translation-related proteins but also other diverse proteins, some with lysosome-related and metabolism-related functions, suggesting a role for Larp1 beyond translation regulation. We found that 19 of the 92 insoluble proteins overlapped with predicted TOP-containing genes, suggesting broad dysfunction of these proteins in PD [41].
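A natural way to put an overlap like 19 of 92 on a statistical footing is a hypergeometric test; the sketch below uses the 5523 consistently quantified proteins mentioned earlier as the background, while the number of predicted TOP-containing genes in that background is a placeholder that would need to be taken from the actual TOP gene list [41]:

```python
# Illustrative enrichment test for the 19/92 overlap with predicted
# TOP-containing genes. K is a placeholder, not a value from the study.
from scipy.stats import hypergeom

N = 5523  # background: proteins quantified at both timepoints (see above)
K = 300   # placeholder: predicted TOP-containing genes in that background
n = 92    # proteins enriched in the PFF-treated insoluble fraction
k = 19    # of those, predicted TOP-containing

# P(overlap >= k) under random sampling without replacement
p = hypergeom.sf(k - 1, N, K, n)
print(f"hypergeometric P(X >= {k}) = {p:.3g}")
```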
Discussion

Integrated omics approaches have been extensively used to characterize in vitro and in vivo PD models and PD patient samples. It is our understanding that this is the first time that total proteomics and phospho-proteomics have been used to capture aggregation-specific changes in the M83 primary neuronal culture model. Although the progressive accumulation of aggregated α-syn in patients correlates with the appearance and progression of disease symptoms, the process linking α-syn aggregation to disease pathogenesis and neurodegeneration is still unclear. We employed quantitative proteomics and network analysis to gain insight into the molecular mechanisms that contribute to α-syn aggregation-induced changes in cortical neurons from M83 transgenic mice expressing the A53T human α-syn. Synthetic α-syn fibrils can induce a PD-like aggregation phenotype and accelerate pathology in vivo in M83 mice [11]. We found that α-syn aggregates initiated in the neurites as filamentous aggregates at day 7, and by day 14 post-seeding several condensed p62- and ubiquitin-positive aggregates were seen in the soma, resembling LBs. Extending the incubation time to 21 days did not change the morphology or the proportion of these cell body aggregates, in contrast to what was reported by Mahul-Mellier et al. using a wild-type seeding model [21]. This could be due to 3-5-fold overexpression of the human A53T transgene in neurons from M83 mice [20]. Consistent with the maximum pathology, we observed the most significant changes in the total and phospho-proteomics 14 days after PFF treatment.

In our dataset, we identified a large number of RBPs that change upon α-syn aggregation. RBPs play a key role in post-transcriptional gene regulation including splicing, RNA transport, translation, stability, metabolism, and RNA decay [42]. Our data detected a strong presence of RBPs that played a role in all these processes, suggesting a widespread dysregulation of RBPs upon α-syn aggregation. Previous studies looking for genetic modifiers of α-syn toxicity as well as direct α-syn protein interactors found that, after vesicle trafficking, proteins involved in mRNA metabolism — more specifically RBPs and ribosomal subunits — were the most identified [43-45]. It was also recently reported that α-syn directly associates with multiple mRNA-decapping proteins, thereby modulating processing bodies (P-bodies), which are membraneless organelles that function in mRNA turnover and storage. P-body homeostasis and mRNA stability were altered in PD patient-derived iPSC-neurons and in PD post-mortem samples, suggesting a direct role for α-syn in mRNA homeostasis [45]. More than 50% of RBPs in the eukaryotic proteome are expressed in the brain [46]. An interdependent network of RBPs has been hypothesized to regulate complex pathways in the CNS, and alterations in this regulatory RBP network could explain the dysfunction seen in different neurodegenerative and neuropsychiatric diseases [47]. Not surprisingly, dysregulation of RBPs has been reported in diverse neurodegenerative diseases including frontotemporal dementia, amyotrophic lateral sclerosis, Huntington's Disease (HD), and Alzheimer's Disease (AD). RBPs were enriched in modules that correlated with AD pathology, while aberrant interaction of mutant huntingtin oligomers with RBPs has also been reported [48-50]. In AD, RBPs have been found in the detergent-insoluble fraction and to co-aggregate with the microtubule-associated protein tau, the pathological component of neurofibrillary tangles [33, 51]. Tau interacts with RBPs and promotes stress granule (SG) formation, concomitantly accelerating tau aggregation [52, 53]. We identified components of SGs in our dataset, including Larp1. The formation of SGs represents cells' ability to cope with stress and regulate cellular translation. We believe that α-syn aggregation induces a broad change to the cellular proteome, evident from the enrichment of multiple proteostasis mechanisms. This suggests the contribution of a common pathway in related neurodegenerative diseases and offers the potential for targeting multiple diseases using a common therapeutic strategy. It will be important, however, to see if specific RBPs regulating select RNA mechanisms are modulated in the different diseases. A recent study using RNA pulldown and MS identified disease subtype-specific variations in the RNA-binding proteome of sporadic and progressive AD [54].

Table 1 Phosphorylation sites on Larp1 that were significantly decreased after 14 days of PFF treatment in the total lysate samples from M83 neurons
Our data add to the extensive and carefully executed data sets looking at changes in the insoluble proteome over time in PFF-induced endogenous mouse α-syn aggregation and Lewy body-like inclusion formation in wild-type primary mouse neuronal cultures [16, 21]. It is interesting to note that the insoluble proteomic changes we observed seem to be consistent with changes previously reported for PD — e.g., mitophagy, autophagy, vesicle-mediated transport [55]. These proteins might be enriched in the insoluble fraction by being sequestered into aggregates away from their normal location or due to a general upregulation and aberrant interaction with aggregates [24]. We found several overlaps of the insoluble proteins with the dataset reported by Mahul-Mellier et al., suggesting common aggregation-mediated molecular changes [21]. PPI networks can be used to generate disease-specific maps and identify signatures that contribute to a particular phenotype. We built a broad PPI interactome using the pathology-specific total and phospho-proteomic changes and identified several clusters involving mechanisms regulating cellular proteostasis. More recently, Haenig et al. used PPI-based interactome mapping to provide a network of neurodegenerative disease proteins [56]. This effort identified both distinct and interconnected disease modules, suggesting both disease-specific and common mechanisms contributing to disease pathologies. Deeper understanding of these changes will provide novel therapeutic opportunities for targeting PD and related neurodegenerative diseases.

Preparation of primary neuron cultures

Primary cortical neurons were prepared from M83 heterozygous mice (B6;C3-Tg(Prnp-SNCA*A53T)83Vle/J; JAX number 004479). All animal experimental procedures were reviewed and approved by the AbbVie Institutional Animal Care and Use Committee (IACUC). Day 15.5 ± 0.5 embryos were removed from the pregnant female and placed into a 10 cm dish with Ca- and Mg-free HBSS. Individual embryos were placed in a well of a 6-well plate containing 2-3 ml of hibernation media (2% B-27 Supplement, 48% Hibernate-E media, 20% L15 media, 30% CO2-independent media) on ice. Using a dissection microscope, the hippocampus and whole cortex were isolated and placed in a 15 ml conical tube with 10 ml of HBSS. HBSS was then aspirated, and neuronal isolation enzyme diluted in Ca/Mg-free HBSS was added; the tissue was incubated at 30°C for 30 min with gentle inversions every 5 min. Cells were triturated, washed with DMEM/10% FBS, and gently passed through a 70 µm cell strainer. Cells were then counted and plated using plating media (DMEM/10% FBS/1X Pen-Strep). 3-6 h after plating, media was completely removed and replaced with culture media (2% B27 plus 1% GlutaMAX in Neurobasal Plus media).
Preparation of α-syn pre-formed fibrils (PFF)

Recombinant human α-syn PFFs were generated from monomeric starting material following an established protocol [57]. Briefly, recombinant human α-syn (1-140) at 6 mg/ml in 10 mM Tris, 50 mM NaCl, pH 7.6 was thawed on ice and then centrifuged at 15,000 x g for 10 min at 4°C to pellet out any preexisting higher molecular weight species. The supernatant was then collected, α-syn concentration measured, and conditions adjusted to 5 mg/ml α-syn in 10 mM Tris, 150 mM NaCl, pH 7.6. Then, 0.5 ml of α-syn was aliquoted into individual, sealed, 1.7 ml Eppendorf tubes and placed in an Eppendorf ThermoMixer at 37°C, 1000 RPM for seven days. Successful fibrillization was confirmed by thioflavin T binding assay, sedimentation assay, and imaging by negative-staining transmission electron microscopy. α-syn fibrils were then aliquoted and stored at -80°C until use.

Primary neuron culture treatment with human α-syn pre-formed fibrils (PFF) or PBS

Primary neurons were treated with the indicated concentration of PFFs (0.125-1 µg/ml for dose response and 1 µg/ml for all subsequent proteomic samples) one time on DIV 7. Aliquots (5 µL) of 5 mg/ml samples were thawed at room temperature, and 245 µL of PBS was added and mixed by gentle pipetting to bring to a concentration of 100 µg/ml. PFFs and PBS were then sonicated using a water bath sonicator and diluted to 2x in culture media (2% B27 plus 1% GlutaMAX in Neurobasal Plus media). 50% of culture media was removed from each well and replenished with either the 2x PFF or PBS culture media, and plates were returned to the incubator. Media was replenished every 3-4 days until cells were collected. For the time course (see Fig. S1A), cells treated for 7 days with PFF were collected at DIV 14, cells treated for 14 days were collected at DIV 21, and cells treated for 21 days were collected at DIV 28. All cells received a one-time addition of PFF (the 7-day, 14-day, and 21-day treatment nomenclature corresponds to how many days after the one-time PFF treatment cells were collected). Primary mouse M83 neurons were treated with 1 µM Accell Larp1 siRNA SMARTpool or Accell non-targeting siRNA pool (Dharmacon) at DIV 5. α-syn PFFs were added at DIV 8 for 2 weeks at 0.5 µg/ml. For endpoints, one plate was fixed with methanol and stained for pS129 α-syn aggregates, MAP2, and nuclei number, and the second plate was used for qPCR analysis to measure gene knockdown efficiency.
Immunofluorescence protocol

After treatment, neuronal media was gently removed and cells were incubated with ice-cold methanol for 20 min at -20°C. Cells were washed 3x with dPBS (Thermo Fisher Scientific cat# 14190144), and blocking buffer (4% Albumin Bovine Fraction V (BSA; Research Products International), 0.2% Triton in dPBS) was added to each well for 1 h at room temperature. After sufficient blocking, blocking buffer was removed, and primary antibodies (α-syn pS129, Cell Signaling Technology 23706, 1:3000; MAP2, Abcam ab5392, 1:2500) were diluted in blocking buffer and incubated overnight on an orbital shaker at 4°C. After primary antibody incubation, cells were washed 3x with dPBS. Alexa-fluorophore-conjugated secondary antibodies were added to blocking buffer and filtered through a Spin-X column centrifuged at 5000 RPM for 5 min. Filtered secondary antibodies diluted in blocking buffer were added to each well. Cells were incubated in secondary antibody at room temperature in the dark on an orbital shaker for two hours. Cells were washed with dPBS 3x, and nuclear stain diluted in dPBS was added for 15 min at room temperature. Cells were then washed with dPBS 3x and stored in dPBS at 4°C until imaging.

Isolation of insoluble fraction

This protocol was originally published by Volpicelli-Daley [12]. Briefly, primary neurons plated at a density of 1,000,000 cells/well in 6-well dishes were washed twice with dPBS. Cells were lysed in 250 µL 1% Triton in TBS (1% Triton X-100, 50 mM Tris, 150 mM NaCl, pH 7.4) with 1X Halt protease and phosphatase inhibitor. Using a cell scraper, cells were collected and placed in tubes on ice. Cells were sonicated on ice ten times at a pulse of 0.5 s on, 0.1 s off with 20% amplitude. After sonication, tubes remained on ice for 30 min. Cells were centrifuged at 100,000 x g for 30 min at 4°C. The supernatant was collected from each sample and saved as the soluble fraction. A total of 250 µL of 1% Triton in TBS was added to each pellet, which was subsequently sonicated ten times at a pulse of 0.5 s on, 0.1 s off with 20% amplitude on ice. The sonicated pellets were then centrifuged at 100,000 x g for 30 min at 4°C. The supernatant was discarded, 80 µL of 2% SDS buffer (2% SDS, 50 mM Tris, 150 mM NaCl, pH 7.4) was added, and each pellet was resuspended. Samples were sonicated 15 times at a 0.5 s on, 0.1 s off pulse with 30% amplitude in a water bath sonicator. Samples were then immediately frozen at -80°C until further processing.

Western blot

Protein concentrations were measured by micro-BCA assay (Thermo Fisher Scientific 23235) and samples were normalized. Samples were run on the ProteinSimple Wes instrument according to the manufacturer's instructions. Depending on the size of the protein, either the 2-40 kDa, 12-230 kDa, or the 66-440 kDa Separation Module was used along with either the Anti-Rabbit or Anti-Mouse Detection Modules. Primary antibodies used were pS129 α-syn (CST 23706) and Larp1 (CST 14763S). Briefly, Fluorescent Master Mix was added to diluted samples, and samples were then heated at 95°C for 5 min before loading into the cassette along with the antibodies, chemiluminescent substrate, and blocking reagent. The Wes instrument was run on default settings. Area under the curve (AUC) was quantitated from the resulting electropherograms. Samples were normalized to loading controls cofilin (CST 5175S), beta-actin (LI-COR 926-42210), or vinculin (Abcam ab129002), and data graphed as protein/loading control.
M83 total and phospho-proteomic sample prep

M83 mouse neurons were lysed in a buffer of 8 M urea, 10 mM TCEP, 40 mM chloroacetamide, and 50 mM Tris supplemented with protease/phosphatase inhibitors. Protein concentrations for all treatments and timepoints were determined by protein BCA (Thermo Fisher Scientific, #23225), and equimolar protein concentrations were aliquoted, diluted to 1.5 M urea with 100 mM Tris, and digested with a trypsin/Lys-C combination enzyme mixture (Promega, V5071) at a 1:50 enzyme/substrate ratio overnight. Each sample was acidified to 0.1% TFA, desalted with 100 mg Strata-X polymeric reversed-phase desalting columns (Phenomenex, #8B-S100-ECH), and dried down via a CentriVap. Each timepoint's 10 sample treatments were labeled with their own TMT 11-plex isobaric chemical tag set (Thermo Fisher Scientific, #A34808), reserving the 11th channel for a "pooled" channel. This pooled channel consisted of a mix of small aliquots from the study's 30 total samples, which served as a reference for normalization and cross-comparison of relative differences across the three TMT 11-plex sets. Once each set of TMT 11-plex peptides was mixed, quenched, and desalted a second time with Strata-X columns, each set was individually resuspended in 80% ACN, 0.1% TFA and enriched for phosphopeptides using Qiagen Ni-NTA beads as described below. The flowthrough from Fe-IMAC enrichment contained non-phosphopeptides for proteome quantification. The three enriched phosphopeptide sets and three flowthrough proteome samples were each fractionated using a Waters Acquity UPLC. Peptides were separated using a linear gradient starting with aqueous 20 mM ammonium formate and increasing up to 20 mM ammonium formate in 80% ACN, using a 2.1 × 100 mm UPLC column packed with 1.7 µm BEH C18 material (#186002352). Both phosphopeptide and proteome samples were collected into 15 total fractions, dried down via a CentriVap, and resuspended in 0.1% formic acid.

Phosphopeptide enrichment

Iron immobilized metal affinity chromatography (Fe-IMAC) was used to enrich peptides phosphorylated on serine, threonine, and tyrosine residues in the samples. First, samples were dried down and resuspended in 80% ACN and 0.15% TFA to prepare them for phospho-enrichment. Meanwhile, Ni-NTA magnetic agarose beads (Qiagen #36113) were cleaned with 40 mM EDTA and washed with water. Fe3+ was then chelated onto the NTA magnetic agarose beads for 30 min at room temperature (RT) and 1350 RPM in a ThermoMixer. Beads were then washed in the same buffer that the samples were resuspended in (80% ACN and 0.15% TFA) and aliquoted evenly across all samples (500 µL beads per mg of sample). Fe-IMAC beads were incubated with samples for 30 min in a ThermoMixer at RT and 1350 RPM. Phosphopeptides were eluted thereafter with 50% ACN, 0.7% NH4OH solution and immediately acidified with 4% FA. The eluate was dried down, resuspended in 0.1% formic acid, and injected for MS analysis.
M83 insoluble aggregate sample preparation

Eighteen insoluble aggregate samples from PFF-treated neurons and 18 samples from PBS-treated neurons were processed for proteomics. Detergent clean-up and sample digestion were done with single-pot, solid-phase-enhanced sample prep (SP3) and 96-well plate format robot-assisted sample handling (Integra Assist Plus). First, 0.5 M TCEP (tris(2-carboxyethyl)phosphine) and 0.5 M CAA (chloroacetamide) were added to each sample. Samples were reduced and alkylated in a covered shaker (Eppendorf ThermoMixer) at 1000 RPM for 30 min at 37°C. Meanwhile, equal amounts of Sera-Mag SpeedBeads (GE Healthcare, #45152105050250) and Sera-Mag carboxylate-modified magnetic particles (GE Healthcare, #65152105050250) were combined, washed with water, and reconstituted with water in equal volume (SP3 beads). The SP3 beads (10 µL), 0.5 M DTT (dithiothreitol), and 80% ethanol were added to each sample and mixed on the ThermoMixer at 1200 RPM for 10 min at room temperature to facilitate binding. The sample-bead mix was then washed three times with 80% ethanol. Trypsin/Lys-C combination enzyme mixture was added at a 1:50 enzyme/substrate ratio for an overnight digestion (1000 RPM, 37°C, ThermoMixer). The next day, samples were acidified to pH 2 for C18 column-based sample clean-up. Thereafter, samples were dried down and resuspended in 0.1% formic acid. 500 ng of sample was injected for MS analysis and the remainder used for phospho-enrichment. Insoluble aggregate samples were enriched for phosphopeptides as described above, and both insoluble proteome peptides and phosphopeptides were injected for MS analysis and quantified in label-free fashion.

LCMS methods (total, phospho, and insoluble proteome)

Each TMT 11-plex's 15 phosphopeptide and 15 proteome fractions were analyzed using a 120 min nano-LC MS data-dependent acquisition (DDA) method. Peptides were separated using a Thermo Fisher Scientific Easy nanoLC 1200 with solvents 0.1% formic acid in water and 0.1% formic acid in 80% ACN. Peptides were separated linearly from 4% ACN to 45% ACN using an Easy Spray ES902 analytical column. Phosphopeptide and proteome fractions were analyzed using a Thermo Fisher Scientific Orbitrap Exploris 480 mass spectrometer, with survey MS scans collected at 120,000 resolving power and a 350-1800 m/z scan range. A standard (100%) MS1 AGC target was selected with maximum injection times automatically calculated within a 2 s MS cycle time. Charge states 2-8 were considered for fragmentation with a 25 s exclusion duration. Peptides were selected with a 0.8 m/z quadrupole isolation window and fragmented with a 36% normalized collision energy. Orbitrap MS/MS resolving power was set to 45,000, with standard (100%) AGC targets. Phosphopeptide fractions were analyzed with a maximum injection time of 120 ms, while maximum injection times for proteome fractions were automatically calculated based on available cycle time. Insoluble aggregate proteomics and phospho-proteomics data were collected using an Evosep nano-LC running 44 min 30 SPD standard gradient methods and connected to a Thermo Fisher Scientific Lumos MS. The Lumos was run at 120,000 resolving power with a 350-1800 m/z scan range. A standard (100%) MS1 AGC target was selected with maximum injection times automatically calculated within a 3 s MS cycle time. Charge states 2-8 were considered for fragmentation with a 30 s exclusion duration. Peptides were selected with a 0.8 m/z quadrupole isolation window and fragmented with a 30% normalized collision
energy. Orbitrap MS/MS resolving power was set to 7,500, with standard (100%) AGC targets. Phosphopeptide fractions were analyzed with a maximum injection time of 120 ms, while maximum injection times for proteome fractions were automatically calculated based on available cycle time.

Proteomics data analysis

All TMT data and insoluble aggregate proteomics data were searched using MaxQuant, set with either TMT 11-plex chemical modifications or no modifications (insoluble aggregate phospho/proteomics). Enzyme digestion was specified as tryptic with a maximum of 2 missed cleavages. Chemical modifications considered for data searching included a fixed modification of carbamidomethylation of cysteines and variable modifications of methionine oxidation, acetylation of protein N-termini, and phosphorylation of serine, threonine, and tyrosine residues (for phospho-proteome MS data searches). All data were searched against a target-decoy UniProt mouse proteome database containing both canonical sequences and protein isoform sequences, downloaded on September 20, 2020. Reporter ion MS2 spectra were used for TMT quantification with a precursor ion mass tolerance of 4.5 ppm and a fragment ion mass tolerance of 20 ppm. Whether for TMT or label-free quantification studies, protein group and phosphosite quantitation MaxQuant tables were exported and filtered to consider only quantifiable protein groups or phosphosites for which >50% of measurements were observed across all samples. Of those protein groups or phosphorylation sites that remained, missing values were imputed from the normal distribution of each sample's protein or phosphosite quantitation. Protein group or phosphosite intensities from TMT studies were summed and median-normalized to account for TMT mixing differences across the samples. Insoluble aggregate protein group LFQ and phosphosite intensities were also normalized by peptide amount loaded for LCMS.

Proteomics differential abundance

From the quantile-normalized intensities for PFF- vs. PBS-treated samples within each timepoint/TMT run, additional downstream quality checks and analyses were performed on each of the TMT runs separately (each TMT run contained samples from a single timepoint: 7 or 14 days of treatment). For each, we removed proteins only identified by site, potential contaminants, and reverse proteins. Of the remaining proteins, we kept proteins with at least one unique peptide and more than one peptide. We removed the pooled sample from each dataset and replaced all the zero normalized intensity values with 'NA' values. We excluded proteins if they contained missing values in >20% of a timepoint's conditions. For subsequent differential expression analyses, we used the Limma package in R. No covariates were used in the model. Within the differential expression results, we considered proteins with an adjusted p-value < 0.05 statistically significant.
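A compressed sketch of the filtering, imputation, and normalization steps described above (the layout of the intensity table and the imputation parameters are assumptions; the text states only that missing values were drawn from each sample's normal distribution, without specifying a down-shift as is done in some pipelines):

```python
# Illustrative post-search processing: filter to >50% observed values,
# impute missing values from each sample's fitted normal distribution,
# then median-normalize across samples. Parameter choices are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def process(intensities: pd.DataFrame) -> pd.DataFrame:
    """Rows = protein groups or phosphosites; columns = samples (log2 scale)."""
    # Keep features observed in more than 50% of samples.
    kept = intensities[intensities.notna().mean(axis=1) > 0.5].copy()
    # Impute each sample's missing values from that sample's distribution.
    for col in kept.columns:
        observed = kept[col].dropna()
        missing = kept[col].isna()
        kept.loc[missing, col] = rng.normal(observed.mean(), observed.std(),
                                            size=missing.sum())
    # Median-normalize to absorb loading/mixing differences across samples.
    medians = kept.median(axis=0)
    return kept - medians + medians.mean()

# Usage: table = pd.read_csv("log2_intensities.csv", index_col=0); process(table)
```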
Phospho-proteomics analysis methods

From the quantile-normalized intensities for PFF- vs. PBS-treated samples within each timepoint/TMT run, additional downstream quality checks and analyses were performed on each of the TMT runs separately (each TMT run contained samples from a single timepoint: 7 or 14 days of treatment). For each, we removed the pooled sample from each dataset. We removed phosphopeptides whose leading proteins were potential contaminants. We then replaced all the zero quantile-normalized intensity values with 'NA' values and excluded proteins if they contained missing values in >20% of a timepoint's conditions. Based on PCAs excluding proteins with missing values, we detected and removed one outlier (M83_PFF_Day07_2) from downstream analyses. For subsequent differential expression analyses, we used the limma package in R. No covariates were used in the model. Within the limma results, we considered phosphopeptides with an adjusted p-value < 0.05 statistically significant.

Fig. 4 Proteomic analysis in the insoluble fraction of α-syn PFF-treated M83 neurons. A M83 neurons treated with α-syn PFFs or PBS for 14 ± 1 and 21 ± 1 days were sequentially extracted with 1% Triton X-100 followed by 2% SDS. Western blot analysis showed significant enrichment of pS129 α-syn in the Triton X-100 insoluble fraction from PFF-treated neuron samples. With PBS treatment, α-syn was extracted in the 1% Triton X-100 fraction. B Insoluble fractions analyzed by mass spectrometry identified 92 proteins that were changed in the total proteome with PFF treatment at a logFC cutoff of ≥ 0.25. An enrichment analysis using Metascape for these 92 proteins was performed, which identified membrane/vesicle trafficking, autophagy, endocytosis, and exocytosis mechanisms. C Based on a stringent cutoff of 0.999 in CPDB, a first-order network of these 92 proteins including 1,640 proteins and 4,738 interactors was generated. Based on this, a zero-order network was established using the internally developed network-based visual analytics tool to identify enriched pathways, which are shown in the figure. The most enriched pathways in this network included the mitochondrial pathway, translation, ubiquitin-related, lysosome, unfolded protein response, vesicle trafficking, and mRNA splicing.

Fig. 5 Modulation of Larp1 affects α-syn aggregation in M83 primary neurons. A-B The effect of Larp1 knockdown was evaluated on α-syn aggregation in the M83 seeding model. A Shows representative images with non-targeting control siRNA and LARP1 siRNA treatment in M83 neurons stained for pS129 α-syn (green), MAP2 (purple), and nuclei with Hoechst (blue). Scale bars, 50 μm. B Shows quantification using high-content image analysis, which indicated that knockdown of Larp1 significantly increased PFF-induced pS129 α-syn aggregation (N = 6 replicates). Data are mean ± SD. **p = 0.0066 (t-test). C Quantification of PFF-induced endogenous mouse α-syn aggregation in CD1 neurons also suggested that knockdown of Larp1 resulted in a significant increase in aggregation compared to non-targeting control siRNA (N = 6 replicates). Data are mean ± SD. ****p < 0.0001 (t-test). D WES analysis with Triton X-100 soluble and insoluble fractions isolated from M83 mice with either PBS or PFF treatment showed enrichment of Larp1 in the PBS-treated samples but less enrichment in the PFF-treated samples.
Identification of Quality Indicators in Teacher Education Program

The present research aimed to investigate teachers' perceptions of quality indicators in teacher education programs. The objectives of the study were to explore teachers' perceptions of quality indicators in teacher education programs and to find out differences of opinion about quality indicators among university teachers with respect to their teaching experience. The research was quantitative in nature, and a survey research design was used. The population of the study comprised all university teachers teaching in teacher education programs in universities of Lahore. The sample was drawn using simple random sampling, and a questionnaire was used for data collection. Descriptive and inferential statistics were applied for the analysis of the data. The major findings of the study showed positive perceptions of quality indicators in teacher education programs; teachers were satisfied with the quality indicators of teacher education programs.

Introduction

Quality education is one of the most sought-after objectives all over the globe. One of the six objectives illustrated by the World Education Forum relates to the improvement of "all aspects of the quality of education" in order to achieve recognized learning outcomes (UNESCO, 2000). As different variables, including the curriculum, delivery of content, learning environment, supervision, and administration of academic facilities, contribute to the quality of education, the central importance of the teacher cannot be denied. The competence and willingness of teachers determine the heights to which an educational system can rise (Iqbal, 1996). Paliakoff and Schwartzbeck (2001) observe that teacher quality is the most critical component of teaching and that it directly affects student learning. The literature suggests that teacher quality depends on the educational qualifications of teachers and the quality of pre-service and in-service teacher education (Aga Khan Foundation, 1998; Sharma, 1993). Teacher education therefore assumes great importance in achieving the goal of quality education. In Pakistan, the quality of teacher education has been questioned and criticized from time to time by the concerned bodies. In order to meet the growing need for teachers at various levels, the teacher education system has undergone significant quantitative expansion, yet the quality of teacher education has been neglected and compromised. Commenting on the present state of teacher education in Pakistan, the National Education Policy 1998-2010 observes that the qualitative dimension of teacher education programs has received marginal attention, resulting in the mass production of teachers with a shallow understanding of both the content and methodology of education (Government of Pakistan, 1998, p. 47). A recent report published by UNESCO about teacher education in Pakistan points out that the absence of quality must be tackled urgently, in a setting where teacher-student interactions are mediated by sound management as well as by an enabling policy environment (UNESCO, 2008, p. 12). An assessment of the current state of teacher education quality is urgently required in order to reform the teacher education sector in Pakistan.
Quality indicators are general statements made so as to ensure comprehensive coverage of the most relevant areas of quality in a teacher education institution (National Assessment and Accreditation Council [NAAC], 2007, p. 3). Yackulic and Noonan (2001) hold that indicators in teacher education reflect the major components of a teacher education program. Indicators may perform major roles, for example, describing the current situation, evaluating pre-determined targets, giving continuous feedback about progress towards the achievement of targets, and identifying factors that contributed to the achievement of results (European Commission, 2001). Chande (2006) holds that performance indicators may be of three sorts: quantitative, narrative (qualitative), and a combination of both. It is hard to define the quality of education precisely, chiefly due to the complex nature of the teaching-learning process and the large number of stakeholders associated with schooling (Mirza, 2003). Different researchers have identified various determinants of education quality. Cheng and Cheung (1997) characterize the quality of education as a set of elements comprising the input, process, and output of the education system. Based on an engineering model of education, Adams' (1993) framework of quality consists of an institution's reputation, resources/inputs, process, content, outputs/outcomes, and value added. According to Santos (2007), a conventional school quality model is characterized by test scores and other inputs, including learner family background, school characteristics, teacher characteristics, and students' innate ability. The factors of education quality identified by Thaung (2008) include students, teachers, content, teaching-learning processes, learning environments, and outcomes. In fact, the estimation of such models is yet to be discussed and analyzed in the academic literature. Another significant model of education quality has been given by UNICEF (2000), which includes five dimensions: quality learners, quality learning environments, quality content, quality processes, and quality outcomes. Memon (2003) contends that this framework appears to be more feasible and relevant if explicit criteria are charted to evaluate the quality of education. Quality in existing teacher education programs is currently being debated in many countries and at many levels (Hoban, 2004). Like the quality of education, quality in teacher education cannot be easily defined, as there are various perspectives on what effective teacher education programs are. Different conceptions of the quality of teacher preparation are reflected in the range of reforms being attempted in various countries (Calderhead, 2001). There are several common issues that may be markers of the low quality of teacher education programs across the globe.
Tom (1997) identified ten issues that are problematic in many traditional teacher education programs: unclear goals; fragmented courses that lack relevance and coherence; incongruity between courses from different faculties; discontinuities between university courses and school practice; the low status of teacher educators, even within a faculty of education; autonomous departmental structures in faculties of education that promote a lack of collaboration; the unclear career path of teachers and their role in practicum supervision; too many stakeholders involved in teacher education; a lack of planning for change processes; and the unresponsiveness of teacher education to one-off change. Hoban (2004) adds an eleventh point: a lack of communication among institutions. In fact, teacher education in Pakistan is experiencing major issues that hinder its overall performance and effectiveness. The critical issues include: lack of funding and resources; inadequately equipped training institutions; short training periods; undue emphasis on quantitative expansion; the limited scope of the curriculum; imbalance between general and professional courses; over-emphasis on theory instead of practice; no coordination between education departments and training institutions; inadequate quality of instruction; lack of in-service training for teacher educators; failure to implement valuable changes; unclear objectives; low quality of textbooks; a flawed examination system; and a lack of supervision and accountability, research, and evaluation of teacher training programs (Aly, 2006; Iqbal, 2000). In order to reform the teacher education sector in Pakistan, there is a dire need to evaluate the effectiveness of existing teacher training programs.

Significance of Study

As teacher training plays an important role in enhancing teacher quality, it is very important for teachers in meeting the current and future needs of teaching. It is important to bring quality into the educational programs of teachers. This study focused on identifying the quality indicators in teacher education programs. The study is significant in that it generated primary data about quality assurance in teacher education. The findings of this study have implications for the HEC, the Accreditation Council for Teacher Education, and the management of teacher education institutions (TEIs) by highlighting the important aspects that may be focused on for quality improvement in teacher education programs. The suggested quality indicators may also be used for assessing the quality of academic programs at TEIs and other institutions of higher education.

Research Objectives

The objectives of the study were to:
1. Explore the perceptions of teachers regarding quality indicators in teacher education programs.
2. Find out the difference between the perceptions of lecturers and assistant professors regarding quality indicators in teacher education programs.

Research Methodology

The research was quantitative, and a survey design was used. The population of the study comprised all the teachers working in universities of Lahore; only those public universities in which the subject of Education is taught were included. There are three such public universities in Lahore, i.e., the University of the Punjab, the University of Education, and Lahore College for Women University. The sample of the study was drawn from the target population; a total of 108 teachers responded. A simple random technique was used for the selection of the sample.
A self-developed questionnaire, constructed by the researchers after reviewing the literature, was used to gather teachers' opinions on the quality indicators rated most important by teachers. Items were constructed on a five-point Likert scale for this purpose. The validity of the questionnaire was ensured through expert opinion, and reliability was measured by Cronbach's Alpha; the scale showed internal consistency, with a Cronbach's Alpha coefficient of .785. The researchers personally visited each institution; the teachers were approached in their respective classes and departments, and the confidentiality of the data was ensured by taking the consent of the respondents. The collected data were analyzed through descriptive and inferential statistics.

Data Analysis

The detail of the data analysis, summarized from the statistical tables, is given below. The means of the statements about quality indicators in teacher education programs promoted at the university level range from M = 3.57 to 3.75 (SD = .75 to 1.00), exceeding the scale mean, indicating that the majority of participants are satisfied; thus, they agree about the factor teaching instructions. The means of the statements for the factor learning environment range from M = 3.60 to 3.74 (SD = .93 to 1.08), with a total of M = 3.68 (SD = .66), again exceeding the scale mean and indicating that the majority of participants are satisfied and agree about this factor. Regarding group differences: there is no significant difference regarding teachers having opportunities to learn how to use technology to enhance instruction, nor regarding teachers at the university learning how to use data to assess student learning needs; there is a significant difference regarding making decisions about professional development based on research that shows evidence of improved student performance; overall, there is a significant difference regarding the factor teaching instructions. There is no significant difference regarding instruction and assessment meeting the needs of diverse learners; there is a significant difference regarding teachers' prior knowledge and experience being taken into consideration when designing staff development, and regarding professional development promoting deep understanding of the content taught in class; there is a significant difference regarding the factor developing knowledge. There is no significant difference regarding teachers observing each other's classroom instruction as one way to improve teaching, nor regarding creative ways to expand human and material resources; there is a significant difference regarding university leaders encouraging shared responsibility to achieve university goals. There is no significant difference regarding the focus on creating positive relationships between teachers and students.
There is also no significant difference regarding university professional development helping teachers learn about effective student assessment techniques, or regarding administrators engaging teachers in conversations about instruction and student learning to improve teaching standards and relationships.

Conclusion

The present research aimed to investigate teachers' perceptions of quality indicators in teacher education programs. It was concluded that the scale has high internal consistency. The majority of participants agreed about the factors professional development, teaching instructions, developing knowledge, learning environment, and relationships. There is a significant difference regarding professional development being part of the university improvement plan, and regarding teachers having opportunities to practice new skills gained during staff development. There is no significant difference regarding the faculty learning about effective ways to work together or regarding teachers being provided opportunities to gain a deep understanding of the subjects they teach; overall, there is a significant difference regarding the factor professional development. There is no significant difference regarding teachers having opportunities to learn how to use technology to enhance instruction, nor regarding teachers learning how to use data to assess student learning needs. There is a significant difference regarding making decisions about professional development based on research that shows evidence of improved student performance, and regarding the factor teaching instructions. There is no significant difference regarding instruction and assessment meeting the needs of diverse learners. There is a significant difference regarding teachers' prior knowledge and experience being taken into consideration when designing staff development, regarding professional development promoting deep understanding of the content taught in class, and regarding the factor developing knowledge. There is no significant difference regarding the focus on creating positive relationships between teachers and students, regarding university professional development helping teachers learn about effective student assessment techniques, or regarding administrators engaging teachers in conversations about instruction and student learning to improve teaching standards and relationships.

Recommendations

On the basis of these conclusions, the following recommendations are made. 1. Since the concept of quality indicators is relatively new in developing countries like ours, a number of programs, seminars, workshops, and conferences should be planned at district, division, and provincial levels to raise awareness of the importance of quality indicators. 2. For the purpose of comparison, similar research studies should be conducted to gain information about quality indicators of teachers in the public and private sectors. This will not only help bring qualitative changes in teaching but will also create an atmosphere of competition between public- and private-sector institutions.
3. Identical research studies on primary-, secondary-, and tertiary-level teachers are recommended in future, so that teachers at all levels may be prepared keeping in consideration the importance of quality indicators. 4. For a better understanding of quality indicators, a number of training programs, especially for teachers working in rural areas and for female teachers, should be arranged at tehsil and district levels, so that more teachers may participate and their professional competence, through emotional intelligence, may be enhanced. 5. Government, policy makers, and curriculum developers should give due consideration to the concept of instructional behavior, so that students and teachers may get more benefit from the teaching-learning process in the form of success.
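As a purely illustrative sketch (not the authors' code) of the analyses described in the methodology and data analysis sections, the reliability check and group comparisons could be run in R along the following lines; the objects items (the Likert-scale item responses) and designation (lecturer vs. assistant professor) are hypothetical names.

library(psych)                                   # provides alpha() for reliability

# items: data frame of five-point Likert responses, one column per item
# designation: factor with levels "Lecturer" and "Assistant Professor"
rel <- psych::alpha(items)                       # Cronbach's Alpha for internal consistency
rel$total$raw_alpha                              # reported in the paper as .785

item_means <- colMeans(items, na.rm = TRUE)      # descriptive statistics: M per item
item_sds <- apply(items, 2, sd, na.rm = TRUE)    # descriptive statistics: SD per item

factor_score <- rowMeans(items)                  # composite score across a factor's items
t.test(factor_score ~ designation)               # inferential comparison between groups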
Developing the Integrated Marketing Communication (IMC) through Social Media (SM): The Modern Marketing Communication Approach

The increased usage of social media has forced brands to integrate social media into their marketing communication channels; this has become the need of the hour, as it determines overall brand identity, brand image, and company performance in present-day marketing competition. This research aimed to track the evolution and advancement of the IMC concept and how it has reformed the way of marketing communications. Moreover, the study highlights the importance of social media and how it can influence consumer behavior in a substantial way. The study developed a theoretical framework through a systematic review that serves to integrate the existing conceptual framework of IMC with social media (SM), also called consumer-generated media (CGM), and offers implications for understanding this manifestation as a tool of augmentation for marketing practice. The present study reviews and explains the liaison between social media/consumer-generated media and IMC through enhanced IMC outcomes in the modern-day marketing communication approach. The findings of the study serve as a springboard for future research and applications in the field of the marketing mix, in order to build strong foundations of the brand, physically as well as virtually, in the mind of customers.

Introduction

In recent years, integrated marketing communication (IMC) has been dominating and influencing companies' communications and marketing strategies. It has been successful for companies in terms of brand appeal (Gurău, 2008), brand equity (Šerić, 2017), and brand performance (Luxton et al., 2015, 2017). However, the availability of social media has reshaped IMC, as it offers new channels and methods of communication with consumers (Gordon-Isasi et al., 2021), and it allows consumers to fully utilize this medium, which is why it is also named consumer-generated media (CGM). Companies are now actively engaged with customers through social media platforms (Aslam & de Luna, 2021), as these allow two-way communication (Hudson et al., 2016). Social media allows customer interactions, collaboration, knowledge, and information sharing related to customers' preferences to support brands (Carlson et al., 2018). Hence, social media has revolutionized and reinvented modern IMC methods and strategies. The present study relates SM and IMC, as modern-day IMC seems incomplete without SM promotion. SM has reshaped traditional IMC and has helped to develop a trilateral relationship between company and consumers: company to consumer, consumer to consumer, and consumer to company. It is a win-win situation for both the company and the consumers, as it removes confusion and clarifies the market offerings in the form of products and services. But the opposite can also occur, as consumers can give negative ratings and remarks to the brand. The emergence of SM, or so-called CGM, in the virtual and real world has changed the tools and strategies for communication and the approach toward the consumer. Marketing managers have included social media when developing and executing IMC strategies in the promotion mix for a customer-focused promotional message. This shift in control over information is radically reshaping the perceptions people develop of a brand or company (Aslam et al., 2020).
Consumers prefer to network and create groups on various online media platforms that share common preferences, interests, and desires. Hence, SM has paved a new way to augment and amplify IMC strategies to include all forms of SM as an inclusive and potent tool in designing modern-day marketing communications. The internet-enabled mobile phone revolution, commonly known as the wireless world, has made communications easy to access, waiting only for the movement of one's fingers (Arif et al., 2016). Consequently, the present study aims to integrate concepts from the promotion mix, psychology, consumer behavior, business management, and marketing practice. The authors established a conceptual framework to facilitate the understanding of IMC development through SM, and subsequently provide a set of opinions. Hulland (2020), Jaakkola (2020), and Mokhtarian and Cao (2008) note that non-empirical studies suffer from the lack of commonly accepted samples, so the authors followed the approach suggested for conceptual papers in the marketing domain by considering conceptual papers not just as a means to take stock but as a way to break new ground on which to build a new and enhanced conceptualization (Becker & Jaakkola, 2020; Jaakkola, 2020). With reference to marketing communication and promotion, the study is designed in three parts: the first part presents IMC and its main contributions to marketing promotion; the second part presents SM and its main contributions to marketing promotion; and the third part fuses the literature on IMC and SM into an integration approach to explore directions for future research that can contribute to more effective marketing promotions.

Objective of the Study

The aim of the present study is to find the impact of integrating SM with IMC, which is significant to our investigation. We examine and connect distinct ideas of marketing communication/promotion offered by various scholars in the area to assess the consensus and consistency of marketing communication/promotion strategies, and we justify their contribution and the interaction style of SM/CGM with IMC by offering relevant examples, with a focus on the fourth P of marketing in general and IMC in particular.

Methodology

To assess the importance of IMC and SM in the marketing domain, the present research aims to explore the existing research related to the field of marketing communications and to integrate the marketing communication mix toward achieving better marketing strategies for companies in promoting their products and services efficiently. For this, a systematic review was conducted. First, the process of systematic collection, assessment, and integration of existing work forms the core of review papers (Bem, 1995; MacInnis, 2011; Yadav, 2010). Review papers, conceptual reviews, or theory-focused articles (Barczak, 2017; Hulland & Houston, 2020; Kozlenkova et al., 2014; Palmatier et al., 2018; Rindfleisch & Heide, 1997; Short, 2009; Stewart & Zinkhan, 2006) do not provide and analyze first-hand data, but instead provide an integration of the literature (Gilson & Goldberg, 2015; Goodwin et al., 2004; Nicolaisen & Driscoll, 2014). Articles were initially identified using a keyword search in prominent literature databases such as WoS, Scopus, Google Scholar, EBSCO, and ProQuest (Archambault et al., 2009; Bartol et al., 2014; Donthu et al., 2020; Norris & Oppenheim, 2007). Two main search strings were used initially, with combinations of keywords, in order to cover the related themes.
The first search string, for IMC, used the following keywords: (1) integrated marketing communication = (("integrated marketing communication/s") OR ("IMC/s") OR ("marketing promotion/s") OR ("promotion mix") OR ("marketing communication/s") OR ("communication mix") OR ("product promotion/s") OR ("brand promotion/s")); and (2) social media/consumer-generated media = (("social media") OR ("social media marketing") OR ("social media promotion") OR ("consumer media") OR ("consumer generated media")). Under the inclusion criteria, we considered only peer-reviewed articles appearing in leading journals and published in the English language. In the second step, the papers were sorted according to the relevancy of the topic, and in the third step we critically analyzed the papers and put forward their key findings in a systematic and integrated manner (Apriliyanti & Alon, 2017; Archambault et al., 2009; Byington et al., 2019; Donthu et al., 2020; Dzikowski, 2018; Markoulli et al., 2017; Martínez-López et al., 2018; Parker et al., 2017). Besides some articles related to bibliometric analysis, systematic literature review/structured literature review articles were also referred to. A total of 3,517 articles appeared initially and were examined by reading the title, abstract, and keywords. Papers limited to business management and marketing communication, the communication/promotion mix, and social media marketing, and documents of type article, review, and in-press article were selected. A total of 523 papers were selected in the second round. Under the exclusion criteria, papers not directly related to the theme (IMC & SM) were removed through extensive reading and reviewing. Finally, based on the inclusion criteria for the related themes, only 300 articles from about 130 different journals were adopted for further investigation. The present study aims to address the gap through robust reviewing, observing, and highlighting of the key marketing outcomes/themes, and the further inclinations achieved through improving IMC with SM/CGM in the realization of a modern communication/promotion approach.

Review of IMC Interactions

Marketing scholars initially referred to IMC as one of the four P's (promotion) of the marketing mix (Kotler, 2000; Kotler et al., 2001; Kotler, 2003; Ogden, 1998). Despite its ongoing growth and relevance in both academic and professional circles (Muñoz-Leiva et al., 2015; Pisicchio & Toaldo, 2021), IMC has never been more important than now in this fast-paced, dynamic, and ever-intensifying environment of marketing and communication (Taylor, 2010; Vernuccio & Ceccotti, 2015). Consequently, the IMC community has pushed for more rigorous empirical research to enhance its theoretical development. Porcu et al. (2019) define IMC as "the stakeholder-centred interactive process of cross-functional planning and alignment of organizational, analytical and communication processes that permit continuous discussion by conveying transparent and consistent messages via all media in order to foster long-term profitable relations that create value." Academics and professionals point out that IMC has evolved from a narrow, marketing-centric approach to a broader organizational view in which the customer is the central focus. The IMC research area has thus always been considered a vibrant field of academic debate and existing research.
In early theoretical approaches (Raman & Naik, 2004; Schultz, 1992, 1996), IMC was evidently confined to the mixing and planning of marketing communications, whereas recent research suggests otherwise; Kliatchko and Schultz (2014), Luxton et al. (2017), Porcu et al. (2017), Tafesse and Kitchen (2017), and Vernuccio and Ceccotti (2015) argued that a firm-wide approach should be initiated to involve the whole organization in IMC as a market deployment mechanism, enabling optimization and achieving superior communication effectiveness (Luxton et al., 2017). As Kliatchko (2005) argued, though the conceptualization of the IMC paradigm had developed substantially, at that time it did not adequately capture the epitome of IMC's essential characteristics. Moreover, the authors agree that the commonalities and key components of IMC are concerned with managing and selling communication in a holistic and strategic manner. In a practical sense, to offset the weaknesses of individual tools, IMC tries to combine, integrate, and synergize elements of the communication mix as one and to create a unified message that should not be developed in isolation (Kitchen et al., 2004; Kliatchko, 2005). Some authors (Duncan & Moriarty, 1998; Schultz, 1996) approach IMC from a managerial perspective and speak of managing the standard marketing communication mix (advertising, sales promotion, public relations, etc.) so that marketers handle all communication tools in an integrated fashion rather than as separate practices.

Developing Brand and Challenges Ahead

Customers' perception of a relationship is holistic and cumulative; the exchange or transfer of merchandise managed in a trustworthy and timely manner is part of this relationship (Grönroos, 2004). The relationship yields an interaction process in which various types of contact between the supplier or agency and the client occur over time. These contacts may be very different depending on the marketing situation. Among them, there are contacts between people, some between people and machines and systems, and some between merchant and customer systems. As a result, implementing IMC necessitates the engagement of the whole company and its agents. It requires attention at all levels of the organization, from the highest corporate strategic level to the day-to-day implementation of individual tactical actions. To enhance customer connectivity and the responsiveness of the organization in putting the customer first, IMC requires the adoption of an "outside-in" approach (Kitchen & Schultz, 2001; Kitchen et al., 2004; Low, 2000). The marketing information system designed by IMC planners and strategists to foster a clear understanding of brand opinions, foster timely dialog with consumers, and facilitate insights into and understanding of competitive brand activity is vital, especially for those who are responsible for setting marketing policies and strategies. The strategic consistency dimension acknowledges that all communications linked to the brand entity should provide a consistent message to customers and other stakeholders. The coordination of brand messages from IMC sources, other social media sources, and other aspects of the marketing mix, the coordination of staff facing the customer, and, more generally, contact with the organization must be consistent to protect and enhance the image of the brand.
In the assessment of Keller and Lehmann (2003), Ambler et al. (2002), and Reid (2005), by linking effectiveness in marketing communication management and campaigns with customer and brand equity outcomes, a "chain of IMC productivity" is likely to exist, mirroring the brand value chain concept. Besides identifying strengths, weaknesses, threats, and opportunities within the professional setting, it might be necessary to attain the right positioning of the company as well as an organization of like profile, identity, and image. Further, Anantachart (2005) suggests that efforts in marketing communications should be made in order to build a strong and comprehensive brand. Like a star in the BCG matrix, through consumer communication the brand develops extra exposure and preference, and it must eventually become a powerful brand. In the marketing context, a significant aspect of marketing communication is the attempt to establish a two-way or, even better, a multi-way communication process. While not all communication efforts are effective, their overall impact should lead to a response that enables the relationship to be maintained and enhanced (Grönroos, 2000). Any given effort, like a sales meeting, an unsolicited mail letter, or a package of information, should be integrated into a planned, continuing process (Grönroos, 2004). In the client's mind, the goal of marketing communications is to provide a mental representation of the service product that simplifies the evaluation of the service provided. In addition, it is desirable to create advertisements that evoke responses that aid in the development of customer information (Carlson et al., 2003; Nowak & Phelps, 1994). Ads that possess advertisement devices, company contact information, and promotion elements, as well as brand advertising, are capable of generating responses that add concreteness to the service offering and evoke audience actions that may build a database of information (Schultz, 1993; Yarbrough, 1996). Moreover, Keller (2001) suggests that marketing communications can play a dual role, as they are one of the keys to the success of many brands and one of the causes leading to the failure of many others. In terms of brand and channel equity, marketing communication is essential to maintaining and leveraging stakeholder relationships (Dawar, 2004; Duncan & Moriarty, 1998; Lannon & Cooper, 1983; Srivastava et al., 2000; White, 1999). Moreover, brands play a central role in firms' responses to competitive moves; through the brand, a firm manages and measures its marketing efforts and results, as brand advertising and promotion attract audiences and augment brand sales volume. Today, brands have become the focus of a company's marketing efforts and are seen as a basis of market command, competitive advantage, and better returns (Dawar, 2004). Reid et al. (2005) argue that brand communication is directly linked to brand functionality, in the sense that a brand's distinctiveness to customers is not a property of the product itself but an outcome of the brand communication.
Building and sustaining brand equity requires well-designed and implemented marketing communication strategies; however, there is enormous complexity in the task of IMC. Keller (2001) attempted to provide some current perspectives on how to understand marketing communications and how integrated marketing communication programs should be designed to help with these issues and challenges.

Developing Consensus and Consistency

There is a great deal of diversity and disagreement in IMC (Torp, 2009). Gradually, IMC has been developed and elaborated over the years, resulting in both a greater level of precision and a broader scope. Finally, the ideal of normative consistency is challenged by the notion and understanding of integration. In this review, practitioners and theorists, as well as those who fall between the two, are discussed (Torp, 2009). As demonstrated by Porcu et al. (2019), hospitality businesses are likely to perform better in the market when their communication efforts are effectively integrated. IMC has been shown by Kliatchko and Schultz (2014), Luxton et al. (2015, 2017), Porcu et al. (2019), and Taylor (2010) to have an immediate and positive effect on market performance; when IMC is implemented as an end-to-end sales and customer satisfaction system, customers are better served and the brand is more valuable. This has provided a substantial and significant response to calls for more rigorous empirical research to establish the impact of IMC on sales, customer satisfaction, and brand advantage. Moreover, management should strive to improve the organization's flexibility, speed of response, and outreach, which can be achieved by actively listening to the voices of internal and external stakeholders. It is vital to foster a healthy collaborative environment within the organization, as well as between the organization and its partners or outsourced functions, by communicating effectively in the work environment (Porcu et al., 2019). As an integration model, communication consistency permits marketers to coordinate numerous structured sources of messages so that a consistent perception and identity of the organization and its brand may be shaped. Communication consistency is most often pursued either in a cross-media context, where synergistic and cooperative execution elements and cues are used simultaneously across multiple media platforms, or in a successive media context in a longitudinal media campaign (Dewhirst & Davis, 2005; Grove et al., 2007; Mcgrath, 2005; Naik & Raman, 2003; Reid et al., 2005; Tafesse & Korneliussen, 2013; Voorveld et al., 2011). Through IMC, organizations can produce synergies between and among distinct delivery mechanisms, amplify performance, and increase the probability of reaching communicative goals. Achieving such organizational goals necessitates dedicated design to align verbal and visual manifestations with targeted representational processes and audience appeals. Institutions ought to treat the associated pursuits of IMC as a priority strategy, as more advantages are derived from integrated marketing communications. Effective IMC programs are mostly implemented on a functional level. This involves planning based on multiple media channels, implementing marketing campaigns, and a wider scope for the integration of communications.
Functional integration has a client-centered focus and takes advantage of outside-in planning and customer databases (Peltier et al., 2003; Swain, 2004; Zahay et al., 2004). As Rowley (1998) states, promotion is a vital element of the marketing mix. Various types of promotional approaches include advertising, direct marketing, sales promotion, public relations and publicity, personal selling, and sponsorship. A plan for using reliable and appropriate communication methods involves multiple considerations, including the target audience, communication objectives, and marketing communication messages. Next, factors relating to the "how" also deserve attention; this includes selecting the appropriate communication channel, establishing the budget, and formulating the promotional campaign. Finally, a careful assessment of the results of promotional endeavors is required to determine whether the investment in marketing was beneficial (Rowley, 1998). While the theoretical ideas of IMC are imprecise and unresolved (Kliatchko, 2005), agreement and clarity around its theoretical framework are needed. It is crucial for the progress and development of research in any area to adopt a paradigm, which is a set of rules that members of a community agree upon as fundamentals. As paradigms contribute to the development of theoretical generalizations, shape knowledge gathering, and influence the selection of analysis techniques, a future discussion on IMC should take a broader and more comprehensive approach. Further, Schultz (2003, 2009) found that most companies have remained at the stage of primarily managing the tactical coordination of promotional elements and that very few, even in today's world, have achieved financial and strategic integration. The principal goal of IMC is to affect perceptions of value and behavior through engaged communication. The development and diffusion of IMC are closely related to rapid technological progress, rapidly globalizing and deregulating markets, and the differentiation of consumption patterns. This has stressed the requirement to adjust objectives and methods to dynamic marketing and communication realities. From this perspective, communication has to move from techniques to strategy. In the rapidly changing and extremely competitive world of the 21st century, only strategically directed IMC will help businesses move forward (Holm, 2006). In order to integrate their communications, organizations have to embrace diversity and variety and balance the knowledge of their several voices with the determination to ensure that their overall expression is clear and consistent (Thøger Christensen et al., 2008).

Dimensions and Philosophy of IMC Development

Communication is the process of conveying thoughts and sharing their meanings among people or organizations. It may be described as the glue that holds a distribution channel together as a collective whole. Control of communication in a marketing channel is an essential concern from both a theoretical and a managerial standpoint. Communication in the marketing channel can function as the process by which the transmission and dissemination of information is made clear (Frazier & Summers, 1984). The term integrated marketing communication (IMC) first appeared in the late 20th century as the concept of applying consistent brand messaging across diverse media channels and platforms.
Primarily, the IMC model was created to address the need for businesses to provide consumers with reasonably consistent advertising, and it recommends that marketers pay close attention to the client, his or her preferences, shopping patterns, media exposure, and other factors, so that the consumer is exposed to merchandise that matches his or her needs in many ways, as well as to the combination of communication methods that the consumer may find most engaging and credible. To promote their products or services, businesses use different tools such as brochures, telemarketing, websites, ads, and so on. Integrated marketing communications represent the build-up of all elements that endorse connections in a brand's marketing mix by building shared meanings with the stakeholders of the brand. The goals of marketing communication are to provide information to the target audience, make an impact, and boost sales. IMC is considered a commercial strategy that is used to achieve the greatest effect at the commercial level. Usually, it is a blend of the various promotion-mix elements, used in a consistent manner to deliver a seamless message with maximum effect at the purchaser or consumer end. Its effects on overall organizational performance and brand equity are thrust areas that can be stimulated by the IMC process. Usually, consistency of message, media and layout consistency, reinforcement, and budget alignment remain the components of IMC. Boosting sales, building a strong brand image, and gaining competitive advantage are among the primary goals of IMC. IMC is used to form a positive image of the organization in consumers' memory, so that consumers share positive word of mouth with others. IMC focuses on customers so that extra value may be transferred to them by the company; supported by IMC, the company can build a close relationship with customers and thereby gain performance. A major development in communications over the last few decades has been IMC, as companies and their brands stand to benefit from it in the form of competitive advantage. The influence of IMC is said to pave the way for an array of changes in the communications of the company toward various stakeholders, having an impact on the ability and potential of businesses to gain, maintain, and influence customers (Kitchen et al., 2004; Reid et al., 2005; Stewart, 1996). Additionally, the authors argue that IMC has passed through, and is still passing through, the fundamental debate about its meaning, purpose, and right to appear and have its own identity that can stand out from other marketing concepts, such as integrated marketing, customer relationship management, brand awareness, and market orientation. Integrated marketing communication must be viewed as a new paradigm for managing marketing campaigns (Kitchen et al., 2004). According to Baker and Mitchell (2000), IMC specializes in building and leveraging customers and their interests and relationships with the organization and its brands. In addition to tying IMC into one-to-one marketing and CRM, this orientation to relationships challenges marketers to integrate, coordinate, measure, and be responsible for both traditional and new interactive marketing methods (Baker & Mitchell, 2000).
IMC can also be a market driver in certain instances, and it may be driven by the market in others, so as to extend the idea of customer-centered communication if it gives the organization a superior market advantage (Carrillat et al., 2004; Eagle & Kitchen, 2000; Ewing & Napoli, 2005; Jaworski et al., 2000; Low, 2000). By monitoring, controlling, and influencing messages sent to individuals and fostering determined dialog with them, IMC is the process of creating, establishing, and nurturing commercial relationships with customers and other stakeholders. The IMC approach measures the performance of the integrated program by estimating the financial gains (outcomes-driven), retrieving market values, and measuring the returns on investments in the markets (Kliatchko, 2005). Once markets have been identified, the most powerful points of contact or channels of communication (channel-focused) can be helpful in establishing connections with each market. By understanding the targeted markets, as well as the essential means via which they may be reached, the marketing communication campaign becomes more focused and strategic. The philosophy of IMC suggests that an enterprise may also contribute to the idea of integrating communication, in which there is a prominence on raising awareness of the benefits of, and thus the expectation to, integrate messages. Developing a mindful and optimistic mindset toward integrating messages builds a common essence with a flow-on effect on organizational objectives and values (Duncan & Everett, 1993; Harris, 1998; Stewart, 1996). There may be a need to guide internal workers as well as external service providers, such as advertising agencies, to ensure that the positioning strategy and messaging are consistent. In this capacity, the significance of IMC is recognized in the guiding concept of promotions; it legitimizes the context used and sees the desired outcome through coordinated and integrated communication processes (Reid, 2005). Without the physical integration of the management functions responsible for developing and delivering the message, this may not be possible. As a result, IMC has articulated a new philosophical and physical orientation that is very different from its previous philosophy. A field-specific communication praxis through communication and rhetorical studies provides an alternative perspective for theory and curricular innovation in the IMC discipline (Groom, 2011; Houman Andersen, 2001). Using a communication praxis, one acknowledges what field researchers and market participants have created in terms of the interrelationship between marketing communications and the marketing disciplines (Farmer & Patterson, 2003; Ihlen, 2002; Schultz & Schultz, 2004; Thøger Christensen et al., 2009; Torp, 2009; Toth, 1999). According to Cornelissen (2001) and Duncan and Moriarty (1998), IMC is seen as a management philosophy that needs to be embedded in the business process to achieve a business outcome, while others see it primarily as a campaign development process linked to a broader brand strategy (Baker & Mitchell, 2000; Kohli & Jaworski, 1990; Lings, 2004; Nowak & Phelps, 1994). The idea of IMC as a philosophy or concept was evident as early as 1991 in the widely cited definition by the American Association of Advertising Agencies (Duncan & Everett, 1993). Reid (2005) quotes Duncan and Everett and suggests that an organization with an IMC philosophy might or might not physically integrate individuals into one department.
While strategic IMC focuses on influencing the overall brand positioning strategy, tactical IMC focuses on the planning and implementation of individual integrated campaigns that work to build and reinforce brand positioning over time and contribute incrementally to the development of strong customer-based brand equity, as advocated by Reid (2005). This should reflect best practice in developing and implementing individual campaigns. As the marketplace has become more competitive and consolidated, organizations are increasingly recognizing the value and advantage of open, transparent, and interactive communication that is holistically interwoven across their operations. Because of the increased distinctiveness, "one of the most desirable results of effective IMC is the creation of more monopolistic brands making the brand less inclined toward competition" (Rust et al., 2004). From an IMC viewpoint, Rust et al. (2004) found that marketing strategy and techniques (including marketing communication) had an influence on consumer attitudes, loyalty, satisfaction, turnover, and retention, among other things. Enhanced IMC strategies, if incorporated, will likely improve brand awareness, positive brand attitudes and preferences, brand action intentions, and purchase acceleration over time. Price premiums and reductions in price elasticity, as well as increased market share and profitability, will result in greater customer and brand equity and other related outcomes (Keller & Lehmann, 2003). The IMC philosophy does not dismiss the IMC toolbox (one look and one sound), nor does it abandon or diminish the looking glass. This approach embraces complexity and opacity, assigning communication resources in such a way that businesses can meet the complexity of a situation (such as BP's catastrophic oil spill in the Gulf of Mexico in 2010). Incorporated into commercial operations as well as the everyday processes of operation and strategy development, IMC becomes more than just a tool for executing functional and consistent messaging. It becomes a necessary mode of engagement for companies seeking to remain nimble, agile, and responsive during crises (Groom, 2011; Thøger Christensen et al., 2008). Further, increasingly productive discussions of branding, corporate citizenship, social concerns, and sustainability (Cone et al., 2003; Lee, 2012) broaden the field of IMC by pursuing philosophically and morally demanding themes of study. In IMC development, these domains demand the application of language, ethics, and praxis-based decision making. Discussions in these areas have turned IMC into a philosophically ambitious discipline, one that emphasizes a greater need for communication in all marketing disciplines (Kent & Taylor, 2002; Kitchen & Schultz, 2003). The Marketing Communication Tetrahedron (MCT) proposed by Keller (2001) is characterized by interactions between and among four sets of factors by which marketing communications can be described: "(1) Consumer factors (e.g., knowledge and processing goals); (2) Communication factors (e.g., modality information, brand-related information, executional information); (3) Response factors (e.g., cognitive and affective processing; memory, judgment, and behavioral outcomes); and (4) Situation factors (e.g., place and time)."
Because of the latest technological developments, using an IMC approach enables firms to accurately capture empirical behavioral data about consumers, to use valuation tools and techniques, and to differentiate customers based on economic criteria as well. In addition, technological advancements have aided IMC significantly (Calder & Malthouse, 2003; Kitchen & Schultz, 1999; Schultz & Schultz, 1998). The advent of technology has not only multiplied innovative communication channels but has also made databases one of the most valuable tools for managing customers today. An essential benefit of IMC is that it allows the company to focus on more specific and well-defined targets (Schultz & Schultz, 1998). Development managers, internal marketers, and company-profile specialists with a marketing background and qualification are often appointed in today's professional services, so they should be provided with knowledge beyond the standard marketing concepts. Additionally, sports, political, and commercial enterprise marketers should consider the simultaneous use of multiple technologies and communication strategies by their stakeholders, clients, constituents, and customers. Technological advancements and economic growth have significantly contributed to the growth of communication, such as via the Internet, mobile phones, wireless handhelds, rich media, and linked code from graphics programs to CRM support, among other things, resulting in career options that did not exist a few decades ago. Concurrently, this has extended the scope of marketing applications beyond the usual consumers/products and advertising/promotion bias. Every marketer is aware of the terms word of click, word of mouse, word of web, B2C, C2B, C2C, and buzz marketing, which refer to communications through the internet-enabled computer- or mobile-based simulated environment of the virtual world; these terms are often used to describe communication in the modern business era and have taken contemporary organizational marketing communications into a new dimension of the digital age. Consequently, viral marketing is the strategy used by marketers to encourage the masses to propagate a message in order to provide coverage, exposure, and influence for an organization's communication mix. With the access of digital media to a large audience, consumers have been handed the creative and distributive power of the marketing message. There is thus a dominating role for social media in contemporary marketing communications and in the performance of an organization in terms of the co-creation of brand identity, brand meaning, brand image, and value. As individuals construct their personal narratives within their cultural and social expectations, consumers are viewed as equity partners of the brand today. Hence IMC, further stimulated and integrated by social media, works as a dominant platform in the framing, execution, and development of marketing communications to a high level of organizational expectation among customers.
Conceptual papers in the field of marketing can link work across disciplines, bridge existing theories, provide multi-level insights, enable theorizing and theory building, theory synthesis, theory adaptation, typology, modeling, concept integration, and summarizing, increase understanding, build coherence, broaden the scope of our thinking, and above all warrant publication (Alvesson & Sandberg, 2011; Corley & Gioia, 2011; Cornelissen, 2017; Gilson & Goldberg, 2015; Jaakkola, 2020; Lemon & Verhoef, 2016; MacInnis, 2011; Vargo & Lusch, 2004). From this perspective, the purpose of this study is to review the effectiveness of social media/consumer-generated media as an associate in the appraisal of integrated marketing communications, using online media as a platform for generating content and dispersing instant messages regarding various products and services. Treating social media as an effective and appraising promotional tool, the paper aims to review and explain the role of, and liaison between, social media/consumer-generated media and IMC in developing modern-day marketing communications and relationships with consumers.
Table 1 shows the list of journals in the domain whose articles were referenced four or more times and contributed more than half of the total references in the present study. The "Journal of Marketing" is the highest contributor to the literature with 18 references, followed by the "International Journal of Advertising" and the "Journal of Marketing Communications" with 17 references each, and so on. Table 2 lists the primary writers in the domain who have been cited more than twice and contribute more than 25% of the literature in this study. With a contribution of 11 articles to our investigation, Schultz, D. E. attains the top position in the list, followed by Keller, K. L. and Kitchen, P. J. with 5 articles each and Cornelissen, J. P. and Seric, M. with 4 articles each.
IMC Approach and Key Marketing Outcomes
IMC is a more advanced issue than the coordination and performance of a variety of activities. It is rather the art of uniting a sender's meanings and goals with the carefully delineated receiver's conditions of pre-understanding and interpretation, in order to develop an optimal strategy in which the content and variety of the messages are congruent and the choice of channels is optimized. Therefore, IMC is now considered a strategic issue, which requires an approach based on the characteristics of strategy and strategic choices. Strategy concerns the long-term direction of a company, since strategic selections prepare it to gain some competitive advantage over competitors in the market. Strategic decisions are also interwoven with the actions an organization performs. They have to do with what stakeholders and management wish the organization to be like and to be about. This might and may include important decisions about vision, mission, objectives, product range, pricing, and withdrawal from or entry into new markets. Strategy can thus be seen as the matching of an organization's resources and activities to the setting within which it operates, generally referred to as "the search for strategic fit." As proposed by Dekimpe and Deleersnyder (2018), Lamberton and Stephen (2016), MacInnis (2011), Palmatier (2016), Palmatier et al. (2018), Samiee (1994), Steinhoff et al. (2019), and Wade and Hulland (2004), conceptual studies may need to summarize the review outcomes in the form of tables.
The present research (see Table 3 below) also attempts to summarize the review of various research studies with reference to marketing promotions, with the aim of making IMC more result-oriented. The researcher here attempts to collate various studies related to IMC and their marketing outcomes, presenting the leading conclusions as factors contributing to the performance of the organization.
Defining Social Media
According to Dwivedi et al. (2015), social media marketing is essentially a dialog between consumers, audiences, or businesses regarding products or services, which funnels into a positive dialog between the explicit parties for the purpose of learning from one another's opinions and experience, ultimately benefiting both. Similarly, Filo et al. (2015) defined social media as "new media technologies that enable interactivity and co-creation that allow for the development and sharing of user-generated content among and between organizations." Social media technologies, channels, and software systems are used to create, link, provide, and interchange offerings that are of worth to an organization's stakeholders. People deliberately engage in social networking because with social media one can directly contact and engage with others. Given how much social media may contribute to the position of organizations, the majority of companies worldwide anticipate taking advantage of such applications in their business to reach new customers or enhance the experience of their current customers, thereby generating more profit and sales revenue (Ananda et al., 2016; Gulbahar & Yildirim, 2015; Movsisyan, 2016; Wu, 2016; Yadav et al., 2015). A social media platform certainly provides a novel and inexpensive way of communication, improved interactivity, and greater security for customer interaction (Leeflang et al., 2014). This, in turn, helps firms carry out their marketing efforts with greater efficiency and success in comparison to the utilization of traditional promotional media (i.e., newspapers, radio, TV).
Table 3 (excerpt). Market Impression: organizational alignment, organizational benefits, psychosocial outcomes, business outcomes, reduced interdepartmental conflict, decreased transaction costs through cooperation, reduced duplication of effort, reduced duplication of communication strategies, clear alignment of brand positioning, one voice-one look, consideration of corporate goals, four pillars model, cost savings, cordial interdepartmental relations. Financial Assessment (Srivastava et al., 1998; Low, 2000; Eagle & Kitchen, 2000; Naik & Raman, 2003): financial impact, impact on firm value, profit and growth, EBIT (earnings before interest and taxes), cash flow stability and growth, ROI (return on investment)/ROBI (return on brand investment, current and future), EVA (economic value added), brand financial performance, MVA (market value added), market capitalization, share price, result-driven IMC, optimized costs, overall profitability, economic and financial performance. Source: Authors' compilation.
sharing sites and apps, music sharing sites and apps, content sharing sites, intellectual property sharing sites, user-sponsored blogs, company-sponsored websites and blogs/weblogs, business networking sites and apps, e-commerce communities, podcasts, news delivery sites and apps, information and education delivery sites and apps, social bookmarking sites, virtual worlds, etc., like Airbnb (150m+), Amazon (150m+ followers), HBO (134m+ subscribers), Houseparty-Fortnite Trivia challenge (9m+ followers), and BuzzFeed Tasty.
Social Media Interaction
Several studies have attempted to investigate the function of social media in business organizations from the customers' perspective. Yap and Lee (2014) and Pitt et al. (2011) found that customers' loyalty to social media networks (e.g., a company's or brand's Facebook page) is associated with social influence, compatibility, enjoyment, and usage behavior related to the company's offerings, and with intentions to use the brand's social media platform for online shopping (Annie Jin, 2012; Treem & Leonardi, 2013). In studies of "location-based social network sites," Prohaska (2011) and Yavuz and Toker (2014) found that customers' registration behavior is primarily driven by their desire to promote their desired self-image and by the fun of connecting with others, while few studies have examined how firms themselves can benefit from social media. Despite the increasing importance of social media in business organizations, most studies so far have concentrated on consumers' attitudes toward it, while limited research has examined how businesses can benefit from it (Porcu et al., 2012; Tsimonis & Dimitriadis, 2014). The conclusions of Kietzmann et al. (2011) reveal that organizations, communities, and people all experience significant and pervasive changes as a result of social media. For organizations, the seven functional building blocks of social media proposed by Kietzmann et al. (2011) are "identity, conversations, sharing, presence, relationships, reputation, and groups"; these serve to monitor and understand the varying functions and influences of social media, so as to develop a social media approach based on a combined set of building blocks for online communities (Kietzmann et al., 2011).
Consumers increasingly turn to mixed varieties of social media to conduct their information searches and to make their selections (Lempert, 2006; Mayzlin, 2006). Compared with corporate-sponsored communication via the usual components of the promotion mix, social media communication is perceived by consumers as a more reliable source of product and service information (Foux, 2006). The present research argues that social media is a component of the promotion mix because it combines characteristics of traditional IMC tools (companies talking to customers) with a highly enlarged form of word-of-mouth, the word of web/word of app/word of CGM (customers talking to one another), whereby marketing managers cannot control the content and frequency of such information. As an innovative stage, social media leverages a rich mix of technology and media trends enabling immediate communication, employing multimedia formats (audio and visual presentations) and several platforms of delivery (Muñiz & Schau, 2007). Social media, or consumer-generated media, must be accepted, included, and integrated into the promotion mix. Coordinating all promotional activities, including social media, must provide a customer-centric unified promotional message.
Several firms have urged consumers to contribute images or videos of the product in action. Customers are more likely to talk about companies and products when they feel they have learnt a lot about them. By allowing shoppers to observe other customers using the product, firms are able to entertain and interact with them. So, to encourage word-of-mouth and social-media-based conversations, products and services should be built with talking points in mind (Mangold & Faulds, 2009). As media fragmentation increases, and stakeholders become able to share information about organizations and co-create content, social media can be expected to be incorporated into IMC.
Social media can be a very important communication tool for organizations. Nevertheless, its full potential and variety appear to be unknown to companies. Social media platforms are often seen merely as channels for disseminating a message and empowering dialogic communication, instead of tools that offer a chance to reach audiences far beyond that (Kietzmann et al., 2011). According to Schultz and Kitchen (2000), social media platforms are well suited to the third stage of the IMC pyramid, which focuses on fostering and ideally achieving economic and strategic integration of IMC through the use of information technology. Social media offers an opportunity for IMC because it is designed to augment the two-way communication between the organization and its stakeholders, with the ability to facilitate discussions, provide feedback and suggestions, and gather general comments. Social media is also more cost-effective than more traditional marketing channels, such as print media, and is thus of great value to these organizations. As a result, social media has become part of a new marketing setting that has changed the appearance of IMC. Integration of social media into commercial communication frameworks is nevertheless still a long way off (Zarkada & Polydorou, 2014). Social media is likely to fail to deliver the benefits offered by IMC if organizations view it as an auxiliary activity rather than an integral part of it; treating it as integral should be the initial stage in developing both a marketing and a communication strategy. It has been suggested that, due to its infancy and nature, social media is more ambiguous than traditional media (Kunz & Hackworth, 2011; Mangold & Faulds, 2009). The findings of Valos et al. (2016) reveal that unique characteristics of social media (such as interactivity and individualization, the integration of communication and distribution channels, immediacy, and information collection) affect traditional marketing communications contexts.
Several research findings reveal that employees must also be skilled in using and executing social media strategies (Alves et al., 2016; Frazier & Summers, 1984; Hinz et al., 2011; Latiff & Safiee, 2015; Tsimonis & Dimitriadis, 2014). The proficiency of employees in utilizing and managing various social media networks should be a determining factor when an organization chooses a particular social media network to focus on. Organizations with limited social media capabilities may decide to rely solely on one or two social media networks to achieve their organizational goals, while organizations with highly skilled employees can afford to use a comprehensive mix of social media networks to achieve a wide variety of social media objectives. The results of the study conducted by Alves et al.
(2016) convey that many studies are devoted to understanding the behaviors of shoppers in social media, and that a great deal of research devoted to understanding the behaviors of companies (their varied aspects, particularly barriers to social media usage, ROI measurement, and ways to optimize strategies, among others) could lead to future research directions.
SM/CGM Approach and Enhanced IMC Outcomes
As proposed by Dekimpe and Deleersnyder (2018), Lamberton and Stephen (2016), MacInnis (2011), Palmatier (2016), Palmatier et al. (2018), Samiee (1994), Steinhoff et al. (2019), and Wade and Hulland (2004), conceptual studies may need to summarize the review outcomes in the form of tables. Hence, in Table 4 below, the researcher summarizes the review of various research inferences with reference to SM/CGM and marketing promotions, with the aim of making SM/CGM-enabled IMC more result-oriented. The researcher has attempted to collate various studies related to social media and their marketing outcomes, presenting the leading conclusions as factors contributing to the performance of the organization.
Conceptual Modeling
Based on the above reviews, the author developed a conceptual model (Figure 1) to highlight the synergistic approach of IMC and SM/CGM (Dowling et al., 2020; Hulland, 2020; Khamitov et al., 2020; Palmatier, 2016; Samiee, 1994; Sample et al., 2020; Sutton & Staw, 1995). It is evident from different research studies that the coordination of IMC and SM/CGM has effectively promoted, transformed, and developed modern-day communications. With all the information and accessibility at hand, organizations can plan and develop a marketing communication mix structure rooted in both IMC and SM, one that is reliable and responsive in generating positive e-WOM/WOW in relation to brand identity and company image. SM/CGM is an opportunistic augmentation of IMC but should be handled with care, because it breeds a lot of uncontrolled messages (user-generated content) that organizations must address in a proper and tactical way. With all this technological patronage, SM/CGM has made IMC more dynamic and synergistic, with reach and access to a large audience.
Performance Measures
The overall objective of the social-media-enabled communication mix is to increase the performance of an organization, tangibly and intangibly. These performance measures can be evaluated differently by different organizations through key performance indicators (KPIs). Common KPIs of social media marketing are: 1. Likes, 2. Engagement, 3. Follower growth, 4. Traffic conversions, 5. Social interactions, 6. Social sentiment, 7. Social visitor goals, 8. Social shares, 9. Web visitors from social channels, 10. Social visitor conversion rates, etc. These can essentially enhance communication performance and, in turn, marketing results. Moreover, social-media-augmented IMC can improve key areas of the business such as customer loyalty, retention rate/churn, share of wallet, average order value, frequency of purchase, customer satisfaction score (CSS), advertising rates (CPM/cost per 1,000 views), and click-through rates (CTRs). Similarly, there are many other market measures and metrics at the micro and macro level which are used to evaluate the performance of an organization, such as penetration percentage, brand receivable turnover rate, net promoter score, offer redemption rate, customer retention rate, repeat order rate, customer lifetime value, etc.
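Several of the KPIs listed above reduce to simple ratios, so they are straightforward to operationalize. The following is a minimal sketch; the input figures and the simple undiscounted CLV form are hypothetical illustrations, not data or formulas taken from the studies reviewed here.

```python
# Minimal sketch of common social-media KPI calculations.
# All input figures below are hypothetical illustrations, not data
# from the studies reviewed in this paper.

def engagement_rate(interactions: int, followers: int) -> float:
    """Share of followers who interacted (likes, comments, shares)."""
    return interactions / followers

def click_through_rate(clicks: int, impressions: int) -> float:
    """CTR: clicks per impression."""
    return clicks / impressions

def cpm(ad_spend: float, impressions: int) -> float:
    """Cost per 1,000 impressions."""
    return ad_spend / impressions * 1000

def customer_lifetime_value(avg_order_value: float,
                            purchases_per_year: float,
                            retention_years: float) -> float:
    """A simple, undiscounted CLV approximation."""
    return avg_order_value * purchases_per_year * retention_years

if __name__ == "__main__":
    print(f"Engagement rate: {engagement_rate(4_500, 150_000):.2%}")
    print(f"CTR:             {click_through_rate(320, 42_000):.2%}")
    print(f"CPM:             ${cpm(900.0, 42_000):.2f}")
    print(f"CLV:             ${customer_lifetime_value(38.0, 4, 3):,.2f}")
```

In practice each organization would substitute its own tracked figures and discounting assumptions; the point is only that these KPIs are directly computable from routinely collected campaign data.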
Conclusion
Although social media is a great opportunity for marketers, there are concerns about data privacy and trust in brands on social media. There must be a clear resolution from both the company side and the customer side to maintain data honesty, credibility, transparency, and confidentiality, so it is a moral duty of every stakeholder to support these standards. Moreover, there is a need for a regulatory mechanism to address any issues. People active on social media have become micro-influencers, with followers ranging from a few hundred to a thousand at the most, but their links are far-reaching owing to web 2.0. Digitally enabled social interaction through social media greatly influences and intersects most aspects of people's lives: travel, health, education, recreation, hospitality, fitness, diet, clothing, home, children, family, relations, office, entertainment, music and films, media, news, politics, economy, society and development, environment, science and technology, competition, and more. In the digital era of virtual reality (VR) and augmented reality (AR), the era of the internet of things (IoT), algorithms and artificial intelligence (AI), virtual influencers (VI) and robotics, an era of stern competition, of misperception and uncertainty, and indeed of possible opportunities and challenges, marketers must prioritize giving meaning and impressions to brands. Organizations must not leave the whole mechanism up to consumers if they are to be recognized in the virtual and physical world; rather, they must synergize their whole communications in a way that is responsive and thoughtful to the customers and addresses the demands of the consumers most efficiently.
Table 4 (excerpt). Campaign Breeder: luxurious lifestyles and prominent luxury brands, brand endorsers, ability for social interaction and esthetic presentation, social media marketing campaigns, the fashion domain and Instagram, Instagram celebrities, fashion bloggers, fashion brands, social media celebrities, authentic connectedness, higher purchase intention endorsement, quasi-promotional activities, real and relatable brands, peer Facebook users. Image Augmentation (Law and Braun, 2000; Mangold and Faulds, 2009): branding, market research, customer relationship management, service provision, and sales promotion, positive implications of deploying social media in marketing strategies, enhanced brand identity and market performance, brand loyalty built beyond traditional methods, brand followers, brand awareness, brand recognition, and brand recall, three-way communication, increasing the power of consumers, positive attitude toward the featured brand, listening to genuine reviews and looking at peer users' real experiences, customer experience and knowledge.
Throughout the research analysis, the author has tried to fill the gap and show the connection and development between SM and IMC with respect to building strong foundations for the brand, virtually as well as physically, in the minds of customers. Therefore, organizations must be oriented toward accepting the truth that if they want to prosper and flourish in the market, they should systematically design their communication mix strategy to impress a large audience. They should also be committed to accepting social media as a major initiator, influencer, and developer of greater attention from the audience.
Hence, as a mother of online communications, social media breeds a lot of brand messages; in turn, brand communication must be the centripetal force that attracts customers from all around.
Research Insights and Future Scope
At present, SM/CGM-assisted IMC has widespread influence and accessibility: customers can easily send a direct message to a company, order products and services online, and ask questions, often aided by virtual assistants, virtual influencers, AI robots, or chatbots. It provides faster, personalized, and even instantaneous customer service and is, in turn, economical for organizations. By reviewing the social media environment and considering where it is heading in the context of consumers and marketing practice, the study concludes that social media is a real stimulation provider to IMC. Modern IMC would be rather incomplete without the presence of an online platform and active consumer involvement, as consumers are real advocates of brand presence. Although a lot of filtering needs to be done on the company side, the influence and future of SM/CGM on the augmentation of IMC is exciting, amazing, and convincing. With reference to message consistency, the goal is to shift away from a limited concentration on marketing communications and toward a broader corporate viewpoint that involves the whole organization and its customers in interactivity and in establishing and nourishing relationships. Organizations must accept the presence of social media in their IMC as a reality, as it has now become relevant to our society and culture. The aim must not be downsizing or upsizing communications but rightsizing them accordingly. Although this study comprehensively examined and highlighted the main themes and outcomes addressed in the IMC and SM literature of marketing communications, it does not include all the factors and how they interact (antecedents and consequences). This could be a future direction for research and would help to establish a stronger theoretical foundation for examining the emerging area of integration of the marketing communication mix. It is hoped that the thoughts discussed here, related to SM/CGM-augmented IMC, stimulate many new concepts and research in the future to further strengthen the interdependence and relationship between SM/CGM and IMC. Eventually, the expectation is to see IMC become more vibrant and sustainable when aided by social media platforms.
Acknowledgments
The authors would like to acknowledge and thank the Deanship of Graduate and Research Studies of Dar Al Uloom University (DAU), Riyadh, Saudi Arabia, for financial support. The authors are exceptionally indebted to Professor Abdulrahman Alsultan, Dean of the College of Business, Dar Al Uloom University (DAU), for his motivation, enthusiasm, and support for this research.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Publication trends related to Uses and Gratification Theory on social media
ABSTRACT This analysis aims to analyze publication trends related to the Uses and Gratification Theory and social media in Scopus journal articles from 2019-2021. The Uses and Gratification Theory analyzes the genuine social and psychological needs that generate expectations for social media use. Nearly 50 years on, the Uses and Gratification Theory is today often applied to social media. According to the results, most Uses and Gratification Theory articles came from the USA, followed by China, the United Kingdom, Malaysia, and India. Meanwhile, the top five most used keywords are social media (n=130), uses and gratifications (n=68), uses and gratifications theory (n=41), Facebook (n=26), and uses and gratification theory (n=18). To conclude, these numbers show the trends of publication related to Uses and Gratification Theory, mostly about social media, spanning 2019-2021. Facebook is the social media platform most often mentioned, but in the future Instagram, Twitter, and TikTok, as younger media, could be used as alternative research objects. Suggested future themes include fake news and its spread, entertainment satisfaction, the rise of mobile entertainment today, and artificial intelligence in media use.
Introduction
Social media has become a communication tool that is growing according to human needs. Everyone quickly adopts the device that suits their hobbies and needs, and also uses it to gather information [1][2]. Chesher stated that social media accessed using smartphones is individual, so that everything captured with this device is directly related both to the point of view and to the user experience of people's daily lives [3]. The Internet and social media have, without question, become a critical source of information on almost everything [4]. They have been the subject of extensive research covering areas such as trends [5], sentiment [6], misinformation [7], and others. With these new habits of communication, research about social media has grown as well, and there are many methods to analyze the phenomena on social media. One of the theories that is used and compatible with today's conditions is Uses and Gratification Theory (UGT), which has been modified over 50 years. Compared to other theories, UGT is uniquely suited to analyzing social media phenomena because of its relevance to digital media in everyday life, its variety of needs, and its requirement that the audience be active [8]. A review of previous research primarily suggests that UGT specifies five categories: information [9], relaxation [10], convenience [11],
entertainment [12], and social interaction [13]. Kaur et al., in their research, added one category called financial benefit, meaning that platform users get bundling or cheaper packages that make the platform more affordable to them [14]. Over the years, research about UGT has mainly discussed these categories and aimed to know how the audience chooses media for their needs. In recent research in 2019, UGT was used to understand how people use apps for food delivery orders. This study found eight gratifications that had never been mentioned before in the five categories of UGT: convenience, societal pressure, customer experience, delivery experience, search for restaurants, quality control, listing, and ease of use [15]. Other research in 2022 examined UGT in the context of online photo sharing on Instagram; the results reveal seven gratifications: disclosure, peer influence, trend influence, self-promotion, diversion, habitual pastime, and social interaction [16]. Further research has used UGT to study the reasons why teenagers always use social media [17] and the effects of social media use [18].
Recent studies on UGT have always wanted to know the reasons behind audiences' use of media, but have never explored how UGT itself has been used so far. This research traces the analyses carried out during 2019-2022 in order to conclude which themes scholars have not yet explored. The author gathers the data from Scopus (http://www.scopus.com) with limitations to Social Media and UGT. Publication trends related to these two themes can be used as a mapping and reference for other researchers developing ideas in response to these patterns. The aim of this article is to map the trends, to analyze which countries did the most research related to Uses and Gratification Theory, and to find gaps within the mapping for future research.
Uses and Gratification Theory
Early media theory suggested that the audience was passive; hence the theory was called Hypodermic Needle Theory, mimicking a doctor who injects a vaccine into a patient. First introduced in 1944, the Uses and Gratification Theory focused on finding the reason behind every audience choice related to particular media. In 1954, UGT was used as a tool to examine needs and motivation; later, in 1964, it was used to investigate audience intentions to watch specific programs on TV and to understand how audiences view the mass media. Uses and Gratification was later developed in the early 1970s as the significant social theory describing how and why people choose specific media platforms to meet their requirements. Katz views every audience member as an active user who always has a reason to use media to satisfy a need. Uses and Gratification theory wants to explain why people satisfy certain needs and how they do it [19]. This theory suggests that people freely and consciously choose their type of platform usage to meet their necessities [20][21]. The current study explores UGT for two main reasons. First, UGT is always used to understand the audience's reasons and motives for their choice of media, but mapping is rarely used, so it remains little known which countries and which themes predominate. Second, UGT and social media have become highly intertwined in recent years, so with this mapping, further directions for research on UGT and social media will be known.
Bibliometrix
Bibliometrix is an open-source tool for comprehensive science mapping analysis of the scientific literature. It is programmed inside R software to facilitate statistical and graphical packages as an integrated piece; its authors designed and produced an R-tool for comprehensive bibliometric analyses [22]. Boyack and Klavans argue that science mapping works by combining classification with visualization [23]. This argument is strengthened by Medina & Leeuwen, who state that science mapping uses bibliometric methods to examine how disciplines, fields, specializations, and individual papers are related. Science mapping produces a spatial representation of findings similar to geographic maps and provides data visualization [24]. The aim is to create a representation of the research area's structure by partitioning elements (documents, authors, journals, words) into different groups. Visualization is then used to create a visual representation of the classification that emerges [25].
Method
This research is an analytical bibliometric study of publication trends related to the Uses and Gratification Theory and social media. The database was retrieved from Scopus (http://www.scopus.com) on 4/9/2022, with the limitation years 2019 to 2022. The data were then analyzed with the R-based Biblioshiny app [22], which can be downloaded freely from https://bibliometrix.org/. The software is used for data processing and for visualizing the results as graphics, which the author then describes. The database contains information for up to 328 articles, all of which must be journal articles in English. These analytical methods together constitute the science mapping approach, consisting of bibliometric search, scientometric analysis, and qualitative discussion [26].
4.1. Annual Scientific Production and Three-field plot analyses
Figure 1 shows the Annual Scientific Production based on data taken from Scopus in the 2019-2021 timespan. Based on Fig. 1, the highest year of article production is 2021, with 99 articles, a growth of 1.28% from 2020 with 72 articles. The year 2022 shows that Scopus already has 80 articles published even though the year is not finished yet, so there is a good chance it will surpass the 2021 publications related to social media and UGT. Using a three-field plot analysis, Fig. 2 shows the correlations between three units set under certain conditions: author, author's country, and keywords. These three-field plot analyses all use a set size of 20, displaying the most prominent authors, authors' countries, and keywords in 2019-2021. The size of each rectangle indicates these three categories, connected by grey links associated with each element in each list. On the left side we have the author's name, and in the middle (AU_CO) is the focal point of this three-field plot. Between 2019 and 2021, the top 5 countries are shown, with the USA having the most articles, followed by China, Malaysia, India, and the U.K.
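As a concrete illustration of the keyword-frequency step that underlies such science mapping, the sketch below counts author keywords from a Scopus CSV export. The file name, column label, and semicolon separator are assumptions about a typical Scopus export, not a description of the Biblioshiny workflow itself.

```python
# A minimal sketch of the keyword-frequency step in a bibliometric
# analysis. Assumes a Scopus CSV export with an "Author Keywords"
# column whose entries are separated by semicolons (a common Scopus
# convention); adjust the names to match the actual export.
import csv
from collections import Counter

def keyword_counts(path: str, column: str = "Author Keywords") -> Counter:
    counts: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            cell = row.get(column) or ""
            for kw in cell.split(";"):
                kw = kw.strip().lower()   # normalize case and whitespace
                if kw:
                    counts[kw] += 1
    return counts

if __name__ == "__main__":
    top = keyword_counts("scopus_export.csv").most_common(5)
    for keyword, n in top:
        print(f"{keyword}: n={n}")
```

Counts of this kind feed directly into the rectangle sizes of the three-field plot and into the keyword statistics reported below.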
Furthermore, Fig. 2 illustrates the five most frequently published themes related to social media and UGT in Scopus journals, represented by the green rectangles (DE, the keyword field). Social media takes the most keywords in this diagram (n=292); this may be because UGT is well known for its use in analyzing social media research themes. Uses and gratifications comes second, third, and fifth; even though the mentions differ, these three keywords can be said to be within the same scope. Facebook is tucked between the keywords in the fourth position (n=62). With these statistics, we can see that social media research has an increasing trend from year to year, along with the rising number of social media users [27]. The larger the rectangle, the more often the research theme appears; thus this study depicts the popular research themes. Indonesia, the author's country and also the fourth most significant social media user, does not contribute many articles about the UGT study, so this is one of the gaps that researchers in Indonesia can explore.
The Author's Contribution
The data shown in Figure 3 identify the top 15 authors who influenced the articles related to social media and UGT research. According to the data, Buhalis D [28] ranks first with a total of 199 citations; uniquely, this article discusses tourism and hospitality rather than social media, yet it has had a tremendous impact on the development of UGT research. The second most cited is Apuke O [29], with 187 citations for research about fake news on Covid-19 shared between social media users. Next is Abbas J [30] as the third most cited document, with 151 citations for research about the impact of social media on learning behaviour and sustainable education in Pakistan. Kircaburun's [31] study on UGT related to problematic social media use among university students, which also analyzes the five personality traits and motives for using social media, came in as the fourth most cited document with 119 citations. Finally, Dolan's research titled A framework for engaging customers through social media content [32] has 112 mentions and rounds out the five most cited documents globally. Based on these data, we can see that 3 out of the 5 most cited documents globally are about social media and UGT; these authors are likely to receive many more citations in the years to come.
Fig. 4. Single country publications and multiple country publications
The other aspect of authorship that needs to be examined is writing collaboration. Fig. 4 shows Single Country Publications (SCP) and Multiple Country Publications (MCP) in the Scopus articles related to social media and UGT research. This indicates international communication between researchers in a scientific discipline across countries. Furthermore, advanced technology encourages authors to contact authors in other countries with a high level of collaboration to research the same field. According to the database, the top five countries with the most SCPs are the USA, China, India, Malaysia, and Germany. Meanwhile, based on the MCP data, China has the most MCPs with 12 articles, the United Kingdom has 9 articles, Malaysia and Korea have 8 articles each, and the USA has 4 articles. The USA has the most SCP articles but comes fifth on MCP; this indicates that articles from the USA rarely include authors from other countries, a gap that may be filled in the future.
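The SCP/MCP split described above is a simple classification over each article's affiliations: an article counts as a single country publication when all co-authors share one country, and as a multiple country publication otherwise. A minimal sketch with hypothetical records follows; attributing each count to the corresponding author's country is a common bibliometric convention assumed here, not a detail stated in this study.

```python
# Minimal sketch of the SCP/MCP classification. Each article is
# represented by the list of its co-authors' affiliation countries;
# the sample records are hypothetical.
from collections import Counter

articles = [
    {"corresponding": "USA", "countries": ["USA", "USA"]},
    {"corresponding": "China", "countries": ["China", "UK"]},
    {"corresponding": "Malaysia", "countries": ["Malaysia"]},
]

scp, mcp = Counter(), Counter()
for art in articles:
    lead = art["corresponding"]  # attribute the count to this country
    if len(set(art["countries"])) == 1:
        scp[lead] += 1  # single country publication
    else:
        mcp[lead] += 1  # multiple country publication (international collaboration)

print("SCP:", dict(scp))
print("MCP:", dict(mcp))
```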
According to Fig. 5, the most popular words were social media (n = 130), uses and gratifications (n = 68), uses and gratifications theory (n = 41), Facebook (n = 26), and uses and gratification theory (n = 12). In other words, the most common topics for publication in Scopus related to UGT research were social media, UGT, and Facebook. This confirms that social media and UGT are very heavily represented in publications.
Fig. 6. Thematic Map Analysis
The research topics in the red circle are the themes most used in 2019-2021 publications and can be developed in the future. Fig. 6 shows many possibilities that can be explored, namely social media addiction, brand equity, entertainment, gratification, social media use, and many more. As recommendations for future research, we can move from the basic themes, which are mostly about UGT and information seeking, to the motor/niche themes in the blue circle. The themes suggested in the blue circle are fake news and its spread, entertainment satisfaction (influenced by the rise of mobile entertainment today), and social media use and/or addiction related to today's phenomena.
The research results show that the USA still dominates the UGT study. Waisbord stated that the study of communication is still well known for being a white people's study; in the context of UGT this is borne out by the first rank being held by the USA [33]. Even so, China, India, and Malaysia, as countries from Asia, are starting to show that studies on UGT are no longer controlled only by the US. The themes not yet explored still cover a wide range, from news to entertainment satisfaction; addiction and problematic social media use by youth could also be themes that emerge in the coming years.
Conclusion
Based on the data, the most productive year of article production is 2021, with 99 articles, a growth of 1.28% from 2020 with 72 articles. Most Uses and Gratification Theory articles came from the United States of America; the third largest user of social media surprisingly leads the field of UGT research. As the first and second largest social network user bases, China and India came second and fifth in this research, with the U.K. and Malaysia in third and fourth place [34]. Indonesia, the fourth most significant social media user, does not contribute many articles about the UGT study, so this is one of the gaps that researchers in Indonesia can explore. In conclusion, Facebook is the social media platform most often mentioned; as we know, it is the most widely used social medium around the world [35]. Meanwhile, the top five most used keywords are social media (n=130), uses and gratifications (n=68), uses and gratifications theory (n=41), Facebook (n=26), and uses and gratification theory (n=18). In the future, we can expect younger social media such as Instagram, Twitter, and TikTok to be used as alternative research objects. Finally, as stated by Zyoud et al., bibliometric analysis is proven to be an excellent tool to map published literature on a particular subject and, simultaneously, reveal research gaps in a certain topic [36]. In addition, future research is recommended on the themes of fake news and its spread (because one of the uses of media is related to news and fake news), entertainment satisfaction, and addiction and problematic social media use by youth.
Fig. 2. Three-field plot correlations related to Uses and Gratification Theory on Social Media.
An unusual stratospheric ozone decrease linked to isentropic air-mass transport as observed over Irene (25.5° S, 28.1° E) in mid-May 2002
Abstract
A prominent ozone minimum of less than 240 Dobson Units (DU) was observed over Irene (25.5° S, 28.1° E) by the Total Ozone Mapping Spectrometer (TOMS) during May 2002, with an extremely low ozone value of less than 219 DU recorded on 12 May, as compared to a climatological mean of 249 DU for May between 1999 and 2005. In this study, the vertical structure of this ozone minimum is examined using ozonesonde measurements performed over Irene on 15 May 2002, when the total ozone (as given by TOMS) was about 226 DU. Indeed, it is found that the ozone minimum is of Antarctic polar origin, with a low-ozone layer in the middle stratosphere above 625 K, and of tropical origin, with a low-ozone layer between the 400-K and 450-K isentropic levels in the lower stratosphere. The upper and lower depleted parts of the ozonesonde profile for 15 May are respectively attributed to equatorward and poleward transport of low-ozone air toward the subtropics. The tropical air moving over Irene and the polar air passing over the same area, associated with enhanced planetary-wave activity, are simulated successfully using a high-resolution contour advection model (MIMOSA) of Potential Vorticity. Indeed, in mid-May 2002, MIMOSA maps show a polar vortex filament in the middle stratosphere above the 625-K isentropic level, and they also show tropical air-masses moving southward (over Irene) in the lower stratosphere between the 400-K and 450-K isentropic levels. The winter stratospheric wave driving and its associated localized isentropic mixing leading to the ozone minimum are investigated by means of two diagnostic tools: the Eliassen-Palm flux and the effective diffusivity
computed from the European Center for Medium-range Weather Forecasts (ECMWF) fields. The unusual distribution of ozone over Irene during May 2002 in the middle stratosphere is closely connected to the anomalously pre-conditioned structure of the polar vortex at that time of the year. Indeed, the perturbed vortex was typically predisposed to easy erosion by dynamical transport processes, which were driven by strong planetary wave activity and eventually resulted in a very large latitudinal advection of polar air masses towards the subtropics. The exceptional presence of polar vortex
Introduction
Tropical ozone is a prominent actor in atmospheric chemistry and physics, and the tropical Southern Hemisphere latitudes are among the best locations for detecting a possible recovery of the ozone layer. However, tropical ozone studies that rely only on satellite measurements can neither resolve fine vertical-scale structures nor completely inform our understanding of the photochemical and dynamical processes that operate in the atmosphere and contribute to the ozone budget. The sparseness of in-situ measurements in the tropical and subtropical Southern Hemisphere has limited investigations of ozone distribution and variability related to atmospheric dynamics and climate, e.g., the meridional transport, the varying position of the Intertropical Convergence Zone (ITCZ), the Quasi-Biennial Oscillation (QBO), the El Niño-Southern Oscillation (ENSO), and La Niña. Against this background, the Southern Hemisphere Additional Ozonesondes (SHADOZ) project was initiated in 1998 to increase ozonesonde launches at tropical and subtropical latitudes. Irene (25.5° S, 28.1° E) in South Africa became part of the SHADOZ network in October 1998 and ozonesonde launches have continued on a bimonthly basis up to the present day. Situated in the subtropical region, Irene represents a location of major interest for the observation of low- and high-latitude influences attributed to transport processes. Thompson et al. (2003a, b) used 1100 SHADOZ radiosondes from 10 southern tropical and subtropical sites during the 1998-2000 period to characterize the seasonality and variability in ozone. They showed that the total amount of ozone is generally low in the tropics in winter. Their data also show higher stratospheric ozone at Irene due to a greater frequency of mid-latitude air passing over the site. In their classification of tropospheric ozone, Diab et al. (2003) showed that source regions over continental central Africa and long-distance transport are responsible for the mid-tropospheric peak in summer and the low-tropospheric enhancement in spring. In their climatology of tropospheric ozone based on ozonesonde measurements over Irene, Diab et al. (2004) noted that the seasonal features over Irene are modulated by both tropical and mid-latitude influences because of its location on the boundary of zonally-defined meteorological regimes. Bencherif et al. (2003) used lidar aerosol data measured over Durban (29.9° S, 31.0° E, South Africa) during the period 21 April to 14 June 1999 to stress the importance of horizontal transport of air masses from the tropics towards the subtropics and mid-latitudes across the southern subtropical barrier in the lower stratosphere. In their case study of 12 July 2000, using ozone soundings performed from Reunion Island (20.8° S, 55.5° E), Portafaix et al.
(2003) reported a strong isentropic exchange between the mid-latitudes and the tropical stratosphere. Brinksma et al. (1998) presented an analysis of low ozone values during the 1997 winter in the mid-latitudes (New Zealand), which they attributed to northward and southward meridional transport. Logan et al. (2003) presented a full analysis of the Quasi-Biennial Oscillation (QBO) in tropical ozone using SHADOZ measurements, supplemented by satellite profile and column data derived from SAGE II and Total Ozone Mapping Spectrometer (TOMS) measurements. Using a middle atmosphere circulation model, Horinouchi et al. (2000) showed that the transport between the tropics and the extratropics is strongly dependent on altitude and has geographic preferences in the lower stratosphere, with the existence of privileged lateral routes in the Northern Hemisphere during winter. All these previous studies dealt explicitly with the influence of horizontal exchange between the mid- and high-latitudes on one side, and the tropics and subtropics on the other, on the distribution and variability of stratospheric ozone.
In the present paper, a high-resolution contour advection model (MIMOSA) of Potential Vorticity and dynamical diagnostic tools are used to investigate the basic dynamics behind an unusual ozone decrease observed over Irene. The dilation of the polar vortex edge, which is due to linear propagation of Rossby waves, leads to irreversible nonlinear mixing at lower latitudes in the surf zone, where the horizontal gradients in Ertel Potential Vorticity are small (Teitelbaum et al., 1998). An unusually weak polar vortex was an exceptional feature of the entire winter of 2002, which pre-conditioned it for a progressive dilation. This was associated with distinctive persistent stratospheric vacillations starting in the early winter of 2002 (Scaife et al., 2005). This behavior of the wintertime polar vortex is considered to have been the main prerequisite for the development of the first major sudden stratospheric warming recorded in September 2002 (Journal of Atmospheric Sciences, Special Issue, Vol. 62, No. 3). This major warming, the first observed over the Southern Hemisphere since the discovery of such warmings by Scherhag (1952), induced a splitting of the Antarctic ozone hole into two parts. Many studies have focused on the major sudden stratospheric warming of September 2002 and on the separation of the ozone hole into two pieces (Varotsos, 2002, 2003a, b, 2004; Allen et al., 2003; Hio and Yoden, 2005; Krüger et al., 2005; Manney et al., 2005; Newman and Nash, 2005; Roscoe et al., 2005). However, the early-winter pre-conditioning anomalies and their impact on the subtropics have been little documented and studied.
The present paper reports on an unusual event characterized by an ozone decrease in mid-May 2002 over Irene, a subtropical site, in connection with an increase in planetary-wave activity, polar vortex filament excursions up to subtropical latitudes in the middle stratosphere, and the presence of tropical air-masses in the lower stratosphere over the same area. The data and analytical tools used in this study are described in Sect. 2. In Sect. 3, we characterize the May 2002 ozone anomaly by the use of 7 years of TOMS and ozonesonde data. The dynamical processes are investigated in Sect. 4. Conclusions are presented in Sect. 5.
Data and analysis
In this section, a brief overview of the ozone and meteorological data is given, along with the diagnostic tools used for the analyses.
Ozone data
The 1998-2005 ozone profiles for Irene (25.5° S, 28.1° E) were obtained from ozonesonde soundings performed in the framework of the SHADOZ project. The diagnostic tools used in this paper, namely the Ertel Potential Vorticity (Epv), the Eliassen-Palm (E-P) flux, and the effective diffusivity, were computed from the ECMWF reanalyses. In fact, the Potential Vorticity (PV) on isentropic surfaces behaves as a dynamical tracer in the absence of diabatic effects and is well adapted to the study of isentropic transport across dynamical barriers, such as the polar vortices or subtropical barriers (Hoskins et al., 1985; Holton et al., 1995; Bencherif et al., 2003). In addition, the E-P flux and the effective diffusivity have been used as diagnostic tools in a number of studies of atmospheric data and in numerical models of specific dynamical phenomena. Indeed, on one hand, the E-P flux vector and its divergence give a clear picture of planetary-wave propagation from the troposphere into the stratosphere and mesosphere (Eliassen and Palm, 1961; Andrews et al., 1983, 1987; Kanzawa et al., 1984). On the other hand, the effective diffusivity Keff, presented as a function of equivalent latitude, is a powerful and elegant tool for characterizing large-scale isentropic mixing (Nakamura, 1996; Haynes and Shuckburgh, 2000a, b; Allen and Nakamura, 2001, 2003; Tan et al., 2004; Morel et al., 2005). Keff provides a measure of the mixing properties of a flow: it shows low values near dynamical barriers (vortices and subtropical barriers), while high values of Keff are associated with strong mixing.
The MIMOSA advection transport model
In order to investigate the contribution of the horizontal transport mechanism to the vertical distribution of ozone over the subtropics, we have used the MIMOSA (Modélisation Isentrope du transport Méso-échelle de l'Ozone Stratosphérique par Advection) high-resolution contour advection model. The MIMOSA advection model of PV was developed at the Service d'Aéronomie by Hauchecorne et al. (2002). The model runs on an orthogonal grid covering the whole Southern Hemisphere with a resolution of 3 grid points per degree. Epv at each grid point is advected using ECMWF winds, and the advected fields are re-interpolated onto the original grid every 6 h (Morel et al., 2005).
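For reference, the two key diagnostics introduced above have compact standard definitions. The forms below follow the usual conventions of the cited literature (e.g., Hoskins et al., 1985; Nakamura, 1996; Haynes and Shuckburgh, 2000a) and are given for orientation only; they do not reproduce the exact ECMWF processing chain used in this study:

P = \frac{1}{\rho}\, \boldsymbol{\zeta}_a \cdot \nabla\theta , \qquad
\kappa_{\mathrm{eff}}(\phi_e, t) = \kappa\, \frac{L_{\mathrm{eq}}^2(\phi_e, t)}{L_{\mathrm{min}}^2(\phi_e)} , \qquad
L_{\mathrm{min}} = 2\pi a \cos\phi_e ,

where P is the Ertel Potential Vorticity (with \rho the air density, \boldsymbol{\zeta}_a the absolute vorticity vector, and \theta the potential temperature), \kappa is a background diffusivity, L_{\mathrm{eq}} is the equivalent length of a tracer contour at equivalent latitude \phi_e, and a is the Earth's radius. Values of \kappa_{\mathrm{eff}} close to \kappa mark transport barriers such as the vortex edge and the subtropical barrier, while large values indicate strong stirring.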
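To make the advection step concrete, the sketch below advances a tracer (standing in for PV) with a simple semi-Lagrangian scheme on a doubly periodic grid: each grid point is traced backward along the wind for one time step and the field is interpolated at the departure point. It is an idealized toy under stated assumptions (uniform wind, flat periodic geometry, bilinear interpolation) and not the actual MIMOSA code or its hemispheric grid:

```python
# Minimal semi-Lagrangian advection of a tracer (e.g., PV) on a doubly
# periodic grid. An idealized sketch of the kind of scheme MIMOSA uses,
# not the model's actual implementation.
import numpy as np
from scipy.ndimage import map_coordinates

def advect(q, u, v, dt, dx, dy):
    """One step: trace back along (u, v), interpolate q at departure points."""
    ny, nx = q.shape
    jj, ii = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    dep_i = (ii - u * dt / dx) % nx   # departure column (grid-index units)
    dep_j = (jj - v * dt / dy) % ny   # departure row
    return map_coordinates(q, [dep_j, dep_i], order=1, mode="grid-wrap")

ny, nx = 90, 180
q = np.zeros((ny, nx)); q[40:50, 80:100] = 1.0   # initial tracer anomaly
u = np.full((ny, nx), 10.0)                      # uniform zonal wind (m/s)
v = np.zeros((ny, nx))
for _ in range(24):                              # twenty-four 6-hourly steps
    q = advect(q, u, v, dt=21600.0, dx=1.0e5, dy=1.0e5)
print("tracer mass ratio after advection: %.3f" % (q.sum() / 200.0))
```

In a full model the advected field would also be re-interpolated onto the regular grid at fixed intervals, as MIMOSA does every 6 h, to keep the growing filaments resolvable.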
Ozone observations over Irene in May 2002
This section is designed to characterize an extreme ozone event observed in the stratosphere over Irene during May 2002. The specific date of 15 May for our study was chosen by matching particularly low ozone events identified from Earth Probe TOMS records with a coinciding ozonesonde flight over Irene. Daily total ozone values derived from TOMS records for the years between 1999 and 2005 are depicted for the month of May in Fig. 1. As shown by the dashed horizontal line on the figure, the monthly averaged total ozone over the Irene location is 249±12 DU (at 1σ). The absolute minimum total ozone (219 DU), which is about 30 DU less than the May climatological mean, was obtained on 12 May 2002. As for the coincident day of the ozonesonde flight (15 May), the corresponding total ozone (226 DU) is also significantly less than the climatological mean. In fact, one notices that the negative anomaly of total ozone persists for more than a week (see Fig. 1).
The mid-May vertical distribution of stratospheric ozone over Irene as obtained from ozonesonde measurements is illustrated in Fig. 2. It shows the ozone concentration profile (solid line) recorded on 15 May, together with the monthly mean profile (dashed line). The latter is obtained similarly to the TOMS total ozone mean, i.e., by averaging together all the May ozone profiles (over 16 ozonesondes flown fortnightly from 1999 to 2005). The ozone profile recorded on 15 May 2002 (Fig. 2, solid line) shows strong negative deviations in comparison with the 7-year (1999-2005) mean profile for May (dashed line) between 400 K and 450 K in the lower stratosphere and above the 625-K potential temperature level in the middle stratosphere. This suggests that the total ozone decrease reported from TOMS data in the early winter of 2002 (mid-May) and depicted in Fig. 1 may be related to the very low concentrations of ozone at isentropic levels between 400 K and 450 K and at those greater than 625 K (Fig. 2).
Isentropic transport and the mid-May 2002 ozone minimum
The aim of this section is to investigate the role of isentropic transport of tropical and polar air masses, in conjunction with an increase in planetary-wave activity and the induced isentropic mixing, in the extreme ozone reduction event observed in early winter 2002 in the subtropics (Irene). In order to investigate tropical and polar air-mass transport toward the subtropics, high-resolution PV maps on selected isentropic surfaces were constructed for 4-15 May using the MIMOSA advection model. Figure 3 shows snapshots of PV advected by MIMOSA (APV) on the 440-K (lower stratosphere) and the 675-K (middle stratosphere) isentropic surfaces for selected days prior to and during the ozone minimum event. Tropical and polar air-masses can be identified by low and high absolute APV values, respectively. On each APV map, the location of Irene is indicated by a black spot. On 4 and 8 May (upper APV maps in panel (b) of Fig. 3), at the 675-K isentropic surface, Irene is covered by air-masses of relatively low absolute APV values, while on 12 and 15 May a tongue with high absolute APV, indicating air of polar origin, is deformed and shifted away from the pole toward the subtropics. It extends over the 15-120° E longitude and 20-40° S latitude range, covering a large area over the southern part of Africa, including Irene. In parallel, on 4 and 8 May (upper APV maps in panel (a) of Fig. 3), at the 440-K isentropic surface, Irene is covered by air-masses of relatively high absolute APV values, while on 12 and 15 May a tongue with low absolute APV, indicating air of tropical origin, has moved eastward and southward toward the subtropics.
Isentropic transport and the mid-May 2002 ozone minimum

The aim of this section is to investigate the role of isentropic transport of tropical and polar air masses, in conjunction with an increase in planetary-wave activity and the induced isentropic mixing, in the extreme ozone reduction event observed in early winter 2002 in the subtropics (Irene).

In order to investigate tropical and polar air-mass transports toward the subtropics, high-resolution PV maps on selected isentropic surfaces were constructed for 4-15 May using the MIMOSA advection model. Figure 3 shows snapshots of PV advected by MIMOSA (APV) on the 440-K (lower stratosphere) and 675-K (middle stratosphere) isentropic surfaces for selected days prior to and during the ozone minimum event. Tropical and polar air masses can be identified by low and high absolute APV values, respectively. On each APV map, the location of Irene is indicated by a black spot. On 4 and 8 May (upper APV maps on plate (b) of Fig. 3), at the 675-K isentropic surface, Irene is covered by air masses of relatively low absolute APV values, while on 12 and 15 May a tongue with high absolute APV, indicating air of polar origin, is deformed and shifted away from the pole toward the subtropics. It extends over the 15-120° E longitude and 20-40° S latitude range, covering a large area over the southern part of Africa, including Irene. In parallel, on 4 and 8 May (upper APV maps on plate (a) of Fig. 3), at the 440-K isentropic surface, Irene is covered by air masses of relatively high absolute APV values, while on 12 and 15 May a tongue with low absolute APV, indicating air of tropical origin, has moved eastward and southward toward the subtropics.

Nearly the same transport situations are obtained from MIMOSA outputs for selected isentropic surfaces in the 625-800 K (middle stratosphere) and 400-450 K (lower stratosphere) ranges (not shown). This is in agreement with the vertical extension of the negative deviation observed on the ozone concentration profile recorded on 15 May for θ-levels higher than 625 K and those between 400 K and 450 K (see Fig. 2).

Because of polar vortex disturbances, the MIMOSA analyses show how polar air masses were injected into mid-latitude regions and sporadically into the subtropics. Moreover, this large latitudinal extension (from pole to subtropics) occurs simultaneously, in the reverse direction, with isentropic transport of tropical air masses toward the mid-latitudes in the lower stratosphere. This episode of horizontal exchange between the tropical stratospheric reservoir and the mid-latitudes is again well identified on MIMOSA APV maps during the period 10-18 May (not shown). This is in agreement with the time extension of the minimum of total ozone derived from TOMS measurements (see Fig. 1). Thus, the unusual reduction of total ozone observed over Irene by mid-May 2002 seems to be related to isentropic transport of air masses simultaneously in the lower and upper stratosphere, from the tropics to the mid-latitudes and from the pole to the subtropics, respectively.

It is a particularly interesting situation. In the lower stratosphere (400-450 K), the ozone profile (Fig. 2) shows a tropical influence: ozone concentrations there are significantly below climatological values and similar to tropical values (Portafaix et al., 2003). As for the low concentrations of ozone in the upper part of the profile (above 625 K), they can be attributed to air-mass advection from the pole to the tropics, owing to the fact that there is less ozone in the polar region.

Wave activity and isentropic mixing

A perturbed polar vortex is associated with enhanced planetary-wave activity, which contributes to pulling material out of the vortex and distributing filaments equatorward (Schoeberl et al., 1988, 1992). Moreover, in the stratosphere near a subtropical barrier, isentropic mixing has been linked to disturbances occurring in the vicinity of the polar vortex in the winter hemisphere (Waugh, 1993). The transport of polar air toward low latitudes occurs in the form of a polar filament. This transport can have different effects depending on whether it is reversible or irreversible. If reversible, the effect is to perturb the ozone content at low latitudes for a limited period of time. If irreversible, polar air is mixed with the surrounding air.

The rapid and irreversible deformation of Epv contours on the 675-K isentropic surface observed in plate (b) of Fig. 3 suggests planetary-wave breaking, resulting in quasi-horizontal mixing and irreversible tracer transport (McIntyre and Palmer, 1983, 1984).

Figure 4 shows E-P cross-sections computed using ECMWF fields, with arrows representing the E-P flux vectors and contours representing values of the wave driving, averaged over the periods 3-8 May (Fig. 4a) and 11-16 May (Fig. 4b) and for the selected date of 15 May 2002 (Fig. 4c).
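As a rough illustration of how such cross-sections can be obtained, the sketch below computes quasi-geostrophic E-P flux components from gridded reanalysis fields. It follows the standard textbook formulas (e.g. Andrews et al., 1987) rather than the exact procedure used here, and the array layout and reference density profile are assumptions:

```python
import numpy as np

A = 6.371e6        # Earth radius (m)
OMEGA = 7.292e-5   # Earth rotation rate (s^-1)
H = 7000.0         # log-pressure scale height (m)

def qg_ep_flux(u, v, theta, lat_deg, p_pa):
    """Quasi-geostrophic E-P flux from (level, lat, lon) fields:
    F_phi = -rho0 a cos(phi) <u'v'>, F_z = rho0 a cos(phi) f <v'th'>/th_z.
    Convergence of F (negative divergence) marks the wave driving."""
    phi = np.deg2rad(lat_deg)[None, :]
    f = 2.0 * OMEGA * np.sin(phi)
    z = -H * np.log(p_pa / 1.0e5)            # log-pressure height
    rho0 = 1.2 * (p_pa / 1.0e5)[:, None]     # rough reference density (kg/m^3)
    # Eddies: deviations from the zonal mean (last axis = longitude)
    up = u - u.mean(axis=-1, keepdims=True)
    vp = v - v.mean(axis=-1, keepdims=True)
    tp = theta - theta.mean(axis=-1, keepdims=True)
    uv = (up * vp).mean(axis=-1)
    vth = (vp * tp).mean(axis=-1)
    theta_z = np.gradient(theta.mean(axis=-1), z, axis=0)
    f_phi = -rho0 * A * np.cos(phi) * uv
    f_z = rho0 * A * np.cos(phi) * f * vth / theta_z
    return f_phi, f_z
```

Dividing the flux divergence by rho0 a cos(phi) and converting to m s⁻¹ per day would then yield the wave-driving contours shown in Fig. 4.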
Planetary-wave breaking is identified by the convergence of the E-P flux (i.e. negative wave driving in Fig. 4). The comparison between Figs. 4a and 4b shows an increase in stratospheric wave activity during the period from 11 to 16 May. In fact, strong upward wave propagation is located over high latitudes during that period (Fig. 4b). The E-P flux vectors bend equatorward with height and generate a large region of convergence over the subtropics in the stratosphere, where the wave driving reaches a minimum lower than −6 m s⁻¹ per day. The wave activity is particularly strong on 15 May 2002 (Fig. 4c) and is associated with greater wave penetration and enhanced wave driving in the subtropical middle stratosphere, where the wave driving reaches a minimum lower than −10 m s⁻¹ per day. This analysis demonstrates that planetary-wave activity had significantly increased by early winter 2002, during the mid-May period, with upward and equatorward planetary-wave trajectories.

Figure 5 shows values of the effective diffusivity K_eff, calculated as described by Allen and Nakamura (2001, 2003) and by Morel et al. (2005). The state of mixing on the 700-K isentrope for the period April-May 2002 is summarised in Fig. 5, showing the time evolution of K_eff as a function of equivalent latitude. It shows a region of large K_eff between 10 and 20 May 2002 in the 20-30° S area. From the contours superimposed on Fig. 5, illustrating ECMWF zonal winds as a function of equivalent latitude, it can be seen that the southern stratospheric zonal circulation changed from easterlies to westerlies early, allowing the planetary waves to spread and bend equatorward near the subtropics (as shown by the E-P flux in Fig. 4) and contributing to the increase in mixing. Indeed, one notices that mixing (K_eff) has increased in the 20-30° S equivalent latitude range by mid-May, as underlined by the dotted circle. The polar vortex in the early winter of 2002 was unusually disturbed, so that enhanced planetary-wave activity easily eroded it into filaments. This gave rise to large-scale transport of polar air toward the subtropics and contributed largely to the development of the low ozone episode over Irene in mid-May 2002.
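The equivalent-latitude coordinate used on the axes of Fig. 5 can be computed directly from a tracer field such as APV. The sketch below shows only the area-to-latitude mapping for the Southern Hemisphere, not the full effective-diffusivity calculation of Nakamura (1996); the contour sampling is a simplification:

```python
import numpy as np

A_EARTH = 6.371e6  # Earth radius (m)

def equivalent_latitude(tracer, lats, lons, n_levels=60):
    """For each tracer contour value q, find the latitude of the zonal
    circle enclosing the same area as the region where |tracer| >= q.
    tracer is a (nlat, nlon) field on a regular grid; Southern Hemisphere
    convention (area counted from the South Pole)."""
    dlat = np.deg2rad(abs(lats[1] - lats[0]))
    dlon = np.deg2rad(abs(lons[1] - lons[0]))
    # Area of each grid cell on the sphere
    cell = (A_EARTH ** 2) * dlat * dlon * np.cos(np.deg2rad(lats))[:, None]
    cell = np.broadcast_to(cell, tracer.shape)
    field = np.abs(tracer)
    qs = np.linspace(field.min(), field.max(), n_levels)
    area = np.array([cell[field >= q].sum() for q in qs])
    # Spherical cap poleward of phi_e: A = 2*pi*a^2*(1 + sin(phi_e))
    sin_phi = area / (2.0 * np.pi * A_EARTH ** 2) - 1.0
    return qs, np.rad2deg(np.arcsin(np.clip(sin_phi, -1.0, 1.0)))
```

Because the mapping is monotonic in the enclosed area, mixing diagnostics expressed in this coordinate are insensitive to reversible displacements of the tracer contours, which is what makes equivalent latitude the natural axis for Fig. 5.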
Discussion and conclusion

In this paper we investigated the ozonesonde dataset obtained at Irene, a South African subtropical site, as part of the SHADOZ programme. The retrieved ozone concentration profiles were supplemented by daily TOMS total ozone columns derived for the same location and covering the same period, i.e., November 1998-May 2005.

A prominent ozone minimum was recorded in mid-May 2002 in both the TOMS and ozonesonde datasets. The combination of these datasets suggests that the most significant contribution to the total ozone reduction may be explained by the low ozone concentrations obtained at isentropic surfaces higher than 625 K in the middle stratosphere and at those between 400 K and 450 K in the lower stratosphere. The absolute minimum of total O3 (219 DU) was 30 DU (about 12%) less than the May mean value.

It was found from MIMOSA advected PV maps that the observed ozone reduction over the subtropics (Irene) could be attributed to a transport of tropical and polar air masses. From the planetary-wave trajectories illustrated by the E-P flux in Fig. 4, the large-scale transport of polar air masses was driven by an unusual increase of planetary-wave activity due to the early reversal of the zonal circulation, followed by an increase of mixing near the subtropics (Fig. 5).

The present study demonstrated that the early-winter dynamics of 2002 were directly responsible for the unusual ozone reduction over the subtropics, through large-scale transport and mixing of tropical and polar air masses. Other extreme ozone minima over the Northern and Southern Hemispheres have also been shown to have dynamical origins (Brinksma et al., 1998; Hood et al., 2001; Grainger and Cordero, 2002; Semane et al., 2002; Hood and Soukharev, 2005). Recently, the Journal of the Atmospheric Sciences devoted a special issue to "the Antarctic winter and sudden warming 2002". Many studies have focused on the major sudden stratospheric warming of September 2002 and on the split of the ozone hole into two pieces (Varotsos, 2002, 2003a, b, 2004; Allen et al., 2003; Hio and Yoden, 2005; Krüger et al., 2005; Manney et al., 2005; Newman and Nash, 2005; Roscoe et al., 2005). Nevertheless, the early-winter pre-conditioning anomalies and their impact on the subtropics have been little documented and studied.

In fact, the early-winter 2002 ozone minimum and its large extension up to the subtropics represent an anomaly. It is closely connected to the unprecedented state of the southern polar vortex disturbances recorded during May 2002, as reported by Newman and Nash (2005). Usually, the winter circulation at high southern latitudes is characterized by low planetary-wave activity and a strong polar vortex which is somewhat isolated from mid-latitudes (Cordero and Grainger, 1997).

To summarize, an 8-12% decrease in the total ozone column, concomitant with low ozone concentrations in the middle stratosphere at isentropic levels above 625 K and in the lower stratosphere (400-450 K), observed over Irene in mid-May 2002, can be attributed respectively to ozone-poor air originating from the polar vortex and to ozone-poor air coming from the tropics. This resulted in the lowest ozone column recorded during the 7-year (1998-2005) period. MIMOSA advected PV maps representing the early-winter 2002 period in the middle stratosphere highlighted an unusually high planetary-wave activity and a disturbed polar vortex, with filament excursions and strong mixing up to the subtropics. In parallel, the MIMOSA model successfully simulated the transport of tropical ozone-poor air toward the subtropics in the lower stratosphere. The low ozone event observed over the subtropics during May 2002 can be considered as the first sign of the particular polar vortex disturbances which, after being well reinforced, contributed to the unprecedented behavior of the Antarctic spring ozone hole observed during September 2002.
Acknowledgements. The ozone data used in this study were provided by the Southern Hemisphere Additional Ozonesondes (SHADOZ) project (http://croc.gsfc.nasa.gov/shadoz). The Laboratoire de Physique de l'Atmosphère (LPA) is supported by the French Centre National de la Recherche Scientifique (CNRS), the Institut National des Sciences de l'Univers (INSU), the Conseil Régional de La Réunion and the European Community (FEDER). The present study is part of the 2005 French PNCA programme (Programme National de Chimie Atmosphérique).
Epidemiology of Eating Disorders: Incidence, Prevalence and Mortality Rates

Eating disorders are relatively rare among the general population. This review discusses the literature on the incidence, prevalence and mortality rates of eating disorders. We searched the online Medline/PubMed, Embase and PsycINFO databases for articles published in English, using several key terms relating to eating disorders and epidemiology. Anorexia nervosa is relatively common among young women. While the overall incidence rate remained stable over the past decades, there has been an increase in the high-risk group of 15-19 year old girls. It is unclear whether this reflects earlier detection of anorexia nervosa cases or an earlier age at onset. The occurrence of bulimia nervosa might have decreased since the early nineties of the last century. All eating disorders have an elevated mortality risk; anorexia nervosa the most striking. Compared with the other eating disorders, binge eating disorder is more common among males and older individuals.

Introduction

Epidemiological studies provide information about the occurrence of disorders and trends in the frequency of disorders over time. For epidemiological studies on eating disorders there are some methodological issues. Eating disorders are relatively rare among the general population, and patients tend to deny or conceal their illness and avoid professional help. This makes community studies costly and ineffective. Therefore, many epidemiological studies use psychiatric case registers or medical records from hospitals in a circumscribed area. This type of study will underestimate the occurrence of eating disorders in the general population, because not all patients will be detected by their general practitioner or referred to the hospital or mental health care. Furthermore, differences in rates over time could be due to improved case detection, increased public awareness leading to earlier detection, and wider availability of treatment services, instead of a true increase in occurrence [1,2].

Anorexia nervosa (AN) and bulimia nervosa (BN) are the two specified eating disorders according to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV). However, the most common eating disorder diagnosis in both clinical and community samples is the rest category 'eating disorder not otherwise specified' (EDNOS) [3-7]. EDNOS is a heterogeneous, not well defined group of eating disorders and includes partial syndromes of AN and BN, purging disorder and binge eating disorder (BED). A comprehensive meta-analysis of 125 studies suggests that EDNOS is associated with substantial psychological and physiological morbidity, comparable with the specified eating disorders [8].

In 2013 the fifth edition of the DSM is scheduled to appear, including a thoroughly revised eating disorder section. A major goal is to reduce the size of the EDNOS category. To achieve this goal, the criteria for AN and BN will be broadened [9,10] and BED will be added as a specific eating disorder. The decision to make BED a separate diagnosis is partly informed by epidemiological data supporting the construct validity of BED. BED differs from AN and BN in terms of age at onset, gender and racial distribution, psychiatric comorbidity and association with obesity. BED is often seen in obese individuals, but is distinct from obesity per se regarding levels of psychopathology, weight and shape concerns, and quality of life [11].
BED aggregates strongly in families independently of obesity, which may reflect genetic influences [12,13].

In this review we will describe the epidemiology of AN, BN, EDNOS and BED according to DSM-IV and, if available, to the proposed DSM-5 criteria. The proposed changes in DSM-5 diagnostic criteria will alter the coverage of the diagnostic categories and thus their disease frequencies as well. Some studies used both a narrow and a broad or partial definition of AN, including DSM-IV AN with or without amenorrhea and ICD-10 atypical AN [14-16]. These broad or partial definitions of AN are in line with the proposed DSM-5 criteria for AN and will be referred to as 'broad AN' throughout this review [9]. In a Finnish study of female twins, the 5-year clinical recovery rates of AN and broad AN were almost the same, i.e. 66.8 % and 69.1 % respectively, providing evidence for the validity of broad AN [14]. Definitions of each epidemiological measure are provided in the respective paragraphs. This article is based on research publications on the epidemiology of eating disorders and updates our previous reviews, with special emphasis on studies published in the last three years [2,17-19].

Method

We searched the online Medline/PubMed, Embase and PsycINFO databases using several key terms relating to eating disorders and epidemiology. The reference lists of the articles found were checked for any additional articles missed by the database search. This review is limited to articles published in English describing the basic epidemiological parameters: incidence, prevalence and mortality rates.

Incidence

The incidence rate is the number of new cases of a disorder in the population over a specified period. The incidence rate of eating disorders is commonly expressed per 100 000 persons per year (person-years). The study of new cases provides clues to etiology.

Anorexia Nervosa

Community studies assessing the incidence of eating disorders are scarce. Keski-Rahkonen and colleagues conducted a large community study to quantify the incidence of AN, yielding an incidence rate of 270 per 100 000 person-years in 15-19 year old Finnish female twins during 1990-1998 [14,19]. The incidence rate of broad AN was 490 per 100 000 person-years in the same group [14]. A much higher incidence rate of 1204 per 100 000 person-years (95 % confidence interval (CI): 652-2181) for broad AN in females aged 15-18 was found in another Finnish study of a relatively small sample of 595 adolescents [20]. The high incidence rate might be explained by the small sample size limiting statistical power and by the very broad definition of AN used in this study, which included subjects with an age-adjusted body mass index (BMI) up to 19, without explicitly stating that weight loss of at least 15 % had to be present.

Community rates are much higher than incidence rates derived from primary care and medical records [1,21], reflecting the selection filters that form the pathway to (psychiatric) care [22]. Incidence rates derived from general practices represent eating disorders at the earliest stage of detection by the health care system. Currin and colleagues [23] searched the General Practice Research Database in the UK for new cases of AN between 1994 and 2000 and compared their data with the findings of a similar study for 1988-1993 [24].
The age-adjusted and sex-adjusted incidence rate of AN remained stable over the two study periods: in 2000 it was 4.7 (95 % CI: 3.6-5.8) per 100 000 person-years, compared with 4.2 (95 % CI: 3.4-5.0) per 100 000 person-years in 1993. In the Netherlands, the overall incidence rate of AN ascertained by general practitioners in a large representative sample of the Dutch population remained stable as well: in 1995-1999 it was 7.7 (95 % CI: 5.9-10.0) per 100 000 person-years, practically the same as the rate of 7.4 per 100 000 person-years in 1985-1989 [1]. Incidence rates are highest for females aged 15-19 years. They constitute approximately 40 % of all cases, resulting in an incidence rate of 109.2 per 100 000 15-19 year old girls per year in 1995-1999 [1]. The incidence of AN among males was less than 1 per 100 000 person-years in general practices in the Netherlands and the UK [1,23].

AN does occur among children <13 years of age, but is relatively rare [1,23]. Three studies used a national Paediatric Surveillance System to identify new cases of early-onset eating disorders presenting to pediatricians [25•, 26•, 27]. In Canada, the incidence rate of early-onset restrictive eating disorders diagnosed by pediatricians was 2.6 (95 % CI: 2.1-3.2) per 100 000 person-years in children aged 5 to 12 years [25•]; in Australia it was 1.4 (95 % CI: 1.1-1.7) per 100 000 person-years in 5-13 year old children [27]. In the Canadian study, 62 % of new restrictive eating disorder cases met criteria for AN [25•]. Of the Australian pediatric inpatients with a newly diagnosed restrictive eating disorder, only 37 % could be classified as AN, although 61 % had life-threatening complications of malnutrition [27]. In British pediatric and psychiatric care, an overall incidence rate of 1.1 per 100 000 person-years for AN was found among children <13 years of age [26•].

Among middle-aged and elderly women AN is relatively rare as well [28-30]. In a Spanish population-based study using the Public Health Registry to identify eating disorder cases diagnosed by mental health professionals, new cases of AN were found among women over 45 years of age, constituting 64 % of all new eating disorder diagnoses in this age group [31]. It is unknown whether this reflects late detection or a late age at onset.

The question of whether the incidence of AN is on the rise has been under debate. Long-term epidemiological studies are sensitive to minor changes in the absolute incidence numbers and in the methods used, for example, variations in registration policy, demographic differences between the populations, faulty inclusion of readmissions, the specific methods of detection used, or the availability of services [18,32]. In a meta-analysis of the incidence of AN in mental health care, various studies in northern Europe were combined (see Fig. 1) [1,18,33,34]: until the 1970s, there was an increase of the registered incidence of AN in Europe; since 1970, the incidence of AN in Europe seems to have been rather stable.

Fig. 1 Registered yearly incidence of anorexia nervosa. Adapted from Hoek [18]

In Switzerland, the incidence of severe AN in females was studied in a geographically defined region using the same methodology from 1956 to 1995. The incidence of severe AN requiring hospital admission rose significantly during the 1960s and 1970s and reached a plateau of around 1.2 per 100 000 person-years thereafter [21]. In the Netherlands, from the 1980s up to now, general practitioners have registered new cases with an eating disorder in a representative sample of the Dutch population.
While the overall incidence of AN was stable, around 7 per 100 000 person-years, the incidence in 15-19 year old girls increased significantly, from 56.4 per 100 000 person-years in 1985-1989 to 109.2 per 100 000 person-years in 1995-1999 [1]. This is in line with an Italian study examining age at onset of AN in a large sample of 1,666 patients referred to an eating disorders outpatient unit between 1985 and 2008: patients referred in more recent years had an earlier age at onset [35•]. In Rochester, MN, USA, the age-adjusted incidence rates of AN showed a significant linear increasing trend only in females aged 15-24 years from 1935 to 1989 [36].

Bulimia Nervosa

Only a few incidence studies of BN have been conducted. In the community study of the 1975-1979 birth cohorts of female Finnish twins, the incidence rate of BN was 200 per 100 000 person-years at the peak age of incidence, 16-20 years [37••]. A broader definition of BN was examined as well. When the symptom frequency criterion was relaxed to once a week, in concordance with the proposed DSM-5 criteria [10], the peak incidence rate of broad BN was 300 per 100 000 person-years in 16-20 year old females [37••]. Isomaa and colleagues found an incidence rate of 438 (95 % CI: 132-1175) per 100 000 person-years in 15-18 year old Finnish females for another broad definition of BN, including subjects who fulfilled all but one of the criteria for BN [20].

According to the nation-wide primary care study in the Netherlands, the overall incidence rate of BN tended to decrease, from 8.6 per 100 000 person-years in 1985-1989 to 6.1 per 100 000 person-years in 1995-1999 [1]. In a primary care study from the UK, the overall age- and sex-adjusted incidence rate of BN decreased during the second half of the 1990s, from 12.2 per 100 000 person-years in 1993 to 6.6 per 100 000 person-years in 2000. However, the incidence rate of BN in women aged 10-19 years remained relatively stable, around 40 per 100 000 person-years in 1993 as well as in 2000 [23].

Several studies suggest that the age at onset of BN is decreasing. In a sample of 793 Italian BN patients referred to an eating disorders outpatient unit between 1985 and 2008, subjects born in 1970-1972 had a mean age at onset of 18.5 years, compared with 17.1 years for subjects born between 1979 and 1981 [35•]. In the Dutch primary care study, the high-risk group for BN shifted from 25-29 year old females in 1985-1989 to 15-24 year old females in 1995-1999 [1]. It is unclear whether this reflects a true earlier age at onset or rather earlier detection of BN cases.

Eating Disorder Not Otherwise Specified and Binge Eating Disorder

Epidemiological studies on EDNOS are sparse because of its heterogeneity and undefined operational criteria, except for BED, for which research criteria were formulated in DSM-IV. In a Spanish population-based study using the Public Health Registry to identify eating disorder cases diagnosed by mental health professionals, the incidence rate of EDNOS was 6.5 (95 % CI: 4.8-7.9) per 100 000 inhabitants per year [31]. A British national surveillance study of newly diagnosed eating disorders in pediatric and psychiatric care found an incidence rate of 1.2 per 100 000 person-years for EDNOS among children <13 years [26•]. To our knowledge, no incidence studies on BED yet exist.
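All of the rates above share the person-years format defined at the start of this section. As a brief illustration (the function name and the exact interval method are assumptions rather than the methods of the cited studies), such a rate with an exact Poisson confidence interval can be computed as follows:

```python
from scipy.stats import chi2

def incidence_rate(cases, person_years, per=100_000, alpha=0.05):
    """Incidence rate per `per` person-years, with an exact Poisson
    confidence interval on the observed case count."""
    rate = cases / person_years * per
    lower = (chi2.ppf(alpha / 2, 2 * cases) / 2 / person_years * per
             if cases > 0 else 0.0)
    upper = chi2.ppf(1 - alpha / 2, 2 * (cases + 1)) / 2 / person_years * per
    return rate, (lower, upper)

# E.g. a register yielding 47 new cases over 1 000 000 person-years:
print(incidence_rate(47, 1_000_000))
# -> about 4.7 (95 % CI: roughly 3.5-6.3) per 100 000 person-years
```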
Binge eating as a disordered eating behavior or symptom is quite common among adolescents: in a longitudinal study of a large cohort of US adolescents, the incidence rate for binge eating was 10.1 per 1000 person-years among females and 6.6 per 1000 person-years among males (both sexes ≥ 14 years), which translates into 1010 and 660 per 100 000 person-years among female and male adolescents, respectively [38].

Prevalence

The prevalence can be expressed as the point prevalence, the one-year prevalence rate and the lifetime prevalence. The point prevalence is the prevalence at a specific point in time, e.g. January 1 of a specific year. The one-year prevalence rate is the point prevalence plus the annual incidence rate (the number of new cases in the following year). The lifetime prevalence is the proportion of people that had the disorder at any point in their life. The prevalence is the most useful measure for planning health care facilities, as it indicates the demand for care.

Case detection through a two-stage screening approach is the standard procedure to estimate the prevalence of eating disorders [2,39]. In the first stage, a large population is screened for the likelihood of an eating disorder by means of a screening questionnaire that identifies an at-risk group. In the second stage, definite cases in the at-risk group are established on the basis of a personal interview. Problems associated with this type of study are poor response rates, the sensitivity of the screening instrument and the restricted size of the groups interviewed [40]. To circumvent this last problem, several studies use a structured interview such as the Composite International Diagnostic Interview (CIDI), usually administered by lay interviewers, to assess the prevalence of eating disorders in a large population sample.

Anorexia Nervosa

The lifetime prevalence of AN and broad AN has been assessed in three large population-based cohort studies of twins [14-16]. In Sweden, it was 1.2 % (AN) and 2.4 % (broad AN) in the largest twin study of women, covering the 1935-1958 birth cohorts [16]. In an Australian study of female twins aged 28-39 years, the lifetime prevalence of AN was 1.9 % and of broad AN 4.3 % [15]. The lifetime prevalence of AN was 2.2 % and of broad AN 4.2 % in a large sample of women from the 1975-1979 birth cohorts of Finnish twins [14]. In men from the same birth cohorts the lifetime prevalence of AN was 0.24 % [41]. Stice and colleagues followed a relatively small sample of 496 adolescent girls over an 8-year period, from early adolescence into young adulthood, administering annual diagnostic interviews. They found a lifetime prevalence by age 20 years of 0.6 % for AN and 2.0 % for broad AN [42]. In Portugal, the point prevalence of AN among adolescent girls was 0.39 % and of broad AN 0.64 % [6]. In an Australian population-based sample of 1,597 14-year old boys and girls, only one case of AN was identified by means of a self-report eating disorder screening questionnaire; four other subjects met partial criteria for AN [43].

Prevalences of AN estimated with two-stage procedures varied from 0 to 0.9 %, with an average point prevalence of 0.29 % in young females [2]. In a meta-analysis [2], the one-year prevalence rate per 100 000 young females was computed at different levels of care (Table 1).
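Given the definitions above, the arithmetic connecting the prevalence measures is simple; the sketch below encodes it (function and variable names are illustrative only):

```python
def one_year_prevalence(point_prevalence, annual_incidence):
    """One-year prevalence rate = point prevalence + annual incidence,
    both expressed per 100 000 persons (e.g. young females)."""
    return point_prevalence + annual_incidence

def per_100k(percentage):
    """Convert a percentage (e.g. a 0.29 % point prevalence) to a rate
    per 100 000 persons."""
    return percentage / 100.0 * 100_000

print(per_100k(0.29))  # 0.29 % point prevalence -> 290 per 100 000
```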
Using two-stage studies of community samples and estimates of the incidence, the one-year prevalence rate of AN in the community was calculated as 370 per 100 000 young females. One can conclude from Table 1 that the majority of patients with AN in the community do not enter the mental health care system [18].

Several studies used the CIDI to estimate the lifetime prevalence of AN in large population samples [45, 46••, 47••]. Both in a nationally representative survey of the US household population [45] and in a population-based study in six European countries [46••], the lifetime prevalence of AN was 0.9 % among adult females. In the US study it was 0.3 % among males [45], while in the European study not a single male case of AN was found [46••]. In a large representative sample of US adolescents, the lifetime prevalence of AN was 0.3 % in 13-18 year old females as well as males [47••]. The female-to-male ratio in these studies is considerably lower than the 10:1 ratio found in the Finnish twin study and reported in previous reviews [2,14,41], which could be due to differences in methodology and the small numbers of cases with eating disorders [45, 47••]. However, despite this restriction, many recent community-based studies have found that AN is more common among males than previously thought. AN may be even more frequently underdetected in males than in females [19]. A large study of Swedish twins born during the period 1935-1958 documented a higher prevalence of AN in both male and female participants born after 1945 than in those born before 1945 [16].

Bulimia Nervosa

The generally accepted point prevalence of BN from two-stage studies is about 1 % among young females [2,40]. Keski-Rahkonen and colleagues found a lifetime prevalence of 1.7 % for BN in women from the 1975-1979 birth cohorts of Finnish twins [37••]. When the symptom frequency criterion was relaxed to once a week, in concordance with the proposed DSM-5 criteria, the lifetime prevalence rose to 2.3 % in women [37••]. In an Australian twin cohort of women aged 28-39 years, a lifetime prevalence of BN of 2.9 % was found [15]. According to US [45] and European [46••] large-scale two-stage studies of the population, the lifetime prevalence of BN, assessed with the CIDI, varied between 0.9 % and 1.5 % among women and between 0.1 % and 0.5 % among men. Marques and colleagues compared the prevalence of BN across nationally representative samples of ethnic groups in the US. BN was more prevalent among Latinos and African Americans than among non-Latino whites; lifetime prevalences ranged from 0.51 % (non-Latino whites) to 2.0 % (Latinos) [48]. In a recent study of a nationally representative sample of US adolescents, a lifetime prevalence of BN of 1.3 % and 0.5 % was found among 13-18 year old females and males, respectively [47••]. In a US sample of 496 adolescent females followed for 8 years, a lifetime prevalence of 1.6 % for BN was found at age 20 years [42]. An Australian population-based study of 1,597 14-year old adolescents reported 9 cases of BN, translating into a point prevalence of 0.6 % [43]. Trace and colleagues assessed the impact of reducing the required binge eating frequency on the lifetime prevalence of BN in a large population sample of female Swedish twins. The lifetime prevalence of BN increased from 1.2 % for a minimum of 8 binges per month (DSM-IV) to 1.6 % for at least 4 binges per month (proposed DSM-5) [49].
The decrease in the occurrence of BN over time found in the incidence studies is supported by a US study of university students, in which the point prevalence of BN among women decreased significantly from 4.2 % in 1982 to 1.3 % in 1992 and 1.7 % in 2002 [44]. In another US study among female students, the point prevalence of probable cases of BN remained relatively stable between 1990 and 2004 [50].

Eating Disorder Not Otherwise Specified and Binge Eating Disorder

Diagnostic interviews often used to estimate the prevalence of eating disorders, like the CIDI and the Structured Clinical Interview for DSM disorders (SCID), do not cover EDNOS. In recent studies that used the CIDI, alterations have been made to include subthreshold AN [47••] and BED [45, 46••, 47••]. Researchers have operationalized EDNOS in different ways; reported prevalences are therefore difficult to compare, and in community studies the use of limited definitions will underestimate the true prevalence of eating pathology that could be classified as EDNOS [8]. The point prevalence of EDNOS in a nation-wide community sample of young females was 2.4 % [6].

The lifetime prevalence of BED has been assessed in large population samples in the US [45, 47••] and Europe [46••]. In six European countries it was 1.9 % for women and 0.3 % for men [46••]. In the US, higher lifetime prevalences were found. The US researchers used a duration criterion of only three months instead of the six months the DSM-IV research criteria require, which might partly explain the higher percentages. Hudson and colleagues examined data from a non-clinical sample to estimate how much the prevalence of BED will increase under the proposed DSM-5 criteria, which relax the requirements for the frequency (from two binges per week to one per week) and duration of binges (from six to three months). They extrapolated their findings to the results of the aforementioned study of the US household population and estimated that the lifetime prevalence of BED would increase by an additional 0.1 %, to 3.6 % in women and 2.1 % in men [51]. In a study of a large sample of adult Swedish female twins, a relatively low lifetime prevalence of 0.17 % for BED was found, which rose to 0.35 % when DSM-5 criteria were applied [49].

Mortality

One could describe the mortality rate as an incidence rate in which the event being measured is death [52]. Mortality rates are often used as one of the indicators of illness severity. The standard measures for mortality are the crude mortality rate (CMR) and the standardized mortality ratio (SMR). The CMR is the number of deaths within the study population over a specified period. The SMR is the ratio of observed deaths in the study population to expected deaths in the population of origin [18,19,52].

Anorexia Nervosa

In a meta-analysis of excess mortality in the 1990s, anorexia nervosa was associated with the highest rate of mortality among all mental disorders [53]. In a recent meta-analysis of 35 studies [54••], AN again showed a substantially elevated mortality risk. SMRs depend on the length of follow-up: as the duration of follow-up increases, the expected mortality in the population of origin will increase as well, resulting in lower SMRs. In a meta-analysis of SMRs in 2001, the overall SMR of AN in studies with 6-12 years of follow-up was 9.6 (95 % CI: 7.8-11.5), and in studies with 20-40 years of follow-up 3.7 (95 % CI: 2.8-4.7) [55]. Age, case severity and study period influence mortality rates as well [54••].
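A small sketch of the two mortality measures defined at the start of this section; the exact Poisson interval on the observed deaths is one common (here assumed) choice for the SMR's confidence bounds:

```python
from scipy.stats import chi2

def crude_mortality_rate(deaths, n_patients):
    """CMR: proportion of the study population that died during the
    study period (often reported as a percentage)."""
    return deaths / n_patients

def smr(observed_deaths, expected_deaths, alpha=0.05):
    """SMR: observed deaths in the study cohort divided by the deaths
    expected in the population of origin, with an exact Poisson CI."""
    point = observed_deaths / expected_deaths
    lower = chi2.ppf(alpha / 2, 2 * observed_deaths) / 2 / expected_deaths
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed_deaths + 1)) / 2 / expected_deaths
    return point, (lower, upper)
```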
In a Swedish study [56], a significantly higher mortality rate (4.4 % vs. 1.2 %) was found among female patients hospitalized due to AN in 1977-1981 compared with those hospitalized in 1987-1991. The authors argue that this dramatic decrease in mortality is related to the introduction of specialized care units for patients with eating disorders. Finally, in an audit conducted in the UK, death certificates emerged as a flawed source of information, with both over- and underreporting of AN as a cause of death, the latter probably being more common [19,57].

Bulimia Nervosa

In a recent meta-analysis of 12 studies describing the mortality rates of patients with BN, a weighted mortality rate of 1.74 deaths per 1000 person-years was found [54••].

Binge Eating Disorder

Few studies have reported on mortality in BED [64]. Duration of follow-up in the available studies ranged from one (four studies) to 12 years (one study). The single 12-year follow-up study provided the only report of deaths at follow-up: 2 of 68 patients admitted for inpatient treatment of BED had died after 12 years, leading to a CMR of 2.9 % and a non-significant SMR of 2.29 (95 % CI: 0.00-5.45) [65]. These data from an inpatient sample may not be representative of patients with BED seen in other settings [64]. BED is associated with obesity. In a large US population-based study, 42 % of the subjects with a lifetime diagnosis of BED were obese (BMI >30 kg/m²) at the time of the interview, and a significantly higher prevalence of morbid obesity (BMI >40 kg/m²) was found among these subjects compared with respondents without any eating disorder (OR 4.9; 95 % CI: 2.2-11.0) [45]. Obesity, and especially morbid obesity, is associated with an increased risk of mortality, although the net effect of obesity on mortality is difficult to quantify [66,67]. Finally, in a meta-analysis of the risk of suicide in eating disorders, no suicide had occurred among 246 patients with BED after a mean follow-up of 5.3 years [68].

Eating Disorders in Non-Western Countries and Among Ethnic Minorities

In the past, eating disorders have been characterized as culture-bound syndromes, specific to Caucasian subjects in Western, industrialized societies [69]. Recent studies demonstrate that eating disorders and abnormal eating behaviors do occur in non-Western countries and among ethnic minorities [48,70-74]. An increasing occurrence of eating disorders in non-Western countries has been associated with cultural transition and globalization, including modernization, urbanization and media exposure promoting the Western beauty ideal [70,75-77].

The most comprehensive attempt to quantify eating disorders in a non-Western setting was conducted on the Caribbean island of Curaçao (Netherlands Antilles), where the full spectrum of community health and service providers was contacted. The overall incidence of AN of 1.82 (95 % CI: 0.74-2.89) per 100 000 person-years was much lower than in the US and Western Europe. No cases were found among the black population. However, the incidence of 9.08 (95 % CI: 3.71-14.45) among the minority mixed and white population was similar to the incidence in the Netherlands and in the United States [78]. In the Netherlands, incidence rates of psychiatric hospital admissions for AN did not differ between Netherlands Antilles immigrants and native Dutch [79], suggesting that exposure to the Western beauty ideal is a risk factor for the development of AN, possibly in interaction with migration-related stress.
A similar finding for the risk of BED among Mexican-American immigrants was reported by Swanson and colleagues: in their study of a sample of people of Mexican origin in Mexico and the US, migration from Mexico to the US was associated with an increased risk of BED [80]. A recent study comparing the prevalences of eating disorders across ethnic groups in the United States reported similar prevalences of AN and BED among non-Latino whites, Latinos, Asians and African Americans. BN was more prevalent among Latinos and African Americans than among non-Latino whites [48].

Conclusions

AN is relatively common among young women. While the overall incidence rate remained stable over the past decades, there has been an increase in the high-risk group of 15-19 year old girls. It is unclear whether this reflects earlier detection of AN cases or an earlier age at onset. The occurrence of BN might have decreased since the early nineties of the last century. All eating disorders have an elevated mortality risk; AN the most striking. Compared with the other eating disorders, BED is more common among males and older individuals.

Disclosure

No potential conflicts of interest relevant to this article were reported.

Open Access

This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Initiation of Aspirin Therapy Modulates Angiogenic Protein Levels in Women with Breast Cancer Receiving Tamoxifen Therapy

Abstract

Aspirin has a range of antineoplastic properties linked to inhibition of cyclooxygenase enzymes in tumor cells, to platelet inhibition and to inhibition of angiogenesis. We undertook a prospective study to determine the influence of a 45-day course of aspirin therapy on circulating and intraplatelet levels of selected proangiogenic (vascular endothelial growth factor [VEGF]) and antiangiogenic (thrombospondin-1 [TSP-1]) proteins, and on platelet protein release, in women diagnosed with breast cancer who were receiving tamoxifen therapy. Initiation of aspirin therapy increases serum and intraplatelet levels of TSP-1 without a corresponding increase in VEGF levels. Following aspirin therapy, VEGF levels decreased (relative to pretreatment levels) while TSP-1 returned to pretreatment levels. Plasma TSP-1 and VEGF levels did not change on aspirin therapy. Aspirin use also decreased thrombin receptor-mediated release of TSP-1 and VEGF from platelets. The selective impact on platelet angiogenic protein content and release supports one mechanism by which aspirin can modify the angiogenic balance in women receiving tamoxifen therapy. Aspirin therapy appears to favor an overall antiangiogenic balance in women with breast cancer who are receiving tamoxifen therapy.

Introduction

The Nurses' Health Study has reported an association between aspirin use and a decrease in distant recurrence and improved survival in women with breast cancer who had survived a minimum of 1 year following cancer diagnosis. 1 This most recent report adds to a growing literature suggesting a potential benefit of aspirin use in the prevention of breast cancer. 2-7 Aspirin's clinical benefit in patients with cancer has been linked in part to inhibition of cyclooxygenase in tumor cells. 8,9 Laboratory evidence also suggests that aspirin decreases tumor angiogenesis and levels of vascular endothelial growth factor (VEGF), a protein found largely in platelets and a potent stimulator of angiogenesis. 10-13

In addition to direct tumor and tissue effects, aspirin also moderates agonist-stimulated platelet activation. The platelet serves as a reservoir for proangiogenic proteins, such as VEGF, and antiangiogenic proteins, such as thrombospondin-1 (TSP-1), which can be released from platelets following activation. 14,15 In the laboratory, platelet inhibition by aspirin has been demonstrated to reduce agonist-stimulated VEGF and TSP-1 release from the platelet alpha granule, suggesting a plausible mechanism by which local angiogenic protein levels might be controlled in the tumor vasculature micro-environment in aspirin users. 16 In laboratory models, platelet inhibition by aspirin and thienopyridine derivatives has been demonstrated to decrease angiogenesis, further supporting the suppression of platelet activation as a viable mechanism of influencing tumor angiogenesis. 17

We have previously demonstrated that the selective endocrine receptor modulators tamoxifen and aromatase inhibitors (AIs; anastrozole, letrozole, and exemestane) have differential effects on serum angiogenic protein levels. 18 In that study, tamoxifen use was associated with an increase in serum VEGF levels, a result consistent with higher platelet-derived VEGF levels in tamoxifen users as compared to nonusers. 19
Therefore, women receiving tamoxifen therapy might be hypothesized to derive particular benefit from aspirin-associated changes in circulating angiogenic proteins. Thus, we prospectively studied the impact of aspirin therapy on circulating levels of the proangiogenic protein VEGF and the antiangiogenic protein TSP-1, as well as on platelet-mediated angiogenic protein release.

Materials and Methods

Twelve women with a diagnosis of breast cancer (Stage I-IV) or DCIS who were current users of tamoxifen therapy for a minimum of 90 days were enrolled in this single-center study. To minimize potential confounding effects of prior therapy, a predefined interval of a minimum of 30 days since the last chemotherapy, radiation therapy or surgery was required prior to study enrollment. Current users of aspirin, antiplatelet or anticoagulation therapy were excluded from enrollment. Intermittent aspirin or nonsteroidal anti-inflammatory drug (NSAID) users who were willing to abstain from periodic use for the duration of the study were enrolled. All study participants were queried as to their intake of prescription and over-the-counter medications as well as dietary supplements at each study visit. Tylenol use was allowed during the study, and no restrictions on dietary supplements were imposed. Patients with a history of prior gastrointestinal or central nervous system bleeding, or a recent (within 12 months) history of any clinically significant bleeding, were excluded from the study. Patients receiving investigational agents for the treatment of breast cancer, with the exception of a gonadotropin-releasing hormone (GnRH) antagonist, were not included in the study.

Patients received study vials containing noncoated acetylsalicylic acid (aspirin) 325 mg tablets at initiation of the study and following informed consent. Participants were instructed to take one aspirin daily for a total of 45 days. Study medication compliance was assessed by verbal report and patient interview at follow-up visits. All adverse events reported by the patient during the study were recorded. The study was approved by the institutional review board of the University of Vermont, and written informed consent meeting all federal, state and institutional guidelines was obtained from all patients. A local, study-independent data safety monitor was established prior to clinical study initiation. This clinical study is registered at clinicaltrials.gov (NCT00727948).

Blood sample collection

Venous blood samples were collected prior to initiation of aspirin therapy, at 30 and 45 days on therapy, and 30 days after aspirin completion. Plasma samples were collected into vacutainer tubes supplemented with 0.5 mL of 3.2% sodium citrate. Plasma samples were mixed for 30 seconds, separated by centrifugation (3,000 rcf for 10 minutes at room temperature [RT]) and stored at -80°C. Serum samples were incubated at RT for 45 minutes, centrifuged at 2,000 rcf for 15 minutes, and stored at -80°C. Standardized sample collection was used for all subjects to minimize platelet activation during phlebotomy.

Enzyme immunoassay

VEGF and TSP-1 levels were measured with a quantitative sandwich enzyme immunoassay (Quantikine human VEGF kit, Quantikine human Endostatin kit; R&D Systems, Inc., Minneapolis, MN, USA) according to the manufacturer's instructions. All measurements were performed in duplicate and the average value reported for each patient at each time point.
Statistical analysis

Data are presented as mean ± SEM. Repeated-measures analysis of variance was used to compare means among the four time points. A significant F-test was followed by all possible pairwise t-tests. The quadratic effect was used to examine the effect of aspirin on the release of angiogenic proteins from platelets prior to and post-therapy compared with during aspirin therapy. The analyses were conducted using SAS (Version 9.2, SAS Institute Inc., Cary, NC, USA). Statistical significance was based on α = 0.05. Reported intraplatelet levels of protein were calculated by subtracting plasma values from serum values at each time point.

Results

The characteristics of the women initiating aspirin therapy are shown in Table 1. The majority of women had DCIS or stage 1 invasive ductal carcinoma. Only one patient had metastatic disease and was being treated in the nonadjuvant setting. Smoking status was assessed, but subsequent analysis found no differences between smokers and nonsmokers in any of the parameters measured. The average duration of prior tamoxifen use was 14 months (range 2-46 months). Eleven women completed the study, with one patient lost to follow-up at 30 days post-aspirin treatment. No differences in platelet counts were observed over time (p = 0.98). Adverse events were reported in two patients and included dyspepsia and increased bruising. Both events were considered minor and did not preclude completion of the study. All women reported continued compliance with aspirin intake throughout the study, although five women (58%) reported missing one to three doses of aspirin at some time during the 45 days of treatment. During the 30-day post-aspirin study period, no participants reported using NSAID therapy.

The effect of aspirin on intraplatelet VEGF and TSP-1 levels

Repeated-measures analysis of variance detected significant differences over time in mean intraplatelet TSP-1 and VEGF (p = 0.01 and 0.02, respectively) when analyzed over the total study period (four time points). As seen in Figure 1, TSP-1 but not VEGF levels increased following the start of aspirin therapy. TSP-1 levels were significantly greater than pretreatment levels following 45 days of aspirin therapy. The initial increase in TSP-1 represented a mean 1.3-fold increase in intraplatelet levels. In contrast, after completion of the 45-day course of aspirin therapy, 30-day post-aspirin levels of VEGF were significantly lower than prior to treatment (Table 2); VEGF was 21% lower than the mean pretreatment level. Mean TSP-1 at 30 days post-aspirin therapy returned to pretreatment levels and was not significantly less than prior to treatment.

The effect of aspirin on serum VEGF and TSP-1 levels

The impact of daily aspirin therapy on serum protein levels is seen in Figure 1. Repeated-measures analysis of variance detected differences over time in the mean serum levels of TSP-1 and VEGF (p = 0.01 and 0.02, respectively). TSP-1 increased during aspirin therapy and was significantly higher at 45 days when compared with pretherapy, with a 33.7% increase in serum TSP-1 levels. In contrast, no significant changes in VEGF were found between the mean pretreatment level and mean levels during aspirin therapy. Thirty days following completion of aspirin therapy, VEGF levels were significantly decreased, by 21% (Table 2). Like intraplatelet levels, serum TSP-1 levels following the course of aspirin therapy were not different from pretreatment levels.
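A minimal sketch of the analysis described in the Methods (serum-minus-plasma intraplatelet levels followed by a repeated-measures ANOVA); the data layout and column names are assumptions, and the original work used SAS rather than Python:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def intraplatelet_anova(df: pd.DataFrame):
    """df: one row per patient per visit, with columns 'patient',
    'visit' (pre / day30 / day45 / post30), 'serum' and 'plasma'
    for a single protein (e.g. TSP-1, in pg/mL)."""
    # Intraplatelet level = serum minus plasma at each time point
    df = df.assign(intraplatelet=df["serum"] - df["plasma"])
    # Repeated-measures ANOVA across the four time points
    return AnovaRM(df, depvar="intraplatelet",
                   subject="patient", within=["visit"]).fit()
```

A significant F-test would then be followed by pairwise paired t-tests between individual time points (e.g. scipy.stats.ttest_rel), mirroring the procedure described above.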
Plasma VEGF and TSP-1 levels after aspirin therapy initiation

Plasma VEGF and TSP-1 levels were also assessed prior to, during and subsequent to daily aspirin therapy. We did not detect any significant difference over time in VEGF or TSP-1 levels (p = 0.11 and 0.44, respectively). Figure 1 contrasts the plasma levels with serum and intraplatelet protein levels following aspirin therapy initiation.

Aspirin therapy decreases activation-dependent angiogenic protein release

Because thrombin is considered a major driver of thrombosis and platelet activation in tumors, thrombin receptor-stimulated release of angiogenic proteins was studied in an ex vivo whole blood platelet activation assay (Figure 2). Analysis of released TSP-1 and VEGF revealed significant differences over time in mean TSP-1 release (p = 0.01). A trend toward inhibition of VEGF release was also demonstrated (p = 0.07). Maximal inhibition of thrombin receptor-mediated release was noted at 45 days on aspirin therapy. At 45 days on therapy, mean VEGF release was decreased by 11% while TSP-1 release was decreased by 14%.

Discussion

We prospectively demonstrated the impact of a short course of aspirin therapy on circulating angiogenic protein levels in women receiving tamoxifen therapy. Based on two pivotal proteins that contribute to the angiogenic balance, we found that aspirin therapy has time- and protein-specific effects on circulating protein levels. The majority of our study population was receiving tamoxifen in the nonmetastatic setting, suggesting our results are likely of most relevance to that patient group. Our findings suggest that aspirin modulates circulating angiogenic proteins and may favor a systemic antiangiogenic balance.

For the antiangiogenic protein TSP-1, the initiation of aspirin therapy resulted in significant increases in platelet and serum levels. TSP-1 is an antiangiogenic protein stored within the platelet alpha granule, and platelet (but not plasma) levels of TSP-1 have been demonstrated to regulate early stages of tumor angiogenesis. 20 Importantly, plasma levels of this protein were not found to change in our study. The increase in TSP-1 seen in our study, however, was dependent on the presence of drug (aspirin), as posttreatment levels were not different from pretreatment levels. This observation suggests that aspirin treatment may shift the angiogenic balance by favoring an increase in the antiangiogenic protein TSP-1. The mechanism that underpins this observation will need further assessment in subsequent studies.

In contrast to TSP-1, VEGF values did not change while on aspirin therapy; however, platelet VEGF levels decreased following the completion of 45 days of aspirin. VEGF is a potent proangiogenic growth factor that has been associated with poor prognosis in patients with breast cancer, and serum VEGF levels correlate with intratumoral microvessel density. 21-23 The magnitude of the decrease in platelet VEGF seen in our study was approximately 20%. While we are aware of no conclusive data with regard to the degree of decrease in circulating VEGF needed to have a clinically significant impact, Banerjee and colleagues found that tamoxifen use was associated with a 30% increase in VEGF levels. 24 Thus, the magnitude of our effect is at least consistent with other documented effects of drugs on VEGF levels in patients. The mechanisms that underpin our findings are not known.
Decreased prostaglandin production (as seen with aspirin therapy) has been linked to decreased levels of VEGF. 25 In addition, in rat models of mammary carcinogenesis, acetylsalicylic acid decreased both VEGF concentration and tumor diameter. 11 In vitro studies of lung cancer, sarcoma, and colon cancer models showed similar results. 12,26 A lack of an early decrease in VEGF levels was surprising based on the above data. The reasons for the delay in response seen in our study are not known, and the timing of inhibition of tissue production of VEGF relative to aspirin therapy initiation is not known. A longer duration of aspirin use and a larger sample size will need to be explored in subsequent prospective studies of aspirin in cancer patients.

Unique to our study is the assessment of the effect of aspirin on agonist-induced platelet protein release. Several model systems have demonstrated the proangiogenic effects of platelets (reviewed by Bambace and Holmes 27). Platelets can contribute to the balance of tumor-associated angiogenesis through release of both stimulators and inhibitors of angiogenesis. 15,28,29 We found that the release of both angiogenic proteins studied was inhibited by aspirin therapy; however, this result was only significant for the antiangiogenic protein TSP-1. Similarly, Coppinger reported, in a mass spectrometry-based analysis of platelet protein release, the inhibition of TRAP-induced TSP-1 release in healthy individuals. 16 Additionally, aspirin has been shown to inhibit VEGF release from resting platelets as well as from platelets exposed to ADP and MCF-7 cells. 30 In our study, the inhibition of release was modest, as anticipated given the use of a direct thrombin receptor agonist (TRAP) that preferentially (but not exclusively) activates through the PAR1 pathway. The decrease in release of these angiogenic proteins suggests the need for additional platelet pathway-specific investigations in patients with cancer.

Limitations of our study include the study of only a subset of potential protein contributors to angiogenesis, the short course of aspirin therapy and the small sample size. We chose to initially study early effects of the drug to avoid confounding by changes in underlying disease state. A dose-dependent effect of aspirin on angiogenic protein levels is also not known. We chose an aspirin dose of 325 mg daily based on observational data suggesting that 325 mg of aspirin might be necessary to achieve the maximum chemopreventive effect. 31 In breast cancer, the benefits of any particular aspirin regimen (dose or duration) are not well established. Additional studies of longer duration that include concurrent tissue assessment of angiogenesis will be needed to further extend our observations. Whether or not the changes we have seen in angiogenic protein levels will ultimately prove to be the most important protein-specific effects of aspirin relative to angiogenesis remains unknown at this time.

Our data suggest that aspirin therapy impacts angiogenic protein levels and may modify the angiogenic balance in women treated with tamoxifen therapy. The increase in antiangiogenic protein levels (TSP-1) while taking aspirin therapy, without a concurrent increase in proangiogenic VEGF levels, suggests this impact may be, on balance, antiangiogenic. These observations are likely most clinically relevant in the primary and secondary prevention setting for women with breast cancer (including DCIS) receiving tamoxifen therapy.
Given the small size of our study, additional studies are imperative to fully understand the impact of aspirin therapy on angiogenesis in patients with breast cancer. Our results should be viewed as only a first step in understanding the impact of aspirin therapy on the angiogenic balance and important angiogenic proteins. However, our data, in combination with the observed decrease in cancer recurrence among aspirin users in observational clinical trials, continue to support a role for investigating less expensive agents such as aspirin in women with breast cancer.

Conflict of Interest

The authors have no conflict of interest to declare.

Sources of Funding

This work is supported by a grant from The Breast Cancer Research Foundation, New York, NY and the Charles H. Smith Memorial Fund (University of Vermont/Fletcher Allen Health Care).
Pediatric Obesity and Cardiometabolic Disorders: Risk Factors and Biomarkers [This corrects the article on p. 6 in vol. 28, PMID: 28439216.]

Obesity remains the most prevalent disorder among male and female children worldwide. Its high prevalence markedly predisposes children to insulin resistance, hypertension, hyperlipidemia and liver disorders while enhancing the risk of type 2 diabetes and cardiovascular diseases. In this review, the relationship of obesity with genetic and environmental factors will be described and the underlying causes will briefly be reported. As obesity in children constitutes an increasing health concern, important potential biomarkers are discussed for the diagnosis, treatment and follow-up of the wide range of overweight-related complications. Awareness of the applicability and limitations of these preventive and predictive biomarkers will intensify the research and medical efforts toward new developments to efficiently combat childhood obesity.

INTRODUCTION

The prevalence of childhood obesity is rapidly increasing and presents a major public health concern in developed and developing countries (1)(2)(3)(4), and assessment of obesity is of utmost importance to paediatricians. However, there are varying definitions of obesity in children and adolescents, along with ethnic-specific variations in body fat content and distribution, which complicate this undertaking (5). Moreover, these divergences may explain prevalence dissimilarities associated with cardiometabolic diseases (CMD) (e.g. insulin resistance, hypertension, dyslipidemia and diabetes) in adulthood (6)(7)(8)(9)(10)(11). In the context of epidemiological studies, body mass index (BMI, weight/height²) in adults is currently considered a diagnostic test (separator variable) able to identify overweight (25 kg/m²) and obese (30 kg/m²) individuals who may be predisposed to increased CMD risk, morbidity and mortality (12,13). However, no similar definite values can be used in childhood and adolescence because of the substantial changes in BMI which occur naturally from birth to adulthood (14,15), and because of the limited data in youth relating BMI trajectory to cardiovascular events later in life. Age- and sex-specific BMI cut-offs were developed to define overweight and obesity using different nationally representative age- and sex-specific data sets, following recommendations from the International Obesity Task Force (16,17). International age- and sex-specific BMI cut-offs for overweight and obese girls and boys are illustrated in Figure 1. Applying this concept to BMI trajectory, Attard et al. (18) demonstrated that the odds for diabetes were 2.35 times higher for those with a BMI of 30 kg/m² relative to young male adults who had maintained a BMI of 23 kg/m² over an average of 12 years. These data suggest there is potential for improving the ability to assess the effect of paediatric obesity on the development of diseases at a later time point. Secular trends demonstrate that the prevalence has plateaued in some countries (19) or even decreased (20), but has continued to rise in others, independent of how overweight and obesity are defined in childhood (1, 21-23). The apparent contradiction could partially depend on the span of the retrospective studies and on the years included.
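As a minimal illustration of the BMI arithmetic and the adult cut-offs just cited, a short sketch follows. The weight and height are hypothetical, and, as the text stresses, these fixed thresholds do not transfer to children, who require age- and sex-specific cut-offs.

```python
# Minimal sketch of the adult BMI computation and the cut-offs cited above
# (overweight >= 25 kg/m^2, obese >= 30 kg/m^2). Adult case only: fixed
# thresholds do NOT apply in childhood and adolescence.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight divided by height squared."""
    return weight_kg / height_m ** 2

def adult_bmi_category(value: float) -> str:
    if value >= 30.0:
        return "obese"
    if value >= 25.0:
        return "overweight"
    return "not overweight"

# Hypothetical adult: 96 kg, 1.78 m -> BMI ~30.3 -> "obese"
print(adult_bmi_category(bmi(96.0, 1.78)))
```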
Nevertheless, the present high number of young adults with the stigmata of the metabolic syndrome (MetS) and the related non-alcoholic fatty liver disease (NAFLD) justifies considering it a major world public health issue (24). This review briefly describes the various potential causes of obesity in youth and underscores the available biomarkers for associated conditions.

Definite BMI thresholds to identify an increased risk for CMD cannot be used in childhood and adolescence. Age- and sex-specific BMI cut-offs to define overweight and obesity and to predict trajectory into adulthood should be utilized, based on different nationally representative age- and sex-specific data.

OBESITY AND LIFESTYLE

Lifestyle is broadly defined as the way or manner by which a person or a group of people lives. However, lifestyle can be influenced by a complex set of factors that are intertwined and can affect the quality of living and health (Figure 2). The socioeconomic position (SEP) stands out among these factors because it has a direct impact on the quality of nutrition and the living environment, including access to adequate physical activity facilities and education. Consequently, a comprehensive view must be adopted whenever addressing this topic, but a majority of studies tend to approach this area in a fragmented manner. One such study, based on self-reports, demonstrated that poor children in the United States have worse health compared to wealthy children. This difference in health status diverged further as the children aged, thereby suggesting that the adult health gradient has its origins in childhood. However, other than family income, no other factors were considered which could explain these results (25). SEP may also impact the quality of nutrition. Darmon et al. (26) reported that higher-quality diets consisting of whole grains, lean meats, fish, low-fat dairy products, fresh vegetables and fruits were associated with greater affluence, whereas energy-dense and nutrient-poor diets (refined grains, added fats) are preferentially consumed by persons of lower SEP. Likewise, in a systematic review, Cameron et al. (27) reported that children of lower SEP had a steeper weight gain trajectory initiating at birth, leading to a greater prevalence of obesity in children and adults. Pre-pregnancy maternal BMI, diabetes, pre-pregnancy diet, smoking during pregnancy, low birth weight, breastfeeding initiation and duration, early introduction of solids, maternal and infant diet quality, and some aspects of the home food environment were among the early-life predictors of later obesity, amid links with SEP. Furthermore, lack of physical activity is an additional risk factor for developing obesity. A longitudinal study involving repeated 7-day physical activity recall questionnaires over a 5-year period demonstrated that greater fluctuations in physical activity led to an increase in body fat in adolescent girls and boys (28). An interventional study supported these conclusions, demonstrating that interruption of sedentary time with brief moderate-intensity walking resulted in an improvement of short-term metabolic function in non-overweight children without increasing subsequent energy intake (29).
Despite the difficulty in directly comparing studies because of the variety of environmental factors and defined end-points, systematic reviews consistently highlight that better and safer access to physical activity resources is directly related to increased leisure-time physical activity in children and adolescents, which subsequently decreases the risk of developing obesity (30)(31)(32)(33)(34). Access to physical activity resources is directly related to higher leisure-time physical activity in children and adolescents and decreases the risk of developing obesity.

OBESITY AND GENETIC/EPIGENETIC FACTORS

In addition to the risk factors previously discussed, genetic background and foetal programming through epigenetic modifications are equally important in the development of obesity and related diseases. There is also increasing evidence suggesting synergetic effects between gene variant loci involved in metabolic traits and dietary or lifestyle factors. Maes et al. (35) compiled data from more than 25,000 twin pairs and 50,000 biological and adoptive family members and reported that genetic components contribute 40-70% of the inter-individual variability in common obesity. Another study showed that parental obesity doubled the risk of adult obesity among both obese and non-obese children less than 10 years of age (36). Few studies have investigated the gene-environment interactions related to sedentary behaviour using large cohorts. The Identification and prevention of Dietary- and lifestyle-induced health EFfects In Children and infantS cohort (IDEFICS) used a subsample of 4406 participants to demonstrate that the fat mass and obesity-related gene (FTO) polymorphism (rs9939609) could explain ~9% of the obesity variance, thereby suggesting that the FTO gene is sensitive to the social environment (37). To date, genome-wide association studies (GWAS) have provided evidence for a number of gene variants associated with the development of obesity in youth. Willer et al. (38), based on a cohort of 11-year-old children, demonstrated significant and consistent associations between BMI and variant loci (SNPs) located in or near the trans-membrane protein-18 (TMEM18), potassium channel tetramerisation domain containing-15 (KCTD15) and glucosamine-6-phosphate deaminase-2 (GNPDA2) genes. The high brain and hypothalamic expression of these factors, together with FTO and the melanocortin-4 receptor (MC4R), independently associated with adiposity and insulin resistance (39), supports the argument for a neuronal foundation in obesity. Whether these loci are modulated under neuronal influence by the environment or lifestyle remains to be elucidated. Graff et al. (40) provided a partial answer by establishing a dose-dependent effect. A myriad of peer-reviewed publications have confirmed this initial hypothesis (42)(43)(44)(45)(46). Lee et al. (47) suggest there is a gene-foetal environment interaction, one instance of which occurs through in utero exposure to maternal cigarette smoking and leads to a preference in adolescence for moderately enhanced fatty foods by silencing the opioid receptor mu-1 gene (OPRM1) involved in the brain reward system. Small for gestational age (SGA) is also well recognized and linked to an increased risk of rapid postnatal weight gain and subsequent development of obesity and chronic metabolic diseases later in life.
The Auckland Birthweight Collaborative Study demonstrated that smoking, low pregnancy weight, maternal short stature, maternal diet, ethnic origin of the mother and hypertension are all "environmental" risk factors for SGA (48). A subgroup of the cohort later established that polymorphic FTO (rs9939609, intron), KCNJ11 (rs5219, missense Lys23Glu), BDNF (rs925946, 9.2 kb upstream), PFKP (rs6602024, intron), PTER (rs10508503, 179 kb upstream) and SEC16B (rs10913469, intron) genes were related to obesity, type 2 diabetes, and SGA, which indicates the important interaction between genetic factors and the foetal environment (49). Finally, a prospective singleton normal pregnancy cohort study demonstrated a direct relationship between the maternal adipokines leptin (a satiety factor) and adiponectin (an insulin sensitizer). The study included 339 healthy women without pre-existing diabetes who were evaluated at 24-28 and 32-35 weeks of gestation, with cord blood (foetal compartment) assessed at birth (50). Foetal insulin sensitivity was negatively associated with cord blood leptin and positively with pro-insulin concentrations, suggesting that the maternal impact on foetal adipokines may be an early-life pathway in maternal-foetal transmission of the propensity to develop obesity and insulin resistance later in life. These examples provide compelling evidence on the role and impact of the foetal environment in the development of chronic diseases later in life.

Parental obesity more than doubles the risk of adult obesity among obese and non-obese children. Gene-environment interactions are modest and individually are not able to explain the development of obesity and the onset of related diseases. There is compelling evidence highlighting the role of the foetal environment in the development of chronic diseases later in life.

OBESITY AND MICROBIOTA

In addition to the above considerations, the gut microbiota may increasingly be shown to impact the course of metabolic diseases. This aspect is briefly reviewed here. The synergistic relationship between the human body and the vast microbiotic environment present on all interfaces with the exterior, particularly the gut lumen, has become of major interest to the medical community. The microbiome cell number far outnumbers somatic or germ cells and represents a far more varied gene diversity than the human genome (51). The advent of high-throughput genome sequencing technologies allowed the first meta-sequence of the human gut microbiome to be conducted, utilizing stool collected from 124 individuals, and characterized > 3×10⁶ genes from approximately 1000 different microbial species (52)(53)(54). An excellent review by Arora et al. (55) discusses the composition of the gut microbiota and its association with metabolic diseases. Figure 4, taken from this review, shows that two phyla, namely Firmicutes and Bacteroidetes, constitute the healthy adult gut microbiota, and their relative proportions differ among populations. The neonatal intestinal flora evolves according to its early environmental exposures, nutrition patterns (maternal or industrial milk), hygiene levels and therapeutic drug usage (56). Differences in intestinal flora patterns during the first six months of life may have potential impact and downstream consequences on the later development of chronic conditions such as type 2 diabetes and allergies (57,58).
The gut microbiota has emerged as a new important player in the pathogenesis of obesity, potentially explained by the fact that each microbiotic species transforms undigested and partially digested food into metabolites that may influence the physiological systems of the host. Therefore, a loss in diversity may lead to unwanted effects (55). This hypothesis is supported by the observation that the composition of the gut microflora is globally less diverse in obese subjects, with a relative enrichment in Firmicutes and an impoverishment in Bacteroidetes (59). Moreover, detailed analysis of the flora in obese subjects reveals a bimodal distribution: those with a low gene count (LGC), characterised by the predominance of 5 pro-inflammatory bacteria and a less diversified metagenome, and those with a high gene count (HGC), with a high percentage of 4 anti-inflammatory bacterial genera (60). The LGC group presents with insulin resistance, dyslipidemia and low-level infiltration of adipose tissue by pro-inflammatory cytokine-secreting immune cells. It has recently been established that levels of butyrate-producing bacteria are reduced in patients with type 2 diabetes, whereas levels of Lactobacillus sp. are increased; thus the reduction of butyrate-producing bacteria may be causally linked to type 2 diabetes. The causal relationship for these differences in humans remains to be elucidated but opens the way to possible treatment of obesity via dietary manipulation. For example, a low-calorie regimen composed of plant fibres, proteins and low carbohydrates potentially increases microbiota diversity (61). Interestingly, bariatric surgery also increases gut microbiota diversity (62,63).

As each microbiotic species transforms undigested and partially digested food into metabolites that may influence the physiological systems of the host, a loss in diversity may lead to unwanted effects. The gut microbiota, a new player in the world of obesity and cardiometabolic diseases, is increasingly called upon to elucidate findings related to these diseases and may eventually impact their course and treatment.

BIOMARKERS

The status of metabolically healthy obese (MHO) individuals has been reported (64,65), but obesity, particularly abdominal, remains a major risk factor for developing a series of complications (Figure 5) such as the metabolic syndrome, type 2 diabetes, early atherosclerosis and non-alcoholic fatty liver disease (NAFLD), the latter considered the hepatic manifestation of insulin resistance (66)(67)(68). Cellular redox potential imbalance, inflammatory processes and insulin resistance are central in the development of the complex chronic metabolic disturbances (Figure 6); hence measurement of related biomarkers to detect minor disturbances could help distinguish MHO from non-MHO individuals, and may result in establishing early primordial prevention programs. However, at the present time there is no international consensus as to the specific pathways that should preferentially be targeted in order to define the prevalence and severity of these conditions during childhood and adolescence.

IMAGING TECHNIQUES

In the last decade, utilization of ultrasonography, transient elastography and magnetic resonance imaging (MRI) has increased significantly. In the context of the present review these techniques, except for MRI, are not suitable for the detection of metabolic disturbances and are primarily used to evaluate the extent of liver damage.
Although widely available, ultrasonography is unable to accurately detect or quantify early liver fatty acid infiltration. Furthermore, this technique is prone to observer- and operator-dependent variability, and its use in obese patients is a subject of debate (69,70). Transient elastography, based on the assessment of liver stiffness, has also been shown to be useful in the presence of significant fibrosis and cirrhosis (71). Liver magnetic resonance imaging-estimated proton density fat fraction (PDFF) is more sensitive and compares favourably with histopathology scores (72). This technology is currently restricted to tertiary care institutions, is expensive, and demands experienced staff. In summary, these imaging techniques are useful in detecting steatosis, but they are relatively inefficient in determining early-stage liver damage. Biomarkers easily measured in central laboratories are therefore of utmost importance and should center on insulin resistance, inflammation and oxidative stress, as this triad is the signature of NAFLD.

INSULIN RESISTANCE

The term insulin resistance (IR) frequently refers to a physiological state characterized by a diminished biological response to insulin. More precisely, IR refers to a holistic reduction of glucose uptake in response to physiological insulin concentrations, primarily in muscle tissue. The optimal assessment of IR in children and adolescents remains controversial. Following the Consensus Conference on Childhood IR in 2010, experts highlighted: 1) the paucity of data regarding cut-offs to define insulin resistance; 2) the poor performance of surrogate measures such as fasting plasma insulin; and 3) the lack of justification for screening children, even obese children, because there are no accepted treatments for euglycemic IR (73). However, the development of robust methods for assessing insulin sensitivity (IS) in paediatric populations remains of great interest, particularly for epidemiological studies monitoring metabolic trajectory into adulthood. The hyperinsulinemic-euglycemic clamp is the gold standard for determining total-body IS (73). However, it is not applicable in the context of population screening or routine clinical workup. In 2014 Brown and Yanovski (74) published an excellent review on this technique as well as surrogate measures and their pitfalls. The hyperinsulinemic-euglycemic clamp, as its name indicates, depends on repeated measures of both insulin and blood glucose, each having its own potential analytical pitfalls that may hinder inter-laboratory comparison (Table 1). Reliable interpretation of hyperinsulinemic-euglycemic clamp studies is also dependent upon normal inter-individual biological differences such as insulin clearance rates and the time required to reach a steady state. Alternative methods include the insulin tolerance test (ITT), the hyperglycemic clamp, the insulin-modified or frequently sampled intravenous glucose tolerance test (FSIGT) and the more frequently used oral glucose tolerance test (OGTT) (74).

FASTING INSULIN AND THE HOMA-IR

Assessment of IR or IS is frequently conducted using single measurements due to their availability and simplicity. Measurement of fasting insulin concentrations is considered representative of hepatic insulin sensitivity (low concentrations) or resistance (high concentrations).
In theory, this information is valuable and may alert clinicians to eventual liver function impairment, but there are issues around defining an abnormally elevated fasting insulin concentration because data on reference values for fasting insulinemia are scarce. In addition, the lack of standardization or harmonization between different insulin assays hampers direct comparison between laboratories and peer-reviewed publications, and impedes coherent measures for treatment guidelines. This was highlighted in 2007 by the IFCC Working Group on Standardization of Insulin Assays, in an evaluation of 12 commercial insulin methods (75). The within-assay CVs ranged from 3.7% to 39.0% and between-assay CVs from 12% to 66% (75). In 2009 the working group reported that 4 out of 10 insulin assays, when re-calibrated with a purified recombinant insulin preparation, had ≥ 95% of the 39 individual donor sera results within 32% of the target value assigned by an isotope dilution-mass spectrometry assay. In addition, 7 of 10 assays had a bias > 15% in 36 to 100% of individual samples. The consensus group concluded that agreement between assays would improve using an international reference material and a higher-order mass spectrometry method (76). Subsequent high-throughput mass spectrometry immunoassays have been developed to quantitate human intact insulin as well as insulin analogs, which may allow an accurate definition of insulinemia to be determined (77,78). Accurate measurement of plasma insulin is of paramount importance for establishing comparable Homeostasis Model Assessment of IR (HOMA-IR) reference values across laboratories, although variation between ethnic populations may be a confounding factor that should be taken into consideration. At the present time HOMA-IR cut-offs are still highly method-dependent. Table 2 illustrates the distribution of published cut-off points for defining IR and confirms the warning of Wallace et al. (79): "The HOMA model has become a widely used clinical and epidemiological tool and, when used appropriately, it can yield valuable data. However, as with all models, the primary input data need to be robust, and the data need to be interpreted carefully." To address this issue, the IFCC (http://www.ifcc.org/ifcc-scientific-division/sd-working-groups/wg-sia/), in collaboration with the American Diabetes Association (ADA) and the European Association for the Study of Diabetes (EASD), has created the Working Group on Standardisation of Insulin Assays (WG-SIA) with the mandate of improving the standardization of insulin assays through the development of a candidate reference method based on liquid chromatography-tandem mass spectrometry and of a lyophilized recombinant human insulin preparation as primary reference material. Although insulin resistance is a well-recognized clinical entity, there is currently no internationally accepted definition of its expression in children and adolescents. One well-characterized definition requires the presence of three or more factors which can be age-adjusted to define hyperinsulinaemia: overweight, high systolic blood pressure, hypertriglyceridemia, low HDL-cholesterol and impaired fasting plasma glucose (84). Data on normal reference intervals for fasting insulinemia are scarce. Lack of standardized or harmonized insulin assays hampers comparison between laboratories and impedes coherent measures for treatment guidelines.
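Since HOMA-IR is discussed above without its formula, a hedged sketch follows using the widely cited approximation of the HOMA model (fasting insulin in µU/ml × fasting glucose in mmol/l ÷ 22.5). The input values and the cut-off are hypothetical placeholders, since, as the text stresses, published cut-offs are assay- and population-dependent (see Table 2).

```python
# Sketch of the HOMA-IR surrogate index (widely used approximation; not a
# replacement for the hyperinsulinemic-euglycemic clamp). The fasting values
# and the cut-off below are hypothetical placeholders.

def homa_ir(fasting_insulin_uU_ml: float, fasting_glucose_mmol_l: float) -> float:
    """HOMA-IR = fasting insulin [uU/ml] * fasting glucose [mmol/l] / 22.5."""
    return fasting_insulin_uU_ml * fasting_glucose_mmol_l / 22.5

value = homa_ir(15.0, 5.2)   # hypothetical fasting measurements
cutoff = 3.0                 # hypothetical, assay-dependent threshold
status = "above cut-off" if value > cutoff else "below cut-off"
print(f"HOMA-IR = {value:.2f} ({status})")
```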
Distinguishing MHO young patients from unhealthy ones bears a major clinical importance, as the former are, for reasons that are yet to be defined, resistant to developing CMD; hence follow-up and treatment differ (64). Low-grade inflammation and cellular redox potential imbalance are, together with insulin resistance, key players in the development of the non-healthy state in obese subjects.

INFLAMMATION

Inflammation is the second cause in the development of CMD and NAFLD related to paediatric obesity. A number of biomarkers have been identified, but primarily in the context of clinical trials; thus their specificity, sensitivity and predictive values have yet to be defined for screening and diagnostic purposes. C-reactive protein (CRP), a member of the pentraxin family involved in plaque instability, is the most commonly utilized inflammatory biomarker. Although the sensitivity of CRP is generally high, the specificity is low, particularly in the setting of potential low-grade inflammation. Nevertheless, discrete elevation in circulating CRP concentrations has been associated with the definition of the metabolic syndrome (84,85). Its advantage resides in its wide accessibility to central laboratories. However, as for any other biomarker, well-defined age-, sex- and ethnicity-adjusted reference values or thresholds have to be defined if they are to be used for clinical purposes. The analytical sensitivity, even for the high-sensitivity CRP (hsCRP) test, however, limits the definition of reference ranges. One European population-based study reported that 44% of the 9855 children tested exhibited serum CRP concentrations below the detection limit (0.2 mg/l) and confirmed our observation (85) that obesity influenced serum CRP concentrations (86).

C-reactive protein (CRP) is the most commonly utilized biomarker of inflammation. The specificity of CRP is questionable, particularly in the setting of low-level inflammation. Well-defined age-, sex- and ethnicity-adjusted reference values or thresholds have to be defined if they are to be used for clinical purposes.

Visceral adipose tissue per se and its resident macrophages contribute importantly to systemic inflammation by secreting adipokines and pro- and anti-inflammatory cytokines. Indeed, clinical studies have consistently shown elevated blood concentrations of pro-inflammatory cytokines such as IL-6, IL-8, TNFα, PAI-1, resistin and amylin in overweight and obese insulin-resistant youth (87)(88)(89)(90). However, this relationship does not imply unanimity. A recent report has noted that the relationship between pro-inflammatory and metabolic markers commonly observed in adults and pubertal adolescents is reversed in healthy black and white children before puberty, which warrants questions as to whether these inverse relationships modify the trajectory later in life (91). Population-based studies focused on evaluating pro-inflammatory and metabolic markers to determine which biomarkers constitute sensitive and specific tools in the context of a diagnosis of insulin resistance would be valuable.

OXIDATIVE STRESS

Oxidative stress is an often neglected cause of paediatric obesity-related morbidities, and no biomarkers have been successfully validated yet for routine clinical use.
To our knowledge there are no clinical research studies demonstrating that circulating concentrations of malondialdehyde (MDA), hydroxynonenal (HNE), advanced glycation end-products (AGEs) and 8-hydroxy-2-deoxyguanosine (8-OH-dG), which are surrogate markers for lipid, protein and deoxyribonucleic acid damage, respectively, are effective diagnostic tools for CMD in childhood and adolescence. In an observational study performed on 35 children between the ages of 12 and 18 years, Kelishadi et al. (92) reported that the age- and sex-adjusted changes in ox-LDL, waist circumference, CRP, MDA and body fat mass had the highest correlations with changes in coronary intima-media thickness. More recently, in a population-based study, Galan-Chilet et al. (93) demonstrated a positive association of selenium at plasma concentrations above ~110 μg/L with 8-oxo-dG, but an inverse association with GSSG/GSH and MDA. They further identified potential risk genotypes associated with increased levels of oxidative stress markers at high selenium levels.

CONCLUSIONS

There is currently no single biomarker which can adequately define obesity-related CMD risk in paediatric or adult populations. Prospective clinical trials should focus on devising a score based on well-characterized and appropriately validated biomarkers.
A novel chiral HPLC and LC-MS/MS method development for the triazole antifungal compound

The objective of the present study was to separate and develop a chiral high performance liquid chromatography (HPLC) and sensitive liquid chromatography tandem mass spectrometry (LC-MS/MS) technique to estimate the (+) and (−) enantiomers of Albaconazole and validate the individual enantiomers of the drug. Albaconazole is used to treat fungal disease. The stationary phases were reverse phase Chiralpak IG-3 (250 × 4.6 mm, 5 µm) and (100 × 4.6 mm, 3 µm), whereas the isocratic mobile phases were ethanol and diethylamine (100:0.1% v/v ratio, HPLC) and acetonitrile and 10 mM ammonium bicarbonate (90:10 v/v ratio, LC-MS/MS), and the flow rates were 1.0 and 0.5 ml/minute, respectively. The resolution of the (+) and (−) enantiomers was monitored using an HPLC diode array detector (DAD) signal at 240 nm and LC-electrospray ionization-MS/MS in positive transition at 432.0 m/z (M + H) for Albaconazole. The retention times of the (+) and (−) enantiomers of the drug were 6.952 and 9.955 minutes by HPLC and 2.905 and 3.780 minutes by LC-MS/MS. The major benefits of LC-MS/MS are related to its improved selectivity, precision and accuracy and its lower variability in comparison to HPLC-DAD. This study provided a rapid, sensitive and novel selective method to evaluate the (+) and (−) enantiomers in active pharmaceutical ingredients by HPLC and LC-MS/MS.

INTRODUCTION

Albaconazole is a triazole antifungal, 7-chloro-3-[(2R,3R)-3-(2,4-difluorophenyl)-3-hydroxy-4-(1,2,4-triazol-1-yl)butan-2-yl]quinazolin-4-one (Amjad et al., 2016). Generally, azole compounds inhibit steroid demethylation and the biosynthesis of a critical component of the fungal membrane called ergosterol by blocking a cytochrome P450-dependent enzyme, lanosterol 14-α-demethylase, which is crucial for the conversion of lanosterol to ergosterol. Lack of ergosterol and accumulation of lanosterol will increase membrane permeability and lead to the disruption of several enzymes in the membrane, such as chitin synthase (Maertens, 2004). This not only inhibits DNA replication, but also disrupts cell growth, causing the death of yeasts and fungi. Azoles also decrease the adhesion potential of pathogen cells to host tissues and impede the transformation of yeasts to the mycelial form (Ghannoum and Rice, 1999; Sumrra et al., 2022). Therefore, they are widely applied as veterinary drugs (Bhanderi et al., 2009), as fungicides in agriculture (Brauer et al., 2019) and as antifungal agents for both humans and animals (Scorzoni et al., 2017; Zafa et al., 2021). Chirality plays a significant role in determining the pharmacological actions of chiral compounds and is of vital importance at the drug discovery stage (Ates et al., 2013; Zhang et al., 2005). One-third of all marketed drugs are now sold in a single isomeric form, and chirality is now a significant factor in the development of new pharmaceuticals, with regulatory and therapeutic considerations driving the process (Mukherjee and Bera, 2012). The enantiomers of a chiral drug molecule may behave differently after administration, so the pharmaceutical industry places a high value on chiral resolution. To have a therapeutic effect, a molecule must engage a target receptor when it is administered. A chiral drug molecule will only fit into this receptor in one of its enantiomeric forms (the eutomer), producing the desired therapeutic effect. A lesser effect could result from the other enantiomer
(distomer), which may or may not interact with the receptor. The distomer can occasionally interact with different receptors, leading to side effects or even toxicity. In order to distinguish the eutomer from the distomer during drug substance identification and impurity determinations, additional research is needed on the enantiomers of active compounds during the development process. Resolution of racemates is still difficult because of the similar characteristics of enantiomers, and work on highly specialized separation techniques is ongoing to resolve individual enantiomers (Liu et al., 2015). A literature survey (Azhari et al., 2020; Bhowmick et al., 2021; Gazzinelli et al., 2022; Shekar et al., 2014) revealed that few analytical methods have been reported for the chiral separation of triazole antifungal drugs by high performance liquid chromatography (HPLC). Furthermore, the reported chiral separation methods had longer retention times and lower sensitivity. As a result, the objective of this research was to separate and develop a novel, fast, selective and sensitive method with shorter retention times for the chiral separation and estimation of the (+) and (−) enantiomers in active pharmaceutical ingredients using HPLC and liquid chromatography tandem mass spectrometry (LC-MS/MS).

Reagents

YMC India Private Limited gifted pure Albaconazole (+/−) as a working standard. SD Fine Chemicals and Merck, Mumbai, India supplied the chemical ammonium bicarbonate and the solvents methanol and acetonitrile (HPLC and LC-MS grade). A Milli-Q RO system was used to purify the water (Millipore, Bedford, UK).

Instrumentation (HPLC and LC-MS/MS)

HPLC-photodiode array (PDA) chromatographic fingerprints were obtained with an Agilent 1260 Infinity II HPLC instrument (Agilent Technologies, Waldbronn, Germany) equipped with a 1260 Infinity II quaternary pump, a 1260 Infinity II degasser, a 1260 Infinity II vial sampler, a 1260 Infinity II column thermostat, and a 1260 Infinity II diode array detector (DAD) HS, with a PC running the Agilent OpenLab CDS software for data acquisition.

An ultra-fast liquid chromatograph coupled with a tandem triple quadrupole mass spectrometer (Shimadzu LC-MS/MS, Tokyo, Japan) was equipped with an electrospray ionization (ESI) interface and a solvent delivery system comprising an LC-20AD pump, an SPD-M20 PDA detector, a SIL-20AC autosampler, a CTO-20AC column oven, and a CBM-20Alite controller. Data acquisition was performed using the LC LabSolutions software. For the study, the optimized factors included heat block temperature, desolvation line, nebulizer gas, collision energy, etc. The mass spectrometer was run in positive ionization detection mode (M + H) with an ESI source. The nebulizer pressure was set to 345 kPa, the ionization temperature was set to 300°C, the capillary voltage was 5,000 V and the gas flow rate was 11 l/minute. The collision cell gas was ultrapure nitrogen and the ionization source gas was nitrogen.

Chromatographic conditions (HPLC and LC-MS/MS)

The HPLC enantioselective separation was achieved using a chiral stationary phase, reverse phase (RP) Chiral ART Cellulose-SZ (250 × 4.6 mm, 5 µm), with an isocratic mobile phase composed of ethanol and diethylamine (DEA) (100:0.1% v/v ratio) at a flow rate of 1.0 ml/minute. An injection volume of 20 µl of each sample was injected into the system, operated at ambient column temperature. The total run time for the chiral separation was 20 minutes. The resolution target was detected using an Agilent HPLC 1260 Infinity II with a DAD detector.
The LC-MS/MS enantioselective separation was achieved using a chiral stationary phase, RP Chiralpak IG-3 (100 × 4.6 mm, 3 µm), with an isocratic mobile phase composed of acetonitrile and 10 mM ammonium bicarbonate (90:10 v/v ratio) at a flow rate of 0.5 ml/minute. An injection volume of 10 µl of each sample was injected into the system, operated at ambient column temperature. The total run time for the chiral separation was 5 minutes. The resolution targets were detected using a Shimadzu-8030 triple quadrupole mass spectrometer with ESI interfaced with the mass analyzer (Hassan et al., 2022). The molecular ion spectrum for Albaconazole was found at m/z 432.0, and the most prominent fragmentation peaks were observed at 45.0, 391.0, and 415.0, with the most stable fragment of maximum intensity at 391.0 (daughter ion) (Sumrra et al., 2021). The mass spectrometer was run in positive ionization detection mode (M + H) in multiple reaction monitoring (MRM) mode with the following transition: m/z 432.0 (parent ion) → m/z 391.0 (daughter ion) for (+/−) Albaconazole (Figs. 2 and 3).

Standard solution preparation for (+/−) Albaconazole (HPLC and LC-MS/MS)

A working standard of (+/−) Albaconazole at 1,000 µg/ml concentration was prepared by dissolving 10 mg of the enantiomeric drug in a 10 ml volumetric flask with methanol and making up the volume with methanol. A working concentration of 1,000 ng/ml was prepared from the above solution. The calibration curve for (+) and (−) Albaconazole over 10-100 ng/ml of the enantiomeric drug was prepared using the working standard.

Method validation for Albaconazole

The optimized HPLC and LC-MS/MS methods were validated in accordance with the ICH guidelines with respect to specificity and carry-over, limit of quantification (LOQ), limit of detection (LOD), linearity, accuracy and precision, robustness, etc. (ICH, 1996).

Accuracy and precision

The recovery of the method was used to define its accuracy. According to the ICH guideline, the accuracy of the proposed HPLC and LC-MS/MS methods was evaluated from three levels of quality control samples by analyzing six replicates. The recovery and precision were determined and the percent RSD was recorded.

Specificity and carry-over

Specificity is the ability to clearly assess the analyte in the presence of components that might be anticipated to be present; typically, these could be degradants, impurities, etc.

LOD and LOQ

LOD and LOQ were determined at signal-to-noise ratios (S/N) of 3:1 (LOD) and 10:1 (LOQ), respectively; in accordance with ICH guidelines, a method is considered sensitive if it can detect extremely low concentrations.

Robustness

The robustness of the developed method was determined by altering experimental conditions such as the mobile phase, the flow rate and the injection volume.
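As an illustration of how the 10-100 ng/ml calibration standards described above can be turned into a quantitation line, a minimal least-squares sketch follows. The peak areas in it are hypothetical placeholders, and note that the study's LOD/LOQ were determined from signal-to-noise ratios rather than from a regression.

```python
# Sketch of a linearity/calibration fit over the 10-100 ng/ml range.
# Peak areas below are hypothetical, not measured data.

import numpy as np

conc = np.array([10, 20, 40, 60, 80, 100], dtype=float)  # standards, ng/ml
area = np.array([1050, 2110, 4180, 6300, 8350, 10400], dtype=float)  # hypothetical

# Least-squares line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]
print(f"area = {slope:.1f} * conc + {intercept:.1f}, r^2 = {r**2:.4f}")

# Back-calculating an unknown sample from its peak area:
unknown_area = 5200.0
print(f"unknown concentration ~= {(unknown_area - intercept) / slope:.1f} ng/ml")
```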
RESULTS AND DISCUSSION

In this study, an RP direct chiral HPLC and LC-MS/MS technique has been developed and validated for the chiral resolution of (+/−) Albaconazole. In order to separate the enantiomers, simple, sensitive and effective HPLC and LC-MS/MS methods for (+/−) Albaconazole were developed by selecting optimal conditions based on the resolution factor, theoretical plates, tailing factor, peak area, and peak asymmetry factor. For HPLC, RP Chiral ART Cellulose-SZ (250 × 4.6 mm, 5 µm) as the chiral stationary phase and ethanol and DEA (100:0.1% v/v ratio) as the isocratic mobile phase, with a flow rate of 1.0 ml/minute, constitute the optimal chromatographic conditions. The total chromatographic resolution run time was 20 minutes. The (+) and (−) enantiomeric retention times were found to be 6.952 and 9.955 minutes, respectively (Fig. 1). For LC-MS/MS, RP Chiralpak IG-3 (100 × 4.6 mm, 3 µm) as the chiral stationary phase and acetonitrile and 10 mM ammonium bicarbonate (90:10 v/v ratio) as the isocratic mobile phase, with a flow rate of 0.5 ml/minute, constitute the optimal chromatographic conditions. The total chromatographic resolution run time was 5 minutes. The (+) and (−) enantiomeric retention times were found to be 2.905 and 3.780 minutes, respectively (Fig. 4).

Specificity

During the elution times of the individual enantiomers for both methods, no potential interference peaks were noticed. As a result, the methods were found to be specific and highly sensitive.

LOD and LOQ

Based on the S/N ratio and the minimum level of peak area, the LOD (4 µg/ml and 3 ng/ml) as well as the LOQ (10 µg/ml and 10 ng/ml) were determined for the (+) and (−) enantiomers by the developed HPLC and LC-MS/MS methods, respectively.

System suitability

As per ICH guidelines, a system suitability study was conducted to determine the system suitability parameters: resolution factor, retention time (RT), tailing factor and theoretical plates (N). The results were found to be within the limits (Tables 4 and 8).

CONCLUSION

In conclusion, although both methods described here are reliable and fast to perform, the major benefits of LC-MS/MS are related to its improved selectivity, precision and accuracy and its lower variability in comparison to HPLC-DAD. As per ICH guidelines, a sensitive direct chiral RP HPLC and LC-MS/MS method for the chiral separation of a novel triazole antifungal compound was developed and validated, providing good sensitivity and reproducibility. In spite of the elevated instrumentation cost, the LC-MS/MS method presented here is simple and rapid, and it could therefore be applied to routine analysis. This method will be useful for pharmaceutical, pharmacokinetic and bioequivalence studies.

PUBLISHER'S NOTE

This journal remains neutral with regard to jurisdictional claims in published institutional affiliations.
Frequency response analysis of heavy-load palletizing robot considering elastic deformation

Given the palletizing robot's operating characteristics of high speed, high acceleration, and heavy load, it is necessary to research structure optimization focusing on vibration characteristics, based on mechanical and dynamic performance analysis. This article first introduces the mechanical features and working principle of a high-speed, heavy-load palletizing robot. Kinematic analysis is carried out using the D-H parameter method, which yields the forward kinematics solution and the workspace. The Jacobian matrix is derived, and the relationship between joint space and Cartesian space is established. Second, because joint flexibility has a great influence on the vibration performance of the robot, a rigid-flexible coupling dynamic model is established, based on a simplified model of the flexible reducer and Lagrange's second equation, to describe the joint flexibility of the high-speed, heavy-load palletizing robot, and the vibration modes of the robot are analyzed. The influence of different joint stiffnesses on the frequency response of the system reveals the inherent properties of the heavy-load palletizing robot, which provides a theoretical basis for its optimal design and control.

Introduction

With the wide application of high-speed, heavy-load palletizing robots in the automobile, metallurgy, and logistics industries, automated production lines place higher requirements on the speed, load capacity, acceleration, and positioning accuracy of robots. In the high-speed, heavy-load palletizing robot, structural and joint flexibility not only reduces positioning accuracy but also limits the achievable speed. 1,2 In view of the high-speed, high-acceleration, and heavy-load working characteristics of such robots, it is not enough to complete structural analysis only at the kinematics level; dynamic analysis of the robot body is also necessary. Many achievements have been made in the study of flexible models. Bridges and Dawson 3 took into account non-linear flexibility such as transmission friction, which made the flexible joint model more appropriate. Considering non-linear links including backlash, Murphy et al. 4 established a complete flexible dynamic model of the robot using the Newton-Euler method. For the typical harmonic drive, Ghorbel and colleagues 5,6 established a model of the harmonic reducer and verified the influence of reducer flexibility on motion through theoretical and experimental analysis. Hong and colleagues [7][8][9] used the basic principles of continuum mechanics to establish a rigid-flexible coupling dynamic equation with highly accurate coupling terms. Based on the principle of virtual displacement, Lu et al. 10 supplemented and improved the kineto-elastodynamic equation in their modeling and included the coupling term between elastic deformation and the nominal rigid-body motion, which effectively improved the accuracy of the model. 11 Zhang et al. 12 proposed a structural modeling and dynamic analysis method for a palletizing robot considering joint flexibility, and analyzed the vibration modes of the robot. Considering the deformation of components, the driving motor, and the speed reducer, Lou et al.
13 analyzed the static stiffness of the end of the whole machine, and established a static stiffness model of the whole-machine end by means of the linear superposition principle. In the past, dynamic modeling and analysis of the palletizing robot stayed at the level of robot kinematics and did not address the key factors in the dynamic characteristics of the robot. Aiming at the above problems, and starting from the working characteristics of the high-speed, heavy-load palletizing robot, a structural modeling and dynamic characteristics analysis method considering joint flexibility is proposed, and the effects of different joint stiffnesses on the frequency response characteristics of the system are studied. Through this study, the inherent properties of the system are revealed, and the established kinematic and dynamic models lay a theoretical foundation for future trajectory planning and control system design. In practical terms, the conclusions of this article can be used to guide the determination of a reasonable working range and the structural optimization design of the palletizing robot, for example, by adding a balance block of appropriate mass, which can help to improve the seismic performance and increase the working range of the system.

Structural design of heavy-load palletizing robot

The design load of the high-speed, heavy-load four-degree-of-freedom palletizing robot is 300 kg, and the robot consists of four rotary joints:

1. The base is connected with the main frame through a rotating joint whose axis is perpendicular to the ground.
2. The main frame is mounted on the base by rotating joints to support the whole arm, on which there are the big arm, the small arm, and a connecting bar that keeps the wrist level. The parallel quadrilateral mechanism is composed of the big arm, small arm, and connecting bar. This mechanism not only enlarges the travel but also increases the stiffness of the whole arm. The selection of the parameters of each link and its relative installation position directly affects the position and posture of the robot in the workspace.
3. The motor and speed reduction mechanism of the third joint adopts a rear parallelogram mechanism, which allows the motors and speed reduction mechanisms of the first three joints to be placed on the base and the main framework; this markedly improves the dynamic characteristics of the system and reduces its inertia.
4. The wrist is connected with the arm through a rotating joint, and the end joint is connected by the superposition effect of parallel quadrilateral mechanisms in series, which satisfies the wrist's controllability, ensures that the rotation axis of the wrist joint is always perpendicular to the ground, reduces the control difficulty, and shortens the handling period. The wrist carries a flange, to which different forms of actuators can be attached according to the items to be stacked.

The body of the robot is shown in Figure 1.

Kinematic analysis

First, the D-H coordinate system of the high-speed, heavy-load four-degree-of-freedom palletizing robot is established. The coordinate system of the connecting bars is set as shown in Figure 2.
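For readers unfamiliar with the D-H convention used in this kinematic analysis, a minimal forward-kinematics sketch follows. The four (θ, d, a, α) tuples are hypothetical placeholders, not the link parameters of the paper's Table 1.

```python
# Sketch of the standard D-H homogeneous transform and forward kinematics.
# The joint parameters below are hypothetical placeholders.

import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform from frame i-1 to frame i (standard D-H)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Forward kinematics: chain the per-joint transforms.
joints = [  # (theta [rad], d [m], a [m], alpha [rad]) - hypothetical values
    (0.3, 0.5, 0.2, np.pi / 2),
    (0.8, 0.0, 1.1, 0.0),
    (-0.4, 0.0, 1.2, 0.0),
    (0.0, 0.0, 0.1, 0.0),
]
T = np.eye(4)
for theta, d, a, alpha in joints:
    T = T @ dh_transform(theta, d, a, alpha)
print("end-effector position:", T[:3, 3])
```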
According to the link parameters, the homogeneous transformation matrices of the links can be obtained, and the forward kinematics solution of the end effector follows from their product. The Jacobian matrix of the palletizing robot is a function of the structural parameters and the joint variables (writing sᵢ = sin θᵢ, cᵢ = cos θᵢ):

J = [ −s₁(a₁ + a₃ − d₃c₃ + d₂c₂)   c₁(d₃s₃ − d₂s₂)    −d₃c₁s₃   0 ]
    [  c₁(a₁ + a₃ − d₃c₃ + d₂c₂)   s₁(d₃s₃ − d₂s₂)    −d₃s₁s₃   0 ]
    [  0                           −d₃c₃ + d₂c₂        d₃c₃     0 ]
    [  1                            0                  0       −1 ]

The definitions and numerical values of the main structural physical parameters of the high-speed, heavy-load palletizing robot are given in Table 1.

Dynamics equation of heavy-load palletizing robot system

The second kind of Lagrange equation is used to model the high-speed, heavy-load palletizing robot system shown in Figure 2. First, the kinetic energy T of the moving parts of the system is obtained in terms of the following quantities: m_bam, the mass of the big arm motor; m_sam, the mass of the small arm motor; J_baz1, the moment of inertia of the big arm relative to axis z1; J_saz1, the moment of inertia of the small arm relative to axis z1; J_endz1, the moment of inertia of the end of the palletizing robot relative to axis z1; J_baend, the moment of inertia of the big arm relative to its own end; J_saend, the moment of inertia of the small arm relative to its own end; J_bac, the moment of inertia of the big arm relative to its own center of mass; J_sac, the moment of inertia of the small arm relative to its own center of mass; m_ba, the mass of the big arm; m_sa, the mass of the small arm; and m_end, the mass of the end of the palletizing robot. The potential energy of the system includes the gravitational potential energy P_G and the elastic potential energy P_E. Assuming that the deformation is concentrated at the end of each bar, the elastic potential energy P_E is expressed in terms of K_ba, the equivalent stiffness of the big arm; θ_bam, the rotation angle of the big arm motor; K_sa, the equivalent stiffness of the small arm; and θ_sam, the rotation angle of the small arm motor.

Defining the Lagrange function L = T − (P_G + P_E), and the generalized coordinates q_j and generalized forces Q_j of the system (j = 1, 2, ..., 6), the Lagrange equation of the second kind takes the form

d/dt(∂L/∂q̇_j) − ∂L/∂q_j = Q_j  (18)

Equation (18) is written in matrix form as

M q̈ + K q = Q  (21)

where M is the mass matrix and K is the stiffness matrix. The expression of equation (21) in state-variable space is

Ẋ = AX + BU,  Y = CX  (22)

where X is the state vector, U is the input variable, Y is the output variable, and A, B, and C are the state matrix, input matrix, and output matrix of the system, respectively. By defining the output variable through C and using equation (22), the frequency response characteristics of the system are calculated according to the specific parameters of the system.

Simulation analysis

Static stiffness analysis of palletizing robot

Structural stiffness of bars. For the heavy-load palletizing robot the load is very large, and the bars cannot be treated as rigid bodies as in a light-load robot; the influence of the static stiffness of the bars on the end deformation must be considered. Because the stiffnesses of the waist and the end effector are very large and their contribution to the end displacement is very small, only the bars of the robot need to be considered. First, the force on each bar of the whole heavy-load palletizing robot is analyzed.
The main bars occur in pairs two and three. The force acting on a member can be divided into its own gravity and the internal force generated by the end load, as shown in Figure 3. Both gravity and the internal force can be resolved into components along the axial and radial directions. Regarding the internal forces, d5, d7, d9, and d11 are only subjected to axial forces, while d1, d2, d6, and d8 are subjected to non-axial forces. At the same time, d1 and d2 are subjected to the bending moment produced by the motor. So d1, d2, d6, and d8 can be treated as cantilever beams to calculate their deformation; d5, d7, d9, and d11 can be treated as simply supported beams; and d3 can be treated as an overhanging beam. Next, taking bar d1 as an example, the static stiffness is calculated. As shown in Figure 4, the axial components of gravity and the internal force cause tension-compression deformation, while the radial components cause bending deformation. Table 2 shows the formulas for the axial and radial deformations of bar d1 caused by the internal force and by gravity.

Table 2. Formulas for the axial and radial deformations of bars produced by internal force and gravity.

                    Axial deformation            Radial deformation
Internal force      Δl_a = F_x l / (EA)          Δl_r = F_y l³ / (3EI)
Gravity             Δl_a = q_a l² / (2EA)        Δl_r = q_r l⁴ / (8EI)

Here, l is the length of the bar; F_x, F_y and q_a, q_r (force/length) are the axial and radial components of the internal force and of the bar's gravity, respectively; A is the cross-sectional area of the bar; E is the elastic modulus; and I is the moment of inertia of the cross-section. After calculation, the ratio of radial to axial deformation caused by internal force and gravity is more than 50 (Δl_r/Δl_a > 50), and the position repeatability of the heavy-load palletizing robot is ±0.5 mm, so the tension-compression deformation caused by the axial components of the internal force and gravity can be neglected, and the static stiffness of each member can be expressed from the radial deformation. According to the above method, the static stiffness of each member of the palletizing robot can be obtained under the maximum load condition (an end load of 300 kg). Then, each member of the robot can be regarded as an elastic system composed of multi-stage linear springs in series and in parallel, 14,15 and the equivalent static stiffness can be obtained by lumping the member stiffnesses into the big arm and the small arm of the active mechanism (a minimal sketch of this series/parallel lumping follows below).

Torsional stiffness of motor. For the study of the static stiffness of the motor, the AC servo motor can be regarded as a mechanical torsional vibration system. 16 The torsional stiffness follows from the natural frequency ω₀ of this system, where t is the mechanical time constant of the motor (s), J is the moment of inertia of the motor rotor (N·m/s²), and K_d is the torsional stiffness of the motor system (N·m/rad).

Torsional stiffness of RV reducer. For the RV reducer, if the input shaft (input gear) is fixed and a torque is applied to the output shaft, a torsion corresponding to the torque will occur; thus a hysteresis curve can be drawn and the static stiffness can be calculated from this curve. The specific method is as follows: after the system eliminates the backlash in a given direction, the input shaft is fixed, the load on the output shaft is increased from zero to the rated torque step by step, and the torsion angle corresponding to each loading stage is measured at the end of the output shaft. The torsional stiffness of the RV reducer is the incremental ratio of the load torque on the output shaft to the corresponding torsion angle, b/a; the corresponding relationship is shown in Figure 5.
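As referenced above, a minimal sketch of the series/parallel spring-lumping rule used to form equivalent stiffnesses follows (compliances add in series, stiffnesses add in parallel); the numerical values are hypothetical placeholders.

```python
# Sketch of the series/parallel combination of linear springs used to lump
# member stiffnesses into an equivalent stiffness. Values are hypothetical.

def series(*k):
    """Springs in series: compliances (1/k) add."""
    return 1.0 / sum(1.0 / ki for ki in k)

def parallel(*k):
    """Springs in parallel: stiffnesses add."""
    return sum(k)

# e.g. two bars in series, stiffened by a parallel brace (hypothetical, N/m):
k_equiv = parallel(series(4.0e6, 6.0e6), 1.5e6)
print(f"equivalent stiffness: {k_equiv:.3e} N/m")
```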
The torsional stiffness of the RV reducer is the incremental ratio of the load torque on the output shaft to the corresponding torsion angle, b/a; the corresponding relationship is shown in Figure 5.

According to the principle of linear superposition, the end deformation Δx of each joint-connecting bar pair can be decomposed into the end deformations Δx_motor and Δx_reducer caused by the torsion of the servo motor and of the RV reducer, and the elastic deformation Δx_component of the member itself; that is, Δx = Δx_motor + Δx_reducer + Δx_component. From the previous calculations of the static stiffness of the motor and of the reducer, it can be seen that their static stiffnesses are very large relative to those of the bars, so their deformations can be neglected; therefore, their contribution to the static stiffness of the end is not considered. Because the waist and the end actuator are irregular objects, their stiffness is very large, and their deformation can likewise be neglected. In the end, the palletizing robot can be simplified to a two-bar tandem mechanism, in which the big arm and the small arm can each be regarded as a special rigid body. The mass matrix M is obtained by calculation, and the diagonal entries of the stiffness matrix are the equivalent stiffnesses of the first three joints.

Space analysis of the terminal motion of the palletizing robot

The Jacobian matrix reflects the mapping relationship between the Cartesian-space coordinates of the robot end and the joint-space coordinates. In addition, the bi-parallelogram structure of the robot can give rise to interference during operation. According to the structural characteristics and geometric relationships of the robot, the joint angle limits that the robot should not exceed are

−165° ≤ u₁ ≤ 165°, 5° ≤ u₂ ≤ 130°, 60° ≤ u₃ ≤ 200°, 25° ≤ u₂ − u₃ ≤ 155°

According to the structural characteristics of the high-speed and heavy-load robot, the displacement of the end on the horizontal and vertical planes is plotted with MATLAB, as shown in Figure 6.

Frequency response analysis of the palletizing robot

Compared with the natural frequencies of the undamped system, the presence of damping reduces the natural frequencies, but the relative effect is small, and a specific value of damping is difficult to obtain accurately. [17] In this article, the natural frequencies of the high-speed and heavy-load palletizing robot system are calculated while the influence of damping on the natural frequencies is neglected, with the accuracy still guaranteed. At the same time, according to the theory of non-linear vibration, constant damping or non-linear damping can be neglected when calculating the natural frequencies, because damping has little influence on the vibration law. [18,19]

According to the above analysis, the natural frequencies of free vibration can be obtained. The vibration equation can be written in the form of equation (21), and the natural frequencies can be obtained from

$$\det(K - \omega^2 M) = 0$$

Defining D = M⁻¹K as the dynamic matrix, the natural frequencies of the system are related to the eigenvalues λ_i of the D matrix by ω_i = √λ_i.

The inertia matrix of the robot is related to the joint rotation angles; that is, the natural frequency of the robot is related to its attitude, and the natural frequencies of the low-order vibrations are determined by the structural parameters. [20] At the same time, they are also affected by the joint rotation angles u₂ and u₃.
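The free-vibration relations above, and the joint-stiffness scan reported in the following subsections, can be reproduced with a short Python sketch. The mass and stiffness matrices below are illustrative placeholders (the actual matrices depend on the joint angles), and the function names are ours:

```python
import numpy as np

def natural_frequencies_hz(M, K):
    """Natural frequencies from the eigenvalues of the dynamic matrix D = M^-1 K."""
    lam = np.linalg.eigvals(np.linalg.solve(M, K)).real
    return np.sqrt(np.sort(lam)) / (2.0 * np.pi)

# Illustrative lumped joint inertias (kg*m^2) and joint stiffnesses (N*m/rad):
M = np.diag([120.0, 85.0, 40.0])
K0 = np.diag([2.0e6, 1.4e6, 0.9e6])
print(natural_frequencies_hz(M, K0))  # first three natural frequencies, Hz

# Scan each joint stiffness over the multipliers used in the paper (x2, x5,
# x10 and their reciprocals) and report the first-order natural frequency:
for joint in range(3):
    for factor in (0.1, 0.2, 0.5, 2.0, 5.0, 10.0):
        K = K0.copy()
        K[joint, joint] *= factor
        f1 = natural_frequencies_hz(M, K)[0]
        print(f"joint {joint + 1}, stiffness x{factor}: f1 = {f1:.2f} Hz")
```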
The relationships between the first three vibration frequencies and the rotation angles of the high-speed and heavy-load palletizing robot, with and without the bars' elastic deformation effect, are shown in Figures 7-9. From Figures 7(a), 8(a), and 9(a), it can be seen that the ranges of the first three natural frequencies of the high-speed and heavy-load palletizing robot with the bars' elastic deformation effect are 11.17-15.28 Hz, 13.24-26.59 Hz, and 18.78-83.71 Hz, respectively. From Figures 7(b), 8(b), and 9(b), it can be seen that the corresponding ranges without the bars' elastic deformation effect are higher. The comparative study with and without the bars' elastic deformation effect shows that, when the elastic deformation of the bars is considered, the first three natural frequencies of the palletizing robot decrease and the vibration resistance of the system also decreases, which more accurately reflects the dynamic characteristics of the heavy-load and high-speed palletizing robot. Therefore, the following studies all consider the bars' elastic deformation.

Effect of joint stiffness on the frequency response characteristics of the palletizing robot

The influence of changes in the first three joint stiffnesses on the natural frequency of the high-speed and heavy-load palletizing robot is studied, which serves as the theoretical basis for the structural optimization of the robot.

The influence of the first joint stiffness (i.e., the lumbar joint) on the first-order natural frequency. Figure 10 shows the effect of the first joint stiffness on the first natural frequency when it is increased by 2, 5, and 10 times, respectively. When the stiffness of the first joint is increased by these factors, the range of the first-order natural frequency is 13.24-15.28 Hz. It can be seen from Figure 10 that increasing the stiffness of the first joint slightly increases the natural frequency of the system, but the effect is very small, so the natural frequency of the system cannot be raised by increasing the stiffness of the first joint alone.

Figure 11 shows the effect of the first joint stiffness on the first natural frequency when it is reduced by factors of 2, 5, and 10, respectively. When the first joint stiffness is reduced by these factors, the ranges of the first-order natural frequency are 7.89-15.28 Hz, 4.99-14.51 Hz, and 3.53-13.98 Hz, respectively; as can be seen from Figure 11, the natural frequency of the system decreases significantly when the stiffness of the first joint decreases. Therefore, to ensure the vibration resistance of the palletizing robot, sufficient stiffness of the first joint is needed, and this serves as an important basis for the optimal design.

The influence of the second joint stiffness (i.e., the big-arm joint) on the first-order natural frequency. Figure 12 shows the effect of the second joint stiffness on the first natural frequency when it is increased by 2, 5, and 10 times, respectively.
When the stiffness of the second joint is increased by 2, 5, and 10 times, respectively, the range of the first-order natural frequency is 11.17-18.77 Hz. It can be seen from Figure 12 that increasing the stiffness of the second joint alone cannot significantly improve the first-order natural frequency; however, if the stiffness of the second joint is increased at the same time, the vibration resistance of the system can be improved while ensuring that the palletizing robot works over a wide range of arm rotation angles.

Figure 13 shows the effect of the second joint stiffness on the first natural frequency when it is reduced by factors of 2, 5, and 10, respectively. When the stiffness of the second joint is reduced by these factors, the ranges of the first-order natural frequency are 10.13-10.81 Hz, 6.67-6.83 Hz, and 4.78-4.83 Hz, respectively; as can be seen from Figure 13, the natural frequency of the palletizing robot drops greatly when the stiffness of the second joint is reduced, and the vibration resistance of the system is greatly reduced.

The influence of the third joint stiffness (i.e., the small-arm joint) on the first-order natural frequency. Figure 14 shows the effect of the third joint stiffness on the first natural frequency when it is increased by 2, 5, and 10 times, respectively. When the stiffness of the third joint is increased by these factors, the range of the first-order natural frequency is 11.17-15.28 Hz. It can be seen from Figure 14 that the vibration resistance of the system can be significantly improved if the third joint stiffness is increased while the palletizing robot works in a larger or a smaller arm rotation angle range.

Figure 15 shows the effect of the third joint stiffness on the first natural frequency when it is reduced by factors of 2, 5, and 10, respectively. When the stiffness of the third joint is reduced by these factors, the variation ranges of the first natural frequency are 11.17-13.26 Hz, 7.92-8.39 Hz, and 5.78-5.93 Hz, respectively; as can be seen from Figure 15, the natural frequency of the palletizing robot is likewise greatly reduced when the stiffness of the third joint is reduced, which greatly reduces the vibration resistance of the system.

In summary, in order for the system to have natural frequencies high enough to improve its vibration resistance, it is first necessary to ensure that the stiffness of the first joint (i.e., the lumbar joint) is large enough, and then to moderately increase the stiffnesses of the second and third joints (i.e., the big-arm joint and the small-arm joint), while ensuring that the heavy-load palletizing robot operates at a larger rotation angle of the big arm or a smaller rotation angle of the small-arm joint.

Conclusion

2. Then, considering the high-speed, high-acceleration, and heavy-load working characteristics of the palletizing robot, a structural modeling and dynamic characteristics analysis of the high-speed and heavy-load palletizing robot considering joint flexibility is proposed, and a more accurate rigid-body dynamic equation of the palletizing robot is established using the second kind of Lagrange equation.

3. Finally, the influence of different joint stiffnesses on the frequency response characteristics of the system is analyzed. To improve the vibration resistance of the system, it is necessary to ensure that the stiffness of the first joint is large enough and to moderately increase the stiffnesses of the second and third joints.
Following this analysis and the resulting improvements, the palletizing robot can work over a larger big-arm rotation angle range or a smaller small-arm rotation angle range.

4. The motivation of this article is to reveal the inherent properties of the high-speed and heavy-load palletizing robot through the analysis of its dynamic characteristics, and to provide a theoretical basis for structural optimization design. Based on the above theoretical research, a balance block of moderate mass and size can be added to the structural design of the palletizing robot to increase its working range and improve the vibration resistance of the system. At the same time, the established kinematics and dynamics models lay a foundation for the follow-up trajectory planning and control system design.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study is supported by the Natural Science
2019-12-13T14:01:31.132Z
2019-12-12T00:00:00.000
{ "year": 2019, "sha1": "02854082468e2e5a465fc53aa553127daa36029f", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0036850419893856", "oa_status": "HYBRID", "pdf_src": "Sage", "pdf_hash": "6f3c2eeb043c7ade5a73d9e12dac41da54ef5925", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
227178585
pes2o/s2orc
v3-fos-license
Oral encapsulated transforming growth factor β1 reduces endogenous levels: Effect on inflammatory bowel disease

BACKGROUND
TreXTAM® is a combination of the key regulatory cytokine transforming growth factor beta (TGFβ) and all trans retinoic acid (ATRA) microencapsulated for oral delivery to immune structures of the gut. It is in development as a novel treatment for inflammatory bowel disease (IBD).

AIM
To measure TGFβ levels in blood and tissue after oral administration of encapsulated TGFβ.

METHODS
Animals were orally administered encapsulated TGFβ by gavage. Levels of drug substance in blood and in gut tissues at various times after administration were measured by ELISA.

RESULTS
We made the surprising discovery that oral administration of TreXTAM dramatically (approximately 50%) and significantly (P = 0.025) reduced TGFβ levels in colon, but not in small intestine or mesenteric lymph nodes. Similarly, levels in rat serum after 25 d of thrice-weekly dosing with either TreXTAM or microencapsulated TGFβ alone (denoted TPX6001) were significantly (P < 0.01) reduced from baseline levels. When tested in the SCID mouse CD4+CD25− adoptive cell transfer (ACT) model of IBD, oral TPX6001 alone provided only a transient benefit in terms of reduced weight loss.

CONCLUSION
These observations suggest a negative feedback mechanism in the gut whereby local delivery of TGFβ results in reduced local and systemic levels of the active form of TGFβ. Our findings suggest potential clinical implications for the use of encapsulated TGFβ, perhaps in the context of IBD and/or other instances of fibrosis and/or pathological TGFβ signaling.

INTRODUCTION
TreXTAM® is a proprietary micro-encapsulated drug product in development as an oral treatment for inflammatory bowel disease (IBD). It is the combination of the key regulatory cytokine transforming growth factor beta (TGFβ), encapsulated into poly-lactic acid (PLA) particles, along with a signaling form of vitamin A, all trans retinoic acid (ATRA), encapsulated in poly D,L-lactide-co-glycolide (PLGA) particles [1]. Simultaneous ATRA and TGFβ signals synergize in promoting the differentiation and stabilization of regulatory T cells [2]. This is a completely novel strategy for the treatment of IBD, as no similar products exist. However, unlike ATRA, TGFβ is a protein macromolecule that must be protected against hydrolysis in the stomach to be effective via the oral route [3]. To address these challenges, we pioneered the development of phase inversion nano-encapsulation (PIN®) technology, which utilizes a non-mechanical approach to preserve the structural integrity of macromolecules during the drug product manufacturing process. PIN-encapsulated cytokines have demonstrated stability, bioactivity, and efficacy in various preclinical models [21][22][23][24][25][26]. Particles with an average diameter of 0.1-5 microns [6] are ideally suited to oral delivery, as particles smaller than 5 microns in diameter readily traverse the gastrointestinal barrier [27][28][29]. Indeed, we had previously shown that orally administered insulin encapsulated in PIN particles resulted in localization of the drug product to the gut and efficient uptake at the intestinal border [6,7]. More recently, we applied PIN technology to the development of TreXTAM and showed that oral administration effectively ameliorated disease in two different rodent IBD models [1].
Broadly, treatment of mice with established disease using the optimized dose/frequency regimen achieved a dramatic 2- to 9-fold reduction in multiple markers of disease compared to control groups within 2 wk, in some cases approaching normal values. Importantly, treatment enhanced long-term survival over eight weeks with no detectable toxicity. Activity was associated with enhanced Foxp3 expression in colonic lamina propria CD4+CD25+ T-cells and required both TGFβ and ATRA for maximal efficacy. We have recently reviewed potential cellular and molecular mechanisms driving synergy, including cross-talk between ATRA and TGFβ signal transduction pathways [2].

During TreXTAM development, we studied TGFβ pharmacokinetics after oral administration of TreXTAM, or after the encapsulated cytokine (TPX6001) was given alone, without ATRA. We made the surprising discovery that oral administration of TreXTAM dramatically reduced TGFβ levels in colon and in blood, to below baseline levels. When encapsulated TGFβ (TPX6001) was given alone, three times a week for 25 d, we likewise observed serum TGFβ decreases below baseline (untreated) levels. Oral treatment with TPX6001 alone transiently ameliorated weight loss in the murine adoptive cell transfer (ACT) model of IBD. These observations suggest a negative feedback mechanism in the gut whereby local delivery of TGFβ results in reduced local and systemic levels of the active form of TGFβ. This finding suggests potential clinical implications for the use of encapsulated TGFβ in the context of IBD and/or pathological TGFβ signaling.

Preparation and characterization of TGFβ and ATRA loaded formulations

Microsphere preparation: For tissue studies, TGFβ (Peprotech, Rocky Hill, NJ, United States) was encapsulated into bench-top scale poly-lactic acid (PLA) particles (0.285 mg TGFβ per gram of final drug product; for simplicity and clarity, the abbreviation TGFβ refers specifically to TGFβ1, unless otherwise noted) using PIN as described previously [30]. ATRA (Sigma) was encapsulated into poly-lactic-co-glycolic acid (PLGA) particles (1 mg of ATRA per gram of particles) using a modification of the solvent evaporation technique, as in previous studies [31]. For PK studies, TGFβ- and ATRA-loaded microspheres (denoted TPX6001 and TPX7001, respectively) were synthesized at Lonza-Bend (Bend, Oregon) using a proprietary two-step spray-dry process to manufacture larger, scaled-up quantities. Briefly, in Step 1, lyophilized protein is mixed with excipients and dispersed. In Step 2, the micronized protein plus excipients are encapsulated, precipitated, and collected. To reduce dose mass, the TGFβ and ATRA spray-dried drug products were loaded at 1 mg/g and 2 mg/g w/v. The release kinetics, bioactivity, morphology, long-term (1 year) stability, as well as the physicochemical properties of glass transition temperature and crystallinity, were essentially identical in the bench lots and the spray-dried particles (data not shown). TGFβ-loaded PLA and ATRA-loaded PLGA particles were mixed cage-side in the indicated proportions to create TreXTAM, a proprietary combinatorial product designed to provide both the TGFβ and ATRA signals thought to drive the development of regulatory T cells [32][33][34][35].

In vitro drug substance release: Formulations were release-tested using an in vitro release assay described previously [24].
Briefly, for TGFβ, 0.2 mL of a 10 mg/mL particle suspension was transferred to the wells of a 96-well plate in triplicate. The plate was incubated at 37 °C in 5% CO₂; the supernatants were sampled at the indicated time points and stored at −20 °C until use. ATRA was extracted and measured by HPLC as in our previous studies [36]. The immune-reactive, active form of TGFβ was measured by assaying non-acidified samples in an ELISA (R&D Systems Quantikine ELISA kit, Catalog # MB100B). This assay does not have significant cross-reactivity or interference with TGFβ2 or TGFβ3, and does not detect the latent form of TGFβ1 without acid treatment.

ATRA extraction and analysis were performed as follows: 10 ± 0.1 mg of ATRA-containing microspheres were weighed into 15 mL Falcon tubes for each terminal time point. 1 mL of 1 × PBS was added, and the tubes were placed on an end-over-end rotator at 37 °C. At predetermined time points, the tubes were centrifuged and the supernatant discarded. The remaining microspheres were flash-frozen and lyophilized for 24 h. Microsphere samples were then extracted by adding 5 mL of pH 7 mobile phase (a 68:24:8 ratio of acetonitrile : 1% glacial acetic acid : ethanol) and bath-sonicating for 45 min. Extracted samples were then run on an HPLC using a Waters Symmetry C18 column (5.0 µm, 3.9 mm × 150 mm) at a flow rate of 1 mL/min using pH 7 mobile phase. Absorbance was measured at 356 nm.

Pharmacokinetic studies

Animals: The in-life phase of these studies was performed at Comparative Biosciences, Sunnyvale, California. 7- to 9-wk-old Sprague-Dawley rats (males and females) were kept under standard laboratory conditions with free access to food and water. They were allowed to adapt for one week before starting the study. The care and use of laboratory animals was in accordance with relevant IACUC-approved animal use protocols.

Administration of encapsulated drug products: A 0.5-mL aliquot of TreXTAM (or TPX6001 alone) in aqueous suspension was prepared by reconstitution of the drug products (TPX7001 and/or TPX6001) with distilled water, mixed in appropriate w/v proportions to achieve the targeted dosing. Animals were dosed by oral gavage. Blood samples were collected at fixed times after dosing.

Tissue analysis: 7- to 9-wk-old male Sprague-Dawley rats (n = 3 per group) were untreated, or treated (oral gavage) with TreXTAM three times per week for four weeks. Four hours after the final dose, gut tissues were taken, frozen at −20 °C, and stored until used. Tissues were then thawed and homogenized using a glass tube with the pestle insert, in the presence of EDTA-free SIGMAFAST™ Protease Inhibitor Cocktail Tablets (Sigma-Aldrich), used as per the manufacturer's instructions. Levels of TGFβ1 and ATRA in lysates were measured as described above.

Serum analysis: Serum levels of TGFβ1 were measured using an ELISA kit (R&D Systems, Minneapolis, MN; see above) with a slight modification of the manufacturer's instructions. Samples were not acid-activated, minimizing detection of the endogenous latent cytokine. For ATRA, a high-performance liquid chromatograph combined with a triple quadrupole mass spectrometer was used, as in our previous studies [36].

SCID mouse CD4+CD25− T-cell transfer colitis model

The model was chosen because it recapitulates a regulatory T cell immunological basis of colitis, and was performed as in our previous studies [1].
Briefly: Animals: Six- to 8-wk-old BALB/c and CB-17 SCID mice (males and females; Jackson Laboratories, Bar Harbor, MA, United States) were kept under standard laboratory conditions with free access to food and water and allowed to adapt for one week before starting the study. The care and use of laboratory animals was in accordance with a University at Buffalo IACUC-approved animal use protocol.

Induction of colitis: Purified CD4+CD25− T-cells were adoptively transferred to SCID recipients (4 × 10⁵ cells per mouse, i.p.). Mice were randomized into groups when 10% of mice showed 5% or greater weight loss and/or soft or bloody stools, and treatment (3× per week via oral gavage) was started. The daily disease score was recorded for each animal as in our previous studies [1] and summarized for each group as the cumulative disease score during treatment. Last recorded values of animals that died during treatment were brought forward. At the end of the treatment period, all mice were sacrificed, and colons were scored grossly for pathology on a 0 (normal) to 5 (diseased; elongated, inflamed, lacking definable stools) scale. Histology was also performed as in our previous studies [1]. Six to eight H&E sections of colon, representing ascending, transverse, and descending colon per mouse, were evaluated independently, in blinded fashion, by a board-certified pathologist (Pacific Tox Path, LLC, Ellensburg, WA, United States). A composite inflammation score was calculated based on the (0-3) severity and extent of cellular infiltration, amount of mucus, and degree of proliferation (maximum score of 12).

Statistical analysis

Significance (P ≤ 0.05) between experimental and control groups was determined using Student's t-test analysis. In experiments with multiple groups, homogeneity of inter-group variance was analyzed by ANOVA.

RESULTS

In vitro release patterns of TGFβ and ATRA and typical appearance of PLA and PLGA microsphere particles

For tissue studies and for ACT studies, TGFβ and ATRA were encapsulated using bench-top PIN or solvent evaporation processes, respectively, at 0.285 mg/g and 1 mg/g w/w, respectively. At 24 h, both TGFβ and ATRA drug products released bioactive TGFβ or ATRA (bioactivity confirmed using the TGFβ-sensitive mouse lymphoblast cell line HT-2 or ATRA-sensitive murine melanoma B16-F1 cells; data not shown) (Figure 1A and B, respectively), as expected, indicating that both drug substances could potentially be delivered in active forms, simultaneously, in vivo after oral administration.

TGFβ and ATRA in small and large intestine and MLN after oral administration of TreXTAM to male rats

To assess delivery of ATRA and TGFβ to the gut, male Sprague-Dawley rats were fed either blank particles or TreXTAM (60 mg/kg and 30 mg/kg of TGFβ- and ATRA-loaded particles, denoted TPX6001 and TPX7001, respectively, and loaded at 0.286 mg/g and 1 mg/g, i.e., approximately 17 and 30 μg/kg, respectively) three times per week for four weeks. Four hours after the final treatment, small intestine, large intestine, and MLN were collected from each animal and frozen at −20 °C. ATRA and TGFβ levels in small and large intestine, as well as in MLN, were determined by HPLC or ELISA, respectively (limits of detection 0.75 ng/mL and 0.4 pg TGFβ/100 μg of protein, respectively). Levels of ATRA in small intestine and MLN of treated and untreated animals were at the limit of detection. Levels of ATRA in colon were virtually the same in treated and untreated animals (data not shown).
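Before turning to the TGFβ measurements below, it is worth making the dose arithmetic above explicit. The following Python sketch converts particle doses and drug loadings into per-kilogram drug-substance doses; the numerical values are taken from the text, and the helper function name is ours:

```python
def drug_dose_ug_per_kg(particle_dose_mg_per_kg, loading_mg_per_g):
    # mg of drug per g of particles equals ug of drug per mg of particles,
    # so the product directly yields ug of drug per kg of body weight.
    return particle_dose_mg_per_kg * loading_mg_per_g

print(drug_dose_ug_per_kg(60, 0.286))  # TGFb particles: ~17.2 ug/kg ("~17")
print(drug_dose_ug_per_kg(30, 1.0))    # ATRA particles: 30.0 ug/kg ("~30")
```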
TGFβ was also negligible in small intestine and MLN of treated and untreated animals. However, TGFβ levels in colon of treated animals were decreased by over 50% compared to untreated animals (Figure 2). This difference was significant (P = 0.025), suggesting a treatment-associated attenuation of endogenous active TGFβ in colon tissue. Since those initial studies, we have scaled up production of PLA-encapsulated TGFβ (TPX6001) using the proprietary two-step spray-dried manufacturing process described in the methods section. Production of PLGA-encapsulated ATRA (TPX7001) has also been scaled up using spray-drying methods. Release rates and physicochemical properties of the spray-dried and bench-top materials were virtually identical (data not shown). All pharmacokinetic work to follow was performed using spray-dried TGFβ PLA and ATRA PLGA material (loaded at 0.1 and 0.2%, respectively).

Pharmacokinetics following oral administration of TreXTAM

We could not directly demonstrate simultaneous delivery of TGFβ or ATRA to gut tissue by oral TreXTAM (although we could see biological effects [1]). To further investigate this issue in vivo, and as part of our development efforts, we tested oral TreXTAM in a 28-d GLP rat toxicology study. The relevant pharmacokinetics for ATRA after TreXTAM administration have been published previously [36]. Those studies reported that after a single oral TreXTAM administration, serum ATRA levels peaked with a Tmax of 60 min and a t½ of 143 min. We report here that after oral administration of TreXTAM (30 mg/kg spray-dried encapsulated TGFβ and 30 mg/kg PLGA-encapsulated ATRA) three times a week for 25 d, serum TGFβ levels were significantly reduced compared to those observed in the same animals on day 0, prior to any TreXTAM dosing (Figure 3; NB: the level of ELISA detection is approximately 150 pg/mL). This finding was reminiscent of our observations of reduced TGFβ in colon after dosing (Figure 2). We also note that in the pre-dose, naïve animals (n = 24 per sex), females had higher endogenous levels of TGFβ than males (492 ± 107 pg/mL vs 324 ± 14 pg/mL). This difference was highly significant (P < 0.0001). Similar observations were reported previously by Knabbe et al [37].

Oral treatments with PLA-encapsulated TGFβ reduce serum levels of TGFβ

We also tested spray-dried PLA-encapsulated TGFβ (TPX6001; loaded at 1 mg/g w/w) given alone in a similar 28-d GLP rat toxicology study (Figure 4). Once again, when similar analyses were performed on naive animals and on the same animals that had been dosed three times per week for 25 d, a dramatic and highly significant (P < 0.01) treatment-related reduction in serum TGFβ levels was evident for all dose groups (Figure 4). Indeed, the reduction in baseline serum TGFβ was dose dependent, in that the difference between the low and high dose groups was also significant (P < 0.03).

Figure 1: Release profiles of TGFβ-loaded poly-lactic acid microspheres and ATRA-loaded poly-lactic-co-glycolic acid microspheres. TGFβ was encapsulated in poly-lactic acid (PLA) microspheres (285 μg of TGFβ per gram of particles) using phase inversion nano-encapsulation (PIN). ATRA was encapsulated into poly-lactic-co-glycolic acid (PLGA) microspheres (1 mg of ATRA per gram of particles) using a modification of the solvent evaporation technique (see methods section).
A: TGFβ-loaded microspheres were release-tested using the in vitro release assay described in the methods section; B: ATRA-loaded microspheres were release-tested using an in vitro extraction assay described in the methods section. Data are expressed as pg/mL or as μg/mL ± SE. TGFβ: transforming growth factor β; ATRA: all trans retinoic acid.

It was also interesting to once again note that in naïve pre-dose animals (n = 24 per sex), females had significantly (P = 0.001) higher levels of TGFβ than males (492 ± 107 pg/mL vs 324 ± 14 pg/mL), indicating the same gender bias.

Effect of TPX6001 on disease in the SCID mouse CD4+CD25− ACT model of IBD

We next tested the IBD therapeutic potential of TPX6001 oral treatments when given alone, without ATRA, in the SCID mouse ACT model of IBD. This single, preliminary study used a highly challenging therapeutic iteration of the model. Treatments began at disease onset. There were no significant differences between groups in terms of body weight or disease score at the start of treatment. We found that TPX6001 treatment resulted in significant attenuation of weight loss (Figure 5). The differences between the 5 mg and 40 mg doses (days 3 to 12) were significant (P = 0.01) compared to animals treated with blank microspheres. The difference between the 10 mg and blank groups during that same period achieved only a trend (P = 0.12), possibly because of two deaths in the 10 mg group. It is also interesting to note that the high-dose group showed the most benefit for the first 7 d of treatment, but then deteriorated rapidly.

At the end of the study, for each group, we calculated the cumulative disease score during treatment (blank-fed group = 52; 5, 10, and 40 mg treatment groups = 49.5, 52.6, and 47, respectively); colon weight-to-length ratios (blank-fed group = 55.3 ± 14.3; healthy age- and sex-matched controls = 27.9 ± 4.4; 5, 10, and 40 mg treatment groups = 52.8 ± 9.1, 56.5 ± 15.3, and 57.4 ± 9.9, respectively); gross pathology (blank-fed group = 2.9 ± 0.9; 5, 10, and 40 mg treatment groups = 3.7 ± 1.4, 2.4 ± 1.4, and 3.8 ± 1.1, respectively); and histology composite inflammation scores (blank-fed group = 8.75; 5, 10, and 40 mg treatment groups = 9.4, 7.8, and 8.1, respectively). We found no significant differences, except at the 10 mg dose (P = 0.003), which again may have been biased by the deaths of 2 animals in that group. We also note a trend in favor of treatment with respect to cumulative disease score in the high-dose group (P = 0.08). Therefore, we conclude only a slight, transient benefit of TPX6001 treatment in this iteration of the ACT model.

Figure 5: Therapeutic activity of transforming growth factor β-loaded particles (TPX6001) in the SCID mouse adoptive CD4+CD25− T-cell transfer model of inflammatory bowel disease. Mice (n = 6-9 per group) with established disease were weighed (day 0) and fed transforming growth factor β1 microspheres (5, 10, or 40 mg/mouse) or blank microspheres (40 mg/mouse) in 0.2 mL water 3 times per week for 2 wk. Mice were monitored for overall disease score and weighed 3 times per week for two weeks. Mice were sacrificed 2 d after the last dose, serum was taken, colons were weighed and measured, and colon samples were prepared for histological analysis (five randomly selected sections from each mouse). Data are expressed as % change in body weight relative to the day of first treatment.
The 5 and 40 mg/mouse TPX6001-treated groups were significantly different (P = 0.01 on days 3-12) from animals treated with blank microspheres.

Multiple (28 d) oral treatments (thrice weekly) with either TreXTAM or encapsulated TGFβ were safe and well tolerated at the highest doses tested

For both the TreXTAM and PLA-encapsulated TGFβ GLP pharmacokinetic studies, full industry-standard toxicology analyses, including clinical observations, clinical pathology, necropsy, histopathology, and ophthalmology, were also performed on both male and female animals. There were no statistically significant differences in body weights or weekly food intake among groups, and no significant organ weight changes. There were no test article-related histopathological or other findings, and no fibrosis was observed even with the highest doses at the end of treatment (Day 28) or at the end of a 56-d recovery period (data not shown). Encapsulated TGFβ, when given alone, was as safe and as well tolerated as the TreXTAM combination.

DISCUSSION

We report here that oral TreXTAM produced a surprising and dramatic decrease in serum and colonic TGFβ levels. While we could not directly demonstrate simultaneous delivery of both drug substances to gut tissues, our in vitro and pharmacodynamic observations suggest that it was achieved. In animals given either TreXTAM or PLA-encapsulated TGFβ (TPX6001) alone 3 times per week for 25 d, we observed dramatically lower serum TGFβ compared to the same animals before dosing. We also found evidence for a transient benefit of oral TPX6001, at least in terms of weight loss attenuation, in the murine adoptive cell transfer (ACT) model of IBD.

TreXTAM is being developed as a treatment for Crohn's disease (CD) and ulcerative colitis (UC). CD and UC are chronic disorders of the GI tract causing significant morbidity for over 1.4 million Americans [38]. An appreciation of common inflammatory pathways led to the joint designation "IBD". Symptoms include diarrhea, nausea, abdominal pain, and weight loss; the diseases carry an increased risk for colorectal cancer [39] and can be fatal [40]. Although etiologies are incompletely understood, genetic, immunologic, and environmental factors all make significant contributions [38,41]. Human and animal studies implicate abnormal responses to commensal microflora and perturbed local immune homeostasis [38,39,41]. 'Biologics', macromolecules that target inflammatory lymphocytes or the cytokines they produce [42], have emerged as a new class of highly effective treatments. However, an estimated 30% of patients will not respond, and of those who initially respond, 50% relapse within a year. A more recent review indicates only a modest impact on surgical intervention rates [43]. The need for novel, targeted therapies remains acute. TreXTAM aims to address that need by taking advantage of the synergistic effects of ATRA and TGFβ on the differentiation and stabilization of regulatory T cells [2].

TGFβ is a pleiotropic cytokine with multiple effects on many cell types. It is a key regulator of T-cell biology, impacting thymocyte development, differentiation, and effector function [44]. On the one hand, complete loss of TGFβ signaling leads to lymphoproliferative autoimmunity [45][46][47]; on the other hand, systemic administration in microgram doses protects in several autoimmune disease models [48][49][50][51].
Unfortunately, TGFβ is also associated with serious side effects, including pulmonary fibrosis [52][53][54][55], scleroderma [56], chronic GVHD [57], and glomerulonephropathies [58]. To circumvent these toxicities, local delivery via gene therapy has been proposed, but it is inconvenient, transitory, imprecise, and immunogenic [48,50,59]. There is no means to control signal transcription or translation, dose schedule, release rates, or unwanted immune responses. TreXTAM aims to circumvent these problems through local delivery, reducing systemic exposure to the drug substances, with the hope of reducing effective doses and toxicities.

Because of the known fibrotic effects of TGFβ, exacerbation of fibrosis in the context of IBD was a serious concern for oral TreXTAM treatment. The results reported here suggest the opposite might be true, especially in colon, where TreXTAM reduced endogenous TGFβ levels. 28-d TreXTAM repeat-dosing studies in rats, like the one reported here for encapsulated TGFβ alone, showed no TreXTAM-induced fibrosis in any organ, including small intestine and colon (data not shown). Further, we tested TreXTAM, both in healthy mice and in SCID animals with CD4+CD25− T-cell-induced colitis, for up to 8 wk, and likewise found no increases in fibrosis in any organ (Auci et al, unpublished observations). Considering the results reported here, oral treatment with TreXTAM, or even treatment with encapsulated TGFβ alone, may be useful to stimulate autocrine negative feedback and reduce TGFβ levels, to prevent IBD-associated fibrosis.

Our inability to detect increased TGFβ in small intestine and MLN after TreXTAM treatment may be due to insignificant amounts of TGFβ being delivered despite effective particle uptake in the Peyer's patches and MLN [7]. This may relate to the failure of the particles to reach the colon, or to rapid degradation and/or deactivation of the released TGFβ. Uptake by other tissues, binding to cell surface proteins or other factors, as well as the potential conversion of TGFβ1 to TGFβ2, TGFβ3, or its latent form, would have prevented an increase from being detected. Perhaps most surprisingly, we observed a highly significant TreXTAM-associated decrease (approximately 50%) of active TGFβ in the colon. While this may relate to effects of the particles themselves, a more intriguing possibility involves ATRA-mediated attenuation of TGFβ expression and signaling [60]. Several studies report that ATRA decreases TGFβ levels and/or signaling in various tissues [61][62][63][64]. ATRA modification of TGFβ signaling may also help explain the lack of treatment-associated fibrosis observed in our previous studies [1]. Reduction of endogenous TGFβ in colon and its simultaneous delivery to immune structures such as Peyer's patches and MLN may contribute to the TreXTAM-associated benefits in models of IBD.

As with the observations in colon, decreases in systemic TGFβ were observed when the encapsulated cytokine was delivered with ATRA in the form of TreXTAM, but also when it was given alone. Therefore, at least the systemic attenuation of TGFβ levels does not require ATRA and can be achieved with just the encapsulated cytokine. The role of TreXTAM in IBD, including its prophylactic and/or therapeutic usefulness for Crohn's disease and/or colitis, awaits further studies in various models aimed at determining the contribution each component plays in the efficacy observed.
Our finding of higher levels of TGFβ in female vs male rats is reminiscent of other studies in humans [65] and in non-human primates, where TGFβ levels were found to be higher in young females as compared to males. Interestingly, TGFβ levels decreased with age in females and increased with age in males, suggesting an effect of sex hormones [66]. The extensive literature describing the activities of TGFβ in the context of autoimmunity and infection has already been extensively reviewed [67], and its consideration is beyond the scope of this work. Suffice it to say that an important intersection for the cross-talk between TGFβ signaling pathways and sex hormones may lie at the generation and stabilization of regulatory T cells.

Reminiscent of our observations with TreXTAM in tissue, we found that oral TPX6001, when given alone, without ATRA, also reduced serum levels of the endogenous cytokine. The mechanism(s) by which oral treatment with encapsulated TGFβ could lead to a reduction in systemic and tissue levels remain unknown. They may relate to the synthesis or release of mediators by cells, increased uptake and/or deactivation by other tissues [68], and/or effects on pathways specific to immune structures of the gut. TGFβ is synthesized as an inactive precursor, a complex consisting of a TGFβ dimer, the latency-associated protein, and the latent TGFβ binding protein [69]. Before TGFβ can exert its biological effects, both must be dissociated. Therefore, our findings may also relate to specific activation/deactivation pathways, which may be controlled by the gut. It is also possible that our findings relate to switching between immunologically (ELISA) distinct isoforms of TGFβ (1, 2, or 3) [70]. The potential biological significance of such switching is unclear.

To our knowledge, we were the first to administer PLA-encapsulated TGFβ via the oral route [1]. Our preliminary observations in the ACT model of IBD suggest only a transient benefit of oral TPX6001 treatment. However, several studies report activities of oral TGFβ when given as an intact protein. Shiou et al [71] reported that oral administration of TGFβ (30 ng/mL) suppressed pro-inflammatory cytokine production (including IL-6 and IL-8) in the gut of rat pups. The suppression was associated with suppressed NF-κB signaling. Systemic TGFβ levels were not measured. An earlier publication by Ando et al [72] reported increased serum TGFβ in mice after oral administration of the intact protein. Those studies also reported enhancement of oral tolerance. Additional studies in the ACT model, as well as in other models of acute and chronic IBD, will be necessary to fairly evaluate the therapeutic potential of oral TPX6001 when given alone in IBD, and perhaps also in other specific clinical situations where increased TGFβ levels are pathogenic, for example in certain challenging forms of breast cancer [73]. Such studies are the subject of forthcoming work from our laboratories.

CONCLUSION

These observations suggest a negative feedback mechanism in the gut whereby local delivery of TGFβ results in reduced local and systemic levels of the active form of TGFβ. Our findings suggest potential clinical implications for the use of encapsulated TGFβ, perhaps in the context of IBD and/or other instances of fibrosis and/or pathological TGFβ signaling.

Research background

TreXTAM® is a combination of transforming growth factor beta (TGFβ) and all trans retinoic acid (ATRA) microencapsulated for oral delivery to immune structures of the gut.
It is in development as a novel treatment for inflammatory bowel disease (IBD).

Research motivation

When given together, ATRA and TGFβ signals synergize in promoting the differentiation and stabilization of regulatory T cells.

Research objectives

This is a completely novel strategy for the treatment of IBD, as no similar products currently exist. TreXTAM would represent an entirely novel IBD treatment modality.

Research methods

During TreXTAM development, we studied TGFβ pharmacokinetics after oral administration of TreXTAM, or after the encapsulated cytokine (TPX6001) was given alone, without ATRA. This is required for combinatorial products.

Research results

We made the surprising discovery that oral administration of TreXTAM dramatically reduced TGFβ levels in colon and in blood, to below baseline levels. When encapsulated TGFβ (TPX6001) was given alone, three times a week for 25 d, we likewise observed serum TGFβ decreases below baseline (untreated) levels. Oral treatment with TPX6001 alone transiently ameliorated weight loss in the murine adoptive cell transfer (ACT) model of IBD.
2020-11-12T09:07:17.752Z
2020-11-08T00:00:00.000
{ "year": 2020, "sha1": "0f470eebb70d4e8fac45cffdb49eb2a6fe589a69", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4292/wjgpt.v11.i5.79", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "77c382b330b48b12ff1d1ccba0c0bfec365ed741", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
251409442
pes2o/s2orc
v3-fos-license
Thirty years of research on physical activity, mental health, and wellbeing: A scientometric analysis of hotspots and trends

The sheer volume of research publications on physical activity, mental health, and wellbeing is overwhelming. The aim of this study was to perform a broad-ranging scientometric analysis to evaluate key themes and trends over the past decades, informing future lines of research. We searched the Web of Science Core Collection from inception until December 7, 2021, using appropriate search terms such as "physical activity" or "mental health," with no limitation of language or time. Eligible studies were articles, reviews, editorial material, and proceeding papers. We retrieved 55,353 documents published between 1905 and 2021. The annual scientific production is exponential, with a mean annual growth rate of 6.8% since 1989. The 1988-2021 co-cited reference network identified 50 distinct clusters that presented significant modularity and silhouette scores indicating highly credible clusters (Q = 0.848, S = 0.939). This network identified 6 major research trends on physical activity, namely cardiovascular diseases, somatic disorders, cognitive decline/dementia, mental illness, athletes' performance-related health issues and eating disorders, and the COVID-19 pandemic. A focus on the latest research trends found that greenness/urbanicity (2014), concussion/chronic traumatic encephalopathy (2015), and COVID-19 (2019) were the most active clusters of research. The USA research network was the most central, and the Chinese research network, although important in size, was relatively isolated. Our results strengthen and expand the central role of physical activity in public health, calling for the systematic involvement of physical activity professionals as stakeholders in the public health decision-making process.

Introduction

Physical activity can be considered medicine and has been used in both the treatment and prevention of a variety of chronic conditions (1). Longitudinal cohort studies demonstrate that low cardiorespiratory fitness constitutes the largest attributable fraction for all-cause mortality (2). There is also overwhelming evidence that low physical activity (i.e., not meeting physical activity recommendations) is an important risk factor for chronic conditions, including some cancers, cardiovascular disease, diabetes, and dementia, in particular for patients with mental illness (schizophrenia, bipolar disorder, or major depressive disorder) (3)(4)(5). Patients with mental illness have poor physical health compared with the general population, with reduced life expectancy and a higher risk of premature death from natural causes, beyond suicide (6). At least partially, among other factors, their poor physical health is due to higher sedentary behavior and lower physical activity compared with the general population (7,8). Physical activity, and its structured form, exercise, seems to affect the brain and mind beyond physical health, both as a factor associated with poor mental health and quality of life and as a treatment for mental disorders (9). Indeed, exercise has been shown to be efficacious in a number of mental disorders, according to a previous umbrella review pooling 27 systematic reviews (10,11). Exercise is also now seen as a potential preventive or disease-modifying treatment for dementia and brain aging (12), or as a possible treatment for negative symptoms in schizophrenia (13).
Importantly, systematic reviews, meta-analyses, and umbrella reviews have offered a deep synthesis of specific research questions addressed within the exponential volume of physical activity literature related to mental health and wellbeing. However, such systematic methods may not be appropriate to encompass hundreds or thousands of new publications per year. In fact, systematic reviews have to be narrow in their inclusion criteria and offer a comprehensive view on a specific and restricted research or clinical question. For instance, a meta-analysis can inform whether an intervention is efficacious for a given population on an outcome of interest (14,15), and an umbrella review can assess the credibility of an association between a risk factor and an incident condition (16)(17)(18)(19). Nevertheless, neither offers insight into the temporal trends of research or the complex network of topics, authors, publications, institutions, and their bibliometric performance. Gaining such an overarching view of an entire field of research on a particular topic is important and useful, in order to gauge how the academic literature is developing and to inform the next steps for the science to pursue.

The integration of developments in data visualization, text mining, and network analysis has permitted the emergence of a new framework and a new generation of research synthesis of both evidence and influence, named research weaving (20). This framework combines visual analytics and scientometrics to visualize and delineate the development of a field, its underlying intellectual structure, and the dynamics of scholarly communication over time (21). A comprehensive delineation of how scientometrics and bibliometrics overlap and differ can be found in Hood and Wilson's 2001 paper (22). To the best of our knowledge, no broad-ranging scientometric study of research trends and influence networks of physical activity, mental health, and wellbeing has yet been conducted. Thus, in this article, we present one to bridge the gap.

Search strategy and data collection

We searched the Web of Science Core Collection (WOSCC) on December 7, 2021, using a combination of keywords and Medical Subject Headings such as "physical activity," "mental health," and "mental illness*." WOSCC provides full references and complete citations of articles published in major journals since 1900 and is one of the largest comprehensive sources for bibliometric studies (23). The full protocol with the search key is available on osf.io. The current study protocol is based on a first large-scale scientometric analysis (24). The database source was limited to the Web of Science Citation Index Expanded. The document types were limited to "article," "review," "editorial material," and "proceeding papers," without restrictions on language or time. The dataset was extracted from the WOSCC in tag-delimited plain text files. In order to assess the quality of the reference filtering process and the homogeneity of the dataset, we independently inspected each of the most cited references (604 articles in total), and
a randomly selected sample of 10% of included articles, to allow a margin of error (i.e., inclusion of non-relevant papers) of 5% with a 95% confidence interval (Supplementary Table 1; Figure 1).

Figure 1: Co-citation reference network with cluster visualization (1988-2021). The units of measure are articles, which constitute the nodes. Nodes are organized according to year of publication. The size of a node (article) is proportional to the number of times the node has been co-cited. Colored shades indicate the passage of time, from the past (purplish) to the present (yellowish).

Objectives

The primary outcome was to visualize research trends on physical activity related to mental health and wellbeing and to characterize the evolution of research trends using networks of co-cited references and networks of co-occurring keywords assigned to relevant publications. The secondary outcome was to provide clinicians, researchers, and policymakers with specific units of measure of the research network (countries, institutions, authors, and journals) and to identify emerging trends and limitations.

Data analysis

Two different software tools for constructing bibliometric networks were used: the Bibliometrix R package (3.1.4) (25) and CiteSpace (version 5.8.R4) (21). Bibliometric outcomes included citation counts, co-citations, and co-occurrences. A co-citation count is defined as the frequency with which two published articles are cited together by subsequently published articles (26). Co-occurrence networks are based on how frequently two entities, such as keywords, appear in the same articles. The Bibliometrix R package was used for the analysis of publication outputs and the trend of growth. CiteSpace was used for the study of several types of networks, namely networks of co-cited references, networks of co-cited authors, and co-occurrence networks of authors, keywords, institutions, and countries. For instance, the co-cited (authors') institutions network accounts for the cooperation between two or more institutions, which reflects the cooperation between authors and the influence networks.

CiteSpace produces a variety of metrics of significance: temporal metrics such as citation burstness, structural metrics such as betweenness centrality, modularity, and silhouette score, as well as a combination of both, namely the sigma metric. The betweenness centrality of a node measures the fraction of shortest paths in the underlying network passing through that node (27). The burstness of the frequency of an entity over time indicates a specific duration of a surge in that frequency (28). The sigma indicator combines structural and temporal properties of a node, namely its betweenness centrality and citation burst (29). Modularity (the Q score) measures the quality of dividing a network into clusters, and the silhouette score (the S score) of a cluster measures the quality of a clustering configuration (30). The Q score ranges from 0 to +1. The cluster structure is considered significant with a Q score >0.3, and higher values indicate a well-structured network. The S score ranges from −1 to +1. If the S score is >0.3, 0.5, or 0.7, the network is considered homogeneous, reasonable, or highly credible, respectively. In addition, we conducted a structural variation analysis that focuses on novel boundary-spanning connections to detect transformative papers, ranked on their divergence modularity (31). These transformative papers can potentially change the existing structure of knowledge. We extracted cluster labels from keywords associated with the articles responsible for the formation of a cluster, selected by the likelihood ratio test (p < 0.001). Each cluster was closely inspected, and cluster labels were eventually improved based on the authors' judgment.
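To illustrate the metrics used here, the following is a minimal Python sketch, using networkx rather than CiteSpace, that builds a toy co-citation network and computes betweenness centrality and the modularity Q score of a greedy community partition; it also spells out the g-index used for the calculations described in the next subsection. The toy data, weighting scheme, and function names are ours:

```python
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Each set lists the references cited by one citing paper; two references
# co-cited by the same paper have their shared edge weight incremented.
citing_papers = [
    {"A", "B", "C"}, {"A", "B"}, {"B", "C", "D"}, {"D", "E"}, {"C", "D", "E"},
]
G = nx.Graph()
for refs in citing_papers:
    for u, v in itertools.combinations(sorted(refs), 2):
        w = G.edges[u, v]["weight"] if G.has_edge(u, v) else 0
        G.add_edge(u, v, weight=w + 1)

print(nx.betweenness_centrality(G))   # structural "broker" nodes
parts = greedy_modularity_communities(G)
print(modularity(G, parts))           # the Q score of this clustering

def g_index(citations):
    """Largest g such that the g most-cited papers together
    have at least g*g citations (Egghe's g-index)."""
    total, g = 0, 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

print(g_index([10, 8, 5, 4, 3, 0]))   # -> 5, since 10+8+5+4+3 = 30 >= 25
```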
The second level of the data filtering process was applied during the generation of networks within each dataset (e.g., most cited references) in order to detect duplicates, references without authors, or any non-relevant unit of measure, which was excluded (e.g., DSM reference; CIM-10) or merged (e.g., author Motl RW and Motl W Robert). The g-index was used for all calculations. This index gives credit for lowly cited or non-cited papers while still crediting highly cited papers, thus partially alleviating the bias toward highly cited papers seen with the h-index (32). CiteSpace general parameters are reported in Supplementary Information 1.

Results

Analysis of publication outputs, major journals, and growth trend prediction

We report a flowchart with details of the 56,442 documents retrieved from the Web of Science Science Citation Index Expanded and the different steps of our scientometric study: identification and screening of studies, software analyses, and expert review interpretation (Supplementary Figure 1). Among the retrieved documents, 1,089 documents were excluded, and 55,353 documents encompassing 1,306,828 references were retained (47,105 articles; 6,671 reviews; 564 editorial materials; 1,013 proceeding papers). The data filtering process consisted of the inspection of each of the 604 highly cited papers, editorial materials, and proceeding papers, and the inspection of 10% of randomly selected titles of the retrieved documents. Only 4% (n = 224 articles) were not relevant (Supplementary Figure 1). The retained 55,353 articles were published between 1905 and May 2022 in 24 different languages (95.1% of articles in English). Annual scientific production was still exponential in 2022, with a mean annual growth rate of 6.8% between 1989 (n = 17) and 2022 (n = 5,604) (Supplementary Figures 2, 3). The first article identified was a Franz SI and Hamilton GV article on "the effects of exercise upon the retardation in conditions of depression" published in the American Journal of Insanity (33).

Analysis of co-citation references: Clusters of research and most cited papers

Clusters of research

We constructed a synthesized network of co-cited references based on articles published during the 1988-2021 time period, as suggested by CiteSpace after the removal of empty time intervals to optimize time slicing (Figure 1). In this network, each node represents a highly co-cited article. We further explored the latest research trends by extracting co-citation networks for recent time periods. The 1988-2021 network identified 50 different clusters, with a single constellation of 26 clusters revealing six distinct major trends of research on physical activity, namely cardiovascular disease, somatic disorders, cognitive decline/dementia, mental illness, athletes' performance-related health issues and eating disorders, and the COVID-19 pandemic. The link walkthrough over time between clusters, based on burstness dynamics for the 1988-2021 network, is available as a video on osf.io.

Most cited papers

We report the top 10 most co-cited references for the 1988-2021 time period in Table 1.

Analysis of co-occurrence of keywords

The use of author keywords can help identify the latest trends of research and choose search keywords for future reviews. The co-occurrence author keywords network for 1988-2021 is shown in Supplementary Figure 5, and that for the 2016-2021 time period in Figure 2. In this network, each node is a highly co-occurring keyword.
Both networks presented significant modularity and silhouette scores indicating credible clusters (Q = 0.3327, S = 0.6823 and Q = 0.3971, S = 0.6614 respectively). Analysis of influence and co-operation network Co-cited countries and co-cited institutions network We produced the co-cited countries and co-cited institutions network ( Figures 3A,B). Units of measures were authors' countries and authors' institutions. A significant modularity and silhouette score were found (Q = 0.5321; S = 0.785). Overall Co-authorship, co-cited and co-cited journals network Our dataset includes 1,306,827 citations with an average of 31.85 citations per document. About 175,508 different authors were found, with an average of 3.17 authors and 5.76 coauthors per document in 4,193 different sources (e.g., books and journals) (Supplementary Figure 1). We produced the co-authorship networks, which are the social networks encompassing researchers that reflect collaboration among them, each node representing a different highly cited co-author (Supplementary Tables 2G,H). We further produce the co-cited author network that permits to visualize "who cites who" for the last 5 years (2016-2021 network) was also conducted (Supplementary Figure 9). The burstness analysis revealed that the most co-cited first authors according to our datasets were Brooks SK, Wang CY, Ogden CL, Holmes EA, and Kandola SA. Figure 10). We conducted the co-cited journal network that retained 2,879 journals and showed the highly cited journals with high betweenness centrality (Supplementary Figure 11). (Supplementary The top five highly cited journals were Archives of General Psychiatry (JAMA), The Lancet, PLOS ONE, Medicine and Science in Sports and Exercise, and the New England Journal of Medicine ( Table 2). The burstness analysis further reveals that five journals with the latest beginning of burst were Frontiers in Psychology, The Lancet Psychiatry, International Journal of Environmental Research and Public Health, Nutrients, and Frontiers in Psychiatry (Supplementary Tables 2E,F). Summary of the main findings To the best of our knowledge, this is the first broad scientometric that proposes a comprehensive overview of the development of research on physical activity, mental health, and wellbeing. We retained 55,353 documents revealing an exponential growth of scientific production since the 90s. The USA holds for decades the leading position in research; however, China is very active since 2020 with an important burst of citations, mainly due to publication on COVID-19. The King's College London and Harvard University were the most influential institutions in terms of citation count. In supplement to actual reviews, this scientometric study reveals the influence and collaboration network, which could help researchers to identify major scholarly communities and establish potential research collaboration. Identification of research trends The six distinct major trends of research identified expose the history and the latest development of research on physical activity, mental health, and wellbeing. The first major trend of research concerns physical activity and cardiovascular disease, reminding the past and present intertwine. First research focused on cardiovascular disease (35). 
The large body of research on evidence synthesis of the last decades that mainly focused on the prevention to treatment role of physical activity for cardiovascular disease started with guidelines for exercise testing (37,78), and that continues to date with consideration of cardiometabolic risk factors (39). The extension of prevention and treatment of physical activity to other somatic disorders constituted the second major trend, making levels of physical activity a public health priority (41), that continues to date (79). Another trend, which emerged after 2000, is the potential of physical activity for the prevention and treatment of dementia with increased importance of evidence-synthesis studies (51,80,81). Physical activity has also been explored as a potential intervention for the prevention and treatment of dementia. As regards to prevention, it has been demonstrated that physical activity is a protective factor against Alzheimer's disease and other types of dementia (82, 83). As a treatment, recently an umbrella review has pooled evidence from as many as 27 systematic reviews, including 18 with meta-analyses, overall reporting on 28,205 participants with mild cognitive impairment or dementia (84). The authors showed that mindbody intervention and mixed physical activity interventions had a small effect on global cognition, whereas resistance training had a large effect on global cognition in those with mild cognitive impairment. In people affected by dementia, a small effect of physical activity/exercise emerged in improving global cognition in Alzheimer's disease and all types of dementia. Importantly, physical activity/exercise also improved other outcomes not strictly related to cognition, including the risk of falls, and neuropsychiatric symptoms. Adjacently, a massive body of evidence has organized an important trend of research on the benefits of physical activity for both prevention and treatment of severe mental disorders, in particular depression (4,85,86) and schizophrenia (71,87). More recently, the evidence has focused on evidence-synthesis (10,74) and mental health/wellbeing (9). Other lesser, although highly relevant trends were also uncovered, such as the importance of physical activity for athlete's performance (88,89). While most of the research efforts in that area have focused on how to optimize performance in the context of professional athletics (90), perfectionism, and excessive physical activity can also be a symptom of mental disorders, and eating disorders in particular (58). This research trends now focus on concussion and its consequence (chronic traumatic encephalopathy) (60). Finally, a large body of research has focused on physical activity and COVID-19. Physical activity is a protective factor for COVID-19 complications (91). During COVID-19 research has also focused on restrictions and physical activity (63). Finally, physical activity's relevance has also been shown to extend beyond the clinical sciences and start to dialogue with greenness and urban planning (66,92,93). Although various trends of research have developed these last decades, we can identify two important gaps, the one of the roles of physical activity in the prevention or treatment of substance-use disorders, and the one regarding the socioeconomic inequalities in access to physical exercise (94). 
Meta-review covering this subject (10) concluded that exercise can improve multiple mental health outcomes in those with alcohol-use disorders and substance-use disorders; however, further research is needed in these conditions, notably with the use of mind-body practices (95,96). Strengths and limitations This work has strengths and weaknesses. Strengths are its novel evidence-synthesis approach, complete systematic reviews, and meta-analysis, by providing information on the evolution of research trends over time, the visualization of networks of authors, countries, and institutions, and that go beyond common measures of academic bibliometric performance (i.e., impact factor, H-Index, number of papers or citations). This novel research framework permits repeatable, reproducible, and comparable analysis with less bias than conventional time-consuming reviews that are vulnerable to biased coverage/selection. Limitations are that, despite the quality check procedures outlined in the methods, this is not a systematic review. Furthermore, gathered data were only obtained from WOSCC, which can limit retrieved publication (94, 97). Also, the centrality and number of citations are not necessarily indicative of the quality of a work, as faulty publications can be highly cited because they are frequently criticized as well (98). Finally, no reporting guidance is available for scientometric studies yet, given their recent introduction in the literature. Conclusion In conclusion, researchers have consistently focused on the role of physical activity on cardiovascular disease, other somatic disorders, dementia, mental disorders, athlete's performance, and eating disorders and more recently on COVID-19 pandemic, which clearly shows the role of physical activity as medicine across physical and mental disorders. More recently, the literature has focused on green space, urban planning, and behavior change, further expanding the multidisciplinary reach of physical activity. Taken together our results strengthen and expand the specific and central role of physical activity in public health, calling for the systematic involvement of physical activity professionals as stakeholders in the public health decision-making process. Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Author contributions MSa and MSo: conceptualization and writing-original draft preparation. MSa, CC, and MSo: methodology, formal analysis, and investigation. MSa, CC, OS, JD, DV, JF, LS, BS, SR, FS, and MSo: writing-review and editing. CC and MSo: supervision. All authors contributed to the article and approved the submitted version. Funding Open access funding was provided by the University of Geneva. Conflict of interest Author OS has received advisory board honoraria from Otsuka, Lilly, Lundbeck, Sandoz, and Janssen in an institutional account for research and teaching. Author JF has received consultancy fees from Parachute BH for a separate project. Author BS is on the Editorial Board of Ageing Research Reviews, Mental Health and Physical Activity, the Journal of Evidence Based Medicine and the Brazilian Journal of Psychiatry. Author BS has received honorarium from a co-edited a book on exercise and mental illness, advisory work from ASICS & ParachuteBH for unrelated work. Author MSo has received honoraria/has been a consultant for Angelini, Lundbeck and Otsuka. 
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
2022-08-09T13:56:20.408Z
2022-08-09T00:00:00.000
{ "year": 2022, "sha1": "cadbb06df37fbfa01eb82854086f8727aeb20562", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2022.943435/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0a5e538aebe028191e4b37f346083f92ca4e75c7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
139285540
pes2o/s2orc
v3-fos-license
Novel Highly Titania Doped YSZ Anodes for SOFCs In the ternary system ZrO2-Y2O3-TiO2 compositions with titania concentrations of 18 mol-% can be dissolved in the cubic fluorite structure. The electrical properties of these compositions close to the high titania, low yttria limit were found to have a predominant ionic conductivity of about 0.01 Scm'1. Substantial electronic conductivity of about 0.2 Scm'1 at 930°C is introduced into the system at oxygen pressures below 10'13 atm. For applications in the SOFC, I-V polarisation studies were performed on anode compositions, screen printed on YSZ electrolytes, using a gold mesh current collector. A low effective contact area of about 20% of the geometrical area suggested that these materials need to be supplemented by a current collecting component. With increasing polarisation of the electrode, the effective contact area decreased due to oxidation of the electrode. However currents, related to the effective contact area were reasonable. By impedance studies the polarisation losses were associated with the electrode and electrolyte resistances, diffusion and charge transfer losses were not very large, perhaps indicating the benefit of an ionically conducting electrode. INTRODUCTION Mixed ionic and electronic conducting oxides show some very attractive characteristics for use as anode materials in solid oxide fuel cells (SOFC). Yttria stabilized zirconias with 1-12 mol% Ti dopant and cubic fluorite structure were considered as promising candidates for SOFC anodes, because they have excellent stability at high temperatures, good compatibility with the YSZ electrolyte, mixed conductivity and might offer high electrocatalytic activity [1,2,3]. A good electrocatalytic activity of such materials is attributed to their good electronic and ionic conductivity, increasing the ability to accelerate the electrode gas reaction. The charge-transfer reaction between lattice oxygen and fuel can then occur over the entire electrode area, whereas in Ni-YSZ cermets this electrode reaction is limited to the three phase contacts between fuel, electrode and electrolyte. However, because of the low electronic conductivity of these doped YSZ compositions with maximum 10 mol% titania additions (-0.02 Scm'1 at 1000°C under fuel cell conditions), these materials by themselves were concluded not to be promising candidates for fuel cell anodes [4]. Investigations of the phase diagram Y2C>3-ZrO2-TiO2 by Feighery et al. [5] revealed that the cubic fluorite structure of YSZ can be retained to much higher titanium contents up to 18 atom%. In this paper, results of investigations of the electrical properties of the most Electrochemical Society Proceedings Volume 99-19 541 promising high titania doped compositions with the general formula YxTiyZri_(X+y)O2-x/2 are reported along with polarisation studies on some of these compositions. EXPERIMENTAL As starting materials TiO2 (Aldrich), Y2O3 (Aldrich) and ZrO2 (Hopkin&Campbell Ltd) powders were used. The samples were synthesised by a solid state reaction between the oxides. The oxides were mixed in the ratios according to the respective formula, ball milled for 1 hr with high speed in ethanol in a zirconia container using zirconia balls and pressed into pellets. The pellets were sintered at 1500°C for 48 hrs in air. Powder X-ray diffraction measurements on the samples were performed to identify their structure, lattice parameter, and phase composition on a Stoe Stadi-P diffractometer . 
The densities of the pellets were between 75% and 90% that of the theoretical value. Different levels of doping with Titania, which acts as a sintering aid, and varying yttria contents have significant influence on the sintering characteristics and the density of the pellets. For impedance measurements in air a Solartron 1260 impedance analyser was used over the frequency range 10 MHz to lOOmHz. Measurements were performed on the as pre pared pellets, which were coated with an organo-platinum paste on each face. The plati num electrodes on the pellets were first dried at 120°C for 30 minutes and then fired at 1000°C for 1 hour. The samples with a density between 70 and 90% of theoretical density, were mounted in a compression jig with Pt wire electrodes. For the jig a temperature dependent resistance correction had been determined. The conductivity jig was fixed in a horizontal tube furnace and the a.c. impedance measurements were made in 50 degree steps in air between 400 and 1000°C. The electrical conductivity was measured in the oxygen partial pressure range from 0.21 atm to 1x1 O'22 atm at 930°C by DC techniques. The low oxygen pressures were obtained by flowing hydrogen into the apparatus. The actual oxygen partial pressure was measured by a zirconia sensor. Polarisation studies were performed using 4-electrode geometry. The working electrode (2 cm2) was screen printed onto a 300gm thick electrolyte, made from 8 mol% yttria-sta bilised zirconia with small amounts of alumina-addition (CeramTec AG). Contact was made with this electrode via a gold mesh. A Pt counter electrode was applied to the op posite face of the electrolyte plate and a Pt reference electrode was applied to each face of the electrolyte , Figure 1. A series of measurements were performed at increasing cur rents, with both dc and ac measurements being taken at each current level. Stability of the YvTiyZri4X +V )O2-x/2 System Investigations about the system Y2O3-TiO2-ZrO2, in which the extent of the cubic fluorite region was determined, have been published elsewhere by Feighery et al. [5]. Different compositions near the edge of the single phase fluorite region with the highest possible titania content have been investigated to maximise the electrical conductivity for application as SOFC anode materials ( Figure 2). Although the phase equilibria were determined for samples prepared in air at 1500, the data are believed to pertain well to experiments performed at 1000°C in fuel gas. The extent of the fluorite system in air at equilibrium at 1000°C may well be slightly less than that at 1500°C; however the extent of solid solution would be expected to be more extensive at low oxygen partial pressures. In any case, the system is very refractory and would only be expected to transform significantly over a period of months at 1000°C or less. The Ionic Conductivity of C ubic T itania Doped YSZ in A ir The temperature dependence of the conductivity of the YxTiyZri-(X +y)O2-x/2 fluorite solid solution samples (YZT) with high Ti content (18 atom-%) and close to the low Y limit (15 atom-%) in air has been previously reported [6]. Arrhenius conductivity plots show straight lines, indicating that there is not a change in the activation energy Ea within the temperature region of study. For these compositions the ionic conductivity decreases with the yttria content. The reason for this decrease is believed to be the same as for highly yttria doped YSZ. 
The ionic conductivity in YSZ decreases with Y-content above 8 mol% Y2O3, because of the formation of dopant-oxygen vacancy clusters. This is related to the greater likelihood for trapping oxygen vacancies by the formation of associates between the oxygen vacancies and the dopant cation [7]. An EXAFS study of 10 mol% YSZ by Catlow et al. [8] indicated that the oxygen vacancies must be preferentially sited adjacent to the Zr4+ ion and not to the Y3+ ion. A change in activation energy Ea for a dopant concentration below 20 atom% yttrium between 550 and 650°C was reported, which has been attributed to the dissociation of defect clusters [9,10]. In the system Y2Ch-TiO2-ZrO2 this change in Ea was not observed, which may be due to stronger dopant-vacancy interaction. Only a small increase of the activation energy Ea from about 1.15 eV (for compositions with Y<20atom%) to 1.25 (for Y=25 atom%) was observed as the Y-content was increased, which may be due to the tendency of the small Ti atoms to relieve lattice strain and so reduce activation energy. The magnitude of ionic conductivity of the titania doped samples is about one order of magnitude lower than 8mol%YSZ [2]. This has been attributed to a stronger association of the oxygen vacancies with the small Ti ions [11] or by a tetragonal short range order in the cubic fluorite structure [3]; however, it should be noted that the conductivity values are very similar to those for samples with similar Yz/, Vo' ' contents in the absence of Ti. As has been reported previously [12] it seems clear that the Yttrium concentration and hence degree of clustering is the dominant factor determining ionic conductivity. T o tal E lectrical C onductivity at D ifferent Oxygen P artial Pressures In Figure 3, the dependence of the total conductivity Gy on oxygen partial pressure at 930°C is given for the sample Yo.2Tio.1sZro.6201.9. The electronic contribution Ge has been calculated according to the equation Ge-Gy-G;, assuming that the concentration o f oxygen vacancies is effectively constant over the investigated pO2 range and that the value for Gy gives the value of the ionic conductivity Gi at high pO2. The electronic contribution of the conductivity Ge for the re-oxidation range between 10'14-1 O'20 atm is shown in Fig. 3 as open circles, fitted to a straight dotted line. The calculated slope for this log(Ge) vs. log(pO2) plot and the plots for all other measured samples follow a po2 14 dependence in the pC>2 range from 10'13 to 10'2° atm. Substantial electronic conductivity is introduced into the system at oxygen partial pressures below 10'13 atm via reduction of Ti4+ ions to Ti3+. The hysteresis between the reduction and oxidation curve in Fig 3 is related to the relatively high density of the sample (80% theoretical density), which did not allow this sample to reach equilibrium (complete reduction or reoxidation), at each measured point. The points obtained on reoxidation between oxygen partial pressures of 10'2° atm and 10' 13 atm and from 10'3 atm to 10'1 atm are the most reliable as the reoxidation occured @20 times more slowly than the initial reduction. It is always difficult to attain equilibrium in the region 1 O'10-10 5 atm. due to the extremely low concentration of active species available in the gas phase.. Results from DC polarisation measurements for single cells with an Yo.2TiyZri.(o.2+y)Oi.9 anode composition (with y=0.15 and 0.18) are presented in figure 5. 
Following from these experiments it appears that electrodes containing 18 atom % Ti achieve a higher performance than electrodes containing 15 atom% Ti. In the shown voltage-current characteristics the currents are related to the contact (or active) electrode area, calculated from impedance measurements at each current step. The actual cell voltage of the cell with a Yo.2Tio.i8Zro.620i.9-anode drops from open circuit voltage of about 1.1 V at OCV (1=0) to 0V at a current density of about 0.1 A/cm2, if the current is related to the total geometrical area of the cell (not shown in Fig. 5). Curve 2 reveals, that the "active" area (or contact area) performs well in the electrochemical oxidation of hydrogen. The ratio of the apparent contact area to physical contact area, which is about 20% at OCV, decreases with potential (figure 6), indicating that the electrode becomes oxidised and less conductive with increasing polarisation losses. The impedance spectra also show that for this fluorite composition ohmic losses due to the poor conductivity of the electrode exceed or, at the highest potential losses, are equal to the overpotential losses due to electrochemical phenomenon such as charge transfer and diffusion. This indicates that the high ionic conductivity of this electrode facilitates the fuel oxidation processes. Figure 6 Change of ratio of apparent to geometric electrode area with decreasing cell voltage (or increasing overpotential) measured on a Yo.2Tio.1sZro.6201.9 based single cell. CONCLUSIONS In the system TiO2-Y2O3-ZrO2 electronic conductivities of about 0.2 Scm'1 with a simultaneous ionic conductivity of 0.01 Scm'1 can be achieved on 18 mol% titania doped samples at 930°C and IO'20 atm. The stability of highly Ti-doped YSZ should be investigated under long-term fuel cell condition, i.e. at 800-1000°C in fuel gas. Highly titania doped YSZ compositions show a good mixed conductivity and may be considered as the ceramic component in cermet structures for the anode. Currents, Corrected for effective contact area, eg 400 mAcm'2 at 500mV, were promising; however the effective contact area was only of the order of 20%. With increasing polarisation of the electrode, the effective contact area decreased due to oxidation of the electrode. Impedance studies showed that most of the polarisation losses were associated with the electrode and electrolyte resistances; diffusion and charge transfer losses were not very large, perhaps indicating the benefit of an ionically conducting electrode. ACKNOWLEDGEMENT We thank the European Community (Research Network Contract No FMRX-CT97-0130 (D.G. 12 MSPS)) the EPRSC and Tioxide Specialities Ltd. for financial support.
2019-04-30T13:09:04.604Z
1999-01-01T00:00:00.000
{ "year": 1999, "sha1": "a31200997b37f2257a18398239fd7669f0acddc0", "oa_license": null, "oa_url": "https://doi.org/10.1149/199919.0541pv", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "b807d6c50ee1c424f9375b9af22b9a7135c7340b", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
232044383
pes2o/s2orc
v3-fos-license
Harnessing wind energy on merchant ships: case study Flettner rotors onboard bulk carriers Shipping faces challenges of reducing the dependence on fossil fuels to align with the international regulations of ship emissions reduction. The maritime industry is in urgent need of searching about alternative energy sources for ships. This paper highlights the applicability of harnessing wind power for ships. Flettner rotors as a clean propulsion technology for commercial ships are introduced. As a case study, one of the bulk carrier ships operating between Damietta port in Egypt and Dunkirk port in France has been investigated. The results showed the high influence of the interaction between ship course and wind speed and direction on the net output power of Flettner rotors. The average net output power for each rotor will be 384 kW/h. Economically, the results reveal that the use of Flettner rotors will contribute to considerable savings, up to 22.28% of the annual ship’s fuel consumption. The pay-back period of the proposed concept will be 6 years with a considerable value of levelized cost of energy. Environmentally, NOx and CO2 emissions will be reduced by 270.4 and 9272 ton/year with cost-effectiveness of $1912 and $55.8/ton, respectively, at annual interest rate of 10%. Introduction More than 90% of the international trade is transported by ships (Jiang et al. 2018;Pasha et al. 2020). In year 2019, about 92,295 ships of 1.98 million deadweight shared in the maritime field activities, and more than 60,000 ships transported billions of tons of cargo worldwide (UNCTAD 2019). On the other hand, this growth contributed significantly to increasing the amount of emissions from ships. Annually, vessels emit large quantities of pollutants into the air, principally in the form of nitrogen oxide (NO x ), particulate matter (PM), and sulfur oxide (SO x ), which have been steadily expanding and affect human health Seddiek 2018, 2020;Seddiek 2016). Latest statistics revealed that maritime transport is responsible for producing 3% of the world's total greenhouse gas emissions, contributing to global warming and extreme weather effects (Abdelkhalek et al. 2014;Bouman et al. 2017;Sadek and Elgohary 2020;Seddiek 2017;UNCTAD 2019). In continuous steps, the International Maritime Organization (IMO) seeks to reduce the adverse effect of ship emissions by issuing the relative regulations within Annex VI of Marine Pollution Convention 78 (Ammar 2018(Ammar , 2019aAmmar and Seddiek 2017;Halff et al. 2018;IMO 2018IMO , 2019. In addition to the international legislations, IMO pursues to urge those interested in the shipping industry to reach zero emissions ship either through the use of alternative fuels or a clean source of energy onboard ships (Ammar 2019b;Joung et al. 2020;Scott et al. 2017). Regarding alternative fuels, all practical studies and researches proved that LNG can be considered an alternative and possible solution to replace conventional fuels in maritime shipping as it shows the lowest cost compared with the other alternative technologies Responsible editor: Philippe Garrigues (Elgohary et al. 2014;Mohseni et al. 2019). With reference to new power sources, many research studies have been examining the applicability of using fuel cells, solar panels, and wind turbines to be a source of energy onboard ships (Clodic et al. 2018;Ghenai et al. 2019;Welaya et al. 2011;Welaya et al. 2013) . Wind energy can be considered as an attractive option for marine applications. 
It is a renewable source and can be used in combination with low carbon fuels (Crist 2009;Parker 2013;Traut et al. 2014). The focus of the current paper is to evaluate the potential for Flettner rotor-powered ships. The literature review addresses the use of Flettner rotors in marine applications from the viewpoints of practical applications and researches. It was shown by De Marco et al. (2016) that Anton Flettner, in 1920, was the first engineer who studied the effectiveness of using Flettner rotors as ship propulsion system. Buckau ship is considered the first ship that enrolled this system in 1924 as a retrofit ship. Two years later, M/V Barbara became the first new build ship equipped by Flettner rotors (Seifert 2012). After a long time, in year 2010, Enercon, a wind energy company, launched a Flettner-powered cargo ship named E-Ship 1. Recently, two 30-m-tall rotor sails have been installed on a Maersk Tankers vessel (Norsepower 2015). Arief et al. (2018), Lele and Rao (2017), Searcy (2017), Seifert (2012), Talluri et al. (2018), and Traut et al. (2014) analyzed and evaluated the potential for implementing Flettner rotor systems for different ship types. As a step toward studying the applicability of the use of Flettner rotors in marine application, the present paper aims to analyze the Flettner rotor performance and to study the effect of different parameters on the turbine's net output power. These parameters include the effect of wind characteristics, ship speed, lift coefficient, motor speed, and rotation coefficient. In addition, a bulk carrier ship is investigated as a case study. An economic and environmental analysis for the case study is performed to evaluate the economic feasibility of Flettner rotors, levelized cost of energy, and their effectiveness in reducing ship emissions. Flettner rotors' principles and fundamentals In order to study the technical, environmental, and economic effects of using Flettner rotors onboard ships, the theoretical operation of these systems should be firstly presented. The main principle for the operation of Flettner rotors is the Magnus effect as shown in Fig. 1a (Lele and Rao 2017;Pearson 2014). It was first described by Heinrich Gustav Magnus in 1953 (Lele and Rao 2017;Talluri et al. 2018). The Magnus effect correlates with the velocity and pressure of a moving fluid (Gupta et al. 2017;Kray et al. 2014;Pezzotti et al. 2020). As the pressure of a moving fluid decreases, its velocity will be increased and vice versa. When induced wind current attacks a rotating cylinder, it retards the air on one direction and accelerates it on the opposite direction. Highand low-pressure regions will be formed around the rotating cylinder (Badalamenti and Prince 2008;Searcy 2017;Seifert 2012;Zdravkovich 1997). As a result of this pressure difference, lift and drag forces will be developed in the perpendicular and the parallel directions of the wind flow, respectively, as shown in Fig. 1b. The produced lift and drag forces by the Magnus effect are a supporting mean for ship propulsion if a rotating cylinder is properly mounted onboard the ship. The rotation of the cylinder is controlled by an electric motor mounted onboard the ship. The power consumed by the motor and the thrust produced from the Flettner rotor will regulate the amount of the main engine power that can be replaced. The produced thrust can be calculated as the summation of the lift and drag forces in the ship direction (Ballini et al. 2017;Bentin et al. 2016;Bordogna et al. 
2020;Bordogna et al. 2019;De Marco et al. 2016;Kray et al. 2012;Mittal and Kumar 2003;Traut et al. 2014). Flettner rotor modeling In this section, a simple technical, environmental, and economic modeling is presented. The modeling is divided in two subsections. The first section will present the technical model of the Flettner rotor. However, the second part illustrates the economic and environmental issues in case of using Flettner rotor onboard ships. Technical modeling For a moving vessel with a speed of (V s ), the true wind speed and direction (V t , γ) will affect the Flettner rotor performance. This is because the changes in true wind direction to a moving vessel will result in a change in apparent wind speed (V a ). Consequently, the apparent wind speed will affect Flettner rotor to generate the thrust in ship direction. Figure 2 illustrates the angles between ship and wind velocities as well as two coordinate systems. The (X h , Y h , Z h ) coordinate system is used for the vessel hull, while (X f , Y f , Z f ) is introduced for the course through the ocean. The apparent wind speed (V a ) can be calculated as a function of the vessel speed (V s ) and the true wind speed (V t ), as expressed in Eq. (1), assuming very small drift angle. In addition, its direction can be determined using Eq. (2): The rotational speed of the Flettner rotor has a great influence on the produced power. The rotors are assumed to be structurally connected to the ship hull. The rotation coefficient (C rot ) is the ratio between the rotor rotational speed (U rot ) and the apparent wind speed as expressed in Eq. (3): The flat plate boundary layer theory can be used to calculate the resistive force and the power required to overcome the skin friction of Flettner rotor system. Schlichting's formula can be used to calculate the skin friction coefficient as a function of Reynolds number. The power needed to turn the rotor (P con ) and to overcome the friction force can be calculated using Eq. (4). It can be noted that this is an approximate assessment and additional aspects, as bearing roughness can also influence the required power: where ρ a is the air density, A r is the surface area of the rotor, Re is the Reynolds number. Re can be calculated using Eq. (5): where L Re is the characteristic length of rotor, and μ is the air dynamic viscosity. To determine the effective power in ship direction (P s ) and then the net output power of the Flettner rotor (P net ), the lift and drag forces are resolved. P s and P net can be calculated considering the ship propulsion efficiency η as expressed in Eqs. 6 and 7 (Lele and Rao 2017): where C L is the lift coefficient, C D is the drag coefficient, and A is the maximum wind-projected area of the Flettner rotor. The lift and drag coefficients for each Flettner rotor can be calculated based on the numerical modeling as expressed in Eqs. 8 and 9: where SR and AR are the spin and the aspect ratio of Flettner rotor. (a ijk ) and (b ijk ) are coefficients related to the geometrical and functional operations of Flettner rotors. Their values based on the fit procedure for the numerical results, as presented in De Marco et al. (2016). de/d is the large end plate and Flettner rotor diameter ratio. Equations 8 and 9 are valid for the following ranges: The effect of the interaction between Flettner rotors on the flow field includes potential and viscous parts. The viscous section impact appears as wake and turbulence produced by flow separation and vortices. 
The circulation induced changes in the apparent wind speed and direction for each Flettner rotor. The analytical solution for the potential flow effects in the x-y plane for a group of Flettner rotors is presented by Garzón and Figueroa (2017). The two components of the induced flow for each Flettner rotor in the X and Y directions, V x and V y , can be calculated using Eqs. 10 and 11, respectively (Tillig and Ringsberg 2020): where r is the Flettner rotor radius, and V ind. is the induced velocity at each Flettner rotor. V ind. can be calculated as a function of the rotor circulation Ω. The circulation can be calculated using known lift coefficients (C L ) using Kutta-Joukowsky formula or from model tests (Abbot and Von Doenhoff 1959;Bordogna et al. 2020). The relation between V ind. , Ω, and, C L is given in Eqs. 12 and 13 (Houghton et al. 2017;Swanson 1961): where, R c presents the ratio between the Flettner rotor radius (r) and the distance to the external vortex and (γ) is the vortex direction. The values of R c and γ according to model tests are 0.25 and 210°, respectively, in the range of 1.0 ≤ SR ≤ 4.0. Environmental and economic modeling To evaluate the environmental benefits in case of using Flettner rotors, the annual ship fuel saving (AFS FR ) is calculated using Eq. 14. It depends on the saved propulsion power during the trip (P FR ) in kW, specific fuel consumption of the main diesel engine (SFC ME ) in kg/kWh, the sailing hours per year (H), and number of Flettner rotors (N FR ). The annual reduction in emissions due to using Flettner rotors (AER FRS ) can be calculated using Eq. 15 (Ammar 2018;Ammar and Seddiek 2017: where F e is the emission factors for the engine in g/kWh. These factors for slow speed marine diesel engine operated with heavy fuel oil are 18.1, 10.29, 1.4, 620.62, 0.6, and 1.42 g/kWh corresponding to NO x , SO x , CO, CO 2 , HC, and PM emissions, respectively (IMO 2018(IMO , 2019. From an economic point of view, the annual Flettner turbines cost (AFTC) will depend mainly on the initial, installation, operation, and maintenance cost and could be estimated as shown in Eq. (16): where i is interest rate, n is the expected ship's working year after installation of turbines, CC is the initial cost of one rotor in USD per unit, C y D y is the installation cost of one rotor in USD, P cos is the power consumed for Flettner rotor in kW for one rotor, SFC AE is diesel generator-specific fuel consumption in kg per kW, F c is fuel cost in USD per ton, and C O&M is the maintenance and operation cost in USD per hour. On the other hand, the optimistic effect of Flettner turbines appears in the form of annual saving cost (AFTSC), which may be determined as follows: where, p is the expected increasing or reducing percent of fuel cost. Moreover, it preferred to estimate the levelized cost of energy (LCOE) as an indication of importance of applying the proposed concept realistically. This parameter is widely used, and it was presented by the International Renewable Energy Agency (IRENA), which formulated it to be expressed as follows (Aldersey-Williams and Rubert 2019): where x is the number of years of the investment period, t is one of the years during period x, I t is capital expenditures costs (CAPEX) in year (t), M t is operation and maintenance expenses costs (OPEX) in year (t), F t is the fuel cost in year (t). E t is the number of Megawatt hours generated by the Flettner rotors during year t, and r is the discount rate. 
It is important to combine the environmental benefits that can be achieved through the application of Flettner turbines technique onboard ships and the costs involved in this application. To calculate the cost-effectiveness (CE em ) in dollars per ton, the annual costs of the project are divided by the annual quantity of emission reduced as a result of this proposal, as follows (Ammar 2019b;Seddiek 2017, 2018): where ACV is the annual cost value of the proposed Flettner rotors ($/year), ASV is the annual fuel saving ($/year), and ER q is the annual emission reduction for each species of exhaust gases in ton/year. Case study Wadi Alkarm (IMO: 9460760) is a bulk carrier ship that was built in 2011. The ship is registered in Alexandria port with registry number of 3633 under the Egyptian flag. The principal dimensions of the ship length, breadth, and draft are 229 m, 32 m, and 14.46 m, respectively. The dead weight of the ship is 80,533 MT with a total gross tonnage of 43,736 MT. The transported cargo includes coal, iron, ore, and grain with Suez Canal gross and net tonnages of 45,016 MT and 40,414 MT, respectively. The main technical data for the case study can be summarized in Table 1 (FleetMon 2020; Marine Traffic 2020). M/V Wadi-Alkaram sails in a route from Damietta port in Egypt to Dunkirk port in France. Figure 3 shows the ship directions that changed at Tunisia, Algeria, Morocco, Spain, and Portugal. With average speed of 13 knots, the ship takes about 13 to 15 days for one route. Figure 4 illustrates the proposed locations of four Flettner rotors on the ship's main deck. The rotors will be distributed along the ship length, specifically at port and starboard of hatches number 2 and 4. The rotors will be installed in the way that lets free moving for crew to carry out the necessary operations and maintenance activities on the main deck, and it will not hinder the loading and unloading of the cargo. In terms of the possibility of installation, the bulk carrier ships are considered one of the most suitable ships for installing this type of turbines, as there are no winches on the surface where the goods are loading and unloaded, whether by external winches or by belts. The selected Flettner rotor model to be installed onboard the case study is the Norsepower rotor (Norsepower 2018). The rotors are installed onboard the ship on foundation with a bolt connection and height of 2.5 m. The rotor height and diameter are 24 m and 4 m, respectively. The total weight of the rotor including the foundation is 34 tons. The supported tower for the rotor is a cylindrical steel structure. The variable rotor speed changes from 0 to 225 rpm with 90 kW electric motor drive. The operational wind speed range for the rotor starts from 0 to 25 m/s with a survival wind speed of 70 m/s and a maximum continuous thrust of 175 kN. The following assumptions are made for simplicity regarding ship route, ship stability, and wind speed and direction. The investigated ship is assumed to travel on a fixed route at constant speed from Damietta port to Dunkirk port. The installation of the Flettner rotors will not have an effective impact on ship stability and displacement. The initial calculations for the Flettner rotor technical results are performed at ship speed of 13.5 knots, during cruise in the open sea, over true wind angles from 0°to 360°. Both ship drift angle and bearing friction are neglected. 
The ship propulsion system efficiency is assumed to be 60% (Lele and Rao 2017; Tillig and Ringsberg 2020; Traut et al. 2014). The values for the air density and dynamic viscosity are assumed 1.225 kg/m 3 and Results and discussion In this section, the technical, environmental, and economic results for using Flettner rotor onboard the selected case study are presented. The results imply the effect of wind speed variation, ship's speed, lift coefficient, motor speed, and rotation coefficient on the net output power of Flettner rotors. In addition, the environmental and economic effects of using Flettner rotor onboard the case study are discussed. Technical and environmental results It is important to include the aerodynamic interaction effects among different Flettner rotors when studying their performance onboard a ship. The potential part of this interaction is dominating as the apparent wind speeds and directions are changed according to the induced circulation (Tillig and Ringsberg 2020). The apparent wind speed and direction at each rotor is highly affected by the position and the circulation of the other rotors. Therefore, the lift and drag coefficients, as well as the optimum rpm, should be calculated according to the characteristics of the experienced wind speed and directions. The wind speeds and angles agree with these practices by sail boats at various locations in a fleet (Bethwaite 2013). The optimum rpm for each rotor is calculated according to wind speed, wind direction, and the induced velocity at each rotor, and the preferred spin ratio ranges from 1.0 to 3.0 for marine applications (De Marco et al. 2016). To show the interaction effects of the installed four Flettner rotors, the induced flow field for the case study, at V s = 13.5 knots and V t = 6.5 m/s, is illustrated in Fig. 5. The induced flow is calculated using MATLAB program, according to Eqs. 10-13, at true wind angle of 225°(V a = 12m/s, β = 158°). The longitudinal and transverse distances between rotors are 65 m and 25 m, respectively. In addition, the apparent wind speeds and directions at each rotor are shown in Fig. 5, considering the different interaction among the four rotors. Among the important parameters that define Flettner rotor performance are the lift and drag coefficients which can be calculated using Eqs. 8 and 9. They determine the lift and drag forces generated and consequently the system effective forces in ship direction and perpendicular to it. Figure 6 shows the effective forces in ship direction for the four Flettner rotors, organized according to Fig. 5, considering the aerodynamic effects with and without interaction. The effective force values are calculated at optimum rpm for each rotor whose diameter and height are 4 m and 24 m, respectively. These dimensions are consistent with the ship taken into consideration. The calculations are performed at ship speed of 13.5 knots, according to the wind characteristics of the route for the case study. The aim of these considerations is a rough evaluation of the potentiality of the Flettner rotors as a marine propulsion device. It can be noticed that the forward Flettner rotors 1 and 2 benefit from the interaction, while the aft rotors 3 and 4 suffer from it. In addition, the side force produced from each Flettner rotor is augmented as a result of flow interactions. The net output power of each Flettner rotor with and without aerodynamic interaction, organized according to Fig. 5, can be presented in a polar plot as shown in Fig. 7. 
It explains the variation rotor's net power output because of change of the true wind speed from 5 to 25 m/s. The other variables including lift and drag coefficients change for each rotor according to its circulation and spin ratio. It can be noticed that the rotor power consumption is increased as the wind velocity increases. The net output is positive as the thrust generated is higher than the resistive force. The net output power depends on the wind characteristics and ship speed. Figure 7 a and b show the effect of reducing ship speed from 13.5 knots to 10 knots on the net output power at different wind characteristics. The results are identical for the positive and negative angles Fig. 6 Effective force in ship direction from the four Flettner rotors with and without interaction at ship speed of 13.5 knots Fig. 5 The apparent wind speeds and directions for the four rotors at ship speed of 13.5 knots, V t =6.5 m/s, and γ=225°3 because of the symmetric characteristics of the net output power. The highest values of the net out power for the selected Flettner rotor can be obtained at true wind speeds higher than 22 m/s over ranges of wind angles from 105 to 135 degrees and from 225 to 255 degrees. The maximum net output power, shown in Fig. 7, is reduced from 704 to 437.5 kW at true wind directions of 120°and 240°when the ship speed is reduced from 13.5 to 10 knots. The speed of the rotor determines the required power for its operation which affects the net output power of the Flettner rotor. Figure 8 shows the effect of coefficient of Flettner rotor rotation, organized according to Fig. 5, on the net output power for the four rotors considering the aerodynamic interaction among them. The figure is drawn at an average rpm of 80 for the four rotors considering wind speed, wind direction, the induced velocity at each rotor, and different spin ratio for each rotor. As the coefficient of rotation increases, the net output power will be reduced due to the increased power required for rotor rotation. In contrast, the net output power is increased at low rotational coefficient values. To evaluate the environmental benefits of the Flettner rotors, the annul fuel saving is calculated based on the saved propulsion power. Using wind data available in the meteoblue climate data (Meteoblue 2020), the total annual fuel saving, from the four rotors, can be calculated as shown in Fig. 9. It shows the annual fuel saving based on the ship route and wind characteristics all over the year. The highest annual fuel saving, from Fig. 9, is 3528 ton/year achieved in January. On the other hand, the lowest annual fuel saving will be 223 ton corresponding to October wind characteristic data. The average net output power for the ship route all over the year based on wind speed, direction, and ship course is 1537 kW with an average annual fuel saving of 1693 ton/year for the used four rotors. Finally, using four Flettner rotors will lead to saving in ship fuel consumptions by 22.28%. From the calculated values of the annual fuel saving due to using Flettner rotors, the annual emission reduction can be estimated. Figure 10 shows the values of the reduced NO x , SO x , CO, CO 2 , and HC at true wind angles from 0°to 360°a nd ship speed of 13.5 knots. These values are calculated for the four Flettner rotors, according to the wind characteristics of the route for the case study. The highest emission reduction can be achieved at true wind angles of 120°and 240°. 
This is due to the increased rotor's lift force and the net output power. In contrast, the lowest reduction values will be at wind angles of 0°and 360°. The actual amount of reduced emissions due to using the Flettner rotor onboard the selected case study, based on the ship route from Damietta to Dunkirk ports, is shown in Fig. 11. The higher the value of emission reduction, the more environmental benefits will be gained. The highest annual emission reduction rate is 9272 ton/year for reducing CO 2 Environmental benefits due to using four Flettner rotor models onboard the case study at ship speed of 13.5 knots emissions. This is because of the high percent of carbon in the used heavy fuel oil used. The second-high reduced emission species is the NO x emissions due to the high nitrogen percent in the air during burning of marine fuel in the diesel engine of the ship. Economic results Using the data collected about M/V Wadi Alkarm regarding sailing periods, and fuel prices, Fig.12a presents various scenarios for the annual CAPEX cost in case of applying the proposed system onboard the ship. Cases I, II, and III show the variation of the annual Flettner turbine cost (AFTC) with the expected ship's working years, after installation of the rotors onboard, at different interest values. The number of working years plays a role in reaching the minimum value of annual capital cost. The AFTC value for the present study will be about $399,978, $517,212, and $650,463 per year, assuming annual interest rates of 5%, 10%, and 15%, respectively. Moreover, Fig. 12b presents the value of ship's annual saving cost due to installation of Flettner turbines onboard, which is achieved because of reducing the annual ship's fuel consumption, at different scenarios of fuel price trends. The figure implies the possibility of achieving variable annual saving cost, which could reach to $1.30 and $3.29 million by the end of the determined project's lifetime, at 5% and 10% increment percentages, respectively. However, this value would account only $175,000 if the fuel price showed a dramatic annual decrease by 5%. On the other side, Fig. 13 presents the net saving value (NSV) at different economic scenarios for the proposed concept. Case (b) presents NSV for the selected case study at the current fuel price, case (a) presents NSV with an expectation of fuel price increment by 10%; however, case (c) presents NSV with an expectation of fuel price reduction by 5%. Fig. 13, the proposed concept will be able to support the ship's operating costs after 6 and 8 working years for case (a) and case (b), respectively. On the other hand, with the condition of case (c), the project will be economically useless regardless of the ship's age. With reference to LCOE, Fig. 14 shows the tendency of this value at different conditions. Case I presents the value of LCOE for the case study with $0.05/kWh after 8 years of operation. Case II presents the effect of occurrence of an evolution in the turbine industry, which in turn will affect the cost of manufacturing and the accompanying reduction in the annual cost of the project. The current cost for one of the proposed Flettner rotors is $750,000 (GL-MEEP 2019). It can be noticed that a reduction in Flettner turbine cost by one third will lead to a reduction in LCOE by 30%. However, case III presents the effect of decreasing the annual ship's sailing days, which may have occurred due to the impact of a slowdown in the growth of the global economy. 
The figure shows that an accordance of decreasing the ship sailing period by 20% will increase the value of LCOE by 26%. This implies that applying the renewable energy onboard ships is sensible by the surrounding international events just as it is happening now with COVID-19. Figure 15 shows the cost-effectiveness in case of applying Flettner turbines for the selected ship. Depending on the current prices of marine fuel oils, the used heavy fuel oil price onboard the selected ship is $300/ton (Bunkerworld 2020). Consequently, the cost-effectiveness values for NO x and CO 2 emissions are 1912 and $55.78/ton, respectively, at an annual interest rate of 10%. The Flettner rotor cost-effectiveness values agree with the same values that could be achieved from other technologies for the identical target (Ammar 2018;Ammar and Seddiek 2017). These results can be considered as a reasonable outcome in the way of applying more attractive renewable energy sources onboard ships. Conclusions Zero-emission ships are the future target of the international maritime organization. With emphasis on wind power source, a literature review of the previous and current Flettner rotors powered ships in addition to the published researches is introduced. A simple model including the main technical features and the environmental and economic aspects is presented. The main conclusions can be summarized as follows: & From a technical point of view, ship course and wind speed and directions are the main factors that affect the net output power from the Flettner rotors. Based on the selected case study, ship route, and Flettner model, the average net output power for each rotor will be 384 kW/h. & From an environmental point of view, the optimum wind angles, for the selected rotor model, corresponding to the highest emission reduction percentages are 120°and 240°. For the case study, using the Flettner rotors onboard the ship will reduce the fuel consumption by 22.28 %. In addition, the NO x and CO 2 emissions will be reduced by 270.4 and 9272 ton/year with cost-effectiveness of $1912 and $55.8/ton, respectively, at annual interest rate of 10%. & From the economic viewpoint, various scenarios are taken to assist factors that affect this technology, to expect the annual capital cost, and to determine the payback period. The annual cost of the proposed concept will be $399,978, $517,212, and $650,463 per year at interest rate values of 5%, 10%, and 15% respectively. The selected rotors will be able to support the ship's economy after six working years. Moreover, a considerable levelized cost of energy value is achieved. & Finally, fossil fuel is becoming scarce, and future international legislations concerning with the maritime environment will push and accelerate the process of shifting to the renewable and the green power systems for shipping. Data availability Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. Declarations Ethical approval Not applicable Consent to participate Not applicable Consent to publish Not applicable Competing interests The authors declare that they have no competing interests.
2021-02-25T15:13:47.284Z
2021-02-25T00:00:00.000
{ "year": 2021, "sha1": "7aea7e5a578241d4869556e6c20787d681635ca5", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s11356-021-12791-3.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "7aea7e5a578241d4869556e6c20787d681635ca5", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
212682379
pes2o/s2orc
v3-fos-license
Dichloromethylation of enones by carbon nitride photocatalysis Small organic radicals are ubiquitous intermediates in photocatalysis and are used in organic synthesis to install functional groups and to tune electronic properties and pharmacokinetic parameters of the final molecule. Development of new methods to generate small organic radicals with added functionality can further extend the utility of photocatalysis for synthetic needs. Herein, we present a method to generate dichloromethyl radicals from chloroform using a heterogeneous potassium poly(heptazine imide) (K-PHI) photocatalyst under visible light irradiation for C1-extension of the enone backbone. The method is applied on 15 enones, with γ,γ-dichloroketones yields of 18–89%. Due to negative zeta-potential (−40 mV) and small particle size (100 nm) K-PHI suspension is used in quasi-homogeneous flow-photoreactor increasing the productivity by 19 times compared to the batch approach. The resulting γ,γ-dichloroketones, are used as bifunctional building blocks to access value-added organic compounds such as substituted furans and pyrroles. C arbon nitrides (CNs) are "all-in-one" photocatalysts that mediate dozens of different photocatalytic reactions and enable bifunctionalization of (hetero)arenes in one pot 1 . The organic semiconductors have also been efficiently employed in a continuous flow system for chemical synthesis eliminating the last obstacle (poor light penetration in heterogeneous solidliquid mixture) on the way to widespread applications in organic synthesis 2 . Because of their low cost, ease of synthesis and stability against reactive intermediates and photobleaching, CNs already play an important role as heterogeneous photocatalysts for organic transformations [3][4][5] . CNs are also very versatile, and can be tailored depending on the application by bandgap engineering at the atomic and molecular level 6,7 . Most photocatalytic reactions are based on single electron transfer between the reagents and the photocatalyst 8 . Therefore, reactive open shell species are ubiquitous intermediates in photocatalytic processes [9][10][11] . Small organic radicals, such as CH 3 , CF 3 , CHF 2 1 , and perfluoroalky 12 , CH 3 O 13 etc. are used for the functionalization of the organic molecules in order to tune steric and electronic properties. Furthermore, the lipophilicity and metabolic stability of pharmaceuticals may be adjusted in this way 14,15 . Despite their importance for medicinal chemistry, CF 3 , alkyl, and CH 3 O groups are chemically stable. Therefore, further diversification of the molecule at these newly formed sites is problematic. For example, cleavage of C-F bond in CF 3 -group is extremely demanding 16 . The same applies to C-O bond in the CH 3 O-group 17,18 . Conversely CHCl 2 radical from the pool of small organic radicals is synthetically more useful. It enables the installation of an electrophilic carbon, and the C-Cl bonds can be conveniently cleaved using weak nucleophiles. In other words, the CHCl 2 radical allows for C 1 -extension of the substrate framework, while simultaneously adding a chemically active functionality 19 . From this point of view, the CHCl 2 radical can be regarded as a "small functional radical". Despite the obvious synthetic utility of the dichloromethyl radical, literature is still lacking reactions using dichloromethyl moieties in conjugate additions-the kind of reaction resembling a traditional polar Michael addition. 
The latter was well studied in photoredox catalysis [20][21][22][23] . An example shown in Fig. 1a employs methyl groups in tertiary amines and a C=C double bond as coupling partners. The chemistry of dichloromethyl radicals is restricted to a few examples, and such radicals are generated predominantly by catalysts containing rare precious metals or by dangerous chemicals (Fig. 1b, c). Our alternative approach uses cheap heterogeneous carbon nitride (CN) photocatalysts (1-10 Euro per gram on a gram-scale synthesis) 24 that have low toxicity 25 . We hypothesized that chloroform can be used as an atom-efficient source of CHCl2 radicals. Although chloroform readily gives dichlorocarbene in the presence of strong bases, we anticipated that the photocatalyst would alter the path of chloroform decomposition. Formation of the dichloromethyl radical may thereby be achieved by one-electron reduction of chloroform followed by elimination of a chloride anion. In order to trigger this process, we chose potassium poly(heptazine imide) (K-PHI), a member of the CN family 26 . Upon irradiation with visible light, metastable long-lived radicals are generated that have been used as a pool of electrons to reduce different substrates 27 . Earlier, we developed photocatalytic methods to synthesize thioamides 28 , dibenzyl sulfanes 29 , 1,3,4-oxadiazoles 3 , N-fused pyrroles 30 , cyclopentanes 27 , and halogenated aromatic hydrocarbons using K-PHI 31 . In related works, the long-lived carbon nitride radicals were applied in the delayed evolution of hydrogen 32,33 . Due to the advantages of flow reactors 34,35 , several types of such photoreactors employing carbon nitrides have been reported: packed-bed photoreactors 36 , serial micro-batch photoreactors 2 , and triphasic flow photoreactors 37 . Due to the relatively small particle size (average diameter 100 nm) and highly negative zeta-potential (−40 mV) 38 , K-PHI gives a stable colloidal solution and has been used in quasi-homogeneous catalysis 39 . Due to these features, a colloidal solution of K-PHI can be used in simple plug-flow photoreactors that are designed for homogeneous reaction mixtures. All in all, we present an unusual photocatalyzed radical addition of dichloromethyl radicals to enones to form a new C-C bond (Fig. 1d). In this approach, chloroform is used as the source of dichloromethyl radicals. The reaction is catalyzed by K-PHI using blue light irradiation. Using the discovered reaction, we show that light scattering by semiconductor particulates strongly affects their performance in batch reactors, limiting the scalability of such transformations. A nineteen times higher productivity is achieved using a dedicated flow photoreactor employing quasi-homogeneous K-PHI nanoparticles. Finally, the dichloromethyl adducts, i.e., γ,γ-dichloroketones, are used to access bifunctional building blocks and several classes of heterocyclic compounds. Results Optimization of reaction conditions. Along these arguments, we studied the designed reaction between chalcone 1a, chloroform, tetrahydroisoquinoline (THIQ) as an electron donor and K-PHI as the photocatalyst (see SI for preparation and characterization of K-PHI, Supplementary Fig. 1). Dichloroketone 2a was initially synthesized in 17% yield when 1 equivalent of THIQ was used. A higher yield was obtained using 3 equivalents of triethanolamine (TEOA) as electron donor (entry 4). The optimum conditions include ten equivalents of TEOA, under which we achieved 97% yield (entry 5).
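Before the control experiments that follow, the radical-generation step hypothesized in the introduction can be written out explicitly; this is a minimal sketch inferred from the text (the electron is supplied by the photoexcited, reduced catalyst described in the Mechanism section):

\[
\mathrm{CHCl_3} + e^- \longrightarrow [\mathrm{CHCl_3}]^{\bullet-} \longrightarrow {}^{\bullet}\mathrm{CHCl_2} + \mathrm{Cl^-}
\]

The subsequent addition of the CHCl2 radical to the β-carbon of the enone then forges the new C-C bond.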
The reaction does not proceed without catalyst, light or a sacrificial electron donor (entries 6-8). CDCl3 is a suitable source of CDCl2 radicals, offering a route to the d-labeled dichloroketone 2a-d1 with 99% yield (entry 9). We also evaluated the robustness of the reaction using different alcohols as hole scavengers, observing the formation of the desired product in all cases, albeit in lower yield (Table S1, entries 11-14). These results illustrate the better ability of amines to donate electrons compared to alcohols, due to their lower oxidation potentials (e.g., +0.5 V for TEOA, +1.5 V for benzyl alcohol and +1.9 V for MeOH, EtOH and iPrOH; Supplementary Note 1). This is also supported by the higher H2 production rate over carbon nitride materials using TEOA as electron donor compared to MeOH and EtOH 40,41 and by comparative tests of benzyl alcohol oxidation versus benzylamine 37,42 . Moderate heating (50°C) facilitates the reaction, as the yield of 2a was 64% when the reaction was performed at 20°C (Supplementary Table 1, entry 21). We also compared the catalytic activity of other materials and photoredox complexes. Na-PHI gave 2a in 49% yield (entry 10) 43 . Similar behavior was already observed during the photocatalytic synthesis of thioamides 28 . Mesoporous graphitic carbon nitride (mpg-CN) gave 2a in a comparable 85% yield (entry 11). The inorganic semiconductors CdS and TiO2 gave 2a in 70 and 94% yield, respectively (entries 12 and 13). Homogeneous Ir(ppy)3 gave 2a in 97% yield (entry 14), while [Ru(bpy)3]Cl2 only resulted in 8% of 2a (entry 15). Furthermore, it was also shown that recycled K-PHI remains photocatalytically active for at least two further cycles (Supplementary Table 2). Reaction scope. Using the optimized conditions, fifteen dichloroketones were isolated in 18-89% yield (Fig. 2a-o). The characterization of the products was conducted by NMR analysis. Dichloroketones 2 do not give stable molecular ions in the mass spectra (electron ionization). For example, the expected m/z value for 2a is 292. However, a signal with m/z 221 was detected (Supplementary Fig. 2). The latter corresponds to 2,4-diphenylfuran. In general, 2,4-diarylsubstituted furans are products of oxygen nucleophilic attack at the CHCl2 group followed by elimination of two molecules of HCl under the conditions of GC-MS data acquisition. Below we employ the reactivity of the CHCl2 group in the synthesis of pyrroles and furans. Nonetheless, elemental analysis of 2a revealed a chlorine content in excellent agreement with the calculated content, confirming the proposed structure. We further proved the product structure using deuterated chloroform as the dichloromethyl source, observing the rise of the triplet in the 13C NMR spectrum. Dichloromethylated chalcones bearing strong electron-withdrawing groups, i.e., CN-, MeO2C-, and pyridin-2-yl, 1p-r, gave the corresponding products 2p-r in low yields, as analyzed by 1H-NMR spectroscopy of the crude reaction mixture (Supplementary Note 2). Nevertheless, we envision the toolbox of synthetic organic chemistry to be useful for further diversification of the product structures, employing, for example, the methyl group in 2b, the F atoms in 2d,e,h and the intrinsically reactive sites in 2i,j. Common reactive Michael acceptors, such as methyl vinyl ketone and acrylonitrile, gave only trace amounts of CHCl2 addition to the double bond, as evidenced by GC-MS (Supplementary Note 3).
Even more reactive Michael acceptors, i.e., methacrolein, methyl acrylate, and 2-furanone, gave a complex mixture, and the desired products could not be identified (Supplementary Note 4). In the course of studying suitable reagents to install CxHalyHz groups in the enone 1a, we tested other halogenated reagents, including dichloromethane, bromoform, iodoform, 1,1,2,2-tetrachloroethane and tetrachloromethane (Supplementary Table 1). Careful analysis of the reaction mixture revealed that addition of CHBr2, CHI2, and C2HCl4 groups to enone 1a indeed took place. However, the products are not stable and undergo further chemical transformations, such as HX elimination and subsequent cyclization to 2,4-diphenylfuran (in the case of bromoform and iodoform) or to dichlorodihydropyranes (in the case of tetrachloroethane) (Supplementary Note 5). Compared to bromoform and iodoform, chloroform is beneficial due to its higher selectivity in the C1 backbone extension of enones. Scaling the dichloromethylation reaction of 1a in batch led to a gradual decrease of the dichloroketone yield, from 88% (on a 0.05 mmol scale) to 23% (on a 5 mmol scale) (Supplementary Table 3). After careful investigation, we concluded that the origin of such a drastic drop in the yield of product 2a is poor light penetration into the depth of the batch reactor, due to light scattering by the suspended semiconductor particles (Supplementary Note 6) 45 . Quasi-homogeneous flow photoreactor. In order to overcome the limitations of the batch approach, we performed the reaction in a continuous flow reactor made of thin (inner diameter 1.6 mm) fluorinated ethylene propylene (FEP) tubing (Fig. 3). The use of carbon nitrides has been reported in serial micro-batch reactors 2 , which use gas-liquid segments to avoid clogging. A simplified system is applicable for K-PHI due to its relatively small particle diameter (100 nm) and negative zeta-potential (ζ) (Fig. 3a). Electrostatic stabilization allows pumping the colloidal solution (Fig. 3b and Supplementary Note 7) without using a gas-liquid system (Fig. 3c). Using the flow approach, 2a was obtained in 57% yield. Considering the convenience of pumping the K-PHI suspension through thin FEP tubing, along with the ease of photocatalyst recovery, the entire system enables quasi-homogeneous photocatalysis in flow 39 . As seen from the light intensity measurements (Fig. 3d-f), the FEP tubing filled with the reaction mixture absorbs 74% [(I0 − IT2)/I0] of the light. These results suggest that, by performing the reaction in flow, more homogeneous irradiation of the K-PHI particulate is achieved compared to the reaction in batch (Supplementary Note 6). Furthermore, we solved the problem of poor light permeability through a semiconductor suspension and increased the productivity of the γ,γ-dichloroketone 2a synthesis by a factor of 19. Application of γ,γ-dichloroketones in organic synthesis. Finally, the γ,γ-dichloroketones obtained by the photocatalytic generation and addition of dichloromethyl radicals to enones were used to install other reactive functional groups. As a practical example, dichloroketone 2a was converted to β-formyl ketone 3a by simple hydrolysis in 60% yield (Fig. 4). The ketoaldehyde 3a was then transformed into multi-substituted heterocycles (4a-6a) using microwave-assisted condensations with a series of nucleophiles. For instance, furan and pyrrole scaffolds have been used in the synthesis of bioactive substances 46,47 .
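The absorbed-light figure quoted for the flow photoreactor above is simple arithmetic on the measured intensities. A minimal sketch, with absolute intensities assumed only to reproduce the reported 74% (normalization by the incident intensity I0 is also an assumption):

def absorbed_fraction(i_incident: float, i_transmitted: float) -> float:
    # Fraction of light absorbed by the filled FEP tubing: (I0 - IT2) / I0.
    return (i_incident - i_transmitted) / i_incident

# Arbitrary-unit intensities for illustration, not measured values.
print(absorbed_fraction(100.0, 26.0))  # -> 0.74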
Mechanism. To support the role of chloroform as an electron acceptor, we developed a method for the oxidative coupling of benzylamines (Fig. 5) 48 . [Displaced fragments of the Fig. 3 caption: the transmitted light intensity (IT1) was measured at zero distance from the FEP tubing; panel f, FEP tubing filled with a reaction mixture under daylight and the blue light source, with the transmitted light intensity (IT2) measured at zero distance from the FEP tubing; panel g, the photoreactor wrapped with PVC tubing to maintain the desired temperature during the experiment; panel h, a view from the top of the assembled flow photoreactor immersed in a glass beaker, the space between the beakers filled with water as cooling agent.] As an example, we synthesized four imines with 83-100% yield. These results offer an alternative route for such transformations using chloroform as solvent and electron acceptor (see Supplementary Fig. 3 for a detailed mechanism of the amine coupling). The proposed mechanism of the reported photocatalytic reaction is shown in Fig. 6. In the first step, K-PHI is excited by blue photons, giving the excited state of the photocatalyst (K-PHI*). Removal of an electron from TEOA by K-PHI* (reductive quenching of the photocatalyst) leads to the formation of the long-lived radical anion K-PHI •− , which has the typical deep green color 27,29 . Chloroform is subsequently reduced in a single electron transfer event, forming the chloroform radical anion, which eliminates a chloride anion, resulting in a dichloromethyl radical. Addition of the dichloromethyl radical to the β-carbon atom of the enone gives intermediate i-1. Abstraction of hydrogen from TEOA leads to the desired product 2. While TEOA acts as the hole scavenger, chloroform simultaneously acts as solvent and electron acceptor to complete the photocatalytic cycle, as was already proposed by Chen et al. 49 It is also possible to detect, via GC-MS, traces of different chlorinated compounds that result from side radical reactions of the dichloromethyl radical. By running experiments in the absence of the enone, we observed the formation of halogenated compounds including tetrachloroethane, which is likely formed via homocoupling of dichloromethyl radicals (Supplementary Table 4; Supplementary Figs. 4, 5, 6). Discussion In this work, we extended the library of small organic radicals available for photocatalytic synthesis to dichloromethyl radicals, which can be conveniently generated from chloroform. The method was validated in a 1,4-addition of dichloromethyl radicals to enones. The process is photocatalyzed by the heterogeneous, metal-free carbon nitride photocatalyst K-PHI. Fifteen γ,γ-dichloroketones were isolated in moderate to excellent yield. The possibility to use other polyhalogenated compounds, such as bromoform, iodoform and 1,1,2,2-tetrachloroethane, has been demonstrated. Light scattering by carbon nitride particles has been identified as the limiting factor for scaling these transformations. The results suggest that, in a typical photocatalytic experiment using 2.5 mg mL −1 of semiconductor particles, the penetration depth of light is only 3 mm. In polar solvents, such as DMSO, nanoparticles of K-PHI give a stable suspension that was pumped through thin (1.6 mm internal diameter) tubing. γ,γ-Dichloroketone 2a has also been synthesized using the quasi-homogeneous photoreactor. The γ,γ-dichloroketones obtained in this work were proved to be useful building blocks with applications in the synthesis of bifunctional compounds that can be used for the preparation of heterocyclic bioactive molecules.
The use of chloroform as solvent and electron acceptor was also demonstrated in the oxidative coupling of benzylamines. Methods Microwave reactions. Experiments were carried out in a CEM Discover® SP System microwave reactor. Mass spectral data were obtained using an Agilent GC 6890 gas chromatograph, equipped with an HP-5MS column (inner diameter = 0.25 mm, length = 30 m, and film = 0.25 μm), coupled with an Agilent MSD 5975 mass spectrometer (electron ionization). Electrochemistry. Cyclic voltammetry (CV) measurements were performed in a glass single-compartment electrochemical cell. Glassy carbon (diameter 3 mm) was used as the working electrode (WE), an Ag wire in AgNO3 (0.01 M) with tetrabutylammonium perchlorate (0.1 M) in MeCN as the reference electrode (RE), and a Pt wire as the counter electrode. Each compound was studied at a 50 mM concentration in a 0.1 M tetrabutylammonium perchlorate (TBAP)/chloroform electrolyte solution (10 mL). Before the voltammograms were recorded, the solution was purged with Ar, and an Ar flow was kept in the headspace volume of the electrochemical cell during the CV measurements. A potential scan rate of 0.050 V s −1 was chosen, and the potential window ranging from +2.5 V to −2.5 V (and back) was investigated. Cyclic voltammetry was performed under room-temperature conditions (~20-22°C). [Fig. 6 caption: Proposed mechanism of the generation of dichloromethyl radicals and their addition to enones. The inset shows images of the reaction mixture before and after light irradiation and the structures of the TEOA oxidation products.] Photocatalyst characterization. Zeta-potentials were measured in an aqueous colloidal solution of K-PHI using a Malvern Zetasizer instrument. The hydrodynamic diameter of K-PHI particles in water was measured using the same Malvern Zetasizer instrument. General method for dichloroketone preparation (2a-l). A glass tube with a rubber-lined cap was evacuated and filled with argon three times. To this tube, triethanolamine (74.6 mg, 66 µL, 0.5 mmol), the corresponding chalcone (50 µmol), K-PHI (5 mg) and chloroform (2 mL) were added. The resulting mixture was stirred at 50°C under irradiation with a blue LED (λ = 461 nm) for 20 h. The reaction mixture was then cooled to room temperature and centrifuged; the clear solution was separated, and the solid residue was washed with chloroform (2 mL) and centrifuged again. The organic solutions were combined and evaporated to dryness. The residue after evaporation was purified by silica gel column chromatography using a mixture of hexane/diethyl ether (98:2) as eluent. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request. The source data underlying Fig. 2a and Supplementary Fig. 1a-j are provided as a Source Data file. Code availability This study does not use custom computer code or algorithms to generate results that are reported in the paper and central to its main claims.
2020-03-13T14:43:28.416Z
2020-03-13T00:00:00.000
{ "year": 2020, "sha1": "32de0af92d6b6ca0c969c2ac23e14e7f478a3c9f", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-020-15131-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "32de0af92d6b6ca0c969c2ac23e14e7f478a3c9f", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
236255015
pes2o/s2orc
v3-fos-license
Effects of Different Maize–Soybean Intercropping Patterns on Yield Attributes, Yield and B: C Ratio A field experiment was carried out to study the "Effects of different maize–soybean intercropping patterns on yield attributes, yield and B: C ratio" at the Agricultural Research Farm, Bhagwant University, Ajmer. The treatments consisted of sole maize (60x20 cm), sole soybean (30x10 cm), maize–soybean (1:1) (60x20 cm), maize–soybean (1:1) (75x20 cm), maize–soybean (1:1) (90x20 cm), maize–soybean (1:2) (90x20 cm) and maize–soybean (2:6) (paired rows, 45/180 cm). There were four replicated blocks and plot sizes measuring 7 m x 4.5 m, laid out in a randomized complete block design (RCBD). The results of the experiment showed that the maize–soybean intercropping patterns had a significant effect on maize stover and grain yields. Sole maize recorded a significantly higher yield than intercropped maize under varying geometry and row proportions. However, it was at par with maize intercropped with soybean in 1:1 row proportion with 60 x 20 cm spacing. The intercropping patterns significantly affected the PAR intercepted and the leaf area index. The sole soybean crop intercepted significantly more light and recorded a higher leaf area index (LAI) than all other treatments. Further, the yield of sole soybean was significantly superior to that of the intercropped treatments. A higher benefit: cost ratio indicates a higher return per unit of money invested in the inputs used for raising the crops. The highest B: C ratio was recorded with the maize + soybean 2:6 paired-row (3.57) intercropping system. The least B: C ratio was recorded in sole soybean (2.45). INTRODUCTION In an intercropping system, there is one main crop cultivated with one or more added crops, where the main crop is of primary importance for economic or food-production reasons [1]. In the SSA region, cereal and grain legume intercrops are the most practiced by smallholder farmers [2]. The major reason why these farmers intercrop cereals and grain legumes is that these crops are particularly important human foods, as they are rich in protein and are sometimes sold for cash income [2]. In addition, intercrops give them stability of yields over several seasons; when one crop fails, the other might still give a reasonable yield [3]. Furthermore, grain legumes help maintain and improve soil fertility due to their ability to biologically fix atmospheric nitrogen [4]. Intercropping of maize and legumes is widespread among smallholder farmers due to the ability of the legume to cope with soil erosion and with declining levels of soil fertility. The principal reasons for smallholder farmers to intercrop are flexibility, profit maximization, risk minimization against total crop failure, soil conservation and improvement of soil fertility, weed control and balanced nutrition [5]. Other advantages of intercropping include the potential for increased profitability and low fixed costs for land as a result of a second crop in the same field. Furthermore, an intercrop can give higher yields than sole crops, greater yield stability, more efficient use of nutrients, better weed control, insurance against total crop failure and improved quality by variety; also, maize as a sole crop requires a larger area to produce the same yield as maize in an intercropping system without mineral fertilizer on sandy soils in the sub-humid zones of Zimbabwe [6].
Intercropping maize with cowpea has been reported to increase light penetration in the intercrops, reduce water evaporation, and improve conservation of soil moisture compared with maize alone [7]. On the other hand, it is often believed that traditional intercropping systems are better at controlling weeds, pests and diseases compared to monocrops, but it must be recognized that intercropping is an almost infinitely variable, and often complex, system in which adverse effects can also occur. As a consequence, the optimum productivity of cereal–legume systems is still a big challenge to the stakeholders involved in this sector. This study will therefore contribute useful information to smallholder farmers and other stakeholders on the optimum intercropping patterns, the contribution of the system to the soil and the economic aspects of the maize–soybean cropping system. Maize and soybean are the major kharif crops of the area. Hence an experiment was undertaken on maize and soybean intercropping in different patterns. MATERIALS AND METHODS An experiment was conducted at the Agricultural Research Farm of Bhagwant University, Ajmer during the kharif season of 2018. The soil of the experimental farm was sandy loam in texture, acidic in reaction (pH 6.72), poor in nitrogen (available N 202 kg/ha), poor in phosphorus (available P2O5 19 kg/ha) and moderate in potash (available K2O 236 kg/ha). The treatments consisted of sole maize (60x20 cm), sole soybean (30x10 cm), maize–soybean (1:1) (60x20 cm), maize–soybean (1:1) (75x20 cm), maize–soybean (1:1) (90x20 cm), maize–soybean (1:2) (90x20 cm) and maize–soybean (2:6) (paired rows, 45/180 cm). There were four replicated blocks and plot sizes measuring 7 m x 4.5 m, laid out in a randomized complete block design (RCBD). The land was prepared thoroughly by ploughing twice with a tractor, followed by harrowing. Levelling was done to ensure uniform irrigation and proper drainage. Planking was done at the time of final land preparation to keep moisture intact in the soil. The field was cleaned by removing weeds and stubble of the previous crop. The maize and soybean crops were sown in the first week of July 2018 by the dibbling method. Seeds were sown at 20 kg ha-1 at a depth of 2-3 cm, maintaining the row spacing as per the treatments, followed by covering with soil. N, P2O5 and K2O were applied at 20:50:0 kg ha-1 in the form of urea and single super phosphate, respectively. The entire dose of fertilizer was applied as basal and thoroughly mixed with the soil. The first irrigation was given at the time of sowing, and the second irrigation was given at 35 DAS. Rainfall made further irrigations in between unnecessary; during the crop season the total rainfall received was 179.9 mm. The crop was infested with Maruca and powdery mildew at the flowering stage. By forecasting the pest, spraying of 5 per cent neem seed kernel extract was done at the initiation of flowering in order to deter the insect from egg laying. The insecticides quinalphos (0.2%) + dichlorvos (0.07%) and chlorpyriphos (0.25%) + dichlorvos (0.07%) were sprayed following the first spray of neem kernel extract, at 8-10 day intervals. At the onset of powdery mildew, carbendazim was sprayed at 0.1% concentration.
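As a quick sanity check on the planting geometries above, the plant populations per hectare follow directly from the row and plant spacings (a simple derivation, not figures quoted from the paper):

def plants_per_hectare(row_cm: float, plant_cm: float) -> float:
    # One plant per (row spacing x plant spacing) rectangle; 1 ha = 1e8 cm^2.
    return 1e8 / (row_cm * plant_cm)

print(plants_per_hectare(60, 20))  # sole maize, 60 x 20 cm -> ~83,333 plants/ha
print(plants_per_hectare(30, 10))  # sole soybean, 30 x 10 cm -> ~333,333 plants/ha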
The crop was harvested at the physiological maturity stage in the last week of September 2018. First the borders were harvested and separated. Later, the crop from each net plot was harvested and sun-dried for 3 days, bundled, tagged, weighed and transported to the threshing floor. Threshing was done for each plot and yields were computed on a kg ha-1 basis. RESULTS AND DISCUSSION The data were analysed following the guidelines given by Fisher [8]. Data on the yield and yield parameters of maize and soybean as influenced by maize planting geometry and row proportions in the maize + soybean intercropping systems are shown in Table 1 and Table 2, respectively. The economics of the maize and soybean intercropping system as influenced by planting geometry and row proportions is presented in Table 3. Production efficiency indices of the maize + soybean intercropping system are reported in Table 4. The per cent light transmission ratio at different growth stages as influenced by maize planting geometry and row proportions is presented in Table 5. Maize grain yield differed significantly due to the planting geometry and row ratios of maize and soybean in the intercropping system (Table 1). Sole maize recorded a significantly higher yield (70.9 q ha-1) than intercropped maize under varying geometry and row proportions. However, it was at par with maize intercropped with soybean in 1:1 row proportion with 60 x 20 cm spacing (70.0 q ha-1). The maize yield declined from 70.0 to 46.6 q ha-1 across the intercropping systems. The reduction in maize yield was due to competition between the two crops and the reduced maize population, from 100 to 66 per cent; conversely, increases in yield are attributable to increases in the maize population. One of the reasons for the non-significant variation in growth and yield parameters between sole and intercropped maize in the different planting geometry row ratios may be the uniform fertilizer application based on per cent plant population and other management practices in all treatments. Soybean, being a short-duration, short-statured crop with a tap root system, did not compete with the tall maize for growth resources, viz., nutrients, light and moisture. The results agree with the findings of Kankeri [9]. There were no significant differences in grain yield per plant or hundred-seed weight. Further, the yield of sole soybean was significantly superior (21.8 q ha-1) to that of the intercropped treatments (Table 2). This might be attributed to the presence of the recommended plant stand under sole cropping as against the decreased population under the intercropping system (75.3%). Similar results were reported by Pattanashetti [10]. The yields of intercropped soybean varied with the planting geometry and row proportions of maize. Among the intercropping treatments, maize intercropped with soybean in paired rows at a 2:6 row proportion recorded a higher grain yield (19.2 q ha-1) than the other intercropping treatments. The yield of intercropped soybean decreased from 19.2 to 5.5 q ha-1. This is because of the lower availability of resources, particularly light, due to shading by the tall maize crop. The results are in conformity with the findings of Singh et al. [11]. Maize equivalent yield (MEY) was significantly higher with maize + soybean in paired rows at a 2:6 row proportion (94.70 q ha-1). This was due to the higher yield of the soybean intercrop component and the higher price of soybean in the market. The least MEY was recorded in sole soybean (54.5 q ha-1). The net income was higher in the maize + soybean 2:6 paired-row intercropping system (57,926 ha-1) than in the other intercropping systems.
A higher benefit: cost ratio indicates a higher return per unit of money invested in the inputs used for raising the crops. The highest B: C ratio was recorded with the maize + soybean 2:6 paired-row intercropping system (3.57). The least B: C ratio was recorded in sole soybean (2.45). This is due to the lower cost of cultivation and the higher net returns in these treatments, owing to the higher market price of soybean (Table 3). Similarly, higher net returns were also recorded by Mohan [11] with maize and soybean intercropping. A perusal of the data in Table 4 indicates that, among the intercropping systems, the land equivalent ratio (LER) was highest with maize intercropped with soybean in the 2:6 paired-row system (1.54). The higher LER with intercropping maize and soybean in the 2:6 row ratio may be due to the better performance of both crops, owing to the least competition for all growth resources in general, and light in particular, by the highly complementary soybean. Such an increase in LER in intercropping systems was also observed by earlier workers with maize + soybean [12]. Further, the intercropping system of maize + soybean in the 2:6 paired-row system (50:75) resulted in a significantly higher area-time equivalent ratio (Table 4). This was possibly due to greater temporal and spatial complementarity. These results agree with the results of Gardner and Kisakye [13] in a maize + Phaseolus vulgaris intercropping system. The data in Table 5 revealed that, among all the intercropping systems, maize intercropped with soybean in the 2:6 paired-row system recorded the least light transmission ratio compared to the sole crop and the other treatments. The average light transmission ratio (LTR) at 30 DAS decreased from 65.37 per cent in sole maize (60 x 20 cm) to 59.73-63.27 per cent in the maize and soybean intercrops under the different planting geometries and row ratios. The corresponding LTR values at 60 and 90 DAS were 34.52 to 31.22 and 30.57 to 27.13 per cent, respectively. This resulted in an improvement of the average light interception at the different phenological stages of maize. Thus, the intercropping systems of maize and soybean with a lower LTR were able to intercept more light compared to sole maize. CONCLUSION From the above study, it can be concluded that the maize-soybean intercropping patterns had a significant effect on maize grain and stover yields. Sole maize recorded a significantly higher yield than intercropped maize under varying geometry and row proportions. However, it was at par with maize intercropped with soybean in 1:1 row proportion with 60 x 20 cm spacing. Further, the yield of sole soybean was significantly superior to that of the intercropped treatments. The highest B: C ratio was recorded with the maize + soybean 2:6 paired-row (3.57) intercropping system.
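The production-efficiency indices discussed above are simple ratios of the reported yields. The sketch below reproduces them; note that pairing the 46.6 q/ha maize yield with the 2:6 system and the soybean-to-maize price ratio of ~2.5 are inferences consistent with the reported LER and MEY, not values stated explicitly in the paper:

def ler(y_maize_ic, y_maize_sole, y_soy_ic, y_soy_sole):
    # Land equivalent ratio: summed intercrop-to-sole yield ratios;
    # values above 1 indicate an intercropping advantage.
    return y_maize_ic / y_maize_sole + y_soy_ic / y_soy_sole

def mey(y_maize_ic, y_soy_ic, price_ratio_soy_to_maize):
    # Maize equivalent yield: the intercropped soybean yield converted into
    # the maize yield of equal market value, added to the maize yield.
    return y_maize_ic + y_soy_ic * price_ratio_soy_to_maize

print(ler(46.6, 70.9, 19.2, 21.8))  # -> ~1.54, matching the reported LER
print(mey(46.6, 19.2, 2.5))         # -> ~94.6 q/ha, close to the reported 94.70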
2021-07-26T00:06:22.571Z
2021-06-05T00:00:00.000
{ "year": 2021, "sha1": "4225b186105a590b2c1522fa7f5559df97d342b0", "oa_license": null, "oa_url": "https://www.journalijpss.com/index.php/IJPSS/article/download/30486/57215", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ef613e4047530d53c0d8900de9ddb7cb58d941f1", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Mathematics" ] }
52054605
pes2o/s2orc
v3-fos-license
Targeted Sequencing of Respiratory Viruses in Clinical Specimens for Pathogen Identification and Genome-Wide Analysis A large number of viruses can individually and concurrently cause various respiratory illnesses. Metagenomic sequencing using next-generation sequencing (NGS) technology is capable of identifying a variety of pathogens. Here, we describe a method using a large panel of oligo probes to enrich sequence targets of 34 respiratory DNA and RNA viruses; it reduces non-viral reads in NGS data and achieves high performance in sequencing-based pathogen identification. The approach can be applied to total nucleic acids purified from respiratory swabs stored in viral transport medium. The Illumina TruSeq RNA Access Library procedure is used in the targeted sequencing of respiratory viruses. The samples are subjected to RNA fragmentation, random reverse transcription, random PCR amplification, and ligation with barcoded library adaptors. The libraries are pooled and subjected to two rounds of enrichment using a large panel of oligos designed to capture the whole genomes of 34 respiratory viruses. The enriched libraries are amplified and sequenced using the Illumina MiSeq sequencing system and reagents. This method can achieve viral detection sensitivity comparable with molecular assays and obtain partial to complete genome sequences for each virus to allow accurate genotyping and variant analysis. Electronic supplementary material: The online version of this chapter (10.1007/978-1-4939-8682-8_10) contains supplementary material, which is available to authorized users. Introduction In targeted DNA or RNA sequencing, a panel of oligonucleotide probes is used to capture nucleotide contents containing the sequences of interest, and the enriched targets are sequenced using next-generation sequencing (NGS) to achieve sensitive detection and sequence-based analysis [1]. Several methods with comparable performance have been developed and successfully applied to the sequencing of exomes, transcriptomes, cancer genes, significant pathogens, etc. [2][3][4][5][6][7][8]. Utilizing the TruSeq RNA Access protocol and a large panel of oligos for whole-genome capture of 34 human respiratory viral pathogens (TruSeq RVP) enables enriching sequences of respiratory DNA and RNA viruses out of complex clinical specimens and producing genome sequences for genotyping and genetic variant analysis. The process includes collection, transportation, and storage of respiratory specimens, extraction of DNA and RNA, preparation of the TruSeq fragment library, RVP enrichment, NGS and sequence data analysis (Fig. 1). The laboratory procedures require approximately 5 days from specimens to the NGS run, excluding specimen acquisition, NGS instrument run time, and post-run data processing and analyses. [Fig. 1 caption: Outline of the targeted sequencing of respiratory viruses in total nucleic acids from clinical specimens. The method uses the TruSeq RNA Access Library Prep protocol and custom RVP oligos for genome-wide capture of 34 human respiratory viruses for next-generation sequencing (NGS)-based detection.] The method is expected to detect all commonly known respiratory viral pathogens that are diagnosed with molecular tests, such as the Luminex xTAG RVP assay, FilmArray Respiratory Panel (RP) tests, and multiplex real-time PCR tests for respiratory viral infection, with comparable sensitivity and accuracy [9][10][11]. The large capture panel and the scheme of genome-wide capture allow detection of most known respiratory viruses, including viruses with high sequence divergence.
The method is robust in handling specimens with viruses over a wide range of concentrations as well as specimens of compromised quality, e.g., with degraded nucleic acids or high non-viral content. The high capture efficiency and the superior sensitivity of this method make it a powerful tool for discovery of the respiratory virome. However, there is an elevated risk of false positivity, which demands a more stringent contamination control procedure for the investigation of clinical specimens (see Note 1). Materials Perform the procedures in a Biosafety Level 2 (BSL-2) laboratory with a certified biosafety cabinet (BSC). Handle clinical specimens and agents that can cause infection within the biosafety cabinet. Extract and purify DNA and RNA contents from respiratory swabs stored in a viral transport medium. Divide purified DNA/RNA into small aliquots, e.g., 10 μL per vial, and store at −80 °C (see Note 2). Methods Carry out all procedures at room temperature unless otherwise specified. Brief centrifugation or a quick spin is done at 280 × g for 1 min in a benchtop centrifuge with microplate carriers for 96-well microplates, or by a touch-spin using a microcentrifuge or a minicentrifuge for microcentrifuge tubes. Thaw frozen reagents at room temperature completely, vortex and centrifuge briefly, then keep them at room temperature or on ice until use. Immediately return all reagents to their original storage condition after use, except for Resuspension Buffer, which is kept at 4 °C after the initial use. Quality Examination of the Library 7. Determine the DNA concentration in ng/μL for each library. The expected size distribution for the library is 200−500 bp, with an apparent peak at approximately 260 bp (Fig. 2). 5. During the incubation, prepare the Pre-Mixed Elution in a microcentrifuge tube (to be used in 3.1.9, step 7): mix 28.5 μL of Enrichment Elution Buffer 1 and 1.5 μL of 2 N NaOH for each pool, and keep at room temperature until use. 6. Once the incubation is over, immediately remove the plate from the thermocycler and proceed to step 3.1.9. 1. Centrifuge the RAH1 plate, remove the seal, and transfer the first pool content to a clean 1.5 mL microcentrifuge tube labeled S1, the second pool content to tube S2, and so on (see Note 9). Discard the RAH1 plate. Sequence the library pool with the PhiX control using an Illumina MiSeq or NextSeq system. The acquired sequence data can be fed into a bioinformatics data analysis pipeline, which may include data quality processing, the removal of human genome and transcriptome sequences, de novo assembly, and comparison of sequence contigs and unassembled reads with sequence databases using BLAST programs [12] (see Note 12). Notes 6. The TruSeq RNA Access Library guide recommends using 10−100 ng of RNA as starting material. Since total nucleic acids extracted from respiratory specimens contain both DNA and RNA and have highly variable concentrations from specimen to specimen, we use 8.5 μL of DNA/RNA for each sample in the RNA fragmentation. 7. The Agilent 2100 Bioanalyzer can analyze up to 12 samples in one run. An alternative high-throughput system, the Agilent TapeStation 4200, can analyze up to 96 samples in one run. 8. The total hybridization time is about 2 h. Over-hybridization may cause a high degree of nonspecific binding. 9. Instead of using microcentrifuge tubes and a magnetic beads separation rack, a 0.8 mL 96-deep-well plate and a magnetic beads separation stand for 96-well plates can be used to process a large number of samples. 10.
After two rounds of hybridization and capture and the second PCR amplification, the post-enrichment library appears to have a broadened size distribution, with the peak shifted slightly toward higher molecular weight compared with the fragment library shown in Fig. 2. 11. We always include the Illumina PhiX Control v3 library in MiSeq runs. The PhiX control can be used for quick assessment of data quality and for technical troubleshooting. Besides, the addition of the PhiX library can improve base-calling quality for libraries with high G/C or A/T content or low-diversity sequences. The spiking ratio can be increased or reduced based on the needs and the nature of the specimens. 12. Using this method, partial to near-complete viral genome sequences can be obtained to allow genotyping as well as recombination and variant analysis. The method can detect both RNA and DNA viruses. During interpretation of the results, take cautious consideration of probable contamination and cross-contamination. If necessary, verify the results with additional tests, such as molecular assays, and/or by repeating the experiment with archived specimens or DNA/RNA aliquots. The opinions expressed herein are those of the authors and do not reflect the official policy of the Department of the Army, Department of Defense or U.S. Government. We declare that no conflict of interest exists. This is the work of U.S. government employees and may not be copyrighted (17 USC 105).
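Returning to the quantification and pooling steps above: library concentrations measured in ng/μL are routinely converted to molarity from the average fragment size before pooling. A minimal sketch of that standard conversion (the example values are illustrative, not taken from the chapter):

def library_molarity_nm(conc_ng_per_ul: float, mean_size_bp: float) -> float:
    # Standard dsDNA conversion: ~660 g/mol per base pair.
    # nM = (ng/uL) / (660 * size in bp) * 1e6
    return conc_ng_per_ul / (660.0 * mean_size_bp) * 1e6

# A fragment library peaking near 260 bp, measured at 4 ng/uL:
print(library_molarity_nm(4.0, 260.0))  # -> ~23.3 nM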
2018-08-22T21:31:15.510Z
2018-05-12T00:00:00.000
{ "year": 2018, "sha1": "6dc05bfd6e1a27adbf646cf949940fddca30aa11", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc7121196?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "7de08e540721065bcd2fd22009820e6b5da59fbe", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
53336895
pes2o/s2orc
v3-fos-license
Predictive tests to evaluate oxidative potential of engineered nanomaterials Oxidative stress constitutes one of the principal injury mechanisms through which particulate toxicants (asbestos, crystalline silica, hard metals) and engineered nanomaterials can induce adverse health effects. ROS may be generated indirectly by activated cells and/or directly at the surface of the material. The occurrence of these processes depends upon the type of material. Many authors have recently demonstrated that metal oxide and carbon-based nanoparticles may influence (increasing or decreasing) the generation of oxygen radicals in a cell environment. Metal oxides, such as iron oxides, crystalline silica, and titanium dioxide, are able to generate free radicals via different mechanisms, causing an imbalance in oxidant species. The increase in ROS species may lead to inflammatory responses and, in some cases, to the development of cancer. On the other hand, carbon-based nanomaterials, such as fullerene, carbon nanotubes and carbon black, as well as cerium dioxide, are able to scavenge the free radicals generated, acting as antioxidants. The number of new engineered nanomaterials introduced in the market is increasing exponentially. Therefore, the definition of toxicological strategies is urgently needed. The development of acellular screening tests will make possible a reduction in the number of in vitro and in vivo tests to be performed. An integrated protocol that may be used to predict the oxidant/antioxidant potential of engineered nanoparticles is presented here. Introduction Nanotechnology has developed rapidly in recent years, and a large number of new nanomaterials for a wide range of applications have been introduced in the market [1,2]. Concerns exist about possible adverse health effects following human exposure to those nanomaterials which may release nanoparticles [3,4]. Because a growing number of new NPs is expected to be produced in the future, it is urgent to define strategies to predict their possible impact on health. Oxidative stress constitutes one of the principal injury mechanisms through which engineered NPs can induce adverse effects. Zhang and coworkers [5] recently reported the development of a quantitative structure-activity relationship (QSAR) model to predict the acute pulmonary inflammation potential of various metal oxide NPs based on the values of their conduction band energy levels. In fact, when the biological and material energetic states are similar, permissive electron transfer can lead to the formation of oxidizing or reducing molecules that influence the level of antioxidants and/or increase the production of ROS. However, the proposed predictive tool may not be applicable to materials other than oxides. On the other hand, processes different from electron transfer between surface and cells (e.g. physical interaction with biomolecules or mechanical damage to cells) likely participate in the overall mechanism of toxicity. The aim of this study is to gain insight into the capability of three different nanomaterials to induce oxidative damage by generating ROS or by directly damaging biomolecules. Two types of metal oxides, i.e. titanium dioxide and amorphous silica nanoparticles, were compared with a carbon soot sample. These samples were chosen since they are expected to be very different in surface reactivity: TiO2 is known to be a potent photocatalyst, while amorphous silica is an inert covalent material, and carbon exhibits a reactivity that depends upon its allotropic form. An integrated protocol to evaluate the oxidative potential of the materials has been used. The capability of the NPs to interfere with ROS was evaluated by means of EPR spectroscopy with spin-trapping or spin-probing techniques, while the ability to cause oxidative damage to lipids, proteins and DNA was evaluated by means of UV-Vis spectroscopy, SDS-PAGE electrophoresis and agarose gel electrophoresis, respectively. Irradiation conditions. Irradiation experiments were performed with a 500 W mercury/xenon lamp (Oriel Instruments) equipped with an IR water filter to avoid overheating of the suspensions. Simulated solar light was obtained by applying a 400 nm cut-off filter that lets through about 5% of UV light, in the UVA region. 2.5. Scavenging activity. Scavenging activity was evaluated following the procedure already reported [9,10]. Briefly, hydroxyl radicals were generated by the Fenton reaction or by irradiating a solution of hydrogen peroxide with a UV lamp (ThermoOriel) directly in the EPR spectrometer cavity. The reaction was repeated in the presence of carbon soot. All experiments were repeated at least twice. 2.6. Oxidative damage to plasmid DNA. The reactivity of the powders toward the double-stranded supercoiled plasmid pYES2 (Invitrogen) was investigated in order to evaluate the potential of the samples to cause direct oxidative damage [11]. All experiments were performed with ~0.2 mg of powder suspended in 30 μl of milliQ water and then vortexed. To these suspensions, 3 μl of DNA solution (concentration 100-150 ng/μl) were added, and the mixtures were then exposed to simulated solar light (UV-Vis lamp with a 400 nm cut-off filter) for 20 minutes. As a control, DNA was also irradiated in the absence of any powder in order to exclude direct damage to this molecule. After the irradiation time, the suspension was centrifuged and the supernatant used for agarose gel electrophoresis analysis. The samples were loaded on a 1% agarose gel (Promega) and, after electrophoresis, the DNA bands were stained and visualized with ethidium bromide (Promega).
Carbon exhibits a reactivity that depends upon its allotropic form. An integrated protocol to evaluate the oxidative potential of the materials has been used. The capability of NPs to interfere with the ROS was evaluated by means of EPR spectroscopy/spin-trapping or probing technique while the ability to cause oxidative damage to lipids, proteins and DNA were evaluated by means of UV-Vis spectroscopy, SDS-PAGE electrophoresis and agarose gel electrophoresis, respectively. Irradiation conditions. Irradiation experiments were performed with a 500 W mercury/xenon lamp (Oriel instruments) equipped with an IR water filter to avoid the overheating of the suspensions. Simulated solar light was obtained by applying a 400nm cut-off filter that let to pass about 5% of UV light in the UV A region. 2.5. Scavenging activity. Scavenging activity has been evaluated following the procedure already reported [9,10]. Briefly, hydroxyl radicals were generated by Fenton reaction or by irradiating with a UV lamp (ThermoOriel UV lamp) a solution of hydrogen peroxide directly in the EPR spectrometer cavity. The reaction was repeated in the presence of carbon soot. All the experiments were repeated at least twice. 2.6. Oxidative damage to plasmid DNA. The reactivity of powders toward double stranded supercoiled plasmid pYES2 (Invitrogen) were been investigated in order to evaluate the potential of samples to cause direct oxidative damage [11]. All experiments were performed with ~ 0.2 mg of powder suspended in 30 l of milliQ water and then vortexed. To this suspensions 3 L of DNA solution (concentration 100-150ng/L) were added and then exposed to a simulated solar light (UV-Vis lamp using a filter having a cut-off of 400nm) for 20 minutes. As a control, DNA was also irradiated in the absence of any powders in order to exclude a direct damage to this molecule. After irradiation time the suspension was centrifuged and the supernatant used for agarose gel electrophoresis analysis. The samples were loaded on a 1% agarose gel (Promega) and, after electrophoresis; DNA bands were stained and visualized with ethidium bromide (Promega). Oxidative damage to linoleic acid The reactivity of powders toward linoleic acid has been investigated in order to evaluate the potential of sample to cause oxidative damage directly to lipids following a procedure previously reported [7]. Briefly, powders and linoleic acid were continuously stirred under the ambient light at 37 ºC for 72 h and then the formation of MDA was evaluated. The assay is based on the reactivity of MDA, a colorless end product of degradation, with tiobarbituric acid (TBA) to produce a pink adduct that absorbs at 535 nm. Oxidative damage to proteins Bovine serum albumin (BSA) (Sigma-Aldrich, Germany) has been chosen as model protein. All experiments were performed with ~ 5 mg of powder suspended in 50 l of phosphate buffer 5 mM pH 7.4 and then sonicated for 5 minutes. To this suspensions 50 L of BSA solution (1 mg/ml in phosphate buffer 5 mM pH 7.4) were added and then exposed to a simulated solar light (UV-Vis lamp using a filter having a cut-off of 400nm) for 1 hour. After irradiation time 10 L of SDS 10% were added in order to eliminate adsorbed proteins at the powder surface and then centrifuged (10 minutes at 10000 rpm). The supernatant, boiled at 100°C in the presence of LAEMMLI solution, was used for SDS-PAGE electrophoresis analysis. 
Results and discussion Particles may generate reactive oxygen species by a direct mechanism (surface-derived ROS), or by an indirect one relying on the alteration of mitochondrial functions or the activation of cells of the immune system (cell-derived ROS) [12][13][14]. Oxidative stress may also derive from the release of redox-active ions from particles into biofluids [15], or follow the depletion of endogenous antioxidants by adsorption on, or reaction with, the particle surface [16,17]. Finally, damage may follow the direct reaction of biomolecules with particles. Both the generation of ROS and direct damage to biomolecules are related to the existence of surface sites accessible to the fluid and able to undergo redox cycling. The chemical nature of these reactive sites depends upon the type of solid. One of the most important reactions occurring at the surface of an inorganic material leading to the generation of ROS is the Fenton reaction, i.e. the generation of hydroxyl radicals through the reaction of hydrogen peroxide with metal ions in a low oxidation state: Fe2+ + H2O2 → Fe3+ + OH− + HO•. This reaction was reported to occur in the case of iron-containing minerals [18] or particles derived from grinding covalent solids, e.g. quartz [12]. Fenton-like reactivity. The ability to generate hydroxyl radicals in the presence of hydrogen peroxide was evaluated using EPR spectroscopy with the spin-trapping technique. Figure 1 reports the EPR spectra obtained by incubating titanium dioxide, amorphous silica or carbon nanoparticles in the presence of the spin trap DMPO and hydrogen peroxide. FeSO4 was used as positive control: in this case, the typical four-line EPR spectrum of DMPO/HO• was observed. Conversely, no signal was observed in the presence of the powders, suggesting that all these materials are unable to reduce hydrogen peroxide. Note that the isotropic sharp signal at field 3330-3340 G observed for carbon soot is due to intrinsic carbon-centred free radicals in the bulk of the material [19]. The reaction was performed in the dark, since TiO2 reacts with hydrogen peroxide if activated by UV light, generating hydroxyl and superoxide radicals through a different mechanism [7]. Titanium dioxide is in fact known to be a potent photocatalyst. It generates high amounts of reactive oxygen species (ROS), also in the absence of hydrogen peroxide, when exposed to UV light in wet conditions. Under UV irradiation, charge separation occurs in the bulk of the oxide, leading to the promotion of an electron into the conduction band and to the formation of a hole in the valence band. When the charge carriers reach the surface of the solid, reduction and oxidation reactions may occur following interaction with the surrounding medium. The redox potentials of the charge carriers allow the formation of highly reactive radical species such as superoxide radical anions (O2•−), through electron transfer to O2, hydroxyl (HO•) radicals through hole interaction with water, and singlet oxygen (1O2). The photogenerated holes may also oxidize organic molecules, generating carbon-centered free radicals. A comprehensive evaluation of the ROS generated by TiO2 may be obtained by EPR spectroscopy. The EPR spectra obtained by incubating the TiO2 sample in solutions containing different spin trap or probe molecules under simulated solar light are reported in figure 2 and compared with the spectra obtained in the dark. Four types of reactions have been considered.
2. The oxidation of formate to carbon dioxide radical anions: a remarkably intense six-line EPR spectrum corresponding to the trapped CO2•− radicals was observed under UV irradiation. A much less intense signal was also observed in the dark. 3. The reduction of oxygen to superoxide radicals (O2 + e− → O2•−): the typical signal of O2•− was observed under UV irradiation (Panel C), while no signal was detected in the dark. 4. The generation of singlet oxygen: being diamagnetic, this species is not detected by EPR. However, it may react with the spin probe 4-oxo-TMP to give a nitroxide radical, as previously reported by other authors [8]. TiO2 generates a large amount of singlet oxygen when irradiated by UV light (Panel D). A residual photocatalytic activity was also observed in the dark. In contrast to titanium dioxide, carbon soot and amorphous silica did not show any activity in generating free radicals, even when irradiated with simulated solar light. Scavenging of free radicals. While several materials exhibit at their surface redox-active sites able to induce oxidative stress, there are others that are intrinsically unable to generate free radicals. Amorphous silica and carbon are among them. However, there is considerable evidence that some carbon-based materials (e.g. fullerenes, carbon nanotubes and carbon black) may act as free radical acceptors [9,21,22]. The susceptibility of carbon-based materials to radical attack is well known and has been exploited to introduce functionalities at their surface [23][24][25]. Some of us previously reported that MWCNTs were able to scavenge oxygenated free radicals, and in particular hydroxyl radicals, the most reactive among ROS [9,10]. This property makes these materials promising in all applications where radical reactions need to be controlled, such as stabilizing additives for composites [26,27], and in medicine to prevent free-radical-mediated diseases (tumors, cardiovascular diseases and neurodegenerative disorders) [28]. Carbon soot, used in this study as a model for carbon particles, derives from incomplete combustion processes or the pyrolysis of carbon-containing materials, such as waste or fuel oils, diesel fuel, coal, wood, paper, plastic and rubber. It is mainly composed of elemental carbon, partially organized in graphenic/graphitic structures. Like other carbon-based nanomaterials, it scavenges HO• free radicals. Hydroxyl radicals may be generated by irradiating a solution of hydrogen peroxide with a UV lamp or by using the Fenton reaction (H2O2, FeSO4). If the reaction is performed in the presence of a spin trap molecule, an intense EPR signal is obtained (Figure 3, spectrum a). When the reaction is repeated in the presence of carbon soot, the signal disappears (Figure 3, spectrum b), confirming the ability of carbon-based materials to scavenge free radicals. As reported elsewhere, the mechanism of scavenging likely proceeds through the addition of the radicals to the particles [18]. Amorphous silica did not show any free-radical scavenging activity, in agreement with what was previously reported [9]. Damage to biomolecules. Reactive surface sites also react with organic molecules; the reaction with sodium formate reported above is an example. Biomolecules may also undergo degradative reactions initiated by surface reactive sites. Therefore, the ability to generate ROS is not the only parameter to be considered in the evaluation of the oxidative potential. The capability to directly damage lipids, proteins and DNA may be evaluated by cell-free experiments.
Lipid peroxidation was evaluated here by measuring the amount of malondialdehyde (MDA) generated after the reaction of linoleic acid with titanium dioxide or amorphous silica nanoparticles after 72 h of incubation under ambient light irradiation (Figure 4A). The potential for oxidative damage to proteins by titanium dioxide and amorphous silica nanoparticles was evaluated under simulated solar light irradiation for 1 h, using bovine serum albumin (BSA) as the model protein (Figure 4B). As expected, under irradiation TiO2 oxidized both linoleic acid and BSA, while amorphous silica did not. The oxidative potential toward DNA of titanium dioxide, amorphous silica and carbon nanoparticles was evaluated under simulated solar light irradiation. The results are reported in figure 5. DNA damage (strand breaks) was assessed by irradiating DNA in the presence of the powders for 20 minutes and analyzing the supernatant by agarose electrophoresis. Irradiated DNA (Figure 5, column 2) mainly remained in the supercoiled circular (SC) form, as did non-irradiated DNA (column 1), suggesting that direct irradiation did not damage DNA. Moreover, in both cases a band corresponding to the open circular (OC) form was present, indicating the presence of DNA already damaged, probably during the plasmid DNA preparation. Addition of either amorphous silica (column 3) or carbon soot (column 4) did not modify the supercoiled/open circular DNA pattern, suggesting that no damage occurs in the presence of these powders. On the contrary, addition of the TiO2 powder (column 5) caused an increase of the open circular form of DNA and a partial conversion into the linear (L) form (a very low intensity band), indicating damage to DNA caused by this powder. The simultaneous addition of TiO2 and carbon soot (column 6) partially reversed the damaging effect of the TiO2 powder on DNA. In fact, the bands corresponding to the SC and OC forms of DNA show equal intensity, indicating that damage still occurred, but the absence of the L form indicates that the damage is lower than that caused by TiO2 alone, suggesting a protective effect of carbon soot. The present data suggest that, in contrast to silica, carbon nanoparticles are not inert but are able to actively interact with cellular ROS homeostasis by acting as active antioxidants. Conclusions The growing number of engineered nanoparticles that will potentially enter the market makes urgent the need for screening protocols able to predict their toxic potential. The availability of validated high-throughput screening tests will accelerate both the definition of SARs for nanomaterials and the assessment of the risk related to their exposure. However, when performing screening tests, the peculiar chemical properties of each nanomaterial need to be considered. Furthermore, a detailed knowledge of the chemical processes which may occur at the nanoparticle/bio-system interfaces may help in understanding their behavior in vivo. Integrated chemical screening tests like the one proposed here might be promising tools for understanding such processes at the molecular level.
2018-10-14T20:25:27.602Z
2013-04-10T00:00:00.000
{ "year": 2013, "sha1": "f71f233def6d2af236ec47d33cb07fe579eee254", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/429/1/012024", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "010408c41def6178984b89093d00fef1ec0ceb14", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
255984266
pes2o/s2orc
v3-fos-license
PSAT1 is regulated by ATF4 and enhances cell proliferation via the GSK3β/β-catenin/cyclin D1 signaling pathway in ER-negative breast cancer

A growing amount of evidence has indicated that PSAT1 is an oncogene that plays an important role in cancer progression and metastasis. In this study, we explored the expression and function of PSAT1 in estrogen receptor (ER)-negative breast cancer. The expression level of PSAT1 in breast cancer tissues and cells was analyzed using real-time PCR (RT-PCR), TCGA datasets or immunohistochemistry (IHC). The overall survival of patients with ER-negative breast cancer stratified by PSAT1 expression levels was evaluated using Kaplan-Meier analysis. The function of PSAT1 was analyzed using a series of in vitro assays. Moreover, a nude mouse model was used to evaluate the function of PSAT1 in vivo. qRT-PCR and western blot assays were used to evaluate gene and protein expression, respectively, in the indicated cells. In addition, we demonstrated that PSAT1 was activated by ATF4 using chromatin immunoprecipitation (ChIP) assays. mRNA expression of PSAT1 was up-regulated in ER-negative breast cancer. A tissue microarray that included 297 specimens of ER-negative breast cancer was subjected to an immunohistochemistry assay, which demonstrated that PSAT1 was overexpressed and predicted a poor clinical outcome for patients with this disease. Our data showed that PSAT1 promoted cell proliferation and tumorigenesis in vitro and in vivo. We further found that PSAT1 induced up-regulation of cyclin D1 via the GSK3β/β-catenin pathway, which eventually led to the acceleration of cell cycle progression. Furthermore, ATF4 was also overexpressed in ER-negative breast cancers, and a positive correlation between ATF4 and PSAT1 mRNA levels was observed in ER-negative breast cancers. We further demonstrated that knockdown of ATF4 by siRNA reduced PSAT1 expression. Finally, chromatin immunoprecipitation (ChIP) assays showed that PSAT1 was a target of ATF4. PSAT1, which is overexpressed in ER-negative breast cancers, is activated by ATF4 and promotes cell cycle progression via regulation of the GSK3β/β-catenin/cyclin D1 pathway.

Background

Breast cancer is the most common female cancer and the second leading cause of cancer-related deaths among females worldwide [1]. Estrogen receptor (ER)-positive breast cancers account for 60-70% of all breast cancers, but the remaining 30-40% of breast cancers are ER-negative breast tumors, which do not express ER, a protein with both prognostic and predictive value [2,3]. Unfortunately, ER-negative breast cancers are resistant to endocrine therapy, which reduces recurrence and mortality rates whether or not chemotherapy is given [3,4]. Therefore, ER-negative breast cancers recur and metastasize more readily, and consequently, patients with this cancer type have a worse prognosis and shorter survival compared with those with ER-positive breast cancers. This underscores the importance of identifying new prognostic markers and additional drug targets for this class of breast cancer. Serine plays an essential role in the synthesis of biomolecules that support cell proliferation. Recent evidence implies that hyperactivation of serine synthesis contributes to tumorigenesis [5]. Within cells, serine is synthesized through a three-step reaction. First, 3-phosphoglycerate is oxidized into phosphohydroxypyruvate (pPYR) by phosphoglycerate dehydrogenase (PHGDH).
Subsequently, phosphohydroxypyruvate (pPYR) is converted by phosphoserine aminotransferase (PSAT1) to phosphoserine (pSER), which is then dephosphorylated by phosphoserine phosphatase (PSPH) to form serine. Two recent studies have reported that the gene encoding phosphoglycerate dehydrogenase (PHGDH) is amplified in a significant subset of human tumors, which supports the idea that metabolic reprogramming occurs as the result of genomic modifications of metabolic enzymes, which independently contribute to tumorigenesis [6,7]. PSAT1 has been found to be up-regulated in colon cancer, esophageal squamous cell carcinoma (ESCC) and non-small cell lung cancer (NSCLC), and has been shown to enhance cell proliferation, metastasis and chemoresistance, all of which contribute to a poor prognosis [8-11]. However, the expression and underlying mechanism of PSAT1 in ER-negative breast cancer are not well understood. These observations prompted us to investigate the role of PSAT1 in the initiation and development of ER-negative breast cancer.

Activating transcription factor 4 (ATF4) is a member of the cyclic adenosine monophosphate responsive element-binding (CREB) protein family and has been reported to be a potent stress-response gene expressed in a wide variety of tumors [12,13]. ATF4 can protect tumor cells against stresses and a range of cancer therapeutic agents via the regulation of cellular adaptation to adverse circumstances [14-19]. Previous studies have shown that ATF4 is overexpressed in many tumors, which suggests that it may play an important role in tumor formation, progression and metastasis [17,19-22]. In the current study, PSAT1 was significantly up-regulated in ER-negative breast cancer and was correlated with a poor patient prognosis. Moreover, PSAT1 was found to be regulated by ATF4, which then activated the GSK-3β/β-catenin pathway. This resulted in the enhancement of cyclin D1 expression and the promotion of cell proliferation.

Patients and tissue specimens

The archival material used in this study was obtained from the Department of Pathology at the Harbin Medical University Cancer Hospital and included tissues from 297 patients with histologically confirmed ER-negative breast cancer (Additional file 1) and 112 matched normal tissue samples from patients who presented from 2006 to 2007. For the extraction of protein and RNA, fresh tissues from individuals with ER-negative breast cancer and normal controls were collected and stored at −80°C immediately after resection [23]. None of the patients received adjuvant chemotherapy, immunotherapy, or radiotherapy before surgery, and patients with recurrent tumors, metastatic disease, bilateral tumors, or other previous tumors were excluded. Pathologists examined the tumors to confirm the diagnoses of ER-negative breast cancer and benign breast disease. After surgery, adjuvant systemic therapy was determined according to the National Comprehensive Cancer Network (NCCN) guidelines. This study was approved by the Ethical Committees of Harbin Medical University. Written informed consent was obtained from all subjects who participated in this study.

Plasmid, lentivirus production and infection

For the knockdown of PSAT1, two human PSAT1-targeted RNAi sequences (RNAi#1: TTCCAAGTTTGGTGTGATT; RNAi#2: ACTCAGTGTTGTTAGAGAT) were obtained from GeneChem Co. Ltd. (Shanghai, China). As a control, scrambled versions of these sequences were used.
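For illustration, a scrambled control preserves the nucleotide composition of the target while destroying its sequence; the minimal sketch below shuffles the published RNAi#1 target. The actual scrambled sequences supplied by GeneChem are not reported, so the output is purely illustrative (in practice a candidate scramble would also be checked against the transcriptome for off-target matches).

```python
import random

def scrambled_control(target: str, seed: int = 42) -> str:
    """Shuffle a target sequence while preserving its nucleotide composition,
    a common way of constructing a scrambled (non-targeting) control."""
    bases = list(target)
    random.Random(seed).shuffle(bases)
    return "".join(bases)

def gc_content(seq: str) -> float:
    """Percent G+C of a DNA sequence."""
    return 100.0 * sum(base in "GC" for base in seq) / len(seq)

rnai1 = "TTCCAAGTTTGGTGTGATT"  # PSAT1 RNAi#1 target sequence from the Methods
control = scrambled_control(rnai1)
assert sorted(control) == sorted(rnai1)  # identical base composition
print(control, f"(GC {gc_content(control):.1f}% vs target GC {gc_content(rnai1):.1f}%)")
```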
These targeting sequences were inserted into the GV248 vector plasmid. For the overexpression of PSAT1, full-length human PSAT1 cDNA was cloned into the pLVX-puro vector. Lentiviral particles were constructed and packaged by Shanghai GeneChem Co. Ltd. Briefly, the cells were infected with lentivirus to generate stable cell lines. After 24 h, the cells were transferred to medium containing 4 μg/ml puromycin and were cultured for 3 days.

Cell proliferation assays

A cell proliferation assay was performed with a CCK-8 kit (Beyotime Institute of Biotechnology, Shanghai, China) according to the manufacturer's instructions. Briefly, 2 × 10³ cells were plated in each well of a 96-well plate and were cultured overnight. According to the instructions, Cell Counting Kit-8 (CCK-8) reagent was added at 24, 48, 72 or 96 h and incubated at 37°C for 1 h. Each assay was independently repeated three times in triplicate.

Colony formation assays

Cells were plated into a 6-well plate and cultured in media containing 10% FBS for 14 days. Colonies were fixed in methanol for 30 min, and 500 μl of 0.5% crystal violet (Sigma, St. Louis, MO, USA) was added to each well for 30 min for visualization and counting.

Migration and invasion assays

Cells in serum-free media were placed into the upper chamber of an insert for the migration assays (8-μm pore size, Millipore), while for the invasion assays, the cells were seeded on plates coated with Matrigel (Sigma-Aldrich, USA). Medium containing 10% FBS was added to the lower chamber. After incubation at 37°C for 12 h (migration) or 24 h (invasion), non-invading cells that remained in the top chambers were removed with a cotton swab, and the cells that had migrated to the underside of the membrane were fixed in 100% methanol for 30 min, air-dried, stained with 0.5% crystal violet, imaged, and counted under a light microscope.

Flow cytometry analysis

Cells were seeded in 6-well plates, and after 24 h, the cells were harvested and washed twice with cold PBS. For the cell cycle analysis, the cells were fixed in ice-cold 75% ethanol overnight at 4°C. After fixation, the cells were washed twice and resuspended in PBS and were then incubated with propidium iodide (BD Bioscience, San Jose, CA, USA) and RNase for 30 min at room temperature. For the cell apoptosis analysis, the cells were stained with PE Annexin V and 7-AAD (BD Bioscience, San Jose, CA, USA) for 15 min at room temperature. The cells were then analyzed using a FACSCalibur flow cytometer (BD Biosciences, San Jose, CA, USA).

Immunohistochemistry (IHC)

A tissue microarray (TMA) that included samples from 297 consecutive patients with histologically confirmed estrogen receptor-negative breast cancer and 112 controls was generated according to a previously described method [26]. The tissue sections were dried at 70°C for 3 h before deparaffinization and hydration. Subsequently, the sections were washed with phosphate-buffered saline (PBS; 3 × 3 min). The washed sections were treated with 3% H2O2 in the dark for 5 to 20 min. After washing in distilled water, the sections were again washed with PBS (3 × 5 min). Antigen retrieval was performed in citrate buffer (pH 6.0) at 100°C for 10 min. Each section was incubated with the polyclonal primary rabbit antibody against PSAT1 at a 1:100 dilution (Abcam, Cambridge, MA, USA) overnight at 4°C. After washing with PBS (3 × 5 min), each section was further incubated with an anti-rabbit secondary antibody (1:200; Abcam, Cambridge, MA, USA) at room temperature for 30 min. After another wash in PBS (3 × 5 min), each section was immersed in 500 μl of diaminobenzidine (DAB) working solution at room temperature for 3 to 10 min. Finally, the slides were counterstained with hematoxylin and mounted in crystal mount medium. PSAT1 expression was analyzed and scored independently by two observers based on the intensity and distribution of positively stained tumor cells, which appeared as yellow particles in the cytoplasm. The PSAT1 staining index was classified into four groups: level 0 (no staining), level 1 (0-20% of tumor cells stained), level 2 (20-50% of tumor cells stained) and level 3 (>50% of tumor cells stained). Overall expression was then graded as either negative (level 0) or positive (levels 1-3) [8].
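The staining index just described maps percentages of stained tumor cells to levels 0-3 and then to a binary grade. A minimal sketch of that scoring rule follows; how ties at the 20% and 50% boundaries were resolved is not stated in the text, so the boundary handling here is an assumption.

```python
def psat1_staining_level(pct_stained: float) -> int:
    """Staining index from the text: level 0 = no staining; level 1 = 0-20%,
    level 2 = 20-50%, level 3 = >50% of tumor cells stained. Assigning the
    boundary values (20%, 50%) to the lower level is an assumption."""
    if pct_stained <= 0:
        return 0
    if pct_stained <= 20:
        return 1
    if pct_stained <= 50:
        return 2
    return 3

def overall_expression(level: int) -> str:
    """Overall grading from the text: negative (level 0) vs positive (levels 1-3)."""
    return "negative" if level == 0 else "positive"

for pct in (0, 10, 35, 80):
    level = psat1_staining_level(pct)
    print(f"{pct:>3}% stained -> level {level} ({overall_expression(level)})")
```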
Animal experiments

Animal experiments were approved by the Medical Experimental Animal Care Commission of Harbin Medical University. BALB/C-nu/nu nude mice were obtained from Beijing Vital River Laboratory Animal Technology Company. Approximately 5 × 10⁶ cells (HCC70-NC or HCC70-KD) or 8 × 10⁶ cells (BT-549-Vector or BT-549-PSAT1) in 200 μl of serum-free medium were injected directly into the right dorsal flank of each mouse. Tumor growth was measured with calipers every 3 days, and tumor volumes were calculated using the formula: volume = 1/2 × length × width². Mice were euthanized and tumor weight was examined 27 days after the injections.

Chromatin immunoprecipitation assay

Chromatin immunoprecipitation (ChIP) assays were performed using the ChIP Assay Kit (Beyotime, Shanghai, China) according to the manufacturer's protocol with slight modifications. Cells were cross-linked with 1% formaldehyde, and cross-linking was terminated after 10 min by the addition of glycine at a final concentration of 0.125 M. DNA was immunoprecipitated from the sonicated cell lysates using an ATF4 antibody (Cell Signaling Technology, Beverly, MA, USA); IgG (BD Biosciences, San Diego, CA, USA) served as the negative control. The DNA was subjected to PCR to amplify the ATF4 binding sites, and the amplified fragments were then analyzed on an agarose gel. Chromatin (10%) taken before immunoprecipitation was used as the input control. The primer sequences were as follows: 5′-GTTTGCATCCCTGCGTGT-3′ and 5′-CCGAGCTTCCTCACCAACT-3′.

Statistical analyses

Data analyses were performed using GraphPad (GraphPad Prism, La Jolla, CA, USA), Excel (Microsoft Corp, Redmond, WA, USA) and SPSS 20.0 (SPSS, Chicago, IL, USA). The Chi-square test was used to assess correlations between PSAT1 expression and the clinicopathological features of ER-negative breast cancer patients. Survival curves were generated using the Kaplan-Meier method and the log-rank test. Student's t-test was used to determine significant differences between two experimental conditions. Data from The Cancer Genome Atlas for breast invasive carcinoma (TCGA BRCA) were downloaded from the UCSC Xena Database (https://tcga.xenahubs.net/download/TCGA.BRCA.sampleMap/HiSeqV2.gz; Full metadata) and were used to detect PSAT1 and ATF4 expression in various types of breast cancer. The level of significance was set at P < 0.05.
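The survival comparison described above (Kaplan-Meier curves stratified by PSAT1 expression, compared with the log-rank test) can be sketched as follows. The lifelines package is assumed, and the follow-up data below are invented for illustration, since the per-patient data are not given in the text.

```python
# Minimal sketch of a Kaplan-Meier + log-rank analysis stratified by
# PSAT1 expression. All follow-up times and events below are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":   [12, 25, 40, 60, 60, 8, 15, 22, 30, 60],  # follow-up time
    "died":     [1, 1, 0, 0, 0, 1, 1, 1, 0, 0],           # event indicator
    "psat1_hi": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],           # expression group
})

km = KaplanMeierFitter()
for grp, label in ((0, "PSAT1 low"), (1, "PSAT1 high")):
    sub = df[df.psat1_hi == grp]
    km.fit(sub.months, sub.died, label=label)
    print(km.survival_function_.tail(1))  # survival estimate at last time point

res = logrank_test(
    df.months[df.psat1_hi == 0], df.months[df.psat1_hi == 1],
    event_observed_A=df.died[df.psat1_hi == 0],
    event_observed_B=df.died[df.psat1_hi == 1],
)
print(f"log-rank p = {res.p_value:.3f}")
```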
PSAT1 was overexpressed in ER-negative breast cancer specimens as well as in breast cancer cell lines

To investigate the potential role of PSAT1 in breast cancer, we first analyzed PSAT1 mRNA expression in breast cancer RNAseq data from the TCGA (Fig. 1a-c). We found that PSAT1 expression was significantly downregulated in breast cancers compared with normal tissues. Interestingly, PSAT1 expression was dramatically up-regulated in ER-negative breast cancers compared with ER-positive breast cancers and normal tissues (P < 0.0001). Moreover, the results of the TCGA data analysis were validated using real-time PCR in 72 ER-negative breast tumors and 39 non-cancerous breast tissues (Fig. 1d). These results confirmed that the expression levels of PSAT1 mRNA were significantly increased in ER-negative breast cancers compared with non-tumor tissues. Next, the difference in PSAT1 protein expression levels between ER-negative breast cancer and normal breast tissues was investigated using immunohistochemistry and western blotting. When 297 ER-negative breast tumor samples and 112 non-cancerous samples were analyzed by immunohistochemistry for PSAT1, positive staining (brown) was detected in the majority of ER-negative tumor tissues but was detected less frequently in non-cancerous tissues (Fig. 1e, f), indicating that protein expression of PSAT1 is elevated in ER-negative breast cancers compared with non-tumor tissues (p = 0.002). The same trend was also observed in the western blotting analysis (Fig. 1g). Across a set of breast cancer cell lines, ER-negative breast cancer cell lines had higher PSAT1 protein expression compared with non-transformed MCF-10A and MCF-7 (ER-positive) cells (Fig. 1h). This is consistent with the finding that PSAT1 expression was up-regulated at the mRNA and protein levels in a higher fraction of ER-negative breast cancers. Overall, ER-negative breast cancer cell lines and tissues exhibited relatively higher levels of PSAT1 expression. These results imply that PSAT1 overexpression may play an important role in the development of ER-negative breast cancer.

The clinical significance of PSAT1 in patients with ER-negative breast cancer

To further investigate whether PSAT1 overexpression is involved in ER-negative breast cancer progression, the correlation between PSAT1 levels and the clinicopathological features of ER-negative breast cancers was examined. The relative mRNA expression levels of PSAT1 were analyzed in 72 ER-negative breast tumors, and PSAT1 up-regulation was strongly associated with tumor size (P < 0.05, Fig. 2a) and axillary lymph node metastasis (P < 0.05, Fig. 2b). As shown in Table 1, statistical analyses of the IHC results revealed that PSAT1 was positively correlated with tumor size (P = 0.024), TNM stage (P = 0.026) and Ki67 status (P = 0.028). We also detected PSAT1 expression in 107 of the 145 (73.79%) patients with histological grade I and II tumors, and in 80 of 152 (52.63%) patients with grade III tumors (P < 0.001). However, no significant association was found between PSAT1 and age, LNM, Her-2 status or P53 status. Therefore, we hypothesize that high expression of PSAT1 may be involved in tumor cell proliferation and may play an important role in ER-negative breast cancer development. Additionally, the Kaplan-Meier 5-year survival analysis showed that patients with ER-negative breast cancer with higher expression of PSAT1 had a remarkably poorer prognosis than those with low PSAT1 expression (P = 0.016, log-rank test; Fig. 2c). Together, these data suggest that high expression of PSAT1 may serve as a biomarker for poor prognosis in ER-negative breast cancer.

Manipulation of PSAT1 levels in ER-negative breast cancer cells

We performed western blotting analysis to compare the expression levels of PSAT1 in various ER-negative breast cancer cell lines with those in non-transformed MCF-10A and ER-positive MCF-7 cell lines. As shown in Fig. 1h, HCC-70 and MDA-MB-468 cells expressed higher levels of PSAT1 than the other cell lines.
Therefore, to determine the function of PSAT1 in ER-negative breast cancer cells, HCC70 and MDA-MB-468 cells were infected with two specific short hairpin RNAs (shRNAs) using a lentivirus-mediated system to generate the HCC70-KD and MDA-MB-468-KD cell lines. BT-549 cells stably overexpressing PSAT1 were established with a PSAT1 vector using a lentivirus-mediated system. We then examined the protein expression level of PSAT1 in these target cells. As shown in Figs. 3a and 4a, compared with control cells, PSAT1 was significantly knocked down in MDA-MB-468-KD and HCC70-KD cells, whereas PSAT1 expression was increased in BT-549-PSAT1 cells.

Knockdown of PSAT1 inhibited tumorigenicity of ER-negative breast cancer cells

To investigate the potential role of PSAT1 in ER-negative breast cancer cells, CCK-8 assays were performed. As shown in Fig. 3b, the knockdown of PSAT1 significantly suppressed the viability of these two breast cancer cell lines compared with control cells. Moreover, the colony formation ability of these cells was drastically inhibited after PSAT1 was silenced compared with their respective controls (Fig. 3c). Given that the knockdown of PSAT1 inhibited the proliferation of ER-negative breast cancer cells, we sought to explore the underlying mechanisms using flow cytometry analysis. As shown in Fig. 3d, the flow cytometry results supported the idea that the suppression of PSAT1 led to a remarkable increase in the proportion of cells in G0/G1 phase, as well as a notable decrease in the proportion of cells in S phase, compared with negative control HCC70 and MDA-MB-468 cells. Taken together, these results indicate that the knockdown of endogenous PSAT1 suppressed cell proliferation in vitro and inhibited the G1/S transition of ER-negative breast cancer cells.

[Fig. 3 caption: Knockdown of PSAT1 inhibited tumorigenicity of ER-negative breast cancer cells. (a) Western blot shows PSAT1 expression in HCC70 and MDA-MB-468 cells infected with Lenti-shPSAT1 or control; β-tubulin was used as a loading control. (b) CCK-8 assay was performed to determine the effect of PSAT1 silencing on the proliferation of the indicated cells at the indicated time points. (c) Knockdown of PSAT1 suppressed the colony formation ability of HCC70 and MDA-MB-468 cells compared with that of control cells; the values of the control cells were normalized to 1. For (b) and (c), the results are expressed as the mean ± SD; n = 3. (d) Cell cycle analysis of the indicated cells according to flow cytometry. *p < 0.05, **p < 0.01, ***p < 0.001.]

Overexpression of PSAT1 promoted breast cancer cell proliferation in vitro

To further validate the role of PSAT1 in the proliferation of ER-negative breast cancer cells, exogenous PSAT1 was stably transduced into BT-549 cells (Fig. 4a). As expected, compared with control cells, ectopic overexpression of PSAT1 significantly increased proliferation (Fig. 4b). Similarly, the result of the colony formation assay showed that clonogenic survival was enhanced following elevated PSAT1 expression in BT-549 cells (Fig. 4c). As shown in Fig. 4d, flow cytometry showed that ectopic PSAT1 expression markedly increased the proportion of S-phase cells and decreased the percentage of cells in G0/G1 phase. Collectively, these results suggest that exogenous PSAT1 promoted the G1/S transition and thus enhanced the proliferation of ER-negative breast cancer cells.
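The xenograft experiments described next report caliper-based tumor volumes computed with the formula given in the Methods, volume = 1/2 × length × width². A quick worked example (caliper readings hypothetical):

```python
def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Caliper formula from the Methods: volume = 1/2 * length * width^2."""
    return 0.5 * length_mm * width_mm ** 2

# Hypothetical caliper readings: a 12 mm x 10 mm tumor.
print(tumor_volume_mm3(12.0, 10.0))  # 600.0 (mm^3)
```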
PSAT1 enhanced tumor formation of ER-negative breast cancer cells in a xenograft model

Immunodeficient BALB/c mice carrying HCC70-NC and HCC70-KD1 tumor cells were used to ascertain the role of PSAT1 in the tumorigenesis of ER-negative breast cancer in vivo. HCC70-NC and HCC70-KD1 tumor cells were delivered subcutaneously into nude mice, and after 27 days of growth, the tumors were harvested and analyzed (Fig. 5a). As expected, the silencing of PSAT1 significantly suppressed HCC70 tumor growth in mice compared with the control group. The mean tumor volume (Fig. 5b) was significantly decreased from 844.0 ± 87.31 mm³ to 350.7 ± 83.69 mm³, and the mean tumor weight (Fig. 5c) declined from 1.000 ± 0.05774 g to 0.5500 ± 0.1088 g (both p < 0.001). Moreover, we also performed xenograft studies using BT-549 cells stably overexpressing PSAT1 (Fig. 5d). As shown in Fig. 5e and f, when PSAT1 was overexpressed, the in vivo tumor volume and weight of BT-549 cells were significantly increased from 218.3 ± 40.28 mm³ to 877.0 ± 81.04 mm³ (p < 0.0001) and from 0.2000 ± 0.03651 g to 0.7833 ± 0.07032 g (p < 0.0001), respectively. These results suggest that PSAT1 enhanced tumor growth of ER-negative breast cancer in vivo.

PSAT1 regulated the expression of cyclin D1 through the GSK3β/β-catenin pathway

Next, we investigated a potential mechanism for PSAT1 in the promotion of ER-negative breast cancer cell proliferation. Considering the function of PSAT1 in the promotion of the G1/S phase transition in ER-negative breast cancer cells, cyclin D1, which is well known as an important regulator of G1 to S phase progression in many different cell types, was assessed by western blotting. As predicted, the expression of cyclin D1 was decreased in PSAT1-suppressed HCC70 and MDA-MB-468 cells but was increased in PSAT1-overexpressing BT-549 cells (Fig. 6a). Glycogen synthase kinase-3β (GSK3β), a serine/threonine protein kinase, has been considered a potential tumor suppressor due to its ability to phosphorylate other proteins; it has numerous cellular targets, including cyclin D1 and β-catenin [27,28]. Hence, we examined the expression of GSK3β and determined its phosphorylation status. As shown in Fig. 6a, the phosphorylation of GSK3β was inhibited when PSAT1 was silenced but was enhanced by the introduction of ectopic PSAT1. Interestingly, the PSAT1 expression level was positively correlated with β-catenin expression (Fig. 6a and b). In addition, we also observed that the up-regulation of PSAT1 expression markedly promoted the accumulation of cytoplasmic/nuclear β-catenin and caused the translocation of β-catenin from the cytoplasm to the nucleus. In contrast, the inhibition of PSAT1 dramatically attenuated the protein level of β-catenin in both the cytoplasm and the nucleus (Fig. 6c).

PSAT1 is an important target of ATF4

A previous study has shown that ATF4 transcriptionally activates serine biosynthetic genes in response to serine starvation in non-small cell lung cancer cells [29]. We found that the expression of ATF4 was significantly up-regulated in ER-negative breast cancers compared with ER-positive breast cancers and normal tissue (Fig. 7a and b). As shown in Fig. 7c, a positive correlation was observed between PSAT1 and ATF4 in ER-negative breast cancers (r = 0.2836; P < 0.0001). Consistently, in our current study, the silencing of ATF4 significantly reduced the mRNA and protein expression of PSAT1 (Fig. 7d and e).
CCK-8 assays revealed that the silencing of ATF4 significantly reduced cell growth compared with the control (Fig. 7f, top). Moreover, the cell growth activity enhanced by PSAT1 required ATF4, because the knockdown of ATF4 not only decreased the proliferation rate but also attenuated the enhancing effect caused by stable overexpression of PSAT1 (Fig. 7f, bottom). To further validate our results, promoter scanning using the JASPAR Database showed that the PSAT1 promoter region contains a highly likely ATF4 binding site (Fig. 7h). ChIP assays further confirmed that ATF4 was efficiently bound to the PSAT1 promoter-specific region in both BT-549 and MDA-MB-468 cells. Taken together, these data indicate that ATF4 directly enhanced PSAT1 expression in ER-negative breast cancer.

[Fig. 7 caption, in part: (f) The growth rate of the indicated cells was evaluated using a CCK-8 assay. (g) Schematic representation of the predicted ATF4 binding site within the PSAT1 promoter. (h) RT-PCR of the ChIP products validated the binding capacity of ATF4 to the PSAT1 promoter; the results of IgG were normalized to 1. *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001. TCGA = The Cancer Genome Atlas; BRCA = breast carcinoma.]
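A promoter scan of the kind performed with JASPAR can be illustrated with a toy consensus search. In the sketch below, both the motif and the promoter fragment are placeholders chosen for illustration; a real scan would use the JASPAR position weight matrix for ATF4 rather than a fixed consensus string.

```python
# Toy promoter scan: slide an ATF4-like consensus over a promoter sequence
# and report near-exact matches. Motif and promoter fragment are illustrative
# placeholders, not the actual PSAT1 promoter or the JASPAR ATF4 matrix.
def scan(promoter: str, motif: str, max_mismatch: int = 1):
    """Yield (position, site) for windows within `max_mismatch` of the motif."""
    for i in range(len(promoter) - len(motif) + 1):
        window = promoter[i : i + len(motif)]
        if sum(a != b for a, b in zip(window, motif)) <= max_mismatch:
            yield i, window

motif = "TGATGCAA"  # ATF4-like consensus half-site, used here for illustration only
promoter = "CCGTTGATGCAATTGCATCCCTGCGTGTAGC"  # hypothetical promoter fragment
for pos, site in scan(promoter, motif):
    print(f"match at position {pos}: {site}")
```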
Discussion

It is well known that cancer cells possess distinct metabolic characteristics that distinguish them from nonmalignant cells. Recent evidence has shown that serine metabolic reprogramming is due to corresponding genetic changes in metabolic enzymes and that these gene modifications independently contribute to tumorigenesis [6,7]. PSAT1 is the gene encoding phosphoserine aminotransferase, which catalyzes a step of serine biosynthesis. PSAT1 is overexpressed in colon cancers, where it contributes to cell proliferation and chemoresistance, resulting in a poor prognosis [9]. Liu et al. [8] have shown that PSAT1 expression was elevated in ESCC and that it was significantly associated with disease stage, lymph node metastasis, distant metastasis and poor outcome. A recent study has shown that the expression of PSAT1 was up-regulated in NSCLC, verified by an IHC assessment of 138 specimens and a qRT-PCR assay, and its overexpression has also been associated with a poor prognosis in NSCLC. Martens et al. [30,31] showed that PSAT1 inactivation by promoter methylation and low PSAT1 mRNA levels were both associated with a good outcome after tamoxifen treatment in ER-positive breast cancer. Our current study found for the first time that the expression of PSAT1 was significantly up-regulated in ER-negative breast cancers compared with ER-positive breast cancers, a finding supported by the TCGA dataset. We then confirmed this finding by IHC using a tissue microarray, qRT-PCR and western blotting. Statistical analysis of these results showed that PSAT1 up-regulation was correlated with tumor development and poor prognosis. Previous studies have shown that PSAT1 plays a vital role in cell proliferation, acting as an oncogene in colon cancer and NSCLC [9,11]. Possemato et al. [6] have shown that inhibition of PSAT1, by suppressing serine production, significantly decreased the proliferation of ER-negative breast cancer cells (MDA-MB-468 and BT-20) but not ER-positive breast cancer cells (MCF7). In this study, we also identified the function of PSAT1 in ER-negative breast cancer cells by applying gain- and loss-of-function approaches.

We found that PSAT1 regulates the expression of cyclin D1, an important regulator of the G1 to S phase transition in a variety of cancers, including breast cancer, to promote cell cycle progression [32-34]. Glycogen synthase kinase-3 (GSK-3), a serine/threonine protein kinase, was initially considered to be a key enzyme involved in glycogen metabolism [35] but is now recognized as a regulator of diverse cellular functions [36,37]. Through its kinase activity, GSK-3β is able to target cyclin D1 and β-catenin [28,38] for ubiquitin-dependent proteasomal degradation. Our current study has shown that PSAT1 enhanced the stability of cyclin D1 via the induction of the phosphorylation of GSK-3β. GSK-3β was inactivated by phosphorylation, which resulted in the accumulation and nuclear translocation of β-catenin [39,40]. Consistently, we found that PSAT1 promoted the stability of β-catenin and its translocation into the nucleus through enhanced phosphorylation of GSK-3β. β-catenin signaling has often been demonstrated to up-regulate the transcription of cyclin D1 [41]. It is worth noting that our current study of PSAT1 focused on GSK-3β, through which PSAT1 eventually enhanced the proliferation and metastasis of tumor cells [8,11]. Our current study found that PSAT1 enhanced the migration and invasiveness of ER-negative cells but reduced apoptosis (Additional file 2: Figure S1A and B). Given that previous studies have shown that GSK-3β is a promising target for cancer treatment, further research on the mechanism linking PSAT1 and GSK-3β in ER-negative breast cancer may provide more valuable insight into optimal treatments for this type of breast cancer. Yan et al. [42] reported that PSAT1 was a direct target of miR-340 and that its overexpression partially reversed miR-340-mediated inhibition of viability, invasion and EMT in ESCC cells. ATF4 transcriptionally activates serine biosynthetic genes in response to serine starvation in non-small cell lung cancer, and it has additionally been shown to play a crucial role in the regulation of PSAT1 after OSN (Oct4, Sox2, and Nanog) expression in mouse embryonic stem cells [43,44]. In this study, we found that the knockdown of ATF4 led to the down-regulation of PSAT1, and ChIP confirmed that ATF4 was bound to the ATF4-binding consensus sequences in the PSAT1 promoter in ER-negative breast cancer cells.

Conclusions

To conclude, our study demonstrated for the first time that PSAT1 is significantly up-regulated in ER-negative breast cancer. This up-regulation enhanced the proliferation of ER-negative breast cancer cells in vitro via the GSK3β/β-catenin/cyclin D1 pathway and promoted tumor development in vivo. In addition, further investigation showed that PSAT1 was activated directly by ATF4 in ER-negative breast cancer. These results indicate that, as an oncogene, PSAT1 plays a vital role in the development of ER-negative breast cancer.
2023-01-19T21:28:00.004Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "e5a070bff83b89fff9760b2aebd01ec94a9a6857", "oa_license": "CCBY", "oa_url": "https://jeccr.biomedcentral.com/track/pdf/10.1186/s13046-017-0648-4", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "e5a070bff83b89fff9760b2aebd01ec94a9a6857", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
150146836
pes2o/s2orc
v3-fos-license
Diabetic Foot Management: A 10-Year Study

The medical records of 324 diabetic patients admitted to Al-Sader Teaching Hospital (previously Saddam Teaching Hospital) with foot lesions between April 1994 and April 2004 were studied retrospectively. Data were collected for various parameters, both personal and medical. The majority of patients were males, over fifty years of age and known diabetics. Peripheral neuropathy was the main predisposing factor, while infected ulcers and gangrene of the toe(s) were the most common forms of presentation. Wound swabs were positive on bacterial culture in 215 pts. (66.3%), 97.2% of which were polymicrobial. Debridement was the most common surgical procedure. There were 6 deaths (1.85%) in the study group, mainly due to uncontrolled sepsis with concurrent medical illnesses. It is concluded that foot complications are a common problem in elderly Iraqi diabetics, particularly males; that peripheral neuropathy is the most common predisposing factor; that foot infections are usually polymicrobial; and that the majority of patients will need some form of surgical intervention, mostly conservative rather than a major amputation. We suggest a team approach to the care of these patients, which can be provided by establishing foot care clinics in large hospitals.

Introduction

Foot infections are a common and potentially serious problem for diabetic patients [1]. Nearly one half of all lower extremity amputations in diabetic patients occur as a result of uncontrolled infection, even in the presence of adequate blood supply [2]. If amputation results, the contralateral limb is placed at greater risk of future disease and amputation [3]. In addition to the major financial cost of medical care for this problem, the consequences include extensive human suffering, prolonged functional disability, risk of limb loss and associated mortality. When infection of the lower extremity develops in a diabetic patient, a team approach to management must be employed. Optimal management of diabetes, aggressive local care, systemic antibiotics and surgery all play important roles in determining the outcome [4]. Foot complications in diabetic patients are a common clinical problem in hospital practice in Iraq. This retrospective analysis was undertaken to study the epidemiology of foot complications in diabetics attending Al-Sader Teaching Hospital, with a view to adding to the local data and comparing our results with other local and international studies.
Materials and Methods

The medical records of all diabetic patients with foot lesions admitted to Al-Sader Teaching Hospital between April 1994 and April 2004 were analyzed. Data were collected on sex, age, duration of diabetes mellitus, nature of the foot lesions, presence of peripheral vascular disease and/or peripheral neuropathy, predisposing factors, concurrent medical illness, microbial flora of the foot lesions, types and numbers of surgical procedures, duration of hospital stay, morbidity and mortality. All patients, on admission, underwent history taking and clinical examination, including examination of the lower limb for ischemia, peripheral neuropathy and the foot lesion itself. Peripheral vascular disease was defined as the presence of ischemic symptoms, such as intermittent claudication or rest pain, and/or the absence of pedal pulses (pedal pulses were assessed by palpation in all patients). Peripheral neuropathy was considered to be present in the absence of pain in the foot lesion. Complete blood count, fasting blood glucose level, blood urea, serum creatinine and electrolytes, urine analysis and radiological studies were all done for each patient on admission. Specimens for microbial culture were collected from the depths of the wounds. In some patients, wound swabs were also collected at the time of surgical debridement. Insulin treatment was commenced in consultation with a physician. Antibiotics were generally given after the collection of swabs from the foot lesions. Ampicillin, gentamicin and metronidazole, or a third-generation cephalosporin with metronidazole, were the usual combinations used. Antibiotics were subsequently changed according to culture and sensitivity reports. Antibiotics were usually given for 10-14 days in most patients; however, they were used for longer periods in patients with persistent sepsis. Further management included daily wound dressing with povidone-iodine solution, normal saline soaks and occasionally hydrogen peroxide solution, surgical drainage of abscesses, wound debridement and amputation.

Results

Three hundred and twenty-four patients were admitted with foot problems during the ten-year period of the study (1994-2004). The majority (254 pts.; 78.35%) were males. Those over fifty years of age numbered 289 pts. (89.15%), and only 19 pts. (5.85%) were under forty years of age (Table I). Two hundred and ninety-one patients (89.8%) were known diabetics, while 33 (10.1%) were discovered to have diabetes on admission to hospital. In 282 pts. (87.03%), random and fasting blood glucose levels on admission were >12 mmol/L (normal level up to 10 mmol/L) and >7 mmol/L (normal level up to 6 mmol/L), respectively. A history of trauma preceding the foot lesion was present in 92 pts. (28.3%), while features of peripheral neuropathy were documented in 213 pts. (65.7%).
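The headline proportions of the series follow directly from the reported counts (n = 324); the quick check below recomputes them, with any small deviations from the quoted figures reflecting rounding in the original report.

```python
# Recomputing the headline proportions from the reported counts
# (n = 324 admissions over ten years). Minor differences from the
# percentages quoted in the text reflect rounding in the original report.
counts = {
    "male patients": 254,
    "age over fifty": 289,
    "known diabetics": 291,
    "trauma preceding lesion": 92,
    "peripheral neuropathy": 213,
    "positive wound cultures": 215,
}
total = 324
for label, n in counts.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.1f}%")
```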
Discussion

About 15% of all diabetic patients will develop foot complications during their lifetime [5]. Diabetics who develop foot infections are usually above fifty years of age, which is in agreement with our findings. It is generally agreed that neuropathy, angiopathy and immunopathy are, to various degrees, responsible for foot complications in diabetics. Neuropathy, with the resulting loss of protective sensation and consequent minor trauma, is the primary mechanism of skin breakdown. Usually patients are unaware of any trauma due to partial or complete loss of feeling in the foot [6]. Neuropathy was present in 65.7% of our patients, and only 28.3% of them were aware of any trauma preceding their foot lesions. Peripheral arterial disease is another major contributory factor. When present, it tends to involve distal and smaller peripheral vessels. Diabetics with large vessel disease due to atherosclerosis present mainly with painful non-healing ulcers associated with a history of intermittent claudication and rest pain [7].

[Table V: Frequency of surgical procedures — table not reproduced in this copy.]

In our patients, palpation of pedal pulses was relied upon to diagnose peripheral vascular insufficiency, and we did not verify the absence of pulsation by Doppler ultrasound; that is probably why we found a high rate of absent pedal pulsation (35.1%), despite the fact that any inflammatory process accompanying infections or other lesions causes local hyperemia and vasodilatation that is evident clinically as palpable pulses in that region. Another explanation would be the presence of significant foot edema, which makes palpation difficult. Furthermore, all the information (history and examination) available in the data sheets was recorded by junior doctors, who might have lacked the experience to palpate pulses in difficult cases. Infection in diabetics is usually polymicrobial in nature. Louie et al. isolated a mean of 5.8 microbes from each specimen [8]. The mean number of bacteria isolated per patient in the present study was 2.4, which is generally lower than in other studies. This may possibly be due to inadequate sampling, delay in the transfer of swabs to the laboratory or, in some patients who were under the care of private practitioners, a course of antibiotics before admission to hospital. Staphylococci, Proteus and Bacteroides fragilis were the most common Gram-positive, Gram-negative and anaerobic organisms isolated, respectively, in this study. This is consistent with the experience of others [9].
More than 10% of patients treated surgically needed multiple operations. This was in part due to the conservative approach towards major amputations and the difficulty of accurately assessing the extent of the infectious process at the time of initial surgery. The morbidity associated with major amputations is greater in diabetics than in non-diabetics; thus, every effort should be made to adopt a conservative approach and perform the lowest level of amputation possible that permits walking without a prosthesis, even if it means a longer hospital stay [10]. The nature of the disease process itself is the primary factor behind slow recovery and prolonged hospital stay in our patients. Their refusal to agree to amputation was another contributory factor. Care for diabetic foot complications places a heavy burden on the resources of health services. The application of preventive foot care and patient education can lead to a dramatic reduction in amputation rates. Development of a team approach to the care of these patients is highly recommended. Such a team would ideally include a surgeon, physician, physiotherapist, dietitian and chiropodist. As such, care can be provided by establishing foot care clinics in large hospitals.
2019-05-12T13:27:44.695Z
2004-12-28T00:00:00.000
{ "year": 2004, "sha1": "7231249badd303c04c8727b9faaeef0c79c0c0e5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.33762/bsurg.2004.57534", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "7231249badd303c04c8727b9faaeef0c79c0c0e5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17187771
pes2o/s2orc
v3-fos-license
Caspase-Cleaved Tau Co-Localizes with Early Tangle Markers in the Human Vascular Dementia Brain

Vascular dementia (VaD) is the second most common form of dementia in the United States and is characterized as a cerebral vessel vascular disease that leads to ischemic episodes. Whereas the relationship between caspase-cleaved tau and neurofibrillary tangles (NFTs) in Alzheimer's disease (AD) has been previously described, whether caspase activation and cleavage of tau occur in VaD is presently unknown. To investigate a potential role for caspase-cleaved tau in VaD, we analyzed seven confirmed cases of VaD by immunohistochemistry utilizing a well-characterized antibody that specifically detects caspase-cleaved tau truncated at Asp421. Application of this antibody (TauC3) revealed consistent labeling within NFTs, dystrophic neurites within plaque-rich regions and corpora amylacea (CA) in the human VaD brain. Labeling of CA by the TauC3 antibody was widespread throughout the hippocampus proper, was significantly more frequent than in age-matched controls, and co-localized with ubiquitin. Staining with the TauC3 antibody co-localized with MC-1, AT8, and PHF-1 within NFTs. Quantitative analysis indicated that roughly 90% of PHF-1-labeled NFTs contained caspase-cleaved tau. In addition, we documented the presence of active caspase-3 within plaques, blood vessels and pretangle neurons, where it co-localized with TauC3. Collectively, these data support a role for the activation of caspase-3 and the proteolytic cleavage of tau in VaD, providing further support for the involvement of this family of proteases in NFT pathology.

Introduction

Vascular dementia (VaD) is the second leading cause of dementia in the USA, trailing only Alzheimer's disease (AD) and accounting for 15-20 percent of all types of dementia [1]. It has been estimated that 25-80% of all dementia cases show mixed VaD and AD pathologies, which contributes to the difficulty of diagnosing pure VaD [2]. An additional confounding factor in diagnosing VaD is the lack of widely accepted neuropathological criteria for VaD [3]. VaD is classified as a cerebral vessel vascular disease characterized by large and small infarcts, lacunes, hippocampal sclerosis, cerebral amyloid angiopathy (CAA) and white matter lesions [4]. The cognitive decline associated with VaD is believed to be the result of cerebral ischemia secondary to these vascular changes. Similar to what is found in AD, amyloid plaques, neurofibrillary pathology and cholinergic deficits have also been documented in VaD, albeit to a lower degree than in AD [5]. Behaviorally, patients with VaD show loss of executive functions as an initial symptom, while in patients diagnosed with AD, memory loss is often the earliest known symptom [6]. Additional symptoms of VaD include confusion, language deficits, restlessness and agitation, gait disturbances and depression [7]. Risk factors for VaD are predominantly cardiovascular and include hypertension [8,9], hyperlipidemia [10], atherosclerosis [11], and diabetes [12-14]. Additionally, stroke is an important risk factor for dementia [15,16], with lacunar stroke the most common stroke subtype associated with VaD [17]. Similar to AD, neurofibrillary tangles (NFTs) are a common post-mortem finding in the human VaD brain but are usually present in lower numbers than in AD [5].
In AD, NFTs are composed of hyperphosphorylated forms of tau that accumulate within the entorhinal cortex and the CA1 subfield of the hippocampus [18-20]. Besides hyperphosphorylation, post-translational modifications of tau, including proteolysis, have been shown to be important steps in the evolution of NFTs. In this regard, numerous studies now support caspase cleavage of tau as an important mechanism contributing to the evolution of NFTs [21,22]. Thus, caspase activation and the cleavage of tau after Asp421 is an early event preceding, and possibly contributing to, NFT formation [23-26]. To date, whether caspase activation and cleavage of tau occur in VaD is not known, despite the fact that ischemia is a well-known activator of apoptotic pathways and a major pathological finding in VaD [4]. Therefore, the purpose of the current study was to investigate the role of caspase-cleaved tau in post-mortem human VaD brain sections using a well-characterized antibody that detects caspase-cleaved tau truncated at Asp421 [24]. Our findings are supportive of a role for the activation of caspase-3 and cleavage of tau in VaD, providing further support for the involvement of this family of proteases in NFT pathology.

Materials and Methods

Immunohistochemistry

Autopsy brain tissue from seven neuropathologically confirmed VaD cases was studied. Case demographics are presented in Table 1. The fixed hippocampal tissue sections used in this study were provided by the Institute for Memory Impairments and Neurological Disorders at the University of California, Irvine. Approval from the Boise State University Institutional Review Board was not required owing to an exemption granted because all tissue sections were fixed and received from the University of California, Irvine. Brain tissue obtained from the University of California, Irvine was anonymized and never identified except by case number. Tissue donors or their next of kin provided informed signed consent to the Institute for Memory Impairments and Neurological Disorders for the use of their tissues in research (IRB 2014-1526). Free-floating 40 μm-thick sections were used for immunohistochemical studies as previously described [27]. For bright-field labeling, sections were washed with 0.1 M Tris-buffered saline (TBS), pH 7.4, and then pretreated with 3% hydrogen peroxide in 10% methanol to block endogenous peroxidase activity. Sections were subsequently washed in TBS with 0.1% Triton X-100 (TBS-A) and then blocked for thirty minutes in TBS-A with 3% bovine serum albumin (TBS-B). Sections were further incubated overnight at room temperature with TauC3 (mouse monoclonal, 1:100). Following two washes with TBS-A and a wash in TBS-B, sections were incubated in biotinylated anti-rabbit or anti-mouse IgG (1 hour) and then in avidin-biotin complex (1 hour) (ABC, Elite Immunoperoxidase, Vector Laboratories, Burlingame, CA, USA). The primary antibody was visualized using brown DAB substrate (Vector Laboratories). The periodic acid-Schiff (PAS) staining system was purchased from Sigma-Aldrich (St. Louis, MO) and was employed according to the manufacturer's instructions.

Immunofluorescence Microscopy

Primary antibodies utilized included the cleaved caspase-3 antibody (rabbit polyclonal, 1:50), PHF-1 (mouse monoclonal, 1:1,000), the anti-Aβ (clone 6E10) antibody (mouse, 1:400) and TauC3 (mouse monoclonal, 1:100). The TauC3 antibody was purchased from EMD Millipore (Billerica, MA), while PHF-1 was a generous gift from Dr. Peter Davies (Albert Einstein College of Medicine, Bronx, NY).
The anti-Aβ mAb 1560 (clone 6E10) was purchased from Covance (Dedham, MA). The cleaved caspase-3 (Asp175) antibody was purchased from Cell Signaling (Danvers, MA). The AT8, tau (HT7) and ubiquitin monoclonal antibodies were purchased from Pierce, ThermoFisher Scientific Inc. (Waltham, MA). With the exception of anti-Aβ mAb 1560 (see below), no antigen retrieval methods were employed. For double-label immunofluorescence co-localization studies, experiments were initiated by incubating in the first primary antibody overnight, followed by application of the ABC Elite Immunoperoxidase kit on day 2 (Vector Laboratories, Burlingame, CA, USA). In this case, instead of completing the staining with DAB substrate, we employed Alexa Fluor 488-labeled tyramide (green, Ex/Em = 495/519), purchased as part of TSA kit #12 (Life Technologies, Grand Island, NY). Following labeling with the first primary antibody, sections were washed 3X in Tris buffer, followed by incubations in Tris A (15 minutes) and Tris B (30 minutes). Sections were then incubated with the second primary antibody overnight at room temperature. On day 3, sections were incubated with secondary biotinylated-SP (long spacer) AffiniPure goat anti-mouse or anti-rabbit IgG for 1 hour (Jackson ImmunoResearch Labs, West Grove, PA). This was followed by incubation in streptavidin Alexa Fluor 555 conjugate for 1 hour (Life Technologies, Grand Island, NY). Following 3X washes in Tris buffer, sections were mounted and cover-slipped using ProLong Gold Antifade with DAPI (Life Technologies). To determine whether cross-reactivity between reagents was a factor in the double-labeling experiments, experiments were replicated with the antibodies in reverse order. To visualize beta-amyloid staining, sections were pretreated for 5 minutes in 95% formic acid. To assess apoptosis, the ApopTag peroxidase kit was employed according to the manufacturer's instructions (EMD Millipore, Billerica, MA). An Olympus BX60 microscope with fluorescence capability, equipped with the MagnaFire SP software system for photomicrography, was employed for microscopic observation and photomicrography of the DAB-labeled and fluorescent sections. The fluorescent molecules were excited with a 100-W mercury lamp. Fluorescently labeled molecules were detected using a filter set having a 460-500-nm wavelength band-pass excitation filter, a 505-nm dichroic beam splitter, and a 510-560-nm band-pass emission filter.

Confocal microscopy

For confocal immunofluorescence imaging, the primary antibodies were visualized with secondary antibodies tagged with either Alexa Fluor 488 or Alexa Fluor 555 (Invitrogen, Carlsbad, CA). Images were taken with a Zeiss LSM 510 Meta system combined with the Zeiss Axiovert Observer Z1 inverted microscope and ZEN 2009 imaging software (Carl Zeiss, Inc., Thornwood, NY). Confocal Z-stack and single-plane images were acquired with an Argon (488 nm) and a HeNe (543 nm) laser source. Z-stack images were acquired using a 20x Plan-Apochromat (NA 0.8) objective, with emission band passes of 505-550 nm for the detection of TauC3 (green channel, Alexa Fluor 488) and 550-600 nm for the detection of PHF-1 (red channel, Alexa Fluor 555). All images displayed are 2-D maximal intensity projections generated from the acquired Z-stacks.
Single-plane images were acquired with a 63x Plan-Apochromat oil-immersion objective (NA 1.4), with an emission long pass of 505 nm for the detection of the TauC3 antibody (green channel, Alexa Fluor 488) and an emission band pass of 550-600 nm for the detection of PHF-1 (red channel, Alexa Fluor 555).

Western blot analysis

Frozen tissue from either frontal cortex or cerebellum was homogenized in TPER buffer (ThermoFisher) and centrifuged (18,000 x g, 10 min). The soluble fraction was removed, and the protein concentration was determined by the BCA method (Pierce). For each sample, 3 μg of protein was separated by SDS-PAGE (TGX gels, BIO-RAD), transferred to nitrocellulose, and probed with a monoclonal antibody to caspase-cleaved tau.

Statistical analysis

To determine the percent co-localization, a quantitative analysis was performed as described previously [27] by taking overlapping 20X immunofluorescence images from three different fields in area CA1 in four separate VaD cases. Capturing was accomplished using a 2.5x photo eyepiece and a Sony high-resolution CCD video camera (XC-77). For example, to determine the percent co-localization between TauC3 and PHF-1, photographs were analyzed by counting the number of NFTs positive for TauC3 or PHF-1 alone per 20X field for each case, and the number of cells labeled with both PHF-1 and TauC3. Data represent the average number (±S.D.) of NFTs labeled with each antibody alone or with both antibodies in each 20X field (3 fields total for 4 different cases). Statistical differences in this study were determined using Student's two-tailed t-test in Microsoft Office Excel. To determine any possible correlations between the various groups, Pearson's coefficients were determined using Microsoft Office Excel.
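For illustration, the per-field counting and summary statistics described above can be reproduced in a few lines. The counts below are hypothetical stand-ins (the paper reports roughly 90% co-localization and a significant VaD-versus-control difference in CA counts); only the procedure, not the data, is taken from the text.

```python
# Sketch of the co-localization analysis: per-field counts of PHF-1-positive
# NFTs and of NFTs double-labeled with TauC3, summarized as mean +/- SD
# percent co-localization, with a two-tailed t-test and Pearson correlation.
# All counts below are hypothetical.
from statistics import mean, stdev
from scipy import stats

phf1_counts   = [14, 11, 16, 12, 15, 13, 10, 17, 12, 14, 13, 15]  # per 20X field
double_counts = [13, 10, 14, 11, 13, 12, 9, 15, 11, 13, 11, 14]   # PHF-1 + TauC3

pct = [100 * d / p for d, p in zip(double_counts, phf1_counts)]
print(f"co-localization: {mean(pct):.1f} +/- {stdev(pct):.1f} %")

# Example group comparison (e.g. CA counts per section, VaD vs control):
vad_ca  = [42, 55, 38, 61, 47, 50, 44]
ctrl_ca = [6, 9, 4, 11, 7, 5, 8]
t, p = stats.ttest_ind(vad_ca, ctrl_ca)
print(f"t = {t:.2f}, p = {p:.2g}")

r, pr = stats.pearsonr(phf1_counts, double_counts)
print(f"Pearson r = {r:.2f} (p = {pr:.2g})")
```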
Caspase-cleaved tau immunoreactive pathology

To determine whether caspase cleavage of tau can be detected in VaD, an immunohistochemical study utilizing the TauC3 antibody was performed on fixed hippocampal brain sections from seven VaD cases. Case demographics for the VaD cases used in this study are presented in Table 1. As an initial step, we screened all seven cases for TauC3 immunoreactivity using bright-field microscopy. The TauC3 antibody reacts with caspase-cleaved tau truncated at Asp421 [24]. This antibody shows no reactivity with full-length tau or other tau C-terminal truncations and is specific for NFTs and for caspase-cleaved tau within neuritic plaques and neuropil threads [28]. Representative staining is depicted in Fig 1, indicating consistent labeling of TauC3 within NFTs (Fig 1A and 1B, arrow) as well as within neuritic plaques (Fig 1B, arrowhead) of the VaD brain. To determine any possible correlation of TauC3 labeling with NFTs, we quantified the number of TauC3-positive tangles in the 6/7 VaD cases in which the Braak & Braak stage was known (Table 1). The results indicated a positive correlation between these two variables (R² = 0.070) (Fig 1C). To confirm biochemically that the TauC3 antibody can detect caspase-cleaved tau truncated at Asp421, Western blot analysis was performed. In this case, we compared two different areas, frontal cortex and cerebellum, utilizing four different VaD cases. As shown in Fig 1, a band was observed in all four cases; however, the intensity of the bands appeared stronger in frontal cortex extracts as compared to cerebellum. Because beta-amyloid is thought to be a key initiator in the activation of apoptotic pathways leading to the caspase cleavage of tau [22], we also compared two cases that were pathologically determined to have a significant beta-amyloid load (Stage A) versus two cases that had minimal beta-amyloid deposition (see Table 1). In this case, the band corresponding to caspase-cleaved tau was more robust in those VaD cases with greater beta-amyloid loads (compare lanes 1 and 2 versus 3 and 4, Fig 1D, top panel). As a control, samples were also blotted with HT7, an antibody that detects full-length tau. In this case, total tau appeared to be consistently expressed in each brain region (Fig 1D, bottom panel). In addition to the labeling of NFTs, application of the TauC3 antibody also revealed staining of numerous round, translucent structures (Fig 1E and 1F) within the dentate gyrus of the hippocampus. In this regard, strong immunolabeling with the TauC3 antibody was observed in all seven cases.

Identification of apparent corpora amylacea in VaD

Bright-field staining utilizing the TauC3 antibody consistently revealed numerous round structures with a ring-like appearance in the dentate gyrus (Fig 2A). To determine whether labeling of these structures was specific to caspase-cleaved tau, similar experiments were performed utilizing the anti-tau antibody HT7. Although this antibody labeled numerous neurons in the dentate gyrus region of VaD cases, there was a complete lack of staining within these round structures (Fig 2B). In addition, in age-matched control cases, these structures were only infrequently observed following application of the TauC3 antibody (Fig 2C). Quantitative analysis of these structures in the hippocampus revealed a statistically significant difference in their number between VaD cases and age-matched controls (Fig 2D). In an attempt to identify these structures, immunofluorescence double labeling was undertaken. Initially, double labeling was performed with the TauC3 antibody and the nuclear stain DAPI. Co-localization was not observed (Fig 2E-2G), providing evidence that the spherical structures were not nuclei. To determine whether the TauC3-labeled structures were apoptotic cells, double labeling was assessed together with terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL). As indicated in Fig 2H-2J, co-localization was not observed, providing evidence that these round structures are not apoptotic. Based on the morphological appearance of these spherical, translucent structures, we hypothesize that they represent corpora amylacea (CA).

Confirmation of TauC3 labeling within corpora amylacea

To confirm that these TauC3-positive structures were CA, we stained sections with PAS, a well-known specific marker for CA [29]. Labeling of CA with PAS was evident within the same region, the dentate gyrus, as was observed with TauC3 (arrows, Fig 3A). Additional experiments were undertaken to assess whether these structures stained positive for ubiquitin, another known marker for CA [30]. Application of an anti-ubiquitin antibody revealed an identical staining pattern to that of TauC3 (Fig 3B), and this same anti-ubiquitin antibody strongly co-localized with TauC3 in double-label immunofluorescence studies (Fig 3C and 3D). Taken together, Figs 2 and 3 supported the presence of truncated tau within CA of the human VaD brain.
Co-localization of caspase-cleaved tau within NFTs

To determine the extent of co-localization of caspase-cleaved tau within NFTs, double-labeling immunofluorescence experiments were carried out using PHF-1 as a general marker for NFTs. Confocal analysis revealed strong co-localization of PHF-1 with TauC3 (Fig 4A-4C). A quantitative analysis indicated that approximately 90% of all identified PHF-1-labeled NFTs also labeled with TauC3 (Fig 4E). Strong co-localization of PHF-1 with TauC3 was also observed within CA located in the hippocampus proper of VaD cases (Fig 4F-4J).

Single labeling of VaD cases revealed CA in close proximity to NFTs

Experiments were also performed using only PHF-1 and bright-field microscopy. As shown in Fig 5, single-label immunohistochemical experiments with PHF-1 revealed typical labeling of NFTs throughout the hippocampus (Fig 5A). In a subset of NFTs visualized at high magnification, we noticed circular structures of roughly the same size and shape as CA in close proximity to PHF-1-labeled NFTs (arrows, Fig 5B). That CA may be derived from a neuronal source and represent intracellular inclusions was supported by the presence of labeled structures of the same size and shape as CA within PHF-1-labeled neurons (arrow, Fig 5C). In addition, we found numerous PHF-1-labeled CA within plaque-rich regions in the hippocampus of the VaD brain (Fig 5D).

TauC3 co-localizes with early tangle markers in the VaD brain

Previous studies in AD have indicated that the C-terminal truncation of tau is an early event that may facilitate NFT formation [23,24]. Therefore, to examine a similar possible relationship in VaD, co-localization experiments were performed using MC-1 and AT8. MC-1 is a conformation-specific antibody that recognizes an aberrantly folded conformation of tau, one of the earliest tau pathological events [31,32]. The antibody AT8 recognizes tau phosphorylated at both serine 202 and threonine 205, which are the first residues to be hyperphosphorylated [33,34]. PHF-1, in contrast, recognizes phosphorylation at serines 396 and 404 and reacts with more mature hyperphosphorylated forms of tau found primarily within late-stage tangles [35].

[Fig 1 caption, fragment: lanes 1 and 2 correspond to VaD cases with a Stage A plaque load, whereas lanes 3 (Case 3, Table 1) and 4 (Case 2, Table 1) were designated as having a plaque load of 0. A band at 50 kDa corresponding to caspase-cleaved tau truncated at Asp421 was identified in the FCTX of all four VaD cases and in two of four cases in the CBL. The bottom panel of D depicts an identical experiment, except that transferred proteins were probed with HT7 (1:1,000), an antibody that detects total, full-length (FL) tau. (E and F): Low (E) and high (F) magnification of representative TauC3 labeling in a VaD case, illustrating staining of numerous round, translucent structures in the dentate gyrus of the hippocampus. All scale bars represent 10 μm, except for Panel E, which represents 50 μm.]

As shown in Fig 6, double-label immunofluorescence studies utilizing either MC-1 (Fig 6A-6C) or AT8 (Fig 6D-6F) revealed strong co-localization with the TauC3 antibody. To assess the relationship between caspase-cleaved tau and full-length tau pathology, fluorescent double labeling for TauC3 and the C-terminal-specific antibody Tau46 [36] was performed.
The results revealed a difference in subcellular localization between these two markers, indicating that both full-length tau (Tau46, green) and cleaved tau (TauC3, red) are present within the same NFTs (Fig 6G-6I). Because the C-terminal epitope recognized by Tau46 has been shown to be liberated by executioner caspases [23], these results confirm the specificity of the TauC3 antibody for the C-terminal cleavage site within tau. It is noteworthy that, although neither MC-1 nor AT8 labeled CA, application of the Tau46 antibody immunolabeled a subset of CA that co-localized with the TauC3 antibody.

Caspase-cleaved tau in neuropil threads within plaque-rich regions

In addition to labeling CA and NFTs, the TauC3 antibody appeared to label neuropil threads within plaque-rich regions in VaD cases (Fig 7A and 7B). To confirm the presence of caspase-cleaved tau within neuropil threads of extracellular plaques, immunofluorescence double labeling was performed with an anti-Aβ antibody (clone 6E10). As shown in Fig 7C-7E, co-localization between TauC3 and 6E10 was evident within extracellular plaques. Unlike labeling within NFTs, we did not observe consistent labeling of TauC3 within neuropil threads in plaque-rich regions across all seven VaD cases examined (data not shown). Additional double labeling with anti-Aβ and TauC3 in another representative VaD case is shown in Fig 7F-7H.

Caspase-3 activation

In a final set of experiments, we sought to determine whether active caspase-3 co-localizes with TauC3, utilizing an antibody that specifically detects the active fragment of caspase-3 generated by cleavage at aspartate 175 of the enzyme. We were unable to detect co-localization of the two antibodies within fibrillar NFTs (Fig 8A-8F). However, we were able to detect faint caspase-3 labeling that co-localized with TauC3 within neurons that appeared morphologically to represent pretangles. Pretangles are defined as containing cytoplasmic tau immunoreactivity without apparent formation of fibrillar structures [37]. Activated caspase-3 was also found in plaques and blood vessels of the VaD brain (Fig 8D-8I). It is noteworthy that labeling of pretangles with the TauC3 antibody was the exception, not the rule, and in general resulted in a much weaker immunofluorescence signal than TauC3 labeling of mature NFTs. Unlike TauC3, active caspase-3 labeling was never identified within CA (Fig 8E and 8F).

Discussion

VaD is the second leading cause of dementia in the USA and carries a worse prognosis for survival than AD [38]. Specific conditions that increase the potential for strokes or microbleeds, including hypertension, hyperlipidemia, and atherosclerosis, are important risk factors for VaD. There is currently a lack of widely accepted neuropathological criteria for VaD [3]. It has been estimated that 25-80% of all dementia cases show mixed VaD and AD pathologies, making it difficult to diagnose pure VaD [2]. Similar to what is found in AD, amyloid plaques, neurofibrillary pathology, and cholinergic deficits have also been documented in VaD, albeit to a lesser degree than in AD [5]. Although stroke is a well-known risk factor for VaD [16,17,39], whether the subsequent ischemia and potential activation of caspases occur in VaD has not been investigated.
Therefore, the purpose of the current study was to investigate the potential activation of caspases by examining caspase-cleaved tau in post-mortem human VaD brain sections using a well-characterized antibody (TauC3) that detects caspase-cleaved tau truncated at Asp421 [24].

[Fig 4 caption, fragment: a quantitative comparison was made between TauC3 alone and TauC3 + PHF-1; the data indicated that roughly 90% of all labeled NFTs co-localized with both antibodies. (F and G): Low- (F) and high-field (arrows, G) double-immunofluorescence overlap images of corpora amylacea within the dentate gyrus of a representative VaD case showing co-localization of TauC3 (green) and PHF-1 (red). (H-J): High-magnification confocal images representing labeling of corpora amylacea with TauC3 (H), PHF-1 (I), and the merged image (J). Scale bars represent 10 μm in Panels D and G and 50 μm in Panel F.]

Screening seven pathologically confirmed cases of pure VaD (Table 1) with the TauC3 antibody revealed three consistent staining features: 1) labeling of TauC3 within NFTs; 2) identification of caspase-cleaved tau within apparent corpora amylacea; and 3) labeling of neuritic plaques. NFTs are a common post-mortem finding in the human VaD brain but are usually present to a lower degree than in AD [5]. In AD, NFTs composed of hyperphosphorylated forms of tau accumulate within the entorhinal cortex and the CA1 subfield of the hippocampus [18-20]. In addition to hyperphosphorylation, post-translational modifications of tau, including proteolysis, have been shown to be an important step in the formation of NFTs. In this regard, numerous studies now support caspase cleavage of tau as an important mechanism contributing to the evolution of NFTs [21,22]. Thus, caspase activation and the cleavage of tau after Asp421 is an early event preceding, and possibly contributing to, NFT formation [23-26]. Our findings support a role for caspase cleavage of tau in VaD, providing further evidence for the involvement of this family of proteases in NFT pathology. To corroborate these findings, we performed double-label experiments utilizing an antibody that detects active caspase-3. Although labeling with this antibody was observed in plaques, blood vessels, and pretangle neurons, we did not observe staining within fibrillar NFTs that labeled with PHF-1. These findings suggest that caspase-3 activation precedes the caspase cleavage of tau and that the enzyme is no longer active in mature tangles, possibly due to turnover of the enzyme, which is present in nominal concentrations within neurons. Our findings in VaD are aligned with what has been observed in AD, namely that caspase activation and cleavage of tau is an early event that contributes to the evolution of NFTs [23-26]. One mechanism that may activate apoptotic pathways in VaD is the presence of beta-amyloid.

[Fig 8 caption, fragment: representative immunofluorescence double labeling within the human VaD brain utilizing an antibody to active caspase-3 (green, Panels A and D) and TauC3 (red, Panels B and E), with the overlap images shown in Panels C and F. Labeling of active caspase-3 was evident within pretangles that co-localized with TauC3 (arrows, C). Co-localization of the two antibodies was also evident within plaques, although TauC3 gave a much weaker fluorescence signal (F). In fibrillar NFTs, only TauC3 was present (arrowhead, C, and arrow, F).]
[Fig 8 caption, continued: (G-I): representative immunofluorescence double labeling with active caspase-3 (green, G) and the nuclear stain DAPI (blue, H), indicating labeling within blood vessels of the VaD brain (I). Note the appearance of the cuboid, elongated nuclei that typically define endothelial cell nuclei (arrows, H). All scale bars represent 10 μm.]

Previous studies have supported a role for beta-amyloid in initiating the activation of apoptotic pathways leading to caspase-3 activation and the C-terminal cleavage of tau [22-24]. In the present study, we demonstrated by Western blot analysis that caspase-cleaved tau was significantly greater in VaD cases in which beta-amyloid deposition was confirmed post-mortem. These data support the idea that caspase-cleaved tau links beta-amyloid deposition to NFT formation, as has previously been shown in AD [22-24]. Interestingly, our Western blot analysis also revealed the presence of caspase-cleaved tau in the cerebellum. This result may not be all that surprising, considering that cerebellar dysfunction has been postulated to play an important role in VaD [40]. In a previous immunohistochemical study, we demonstrated the presence of caspase-cleaved tau in the cerebellum of the Alzheimer's disease brain despite the lack of beta-amyloid plaques in this region [41]. In the present study, screening of the cerebellum for beta-amyloid by immunohistochemistry did not reveal any deposition (data not shown). Therefore, the presence of caspase-cleaved tau in the cerebellum of both the Alzheimer's and the vascular dementia brain does not appear to be directly related to the presence of beta-amyloid in this brain region.

In addition to NFTs, the TauC3 antibody consistently labeled numerous translucent round structures in the dentate gyrus of the hippocampus proper. The lack of co-localization of TauC3 with terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL), as well as with DAPI, within these structures argues against these structures being apoptotic cells or nuclei. Based on their morphological appearance, as well as positive labeling with PAS and ubiquitin antibodies, we conclude that these structures are corpora amylacea (CA). CA are spherical, laminated, basophilic to eosinophilic structures located in subpial, periventricular, and perivascular regions [42,43]. It is noteworthy that the CA identified in the current study were not found in these regions but instead were prominent in the granule cell layer of the hippocampus. CA are inclusions that accumulate in the central nervous system and are associated with normal aging as well as neurodegeneration [42]. Reports have shown that approximately 4% of the total weight of CA is composed of protein and that ubiquitin may be one of the primary protein components [30]. The presence of ubiquitin suggests that the accumulation of altered proteins may be involved in the pathogenesis of CA [30]. In addition to ubiquitin, studies have found CA to be reactive with anti-tau antibodies and to be present in larger numbers in neurodegenerative disease brains than in the normally aging brain [44-46]. Our results revealed the presence of caspase-cleaved tau within CA in the dentate gyrus, and the number of labeled CA was significantly higher than that observed in age-matched controls. Interestingly, DAB staining of VaD cases with PHF-1 revealed immunoreactivity in apparent CA that were localized near or within labeled neurons (Fig 5).
Our results suggest that CA originate as intracellular neuronal inclusions, a view supported by previous studies [44,47]. We hypothesize that tau may be modified by post-translational processes that include phosphorylation and proteolysis and then incorporated into these spherical structures. It has been suggested that CA are involved in the sequestration of potentially hazardous products of cellular metabolism, including polymerized proteins [42,48]. Our data showing the presence of caspase-cleaved tau, together with positive staining for PHF-1, support this notion and suggest that CA may play a protective role similar to that ascribed to Hirano bodies [49].

In conclusion, we investigated a potential role for caspase-cleaved tau in VaD utilizing a well-characterized antibody that specifically detects caspase-cleaved tau truncated at Asp421. We found that application of TauC3 revealed consistent labeling within NFTs, neuritic plaques, and CA in the human VaD brain. The presence of caspase-cleaved tau within CA regionally localized within the dentate gyrus is a novel finding. The localization of CA within the hippocampus proper, and not in perivascular regions, suggests that they may be involved in the disease pathogenesis. However, whether the presence of CA in VaD is a contributing factor or simply a product of the disease process is not known and will require further investigation. TauC3 staining co-localized with PHF-1 within the majority of NFTs, and our data suggest that caspase activation precedes tau cleavage in NFTs. Collectively, these data support a role for the activation of caspase-3 and the proteolytic cleavage of tau in VaD, providing further support for the involvement of this family of proteases in NFT pathology.

Supporting Information

S1 Fig. TauC3 co-localizes with early tangle markers in the VaD brain. (DOCX)
Factors associated with work ability index (WAI) among intensive care units' (ICUs') nurses.

OBJECTIVES: Work ability is a crucial occupational health issue in health care settings, where a high physical and psychosocial work capacity is required and a high risk of disabling injuries and illnesses is predictable. This study aims to examine the association between the work ability index (WAI) and individual characteristics, workload, fatigue, and diseases among intensive care units' (ICUs') nurses.

METHODS: The study sample included 214 nurses selected by a random sampling method from a target population of 321 registered nurses working in eight ICUs. Multiple linear regression analysis was used to test the association between WAI scores and each of the independent variables.

RESULTS: Multivariate analysis revealed a strong, negative association between WAI scores and diseases (B=-5.82, 95% CI=-7.16, -4.48, P<0.001). Among the studied individual characteristics, body mass index (BMI) was significantly and inversely associated with WAI scores. A significant negative association was also found between WAI scores and dimensions of the MFI-20, namely general fatigue (B=-0.31, 95% CI=-0.53, -0.09, P=0.005) and physical fatigue (B=-0.44, 95% CI=-0.65, -0.23, P<0.001). Among the workload dimensions, frustration (B=-0.04, 95% CI=-0.07, -0.02, P<0.001) and temporal demand (B=-0.04, 95% CI=-0.08, -0.0001, P=0.04) showed negative and significant associations with WAI scores, while performance showed a positive and significant association (B=0.04, 95% CI=0.01, 0.07, P=0.005).

CONCLUSIONS: Based on the study findings, promoting work ability among ICUs' nurses requires the development of health care programs that establish a healthy work environment, characterized by a well-structured preventive attitude toward controlling diseases and a well-designed organizational framework for increasing performance and motivation, reducing fatigue, and reducing workload.

Introduction

Work ability is defined as how physically and mentally able a worker is to cope with the mental and physical demands of his/her work 1). Work ability has been widely assessed by the work ability index (WAI), an instrument developed by the Finnish Institute of Occupational Health (FIOH) 2). Studies have shown that the WAI is a helpful tool for predicting long-term sickness absence 3,4) and early retirement 3,5), and for identifying prognostic factors for mortality and work disability 6,7). In recent years, there has been growing interest in studies of work ability in health care settings. Health care workers (HCWs), especially nurses working in intensive care units (ICUs), often operate in job settings where a high physical and psychosocial work capacity is required and a high risk of disabling injuries and illnesses is predictable. ICUs' nurses often perform heavy physical activities, such as lifting and transferring patients, working in poor postures, and standing for long hours, which can lead to the development of a wide range of disabling injuries, such as musculoskeletal disorders (MSDs) 8). Nursing also entails a risk of long-term exposure to biological, chemical, and toxic substances that trigger disabling chronic diseases, such as occupational asthma, allergy, liver diseases, skin dermatitis, and kidney ailments 9).
Furthermore, from the viewpoint of the mental dimension of work, it is inevitable for ICUs' nurses to perform complex tasks in critical situations, such as confronting unpredictable events, making decisions under time pressure, and dealing with aggressive relatives 10). Based on the stress-strain concept, these situations impose a high mental load on nurses and bring about excessive job strain in the long run. New studies have suggested that fatigue is one of the immediate consequences of job strain, particularly in occupations in which job tasks require intense physical and mental efforts simultaneously 11,12), such as those found in health care settings. Job strain and fatigue have been recognized as important risk factors for impaired job performance and work ability 12-14). These conditions, along with other organizational factors, such as shift work and long, irregular working hours, have exposed nurses to a high risk of health-attenuating determinants and have increasingly affected their work ability. A previous study on the work ability of health care professionals found a lower level of work ability among nurses compared with other medical staff 15). This situation may be even more critical among ICUs' nurses, who may experience higher levels of workload and health-related stressors. A literature review shows that little information is available on work ability and its associated factors in particular groups of health care staff, such as ICUs' nurses. Nurses constitute a majority of the HCW force in Iran. Although, in recent years, the establishment of occupational safety and health services within the Iranian hospital health system has led to better management of health-related stressors and increased protection of HCWs from workplace safety and health hazards 16), factors related to work ability among nurses still remain widely unknown. This study aims to determine the association between the WAI and individual characteristics, workload, fatigue, and diseases among nurses working in ICUs of the hospitals affiliated to Shiraz University of Medical Sciences (SUMS).

Materials and Methods

This is a descriptive, cross-sectional survey.

Subjects and study design

The study subjects included nurses working in ICUs of the hospitals affiliated to SUMS. Using a random sampling method, 250 individuals were selected from a target population of 321 registered nurses working in 8 ICUs. The required data were obtained through questionnaires. Data collection was performed during the nurses' working hours, and the head nurses helped the researchers inform participants about the study purpose and distribute and collect the questionnaires. A total of 222 questionnaires were returned, of which 8 were incomplete and excluded from the analyses. Therefore, the study was performed among 214 nurses, corresponding to 85.60% of the selected sample. The mean age of participants was 28.88 years (SD=4.10; range 22-39). All nurses were asked to provide written consent prior to starting the study.
Measurement of variables

Work ability

In this study, work ability was measured by the WAI questionnaire 2), which is calculated by summing the points of 7 items: current work ability compared with the lifetime best (0-10 points), subjective work ability with regard to the physical and mental demands of work (2-10 points), current number of diseases diagnosed by a physician (1-7 points), subjective estimate of work impairment due to diseases (1-6 points), sickness absenteeism during the past year (1-5 points), personal prognosis of work ability 2 years from now (1, 4, or 7 points), and mental resources (1-4 points). The index score ranges from 7 to 49 points, and the scores are categorized as poor, moderate, good, or excellent. In the original version, the reference limits used to categorize the WAI into 4 groups (poor: 7-27 points; moderate: 28-36 points; good: 37-43 points; excellent: 44-49 points) were based on the distribution of scores for workers aged 45 to 58 years. Since the nurses in this study had a mean age of 28.88 years, to prevent an overestimation of work ability, three cut-off points based on the distribution of scores (the 15th percentile, median, and 85th percentile) were used to separate the categories, in accordance with previous studies conducted by Kujala et al. 17,18). The validity and reliability of the Persian version of the WAI have been explored in a previous study among Iranian nurses, indicating satisfactory psychometric properties of the questionnaire 19).

Workload

The NASA task-load index (NASA-TLX) was used to examine workload. NASA-TLX is one of the most sensitive and widely applicable instruments for assessing subjective workload 20,21). The index consists of 6 sub-scales with 21 gradations each: mental demand, physical demand, temporal demand, performance, effort, and frustration. The first 3 sub-scales relate to demands imposed on an individual, and the last 3 relate to the interaction between an individual and his/her task. A score ranging from 0 to 100 is obtained on each scale. The overall workload (OW) score is calculated as the weighted average of ratings on the 6 sub-scales. To simplify, it has been suggested to eliminate the weighting process altogether, or to weight the sub-scales and then analyze them individually 20). Many researchers have eliminated the weighting procedure and instead use the raw test scores 22). Therefore, in this study, the mean of the raw test scores (the sum of the scores of all 6 sub-scales divided by 6) was considered as OW. The face validity and reliability of the Persian version of NASA-TLX among ICUs' nurses were confirmed by Mohammadi et al. 23).

Fatigue

The Multidimensional Fatigue Inventory (MFI) was used to assess fatigue. The MFI consists of 20 items grouped into 5 dimensions: general fatigue, physical fatigue, mental fatigue, reduced activity, and reduced motivation. Each dimension consists of 4 items scored on a 5-point scale. The possible range of the total score for each dimension is 4 to 20; higher scores indicate higher levels of fatigue. The MFI has recently been translated into Persian and validated 24).

Diseases

Information on the prevalence and type of diseases was obtained from the nurses' responses to item 3 of the WAI questionnaire, the number of diseases diagnosed by a physician.
The third item of the WAI questionnaire consists of a detailed list of diseases, as follows:
Trauma/injury from accident
Musculoskeletal diseases (e.g., chronic disorders of the musculoskeletal system)
Cardiovascular disease (e.g., hypertension, coronary heart disease, and myocardial infarction)
Respiratory disease (e.g., chronic bronchitis and sinusitis, bronchial asthma, and emphysema)
Mental disorder (e.g., slight/severe mental diseases, such as depression, tension, anxiety, and insomnia)
Neurological and sensory diseases (e.g., problems or injuries related to hearing, visual diseases, and neurological diseases, such as stroke, neuralgia, and migraine)
Digestive disease (e.g., gall bladder stones, liver or pancreatic disease, and gastric or duodenal ulcer)
Genitourinary disease (e.g., urinary tract infection, kidney disease, and genital disease)
Skin diseases (e.g., allergic rash/eczema and other skin diseases)
Tumors (malignant or benign)
Endocrine and metabolic diseases (e.g., obesity, diabetes, goiter, or other thyroid diseases)
Blood diseases (e.g., anemia and other blood disorders)
Birth defects

Statistical analysis

Data were analyzed using SPSS software (version 21). The main study variables included both quantitative measures (BMI, WAI, general fatigue, physical fatigue, mental fatigue, reduced activity, reduced motivation, mental demand, physical demand, temporal demand, performance, effort, frustration, and OW) and qualitative measures (gender, education, marital status, physical exercise, type of diseases, work experience, and smoking). Univariate analysis was used as the primary analysis strategy to determine the unadjusted associations between the study variables and the WAI. Multivariate linear regression was used to measure the adjusted associations between the study variables and the WAI. All variables (MFI-20, NASA-TLX, and diseases) were entered into 3 different models adjusted for BMI, age, and job experience. The modeling procedure was started after collinearity between the independent variables was assessed using the variance inflation factor (VIF), with a cut-off point of 10.

Results

The mean WAI score was 39.80 (SD=5; range 24-49), and based on the distribution of the scores, the 15th percentile was 35 points, the median 40 points, and the 85th percentile 45 points. Therefore, 35 points was selected as the cut-off point between poor and moderate work ability, 39 points as that between moderate and good work ability, and 44 points as that between good and excellent work ability, so that WAI scores of 7-35, 36-39, 40-44, and 45-49 points were considered to be in the poor, moderate, good, and excellent categories, respectively. After this categorization, 17.8% of the studied population was in the poor, 25.7% in the moderate, 37.4% in the good, and 19.2% in the excellent category. The mean WAI score was not significantly different between the compared groups. In this study, more than one quarter of the participants reported a chronic disease diagnosed by a physician. Musculoskeletal problems, digestive disease, and skin disease were the most prevalent diseases (Table 2). Based on the results in Table 2, 57.14% of the participants with trauma were at the poor level of WAI and 14.29% were at the moderate level. Those with musculoskeletal disease (67.76%), mental disorder (62.50%), digestive disease (52.94%), skin disease (60%), and genitourinary disease (87.50%) were at the poor level of WAI.
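To make the workload scoring and collinearity screening described above concrete, here is a minimal Python sketch using pandas and statsmodels; it is an illustration only (the study itself used SPSS), and all simulated values are hypothetical placeholders. It shows the raw NASA-TLX overall workload (OW) as the mean of the six sub-scales, the percentile-based WAI categorization, and a VIF screen of the kind that led OW to be excluded from the final model because of its collinearity with the sub-scales.

import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 214  # sample size in this study

# Hypothetical data: six NASA-TLX sub-scales (0-100 each) and WAI totals (7-49)
tlx = pd.DataFrame(rng.uniform(0, 100, size=(n, 6)),
                   columns=["mental", "physical", "temporal",
                            "performance", "effort", "frustration"])
tlx["OW"] = tlx.iloc[:, :6].mean(axis=1)  # raw overall workload score
wai = pd.Series(rng.normal(39.8, 5.0, n).clip(7, 49))

# Percentile-based cut-offs (15th percentile, median, 85th percentile)
p15, p50, p85 = np.percentile(wai, [15, 50, 85])
category = pd.cut(wai, bins=[7, p15, p50, p85, 49], include_lowest=True,
                  labels=["poor", "moderate", "good", "excellent"])
print(category.value_counts(normalize=True).round(3))

# VIF screen with a cut-off of 10: with OW included, each column is an exact
# linear combination of the others, so the VIFs blow up; dropping OW restores
# values near 1, which is why OW was left out of the final model
def vifs(df):
    X = df.assign(const=1.0)
    return {c: round(variance_inflation_factor(X.values, i), 1)
            for i, c in enumerate(X.columns[:-1])}

print(vifs(tlx))                     # inflated: OW collinear with sub-scales
print(vifs(tlx.drop(columns="OW")))  # finite VIFs for the six sub-scales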
Furthermore, those with a history of trauma (71.43%), musculoskeletal disease (20.00%), and digestive disease (10.59%) were older than 30 years. According to the WAI scores, the mean WAI score of nurses with diseases diagnosed by a physician was lower than that of those without diseases, a poor work ability level (WAI=35.42) as compared with a good work ability level (WAI=41.47). The results of the univariate analysis are presented in Table 3. Multivariate analysis of the data was performed using linear regression on WAI scores (Table 4). Among the studied individual characteristics, BMI was significantly and inversely associated with WAI scores. On the other hand, age and job experience showed no significant association with WAI scores. Some dimensions of the MFI-20, namely general fatigue (B=-0.31, 95% CI=-0.53, -0.09, P=0.005) and physical fatigue (B=-0.44, 95% CI=-0.65, -0.23, P<0.001), showed negative and significant associations with WAI scores. The associations between the NASA-TLX sub-scales and WAI scores are also shown in Table 4. In this study, due to collinearity between OW and the NASA-TLX sub-scales, OW was not entered into the final model. Frustration (B=-0.04, 95% CI=-0.07, -0.02, P<0.001) and temporal demand (B=-0.04, 95% CI=-0.08, -0.0001, P=0.04) showed negative and significant associations with WAI scores, while performance showed a positive and significant association (B=0.04, 95% CI=0.01, 0.07, P=0.005).

Discussion

This cross-sectional study examined the association between a wide range of demographic and clinical characteristics and work ability among ICUs' nurses. A low mean WAI score was found among ICUs' nurses, especially among those with a history of disease. The WAI was influenced by individual characteristics, diseases, fatigue, and workload. The findings of this study showed a lower mean WAI score for ICUs' nurses compared with values reported in other studies for general hospitals' nurses. In this study, because the studied population comprised young adults (mean age 28.88 years, SD=4.10, range 22-39), WAI categorization was conducted using three cut-off points (15th percentile, median, and 85th percentile), based on the classification suggested by Kujala et al. for young employees 17,18). The mean WAI score for ICUs' nurses in this study was in the moderate category (39.80 points; SD=5; range 24-49), which, compared with the findings of previous studies conducted among general hospitals' nurses, and especially considering the mean age of the studied samples, appears to reflect an unsatisfactory level of work ability. In a recent study 12) conducted among 272 nurses, the mean WAI score was 38.1 points (SD=5.7) for nursing technicians and assistants with a mean age of 41.7 years (SD=9.3; range 23-65 years), who were more than 10 years older than the nurses in this study, and that score was in the good category. Likewise, the mean WAI score was 38.6 (SD=6.2) among 1,194 Brazilian nurses and nursing aides with a mean age of 40.3 (SD=13.1) years 25), and 38.3 (SD=6.1) among 1,212 Croatian nurses with a mean age of 42 (range 32-47) years 26); both were in the good category of the WAI. The distribution of WAI scores revealed that 17.8% of the nurses were in the poor (7-35 points) and 25.7% in the moderate (36-39 points) work ability category (in total, 43.5% of nurses were in the poor to moderate WAI category).
In line with this finding, the distribution of poor and moderate WAI scores in a sample of 236 Iranian nurses was 11% and 36.9%, respectively (in total, 47.9% of nurses were in the poor to moderate WAI category) 19). However, the results of the NEXT study 27), the most extensive international collection of WAI data, covering 22,355 registered nurses from 10 European countries, showed that the distribution of poor (7-27 points) and moderate (28-36 points) WAI scores was 3.5% and 19.5%, respectively (in total, 23% of nurses were in the poor to moderate WAI category), almost half of the proportion obtained in this study. When the distribution for each country was considered, however, only in Poland did nurses show poor to moderate WAI scores at a rate (42%) similar to the results of this study. With respect to the obtained mean WAI score and the high proportion of its poor to moderate categories, a high risk of disability and early retirement among the studied nurses can be predicted in the near future. Hence, planning and implementation of appropriate intervention measures to help improve work ability are recommended. The results of the univariate analysis revealed statistically significant inverse associations of age, job experience, and BMI with WAI scores. Age and job experience have also been identified in previous studies 28-30) as significant determinants of the WAI. It has been reported that aging is associated with a reduction in functional capacity, work ability, and employability 31). In longitudinal surveys, BMI showed a U-shaped association with mortality and low WAI scores 32). Overweight and obesity are associated with increased risks of a wide range of diseases, such as cardiovascular diseases, mental disorders, and musculoskeletal problems, whose final outcome is disability and early retirement 32). The findings of this study revealed a strong, negative association between WAI scores and diseases. Chronic diseases have been identified as one of the major determinants of disability and work-related absence 33,34). In a recent study among 11,462 participants of the Survey of Health, Ageing and Retirement in Europe (SHARE), Alavinia and Burdorf found that, independent of self-perceived poor health, chronic diseases were strongly related to the risk of unemployment and labor force exit 35). In this study, more than one quarter of the participants reported a chronic disease, of which musculoskeletal problems, digestive diseases, and skin diseases were the most prevalent. These findings are in agreement with those of the NEXT study 27). Tuomi et al. have identified musculoskeletal problems as a disease with a negative effect on work ability 36). The literature shows that working in ICUs imposes a high degree of biomechanical and ergonomic risk that can lead to the development of chronic MSDs/discomfort 8). Studies on MSDs in health care settings have revealed that MSD symptoms seem to be experienced much more by ICU nurses 8,37). The prevalence of musculoskeletal problems in a sample of 201 Turkish ICU nurses 38) was 19.9% (n=40), which is higher than that reported in this study with a sample of 214 nurses. Kee and Seo 37) attributed the higher prevalence of MSDs among nurses in ICUs as well as surgical wards to activities related to patient handling and transfer, such as moving, lifting, and repositioning patients.
According to the results, work ability was influenced by general fatigue (B=-0.31, 95% CI=-0.53, -0.09, P=0.005) and physical fatigue (B=-0.44, 95% CI=-0.65, -0.23, P<0.001). This is consistent with the findings of a recent study by Vasconcelos et al. 12), which reported that perceived fatigue was associated with inadequate work ability among Brazilian nurses. Fatigue is a gradual and cumulative process that reflects a decrement in vigilance, motivation, and the ability of individuals to perform a particular task 39). Furthermore, fatigue has been associated with a variety of physical illness and mental health measures 40), which may affect work ability. According to the study findings (Table 4), among the NASA-TLX sub-scales, temporal demand (B=-0.04, 95% CI=-0.08, -0.0001, P=0.04) and frustration (B=-0.04, 95% CI=-0.07, -0.02, P<0.001) showed inverse and significant associations with WAI scores, while performance showed a direct and significant association (B=0.04, 95% CI=0.01, 0.07, P=0.005). Temporal demand and frustration have been identified as significant risk factors for the occurrence of MSDs 41), which account for the main cause of permanent work disability (PWD) 42). Tuomi et al. reported that the occurrence of poor work ability may be manifold when the presence of a disease, a high workload, and a high level of stress symptoms are combined 36). They concluded that several workload factors, such as heavy physical work, hot and cold work environments, lack of freedom, few possibilities to develop, and role conflicts at work, predict poor work ability or disability. Considering these findings from Tuomi et al. and the associations found in this study between WAI scores and diseases, fatigue, and workload, it is suggested that all hospitals equipped with ICUs develop a healthy work environment, characterized by a well-structured preventive attitude toward controlling diseases and a well-designed organizational framework for increasing the level of performance and motivation, reducing the level of fatigue, and reducing the workload.

Study limitations

The cross-sectional study design, the subjective nature of the collected data, the small sample size, and the unequal distribution of male and female nurses should be considered when using the findings of this study.

Conclusion

There is a lack of studies on the WAI among ICUs' nurses. Given the sensitivity and importance of the work in ICUs, it is necessary for nurses working in such units to have work ability and work capacity corresponding to their job demands. The results of this study showed that ICUs' nurses had a lower mean WAI score than values reported for general hospitals' nurses. Given the moderate mean WAI score and the high proportion of the poor to moderate categories, it is recommended to develop appropriate intervention measures to improve nurses' work ability. The high prevalence of musculoskeletal problems underlines a crucial need for identifying and reducing ergonomic risk factors, as well as for redesigning jobs in such units. Based on the findings of this study, individual characteristics, disease, fatigue, and workload were the most important factors associated with work ability.
Hence, the development of health care programs aimed at setting up a healthy work environment, characterized by a well-structured preventive attitude toward promoting health and controlling diseases and a well-designed organizational framework for increasing the level of performance and motivation, reducing the level of fatigue, and reducing the workload in all hospitals equipped with ICUs, is necessary to promote nurses' work ability.
Politics of attributing extreme events and disasters to climate change

Climate change certainly shapes weather events. However, describing climate and weather as the cause of disasters can be misleading, since disasters are caused by pre-existing fragilities and inequalities on the ground. Analytic frames that attribute disaster to climate can divert attention from these place-based vulnerabilities and their socio-political causes. Thus, while politicians may want to blame crises on climate change, members of the public may prefer to hold government accountable for inadequate investments in flood or drought prevention and precarious living conditions. To be both strategic and moral, framing choices must therefore be sensitive to context-dependent political meanings and particularities, and to how the values implicit within analytic frames about the causes of disasters shape policy responses. Such sensitivity requires multicausal analysis of weather-linked disasters to illuminate a broader range of means to reduce the damages associated with climate change and weather extremes. Through examples from around the world, and especially Brazil, we discuss how and why climate-centric disaster framing can erase from view, and thus from policy agendas, the very socio-economic and political factors that most centrally cause vulnerability and suffering in weather extremes and disasters. We also offer a theoretical discussion of why attribution is not neutral: analytic frameworks always embed choices about factors that matter, and thus are inherently normative and consequential for understandings of responsibility and action.

1 | INTRODUCTION: ATTRIBUTING, AND CONFLATING, WEATHER EXTREMES AND ENSUING CRISES

Powerful science leaders hope that identification of the role of climate change in extreme weather events will "spur more immediate action" to mitigate climate change and to avert the damages associated with such events (McNutt, 2019). Scientists thus have an understandable "tremendous desire" to use "attribution science" to show the extent to which particular weather events, such as heat waves, storms, droughts, and floods, are caused by climate change (Trenberth et al., 2015, p. 725). Encouraged by improved scientific capacity to discern the role of climate change in individual extreme events, the next step often taken to inform journalists and the public is to also attribute the damages that follow these climate events to the events and their anthropogenic components. The specious assumption is that better, more precise attribution of extreme events to climate change can also be used to attribute the damages that follow to climate change, as pointed out by Hulme et al. (2011). We review literature and evidence that illustrate why climate-centric framings of disasters can be misleading and problematic, even from a policy point of view. We urge caution to avoid conflating the causes of extreme weather events with the causes of associated crises. Even where science can attribute such events to human emissions of greenhouse gases with some rigor, the damages that follow are centrally a function of vulnerabilities on the ground. Our narrative review thus argues for multi-causal analysis of weather-linked disasters, as such analysis illuminates a broader range of means to reduce the damages associated with climate change and extreme weather.
To be strategic, framing choices must be sensitive to context-dependent political meanings and particularities, and to the values implicit within analytic frames about the causes of disasters and how they shape policy responses. Through examples from around the world, and especially Brazil, we show how climate-centric disaster framing can contradict the experiences of those who suffer disasters, because it erases from view, and thus from policy agendas, the very socio-economic and political factors that most centrally cause their vulnerability and suffering. In a final section, we offer a theoretical discussion of why attribution is not neutral, as analytic frameworks always embed choices about factors that matter, and thus are inherently normative and consequential for understandings of responsibility and action. At the level of policy, the tension between climate-centric framings of disasters and attributions that foreground political factors, not least poverty and socioeconomic inequality, is a function of the current climate regime's focus on greenhouse gas reductions (climate mitigation) over the reduction of the deeper, social causes of both the pollution and the vulnerability.

2 | PRESSURES IN FAVOR OF CLIMATE-CENTRIC DISASTER FRAMING

The desire to persuade the public of the dangers of climate change via attributions of climate events pressures scientists and the media alike to attribute extreme climate events (and associated crises) to climate change. Dedicated to comprehensively monitoring, analyzing, and correcting climate skepticism and related misinformation circulating in U.S. media and society, the progressive research and information center Media Matters for America regularly scolds U.S. media outlets for failing to mention that climate change is driving the conditions that create a "new normal" of frequent crises, for example in the form of destructive wildfires (Robbins, 2015). Similarly, leading climatology communications advisors associated with the World Meteorological Organization (WMO) invoke examples from around the world to criticize media outlets for "far too often" failing to seize on a "clear opportunity" to call attention to the climate as cause (Hassol et al., 2016). They coach experts to begin communications about such events by clearly defining climate change as cause, "[r]ather than starting with caveats, uncertainties, and what we cannot say," as scientists often do (ibid.). This climate-centric attribution communications strategy is further backed by a large body of literature that explicitly understands extreme weather events and related crises as valuable opportunities to raise public attention and drive discussion about climate change (e.g., Albright & Crow, 2019; Davidson et al., 2019; among others reviewed in Lahsen et al., 2020). Thus, both scientific and popular discourses can end up framing disasters that follow extreme weather as if they were the result of stressors "from the sky," rather than outcomes of pre-existing vulnerabilities on the ground (Foote, 2016; Friedman, 2016; Janković & Schultz, 2017; Lustgarten, 2020; Rigaud et al., 2018). These attributions divert attention from other important, and treatable, causes. Such "climate reductionism" (Hulme, 2011), the attribution of crises to climate alone, also has implications for social and political understandings of potential responses, and for responsibility (Ribot, 2019). Decision makers interviewed in a study of U.S.
stakeholder perceptions of scientific attributions of California's recent drought were, without exception, doubtful that scientific attributions of extreme events to climate change would improve policy decisions, at least at the level of adaptation planning (Osaka & Bellamy, 2020, p. 10). Nevertheless, journalists interviewed as part of the same study expressed an increasing inclination in recent years to attribute instances of extreme events to climate change in their reporting, responding to attribution science and to increased acceptance of this framing. Some journalists reported covering attribution science in response to perceived "pressure" from green groups or from the general public, who were "asking the question" about the connection between climate change and events like the drought (Osaka & Bellamy, 2020, pp. 6-7). Mindful of the importance of proper attribution, the study's authors conclude that if extreme event attribution is to be used as a tool for public communication, further research is needed into the effects of pressures and framing choices on publics' climate perceptions and beliefs (Osaka & Bellamy, 2020, p. 10).

3 | MULTIPLE CAUSES OF CRISIS

It is well established that disasters are caused by many factors, even when weather plays a role (Blaikie et al., 1994; Davis, 2002; Drèze & Sen, 1989; O'Brien et al., 2007; Ribot, 2014; Sen, 1982; Watts & Bohle, 1993). Similarly, it is well established that climate change is not a primary reason that migrants leave their homes, whether in Central America, the Sahel, Syria, or elsewhere (Boas, 2015; Mayer et al., 2013; Ribot et al., 2020). So, despite extreme weather, these outcomes cannot be attributed to weather alone, if at all. Security on the ground, that is, the conditions and policies in place, mediates the damages that follow climate events. Further, these conditions have causes, which must be understood if we are to improve the prevention of crises. The effects of the anthropogenic elements of climate thus remain contingent on conditions on the ground and the chains of causality that produce them (Blaikie, 1985). As an example, the complex causes of dangerous migration across the Sahara are illustrated by Figure 1, which shows many of the causal chains that lead to what has wrongly been called a "climate migration" (Ribot et al., 2020). Climate events and trends are, however, inseparable from the many other interacting causes of departure. People understand that the damages they sustain are due to their pre-existing vulnerabilities. Indeed, there is no crisis without vulnerability (see Wisner et al., 2004). An extreme event may cause no damage in a well-prepared community, while a vulnerable community may suffer damages that scale, or even multiply, with the force of the hazard. Vulnerability plays an empirical causal role in the losses and damages. A vulnerable community may attribute the damages to its vulnerabilities even if the triggering weather event carries an evident climate change signature. Affected populations rightly perceive cause within their local conditions (Ribot et al., 2020). It is one thing to link weather events such as heat waves, droughts, storms, and floods to anthropogenic climate change. It is a separate analytical step to attribute the associated crises, losses, and damages to these climate events. The two kinds of attribution are often linked by the assumption that better, more precise attribution of extreme events to climate change can be commutatively extended to the damages that follow.
FIGURE 1. Putting climate in place among causes of migration from Senegal (Ribot et al., 2020).

Regardless of the magnitude of a climate event or the degree to which it is anthropogenic, however, the damages that follow depend on conditions in place (Sherbinin, 2020). Vulnerabilities on the ground must be analyzed and explained before attributions of damage can be made (Blaikie et al., 1994; Sen, 1982; Watts, 1983; Wisner et al., 2004). The role of weather can never be separated from pre-existing precarities (Ribot, 2014). Failing to capture the place-based social causes of observed or projected damage, the climate-centric narrative is not likely to resonate with lived realities. It may ring especially false to those who live with displacement or who know socio-economic marginalization and absent or weakly enforced social and legal protections (Ribot et al., 2020). Subject to violence, oppression, and exploitation, few Honduran and other Latin American migrants are traveling north merely to escape climate change (Semple, 2019; see also Lustgarten, 2020; Rigaud et al., 2018). In 1,000 household surveys and 100 migrant interviews, almost no Sahelians crossing the Sahara toward Europe mention that they are fleeing drought. Rather, they explain their plight in terms of low prices for their crops, inadequate access to markets, and the lack of social services (Ribot et al., 2020). Similarly, people who fled an extremely violent Syria do not think they were pushed by climate change (Fröhlich, 2016; Selby et al., 2017). In such cases, people are not likely to feel that climate change is an important factor, for it is much less important than the precarity (à la Bourdieu, 1997) that they must contend with day to day. Thus, attribution to climate or climate change may read false to those affected when they view their precarity as a result of their local and broader political-economic situation. It is, of course, good scientific practice to provide the most accurate causal attribution of climate events, identifying as far as possible their anthropogenic component. Yet the role, meaning, and effect of this information are contingent on the local politics that shape the conditions of security and vulnerability that the climate event finds in place. To the extent that these framings are intended to draw attention to anthropogenic climate change in order to prevent future crises, it is ironic that they can divert attention from deeper social and political-economic causes of suffering, including the problematic conditions of violence and exploitation that fundamentally strain and diminish the very human lives that most analysts hope to protect.

4 | DISASTERS AND RESPONSIBILITY ATTRIBUTIONS: THE POLITICS OF CLIMATE-CENTRIC FRAMINGS IN BRAZIL

Socio-economic and political conditions turn extreme weather events into disasters (IPCC, 2012, 2014; Ribot, 2014; Sen, 1982; Watts & Bohle, 1993). For those in the Global South who live in precarious situations, such conditions, and the associated vulnerabilities, are starkly visible. They have also been revealed in Northern cases (see Somers, 2008, on Katrina). Attribution of crisis only to a climate event is therefore inadequate not only as a mechanical explanation, but also from a moral and strategic policy point of view.
Attributing disaster to human-induced or human-augmented climate events reduces the anthropogenic cause to faraway greenhouse gas emissions, and this occludes the role of local poverty, precarious housing, and the myriad other social and political-economic conditions that result from inequities, politics, and poor decision making (Castree et al., 2014). Moreover, these two levels are not sufficiently joined and addressed through current policy mechanisms; at both national and international levels, there is an avoidance of deep-cutting analysis of, and interventions into, the systemic causes of pollution and inertia (Dimitrov, 2020; Harris, 2021; Park et al., 2008). Insufficient follow-through on early pledges from developed countries to fund climate adaptation under the United Nations Framework Convention on Climate Change offers limited opportunity to address poverty and other structural conditions that undermine adaptation and resilience (L. Friedman, 2021); addressing both remains centrally dependent on national and local funds and decision-making (Council of Foreign Relations, 2013). Traditional international development institutions do not fill the gaps (Lahsen et al., 2020, p. 226). Evidence shows that Brazilians are aware of the difficulty of simultaneously attributing disasters to climate change and to more local socio-economic and political causes. Lahsen et al.'s (2020) study of the discourses of Brazilian scientists, journalists, and civic leaders around two flooding and landslide disasters, which occurred in 2008 and 2011, respectively, shows that even climate-concerned Brazilian environmental leaders systematically avoid adopting the climate frame for recurring weather-linked disasters, and that they sometimes even actively contest that frame. They show acute awareness of the political opportunity costs of attributing such recurring disasters to climate change, because doing so plays down the role of imprudent decision making by national and local decision makers. In the wake of the 2011 rain-induced flooding and mudslides in the mountains of the state of Rio de Janeiro, for instance, one of Brazil's most costly and tragic rain-induced disasters in recent decades, Brazilian climate scientists pushed back against climate-centric framings. Their headline-disseminated message asserted unequivocally that "Warming did not Cause [the] Tragedy" (Lahsen et al., 2020, p. 219). As they rightly noted, events like these have been occurring for decades, and yet national mapping and early warning systems were persistently left sub-par. Vulnerabilities on the ground set people up for crisis; inadequately prepared for, the disasters caused by the flooding and landslides were expected events, even if their intensity was unprecedented. The Brazilian scientists called for urgent disaster prevention policies, including disaster mapping, warning systems, re-urbanization, relocation of houses, and helping poor people to secure housing in less disaster-prone areas (Folha de São Paulo, 2011). A climate-centric framing was not compatible with these urgent policy goals (Lahsen et al., 2020). Later scientific attribution studies (Otto et al., 2015) support these Brazilians' disinclination to frame the two disasters as functions of climate change. However, the considerations were primarily political.
As another indication, the associated actors and national media criticized Brazilian decision makers who attributed the disaster to climate change, describing this as an effort to shirk responsibility for societal vulnerabilities caused by their poor decision making. For example, after the tragic 2008 flooding and landslide event in the Southern state of Santa Catarina, national and international experts refuted then President "Lula" da Silva's singular framing of the disaster as "certainly" and "intimately" linked to climate change and, as such, caused by "many developed countries that are not assuming proper responsibility" in policy negotiations under international climate treaties (Zero Hora, 2011). Noting that extreme flooding events are a long-standing problem in the region independently of anthropogenic climate change, they instead attributed the tragic disaster to unwise development decisions and inaction despite science-based recommendations for measures to reduce societal vulnerability (Correio do Brasil, 2011). A local environmental engineer denounced the climate-centric framing of the disaster as a "deliberate attempt to naturalize the catastrophe, to eliminate governmental responsibility" (ibid.). Experts emphatically noted that the most important adaptive response went unheeded: poverty reduction and proper government control of land occupation in areas at risk for landslides and flooding.

Climate-centric framings of disasters can usefully call attention to the primary responsibility of Northern countries for causing climate change, but at the cost of displacing blame and responsibility from more local decision makers who could have reduced societal vulnerability in the face of extreme weather events, whatever the role of climate change in them. Here, as in many other places around the world, societal vulnerability in the face of such events has roots in investment practices and in a long history of colonial and post-colonial exploitation (Ribot, 2014, p. 673; see also Farmer et al., 2004; Franke & Chasin, 1980). For local elites, however, it is politically more comfortable and convenient to blame global climate change than it is to trace crises to histories of underdevelopment (Rodney, 1973) and exploitative international systems (Davis, 2002, p. 11014), which (like climate change attributions) lead responsibility attributions back to the over-developed countries (Ribot, 2014). To the extent that climate narratives have led to policies to guard against climate extremes, efforts have targeted things such as water retention or pumping, rather than policies that might support local security via agricultural prices, access to markets and credit, or social services (Brottem & Brooks, 2018; Ribot et al., 2020; Tschakert, 2007).

Climate-centric attributions of recent flooding events have similarly been rejected in other parts of Brazil, also on the grounds of being overly convenient for national decision makers. Although recognizing that flooding events happen in the city of São Paulo "every year more frequently, with greater intensity and greater geographic distribution," a geologist protested: "We will not believe that global warming and urban waste are causing flooding."
Instead, he unequivocally blamed policymakers' poor decision-making, stressing that concrete construction and an absence of investment in land use and drainage systems, among other types of sound planning, have left the cities without drainage (Santos, 2013; see Colette, 2019, for a similar case in Argentina). The persistent reality of lacking disaster preparedness in Brazil, as elsewhere, despite many years of domestic and international climate policy, illustrates the tenuous value of the climate frame as a stimulant of improved mitigation or adaptation in many places, and might make disaster preparedness for its own sake a more strategic framing (Lahsen et al., 2020).

These policy implications are unrelated to the science of climate change attribution, which may indeed be accurate. Attribution of disasters both to climate and to more local expressions and causes of vulnerability may be objectively correct in many instances. Popular communication tends toward simplification, however, as scientific nuances and qualifications can undermine clarity, sow confusion among publics, and fail to promote desired action paths (Hassol et al., 2016). Recognition of this also seems to inform climate-centric disaster framing (Lahsen et al., 2020). Against simplification, others advocate for a "cosmopolitan moment" in the public life of science (Raman & Pearce, 2020), hoping for the "opportunity to forge a public culture comfortable with the epistemic diversity and ambiguity inherent to climate change, and yet a culture that can also reason together in the public good" (ibid., p. 1). But there are important obstacles to easy reconciliation of climate-centric disaster framing with vulnerability reduction, as we have suggested, including the long-standing tendency for developing countries to stress Northern, rich countries' primary responsibility for causing climate change and thus, morally, also for addressing it. As noted, this prevalent discourse, supported by climate-centric framings of disasters, serves to hide developing-country decision makers' co-responsibility. That is very apparent in Brazil, whose diplomats have led developing countries' emphasis on Northern primary responsibility in the climate regime (Viola & Franchini, 2017). This framing is difficult for Brazilians to challenge, since they tend to agree with the premise of differential responsibility even if they desire stronger climate action (Lahsen et al., 2020), and that might contribute to their inclination toward alternative disaster framings. Governments are known to manage blame strategically, to avoid public awareness and pressure in response to their decisions against substantial policy on climate change (Howlett, 2017, p. 625).

Finally, there is reason to challenge the pressure to always attribute crises to the weather or to climate change, and the underpinning assumption that doing so will always bring optimal policy outcomes. Who, then, is to decide the frame that Brazilians should adopt? Any objective judgment on that would require a reasoned, multiple-perspective-informed analysis of the evidence. Sometimes frames other than climate-as-cause might better serve public concerns and even achieve the hoped-for "climate action."
For example, although it does not come under the heading of climate policy, a strong National Forest Code in Brazil, if enforced, is a form of climate action to the extent that it preserves vegetation that is a carbon sink, including in strategic places where it can reduce the threat of floods and landslides (Silva, 2012). Here human wellbeing (via ecological sustainability) is central, and policy is aimed first at security. While climate change-related policies may also be worthy and have positive effects, forestry policy may be much more effective and perhaps more feasible, for reasons that Lahsen et al. (2020) discuss. Forest protection is certainly more within the purview of Brazil's government than tangible reduction of global climate change is. Moreover, Brazilians have multiple more immediate concerns. While they worry about climate change more than most other populations in the world (Leiserowitz, 2007; Lewis et al., 2019), they worry much more about deforestation: in 2012, 64% ranked it as the most important environmental problem for the country and the world, against 10% who ranked global climate change first (Brasil, 2012). Moreover, they take great cultural pride in, and attach great value to, their biodiversity-rich, abundant natural environment (Brasil, 2012). Stressing climate change may be important, but for purposes of immediate security and popular preferences it is far from primary.

The forestry example also shows that emphasizing climate change is not necessarily the best, or the only, means of stimulating climate-relevant action, whether in the form of mitigation or adaptation. As noted, relatively little institutional and financial support for climate adaptation and resilience is found nationally in Brazil or internationally under the United Nations Framework Convention on Climate Change, and traditional international development institutions also offer inadequate funding for adaptation (Lahsen et al., 2020, p. 226). In Brazil, "official climate policies do not necessarily translate into support for climate vulnerability reduction and adaptation via forestry, due to a disconnect between official climate policy and actual decision making at the level of forest- and more generally land-management" (ibid.). Rather than merged, these compatible agendas often remain parallel activities.

"One frame fits all" therefore does not hold when it comes to attributions of weather-related crises or ranking the importance of environmental problems; climate-centric framings of disaster may yield more proactive policy responses in some contexts than in others. They also might be more relevant in countries where anti-climate-change forces abound, such as the United States. Anti-environmental campaigns and climate skepticism are less prevalent in the Global South (Painter & Ashe, 2012), or they can take forms other than skepticism about human-induced climate change (Lahsen, 2017). Where such counter-forces are prevalent and obvious, attributions might help consolidate belief in climate change as real and to be reckoned with. One frame may not fit a single national context, either, when it comes to tendencies to link extreme weather events to climate change, even for the same subgroup of actors. Climate-concerned Brazilian scientists drew attention to non-climatic factors in the case of the 2014-2015 Southeastern drought (e.g., poor governance by both public and private entities).
However, even in that same case, some prominent Brazilian scientists also adopted climate-centric framing, drawing attention to the loss of "flying rivers" caused by national deforestation. This shows the role of context and purpose in framing choices, and that the climate frame also sometimes serves Brazil's environmental coalition.

FRAMING AND NORMATIVITY

Regardless of whether the role of climate change is large, small, or unknown, disasters that follow extreme weather events have multiple causes, in Brazil and elsewhere. Analysts' choices of analytic frameworks always highlight one cause over others and are thus inherently political, whether or not they recognize this. Identifying the degree to which a climate event's intensity or duration is due to anthropogenic meddling is a quantitative matter that, even when rigorous, is still subject to dispute (Jézéquel et al., 2018). The analytic frames we use to explore the degree to which a climate event or its anthropogenic increment causes damage are normative, insofar as each frame locates causality in different factors and thus has different implications for responsibility and action. Norms are implicit in any analytic frame (Giddens, 1999; Sayer, 1992), including those specific to climate change (Callison, 2015; Hulme, 2011; Rudiak-Gould, 2015). Any framing of, or theoretical approach to, the analysis of human-environment relations embeds choices about the variables that matter and the relations among these variables. It embeds choices about the import of structure, history, and context (Farmer et al., 2004; Sayer, 1992, p. 2). Indeed, we always observe and understand via a priori knowledge or axioms that prefigure experience (Lund, 2014).

Which framing prevails in any given causal analysis vitally shapes understandings of the problems we observe and their solutions. In turn, these understandings inevitably shape responsibility (Calabresi, 1975; Hart & Honoré, 1959) or "blame games" (Hood, 2010) whereby actors apply their frames to attribute responsibility for the creation and resolution of societal problems. Frames are chosen within value-laden perspectives by scientific analysts as much as by lay persons, given the implicit solutions and responsibilities that a selected frame will serve. This is partly why Media Matters for America and WMO communications scholars try to guide scientists' constructions and choices. Cause is thus contentious. It points a finger, identifying the responsible and the guilty (Calabresi, 1975; Hart & Honoré, 1959). Producing a seemingly pure, neutral scientific ideal of causality that links climate events directly to damage erases some of that contention from view, since a biophysical chain of events leading back to a climate hazard blames everyone, being "anthropo"-genic (generated by all humans), and thus no one (Castree et al., 2014; Rudiak-Gould, 2015; Schwartz, 2019).

Different frames embody different moral stances, whether the expert analyst is conscious of this or not. Cashore and Bernstein (2020, p. 1) show that "… experts carry hidden cognitive frames about how to conceive of the problem at hand. These frames, in turn, strongly influence policy prescriptions." Morality thus must be acknowledged in any analysis involving humans, because morality (the normative "shoulds," "oughts," expectations, desires, and priorities that guide human action) is an empirically observable element in the causality of any human action. While methods can be value free, theory, or the frames we bring to research, cannot (Sari, 2014, p. 235), since a theory or frame can only be identified by a judgment of its effect on something, an outcome, that is humanly valued (Drèze & Sen, 1989, p. 15; Giddens, 1999, p. 5).
Any motive for research or reporting is a human motive, and so no approach to knowing comes without purpose. Acknowledging the power and social content of frames makes us aware of the implicit judgments they always carry. This can clarify moral choices often obscured by reductionist technocratic discourses.

CONCLUSION

Politics-sensitive analysis is needed to gauge the strategic value of climate-centric disaster attribution in any given context. Attributing the damages, even incremental damages, only to the climate change increment is incomplete and thus misleading, since even the increment can only be a function of the degree of vulnerability; the incremental damage, like disaster writ large, does not fall from the sky. We urge awareness: climate change never causes loss or damage independently of the social conditions on the ground in specific places; the degree to which climate change can trigger disaster depends on the degree to which people are already exposed and precarious. When explaining disaster, whether or not climate-related, we must explain and address such vulnerabilities, for which there are well-established analytic methods.

We view climate change as a major problem for humanity. We do not challenge, nor would we ever diminish, the important scientific effort to attribute extreme weather events to anthropogenic climate change. Explaining and reducing climate change is imperative. We do suggest, however, that scientific research and journalistic accounts that attribute particular crises to climate events or climate change need to be examined for embedded assumptions about meanings, priorities, and causal relations, not least assumptions about the politics and policy consequences of climate-centric disaster attribution. The stress on climate as cause may be meant to call attention to human actions as the cause. However, the framing can skew attention toward stressors "from the sky" rather than to the social, and often more treatable, causes of weather-related crises. Climate-centric disaster framing is politically useful to actors with an interest in diverting attention from local, national, and international policy initiatives that might bring, or could have brought, more direct and locally relevant remedial action.

Where the purpose is to identify ways to reduce disasters and attribute responsibility for damage, it is imperative to attribute the associated damages to the causes of vulnerabilities in place. This is a separate analysis from that of demonstrating the degree to which climate change is a driver behind a given climate event. The latter can attribute the anthropogenic element of the climate event within the statistical possibilities of measures, trends, and projections. The vulnerability analysis can attribute damages that follow the climate event to on-the-ground susceptibilities to damage within the analytic possibilities of social and political-economic enquiry. It is encouraging that the IPCC (2012, 2014) now acknowledges social and political-economic causes of vulnerability as more central to the picture of climate crises. Reconciliation of climate-centric disaster attribution and vulnerability-centric disaster attribution remains difficult, however, since their framings of responsibility can lead in different directions.
Further research might explore the extent to which this tension could be reduced (a) by systematically accounting for factors such as national and global overconsumption and making the causes of poverty and inequality central to climate analysis and policy foci across scale, which currently is not common, and (b) by expanding the international mitigation-centered climate regime to also treat climate adaptation and resilience policy as simultaneously local, national, and global responsibilities. These may be realistic first steps and policy goals.
System Development of Making an Image Map Based on Google Earth

With the advent of the information age, the establishment of high-precision, low-cost digital image maps is increasingly important for surveying and mapping engineering, land management, forestry, and marine monitoring. Combining real remote sensing imagery with computer science and technology to produce intuitive and clear digital image maps has become a trend in the development of GIS systems, but professional image data is expensive to purchase, which greatly increases system development cost. Therefore, this paper proposes a new method based on Google Earth and GIS technology to develop a system for such applications. The system mainly includes three modules: geographic information collection (based on remote sensing imagery), geographic information storage and management (using database technology), and digital map production. The key technologies of the system are the establishment of a regional geographic model, attribute data management, the establishment of an information database, and information query and analysis. The system can provide digital image maps efficiently and at low production cost, and its database enables the timely exchange and sharing of geographic data. Compared with traditional drawing methods, the system reduces development cost and improves the authenticity, intuitiveness, and practicability of the GIS system; and compared with MapInfo and ArcView, the system is easier to operate and can draw efficiently without purchased professional image data. In actual production, the digital image maps made by the system can save considerable manpower, material, and financial resources, improving work efficiency and reducing work intensity, and they also provide theoretical and technical guidance for current surveying curriculum teaching.

Introduction

With the advent of the information age, the establishment of real and intuitive digital geospatial information is becoming increasingly important for urban planning, environmental monitoring, and related fields [1]. The vector digital maps presented in traditional geographic information systems represent features only with simple primitives such as points, lines, and areas, using text for basic labeling and description; the resulting information is abstract, the display is not intuitive, and readability is poor. Using real remote sensing images as the background display and adding text, symbols, line segments, and other primitives to supplement the description of feature information can greatly improve the authenticity and practicality of a geographic information system and better meet the actual needs of engineering construction. New geographic information technology has had a great influence on surveying and mapping engineering [2]. The processes of information collection and processing have basically become scientific, automated, and standardized; data storage and management have become faster and more convenient; and data transmission and sharing have become straightforward, replacing traditional paper-based exchange. Meanwhile, the Internet allows multiple users to execute different operations on the same map, which improves efficiency. Modern information technology therefore strongly shapes the development of surveying and mapping.
Topographic maps of various scales are used at each stage of engineering survey and design [3]. With the development of aerial surveying and remote sensing technology, high-definition orthophoto maps and satellite images are favored by survey designers because of their intuitiveness and timeliness [4]. However, their high cost also makes them unaffordable for many general engineering units. The purpose of developing the system described here is to provide digital maps with higher economic value and lower production costs for the design and planning of surveying and mapping engineering construction projects, to partially make up for shortcomings in the collection of geographic information, and to address the problem of location accuracy in engineering construction [5].

Google Earth

Google Earth is a computer program that renders a 3D representation of Earth based primarily on satellite imagery [6]. The program maps the Earth by superimposing satellite images, aerial photography, and GIS data onto a 3D globe, allowing users to see cities and landscapes from various angles [7]. Users can explore the globe by entering addresses and coordinates, or by using a keyboard or mouse. The program can also be downloaded on a smartphone or tablet, using a touch screen or stylus to navigate. Users may add their own data using Keyhole Markup Language (KML) and upload it through various sources, such as forums or blogs. Google Earth is able to show various kinds of images overlaid on the surface of the earth and is also a Web Map Service client. Google has revealed that Google Earth now covers more than 98 percent of the world and has captured 10 million miles of Street View imagery, a distance that could circle the globe more than 400 times [8].

Image Map

An image map is a map based on ground remote sensing imagery. It directly reflects the geographical characteristics and spatial distribution of cartographic objects by applying map symbols and annotations to aerial or satellite remote sensing images after geometric correction, projection transformation, and scale adjustment [9]. The image map combines the advantages of both aerial photographs and line topographic maps: it contains the rich content of aerial photographs while retaining the cartographic presentation and geometric accuracy of a topographic map [10]. The development of image maps is closely related to the development of aerial photography, aerial surveying technology, and aerospace technology. Aerial photogrammetry progressed from analogue measurement in the 1930s to analytical photogrammetry in the 1970s; digital photogrammetry rose in the late 1980s and has developed into the current stage of all-digital photogrammetry. The core technology benefits from advances in computer technology, communication technology, aerial and space remote sensing technology, and digital image theory. With the infiltration of "3S" technologies (GPS, RS, and GIS), image maps have taken on renewed significance.
Overall System Idea

The construction of this project is based on RS, GIS, and computer application technology to assist the indoor collection of geographic information and to build a computer-assisted system for managing geographic information, by establishing a database and using digital maps as an operating platform to achieve timely exchange and sharing of geographic data [11]. The system is developed on Google Earth imagery. It mainly includes geographic information collection (based on remote sensing imagery), geographic information storage and management (using database technology), and digital map production. The geographic information collection function is realized from Google Earth's 3D image and the measured surveying feature points (obtaining 3D coordinates and entering attribute values). The geographic information storage and management function records the relevant data of the collected feature points in a data table for easy output; at the same time, it can also provide drawing data sources for drawing software such as CASS. The digital map making function produces a digital map (drawing points, lines, and areas) based on the collected feature points.

Implementation of Information System Management

The information management system realizes information resource sharing through computer technology, establishes a table space in the database, and creates related information tables, such as basic information tables and geographic location tables, according to application requirements. The main function of the system is to display the geographic location of each collection point and related engineering information. Therefore, the main contents of the system construction are as follows:
(1) Geographic information collection
(2) Geographic information storage and management
(3) Digital map making
(4) Daily information management and maintenance functions (edit, modify, output, etc.)
Achieving the above goals involves the application of "3S" technology, the collection of geographic information, the establishment of databases, and software production. The system's key technologies involve the establishment of regional geographic models and the management of attribute data, as well as the establishment of information databases and information query and analysis. The relational tables in the database mainly include geographic location tables and geographic attribute tables, as shown in Tables 1 and 2. The two tables are linked by sequence number fields.
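Because Tables 1 and 2 themselves are not reproduced here, the following is a minimal sketch, in Python with the standard sqlite3 module, of how a geographic location table and a geographic attribute table linked by a sequence number field might be defined and joined; all field names are illustrative assumptions rather than the paper's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Geographic location table (cf. Table 1): one row per collected feature point.
cur.execute("""CREATE TABLE geo_location (
    seq_no    INTEGER PRIMARY KEY,  -- sequence number linking the two tables
    longitude REAL NOT NULL,        -- decimal degrees
    latitude  REAL NOT NULL,
    elevation REAL                  -- metres
)""")

# Geographic attribute table (cf. Table 2): descriptive data for the same point.
cur.execute("""CREATE TABLE geo_attribute (
    seq_no       INTEGER REFERENCES geo_location(seq_no),
    feature_type TEXT,              -- e.g. 'road', 'building'
    description  TEXT
)""")

cur.execute("INSERT INTO geo_location VALUES (1, 120.1234, 30.4567, 52.3)")
cur.execute("INSERT INTO geo_attribute VALUES (1, 'building', 'pump station')")

# Joining on seq_no recovers location plus attributes for display or query.
for row in cur.execute("""SELECT l.seq_no, l.longitude, l.latitude, a.feature_type
                          FROM geo_location AS l
                          JOIN geo_attribute AS a ON l.seq_no = a.seq_no"""):
    print(row)
conn.close()
```

Keeping coordinates and attributes in separate tables linked by the sequence number lets either side be edited or queried independently, which matches the editing and query functions described below.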
This system is based on Google Earth and is an application system developed with GIS technology; GPS and RS technologies are also involved in the collection of information. The key point of system development is to collect geographic locations from Google Earth remote sensing images and to realize the management and application of geographic information with GIS technology. A geographic information system has strong spatial information analysis functions and can be used to quantitatively describe the spatial elements in the construction area, collected from Google Earth remote sensing images and other sources, as points, lines, and polygons (Figure 1). Such a digital map management system can:
A. Efficiently display map information, create charts of visual geographic information, and provide related information services.
B. Provide interactive drawing tools for creating graphics in maps; enter static and dynamic geographic objects, symbols, and georeferencing information layers; and realize information query and statistical analysis, with the geographic information system serving as a platform for the management and query display of the construction targets in the area.
C. Import standard GIS data layers offline (add new layers).
D. Query information, both real-time and historical; based on the geographic information system platform, the results can be further combined with remote sensing applications to mark them on an electronic map and display them graphically in the form of reports and charts.
E. Leave a certain margin beyond the application services, which can be used to integrate other information services.

System Architecture Design

1. The goal of building. Provide information exchange and data sharing through system construction. Based on the database application, the management objects are described in the database, a unified and standardized record encoding is developed, basic codes are classified according to the required file types, and one-level, two-level, and multi-level encodings are designed according to the application functions. Finally, all system retrieval functions can be realized through the coding system. Combined with the geographic information system, the interaction of query data and geographic information is realized, and business information is made clearer through intuitive location (geographic) information.

2. The main functions of the system. The system consists of multiple pages to facilitate its various functions. A specially designed code management module table is used to manage each function page in the database, which is convenient for users. The main functions of the system are as follows.

A. Geographic Information Collection. The feature points are determined by moving the cursor (as shown in Figure 2) or by moving the image. Clicking the cursor obtains the 3D coordinates of the feature point and records them in the data table. The data in the data table can be output as a dat file in the format required by CASS software, and the geographic information (coordinates) and attribute data in this file can be used to make digital maps.
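As a rough illustration of that export step, here is a short Python sketch that writes collected points to a comma-separated .dat file. The 'point id, code, easting, northing, elevation' field order follows a commonly described CASS coordinate-file layout, but that layout is an assumption here; the exact format expected by a given CASS version should be verified.

```python
def export_cass_dat(points, path):
    """Write feature points to a CASS-style .dat coordinate file.

    `points` holds (point_id, code, east, north, elev) tuples; the
    assumed line layout is 'id,code,E,N,H' with coordinates in metres.
    """
    with open(path, "w", encoding="utf-8") as f:
        for pid, code, east, north, elev in points:
            f.write(f"{pid},{code},{east:.3f},{north:.3f},{elev:.3f}\n")

# Example with two hypothetical feature points (Gauss-Krueger metres).
export_cass_dat(
    [(1, "", 440123.456, 3000234.567, 52.300),
     (2, "", 440150.000, 3000250.000, 52.800)],
    "survey_points.dat",
)
```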
The above process is similar to the detail survey process in surveying practice. Point surveying with an electronic total station likewise obtains the three-dimensional coordinates of each survey point and stores them in the instrument; after surveying, the data in the total station are exported to the computer [12], yielding the data file. This system therefore gives teachers a very intuitive aid for explaining to students how to survey and map terrain (identify feature points). The overall description of surveying is more visual than in the field, and it can cover many aspects and cases. In addition, remote sensing images can be used to browse geographic information, achieving comprehensive observation and seamless operation of 3D images (zoom in, zoom out, pan, rotate, etc.).

B. Geographic Information Storage and Management. A variety of geographic information data can be entered into the system. The entered data take multiple forms, such as text, pictures, and audiovisual materials, and the system automatically stores and manages them by category. Various data in the system can be edited (added, modified, organized, deleted) according to authorized permissions. Various information in the database, and the associations among records, can be queried according to various conditions and user requirements, providing simple aggregation, classification, filtering, sorting, and other functions. According to the authorization level, original data and query results can be output in different formats, that is, printed or stored as new files.

C. Digital Map Making. The coordinate information of the points collected from the remote sensing image can be stored in the database, and the geographic information can be represented graphically on the map, realizing location data collection and attribute data editing of the primitives. The user can browse the geographic location of the area of interest and the surrounding geographic environment to understand terrain conditions, features, and so on, with seamless operation of the map (zoom in, zoom out, pan, etc.). Geographic information (object locations and their attributes) can be queried according to specified conditions and expressed visually in various forms such as text and images. The system can edit digital maps, add and delete geographical objects, update attribute data, and change the style of map symbols.

D. Combination of Engineering Project Information and GIS Information Management. Combining engineering project information with geographic information (map) display makes engineering information more intuitive and easier to read.

Implementation of System Construction

Following the principles of step-by-step implementation, openness, and compatibility, the system meets the requirements of practicability and continuity. It uses advanced and reliable software and hardware facilities, uses database technology to implement comprehensive data management and information sharing, and takes advantage of geographic information system technology to provide a comprehensive analysis and synthesis information service system for infrastructure. Emphasis is placed on combining technology with management, advancedness with practicability, versatility with safety, and reliability with operability, and the system adopts a modular design. The construction process is as follows:
(1) Scheme design. With reference to existing technology and equipment, and drawing on domestic and foreign experience, the overall scheme is designed.
(2) System production. System production is writing code to form the software. The production of this system involves GIS technology and is completed using control technology.
(3) System debugging. After the preliminary completion of the system, simulation data are used for system debugging, and the system is tested in pilot project management once it is stable.

Implementation of Main Functions

After the system is completed, it should have the following main functions:
1. System login interface. After the system starts, the login interface appears, as shown in Figure 3.
2. Geographic information collection function. Geographic information can be collected point by point or in batches. The method of single-point acquisition has been described previously and is not repeated here. The method of batch acquisition is mainly designed for acquiring image maps. Since Google Earth does not provide free data downloads, the method used here is to obtain pieces of satellite imagery by copying the screen.
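To make the batch-acquisition idea concrete, the following is a minimal Python sketch that divides a rectangular area of interest into acquisition-point centres spaced by given longitude and latitude differences and writes them as KML placemarks for Google Earth to fly to. The function names, test coordinates, and output file name are illustrative, not taken from the paper.

```python
def grid_points(lon_min, lon_max, lat_min, lat_max, d_lon, d_lat):
    """Yield acquisition-point centres covering a rectangular area,
    spaced by the given longitude/latitude differences (degrees).
    Plain float accumulation is adequate for a sketch like this."""
    lat = lat_min
    while lat <= lat_max:
        lon = lon_min
        while lon <= lon_max:
            yield lon, lat
            lon += d_lon
        lat += d_lat

def write_kml(points, path):
    """Write the points as KML placemarks; opening the file in
    Google Earth centres the view on each point in turn."""
    with open(path, "w", encoding="utf-8") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        f.write('<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n')
        for i, (lon, lat) in enumerate(points, start=1):
            f.write(f"<Placemark><name>P{i}</name>"
                    f"<Point><coordinates>{lon:.6f},{lat:.6f},0</coordinates>"
                    "</Point></Placemark>\n")
        f.write("</Document></kml>\n")

# Example: a small test area divided with 0.01-degree spacing.
write_kml(grid_points(120.00, 120.05, 30.00, 30.03, 0.01, 0.01),
          "capture_points.kml")
```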
The basic principle is to let Google Earth browse the area of interest at an appropriate scale and then capture the screen. For the designated screenshot area, the system software can automatically calculate a division into blocks according to the set area size; after the screenshots are completed, serially numbered satellite image tiles are obtained. The specific operation method is as follows.

First, select the coordinate differences (longitude and latitude differences) between the area to be collected and a given collection point. The system calculates the position of each point to be collected according to these parameters, as shown in Figure 4, and then proceeds point by point.

Figure 4. Location of batch points.

The collection method is as follows: after the system loads the Google Earth image, the view is first moved so that the acquisition point coordinate lies at the center (a KML file with the given L, B or X, Y is generated and run to achieve this), and then the Google Earth image of this area is captured (screen copy). A key parameter to be determined in this process is the height of the screenshot. Taking a computer monitor with a screen resolution of 1024 × 768 as an example, when the height of the screenshot is 500 m, the screen display area spans about 270 m, giving roughly 3.8 pixels per meter of ground distance (1024 px / 270 m), i.e., about 0.26 m per pixel. This screenshot accuracy is sufficient to form a 1:110,000 image and can meet the requirements. Of course, the high-definition images provided by Google Earth vary, including QuickBird images with 0.6 m resolution, IKONOS images with 1 m resolution, SPOT images with 2.5 m resolution, and so on. The resolution of the source image itself is the key to sharpness, and reducing the height of the screenshot without limit does not necessarily improve the sharpness of the screenshot. Therefore, a simple method for determining the screenshot height is to reduce the observation height over the screenshot area until the image stops getting clearer and becomes slightly blurred; the observation height at that point can be used as the screenshot height. Image positioning can use the following call to the Google Earth COM API:

App_GE.SetCameraParams(YY, XX, H, EARTHLib.AltitudeModeGE.AbsoluteAltitudeGE, Scale_Height, 0, 0, 5)

where Scale_Height is the observation height for the screenshot area and can be set as needed.

Secondly, after image acquisition is completed, the image registration problem must be solved. When taking a screenshot, a georeferenced high-resolution satellite image of the screenshot area can be obtained by acquiring four or more registration points at the same time. The registration points are usually collected at the four corners and the center point of the image, as shown in Figure 5. The coordinates are WGS84 coordinates in decimal degrees. Through coordinate conversion from the WGS84 ellipsoid to the Xi'an 1980 ellipsoid, the Xi'an 1980 coordinates of the four corner points of the picture can be obtained.
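As a sketch of that last conversion step, the snippet below uses the pyproj library (an assumption; the paper does not name its conversion tool) to transform WGS84 geographic coordinates (EPSG:4326) to the Xi'an 1980 geographic CRS (EPSG:4610). Note that a rigorous datum shift between WGS84 and Xi'an 1980 requires locally determined transformation parameters; without them the result is only approximate.

```python
from pyproj import Transformer

# WGS84 geographic (EPSG:4326) -> Xi'an 1980 geographic (EPSG:4610).
# always_xy=True fixes the axis order to (longitude, latitude).
transformer = Transformer.from_crs("EPSG:4326", "EPSG:4610", always_xy=True)

# Hypothetical registration points: (lon, lat) of the screenshot corners.
corners_wgs84 = [(120.000, 30.000), (120.010, 30.000),
                 (120.000, 30.007), (120.010, 30.007)]

for lon, lat in corners_wgs84:
    x, y = transformer.transform(lon, lat)
    print(f"WGS84 ({lon:.6f}, {lat:.6f}) -> Xi'an 1980 ({x:.6f}, {y:.6f})")
```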
3. Information storage function. Information storage saves the collected geospatial location information (coordinates) to the corresponding location in the database.

Experimental Results (Partial Results)

There are still many difficult areas in China, especially some mountainous areas, that have not been covered by large-scale topographic maps. This situation is far from meeting the requirements of engineering survey design and construction. The large-scale planar image obtained by the above method, combined with a medium-scale topographic map or DEM, can largely compensate for the lack of a large-scale topographic map. In this experiment, we used the above method in a mountainous area of Zhejiang Province. We collected data at multiple points, as shown in Figure 8, and used them to make a satellite image map. From the results of the sampling inspection, the plane coordinate error is less than 0.0036″. Such a result can bring great convenience to the construction of the project and play an important role in optimizing the design. The experimental area is a rectangle divided into several small areas according to the set values of the longitude and latitude differences (see Figure 4; the blue box is the selected area). In each area, the locations of several collected points can be seen, and the data storage form is shown in Table 3. All the points collected in the area are plotted on the map, as shown in Figure 8.

Figure 9. Image map.

Social and Economic Benefits of Project Results

(1) Social benefits. The development of this product not only plays an important role in engineering planning but also greatly aids the teaching of surveying courses, especially field training. It helps students understand surveying and mapping knowledge and master surveying and mapping skills better than before, and the level of students' mastery has a direct impact on social benefits. In addition, once the system is adopted as a drawing tool in actual production, it can save considerable manpower, material resources, and financial resources, improve work efficiency, reduce work intensity, shorten working time, and make operators' work more pleasant.

(2) Economic benefits. Using the system, a digital map with a certain accuracy (our preliminary study indicates a point coordinate error <0.000010) can be made, and such a map can fully meet the requirements of planning and design in water conservancy construction. Because Google Earth remote sensing images are free, using this system to complete the corresponding work can save a great deal of human, material, and financial resources.

Research Shortcomings

After trial operation, the product still has the following deficiencies:
1. The accuracy of the products produced by the system differs strongly by region; for example, accuracy in mountainous areas is lower than in plain areas.
2. The system operation is only partially automated and requires considerable manual intervention to complete the information collection.
3. The operability of the produced maps needs to be further strengthened.
Protein Biofortification in Lentils (Lens culinaris Medik.) Toward Human Health

Lentil (Lens culinaris Medik.) is a nutritionally dense crop with significant quantities of protein, low-digestible carbohydrates, minerals, and vitamins. The amino acid composition of lentil protein can impact human health by maintaining amino acid balance for physiological functions and preventing protein-energy malnutrition and non-communicable diseases (NCDs). Thus, enhancing lentil protein quality through genetic biofortification, i.e., conventional plant breeding and molecular technologies, is vital for the nutritional improvement of lentil crops across the globe. This review highlights variation in protein concentration and quality across Lens species, genetic mechanisms controlling amino acid synthesis in plants, functions of amino acids, and the effect of antinutrients on the absorption of amino acids into the human body. Successful breeding strategies in lentils and other pulses are reviewed to demonstrate robust breeding approaches for protein biofortification. Future lentil breeding approaches will include rapid germplasm selection, phenotypic evaluation, genome-wide association studies, genetic engineering, and genome editing to select sequences that improve protein concentration and quality.

INTRODUCTION

Nutritional imbalances and deficiencies cause several malnutrition-related and non-communicable diseases (NCDs) in humans. A poor diet that lacks macro- and micronutrients, such as proteins, low-digestible carbohydrates (LDCs), fats, vitamins, and minerals, results in protein and micronutrient malnutrition. LDCs, also known as prebiotic carbohydrates, are defined as 'a substrate that is selectively utilized by host microorganisms conferring a health benefit' (Gibson et al., 2017). These dietary prebiotic carbohydrates pass undigested through the upper digestive tract and are fermented by microorganisms in the colon, improving gut health. The most common human health impacts of malnutrition are stunting, intestinal health issues impairing digestion, obesity, overweight, and an increased risk of diet-related NCDs (Branca et al., 2019). Major NCDs related to poor dietary intake that threaten human life include cardiovascular diseases, cancer, chronic respiratory diseases, and diabetes (World Health Organization, 2019). Notably, a protein-deficient diet leading to protein malnutrition has alarming consequences for infants, young children, and females across the globe (Semba, 2016). However, a protein-rich legume-based diet is a viable, sustainable, and healthy option to prevent malnutrition in developing countries. Though animal proteins are extensively utilized in human diets, plant-based proteins have grown in popularity, and their demand has increased globally due to nutritional value, low carbon input, and environmental concerns (Asif et al., 2013). Staple foods rich in macro- and micronutrients can alleviate the risk of malnutrition. Plant-based diets comprised mainly of cereal and legume staples are popular worldwide. Legume crops, including lentil (Lens culinaris Medik.), have a protein concentration (20-30%) higher than cereals (10-12%) and thus have the potential to combat protein malnutrition and serve as gluten- and allergen-free protein sources.
Lentil is highly nutritious and affordable, has a shorter cooking time than other pulse crops, and features high protein concentrations, low-digestible carbohydrates, minerals, vitamins, and low concentrations of phytic acid (Thavarajah et al., 2009; Kumar et al., 2015). Lentil is not a source of cholesterol, and its low fat content makes it easier to digest than other pulse crops. Lentil proteins include both essential and non-essential amino acids but are notably low in the sulfur-containing amino acids methionine (Met) and cysteine (Cys; Khazaei et al., 2019). Biofortification is a possible approach to improve the unbalanced composition of amino acids in lentils through appropriate conventional breeding strategies and genomic selection. With increasing global protein demand, protein biofortification would position lentil as a 'nutritional booster' to increase global nutritional security and combat malnutrition and NCDs.

Storage protein quantities demonstrate high variability due to the quantitative nature of the genes regulating protein synthesis in the seeds (Kumar et al., 2020). Strong genotype × environment interactions, indicated by the moderate broad-sense heritability (31.31%), are another reason for the high variation in storage protein concentration in lentil seeds (Gautam et al., 2018). Lentil seed proteins other than storage proteins also have metabolic functions; these metabolic proteins regulate numerous physiological processes in the plant, including enzymatic activity and structural and physiological functions (Scippa et al., 2010). Ultimately, lentil seed protein composition contributes to human health by providing essential amino acids necessary for metabolic processes and nutritional balance in the human body. Optimizing the plant breeding process and location sourcing may help develop better protein-enriched lentil cultivars for global plant-based protein demand. The objectives of this paper are to review the protein concentration and quality variations within the genus Lens, the pathways and genes regulating the synthesis of amino acids, the functions of amino acids for human health, and breeding strategies related to lentil protein biofortification.

LENTIL BIOFORTIFICATION

Lentil is an annual diploid (2n = 2x = 14) cool-season food legume that originated in the Middle East (Cubero, 1981). The genus Lens comprises L. culinaris, L. ervoides, L. nigricans, and L. lamottei. L. culinaris is further divided into four taxa: L. culinaris ssp. culinaris, L. culinaris ssp. orientalis, L. culinaris ssp. tomentosus, and L. culinaris ssp. odemensis (Ferguson et al., 2000). The Lens genus has been classified into primary, secondary, tertiary, and quaternary gene pools according to phylogeny derived from genotyping-by-sequencing (GBS). The primary gene pool contains L. culinaris, L. orientalis, and L. tomentosus, whereas L. odemensis and L. lamottei are in the secondary gene pool; the tertiary and quaternary gene pools each contain a single species, L. ervoides and L. nigricans, respectively (Wong et al., 2015). Of these, only L. culinaris ssp. culinaris is domesticated and cultivated worldwide, representing crops over a 5.01 M ha area with an annual production of 6.54 M tonnes. Canada is the leading producer, contributing about 44% of the world's lentils; other major lentil-producing countries are India, the United States of America (United States), Turkey, Australia, Nepal, and Bangladesh (FAOSTAT, 2021).
Various researchers have reported protein concentrations in current lentil cultivars in the range of 20-30% (Table 1). One study (Bhatty, 1986) found similar protein concentrations in wild and cultivated lentils, suggesting homogeneity for protein concentration in the genus Lens. However, a recent study (Kumar et al., 2016b) efficiently distinguished wild species from cultivated lentils by protein concentration. In this study, L. orientalis, the immediate progenitor of cultivated lentil, expressed the highest average protein (24.15%) among all the wild species, followed by L. ervoides (22.99%). The other wild species, L. odemensis and L. nigricans, showed slightly higher average protein content (19.7 and 19.53%, respectively) than L. culinaris. Similar protein levels were seen in L. tomentosus (18.75%) and cultivated lentils (18.7%). Extensive variation was observed for protein content within L. orientalis and L. ervoides, ranging from 18.3 to 27.75% and 18.9 to 32.7%, respectively. ILWL-47, an L. ervoides accession, had an exceptionally high protein content of about 32.7% and is therefore a potential candidate for protein quality improvement in lentil breeding programs (Kumar et al., 2016b). Protein subunit fraction profiling has indicated variable levels of the albumin protein fraction (APF) and globulin protein fraction (GPF) among Lens species, with the wild species having higher APF and GPF concentrations than the cultivated species (Bhatty, 1982). Among the evaluated wild species, L. orientalis and L. ervoides contained higher APF and GPF levels than L. nigricans (Bhatty, 1982).

The proportions of amino acids in lentil proteins vary across genotypes in the cultivated gene pool (Table 2). Met and tryptophan (Trp) represent a minor fraction among all amino acids and are thus termed limiting amino acids. Comparing lentil protein with cereal proteins indicates good nutritional complementation for Met and lysine (Lys), and to some extent for Trp and threonine (Thr), because cereals are rich in both Met and Trp (Bhatty, 1986). Generally, all essential amino acids except Lys are deficient in lentils, but moderate to high proportions of non-essential amino acids are present (Khazaei et al., 2019). Lentil proteins are also lacking in other S-containing amino acids such as Cys. The albumin fraction of lentils contains more essential amino acids than the globulin fraction (Bhatty, 1982). Recent studies also indicate that amino acids vary among distinct species of the genus Lens, with a spectrum of variation seen for amino acid content among L. culinaris, L. orientalis, L. ervoides, L. nigricans, and L. odemensis. Phenylalanine (Phe), Met, valine (Val), leucine (Leu), and isoleucine (Ile) concentrations are significantly higher in wild species than in cultivated lentils (Table 3; Rozan et al., 2001). Similarly, non-essential amino acid content is also higher in wild species than in L. culinaris. Such evidence signifies that wild species are a potential source of candidate genes that can be harnessed to improve protein quality in cultivated lentils.

GENETIC CONTROL FOR AMINO ACID BIOSYNTHESIS IN PLANTS

The genetic mechanisms controlling seed protein concentration have similar regulation and pathways in different plants, including pulse crops. In pulse crops, genetic control of seed protein content has not been widely studied except in chickpea (Cicer arietinum), soybean (Glycine max), and pea (Pisum sativum).
However, genetic control of seed protein content has been studied extensively in cereals (Mann et al., 2009; Olsen and Phillips, 2009; Chen et al., 2018; Borisjuk et al., 2019) and the model plant Arabidopsis thaliana (Jasinski et al., 2016). In chickpea, seven candidate genes that regulate seed protein concentration have been identified (Upadhyaya et al., 2016). In soybean, three QTL (qPro10a, qPro13a, and qPro17b) for protein were identified in a recombinant inbred line (RIL) population (Zhonghuang 24 × Huaxia 3) on chromosomes 10, 13, and 17, respectively (Liu et al., 2017). Several genes regulating seed protein concentration in soybean were found on chromosomes 15 and 20 (Patil et al., 2017). Another gene, BIG SEEDS1 (BS1), controlling seed size, weight, and the amino acid composition of the protein, has been characterized in Medicago truncatula and soybean (Ge et al., 2016).

Groups of highly coordinated genes (HCGs) controlling the aspartate family (Met, Ile, Lys, Thr, and Gly) and aromatic amino acid formation were also identified in A. thaliana (Less and Galili, 2009). These HCG groups contain several genes controlling the formation of amino acids. The first group, related to the aspartate family, contained the catabolic genes THA1 (Thr to Gly metabolism), BCAT2 (Ile metabolism), MGL (Met catabolism), and LKR/SDH (Lys metabolism). The second group exclusively regulated Met metabolism and was termed the 'Met metabolism group'; it contained the genes AK/HSDH1 (encoding the aspartate kinase enzyme for the formation of aspartate-4-semialdehyde, the first substrate for amino acid synthesis), CGS1 (Met synthesis), DAPD (Lys synthesis), SAMS3 (Met catabolism), BCAT3 (Ile metabolism), and BCAT4, MAM1, and MAML (Met catabolism). Of the two groups related to aromatic amino acids, one contained ten genes (ASA1, ASB, TSA2, TSB1/2, and IGPS for Trp synthesis, CYP79B2 for Trp catabolism, PD for Phe synthesis, PAL1 and PAL2 for Phe catabolism, and TAT3 for tyrosine (Tyr) catabolism), while two genes (PAL3 and IGPS) were reported in the second group (Less and Galili, 2009).

The genes regulating the synthesis of enzymes that mediate the formation of amino acids and their precursors have been extensively studied in plants (Table 4; Figure 1). In A. thaliana, glutamate is formed from the precursor 2-oxoglutarate by aminotransferase enzymes, a process regulated by 44 putative genes (Liepman and Olsen, 2004). Glutamate synthase, which converts glutamine (Gln) to glutamate, is encoded by one or two genes in the chloroplast and mitochondria (Gaufichon et al., 2016). Similarly, six genes encode Gln synthetase, which converts glutamate to Gln, in A. thaliana (Forde and Lea, 2007). Glutamate is a precursor for the synthesis of arginine (Arg) and proline (Pro), using 20 enzymes encoded by about 30 genes in A. thaliana (Majumdar et al., 2016). Glutamine together with aspartate also forms asparagine (Asn) in plants through the transamination action of the Asn synthetase (AS) enzyme, encoded by the asnB gene in eukaryotes (Gaufichon et al., 2010) and the ASN gene family (ASN1, ASN2, and ASN3) in Arabidopsis (Table 4; Arabidopsis Genome Initiative, 2000). The histidine (His) synthesis pathway involves eight genes (ATP-PRT, PRATP/CH, ProFAR-I, IGPS, IGPD, HPA, HPP, and HDH) encoding eight enzymes in A. thaliana (Rees et al., 2009).
Two branched-chain amino acids, Val and Leu, are formed via the acetohydroxyacid synthase (AHAS) enzyme acting on pyruvate to produce acetolactate. The same enzyme contributes to the third branched-chain amino acid, Ile, by acting on 2-ketobutyrate, a substrate formed from Thr in the pathway converting Thr to Ile. A single gene encodes the AHAS enzyme in Arabidopsis (Singh and Shaner, 1995). The enzyme chorismate mutase (CM), encoded by three genes (AtCM1, AtCM2, and AtCM3), converts chorismate to prephenate for Phe and Tyr biosynthesis in plants (Figure 1). The formation of Trp from chorismate is regulated by three genes (ASa1, ASa2, and ASb1), along with seven putative genes (two ASa and five ASb genes), encoding the anthranilate synthase (AS) enzyme, which produces anthranilate (Table 4). This anthranilate generates Trp through five enzymes (PAT1, PAI, IGPS, TSa, and TSb) encoded by eight genes in plants (Tzin and Galili, 2010; Parthasarathy et al., 2018). Aspartate regulates the formation of four essential amino acids, Ile, Lys, Met, and Thr, also termed the aspartate-derived amino acids; five genes encode aspartate-forming enzymes in A. thaliana (Han et al., 2021). In C3 plants, including lentil, two pathways are identified for serine (Ser) formation, namely the photorespiratory and non-photorespiratory pathways in photosynthetic and non-photosynthetic tissues, respectively (Figure 1). The Ser produced in these pathways is converted into glycine (Gly) in non-photosynthetic tissues in the presence of the Ser hydroxymethyltransferase (SHM) enzyme. Ser is also used to synthesize Cys in a two-step pathway regulated by the Ser acetyltransferase (SAT) and O-acetylserine (thiol)lyase (OASTL) enzymes, encoded by five and nine genes, respectively (Howarth et al., 1997; Wirtz et al., 2004).

AMINO ACIDS IMPACT HUMAN HEALTH

Amino acids are the foundational units of proteins. Their structural conformations have unique chemical properties due to basic (amino) and acidic (carboxyl) chemical groups. Based on human nutritional requirements, amino acids have been classified in several ways, most basically as essential or non-essential. Essential amino acids are indispensable because the human body cannot synthesize them; hence, appropriate concentrations in the diet are necessary (Table 5). Non-essential amino acids, synthesized in the human body, are also called dispensable amino acids (Reeds, 2000). However, some non-essential amino acids are considered conditionally essential because their abundance in the human body declines in times of stress or sickness, and external sources are then required to maintain the necessary quantities (Fürst and Young, 2000). The role of amino acids (individually or in combination) was first studied in rats to evaluate the necessity of Lys and Trp in food sources containing gliadin proteins; this initial study documented the adverse effects of amino acid deficiency in rats (Osborne and Mendel, 1914). Building on preliminary classical studies using model organisms (Ackroyd and Hopkins, 1916; Rose and Cox, 1924), an analogy of amino acid functions and dietary requirements in humans was first established by Rose and co-workers in 1947 (Rose et al., 1947). This study played a significant role in recognizing and classifying essential and non-essential amino acids based on their impacts on human health. Amino acids perform several crucial functions in the human body, either directly or indirectly.
Amino acids have a specific role in gene expression (Oommen et al., 2005), participate in signaling pathways for activation of immune systems, have nutraceutical effects that improve health status by regulating metabolic activities (Duranti, 2006), and can be used to treat genetic disorders (van Vliet et al., 2014). Amino acids govern the epigenetic regulation of gene expression through DNA modifications. DNA modifications such as methylation and acetylation occur due to the binding of DNA to C groups (methyl, acetyl) donated by Met, His, Ser, and Gly (Oommen et al., 2005; Kouzarides, 2007). Acetylation leads to the detachment of histones from DNA, favoring its exposure and promoting the transcription process. Methylation, however, works in the reverse direction by densely packing the DNA and encouraging gene silencing (Wu, 2010). Studies also demonstrate the role of Gln in the regulation of intestinal gene expression in rats, promoting intestinal health with respect to cell growth and antioxidant activity (Wang et al., 2008). Arg supplementation in rats leads to the upregulation of gene expression, preventing oxidative stress and promoting fatty acid and glucose metabolism (McKnight et al., 2010). At the transcriptional level, amino acids regulate the activity of RNA polymerase by altering its specificity for promoters and enhancing the binding of some repressors near the non-coding sequences adjacent to the promoter region (Oommen et al., 2005). Such studies demonstrate the remarkable contribution of different amino acids to regulating gene expression. The human immune system consists of both innate and acquired immune subsystems that regulate the response and protection of the human body upon pathogen attack (Calder, 1995). The innate immune system is a natural system that activates immediately when pathogens enter the body and can only prevent the entry and initial establishment of the pathogen. It comprises the physiological barriers, monocytes, macrophages, neutrophils, basophils, natural killer cells, mast cells, platelets, and various humoral factors (Buchanan et al., 2006). However, once the pathogen evades the innate immune system and colonizes, the acquired immune system is activated to limit further pathogen progress. The acquired immune system consists of lymphocytes (T- and B-lymphocytes) that have immunological memory for invading pathogens (Calder, 2006). Human immune systems require a range of amino acids to produce immunoglobulins, cytokines, and other biomolecules to prevent diseases. Several amino acids (the branched-chain amino acids, BCAA (Leu, Ile, and Val), alanine (Ala), Gln, Ser, Pro, and Thr) regulate the proliferation of lymphocytes (Li et al., 2007). These amino acids either directly participate (Ala, Ser, and Thr) or produce signal molecules or hormones (BCAA, Gln, and Pro) to stimulate lymphocyte proliferation and create various immune responses (Li et al., 2007). Moreover, BCAAs participate in lipid metabolism (Nishimura et al., 2010) and blood glucose maintenance. In females, BCAAs also regulate blastocyst development and embryo implantation, support fetal growth via hormonal secretions, stimulate mammary gland function and lactation, and increase aspartate, Gln, and glutamate synthesis (Zhang et al., 2018).

FIGURE 1 | Pathways synthesizing various essential (green boxes) and non-essential (purple boxes) amino acids.
Met, His, Gly, and Phe regulate the synthesis of signaling molecules controlling immune responses. Individually or in combination, these amino acids control the production of immune cell signaling molecules, leading to major immunity-boosting elements such as cytokines and antibodies (Li et al., 2007). Amino acid oxidases (AAOs) derived from the L-isomers of Phe, Trp, Tyr, and Leu possess antimicrobial (Phua et al., 2012) and antitumoral functions (Lee et al., 2014). Legumes contain antinutritional compounds, including trypsin and chymotrypsin inhibitors, phytic acids, and tannins, which reduce nutrient bioavailability (Vidal-Valverde et al., 1994; Shi et al., 2017). Lentil is naturally low in phytic acid (Thavarajah et al., 2009) and contains trypsin inhibitors (3.6-7.6 units/mg protein) and tannins (1.28-3.9 mg/g; Hefnawy, 2011). Inactivation of the trypsin and chymotrypsin enzymes hampers the lysis of proteins into small peptides and ultimately affects the release of amino acids from those peptides. Tannins are phenolic inhibitors that bind to proteins via Lys or Met cross-links (Davis, 1981) and form insoluble complexes with carbohydrates (Reddy et al., 1985). In lentils, trypsin and chymotrypsin inhibitors and phytic acids are present in the seed cotyledons, whereas tannins are concentrated mainly in the seed coat (Dueñas et al., 2002). Different food processing methods, including dehulling and cooking, are recommended to reduce these antinutritional properties (Acquah et al., 2021). Dehulling effectively reduces the tannins by removing the seed coat (Goyal et al., 2009). In pulses, other common processing treatments are soaking, hydrothermal treatments (cooking and roasting), fermentation, and irradiation (Acquah et al., 2021). Soaking reduces trypsin and chymotrypsin inhibitors, phytic acids, and tannins in lentils depending on the soaking time (Shi et al., 2017). Thermal methods are recommended for denaturing trypsin and chymotrypsin inhibitors and removing tannin in lentils (Hefnawy, 2011). Fermentation and irradiation are alternative methods to reduce antinutritional compounds (Siddhuraju et al., 2002; Maleki and Razavi, 2021) but have not been widely studied in pulses.

BREEDING APPROACHES FOR PROTEIN QUALITY IMPROVEMENT

Pulse breeding programs focus on meeting the world's food demand and ensuring global food security. The primary objectives of these breeding programs are to increase yield by efficient selection from available germplasm, introduce hybrid lines, cross contrasting lines to exploit heterosis, develop biotic and abiotic stress-tolerant cultivars, and induce mutations to generate novel variability with molecular and genomic techniques. Today, most conventional pulse breeding programs employ molecular markers for traits of interest. Genetic engineering technology has demonstrated remarkable potential to modify plants for specific breeding objectives. Thus, technological advancement has broadened the scope of plant breeding to enable special-purpose breeding programs such as nutritional quality improvement programs, or nutritional breeding (Kumar et al., 2020). Conventional breeding approaches focus on improving highly heritable traits governed by a few genes. Quantitative traits with low heritability and high environmental effects, such as protein and other nutritional quality traits, do not respond significantly to selection by conventional breeding methods.
In crop plants, including pulses, protein concentration correlates negatively with yield (Qureshi et al., 2013); therefore, selecting for either trait negatively affects the other. For this reason, conventional approaches, such as mass selection, the pedigree method, and the bulk method, face challenges for protein quality improvement, but adding genetic markers into the breeding pipeline is possible. A comprehensive study comparing relative protein concentration among different lentil species identified a high-protein accession, ILWL 47, belonging to L. ervoides (Bhatty, 1986). The lentil cultivar IC317520 was identified as high in protein, sugar, and starch (Tripathi et al., 2019). These identified candidates can be used to improve protein content in cultivated lentils through hybridization-based breeding methods. Compared to selection- and hybridization-based methods, mutation breeding has successfully improved legume protein. A mutant lentil variety, NIA-MASOOR-5, with increased protein concentration, high yield, and disease resistance was created by gamma irradiation of the parent M-85 and released in Pakistan (Ali and Shaikh, 2007). Gamma-ray mutagenesis has also increased protein levels in mutants obtained from the Chiang Mai 60, SSRSN35-19-4, and EHP 275 cultivars of soybean (Yathaputanon et al., 2009). Some high-protein and low-fiber mutants were identified from gamma ray-irradiated and ethyl methanesulfonate (EMS)-treated Himso 1563 and TS 82 cultivars in soybean (Kavithamani et al., 2010). EMS also induced beneficial mutations for protein and oil content improvement in the Huayu 22 and Yueyou 45 cultivars of peanut. A high-yielding and high-protein chickpea mutant variety, Hyprosola or Faridpur-1, was also developed by gamma irradiation in Bangladesh (Oram et al., 1987). TAEK-SAGEL is another gamma radiation-derived, high-protein mutant variety of chickpea released in Turkey (Saǧel et al., 2009). Such landmark achievements of mutation breeding in pulse crops, including lentils, on a commercial scale demonstrate the success of this method for improving quality traits. Genomic-assisted breeding demonstrates broad potential for improving quantitative traits, which are highly complex, controlled by many genes, and environmentally influenced (Kumar et al., 2016a). The current genomic toolbox for breeding includes genetic marker development, linkage map construction, identification of QTL and alien introgressions, candidate gene discovery, diversity analysis, genome sequencing, and pangenome construction. The use of molecular markers to accelerate genomic developments in lentils for various traits has been reviewed widely (Kumar et al., 2015). Several legume crops, including dry pea (Pisum sativum L.), soybean, and chickpea, have been broadly investigated with genomic-assisted breeding to identify putative genomic regions governing seed protein concentration. A QTL mapping approach in dry pea revealed three genes regulating protein concentration using a linkage map of 207 markers (AFLP, RAPD, and STS markers; Tar'an et al., 2004). Another similar mapping study in dry pea using 204 markers (morphological, isozyme, AFLP, ISSR, STS, CAPS, and RAPD) identified genomic regions for seed protein concentration (Irzykowska and Wolko, 2004). Several other studies using genomic-assisted breeding in dry pea identified protein concentration-related genes (Tayeh et al., 2015). However, these studies are limited in the number of dry pea accessions used in each study and in their genome-wide comparisons.
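To give a rough sense of the marker-trait association testing that underlies the QTL and GWAS results cited in this section, the sketch below scans simulated SNP genotypes against a seed protein phenotype using a per-marker linear regression and a Bonferroni threshold. It is a didactic toy on simulated data, not the pipeline used in any of the cited studies, which apply mixed models with population-structure and kinship correction on much denser marker sets.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated panel: 200 accessions, 1,000 biallelic SNPs coded 0/1/2.
n_acc, n_snp = 200, 1000
genotypes = rng.integers(0, 3, size=(n_acc, n_snp)).astype(float)

# Simulated protein phenotype (% of seed): SNP 42 carries a true additive effect.
protein = 25.0 + 1.5 * genotypes[:, 42] + rng.normal(0.0, 2.0, n_acc)

# Single-marker scan: regress the phenotype on each SNP and keep the p-value.
p_values = np.empty(n_snp)
for j in range(n_snp):
    result = stats.linregress(genotypes[:, j], protein)
    p_values[j] = result.pvalue

# Bonferroni correction for the number of markers tested.
threshold = 0.05 / n_snp
print("significant SNPs:", np.flatnonzero(p_values < threshold))  # recovers SNP 42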
Furthermore, a restriction site-associated DNA sequencing (RAD-seq) approach identified 47,472 SNP markers in a soybean RIL population (Liu et al., 2017), and several genes for seed protein in soybean were found using transcriptome analysis, QTL mapping, and the genome-wide association study (GWAS) approach (Patil et al., 2017). A gene controlling seed size, weight, and the amino acid composition of total protein was characterized in the model legume Medicago truncatula and in soybean using PCR-based markers and transcriptome profiling (Ge et al., 2016). Likewise, extensive studies in soybean have identified several seed protein genes by exploiting genomic breeding approaches (Brummer et al., 1997; Sebolt et al., 2000; Chapman et al., 2003; Chung et al., 2003; Liang et al., 2010; Van and Mchale, 2017; Li et al., 2018; Huang et al., 2020). A high-throughput genotyping study identified 16,376 SNPs and revealed seven major genes for seed protein through a GWAS in 336 desi and Kabuli chickpea accessions (Upadhyaya et al., 2016). Such studies in legume crops demonstrate the success of marker-based genomic tools for improving protein concentration and quality. However, marker-based genomic-assisted studies identifying genic regions associated with seed protein content and quality have not yet been reported in lentils. Genetic engineering technology has provided further avenues to improve protein concentration in legumes. Protocols have been designed to develop transgenic lines in chickpea (Fontana et al., 1993), common bean (Russell et al., 1993), lupin (Molvig et al., 1997), peanut (Brar et al., 1994), pea (Schroeder et al., 1993), and soybean (Hinchee et al., 1988). Several research groups have developed transgenic soybean lines with increased S-containing amino acids (Falco et al., 1995; Dinkins et al., 2001; Guo et al., 2020). Likewise, transformation studies to improve seed protein concentration in broad bean (Montamat et al., 1999), dry pea (Tegeder et al., 2007), and French bean (Tan et al., 2008) have also been reported. Recently, the genome-editing tool CRISPR/Cas9 has emerged as a revolutionary approach to improving staple food crops, but this approach is not yet widespread in pulses other than soybean.

CLOSING REMARKS

Most lentil breeding programs worldwide focus on yield improvement, disease resistance, biotic/abiotic stress tolerance, and germplasm diversity. Lentils are a nutrient-dense superfood that can combat malnutrition and non-communicable diseases. As such, lentil protein quality has recently emerged as a target trait for lentil breeding programs due to the increased demand for plant-based protein. Conventional breeding is progressing toward nutritional improvement of the lentil crop, but genomic approaches are essential to speed up the breeding process given the quantitative nature of these traits. Genome-wide association studies combined with conventional plant breeding approaches are appropriate for improving the genetic gain of quantitative traits by increasing selection accuracy through indirect selection (Rutkoski, 2019). For example, genetic gain for lentil protein concentration can be increased by selecting diverse parents, raising selection intensity and accuracy, and shortening the selection cycle by increasing the number of generations per year (see the sketch below). Conventional methods like pedigree, bulk, and mutation breeding can develop new breeding material using wild species, cultivars, landraces, advanced/elite breeding lines, and genetic stocks (Figure 2).
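The levers just mentioned, parental diversity, selection intensity, accuracy, and cycle length, map directly onto the classical breeder's equation for genetic gain per unit time, gain = (i × r × sigma_a) / L. The short Python sketch below evaluates it with hypothetical numbers to show why raising accuracy and shortening the cycle compound; all values are illustrative assumptions, not lentil estimates.

def genetic_gain_per_year(i, r, sigma_a, L):
    """Breeder's equation: i = selection intensity, r = selection accuracy,
    sigma_a = additive genetic standard deviation, L = cycle length (years)."""
    return (i * r * sigma_a) / L

# Hypothetical comparison for a protein-like trait (all values assumed).
baseline = genetic_gain_per_year(i=1.4, r=0.5, sigma_a=1.2, L=6.0)  # long pedigree cycle
improved = genetic_gain_per_year(i=2.0, r=0.7, sigma_a=1.2, L=2.0)  # genomic selection + speed breeding

print(round(baseline, 3))  # 0.14 units per year
print(round(improved, 3))  # 0.84 units per year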
These breeding methods will generate broadly diversified germplasm that can be fed into phenotyping and genotyping platforms to enhance selection accuracy (Xu et al., 2017). However, these conventional methods do not increase selection intensity, due to low heritability, slow progression, and reliance on visual phenotypic selection (Cobb et al., 2019). Combining genomic-assisted breeding with rapid generation methods such as single-seed descent, speed breeding, and doubled haploid production will enhance selection intensity and shorten the selection cycle, resulting in increased genetic gain over time (Cobb et al., 2019; Figure 3). Future lentil breeding efforts should focus on the rapid diversification and evaluation of lentil germplasm for protein quality through conventional breeding approaches. The development and adoption of genomic resources and tools such as genetic engineering or genome editing may also contribute to the pace of conventional breeding in lentils and eventually lead to breakthroughs in lentil protein improvement programs that ensure nutritional security and improve human health.

AUTHOR CONTRIBUTIONS

SS, a doctoral student under the supervision of DT, drafted the paper objectives, wrote the first draft, and revised and edited the final version of this paper. JLB, PT, and SK edited/reviewed the final version and provided constructive revisions and edits. DT supervised SS, designed the objectives with SS, wrote parts of the paper, and edited and revised the final version. All authors contributed to the article and approved the submitted version.

FUNDING

Funding support for this project was provided by the Organic Agriculture Research and Extension Initiative (OREI; award
2022-04-05T13:36:29.045Z
2022-04-05T00:00:00.000
{ "year": 2022, "sha1": "715df873ca705d9cac6d2cb058bd58390c2407e4", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "715df873ca705d9cac6d2cb058bd58390c2407e4", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
216215066
pes2o/s2orc
v3-fos-license
The efficacy of clinical pathway in gastric cancer surgery

Objectives: Clinical pathways are useful tools for surgical quality improvement and better peri-operative clinical outcomes for patients undergoing major surgery. This study aimed to evaluate the influence of a clinical pathway on early postoperative outcomes for gastric cancer patients.

Material and Methods: The study was designed as a retrospective cohort observational study. Patients who had undergone curative gastrectomy for gastric cancer were evaluated using the prospectively maintained gastric cancer database. The patients were divided into two groups based on the date when the clinical pathway was first used: the control group (May 2015-May 2016) and the clinical pathway group (June 2016-December 2017). Early postoperative outcomes, including the length of hospital stay, the day of starting a diet, and 30-day complications including reoperation and operative mortality, were compared after propensity score matching.

Results: A total of 101 patients were analyzed, and the data of 70 patients (35 patients in each group) were compared after matching. The clinical pathway group demonstrated shorter hospital stay, earlier nasogastric tube removal, and earlier start of a liquid/soft diet. The overall complication rate was lower in the clinical pathway group, while there was no statistically significant difference in major complication rates. No statistically significant difference was observed between the groups in terms of reoperation and operative mortality.

Conclusion: Clinical pathway may shorten the postoperative length of hospital stay and reduce the overall complication rate without increasing major morbidity in patients undergoing elective gastric cancer surgery.

INTRODUCTION

Hospitals, which are complex organizations consisting of many interconnected actions, are designed for patient-centered and effective healthcare (1,2). After many years of management based on traditional concepts, total quality management has become a new paradigm in healthcare organizations (3,4). Various strategies, such as enhanced recovery, outcome management, and integrated care pathways, can be used as part of total quality management (5). Clinical pathways (CP), which are standardized comprehensive management systems, are useful tools for surgical quality improvement, designed to improve peri-operative outcomes such as hospital stay, morbidity, and cost (6,7). The effectiveness of CP for cardiothoracic, liver, and bariatric surgery has been shown in recent studies (7-10). Gastric cancer is one of the major causes of cancer-related deaths, and surgical resection is the only treatment option for the majority of patients (11,12). However, gastrectomy for gastric cancer remains a high-risk procedure with significant morbidity and mortality (13,14). Clinical pathways have also been used for gastric cancer surgery, and studies have demonstrated improvements in peri-operative outcomes (15-18). The Enhanced Recovery After Surgery (ERAS) protocol is an evidence-based model of a standardized clinical pathway system, which a recent meta-analysis considered safe and effective for gastrectomy in gastric cancer patients (19). In addition, consensus guidelines for enhanced recovery after gastrectomy have been published by the ERAS society (20). However, the majority of the evidence regarding enhanced recovery pathways originates from studies conducted in Far Eastern countries.
Convincing evidence from Western patient populations is limited, and thus the feasibility of clinical pathways in all gastric cancer patients, particularly in developing countries, remains controversial. A clinical pathway system, as part of a quality improvement program for gastric cancer patients, was implemented in June 2016 in a tertiary center in Turkey. In the present study, the influence of the clinical pathway on early postoperative outcomes for gastric cancer patients was evaluated.

Patients and Data Collection

The study was designed as a retrospective cohort observational study. The prospectively maintained database of patients who had undergone surgical treatment for gastric cancer was reviewed. The CP for gastric cancer surgery was implemented in June 2016 and modified in December 2017 with the use of a checklist system. Therefore, patients operated on in this period were selected as the test group (CP group). Before the implementation of the CP, patients were managed without any specific protocol, and these patients were selected as the historic control group (control group). Patients operated on before May 2015 were excluded to decrease the risk of experience bias. Signed informed consent was obtained from all patients prior to surgery. Ethics permission for the study was obtained from the ethics committee (2019/177). All consecutive patients who had undergone gastrectomy for gastric malignancy between May 2015 and December 2017 were evaluated. Exclusion criteria were: (1) patients who did not have gastric resection, (2) patients who only had a palliative procedure, including bypass or palliative resection, (3) patients with distant metastasis, (4) patients requiring thoracotomy, (5) emergency surgery, and (6) patients who had a malignancy other than adenocarcinoma. All data were retrieved from the electronic database developed in 2013 for patients who underwent upper gastrointestinal cancer surgery. The following data regarding patient demographics and clinical characteristics were extracted: age, sex, body mass index (BMI), American Society of Anesthesiologists (ASA) score, history of previous abdominal surgery, smoking habits, hemoglobin level, albumin level, tumor size, histologic differentiation, type of gastrectomy, type of lymphadenectomy, tumor location, presence of neoadjuvant treatment, pathological stage, and total number of removed lymph nodes. Surgical principles were in accordance with the Korean and Japanese gastric cancer treatment guidelines (21,22). D2 lymphadenectomy for advanced gastric cancer and D1+ lymphadenectomy for early gastric cancer were the standard approaches, while D1 lymphadenectomy was seldom used, and only in high-risk patients (23). Tumors were staged according to the 8th edition of the American Joint Committee on Cancer Staging System (24,25). Outcome measures were the length of hospital stay, the day of nasogastric tube removal, the day of starting sips of water (SOW), the day of starting a soft diet, the day of drain removal, the 30-day complication rate, the 30-day reoperation rate, and operative mortality. Adverse events occurring within 30 days after surgery or within the hospitalization period were considered postoperative complications. Complications were classified according to the Clavien-Dindo classification system (26). Complications classified as grade 3 or higher were defined as major complications. Mortality that occurred within 30 days after surgery or during initial hospitalization was defined as operative mortality.
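Because these outcome definitions are stated precisely, they translate directly into simple predicates. The Python sketch below encodes them; the Clavien-Dindo cutoff and the 30-day windows come from the text, while the class and function names are hypothetical.

from dataclasses import dataclass

@dataclass
class PostoperativeEvent:
    clavien_dindo_grade: int       # grades 1-5 per the Clavien-Dindo system
    days_after_surgery: int
    during_index_admission: bool

def is_postoperative_complication(e):
    """Adverse events within 30 days of surgery, or at any point during the
    index hospitalization, count as postoperative complications."""
    return e.days_after_surgery <= 30 or e.during_index_admission

def is_major_complication(e):
    """Grade 3 or higher is defined as a major complication."""
    return is_postoperative_complication(e) and e.clavien_dindo_grade >= 3

print(is_major_complication(PostoperativeEvent(3, 21, False)))  # True
print(is_major_complication(PostoperativeEvent(2, 10, True)))   # False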
Clinical Pathway for Gastric Cancer Surgery

The CP for gastric cancer surgery was initially developed according to the current evidence on CP and the published ERAS protocol for gastric cancer surgery, and was modified based on institutional facilities and personal experience (20). The CP is summarized in Table 1. In brief, we divided the peri-operative process into three main periods. The first period (preoperative preparation) starts when the patient's initial diagnosis of gastric cancer is established and is primarily focused on confirming the indication for surgical treatment, optimizing chronic diseases, nutritional counseling, and patient/family education. The second period (operative period) starts with the patient's admission for surgery, typically one day before the scheduled operation date, and ends when the patient returns to the ward after surgery. Confirming the completeness of the preparation and the surgical procedure are the main elements of the second period. During the operation, an intra-abdominal drain and a nasogastric tube are routinely used regardless of the gastrectomy type. Because the majority of the patients had advanced gastric cancer or tumors requiring total gastrectomy, the laparoscopic approach was seldom used in the study period, and only for early gastric cancers requiring distal gastrectomy (23). The third period (postoperative care) primarily focused on postoperative care and ended when the patient was discharged from the hospital. Discharge criteria were: adequate mobilization, adequate pain management with oral analgesics, the patient's willingness to be discharged, no fever, the ability to eat a soft diet, no vomiting/nausea, and no major complication. One week after discharge, all patients were invited to the outpatient clinic for early follow-up. The written clinical pathway was distributed to the surgical team members responsible for patient care, and they were educated on the items of the path. Before the implementation of the CP, there was no specific written protocol on items such as nutritional counseling, postoperative diet instructions, catheter removal, drain removal, and discharge criteria. Patients having gastrectomy were managed traditionally by the members of the surgical team. All surgical procedures during the study period were carried out by the same upper gastrointestinal surgeon.

Statistical Analysis

Continuous variables were presented as mean ± standard deviation for parametric distributions and as median (1st-3rd quartile) for nonparametric distributions. The chi-square test or Fisher's exact test (the latter when expected frequencies were ≤ 5 in at least 20% of cells), Student's t-test, and the Mann-Whitney test were used for comparing the groups based on the type and characteristics of the data. All p values were two-sided, and statistical significance was defined as p< 0.05. R software (R Foundation for Statistical Computing, Vienna, Austria) with the required packages was used for statistical analyses. To reduce selection bias, the "MatchIt" package with nearest-neighbor 1-1 matching was used to conduct a propensity-score matching analysis; a simplified sketch of this procedure is shown below. Age, sex, albumin level, pathological stage, ASA score, and type of gastrectomy were used as covariates.

RESULTS

A total of 147 patients underwent surgery due to gastric cancer during the study period. After the application of the exclusion criteria, 101 patients were included in the analysis.
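The authors performed the matching in R with MatchIt; for readers who want the gist of nearest-neighbor 1:1 propensity-score matching, here is an illustrative Python sketch. The greedy matching rule and all variable names are simplifying assumptions rather than the exact MatchIt algorithm, and the covariates are simulated.

import numpy as np
from sklearn.linear_model import LogisticRegression

def nearest_neighbor_match(X, treated):
    """Greedy 1:1 nearest-neighbor matching on the propensity score.
    X: covariate matrix (age, sex, albumin, stage, ASA, gastrectomy type).
    treated: boolean array, True for clinical-pathway patients.
    Returns (treated_index, control_index) pairs, matched without replacement."""
    # Step 1: estimate each patient's probability of being in the CP group.
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    controls = list(np.flatnonzero(~treated))
    pairs = []
    # Step 2: give each treated patient the closest remaining control.
    for t in np.flatnonzero(treated):
        if not controls:
            break
        j = min(controls, key=lambda c: abs(ps[c] - ps[t]))
        pairs.append((t, j))
        controls.remove(j)
    return pairs

# Toy usage mirroring the study's group sizes: 66 CP vs. 35 control patients.
rng = np.random.default_rng(1)
X = rng.normal(size=(101, 6))
treated = np.arange(101) < 66
print(len(nearest_neighbor_match(X, treated)))  # 35 matched pairs

With more treated patients than controls, 1:1 matching without replacement retains one control per matched treated patient, which is how a 101-patient cohort yields 35 patients per group.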
Among them, thirty-five patients were managed with the traditional approach (control group), and sixty-six patients were managed with the clinical pathway approach (all-CP group). Propensity score matching generated a sample of 70 patients (35 patients in the control group and 35 patients in the matched-CP group).

Comparison of Baseline Characteristics Between the Groups

The comparison of baseline patient demographics is presented in Table 2. In the non-matched analysis, there were no statistically significant differences between the control group and the all-CP group concerning sex, BMI, ASA score, history of previous abdominal surgery, smoking status, and hemoglobin levels. However, the all-CP group tended to be older (not statistically significant) and had higher albumin levels (p= 0.049). In the matched analysis, there were no statistically significant differences between the control group and the matched-CP group concerning baseline patient demographics. The comparison of oncologic and surgical factors is presented in Table 3. In the non-matched analysis, there were no statistically significant differences between the control group and the all-CP group concerning tumor size, histological differentiation, type of gastrectomy, tumor location, neoadjuvant chemotherapy, pathological stage, and the total number of removed lymph nodes. There was a statistically significant difference in the type of lymphadenectomy: D2 lymphadenectomy was performed more frequently in the all-CP group than in the control group (p= 0.047). In the matched analysis, there was no statistically significant difference between the control group and the matched-CP group concerning the type of lymphadenectomy or the other factors. Baseline demographics, oncological factors, and surgical factors were well balanced between the control group and the matched-CP group.

Comparison of Postoperative Outcomes Between the Control Group and the Matched-CP Group

Postoperative clinical outcomes are presented in Table 4. A significantly shorter hospital stay (median 11 days vs. 9 days, p< 0.001), earlier nasogastric tube removal (median 4 days vs. 2 days, p< 0.001), and shorter times from surgery to first SOW (median 4 days vs. 4 days, p< 0.001) and to soft diet (median 5 days vs. 5 days, p= 0.013) were observed in the matched-CP group compared to the control group. There was no statistically significant difference between the control group and the matched-CP group concerning time to drain removal (median 6 days vs. 6 days, p= 0.851). The overall complication rate was lower in the matched-CP group, while there was no statistically significant difference in major complication rates. Sixty percent of the patients in the control group and 31.4% of the patients in the matched-CP group experienced complications (p= 0.016). Only one patient (2.9%) in the control group and two patients (5.7%) in the matched-CP group experienced major complications. In addition, although there was no difference in the overall distribution of complication grades, 20 patients (57.1%) in the control group and nine patients (25.7%) in the matched-CP group experienced grade-I or grade-II complications (p= 0.007). Neither anastomotic leakage nor bleeding was observed in the study population. Major complications were as follows: a patient from the control group experienced right pleural effusion following extended total gastrectomy, and tube thoracostomy was required.
A patient from the matched-CP group was readmitted to the hospital after discharge (on the 21st postoperative day) with acute mechanical small bowel obstruction. An adhesive band was found during surgery; the problem was solved with adhesiolysis, and the patient was discharged three days after reoperation. One other patient from the matched-CP group (with no surgery-related complication) experienced operative mortality on the 4th postoperative day due to cardiac arrest. No statistically significant difference was observed between the groups in terms of reoperation and operative mortality.

DISCUSSION

The presented study investigated the influence of implementing a clinical pathway for patients undergoing elective gastric cancer surgery. Although both groups were comparable in terms of clinically relevant baseline characteristics, propensity score matching was used to decrease potential selection bias. Patients in the clinical pathway group demonstrated shorter hospital stay, earlier removal of the nasogastric tube, and shorter time to diet, while there was no difference in drain removal time. Using a clinical pathway was also associated with a lower overall complication rate without an increase in major complications. Although the concept of peri-operative intervention goes by different names, such as ERAS, fast-track, critical pathway, and clinical pathway, the primary purpose is to optimize the patient in the preoperative period, to reduce the metabolic stress resulting from surgical trauma during the operation, and to return the patient to normal life as soon as possible (27,28). Early studies of enhanced recovery protocols for gastrectomy in gastric cancer started in Far Eastern countries, where early-stage cancers constituted the majority of cases (16,29). In subsequent studies, the implementation of various protocols by each institute has made the standardization of enhanced recovery problematic. In 2014, the first comprehensive and evidence-based framework recommendations were published (20). A total of 25 items, 8 of which were procedure-specific, included different recommendation grades with different evidence levels. While deciding on the clinical pathway in our practice, we used institutional factors and personal experiences in addition to the available evidence and recommendations. Most of the general (non-procedure-specific) items were included in our clinical pathway, except for the items related to anesthesia. Among procedure-specific items, preoperative nutrition (strong recommendation), preoperative oral immunonutrition (weak recommendation), and systematic audit (strong recommendation) were included in our clinical pathway. However, we took a selective approach to some crucial elements of ERAS, such as the use of laparoscopic surgery (strong recommendation for early gastric cancer requiring distal gastrectomy, weak recommendation for advanced gastric cancer and total gastrectomy), selective use of nasogastric decompression (strong recommendation), avoiding the use of an abdominal drain (strong recommendation), and very early initiation of diet (weak recommendation). Surgical dogmas, as well as personal experiences, are likely to have affected this selective approach, even for highly experienced gastric cancer surgeons (30). The implementation of a novel approach has always been slowed by surgical dogmas, but change has eventually taken hold. We believe that all essential items may be included in the clinical pathway as evidence and experience increase.
One of the most important goals of the clinical pathway concept is shortening hospital stay, and a shorter hospital stay was demonstrated in the presented study (median 11 days vs. 9 days, p< 0.001). Many factors, such as the defined discharge criteria in the clinical pathway group, earlier removal of the nasogastric tube, earlier initiation of oral food intake, and fewer complications, may have contributed to this shortening. Shortened hospital stay has been demonstrated in randomized studies evaluating the feasibility of enhanced recovery programs in gastric cancer patients. In the first randomized controlled trial, the median hospital stay was six days in the fast-track protocol group, while the conventional group had an 8-day length of hospital stay (p< 0.001) (29). In a subsequent randomized trial, a shorter hospital stay was also demonstrated in the enhanced recovery group (median 10 days vs. 9 days, p= 0.037) (31). In addition, in a recent study from the United States, the ERAS group demonstrated a shorter hospital stay with a mean difference of 2.3 days (mean 7.8 ± 3.6 days vs. 5.5 ± 2.0 days, p= 0.010) (18). The only study showing that an ERAS program did not affect the length of hospital stay was published in Japan in 2012 (32). However, as the authors indicated, this result was probably due to the item "normal laboratory data on POD 7", which was included in the discharge criteria. Although the median 9-day hospital stay in the presented study, which included mostly stage-III patients, is comparable to previous reports, we believe that this period may be shortened further by modifying the criteria as experience increases. The biggest concern of surgeons regarding the implementation of an enhanced recovery program is the possibility of increased complication rates. However, to date, no increase in complication rates has been reported, either in ERAS studies or in studies of specific ERAS items. On the contrary, fewer complications have been observed in enhanced recovery groups compared to conventional groups (31). In the presented study, the Clavien-Dindo classification system was used to define the severity of the complications, and a decrease in the overall complication rate was demonstrated (60% vs. 31.4%, p= 0.016). While there was no significant difference between the two groups in major complications, the difference in grade I/II complication rates (57.1% vs. 25.7%, p= 0.007) likely drove this improvement. Patients were better optimized for surgery with the help of preoperative items such as nutritional support, breathing physiotherapy, and patient education on the process. In addition to better patient optimization, postoperative care items such as early mobilization may explain the decrease in non-major complications. When creating the presented clinical pathway protocol, not only the enhanced recovery items but also surgical safety issues, as part of surgical quality improvement, were considered. In the "Optimal Resources for Surgical Quality and Safety" manual published by the American College of Surgeons in 2017, physician-led, team-based care was emphasized, and surgical care was divided into five phases (33). Four of these phases were present in the presented clinical pathway; only the items for the post-discharge period were not included. In the future, the creation of systems using various tools, covering not only enhanced recovery items but also items from all phases of peri-operative care, will help us develop an ideal patient care program.
While implementing a peri-operative patient care program is more feasible in developed countries such as the United States and Japan, in developing countries like Turkey there is still a way to go. However, the presented study showed that better outcomes could be achieved by integrating evidence-based models into practice. The presented clinical pathway protocol has some points that need to be improved. Most importantly, we used a surgeon-led structure, with the support of the members of the surgical team, ward nurses, and surgical residents, to develop the protocol. However, an ideal clinical pathway should be designed by a multidisciplinary team that includes anesthesiologists, dieticians, and physiotherapists. Anesthesia-related items, which are a significant shortfall in the presented pathway, can only be addressed with a multidisciplinary approach. Another point that needs improvement is greater use of the laparoscopic approach. The evidence on the feasibility and oncological safety of laparoscopic surgery in advanced gastric cancer is still awaited and will possibly be integrated into the algorithm in the near future (34,35). The presented study has an unavoidable selection bias, which is one of the main limitations of retrospective studies. Although patient characteristics such as age, sex, stage, and type of gastrectomy were comparable in both groups, lymphadenectomy and albumin levels were different. Therefore, propensity-score matching was used to reduce selection bias, and ultimately, appropriately comparable groups were obtained. Another possible limitation of comparison with a historical cohort is experience bias, although a single surgeon performed all surgeries. Early-period records were excluded to decrease this bias, and the study was limited to a narrow period. Despite these limitations, the presented study supports the contribution of a clinical pathway to enhanced recovery in patients undergoing gastric cancer surgery in a developing country. Multidisciplinary, multicenter studies that include outcome measures such as cost analysis, compliance rates, patient experiences, and quality of life have more potential to demonstrate the effectiveness of enhanced recovery programs.

CONCLUSION

Clinical pathways can safely be implemented for patients undergoing elective gastric cancer surgery. Using a clinical pathway may shorten the postoperative length of hospital stay and reduce the rate of complications without increasing major morbidity.
2020-04-02T09:31:18.375Z
2020-03-01T00:00:00.000
{ "year": 2020, "sha1": "38a3db2ce6b07327d137da26e3e653cf4983aa95", "oa_license": "CCBYNC", "oa_url": "https://turkjsurg.com/full-text-pdf/1693/eng", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "54bf6ad3a2d37a8814a3ab7f7a58856559bd364e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251271733
pes2o/s2orc
v3-fos-license
Women-Reported Barriers and Facilitators of Continued Engagement with Medications for Opioid Use Disorder

Opioid-related fatalities increased exponentially during the COVID-19 pandemic and show little sign of abating. Despite decades of scientific evidence that sustained engagement with medications for opioid use disorders (MOUD) yields positive psychosocial outcomes, less than 30% of people with OUD engage in MOUD. Treatment rates are lowest for women. The aim of this project was to identify women-specific barriers and facilitators to treatment engagement, drawing from the lived experience of women in treatment. Data are provided from a parent study that used a community-partnered participatory research approach to adapt an evidence-based digital storytelling intervention for supporting continued MOUD treatment engagement. The parent study collected qualitative data between August and December 2018 from 20 women in Western Massachusetts who had received MOUD for at least 90 days. Using constructivist grounded theory, we identified major themes and selected illustrative quotations. Key barriers identified in this project include: (1) MOUD-specific discrimination encountered via social media, and in workplace and treatment/recovery settings; and (2) fear, perceptions, and experiences with MOUD, including mental health medication synergies, internalization of MOUD-related stigma, expectations of treatment duration, and opioid-specific mistrust of providers. Women identified two key facilitators to MOUD engagement: (1) feeling "safe" within treatment settings and (2) online communities as a source of positive reinforcement. We conclude with women-specific recommendations for research and interventions to improve MOUD engagement and provide human-centered care for this historically marginalized population.

Introduction

Opioid use in the U.S. has resulted in unparalleled rates of accidental injury, infectious disease (e.g., HIV, Hepatitis C), and premature death [1]. Opioid-related fatalities increased exponentially during the COVID-19 pandemic and show little sign of abating [2]. Some factors influencing opioid fatalities include social isolation; increased mental health issues; reduced access to treatment and services; and stress related to economic, social, and other factors. Prior to COVID-19, concerns specific to women included increasing rates of heroin use, slower decreases in rates of prescription opioid misuse [3], and increasing deaths associated with synthetic opioids and heroin [4]. Women with OUD currently remain a growing population of concern [5-7]. Sustained engagement with medications for opioid use disorder (MOUD) (e.g., buprenorphine, methadone, naltrexone) results in reduced mortality [8], lowered opioid use [9], fewer infectious disease risks [10], reduced engagement with the criminal justice system, and other positive outcomes [11]. The COVID-19 pandemic prompted innovations to reduce treatment barriers. For MOUD, this included relaxing regulations regarding take-home doses of methadone, expanding access to telemedicine, and allowing buprenorphine to be prescribed via telemedicine appointments [12,13]. These innovations show great promise for expanded access to treatment for people with OUD. However, we know that the vast majority of people with OUD still do not access treatment [1]. According to 2019 estimates, less than 30% of individuals needing OUD treatment received MOUD [14].
Of those who do enter care, few remain engaged with MOUD long enough to achieve lasting recovery [15]; treatment rates are lower for women than for men [16,17]. Closing this treatment gap is paramount. Factors that contribute to low rates of treatment engagement among women with OUD include high rates of trauma and sexual exploitation [8], mental health comorbidities [18,19], chronic pain [8,19,20], socioeconomic vulnerability, and housing insecurity [21,22]. Furthermore, women with substance use disorders face harsh stigma related to social expectations around maternal health and caregiving roles [23-25]; fears related to loss of child custody are substantial [26]. Finally, compared to men, women experience heightened feelings of OUD-related shame and other internalized self-deprecating thoughts [8,27,28], resulting in fractured relationships [29] that leave women with little family and social support to remain engaged with treatment [18]. Prior research on women and substance use points to important treatment facilitators. These include trauma-informed and gender-specific treatment programs [16,26,30], positive therapeutic alliances [24], and positive social support [18,31]. Despite these insights into the women-specific causes and outcomes of substance use treatment engagement, we still know relatively little about the social and contextual factors that influence why or how women who do access MOUD remain engaged with it over time [24,32]. This article explores this knowledge gap, focusing on women-reported barriers, as well as facilitators, to MOUD engagement. Drawing from qualitative findings, we share critical context on the social and structural factors that influence engagement with the medical system among this historically marginalized population.

Study Design and Data Collection

Data presented are a subset from an exploratory sequential mixed methods [33] study that used a community-partnered participatory research (CPPR) approach [34] to adapt an evidence-based digital storytelling intervention [29,35] for supporting continued MOUD treatment engagement. The parent study consisted of three phases: relationship building, exploratory data collection, and patient workgroup sessions (Figure 1). Data in this manuscript draw from qualitative findings related to treatment engagement that surfaced during group and individual conversations with patients regarding the adaptability and feasibility of a digital storytelling intervention during phase two of the project. Qualitative methods are best suited to generate novel findings that prompt innovations [36], which is of particular value when conducting research with a stigmatized population and topic (i.e., women and M/OUD). To elicit information for intervention development, qualitative data were collected from 20 women enrolled in two outpatient Opioid Treatment Programs (OTP) located in Western Massachusetts and operated by one of the region's largest behavioral health services providers. Both facilities are licensed to administer three FDA-approved MOUD maintenance therapies: methadone, buprenorphine, and naltrexone. Inclusion criteria were: (1) self-identified adult woman, (2) receipt of MOUD from a participating OTP for at least 90 days, and (3) no cognitive impairment that would disallow informed consent. Twenty women were recruited via flyers distributed in the OTPs, referrals from clinical staff, and participant word of mouth.
Research staff conducted in-depth interviews; participants also completed brief anonymized surveys on demographics and questionnaires related to substance use history and treatment factors (Appendix A). Participants were compensated with USD25 gift cards. Interviews were digitally recorded, professionally transcribed, de-identified, and reviewed for accuracy. All procedures were approved by the OTP's affiliated Institutional Review Board, and each participant provided written informed consent. Data were collected from August to December 2018. In-depth group interviews were conducted with 20 patients and facilitated by the first and last author. The majority of sessions were attended by an average of 2-4 participants; two sessions were conducted with one individual each. Interview sessions lasted 1.5 to 2 h and were conducted in private rooms at participating OTPs. Each interview opened with a grand tour prompt on barriers and facilitators to continued MOUD engagement. The remaining interview questions elicited information on attitudes, beliefs, and the perceived utility of using a digital storytelling intervention to increase MOUD engagement; expected intervention outcomes; and potential challenges/solutions associated with intervention pilot-testing and further development (Appendix B). In this paper, we summarize qualitative findings from the grand tour prompt on barriers and facilitators.
Data Analysis

Data analysis was guided by constructivist grounded theory [37], an iterative approach that involves simultaneous data collection and analysis; inductive code development; using "constant comparison" to compare and contrast categories; memo-writing to identify and define thematic categories and any connections between them, as well as identifying gaps; and sampling for construction of meaning, not for generalizability [37-39]. Our analytic strategy was to examine narrative content and context [40-42]. Narrative content analysis focused on women-specific paradigms of OUD found in the data at both the individual and group level. Contextual analysis focused on the perceptions and structural circumstances (e.g., historical, political, economic) that shape identity and experience [41]. Informed by standard qualitative data analysis procedures [33,43,44], the first and last authors independently reviewed transcripts and conducted open-coding, using theoretical memo-writing to identify and develop evolving themes. Next, each researcher composed a list of thematic codes derived directly from the data. Then, we reviewed and compared emerging themes collectively and iteratively to reach thematic saturation and determine the final themes [33,43]. During data analysis, we assessed the selected quotes to ensure they represented the diversity of participants and perspectives. Member checking during patient workgroup sessions in phase three of the project (Figure 1) further ensured the trustworthiness of our thematic findings. We used reflexivity to balance interpretive authority and participants' experiences and perceptions [33,44]; this included being conscious of the power dynamics associated with conducting research with patients in clinical sites. To this last point, participants stated their eagerness to share MOUD treatment experiences with research staff that they would not share with clinical staff, expressing greater comfort in discussing these topics with individuals unaffiliated with the clinic and who do not provide care.

Results

Twenty women being treated for OUD participated in the study (Table 1). Participants self-identified as white non-Hispanic (65%), Latina (30%), and African-American (5%); the mean age was 36.6 years. Sixty-five percent reported some college or a bachelor's degree. Only 20% of the sample were employed full-time, and 80% had an average annual income of <USD20,000. The mean duration of opioid use was 4.6 years; the average duration of current MOUD treatment was 2.8 years. Below, we summarize key thematic results (Table 2), highlighting barriers and facilitators to MOUD engagement. Barriers include two themes: (1) community-level social stigma and (2) fear, perceptions, and experiences with MOUD. Facilitators to MOUD engagement include (1) a sense of safety within treatment settings and (2) social media and online communities as a source of positive social support.

Community-Level Social Stigma

Women in our study commonly reported feeling unable to escape the identity of "drug user loser," a stigma that "doesn't go away until the day you die-you're always going to be 'that junkie.'" Despite positive treatment outcomes (e.g., abstaining from illicit substance use, employment, and maintaining child custody), women identified a persistent social discourse wherein successful engagement with MOUD did not guarantee an escape from gendered negative associations related to active substance use, such as sex work and child neglect.
Women reported that the social consequences of being "outed" (i.e., discovered) as a MOUD patient posed substantial barriers to continued MOUD engagement in two ways: first, MOUD directly links women to active substance use, and second, MOUD is widely perceived to be an illegitimate form of treatment. Participants shared how MOUD-related community-level social stigma operates within three spaces: workplace settings, social media, and hierarchies within treatment and recovery settings.

Workplace Settings

Workplace environments were identified as sites of discrimination, which may explain one way that being "outed" as a MOUD patient can negatively impact treatment engagement. A considerable source of anxiety was the possibility of employers learning of participants' MOUD status. In one group interview session, the general consensus was that women alternately hid, or actively lied about, how MOUD treatment policies impacted job-related behaviors. For example, women agreed they would be more likely to tell an employer their child was sick, rather than explaining that lateness was treatment related (e.g., mandatory counseling or long lines). One participant who had not experienced workplace discrimination firmly believed that if she shared her MOUD status she would not be trusted by superiors or peers. She worried that if the cash register was "short" she would be blamed for stealing the money because of assumptions linking her past illicit substance use to criminal behavior. Women in multiple interview sessions discussed their reluctance to reveal their MOUD status in the workplace due to fear of inter-employee discrimination. As one example, a woman in treatment who worked in a hospital told her peers she sometimes wants to shout "I am one of them!" when coworkers speak disparagingly about "these people, nothing but drug addicts" who enter the hospital. A second woman reported driving hours to receive treatment outside her small, rural town, because at work she hears coworkers "talking all this nasty stuff" about "drug addicts." A third participant recounted a coworker who was open about his MOUD treatment. Although he did not experience negative repercussions related to his employment status, "everyone judged him behind his back." As a result, that participant was adamant she would "never" tell anyone at work about her MOUD status.

Social Media

In one group interview, women identified social media as a site of "constant debate" over whether OUD is a "disease or . . . a choice" and a space where the validity of MOUD as a legitimate medical treatment is publicly questioned. For example, the women shared examples of memes they perceived as derogatory. The caption of one image they described read: "when you're on Suboxone you're not really clean-it's like the government is legal drug-dealing." Over "400 comments" were posted in response, predominantly reinforcing this messaging. Another meme was "laughing about 'when your girl says she's clean' and then it shows a picture of her walking into a methadone clinic or something," each perpetuating notions equating MOUD with active substance use. Other examples cited were memes or posts such as "people have to pay for Epi pens but Narcan® is free?" and "a person chooses to [mess] up their life and they get to get saved?"

Hierarchies within Treatment and Recovery Settings

Across all interview sessions, women shared experiences of between-women social hierarchies within treatment settings.
These hierarchies position women against each other, resulting in discrimination levied from "drug addict to drug addict." This internalized stigma manifests as a hierarchical value system within treatment and recovery spaces, which can function as an impediment to peer support and deter women from treatment engagement. Women discussed encountering such recovery hierarchies within Alcoholics Anonymous (AA) and Narcotics Anonymous (NA) chapters, some of which did not consider MOUD a legitimate component of recovery. This experience was reportedly pronounced for pregnant women and mothers, especially those living in small, rural communities. In one example, a participant, a mother, attended AA meetings in her town but introduced herself as an alcoholic to avoid judgement and receive the peer support she identified as crucial to her MOUD engagement. Although not self-identified as such, women appeared to internalize broader social stigma regarding women with OUD. Between women with OUD, recovery hierarchies were further entrenched by a predominating social stigma that links women with OUD to sex work and incarceration. According to participants in a group interview session, when people "hear you're an addict, especially heroin or cocaine, they're like '. . . she's a prostitute and she's dirty.'" Participants acknowledged that fear of these associations can minimize transparency about substance use history. At the same time, participants - who were in treatment - appeared to internalize this messaging and turn that same judgement towards women actively using. For example, during that group interview participants spoke disparagingly about women engaging in sex work in exchange for drugs, referring to them as "nasty." In that same discussion, participants in the group expressed strong opposition to providing MOUD to incarcerated individuals, referring to such programs as "outrageous" and incarcerated people as not "deserving" of MOUD. Participant critiques of such programs ("you're in jail, why are our tax dollars going to that?!" and "there should be no drugs in jail") equate MOUD with active substance use, a contradiction that appeared to go unnoticed by participants.

Fears, Perceptions, and Experiences with MOUD Pharmacotherapies

This project elicited three main concerns with regard to MOUD: (1) fear of side effects and medication synergies, (2) unrealistic expectations of treatment duration, and (3) opioid-specific provider mistrust.

Fear of Side Effects and Medication Synergies

Women identified various medication side effects and synergies, perceived or otherwise, which may negatively impact MOUD engagement. All women interviewed in the project were being treated with methadone. Two sisters who participated in one interview session reported their "ex-stepmother" was a registered nurse who believed methadone is "liquid fire that kills you from the inside out." When asked to identify side effects of methadone, three different women complained that "it eats at your bones," "your teeth are all going to fall out," and it makes women "fat." A participant with extensive tooth decay worried aloud that "if this is happening to my teeth, I can't imagine what's going on with my bones." The other women in her group interview nodded in agreement, adding that MOUD side effects related to physical attributes, such as weight gain and visible tooth decay, contributed to low self-esteem.
Lastly, one woman receiving pharmaceutical treatment for both OUD and mental health comorbidities raised concerns related to perceived synergies between mental health pharmacotherapies and MOUD. Women in her group agreed that common concerns regarding medication synergies included "nodding off" (i.e., increased fatigue) and dampening of MOUD efficacy.

Expectations of Treatment Duration

When discussing patient-provider communication and education during one group interview session, participants identified frustrations related to unrealistic expectations of treatment duration. Upon enrollment, patients are reportedly told: "we're going to get you on a steady dose and wean you off within six months." Yet the average treatment duration among participants was nearly three years. Further, the prospect of long-term engagement with MOUD was noted to be a source of considerable discouragement. As one participant illustrated: "Every story ends up being somebody getting off [MOUD] and being great - for a very short period of time. I've never heard of it, actually. Of someone coming off of [MOUD] ever. I've never." Compounding unrealistic expectations of treatment duration was a purported lack of information for women to make an informed choice between MOUD pharmacotherapies (e.g., methadone, buprenorphine, or naltrexone). In a group interview discussion about medication, women interested in switching to buprenorphine expressed frustrations about feeling "stuck" on methadone treatment, expressing irritation with reported difficulties in tapering methadone doses. As one participant put it: "Honestly, if I had the money, I would do pills for three months and then go to detox for pills . . . It's five times worse to come off methadone." Other participants in the group felt vexed that it can take "thirty to ninety days to withdraw from methadone . . . That is months of being sick. That is insane. If I had known that, I would never have gotten on it. Ever."

Opioid-Specific Provider Mistrust

A barrier to MOUD engagement identified in this project was increased mistrust of providers due to the iatrogenic impact of provider prescribing practices on the opioid crisis. In one group interview, comments centering blame on physicians were common; for example, "I would say at least 50% of us, it (prescription opiate) was given to us by a doctor and that's how it started." One woman additionally expressed a mistrust of providers related to conspiracy theories centered on unethical relationships between providers and the pharmaceutical industry. Agreeing with her, another woman in the group stated she was hesitant to believe that MOUD was the best treatment option "because on some end [the doctor] is also a pusher, a dealer. I'm coming here to get my fix and they're going to benefit [by getting] X amount of dollars for each person that comes here."

Facilitators to Treatment Engagement

In addition to the above findings on barriers to MOUD treatment engagement, we also report on women-identified facilitators of MOUD treatment engagement. Facilitators identified included feeling a sense of safety within treatment settings and online communities as a source of support and encouragement.

Sense of Safety within Treatment Settings

Feeling safe within treatment settings was cited as paramount for participants to remain engaged with MOUD treatment protocols, especially for those with self-identified trauma histories.
Simple acts of kindness from clinicians and staff fostered a sense of safety and loyalty to a particular treatment setting. For this highly stigmatized population, having someone remember their name or share a smile left participants feeling "shocked" yet valued. Examples of positive provider encouragement included "doctors where as soon as they found out I was on [MOUD], they give me high fives and they're like, that's amazing!" and a clinician who told one participant "I've learn[ed] a lot of things from you . . . I learn from you, you learn from me. We learn from each other." The integration of peer workers (i.e., "recovery coaches") into the treatment setting was identified as critical for fostering a sense of safety. Peer workers were described as being an important source of empathetic and relatable support grounded in shared lived experiences. During one group interview, women recounted feeling "frustrated" when assigned to counselors who compared heroin to "sugar or soda" or suggested going "for a jog around the block" as an alternative to "using." During this session, women resoundingly reported wanting to interact with a peer worker - someone who "shares the struggle," because "[i]t's not as easy as people think it is." On-site peer workers reportedly represent a form of "hope," in part due to their sustained recovery that is a requisite of the job. Women reported aspiring towards becoming certified as peer workers as a meaningful way to "give back" to others once they achieved treatment stability.

Support from Online Communities

Although social media was a reported source of discrimination, participants simultaneously reported positive social connection and support from online communities of people enrolled in MOUD treatment. In one group interview, one woman shared that of her 250 Facebook friends, the majority were "people in recovery." A second woman described a "before-and-after" post where people were sharing side-by-side images of themselves in active addiction versus MOUD treatment. Comments on those images were largely supportive, "like, 'how amazing, you're doing so great . . . you look 10 times better!'" The value of these groups was in feeling less alone - knowing "you're not the only one [enrolled in MOUD treatment]."

Discussion and Implications for Practice

Our findings on community-level sources of social stigma illustrate moral models of addiction that remain entrenched in society. Women universally shared that a barrier to MOUD engagement was their inability to escape their past identity as a substance user, regardless of gains made through treatment. Furthermore, MOUD-based stigma highlights persistent notions of MOUD as a form of active substance use, despite medical guidelines to the contrary [45]. In keeping with the literature, the stigma experienced by women was largely due to gendered associations between OUD, sex work, and parental neglect [23,46]. Women expressed substantial fears associated with consequences of being "outed" as a person in MOUD treatment, and their intent to "pass" as "normal" by maintaining some level of secrecy. Our findings on the existence of hierarchies within treatment settings and recovery communities that can be discriminatory align with the existing, yet scant, literature [47]. Hierarchies can function as an impediment to peer support and deter women from treatment engagement.
Considering the range of hierarchies [48,49] that may be present between women in clinical- and community-based treatment settings may offer important direction for future iterations of women-specific MOUD treatment programming. To be trauma-informed [50], women-specific treatment programs should address relational factors that impact women [51-53], and incorporate guidelines for skill-building to improve communication so these groups can be sources of positive social support. Employment can be an important contributor to sustained recovery and MOUD engagement. Employment is particularly critical for women with OUD, who experience higher rates of socioeconomic insecurity compared to men [53-55]. Additionally, employment can be tied to custody requirements for mothers with OUD [56]. Our results identified workplace environments as sources of discrimination. This finding coincides with emerging studies [57] and suggests a need for anti-stigma interventions and education outreach in the workplace, as well as potential collaborations between treatment and workplace settings (e.g., transportation access and shift flexibility). More research is needed to understand how workplace MOUD-based discrimination may impact men and women differently. An important discovery was inter-employee workplace stigma in medical settings, which may reinforce fears associated with provider stigma [58] and deter treatment engagement. As such, it may be useful to consider people with OUD as a distinct cultural group, i.e., a group whose members are exposed to different forms of discrimination that, in turn, contribute to health inequities. Following this idea, concepts from cultural humility [59,60] and structural competency [61] may offer strategies to address these barriers to MOUD engagement. Our findings on social media as a site of community-level discrimination are valuable, given its omnipresence in today's society. Suicide prevention and other public health efforts have recognized that hopeless media depictions of high-risk individuals may increase suicide attempts [62]. Similarly, condemning people seeking MOUD on social media may deter those considering treatment. The women-specific impact of social media messaging on MOUD engagement is an underexplored and important avenue for research. Although social media was identified as a site of discrimination, it was simultaneously a venue where women sought and received encouragement for sustained treatment and recovery. Studies have begun to explore ways people use online platforms for opioid-related help seeking [63-65]. Social media and online platforms or forums may have potential to facilitate sustained MOUD engagement. As such, mobile health interventions that leverage these platforms for positive reinforcement and social support may be important community-based complements to clinical treatment protocols. Given increased social isolation, mental health concerns, and substance use risk associated with the COVID-19 pandemic [7], this is a particularly timely avenue to explore. Additionally, health communication efforts that promote "success stories" [29,66] related to long-term treatment engagement could reduce MOUD-related stigma at the community and individual level by normalizing MOUD and promoting positive benefits associated with sustained MOUD engagement. Women in our study had internalized messaging that MOUD is "substituting one drug for another" or has harmful health impacts.
Other studies report how patients perceive methadone to be physically harmful [67-72], despite limited or mixed empirical evidence [73-77]. More research is needed to explore these relationships, and to consider the implications of gender for medication side effects that affect a woman's physical appearance (e.g., weight gain and tooth decay). Additionally, a better understanding of synergistic relationships between MOUD and mental health medications is an important line of future research. In vocalizing fears, perceptions, and experiences related to MOUD pharmacotherapies, women identified a gap in patient-provider education around expectations of treatment duration, and a need for increased shared decision making in regard to MOUD selection. Women also expressed opioid-specific distrust of providers due to prescribing practices and the opioid crisis, and subsequent perceived unethical relationships between providers and the pharmaceutical industry. People who believe in medical conspiracy theories may be less likely to adhere to recommended treatment protocols [78], making this an important issue to examine. Taken together, these findings point to opportunities for improved patient education that could be incorporated into larger health communication interventions. The extant literature on women and substance use treatment engagement primarily identifies the importance of connecting women to programs that promote safety external to treatment settings, such as community-based programs for women experiencing intimate partner violence [18]. Our findings on facilitators of treatment engagement suggest the importance of creating a sense of safety within treatment settings. Basic acts of kindness constituted "safety" for women within treatment settings, which is understandable given the vulnerabilities of daily living [18,79] and fractured social relationships experienced by women with active OUD [23-25]. Collectively, these findings can guide programmatic interventions and staff trainings to foster a sense of safety within treatment spaces. Lastly, peer workers may be best positioned to address hierarchies within treatment and recovery settings, promote safe treatment environments, and identify relevant women-specific services by offering relatable support [80-82]. Increased funding for peer workers and creating opportunities for women to pursue this certification may provide "hope" for those new to treatment [29]. Taken together, women-identified barriers and facilitators to MOUD engagement elicited in this project hold important potential for MOUD engagement and OUD outcomes among this population. Key implications for future research and interventions at the community and clinical level are summarized below (Table 3). Although some of our findings may apply to both men and women, we posit they are experienced differently and require more investigation.

Table 3. Implications for Research and Intervention.
1. Health communication efforts that promote treatment "success" stories
2. Mobile health interventions to promote group social support for treatment engagement for women with OUD
3. Outreach and education to address discrimination in workplace and online environments
4. Patient education around the concept of OUD as a chronic condition, the importance of MOUD for treating OUD, and realistic expectations for treatment duration
5. Shared decision making for MOUD selection between patients and providers
6. Interventions that assess and address social hierarchies within treatment settings
7. Understand MOUD side effects and interactions with mental health pharmacotherapies
8. Integrate peer workers into treatment settings
9. Examine and address opioid-specific provider mistrust

Limitations

Project findings are drawn from a convenience sample of 20 women enrolled in MOUD for at least 90 days. Recruitment was difficult for some group interview sessions, which resulted in occasional "no shows" and smaller groups than planned. Although small, our sample size aligns with norms in qualitative research and provides a depth of data and innovative findings [33,37]. Additionally, research suggests that longer duration of MOUD (e.g., ≥5 years) increases the likelihood of sustained recovery over the subsequent ten years [15]. Because the average length of treatment for our sample was 2.8 years, we did not distinguish factors associated with extended MOUD engagement, highlighting an area for future research.

Conclusions

We still know relatively little about why or how women who do access MOUD remain engaged with it over time. Findings presented throughout this article provide critical context on the experiences of women in MOUD treatment, and are important additions to the substance use literature. Novel barriers to treatment engagement identified by women include community-based discrimination as experienced via social media and in the workplace; internalized stigma among MOUD patients that creates hierarchies within treatment settings; opioid-specific mistrust of providers; and women-specific perceptions of MOUD side effects, synergies, and treatment duration. We close by identifying facilitators to treatment engagement, including the importance of cultivating a sense of safety in treatment settings, the value of integrating peer workers into clinical settings, and the potential benefit of social media and other online platforms. In sum, project findings identify key implications for research and interventions to promote MOUD engagement for women with OUD. As treatment access continues to expand in response to COVID-era innovations regarding MOUD, addressing project-identified barriers and facilitators collectively at the community and clinical level holds potential for innovative patient-centered care and increased MOUD engagement for women with OUD.

Substance Use

Tell us about your use of each substance. Include use that was prescribed by a medical professional and use that was not prescribed, for example as used illicitly, "on the street," or "borrowed from a friend/family member."

I'd like to tell you about "digital storytelling." The best way to do that is to show you an example. [play example] Making stories like this can have benefits for the person and for society. Today I want to ask you for your thoughts about using digital storytelling as a way to help women (1) enter and stay in treatment for their opioid problems and (2) become a part of their community.

Appendix B.1. Topic 1: Potential Benefits and Limitations of a DST Intervention

Imagine we are creating a set of digital stories as told by women receiving medications to treat opioid problems. Our goal is to have these women share their stories of resilience. For example, women might be asked to tell stories of how they overcame challenges to enter or stay in treatment.
We would want to show these stories to women who have entered treatment recently, to encourage remaining in treatment, and also to women who are not yet in treatment but could benefit from it, to encourage treatment entry.

1. What are your initial thoughts about the pros and cons of having women in treatment for opioid problems share their stories of resilience through "digital storytelling"?
a. Pros
b. Cons

Another goal is to use the stories as an opportunity for women being treated for opioid problems to integrate with their community. I define community integration like what you see on the paper in front of you. Let's go over this together. [go over info and check comprehension]

2. How might these stories be helpful, or not so helpful, for supporting physical integration of women who are _____?
a. making the story
b. sharing it with others?
Probe: For example, creating stories might create a space where women learn from each other about the availability of community resources and how and why to access them.

3. How about in relation to social integration? How might these stories be helpful, or not so helpful, for supporting social integration of women who are _____?
a. making the story
b. sharing it with others?
Probe: For example, creating stories might provide opportunities for women to have interactions with non-substance-using community members, both in-person and online.

4. How about in relation to psychological integration? How might these stories be helpful, or not so helpful, for supporting psychological integration of women who are _____?
a. making the story
b. sharing it with others?
Probe: For example, by sharing how they entered and stayed in treatment, women might recognize that they themselves are models of resilience, which might give women a new personal identity (e.g., as an "opioid survivor") and thereby increase women's self-efficacy for continuing in treatment and also create a sense of belonging in the community. Women might also receive acceptance and support from community members because of their ability to overcome opioids. Such experiences might help women to feel valued in their community, and thus contribute to social cohesion and enable women to feel more in control of their future.

5. How might it be beneficial, or not, if some of the stories were to highlight connections between treatment with medications for opioid problems and a woman's ability to integrate with the broader community?

6. For which groups of women in treatment with medications for opioid problems might these types of stories of resilience be most helpful?
a. in what ways and why?

7. For whom might the stories be least helpful, in what ways, and why?

8. How might stories of women's resilience create opportunities for learning by the treatment center staff on how to resolve barriers faced by women in treatment?
a. How about especially in relation to women's ability to "self-manage" their treatment, and cope with their opioid use disorder as a chronic illness?
b. How about in relation to women's integration with their community?
Novel methods to estimate antiretroviral adherence: protocol for a longitudinal study

Background: There is currently no gold standard for assessing antiretroviral (ARV) adherence, so researchers often resort to the most feasible and cost-effective methods possible (eg, self-report), which may be biased or inaccurate. The goal of our study was to evaluate the feasibility and acceptability of innovative and remote methods to estimate ARV adherence, which can potentially be conducted with less time and financial resources in a wide range of clinic and research settings. Here, we describe the research protocol for studying these novel methods and some lessons learned.

Methods: The 6-month pilot study aimed to examine the feasibility and acceptability of a remotely conducted study to evaluate the correlation between: 1) text-messaged photographs of pharmacy refill dates for refill-based adherence; 2) text-messaged photographs of pills for pill count-based adherence; and 3) home-collected hair sample measures of ARV concentration for pharmacologic-based adherence. Participants were sent monthly automated text messages to collect refill dates and pill counts, captured and sent as mobile telephone photographs, and hair collection kits every 2 months by mail. At the end of the study, feasibility was calculated using specific metrics, such as the receipt of hair samples and responses to text messages. Participants completed a quantitative survey and qualitative exit interviews to examine the acceptability of these adherence evaluation methods. The relationship between the 3 novel metrics of adherence and self-reported adherence will be assessed.

Discussion: Investigators conducting adherence research are often limited to using either self-reported adherence, which is subjective, biased, and often overestimated, or other more complex methods. Here, we describe the protocol for evaluating the feasibility and acceptability of 3 novel and remote methods of estimating adherence, with the aim of evaluating the relationships between them. Additionally, we note the lessons learned from the protocol implementation to date. We expect that these novel measures will be feasible and acceptable. The implications of this research will be the identification and evaluation of innovative and accurate metrics of ARV adherence for future implementation.

There is currently no gold standard for assessing ARV adherence,1 and clinicians and researchers often resort to the most feasible and cost-effective methods possible, which may result in biased or inaccurate estimates. Investigators conducting adherence research are often limited to using either self-reported adherence, a subjective and potentially overestimated measure that is prone to recall and social desirability biases,4,5 or other more complex methods, such as pharmacologic measures or electronic drug monitoring, which require expertise and financial resources. Numerous direct measurement methods (eg, quantification of concentrations of active drug or metabolites in the blood,6-8 urine,9 and hair10,11) and indirect methods (eg, patient self-report,12-14 pharmacy refill records,15,16 pill counts,12 and use of MEMS caps12,14) have been employed, but there is no consensus on the best approach to assess medication adherence. Each method has advantages and disadvantages. Additionally, most ARV adherence research to date has required the physical presence of participants at a study site to take part in studies.
This may create difficulties in recruitment and retention, given the burden of needing access to, and funds for, transportation and the time required for visits. These time and resource costs are further exacerbated when the study requires multiple visits. Furthermore, among people living with HIV (PLWH), perceived stigma or negative social consequences associated with participating in research may be important barriers to participation.17 In addition to personal inconveniences, these barriers to research participation can result in missing data and potentially biased results. In this study, we evaluated the feasibility and acceptability of 3 innovative methods to estimate ARV adherence using remote collection of data. These novel methods involve text-messaged photographs of pharmacy refill dates, text-messaged photographs of pills for pill count, and home collection of hair samples. All 3 can be conducted relatively quickly and with limited financial resources in a wide range of health care systems and research settings. Here, we describe our research protocol and lessons learned.

Study overview and design

We conducted a 6-month study to 1) assess the feasibility and acceptability of novel methods of estimating ARV adherence using text-messaged photographs of pharmacy refill dates and pill counts and home-collected hair samples; 2) examine the feasibility and acceptability of a study where all study activities, including recruitment, consent, hair sample collection, text messaging, and exit interviews, were conducted remotely; and 3) explore the relationship between ARV adherence based on self-report and the 3 novel metrics, specifically text-messaged photographs of pharmacy refill dates and pill counts, and drug levels in hair samples collected at home. Table 1 provides an overview of the study. Participants were asked to mail back home-collected hair samples at baseline, 2, 4, and 6 months using the hair collection kits sent to them by study staff. At baseline and once monthly for 6 months, participants were sent 4 text messages, referred to as the Adherence Survey, asking them to 1) rate their ARV medication adherence using a validated self-report item,18 2) text message a photograph of the refill date on the ARV medication bottle that they were using at that time, 3) text message a photograph of the contents inside the ARV medication bottle or pillbox that they were using at that time, and 4) text message the approximate date when they picked up their latest ARV refill from the pharmacy. Additionally, at baseline and at 6 months, participants were asked about any extra ARV pills that they may have (ie, stock supply of medications). Finally, at 6 months, we conducted quantitative feasibility and acceptability surveys and qualitative exit interviews. We received approval from the University of California, San Francisco (UCSF) Institutional Review Board to conduct this study, and written informed consent was obtained from all participants.

Inclusion and exclusion criteria

Adults living with HIV who met the following criteria were eligible for study participation:
1. Being on 1 ARV regimen for at least 3 consecutive months prior to participation and reporting that they were unlikely to change ARV medications in the next 6 months.

Participants who reported receiving automated refills (and thus not having any active role in receiving their next refill, through either contacting their pharmacy to generate a new refill or physically picking up their refill from the pharmacy), those who had chronic kidney disease necessitating renally dosed ARVs, and those who were unable to provide hair samples (due to baldness or other reasons, such as wearing a weave that prevented individuals from cutting hair close to the scalp) were excluded from the study.

recruitment

We advertised on online social media (ie, Facebook and Instagram) to recruit participants nationwide. Participants were offered a total of $270 for all study activities (see Section "Incentive Structure"). We spent a total of US$1,300 over 4 weeks on advertising. The mean cost per click was ~$0.36, and we had about 2,771 clicks. We narrowed our advertisements to be shown to adults (≥18 years of age) living in the USA and used key terms related to HIV; acquired immunodeficiency syndrome (AIDS); lesbian, gay, bisexual, and transgender; homosexuality; ACT UP; and same-sex marriage. In addition to the online social media advertisements, we created a Facebook page (www.facebook.com/RxPixStudy), a Twitter account (twitter.com/rxpixstudy), and a website (rxpix.ucsf.edu/) to further improve our web presence and provide additional information about the study. We sent a total of 25 emails to organizations serving PLWH nationwide to notify them about our study, posted flyers at clinics serving PLWH in the San Francisco Bay Area, and asked the UCSF Center for AIDS Prevention Studies Community Advisory Board to assist us in recruitment through email, social media, and word of mouth. Finally, through snowball sampling, we offered our participants $10 for each eligible individual they referred who consented to participate in the study.

Enrollment

Participants who emailed, called, or text messaged the study staff were given a brief description of the study, including the study's interest in collecting hair samples, to assess the participant's ability and willingness to submit hair samples. Interested individuals were then screened according to the inclusion/exclusion criteria. Those who were eligible and interested were asked for their contact information (including mobile telephone number; other telephone numbers; mailing address; email address; friend or family contact information; and social media [Facebook, Instagram, Twitter, Snapchat, and others] usernames). At this time, participants were scheduled for a 30-minute enrollment call before which they were requested to view "The RxPix Study: What to Expect" video (http://rxpix.ucsf.edu/videos) and gather all of their ARV medication containers in one place for an inventory. During the enrollment call, participants were given an opportunity to ask questions about the study activities, including the hair collection video. If they remained interested, they were then emailed a link to the consent form and baseline survey.

data collection

Qualtrics (Qualtrics, Provo, UT, USA; version March 2017), online survey software, was used to obtain informed consent, collect initial demographic and clinical data, and administer the final exit survey. The links to these surveys were emailed to participants.
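For illustration, the screening logic implied by the inclusion and exclusion criteria above can be reduced to a small predicate. This is a minimal sketch; the field names are hypothetical stand-ins for the verbal screening questions, not part of the study instruments.

```python
def is_eligible(p):
    """Apply the study's stated inclusion/exclusion criteria.

    `p` is a dict of screening answers; the keys are hypothetical.
    """
    included = (
        p["months_on_current_arv_regimen"] >= 3
        and not p["expects_regimen_change_within_6_months"]
    )
    excluded = (
        p["receives_automated_refills"]       # no active role in refills
        or p["renally_dosed_arvs"]            # chronic kidney disease
        or not p["can_provide_hair_sample"]   # eg, baldness or a weave
    )
    return included and not excluded

candidate = {
    "months_on_current_arv_regimen": 5,
    "expects_regimen_change_within_6_months": False,
    "receives_automated_refills": False,
    "renally_dosed_arvs": False,
    "can_provide_hair_sample": True,
}
print(is_eligible(candidate))  # True
```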
The baseline survey included questions regarding demographics (age, race/ethnicity, sex/gender, sexual orientation, income, and education), use of alcohol or other substances, HIV clinical outcomes (CD4+ cell count, detectability of HIV viral load), engagement in HIV care, names of ARV medications, medication adherence over the past 30 days based on the visual analog scale20 and the adherence rating scale,18 barriers to and facilitators of adherence,21,22 and familiarity with the use of technology for health care. At baseline and for months 1 through 6, we collected adherence data using text messaging. For this, we used the services of a company named Mosio, which offers text messaging software for clinical research, to automate the sending of our Adherence Surveys on a monthly basis and reminder text messages as needed. During the study, we used text messaging, telephone, and email to contact participants. The qualitative exit interviews were conducted by telephone and were audio-recorded.

Study outcomes

Study outcomes included the feasibility and acceptability of the various methods of estimating ARV medication adherence (ie, text-messaged photographs of pharmacy refill dates and pill counts and home-collected hair samples), the feasibility and acceptability of a study where all study activities were conducted remotely, and the relationship between the various methods of ARV medication adherence estimation. For feasibility and acceptability of our adherence-estimating methods and the remote research methodology, at 6 months, we conducted a quantitative survey among all study participants and a qualitative exit interview with one-third of participants who met the criteria for the following categories:

1) On Time: those who responded to the Adherence Survey and sent hair samples within the "early window period," that is, within 5 days after the Adherence Survey was sent and 11 days after the hair kit was sent (N=12);
2) Early Inconsistent Hair Samples: participants who sent 1 or more hair samples up to 6 days after the "early window period" for hair samples, that is, 11 days after the hair kit was sent (N=5);
3) Late Inconsistent Hair Samples: participants who sent 1 or more hair samples at least 7 days after the "early window period" for hair samples, or not at all (N=5);
4) Inconsistent Texts: those who at any point in the study responded to the Adherence Surveys after the "early window period" for text messages (ie, within 5 days after the Adherence Survey was sent) or did not complete the Adherence Survey at all (N=3); and
5) Least Consistent: participants who responded to the Adherence Surveys after the text message "early window period" and sent their hair samples after the "early window period," or not at all (N=6).

During these interviews, participants were asked about 1) the difficulties with each study component (text messaging, hair collection, etc.); 2) the likelihood of participating in other studies using a similar design; 3) their perceptions of the privacy and security of data; 4) their perceptions of the potential impact of study procedures on their medication adherence; 5) the advantages and disadvantages of participating in an entirely remotely conducted research project; and 6) any problems with the collection of hair samples or the ease of following hair collection instructions. Exit interviews lasted about 30 minutes and were audio-recorded for analysis.
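The five timing-based sampling categories above can be expressed as a small classification routine. The sketch below is illustrative only: the lag encoding is assumed, and the priority ordering for participants who satisfy more than one rule is our own tie-breaking choice, which the protocol does not specify.

```python
def classify(max_text_lag_days, max_hair_lag_days):
    """Assign a participant to one exit-interview sampling category.

    Lags are days past the 'early window period' (5 days for texts,
    11 days for hair kits); 0 means always on time, and None means a
    response never arrived at all.
    """
    texts_late = max_text_lag_days is None or max_text_lag_days > 0
    hair_late = max_hair_lag_days is None or max_hair_lag_days > 0
    hair_very_late = max_hair_lag_days is None or max_hair_lag_days >= 7

    if texts_late and hair_late:
        return "Least Consistent"
    if texts_late:
        return "Inconsistent Texts"
    if hair_very_late:
        return "Late Inconsistent Hair Samples"
    if hair_late:  # 1-6 days past the hair window
        return "Early Inconsistent Hair Samples"
    return "On Time"

print(classify(0, 0))     # On Time
print(classify(0, 3))     # Early Inconsistent Hair Samples
print(classify(0, None))  # Late Inconsistent Hair Samples
print(classify(2, 0))     # Inconsistent Texts
print(classify(None, 9))  # Least Consistent
```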
In addition to the list of specific technological problems (eg, mobile telephone breaks in service, email-related issues, etc.), we examined feasibility using prespecified feasibility measures, which were evaluated by the specific metrics listed in Table 2.

text messaging

The mobile telephone numbers of enrolled participants were entered into a Health Insurance Portability and Accountability Act-compliant clinical research text messaging software called Mosio. For individuals who were taking >1 ARV pill per day (eg, those not on fixed-dose combinations), we chose to study a particular target ARV based on a prespecified hierarchy (described under "Hair sample" below). The final Adherence Survey message asked participants when they "picked up your most recent refill (take your best guess)." These messages were first responded to during the initial telephone enrollment meeting so that participants could ask questions, their mobile telephones could be tested for text messaging photographs, any technical issues around picture quality and text messaging could be resolved, and study staff could collect baseline data. In addition to baseline, participants were asked these questions for months 1 through 6. Participants who did not respond were sent an automated reminder text message 1 and 4 days after they received these text messages. To encourage participants to send us their text message responses in a timely manner, we created an "early window period" of 5 days after receipt of the Adherence Survey as a metric of performance.

Hair sample

Medication concentrations in hair reflect drug uptake from the systemic circulation over weeks to months and provide a mean measure of ARV exposure.23,24 The UCSF Hair Analytical Laboratory (HAL) has pioneered the use of small hair samples to monitor ARV medication adherence,10,11,24-39 has developed methods to extract and analyze ARV levels from hair,31,32 and has demonstrated that hair ARV levels are the strongest independent predictor of virologic success.11,28,29,39 Unlike phlebotomy, hair collection is noninvasive and does not require specific skills, sterile equipment, or specialized storage conditions. In a prior study,40 we had demonstrated that the home collection of hair was feasible and acceptable, and that there was a high degree of correlation and agreement between ARV levels in hair collected by trained study staff and at home by participants, as well as between hair collected from the back and side of the head, all without evidence of measurement bias. The UCSF HAL performed the assays for ARV levels in hair. With its predecessor (the Drug Studies Unit) formed in 1977, this laboratory is equipped with modern facilities and highly trained staff to provide fully automated analysis for drugs and metabolites. The HAL has developed and reported methods to analyze TFV as well as other ARVs in human hair samples using liquid chromatography/tandem mass spectrometry (LC/MS-MS).24,28,29,31,32,41-44 Most of the HAL assays have been peer reviewed and approved by the National Institutes of Health's Division of AIDS Clinical Pharmacology and Quality Assurance Program.45 Hair samples are collected using previously described methods: the proximal section (the side closest to the scalp) is cut to 1.0 cm (representing the past month of exposure), and the relevant ARV is extracted using optimized methods and analyzed via LC/MS-MS.
For example, TFV in participants on either TDF or TAF is extracted with 50% methanol/water containing 1% trifluoroacetic acid, 0.5% hydrazine dihydrochloride, and internal standard in a 37°C shaking water bath overnight (>12 hours) and analyzed by LC/MS-MS.41 The relative error (%) and precision (coefficients of variation) for spiked quality control hair samples at low, medium, and high concentrations are all <15%. This method to analyze TFV levels in hair was validated from 0.002 to 0.400 nanogram per milligram (ng/mg) hair, with a lower limit of quantitation at 0.002 ng/mg.42,46 In addition to TFV and FTC, the HAL will analyze hair ARV levels for DRV and DTG. We mailed participants hair collection kits containing 2 alcohol wipes, 1 piece of aluminum foil (cut into 4 × 4 inches), 2 adhesive labels, a sealable storage plastic bag (marked with the participant's unique identification number), 2 desiccant packs, and a postage-paid envelope addressed to our university office. These kits included detailed instructions for hair collection (http://rxpix.ucsf.edu/hair-collection-instructions) and the video link on our website demonstrating home collection of hair (http://rxpix.ucsf.edu/videos). Hair collection kits were mailed at baseline, 2, 4, and 6 months, and were sent 11 days before the sample due date. To encourage participants to mail us their hair samples in a timely manner, we defined an "early window period" of 11 days after mailing of the hair collection kit as the goal. Hair samples can be stored at room temperature and are not biohazardous, so they are easy to store and ship. The HAL was asked to provide hair ARV levels based on the prespecified ARV hierarchy (TDF > FTC > DRV > DTG > TAF). In other words, if a participant was taking multiple ARVs that could have been measured, the HAL used this hierarchy to determine the order of analysis while being attentive to the amount of hair sample available.

Incentive structure

We offered a total of $270 for the timely completion of all study activities, which was provided to participants via ClinCard, a reloadable debit card that enabled remote participant reimbursements via a web-based portal. The incentive breakdown included $10 for the baseline test text messages and $15 for the 4 text messages in each of months 1 through 6, $5 for timely text message responses (ie, text message photographs sent within 5 days of request) each month, $10 for the baseline survey, and $20 for the exit survey/interview. We used a gradually increasing incentive structure for the hair sample collection and shipment: $15 for the receipt of the baseline hair sample, $20 for the 2-month hair sample, $25 for the 4-month hair sample, and $30 for the 6-month hair sample. Participants were offered an additional $5 for each hair sample mailed in a timely manner (ie, postmarked within 11 days after mail-out of the hair kit). We extracted the following information from the ARV medication vial photographs: medication name, total number of tablets dispensed at the time of refill, and refill date. From the photograph of the contents of their ARV medication vial, we counted the number of tablets remaining to establish adherence based on pill count.47,48 Additionally, at baseline and months 2, 4, and 6, we asked participants to mail us home-collected hair samples using our hair collection kits after viewing an online demonstration of the process.
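Returning to the incentive schedule just described, a quick arithmetic check confirms that the itemized maximums sum to the stated $270 total. The sketch below simply encodes the published breakdown, reading the $5 timely-text bonus and the $15 monthly survey payment as applying to each of the 6 monthly Adherence Surveys.

```python
# Itemized maximum incentives from the protocol (amounts in USD).
incentives = {
    "baseline test text messages": 10,
    "monthly Adherence Surveys (6 x $15)": 6 * 15,
    "timely text responses (6 x $5)": 6 * 5,
    "baseline survey": 10,
    "exit survey/interview": 20,
    "hair samples (escalating 15 + 20 + 25 + 30)": 15 + 20 + 25 + 30,
    "timely hair mailings (4 x $5)": 4 * 5,
}

total = sum(incentives.values())
assert total == 270, total  # matches the stated study maximum
print(f"maximum possible compensation: ${total}")
```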
Finally, we conducted online exit surveys and telephone interviews with participants to evaluate the acceptability of our research methodology and their experience with the home collection of hair samples. Ability to comply with monthly text messaging and the frequency of late responses or nonresponse constituted parameters for assessing the feasibility of this novel adherence-estimating method. Our ability to conduct a completely remote research project was used to evaluate the overall feasibility of our study design. Participants' responses to exit surveys and interviews were used to assess acceptability via quantitative and qualitative methods.

Sample size estimates

Self-reported adherence and TFV levels in hair have a 0.34 correlation at 8 weeks.26 Given that self-report is subjective and is frequently overestimated, 0.34 is the minimum correlation that we required between our novel text message-based adherence measure and TFV levels in hair. To detect a minimum correlation of ≥0.34 at 6 months, we needed a minimum of 65 participants. We assumed 20% attrition during the course of the study to calculate the total number of participants that we needed to enroll at baseline (N=82).

Planned data analysis

For our future data analysis, one-way frequency tables will be generated for all feasibility and acceptability measures. Audio-recorded interviews will be transcribed by a transcriptionist. For analysis of these recordings, broad themes will be identified, refined through discussion, and entered into a matrix using Microsoft Excel, where each column corresponds to a theme and each row represents a case. This method allows for the identification of patterns in the distribution of themes for data analysis.49 One investigator will categorize each interview (N=31) using this matrix. Another investigator will double code a random subsample (N=7) of the interviews, and coding discrepancies will be discussed by the 2 authors until consensus is reached or arbitrated by the first author. Collectively, the results from the quantitative exit survey analyses will complement the qualitative interview data.

Measures

Self-reported ARV medication adherence was evaluated by the adherence rating scale.18 This single item has been linked to more objective adherence estimates, that is, MEMS caps. The approximate correlation with adherence percentage based on MEMS caps is as follows: very poor=0%, poor=20%, fair=40%, good=60%, very good=80%, and excellent=100%. We will calculate the refill date-based measure of adherence (from text-messaged photos of ARV regimen vials) using the medication possession ratio (MPR) and proportion of days covered (PDC) formulas.50 MPR is the ratio of the sum of days' supply for all fills in a specific period divided by the number of days in the period. PDC is the total number of days' supply in a specific period "covered" divided by the number of days in the period. MPR may result in an overestimation of adherence because it does not take overlapping days into account; therefore, it will be capped at 100%. These formulas yield a value of 0%-100%.
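A minimal sketch of the MPR and PDC formulas just described, assuming hypothetical refill records (refill date, days' supply) read off the text-messaged label photographs; the dates below are illustrative, not study data.

```python
from datetime import date

# Hypothetical refill records: (refill date, days' supply dispensed).
fills = [(date(2017, 3, 1), 30), (date(2017, 4, 5), 30), (date(2017, 5, 2), 30)]

def mpr(fills, start: date, end: date) -> float:
    """Medication possession ratio: summed days' supply over period length,
    capped at 100% because overlapping fills are not adjusted."""
    days_in_period = (end - start).days + 1
    supplied = sum(supply for d, supply in fills if start <= d <= end)
    return min(100.0, 100.0 * supplied / days_in_period)

def pdc(fills, start: date, end: date) -> float:
    """Proportion of days covered: distinct calendar days with medication
    on hand, so overlapping fills are not double-counted."""
    covered = set()
    for d, supply in fills:
        for day in range(d.toordinal(), d.toordinal() + supply):
            if start.toordinal() <= day <= end.toordinal():
                covered.add(day)
    return 100.0 * len(covered) / ((end - start).days + 1)

start, end = date(2017, 3, 1), date(2017, 5, 29)  # a 90-day window
print(round(mpr(fills, start, end), 1))  # 100.0 (capped)
print(round(pdc(fills, start, end), 1))  # 94.4
```

The worked example also shows why MPR is capped: an early refill pushes the possession ratio to 100% even though only about 94% of calendar days in the window are actually covered.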
We will calculate pill count-based ARV adherence using the methods established by Bangsberg and Kalichman.47,48,51 It will be estimated as the difference between tablets counted by the study staff in 2 consecutive text-messaged photographs sent by the participant (eg, the difference between the current and previous pill counts) divided by the total doses prescribed in that time period (eg, the total number of tablets that should have been taken during the 30 days). This value will take into account the number of pills dispensed during that time period. This formula yields a value of 0%-100%. Finally, ARV levels in hair at baseline and months 2, 4, and 6 will be measured as ng/mg hair and will be log-transformed to reduce skew. We will assess hair concentrations as continuous measures. Prior studies have shown a graded relationship between hair ARV levels and virologic outcomes.11,28,43

Statistical analysis

First, we will conduct univariate analyses (eg, one-way frequency tables, measures of central tendency, and variability) on participants' baseline and exit surveys and their standings on the feasibility measures listed in Table 2. We will then describe the correlation of ARV levels in hair, averaged across baseline and months 2, 4, and 6, with adherence estimated based on the rating scale, refill dates, and pill counts averaged over the 6-month study. Next, we will investigate the longitudinal relationship of ARV refill data, pill count, and ARV levels in hair at months 2, 4, and 6. These analyses will take advantage of the longitudinal nature of the data by using multilevel mixed-effects or generalized estimating equation models with separate between-subjects and within-subjects effects for the ARV predictor,52,53 with the latter optionally parameterized to represent the average within-participant change in adherence over time since baseline. Changes since baseline in adherence will enable us to investigate whether adherence changed as a result of joining the study. Maximum likelihood estimation or multiple imputation will be used to address missing data in inferential analyses under the missing at random assumption.54 Finally, we will conduct additional exploratory analyses as necessary.

Ethics approval and consent to participate

We received approval from the UCSF Institutional Review Board (IRB) to conduct this study and written consent from all participants.

Discussion

We have established a protocol, which we describe here, for remotely conducting a study to assess the acceptability and feasibility of novel ARV medication adherence measurement approaches. While analyses evaluating the outcomes of the study are currently under way, the protocol established for this study was implemented successfully and offers guidance for others seeking to conduct research using similar methodologies. The advantages of tracking medication adherence both remotely and by these novel methods may translate to other HIV- and non-HIV-related studies; however, the implementation of the protocol to date has led to various lessons learned.

contact information

Because all activities were conducted remotely with no face-to-face contact with participants, it was critical to collect and maintain multiple sources of contact information from enrollees. Relying on mobile telephone numbers only is insufficient given the not-infrequent occurrence of lost or stolen telephones.
Similarly, relying on email contact was often inadequate, as some individuals did not monitor their email accounts with regularity. Rather, a combination of text messaging, emailing, and telephone calls was often needed to minimize study attrition and loss to follow-up. Additionally, we collected social media contact information.

remote payment

The use of reloadable debit cards allowed for remote and timely disbursement of payment for participation in study activities. We, therefore, established tips for the optimal use of the cards. When mailing out ClinCards, we recommend that study staff consider waiting until the participant has confirmed receipt of the card in the mail before registering the card to the individual and adding funds to it. This avoids fraud resulting from others intercepting the card and allows the study to reuse the card should the envelope be returned in the mail. We also learned that it was wise to include a short information sheet with the mailed ClinCard, emphasizing that participants should treat the card like cash and contact the study immediately if the card is lost or stolen, with the intent that this would preempt participant questions and assist with managing lost cards. We have created such an information sheet for our future research: http://rxpix.ucsf.edu/clincard-quick-info.

Home collection of hair samples

The use of home hair collection for medication adherence estimation offers great promise, but it has never been attempted on the scale implemented in this study. The benefit of this method is that hair samples do not require a cold chain or biohazard precautions for storage or shipment. They provide a mean measure of drug uptake and exposure over weeks to months. However, they require a specialty laboratory, such as the HAL, for analysis and reporting. We noted a few important steps to help improve efficiency in data collection and data accuracy above and beyond our approach. We recommend including the participant's unique identification number on the return envelope as well as the included sealable storage plastic bag. This facilitates the identification and documentation of the hair sample even if the participant fails to place it inside the storage bag provided. Additionally, a great deal of staff effort went into following up with participants about late or missing hair samples. We later realized that it assisted some participants to be notified by text or email on the day that we mailed out the hair kits so they knew to expect them in the mail. Another method that may work is to frequently remind participants of their study schedule dates at specific time points (eg, monthly).

text messaging of photographs

The photographed documentation of refill dates from medication vials was likewise an innovative aspect of the study. This approach is simple and cost-effective, and it provides an objective measure of medication adherence. A limitation of this approach is that some participants reported that their prescription label was on a medication box, which they had thrown away; therefore, we recommend instructing participants in this scenario to take a picture of the refill date as soon as they pick up the refill, rather than waiting until their monthly text date. Another challenge of the study protocol was that we were unable to document how many participants may have switched to receiving their refills in an automated manner. This would have made them ineligible to enroll in the study originally, as part of our exclusion criteria.
Similarly, photographed documentation of pill counts to elicit adherence is a novel method of estimating adherence. It, too, is simple and cost-effective, and it provides an objective measure of medication adherence. However, in certain cases, it was difficult to track pills a participant may have taken that came from places other than their medication bottle (eg, pills borrowed from a partner or friend, or pills taken during a hospital stay). We, therefore, recommend the inclusion of survey items to evaluate the occurrence and frequency of these deviations from the standard practice of taking pills from a designated prescription bottle. Finally, even though we inquired about stock medications, many participants did not know how many pills they had in addition to the ones they were using from their most recent medication bottle, or had so many that they were unable to report an accurate count. Even though it may be time consuming, spending more time with each participant after enrollment and at the end of the study to establish an accurate count of stock medications will be very helpful for understanding pill count discrepancies. Other limitations of this study include the potential lack of generalizability due to voluntary response bias, in that participants were self-selected volunteers for this research. Additionally, information related to demographics and other medical data was self-reported and, therefore, subject to recall bias. In summary, this is the first study to examine these 3 novel methods of estimating ARV medication adherence among PLWH. Upon completion of data collection, we will analyze qualitative and quantitative data to examine the feasibility and acceptability of the remotely conducted research and the various methods of estimating ARV adherence, as well as the correlation of these estimates with each other. The expected outcome of this study is that these 3 novel methods will be feasible and acceptable, will have high levels of correlation with each other, and will contribute to future adherence research. Since hair collection does not require a cold chain or biohazard precautions for storage or shipment, and estimation of adherence based on text-messaged photographs of pill counts and refill dates is objective and cost-effective, these methods may be important steps toward expanding objective adherence monitoring tools in the context of HIV treatment and prevention studies worldwide.
An Efficient Algorithm to Automated Discovery of Interesting Positive and Negative Association Rules

Association rule mining is a very efficient technique for finding strong relations between correlated data, and the correlations it uncovers make the extraction process meaningful. For discovering frequent items and mining positive rules, a variety of algorithms are used, such as the Apriori algorithm and tree-based algorithms. However, these algorithms do not consider negated occurrences of attributes, and the rules they produce do not involve infrequent itemsets. The discovery of infrequent itemsets is far more difficult than that of their counterparts, the frequent itemsets. The associated problems include the discovery of infrequent itemsets, the generation of interesting negative association rules, and their huge number as compared with positive association rules. The discovery of interesting association rules is an important and active area within data mining research. In this paper, an efficient algorithm is proposed for discovering interesting positive and negative association rules from frequent and infrequent items. The experimental results show the usefulness and effectiveness of the proposed algorithm.

I. INTRODUCTION
Association rules (ARs), a branch of data mining, have been studied successfully and extensively in many application domains, including market basket analysis, intrusion detection, diagnosis decision support, and telecommunications. However, the discovery of associations in an efficient way has been a major focus of the data mining research community [1][2]. Traditionally, association rule mining algorithms target the extraction of frequent features (itemsets), i.e., features with high frequency in a transactional database. However, many important itemsets with low support (i.e., infrequent ones) are ignored by these algorithms. These infrequent itemsets, despite their low support, can produce potentially important negative association rules (NARs) with high confidence, which are not observable among frequent data items. Therefore, the discovery of potential negative association rules is important for building a reliable decision support system. The research in this paper extends the discovery of positive as well as negative association rules of the forms A→¬B (or ¬A→B, ¬A→¬B), and so on.

The researchers target three major problems in association rule mining: a) effectively extracting positive and negative association rules from real-life datasets; b) extracting negative association rules from the frequent and infrequent itemsets; c) extracting positive association rules from infrequent itemsets.

The rest of this paper is organized as follows. The second section reviews related work on association rule mining; the third section presents a description of interesting positive and negative association rules; the fourth section describes the proposed algorithm for discovering interesting positive and negative association rules; experimental results are shown in the fifth section; and conclusions and future work are presented in the sixth section.
II. RELATED WORK
A standard association rule is a rule of the form A→B, where A and B are frequent itemsets in a transaction database and A∩B=Ø. This rule can be interpreted as "if itemset A is true of an instance in a database, so is itemset B true of the same instance", with a certain level of significance as measured by two indicators, support and confidence. Rule support and confidence are two measures of rule interestingness. What if we have a rule such as A→¬B, which says that the presence of A in a transaction implies that B is highly unlikely to be present in the same transaction? Rules of the form A→¬B are called negative rules. Negative rules indicate that the presence of some itemsets will imply the absence of other itemsets in the same transactions [3]. In the support-confidence framework for discovering association rules, the validity of an association rule is based on two measures: the support, the percentage of transactions of the database containing both A and B; and the confidence, the percentage of the transactions in which B occurs relative only to those transactions in which A also occurs [4].

An efficient mechanism for identifying positive and negative associations among frequent and infrequent itemsets using state-of-the-art data mining technology is presented in [5]. A genetic algorithm (GA) for mining interesting rules from a dataset has been shown to generate more accurate results when compared with other available formal methods; the fitness function used in the GA evaluates the quality of each rule [6]. Efficiency considerations for discovering interesting rules from frequent itemsets are suggested in [7][8]. A framework for fuzzy rules that extends the interestingness measures for their validation from the crisp to the fuzzy case is presented in [9]. A fuzzy approach for mining association rules using the crisp methodology that involves absent items is proposed in [10]. Another study extracts interesting association rules from infrequent items by weighting the database: "the weight of the database must be determined and the frequent items used to discover the infrequent items" [11]. An interesting association rule mining algorithm has been proposed that integrates a rule interestingness measure into the process of mining frequent itemsets, generating interesting frequent itemsets [12].

Traditional association rule algorithms mostly concentrate on positive association rules. They also generate a large number of rules, many of which are redundant or not interesting to the users. Interestingness measures can be used as an effective way to filter and thereby reduce the number of discovered association rules. Based on that, a unified framework is proposed for mining a complete set of interesting positive and negative association rules from both frequent and infrequent itemsets simultaneously.

III. DESCRIPTION OF POSITIVE AND NEGATIVE ASSOCIATION RULES
Discovering association rules between items in large databases is a frequent task in knowledge discovery in databases (KDD). The purpose of this task is to discover hidden relations between items of sale transactions; this setting is also known as the market basket database. An example of such a relation might be that 90% of customers that purchase bread and diapers also purchase milk. Let D be a database of transactions. Each transaction consists of a transaction identifier and a set of items {i1, i2, ..., in} selected from the universe I of all possible descriptive items. Let D be the database of transactions shown in Table 1.
The items represent the customer database of sale transactions as basket data. Each record in this database consists of the items bought in one transaction. The problem is how to find interesting (i.e., hidden) relations existing between the items in these transactions, or interesting rules, so that a manager (a user, a decider, or a decision-maker) who owns this database can make valuable decisions. Some rules derived from this database can be {Coke}→{Milk}, {Diaper}→{Beer}, {Coke, Milk}→{Diaper}.

A positive association rule is an expression of the form A→B. Each association rule is characterized by means of its support and its confidence, defined as follows: supp(A→B) = number of transactions containing (A∪B) / total number of transactions; conf(A→B) = supp(A→B) / supp(A). From the above example, rule {Coke}→{Milk} has support 40% and confidence 100%. According to these measures, the support can be considered as the percentage of database transactions for which (A∪B) evaluates to true, while the confidence is the conditional probability of the consequent given the antecedent. Association rule mining essentially boils down to discovering all association rules having support and confidence above user-specified thresholds, minsup and minconf, for the support and the confidence of the rules, respectively. For example, from the 100% confidence of the rule {Coke},{Diaper}→{Milk}, it can be concluded that customers that purchase coke and diapers also purchase milk.

In the dataset, other association rules exist: A→¬B, ¬A→B, and ¬A→¬B. The rule A→¬B means that the data objects which have itemset A do not have itemset B; the rule ¬A→B means that the data objects which do not have itemset A have itemset B; and the rule ¬A→¬B means that the data objects which do not have itemset A do not have itemset B. These rules are called negative association rules. For the above example, from the 75% confidence of the rule {Bread}→{¬Coke}, it can be concluded that customers that purchase bread will not also purchase coke. The rule A→B is called a positive association rule. Earlier papers have examined negative association rules in market basket databases: such rules are very useful to the market basket administrator for adjusting business decision-making from the customer database, they remedy the shortcoming of past research that studied only positive association rules, and they make decision-making and the mined access patterns more objective and comprehensive. To calculate the support and confidence of a negative association, the measures can be computed through those of the positive rules.

IV. THE PROPOSED ALGORITHM
The proposed algorithm proceeds in two steps: A. mining the frequent and infrequent itemsets from the dataset; B. mining interesting association rules (both positive and negative) from the itemsets obtained in the first step. The interestingness measure (lift) has to be greater than one, expressing a positive dependency among the itemsets; a lift value of less than one expresses a negative relationship among the itemsets. Figure 1 shows the proposed algorithm.

V. EXPERIMENTAL RESULTS
The performance of the proposed algorithm on different datasets is demonstrated below; all code is implemented in C#.
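Before turning to the experiments, the following sketch (in Python, rather than the paper's C#) makes the measures from Section III concrete: support, confidence, and lift for a positive rule, plus the derived measures for a negative rule. The five-transaction database is invented for illustration, so its numbers will not match the paper's Table 1.

def support(itemset: frozenset, db: list) -> float:
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in db) / len(db)

def rule_measures(A: frozenset, B: frozenset, db: list):
    """Support, confidence, and lift for the positive rule A -> B."""
    sA, sB, sAB = support(A, db), support(B, db), support(A | B, db)
    conf = sAB / sA if sA else 0.0
    lift = sAB / (sA * sB) if sA and sB else 0.0
    return sAB, conf, lift

# An invented five-transaction basket database (not the paper's Table 1):
db = [{"Bread", "Coke", "Milk"},
      {"Beer", "Bread"},
      {"Beer", "Coke", "Diaper", "Milk"},
      {"Beer", "Bread", "Diaper", "Milk"},
      {"Coke", "Diaper", "Milk"}]

A, B = frozenset({"Coke"}), frozenset({"Milk"})
s, c, l = rule_measures(A, B, db)
print(f"Coke -> Milk: supp={s:.2f} conf={c:.2f} lift={l:.2f}")

# Negative-rule measures follow from the positive ones:
# supp(A -> not B) = supp(A) - supp(A u B); conf(A -> not B) = 1 - conf(A -> B).
s_neg = support(A, db) - s
print(f"Coke -> not Milk: supp={s_neg:.2f} conf={1 - c:.2f}")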
A. EXPERIMENT 1
The Weather dataset was downloaded from the UCI dataset repository. This dataset contains twelve items, fourteen transactions, and seventy words; it is useful to researchers in weather forecasting. The dataset was run with the varying minsupport and minconfidence values in Table 2. We can see that the number of frequent itemsets decreases as we increase the minsupport value; however, a sharp increase in the number of infrequent itemsets can be observed. This can also be visualized in Figure 2. Table 3 gives an account of the experimental results for different values of minimum support and minimum confidence. The lift value has to be greater than one for a positive relationship between the itemsets; the resulting rule, however, may itself be positive or negative. The total numbers of positive rules and negative rules generated from both frequent and infrequent itemsets are given. Negative association rules of the forms A→¬B, ¬A→B, and ¬A→¬B, which have confidence greater than the user-defined threshold and lift greater than one, are extracted as negative association rules (Figure 3).

B. EXPERIMENT 2
The Groceries dataset contains one month (30 days) of real-world point-of-sale transaction data from a typical local grocery outlet. The dataset contains 9835 transactions; the items are aggregated into 169 categories, and the total number of words is 43367. Frequent and infrequent itemset generation takes only a little extra time compared with traditional frequent-itemset finding using the Apriori algorithm. This is because each item's support is calculated for checking against the threshold support value to be classified as frequent or infrequent; therefore, we obtain the infrequent items in the same pass as the frequent items (a sketch of this single-pass split follows the captions below). The proposed algorithm was run on the Groceries dataset to mine positive and negative rules from frequent and infrequent items with different parameters (minsupport, minconfidence, itemsets of length 3). Table 4 shows that the number of frequent itemsets decreases as the minsupport value increases; however, a sharp increase in the number of infrequent itemsets can be observed. This can also be visualized in Figure 4. The total numbers of positive rules and negative rules generated from both frequent and infrequent itemsets are given in Table 5. Negative association rules of the forms A→¬B, ¬A→B, and ¬A→¬B, which have confidence greater than the user-defined threshold and lift greater than one, are extracted as negative association rules (Figure 5).

Figure and table captions: Fig. 2: Frequent and infrequent itemsets generated with varying minimum support values. Fig. 3: Interesting positive and negative association rules generated with varying minimum support and confidence values. Fig. 4: Frequent and infrequent itemsets generated with varying minimum support. Table I: Database with 5 transactions. Table II: Total generated frequent and infrequent itemsets using different support values. Table IV: Total generated frequent and infrequent itemsets using different support values. Table V: Interesting positive and negative association rules using varying support and confidence values with lift > 1.
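As noted in Experiment 2, one counting pass suffices to separate frequent from infrequent items. A minimal single-item-level sketch (a hypothetical helper, not the paper's C# implementation):

from collections import Counter

def split_items(db: list, minsup: float):
    """Classify single items as frequent or infrequent in one counting pass."""
    counts = Counter(item for transaction in db for item in transaction)
    n = len(db)
    frequent = {i for i, c in counts.items() if c / n >= minsup}
    infrequent = set(counts) - frequent  # same pass, no extra scan of db
    return frequent, infrequent

db = [{"Bread", "Coke"}, {"Beer", "Bread"}, {"Coke", "Milk"}]
print(split_items(db, minsup=0.5))  # frequent: Bread, Coke; infrequent: Beer, Milk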
Commentary: Evaluation of the Comorbidity Burden in Patients With Ankylosing Spondylitis Using a Large US Administrative Claims Data Set
Jessica A. Walsh1, Xue Song2*, Gilwan Kim2, Yujin Park3
1University of Utah School of Medicine and Salt Lake City Veteran Affairs Medical Center, Division of Rheumatology, Salt Lake City, UT, USA; 2IBM Watson Health, Cambridge, MA, USA; 3Novartis Pharmaceuticals Corporation, East Hanover, NJ, USA

Cardiovascular Disease in Patients With Inflammatory Rheumatic Diseases
Patients with chronic inflammatory rheumatic diseases, such as rheumatoid arthritis, spondyloarthritis, and systemic lupus erythematosus, have an increased risk of cardiovascular disease [1][2][3][4][5][6][7][8]. The increased cardiovascular risk in patients with inflammatory rheumatic disease is likely related to systemic inflammation and traditional cardiovascular risk factors, such as hypertension, dyslipidemia, diabetes, smoking, and obesity, some of which are more prevalent in patients with rheumatic diseases. A link between inflammation and accelerated atherosclerosis has been identified in patients with inflammatory rheumatic disease [9][10][11]. Furthermore, endothelial dysfunction, oxidative stress, macrophage accumulation, toll-like receptor signaling, and proinflammatory cytokines have been implicated in atherogenesis 9,11,12. Similar to the heterogeneity in traditional cardiovascular risk factors, there are differences between the autoimmune and inflammatory risk factors of the rheumatic diseases; therefore, cardiovascular risk assessment and treatment should be tailored for each rheumatic disease. In addition to traditional risk factors and systemic inflammation, the use of specific nonsteroidal anti-inflammatory drugs (NSAIDs), which are often used in the management of some inflammatory rheumatic diseases, may play a role in the risk of cardiovascular disease [13][14][15][16].

The Prototype of Spondyloarthritis: Ankylosing Spondylitis
Spondyloarthritis represents a group of inflammatory rheumatic disorders comprising ankylosing spondylitis (AS), nonradiographic axial spondyloarthritis, psoriatic arthritis, reactive arthritis, arthritis associated with inflammatory bowel disease, and undifferentiated spondyloarthropathies. With estimates of an overall prevalence of >1% in the United States 17, spondyloarthritis is at least as common as rheumatoid arthritis among whites [18][19][20][21] and is one of the most common chronic inflammatory disorders. Spondyloarthritis is characterized by peripheral arthritis and enthesitis, axial inflammation (ie, sacroiliitis and spondylitis), and new bone formation leading to ankylosis. Because spondyloarthritis develops relatively early in life and has a chronic, progressive course, the impact of the disease on patients can be substantial.
Prevalence of AS in the United States has been estimated between 0.2% and 0.5% 17,[22][23][24]. Although the age of onset is typically the late teens through 40 years of age, delays in diagnosis by as much as 8 to 11 years may lead to diagnoses at an older age [25][26][27]. In addition to inflammation of the spine, joints, and entheses, patients with AS often present with peripheral arthritis, uveitis, psoriasis, and inflammatory bowel diseases. Furthermore, studies have shown that compared with the general population, patients with AS are at a higher risk of developing comorbidities including cardiovascular disease, diabetes, malignancies, and depression 6,14,[28][29][30][31][32][33][34][35][36][37][38].

Although previous studies of comorbidities in patients with AS have provided important information, most of these studies have been conducted outside of the United States. Because rates of comorbidities in the general population differ between the United States and other countries, there is a need to further understand comorbidities in US patients with AS. Here, we discuss the results of a recent real-world study, which examined the comorbidity burden of US patients with AS using a large national healthcare claims database. In addition, we review the current understanding of the risk of cardiovascular comorbidities in patients with AS.

Comorbidities in AS
Our recently published real-world study (Walsh JA, et al. Clin Rheumatol. 2018;37[7]:1869-1878) compared the prevalence and incidence of comorbidities between patients with AS and matched controls using medical and pharmacy claims data from the MarketScan® Commercial and Medicare databases from 2012 through 2015. A total of 6679 patients with medical claims for AS were matched with 19,951 patients without AS at a ratio of up to 1:5 based on age, geographic location, index calendar year, and sex 39. Patients with AS had a mean (SD) age of 50.8 (13.6) years, and 60.5% were men; matched controls had a mean age of 51.7 (13.4) years, and 60.8% were men 39. The mean (SD) length of follow-up was 739 (139) days in patients with AS and 740 (139) days in matched controls 39. Patients with AS had a higher baseline comorbidity burden than matched controls (mean [SD] Deyo-Charlson Comorbidity Index score, 0.61 [1.15] vs 0.50 [1.14]; P < 0.001) and were significantly more likely to have diagnoses of asthma, cardiovascular diseases, depression, dyslipidemia, gastrointestinal ulcers, malignancies, multiple sclerosis, osteoporosis, sleep apnea, spinal fracture, inflammatory bowel diseases, psoriasis, and uveitis (Table 1) 39.
Patients with AS had significantly higher incidence rates of all other comorbidities compared with matched controls, except for diabetes, dyslipidemia, and Parkinson disease (Table 2) 39. In particular, for cardiovascular comorbidities, patients with AS had an approximately 1.25× higher incidence rate of angina, atherosclerosis, cerebrovascular disease/stroke, coronary artery disease, hypertension, myocardial infarction, and peripheral vascular disease and a 2× higher incidence of venous thromboembolism compared with matched controls (Figure 1) 39. The risk for cardiovascular disease persisted after statistical adjustments for baseline characteristics and comorbidities (including hypertension), as demonstrated in the published manuscript 39. An important limitation of our study was the lack of body mass index data; therefore, obesity could not be evaluated as a comorbidity or controlled for in analyses of related comorbidities such as cardiovascular disease and diabetes 39. In addition, other risk factors that could have contributed to the development of comorbidities (eg, family history, smoking, alcohol consumption, and the use of over-the-counter NSAIDs) were not available in the data set 39.

Although our study did not examine the causality of cardiovascular comorbidities in patients with AS, the chronic inflammatory state of the disease may be linked to the development of these comorbidities 40, as seen in rheumatoid arthritis 41. Disease onset typically occurs earlier in patients with AS compared with rheumatoid arthritis, and patients with AS are often undiagnosed for longer periods of time without having their underlying inflammation managed [25][26][27]; therefore, the increased duration of uncontrolled inflammation may contribute to the higher risk of cardiovascular comorbidities in patients with AS. Furthermore, in patients with AS, NSAIDs are recommended as first-line therapy [42][43][44][45] and may be used more commonly and persistently in patients with AS than in those with other inflammatory rheumatic diseases. Further research is needed to evaluate the potential cause-and-effect relationships between AS and comorbidities.

The elevated risk of cardiovascular disease in patients with AS shown in our study 39 is consistent with published reports on the risk of developing new cardiovascular comorbidities in patients with AS 14,35,46. A study from the Swedish National Patient Register showed a 50% higher risk of acute coronary syndrome and vascular thromboembolism and a 25% higher risk of stroke in patients with AS compared with the general population 46.
A meta-analysis of 18 studies of patients with AS and 12 studies of control patients reported a relative risk of myocardial infarction of 1.44 (95% CI, 1.25-1.67) in patients with AS compared with controls 6. The same study also included a meta-analysis of 7 studies and reported a relative risk of stroke of 1.37 (95% CI, 1.08-1.73) 6. Furthermore, an administrative claims study from the Taiwan National Health Insurance Database showed a >2-fold increase in the risk of stroke in patients with AS compared with a comparison cohort without AS 35. Notably, the increased risk of cardiovascular disease in patients with AS was demonstrated globally despite geographic differences in baseline cardiovascular disease risk in the general population.

Not all cardiac outcomes were assessed in our study. Valvular heart disease and conduction abnormalities are of interest in AS because they have been linked to aortitis and HLA-B27 positivity. In a study of Medicare beneficiaries over the age of 65 years, statistically higher risks were reported for mitral and aortic valve disease (OR, 1.06-1.51) in AS patients (n = 42,327) vs controls (n = 19,211,703) 47. Rates of aortic valve procedures were also statistically higher in AS patients than controls (OR, 1.22-1.46), but rates of mitral valve procedures were similar between groups 47. In addition, pacemaker insertions were evaluated as an estimate of serious and symptomatic conduction abnormalities and were more frequent in patients with AS than controls (OR, 1.11-1.32), particularly in older age groups 47. These small risk differences do not support routine screening for valvular heart disease or conduction abnormalities in asymptomatic AS patients.

The European League Against Rheumatism (EULAR) recommendations for cardiovascular disease risk management advise clinicians to be aware of the higher risk of cardiovascular disease in patients with inflammatory joint disease and to screen patients for cardiovascular risk at least every 5 years and following changes in antirheumatic therapy 1. Commonly used cardiovascular risk assessments in the general population are the Framingham Risk Score, the Systematic Coronary Risk Evaluation (SCORE), the Reynolds Risk Score, and the QRESEARCH Cardiovascular Risk Algorithm (QRISK2) score. However, these risk assessments may underestimate the cardiovascular disease risk in patients with AS because nontraditional cardiovascular risk factors are not included. The use of a relative risk chart has also been proposed as an alternative to SCORE in patients aged <50 years to determine the risk of cardiovascular disease 48. Furthermore, the EULAR recommendations advise adapting cardiovascular risk assessments for patients with rheumatoid arthritis with a multiplication factor of 1.5 1. Whether this multiplication factor should also apply to patients with AS remains unclear, but it may be an appropriate option in the absence of risk prediction models with proven accuracy and superiority in patients with inflammatory joint disease.
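If the EULAR multiplication factor were extended to AS, the arithmetic itself would be trivial; the sketch below is purely illustrative, since (as stated above) the 1.5 factor is recommended for rheumatoid arthritis and its applicability to AS is unclear.

def eular_adjusted_risk(base_risk: float, factor: float = 1.5) -> float:
    """Apply the EULAR inflammatory-arthritis multiplier to a 10-year
    cardiovascular risk estimate (eg, from SCORE or the Framingham score)."""
    if not 0.0 <= base_risk <= 1.0:
        raise ValueError("base_risk must be a probability")
    return min(base_risk * factor, 1.0)

# A 12% baseline estimate becomes 18% after adjustment.
print(round(eular_adjusted_risk(0.12), 3))  # 0.18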
Screening for asymptomatic atherosclerotic plaques using carotid ultrasound is recommended in patients with rheumatoid arthritis 1 and may be appropriate for patients with AS, especially younger patients 48. Patients with AS are generally younger than patients with rheumatoid arthritis, and as a result, they may not receive the same cardiovascular screening. Because of the increased risk of cardiovascular disease in patients with AS compared with the general population, monitoring for cardiovascular disease may be needed at an earlier age than is traditionally recommended for patients without AS. Prompt recognition and treatment of cardiovascular risk factors are important to decrease the morbidity and mortality associated with cardiovascular disease. Furthermore, the age and demographic characteristics of the individual patient must be considered. Patients with AS are diagnosed at a younger age than those with rheumatoid arthritis and are more likely to be male, which also increases their risk of cardiovascular disease. How age affects cardiovascular disease risk in patients with AS is unknown, although it has been explored in other inflammatory rheumatic diseases. Notably, younger women with systemic lupus erythematosus have a higher relative risk of cardiovascular disease compared with the general population than women with systemic lupus erythematosus who are >60 years of age 7.

Conclusions
Our AS comorbidities study 39, which evaluated a large real-world sample of patients with AS, was among the first to evaluate comorbidities, including cardiovascular comorbidities, in US patients with AS compared with matched controls. Our study provides important information about the increased risks of comorbidities in US patients with AS, and research is needed to evaluate potential relationships between inflammation and comorbidities in patients with AS. Knowledge of the frequency and risk of comorbidities can assist rheumatologists and primary care physicians with comorbidity screening and strategies for management in patients with AS. Importantly, in addition to lifestyle management and counseling related to the traditional risk factors of cardiovascular disease, patients with AS may need diagnostic screening for cardiovascular disease at an earlier age than patients without AS, as well as further modification of the standard cardiovascular risk assessments. Furthermore, tailoring recommendations and treatment based on studies in patients with AS, instead of adapting existing recommendations based on studies in patients with other inflammatory rheumatic diseases, may provide optimal care for patients with AS.

Table 2: Proportions of patients with new comorbidities and the incidence rates per 100 patient-years. AS, ankylosing spondylitis.
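The incidence rates in Table 2 follow the usual person-time definition; a short reference implementation (illustrative, not the authors' code; the case count below is hypothetical):

def incidence_per_100_py(new_cases: int, total_days_at_risk: float) -> float:
    """New comorbidity diagnoses per 100 patient-years of follow-up."""
    patient_years = total_days_at_risk / 365.25
    return 100.0 * new_cases / patient_years

# Hypothetical count of 120 new diagnoses over the AS cohort's follow-up
# (6679 patients x mean 739 days of follow-up, as reported above):
print(round(incidence_per_100_py(120, 6679 * 739), 2))  # about 0.89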
Visceral Fat Accumulation Is Associated with Colorectal Cancer in Postmenopausal Women

Background: Obesity is a known risk factor for colorectal cancer (CRC), and emerging data suggest that this association is mediated by visceral fat rather than total body fat. However, there is a lack of studies evaluating the association between visceral fat area and the prevalence of CRC.
Methods: To investigate the relationship between visceral adiposity and the prevalence of CRC, data from 497 women diagnosed with CRC and 318 apparently healthy women were analysed, and data from 191 well-balanced pairs of women with CRC and healthy women matched on propensity scores were additionally analysed. Diagnosis of CRC was confirmed by colonoscopy and histology. Metabolic parameters were assessed, along with body composition, using computed tomography.
Results: The median visceral fat area was significantly higher in the CRC group compared with the control group before and after matching. The prevalence of CRC increased significantly with increasing visceral fat tertiles after matching (p for trend <0.01). A multivariate analysis showed that the mean visceral fat area of individuals in the 67th percentile or greater group was associated with an increased prevalence of CRC (adjusted odds ratio: 1.80; 95% confidence interval: 1.12–2.91 before matching and adjusted odds ratio: 2.96; 95% confidence interval: 1.38–6.33 after matching) compared with that of individuals in the 33rd percentile or lower group.
Conclusion: Thus, we conclude that visceral fat area is positively associated with the prevalence of CRC. Although we could not determine causality, visceral adiposity may be associated with the risk of CRC. Further prospective studies are required to determine the benefits of controlling visceral obesity for reducing CRC risk.

Introduction
Obesity and cancer are emerging as two of the most serious health problems worldwide. Obesity is known to increase the risk of cardio-metabolic diseases including Type 2 diabetes mellitus (DM), cardiovascular disease, and metabolic syndrome [1,2]. Furthermore, the relationship between obesity and several types of cancer, such as renal, oesophageal, colorectal, and breast cancer, has also been reported [3,4]. The precise underlying mechanism that explains how obesity promotes these diseases is still unclear; however, recent evidence suggests that visceral adipose tissue may play a key role in this relationship. Visceral adipose tissue, largely distributed in the abdominal cavity, shows higher hormonal and metabolic activity than subcutaneous fat tissue [5]. Visceral adipocyte-secreted growth factors, proinflammatory cytokines, and adipokines are considered mediating factors associated with the carcinogenesis of obesity-related tumours [6]. Colorectal cancer (CRC) is well known as an 'obesity-related' cancer. Recent epidemiologic studies have shown that waist circumference and the waist–hip ratio, which reflect abdominal adiposity rather than total body mass index (BMI), show a greater association with increased risk of CRC [7][8][9]. These findings indicate that the regional distribution of adipose tissue, not overall adiposity, may contribute to the increased risk of CRC. Altered metabolic activity and systemic chronic inflammation induced by visceral adipose tissue are also considered to be related to colorectal carcinogenesis [10].
A few studies have assessed the relationship between CRC risk and visceral obesity using a direct method to measure visceral fat area; however, the results were inconclusive [11][12][13]. Some studies showed increased CRC risk with higher visceral adipose tissue accumulation, but no significant relationship, and even opposing results, have also been reported. Therefore, we investigated the relationship between the prevalence of CRC and visceral fat area by comparing a colorectal cancer group and a case-matched control group of Korean women.

Ethical statement
All subjects participated in the study voluntarily, and written informed consent was obtained from each participant. The study complied with the Declaration of Helsinki, and the Institutional Review Board of Yonsei University College of Medicine approved this study.

Study subjects
The study subjects consisted of 1920 postmenopausal women who visited the Department of Colorectal Surgery and were diagnosed with CRC during their visit and 670 postmenopausal women who visited the Health Promotion Centre and the Department of Family Medicine at Severance Hospital for routine health check-ups that included a screening colonoscopy between November 2010 and August 2012. Menopausal status was defined as having had no menstrual periods for 12 consecutive months without any biological or physiological cause. We excluded women who were taking medication for a diagnosis of hypertension, diabetes mellitus, chronic liver disease, chronic renal disease, coronary artery occlusive disease, or stroke. We also excluded women who underwent polyp removal procedures or who were diagnosed with CRC or other types of cancer prior to their participation in the study. After applying the exclusion criteria, a total of 497 women diagnosed with CRC were defined as the CRC group, and 318 apparently healthy women were defined as the control group. From the CRC and healthy groups, a well-balanced study population consisting of 199 pairs of women was selected by propensity score matching.

Measurement of clinical parameters
All subjects completed a questionnaire about their lifestyle, covering smoking, alcohol consumption, regular exercise, underlying medical conditions, and medications. Cigarette smoking was defined as being a current or past smoker, and alcohol consumption was defined as drinking alcohol more frequently than once per week or more than 70 grams per week during the previous year. Blood pressure was measured in the sitting position after the subject had rested for longer than 10 minutes. The mean blood pressure (mmHg) was calculated from the systolic blood pressure (SBP) and diastolic blood pressure (DBP) as (SBP + 2 × DBP)/3. Body mass index (BMI) was defined as weight (kg) divided by height squared (m 2). Blood samples were collected after at least 8 hours of fasting. Fasting glucose, aspartate aminotransferase (AST), alanine aminotransferase (ALT), creatinine, and total cholesterol levels were measured using a Hitachi 7600 Automatic Analyzer (Hitachi High-Technologies Corporation, Tokyo, Japan). White blood cell (WBC) counts were measured using an automated blood cell counter (ADVIA 120, Bayer, NY, USA). These biomarkers were part of the routine tests for patients who were planning to undergo CRC surgery; the control group received the same blood tests as part of their routine health check-ups.
Assessment of body composition
Abdominal fat tissue areas were measured by computed tomography (Tomoscan 350; Philips, Mahwah, NJ, USA) as described previously [14]. A single cross-sectional CT image of a 3-mm-thick slice at the level of the L4–L5 interspace was obtained with the subject in a supine position. The visceral and subcutaneous fat areas were calculated from this slice using a commercially available software program (TeraRecon Aquarius; TeraRecon, CA, USA), which determined the fat area electronically by setting the attenuation range from −150 to −50 Hounsfield units. Visceral adipose tissue (VAT) areas were measured by delineating the intra-abdominal cavity at the internal aspect of the abdominal and oblique muscle walls surrounding the cavity and the posterior aspect of the vertebral body. The subcutaneous adipose tissue area was calculated by subtracting the VAT area from the total adipose tissue area. All measurements were performed by a skilled radiologist who was blinded to the patient data. The inter- and intra-observer coefficients of variation (CVs) for reproducibility were 1.4% and 0.5%, respectively.

Diagnosis of CRC
All participants received colonoscopic examinations performed by experienced gastroenterologists after bowel preparation with 4 litres of polyethylene glycol solution (Colyte; Taejun, Seoul, Korea). All procedures were performed using a standard video colonoscope (CF-Q240L; Olympus Optical, Tokyo, Japan). Biopsies were taken from all detected suspicious lesions, and the final diagnosis of CRC was made by histopathological analysis. CRC was diagnosed if malignant cells were observed above the muscularis mucosae. The classification system recommended by the American Joint Committee on Cancer (AJCC) was used for tumour staging [15]. The locations of the tumours were recorded and divided into sigmoid, ascending, transverse, and descending colon, and rectum.

Statistical analyses
Data for demographic characteristics are presented as the mean ± standard deviation or number (%). To reduce the effect of confounding factors that may affect the relationship between CRC and visceral adiposity, we adjusted for differences in the baseline clinical characteristics between the CRC and control groups using propensity score matching [16]. The demographic characteristics of the CRC and control groups before matching were compared using two-sample t-tests for continuous data and Chi-square tests or Fisher's exact tests for categorical data. All variables constituting baseline demographic characteristics, namely age, BMI, smoking status, alcohol consumption, and regular exercise, were included as exact matching factors. A propensity score for the predicted probability of cancer in each woman was estimated using a logistic regression model fit with these five factors. The controls were matched 1:1 with CRC patients. A nearest-neighbour matching algorithm with a greedy heuristic was used to match patients on demographic characteristics. The matched demographic characteristics of the CRC and control groups were compared using paired t-tests for continuous data and McNemar tests for categorical data. The metabolic parameters are described as median and interquartile range, and differences between the two groups after matching were compared using Wilcoxon signed-rank tests. Tertiles were categorized as follows based on visceral fat areas: Q1: <67.98 cm 2, Q2: 67.98–91.67 cm 2, Q3: >91.67 cm 2.
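The matching pipeline described above (a five-covariate logistic propensity model followed by greedy 1:1 nearest-neighbour matching without replacement) can be sketched as follows. This is an illustrative reimplementation, not the authors' SAS code; the column names are assumptions.

import pandas as pd
from sklearn.linear_model import LogisticRegression

def propensity_match(df: pd.DataFrame, covariates: list) -> pd.DataFrame:
    """Greedy 1:1 nearest-neighbour matching on a logistic propensity score.

    df must contain a binary 'crc' column (1 = CRC, 0 = control) and the
    matching covariates (age, BMI, smoking, alcohol, regular exercise,
    encoded numerically); column names are illustrative assumptions.
    """
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["crc"])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])
    cases = df[df["crc"] == 1]
    controls = df[df["crc"] == 0].copy()
    pairs = []
    for idx, case in cases.iterrows():
        if controls.empty:
            break  # no controls left to match
        j = (controls["ps"] - case["ps"]).abs().idxmin()  # nearest neighbour
        pairs.append((idx, j))
        controls = controls.drop(index=j)  # greedy: match without replacement
    return pd.DataFrame(pairs, columns=["case_idx", "control_idx"])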
The prevalence of CRC according to the visceral fat tertiles was compared using the Cochran-Armitage trend test. The odds ratios and 95% confidence intervals (CI) for CRC were calculated using conditional logistic regression analyses after adjusting for confounding factors across visceral fat tertiles. All statistical analyses were performed using SAS software version 9.2 (SAS Institute Inc., Cary, NC, USA).

Characteristics of the study population
The clinical characteristics of the CRC and control groups before and after propensity score matching are given in Table 1. Women with CRC showed a significantly higher age, lower BMI, and a lower rate of regular exercise. After propensity score matching was completed, there were 199 matched pairs of participants, with no significant differences in clinical characteristics between the two groups. Table 2 shows the metabolic parameters of the CRC and control groups before and after matching. Visceral fat area, visceral/subcutaneous fat ratio, mean blood pressure, fasting glucose levels, WBC count, and creatinine levels were significantly higher in the CRC group compared to the control group before and after matching (p < 0.05). The subcutaneous fat area was significantly lower in the CRC group compared to the control group before and after matching (p < 0.05). ALT levels were significantly higher in the control group only before matching (p < 0.01).

Characteristics of colorectal neoplasms
Table 3 describes the stage and location of the tumours in the CRC group before and after matching. Categorization of patients according to cancer stage at first diagnosis revealed that, before matching, 15.49% (n = 77) of patients were stage I, 24.55% (n = 122) were stage II, 25.15% (n = 125) were stage III, and 34.81% (n = 173) were stage IV. After propensity score matching, 16.58% (n = 33) of patients were stage I, 24.12% (n = 48) were stage II, 23.12% (n = 46) were stage III, and 36.18% (n = 72) were stage IV. Of these, 276 (55.53%) patients had a tumour in the colon and 221 (44.47%) had a tumour in the rectum before matching; 113 (56.78%) patients had a tumour in the colon and 86 (43.22%) had a tumour in the rectum after matching.

The prevalence of CRC based on visceral fat area tertiles
The prevalence values of CRC based on the 3 visceral fat area tertiles (Q1, Q2, and Q3) are shown in Figure 1. Before matching, the prevalence values were 54.24%, 54.21%, and 74.54%, respectively (P < 0.01, Figure 1A). After matching, the prevalence of CRC increased significantly across the visceral fat tertiles (30.77%, 45.76%, and 69.49%, respectively; P < 0.01) (Figure 1B). Tables 4 and 5 show the odds ratios for the prevalence of CRC based on the visceral fat area tertiles before and after propensity score matching. The multivariate-adjusted odds ratios (95% CI) for the highest versus the lowest visceral fat tertiles were 1.80 (1.19–2.91) (unmatched) and 2.96 (1.38–6.33) (matched) after adjusting for subcutaneous fat area, mean blood pressure, WBC counts, fasting glucose, total cholesterol, creatinine, AST, and ALT levels. These positive associations persisted even after separating the prevalence of cancer by site into colon (OR 3.47; 95% CI, 1.24–9.68) and rectum (OR 4.15; 95% CI, 1.05–16.34) in the propensity-score-matched group. The positive associations also persisted after separating the group according to cancer stage, as stage I–II (OR 3.64; 95% CI, 1.41–9.39) and stage III–IV (OR 3.80; 95% CI, 1.39–10.40), in the propensity-score-matched group.
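For the tertile analysis, an unconditional analogue of the authors' model can be sketched with statsmodels. The covariate names are placeholders, and an ordinary logit is used here for simplicity, whereas the paper used conditional logistic regression for the matched pairs.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def tertile_odds_ratios(df: pd.DataFrame) -> pd.Series:
    """Adjusted odds ratios for CRC across visceral fat tertiles (Q1 = reference).

    df needs 'crc' (0/1), 'vfa' (visceral fat area, cm^2), and the adjustment
    covariates named in the formula below (all illustrative assumptions).
    """
    df = df.assign(tertile=pd.qcut(df["vfa"], 3, labels=["Q1", "Q2", "Q3"]))
    fit = smf.logit("crc ~ C(tertile) + scf + mean_bp + wbc + glucose",
                    data=df).fit(disp=0)
    return np.exp(fit.params.filter(like="tertile"))  # ORs for Q2 and Q3 vs Q1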
Discussion
Our cross-sectional study revealed a positive relationship between abdominal visceral obesity and CRC in Korean women. Visceral fat areas in the third tertile were associated with an approximately three times higher prevalence of CRC compared with areas in the first tertile after propensity score matching and adjusting for confounding factors (odds ratio: 2.96; 95% CI: 1.38–6.33). Furthermore, this association persisted after separating the cancer sites and stages. The prevalence of CRC has rapidly increased in the past 20 years in conjunction with the increasing prevalence of obesity worldwide [3]. Obesity is known to increase the risk of CRC significantly [10,17] and is also related to poor prognosis after treatment [18]. Recent studies have demonstrated the important role of visceral adiposity, rather than general obesity, in colorectal carcinogenesis [7][8][9]. However, only a few studies have assessed CRC risk through direct measurement of visceral fat area using CT, and these provided conflicting results: recent clinical studies have shown a significant association between CRC and visceral fat area [11,12], but opposing results have also been reported [13]. A small sample size, the confounding effect of unequal clinical characteristics of the participants, and the effect of tumour-related weight loss prior to the measurement of visceral fat are factors that likely contributed to these unexpected results. In the present study, all of the participants underwent colonoscopy in the same hospital, and demographic characteristics between the control and CRC groups were carefully matched to reduce the effect of potential confounding factors. To our knowledge, this is the first study to compare the association between the prevalence of CRC and visceral fat area in confounder-matched cohorts.

The precise mechanisms that explain the relationship between visceral adiposity and CRC remain unclear; however, we suggest some possible mechanisms based on our results. First, visceral adipocyte-secreted proinflammatory cytokines and adipokines may induce a protumourigenic state. Chronic inflammation promotes carcinogenesis through several mechanisms, including the enhancement of cancer cell proliferation and angiogenesis [19]. Previous studies have shown that visceral adipocytes secrete higher levels of proinflammatory cytokines, including interleukin 6 (IL-6) and tumour necrosis factor-alpha (TNF-α) [20]. Increased levels of these cytokines induce a protumourigenic environment [21]. Altered adipokine secretion may also affect colorectal carcinogenesis. For example, adiponectin, which exhibits anti-tumour characteristics through anti-inflammatory and proapoptotic actions [22], shows a negative correlation with visceral fat mass [23]. Furthermore, lower adiponectin levels have been reported in CRC patients [24,25]. Therefore, systemic chronic inflammation and altered metabolic function may serve as a link between visceral obesity and CRC. Insulin resistance is another factor that supports the association between visceral obesity and CRC. The correlation between visceral adipose tissue and insulin resistance is well established [26].
Lipolysis is more active in visceral adipose tissue than in subcutaneous adipose tissue, which contributes to an insulin-resistant state characterized by hyperinsulinemia [27]. Hyperinsulinemia is known to increase the risk of cancers, including CRC [28], and the prevalence of CRC is higher in Type II DM patients [29]. Insulin directly stimulates colorectal carcinogenesis by activating anti-apoptotic and mitogenic cellular signalling pathways [22]. Furthermore, the role of insulin in regulating insulin-like growth factor (IGF) axis activity is also related to the tumourigenic effect of insulin. Chronic hyperinsulinemia inhibits the production of IGF-binding protein 1 (IGFBP-1) and IGFBP-2, which results in increased bioavailability of IGF-1 [30]. IGF-1 acts as a procarcinogen by enhancing tumour cell proliferation and decreasing cell death [31]. These results collectively suggest that the increased insulin resistance induced by visceral adiposity may be associated with an increased risk of CRC.

[Figure 1: Comparison of the prevalence of colorectal cancer according to visceral fat tertiles before propensity score matching (A) and after propensity score matching (B). P-values were derived using the Cochran-Armitage trend test. Table 4: Odds ratios and 95% confidence intervals for the prevalence of colorectal cancer according to visceral fat area tertiles before propensity score matching.]

In addition, the direct effect of visceral adiposity on the development of CRC should also be considered. Recently, Huffman et al. demonstrated an effect of visceral fat on the development of intestinal tumours that is independent of known metabolic mediators [32]. Surgical removal of the visceral fat mass significantly reduced the risk of intestinal cancer in female mice, even though it failed to increase the levels of adiponectin or to reduce the levels of glucose, leptin, chemokines, and total adiposity. This result suggests that visceral adiposity might, at least in part, directly affect carcinogenesis in the gastrointestinal (GI) tract, independent of insulin resistance or inflammatory adipocytokines. Further experimental studies are needed to elucidate the precise mechanism by which visceral adiposity affects the prevalence of CRC.

Our study demonstrated a significant relationship between visceral obesity and CRC in females, in contrast to previous findings that showed a relatively weak or no relationship between CRC and visceral obesity in female groups [33][34][35]. However, those studies have the limitation that most did not adjust for menopausal status and hormone replacement status, which may affect the relationship between visceral obesity and CRC. For example, Tobias et al. [8] reported a significant relationship between CRC risk and the waist–hip ratio only in postmenopausal women who had not used HRT, compared with HRT users. Because our data were obtained from postmenopausal women without HRT, our results may reflect the association of visceral obesity and CRC after minimizing the countering beneficial effects of exogenous oestrogen replacement. Additionally, many previous studies have shown a significant association between the risk of CRC and body composition, including waist circumference and waist:hip ratio, in males [8,36].
Therefore, although we only investigated the relationship between CRC and visceral obesity in females, it is possible that these significant relationships also exist in the male population. Large-scale prospective studies are required to examine the precise role of gender in the relation between cancer prevalence and visceral obesity.

Our study has several limitations. First, the cross-sectional design cannot establish a causal relationship between CRC and visceral fat area. Although our hypothesis suggests that visceral obesity might induce a higher risk of CRC, further prospective interventional studies are needed to elucidate this relationship. Second, we studied a small number of women who visited a single hospital; therefore, our results cannot be generalized to the population at large. Third, we could not compare the levels of proinflammatory cytokines and adipokines that may act as important mediating factors, because we used data from patients who visited the hospital for health check-ups or for preoperative evaluation. However, our results showed significantly higher WBC counts in the CRC group compared with the control group, which reflects the systemic inflammatory status associated with CRC. Finally, due to the retrospective data collection method, clinically important variables, such as socio-economic status (including education and household income), could not be adjusted for and may affect our results.

In conclusion, our results demonstrate that visceral adiposity is independently associated with the prevalence of CRC in Korean women. Although we could not determine causality, our results collectively suggest that visceral obesity, as well as total obesity, may be associated with the risk of CRC. Further interventional prospective studies with larger sample sizes are required to understand the causal relationship between visceral adiposity and the prevalence of CRC, as well as to determine the benefits of controlling visceral obesity for reducing CRC risk.
Comments on 2D dilaton gravity system with a hyperbolic dilaton potential We proceed to study a (1+1)-dimensional dilaton gravity system with a hyperbolic dilaton potential. Introducing a couple of new variables leads to two copies of Liouville equations with two constraint conditions. In particular, in conformal gauge, the constraints can be expressed with Schwarzian derivatives. We revisit the vacuum solutions in light of the new variables and reveal its dipole-like structure. Then we present a time-dependent solution which describes formation of a black hole with a pulse. Finally, the black hole thermodynamics is considered by taking account of conformal matters from two points of view: 1) the Bekenstein-Hawking entropy and 2) the boundary stress tensor. The former result agrees with the latter one with a certain counter-term. Introduction The AdS/CFT correspondence [1 -3] has been recognized as a realization of the holographic principle [4,5]. However, the rigorous proof has not been provided yet, although the integrable structure behind the correspondence has led to great advances along this direction (For a comprehensive review see [6]). A recent interest in the study of AdS/CFT is to construct a toy model which realizes a holographic principle at the full quantum level. Recently, Kitaev proposed an intriguing model [7] as a variant of the Sachdev-Ye (SY) model [8]. This is a one-dimensional quantum-mechanical system composed of N ≫ 1 Majorana fermions with a random, all-to-all quartic interaction. This model is now referred to as the Sachdev-Ye-Kitaev (SYK) model. For some recent progress, see [9][10][11][12][13][14][15][16][17][18][19][20]. A possible candidate of the gravity dual for the SYK model is a 1+1 dimensional dilaton gravity system with a certain dilaton potential (For a nice review see [21]). This model was originally introduced by Jackiw [22] and Teitelboim [23]. Then it has been further studied by Almheiri and Polchinski [24] in light of holography [25][26][27]. This model contains interesting solutions like renormalization group flow solutions, black holes, time-dependent solutions which describe formation of a black hole. Since the black hole is asymptotically AdS 2 , the boundary stress tensor computed in the standard manner leads to the associated entropy, which agrees with the Bekenstein-Hawking entropy. In the preceding work [28], we have studied deformations of this dilaton gravity system by employing a Yang-Baxter deformation technique [29][30][31]. The dilaton potential is deformed from a simple quadratic form to a hyperbolic function-type potential. We have presented the vacuum solutions and studied the associated geometries. As a remarkable feature, the UV region of the geometries is universally deformed to dS 2 and a new naked singularity is developed 1 . The vacuum solutions include a deformed black hole solution, which reduces to the original solution [24] in the undeformed limit. We have computed the entropy of the deformed black hole by evaluating the boundary stress tensor with a certain counter-term. The resulting entropy still agrees with the Bekenstein-Hawking entropy. In this paper, we will further study the dilaton gravity system with the hyperbolic dilaton potential. Introducing a couple of new variables leads to two copies of Liouville equations with two constraint conditions. As a remarkable feature, the constraints can be expressed in terms of Schwarzian derivatives. 
The new variables are powerful tools for studying solutions and enable us to reveal the dipole-like structure of the vacuum solutions. As a benefit, we present a time-dependent solution which describes the formation of a black hole with a pulse. Finally, black hole thermodynamics is considered by taking account of conformal matters from two points of view: 1) the Bekenstein-Hawking entropy and 2) the boundary stress tensor. The former result agrees with the latter one with a counter-term modified in a certain way.

This paper is organized as follows. In section 2, we give a short review of the deformed dilaton gravity system. Then we revisit the vacuum solutions by introducing a couple of new variables. In section 3, we consider how to treat matter fields and derive a time-dependent solution which describes the formation of a black hole with a pulse. In section 4, adding conformal matters, we derive a deformed black hole solution. Then we reproduce the Bekenstein-Hawking entropy by computing the boundary stress tensor with a certain counter-term. Section 5 is devoted to conclusion and discussion.

A dilaton gravity system with a hyperbolic potential
In the following, we will work in the Lorentzian signature, and the (1+1)-dimensional spacetime is described by the coordinates x µ = (t, x) (µ = 0, 1). This system contains the metric g µν and the dilaton Φ as the basic ingredients. We may add other matter fields but will not do so in section 2. The classical action (2.1) for g µν and Φ is given in [28], where G is a two-dimensional Newton constant, and R and g are the Ricci scalar and the determinant of g µν, respectively. The last term is the Gibbons-Hawking term, which contains an induced boundary metric γ tt and an extrinsic curvature K. A remarkable point of this action is the second term, a dilaton potential of hyperbolic-function type, where η is a real constant parameter 2. In the η → 0 limit, the classical action (2.1) reduces to the JT model (without matter fields). Thus the classical action (2.1) can be regarded as a deformation of the JT model.

The vacuum solutions
The deformed model (2.1) gives rise to a three-parameter family of vacuum solutions 3 [28], where X and P are defined as in (2.5), and the products X · P and P 2 are given accordingly. Here the metric of the embedding space M 2,1 is taken as η IJ = diag(−1, 1, −1). This family, labeled by α, β, and γ, is associated with the most general Yang-Baxter deformation. In other words, the effect of the Yang-Baxter deformation appears only through the factor η(X · P). It should also be remarked that a black hole solution is contained as a special case [28].

Footnote 2: In this paper, we slightly changed the normalization of the dilaton potential from [28]; therefore, the constant factors of the solutions are also changed. Footnote 3: Here the dilaton is turned on, but the solution is still called a "vacuum" solution, according to custom.
Introducing a couple of new variables Let us first rewrite the metric into the following form: Then the classical action (2.1) can be rewritten as (2.10) In order to simplify this expression, it is helpful to introduce a couple of new valuables: Then the action (2.10) becomes the sum of two Liouville systems: By taking variations of the action (2.12) with respect to ω 1 and ω 2 , it is easy to derive the following equations of motion:R Taking a variation withg µν gives rise to the constraints whereT (1) µν andT (2) µν are the energy-momentum tensors defined as, respectively, 15) and the explicit forms are given bỹ Thus, by employing the new variables ω 1 and ω 2 , the deformed system (2.1) has been simplified drastically. Conformal gauge and Schwarzian derivatives In the following, we will work with the usual conformal gauge Then the equations of motion obtained from (2.1) are given by By solving the above equations, the general vacuum solution has been discussed in [28]. However, as we will show below, the deformed model (2.1) has a nice property, with which we can discuss classical solutions in a more systematic way. New variables revisited In conformal gauge, the classical action for ω 1 and ω 2 is further simplified as The equations of motion take the standard forms of the Liouville equation The general solutions of Liouville equation are given by 4 are arbitrary holomorphic and anti-holomorphic functions, respectively. Note that the equations (2.23) can be expressed by using the metric and dilaton. (2.25) By summing and subtracting them each other, the equations of motion (2.18) and (2.19) can be reproduced. By takingg µν = η µν in (2.16), the energy-momentum tensors are also rewritten as The By using the general solutions (2.24), the constraint conditions for the holomorphic (antiholomorphic) functions X + i (X − i ) can be rewritten as These constraints mean that the holomorphic (antiholomorphic) functions should be the same functions, up to linear fractional transformations Because e 2ω 1 > 0 and e 2ω 2 > 0 , determinants of the transformations must be positive: This ambiguity comes from the appearance of Schwarzian derivatives. Vacuum solutions revisited In this subsection, let us revisit the vacuum solutions by employing a couple of the new variables (2.11) . Before going to the detail, it is helpful to recall that the original metric and dilaton can be reconstructed from ω 1 and ω 2 through the following relations: Here let us take a parametrization for the linear fractional transformations, which come from (2.28) as follows: 5 5 Note that we can take this parametrization without loss of generality. Because of the constraint (2.30), we have to work in a restricted parameter region with Then the solutions in (2.24) are expressed as (2.34) Thus the general solution of ω and Φ 2 are also determined through the relation (2.31). Given that X ± (x ± ) = x ± , the deformed metric and dilaton become 6 This metric is the same as the result obtained in [28] as a Yang-Baxter deformation of AdS 2 , up to a scaling factor. For concreteness, let consider a simple case of (2.32) with α = 1, β = γ = 0 . Then conformal factors of the metrics for X 1 and X 2 are given by, respectively, (2.36) For each of the AdS 2 factors, the origin of the z-direction is shifted by ±η . Another example is the case with α = 1/2, β = 0, γ = µ/2 (where µ is a positive), in which we have considered a deformed black hole solution [28] 7 . 
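As a consistency check of the Liouville structure just described, note that the equation $\partial_+\partial_-\omega + e^{2\omega} = 0$ (a form that survives in Section 4) has the classical general solution $e^{2\omega} = \partial_+X^{+}\,\partial_-X^{-}/(X^{+}-X^{-})^{2}$ for arbitrary functions $X^{\pm}(x^{\pm})$. The following SymPy sketch verifies this identity symbolically; the function names are ours, chosen for illustration.

```python
import sympy as sp

xp, xm = sp.symbols("x_plus x_minus")
Xp = sp.Function("Xp")(xp)  # arbitrary holomorphic function X^+(x^+)
Xm = sp.Function("Xm")(xm)  # arbitrary anti-holomorphic function X^-(x^-)

# Candidate general solution: e^{2*omega} = (dX+/dx+)(dX-/dx-)/(X+ - X-)^2
omega = sp.log(sp.diff(Xp, xp) * sp.diff(Xm, xm) / (Xp - Xm) ** 2) / 2

# Residual of the Liouville equation: d+ d- omega + e^{2*omega}
residual = sp.diff(omega, xp, xm) + sp.exp(2 * omega)
print(sp.simplify(residual))  # -> 0 for any choice of Xp and Xm
```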
6 Here the condition (2.33) is consistent with the positivity of e 2ω1 and e 2ω2 . 7 Note that for arbitrary values of α, β and γ , black hole solutions can be realized by employing the following coordinate transformation, (2.37) Solutions with matter fields In this section, we shall include additional matter fields. Then the action is given by a sum of the dilaton part S Φ and the matter part S matter like Note here that we have not specified the concrete expression of the matter action S matter yet. In general, S matter may depend on the metric, dilaton as well as additional matter fields. Hence the inclusion of matter fields leads to the modified equations: Furthermore, one needs to take account of the equation of motion for the matter fields, which is provided as the conservation law of the energy-momentum tensor T µν defined as So far, it seems difficult to treat the general expression of T µν . Hence we will impose some conditions for T µν hereafter. A certain class of matter fields For simplicity, let us consider a certain class of matter fields by supposing the following properties: This case is very special because the equations of motion for ω 1 and ω 2 remain to be a pair of Liouville equations because the right-hand sides of the first and second equations in (3.2) vanish. Hence one can still use the general solutions (2.24). The constraints are also still written in terms of Schwarzian derivatives, but slightly modified like That is, the right-hand side does not vanish. To solve the set of equations, it is helpful to introduce new functions ϕ ± = ϕ ± (x ± ) defined as Note here that X ± 2 only have been utilized. Then by using ϕ ± , the Schwarzian derivatives can be rewritten as When the coordinates are taken as the constraints become Schrödinger equations as follows: Thus, for the simple class of matter fields, the constraints have been drastically simplified. A solution describing formation of a black hole As an example in the simple class, let us consider an ingoing matter pulse of energy E/(8πG): Note here that T µν does not depend on the dilaton Φ 2 and hence this case belongs to the simple class (3.4) . This pulse causes a shock-wave traveling on the null curve x − = 0 . Then the constraint for the anti-holomorphic part is written as By solving this equation, we obtain the following solution: Assuming the continuity, X − 2 is given by 8 Here a is an arbitrary integral constant and the scaling factor ϕ − (0) is fixed as (3.14) The remaining task is to determine X + 2 (x + ) . The constraint for ϕ + (x + ) is given by Thus one can determine ϕ + (x + ) and ∂ + X + 2 (x + ) as where γ and δ are constants. Hence X + 2 is obtained as with new constants α and β . For simplicity, we will set α = δ = 1, β = γ = 0 . That is, Thus one can obtain a solution of the two Liouville equations as follows: . As a result, the original metric and dilaton are given by . The undeformed limit η → 0 leads to a solution describing formation of a black hole in the undeformed model [24]. Note here that the energy-dependent constant in Φ 2 vanishes in the undeformed limit. At least so far, we have no idea for the physical interpretation of this constant. The deformed system with a conformal matter In this section, we will consider conformal matters, which do not belong to the previous class (3.4), and discuss the effect of them to thermodynamic quantities associated with a black hole solution. 
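The passage from the Schwarzian constraints to Schrödinger equations in Section 3 presumably rests on a standard identity. On our reading, with $\phi^{\pm} = (\partial_\pm X_2^{\pm})^{-1/2}$,

$$\{X_2^{\pm};\,x^{\pm}\} \;=\; -\,2\,\frac{\partial_\pm^{2}\phi^{\pm}}{\phi^{\pm}},$$

so a constraint of the form $\{X_2^{\pm};x^{\pm}\} = V_{\pm}(x^{\pm})$ is equivalent to the linear (Schrödinger-type) equation $\partial_\pm^{2}\phi^{\pm} + \tfrac{1}{2}V_{\pm}\,\phi^{\pm} = 0$, which is what makes the pulse solution above tractable.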
Let us study a conformal matter whose dynamics is governed by the classical action: Here N denotes the central charge of χ . It is worth noting that the conformal matter couples to dilaton as well as the Ricci scalar, in comparison to the undeformed case [24]. Then the energy-momentum tensor and a variation of S matter with respect to the dilaton are given by Hence the equations of motion are given by Note here that the third equation is still the Liouville equation, while the second equation acquired the source term due to the matter contribution. As we will see below, the system of equations (4.3) is still tractable and one can readily find out a black hole solution including the back-reaction from the conformal matter χ . A black hole solution with a conformal matter Let us derive a black hole solution. Given that the solution is static, χ can be expressed as GNη ∂ + ∂ − ω 1 + e 2ω 1 = 0 , GNµ . (4.5) Note that a numerical coefficient in the first equation is shifted by a certain constant as a non-trivial contribution of the conformal matter. Still, we can use the general solutions of Liouville equations given by By using X ± i (i = 1, 2) and the Schwarzian derivative, the constraints can be rewritten as GNµ . (4.7) It is an easy task to see that the hyperbolic-type coordinates satisfy the constraints (4.7) , where L ± 1,2 denote linear fractional transformations as in (2.29). Note that each of X ± 1,2 covers a partial region of the original spacetime. Hence the coordinate transformations (4.8) may lead to a black hole solution [24,28]. In fact, the Schwarzian derivatives have particular values like and hence these coordinates satisfy the constraints. Here we choose the following linear transformations L ± 1,2 : one can derive a deformed black hole solution with conformal matters: GNη . (4.12) The matter effect just changes the overall factor of the metric and shifts the dilaton by a constant. In the undeformed limit η → 0 , this solution reduces to a black hole solution with conformal matters presented in [24]: Black hole entropy In this subsection, we shall compute the entropy of the black hole solution with a conformal matter given in (4.11) and (4.12) from two points of view: 1) the Bekenstein-Hawking entropy and 2) the boundary stress tensor with a certain counter-term. 1) the Bekenstein-Hawking entropy Let us first compute the Bekenstein-Hawking entropy. From the metric (4.11), one can compute the Hawking temperature as (4.14) From the classical action, the effective Newton constant G eff is determined as Nχ . Note that the presence of the conformal matter fields is reflected as a shift of G eff . Given that the horizon area A is 1 , the Bekenstein-Hawking entropy S BH is computed as The terms in the last line are constants independent of the Hawking temperature. 2) the boundary stress tensor The next is to evaluate the entropy by computing the boundary stress tensor with a certain counter-term. In conformal gauge, the total action including the Gibbons-Hawking term can be rewritten as By using the explicit expression of the black hole solution in (4.11) and (4.12), the on-shell bulk action can be evaluated on the boundary, As argued in [28] 9 , the singularity of (4.11) is identified as the boundary Z 0 : As the bulk action approaches the boundary (Z → Z 0 ) , the bulk action (4.18) diverges and hence one needs to introduce a cut-off. 
When the regulator ǫ is introduced such that Z − Z 0 = ǫ , the on-shell action is expanded as To cancel the divergence, it is appropriate to add the following counter-term: 10 For an earlier argument for the relation between the singularity and the holographic screen, see [34]. 10 Note that in the undeformed limit η → 0, this counter-term reduces to the one in [24]. When µ = N = 0, this term becomes the dilaton potential 1 η sinh(2ηΦ 2 ). Here L is the overall factor of the metric defined as and scalar functions F and G are defined as (4.23) The extrinsic metric γ tt on the boundary is evaluated as In the undeformed limit η → 0, this counter-term reduces to This is nothing but the counter-term utilized in the undeformed model [24]. It is straightforward to check that the sum S = S Φ + S matter + S ct becomes finite on the boundary by using the expanded form of the counter-term (4.21): In a region near the boundary, the warped factor of the metric (4.11) is expanded as Hence, by normalizing the boundary metric aŝ the boundary stress tensor is defined as After all, T tt is evaluated as To compute the associated entropy, T tt should be identified with energy E like where we have used the expression of the Hawking temperature (4.14) . Then by solving the thermodynamic relation, the associated entropy is obtained as Here S T H =0 has appeared as an integration constant that measures the entropy at zero temperature. Thus the resulting entropy precisely agrees with the Bekenstein-Hawking entropy (4.16) , up to the temperature-independent constant. Conclusion and discussion In this paper, we have considered some matter contributions to a (1+1)-dimensional dilaton gravity system with a hyperbolic dilaton potential. By introducing a couple of new variables, this system has been rewritten into a pair of Liouville equations with two constraints. In particular, the constraints in conformal gauge can be expressed in terms of Schwarzian derivatives. We have revisited the vacuum solutions and revealed its dipole-like structure. The new variables are so powerful in studying solutions. As a benefit, we have constructed a time-dependent solution which describes formation of a black hole with a pulse. Finally, the black hole entropy has been considered by taking account of conformal matters. The Bekenstein-Hawking entropy agrees with the entropy computed from the boundary stress tensor with a certain counter-term. There are some future directions. The first is to clarify a connection between the system considered here and the doubled formalism such as Double Field Theory (DFT) [35][36][37] and Double Sigma Model (DSM) [38][39][40][41]. As well recognized, Yang-Baxter deformations of type IIB string theory defined on AdS 5 ×S 5 [42,43] are closely related to DFT and DSM [44][45][46] via the generalized supergravity [47,48]. A similar connection may be expected in the present lower-dimensional case as well, because the present system was originally constructed by employing the Yang-Baxter deformation technique. The second is to reveal the underlying symmetry. By following a nice work by Ikeda and Izawa [49], the hyperbolic dilaton potential leads to the expected q-deformed sl(2) algebra realized in the associated non-linear gauge theory. Elaborating this symmetry algebra helps us to identify the holographic dual. The third is to a generalization to include arbitrary matter fields and discuss the associated one-dimensional boundary theory by following [25,26]. 
It seems likely that the anticipated system is a deformed Schwarzian theory. Finally, it is interesting to consider a similar deformation of the asymptotically flat case [50] by following [51]. It is also nice to study how the holographic relation should be modified in the case with a reflecting dynamical boundary by generalizing [52]. The integrability techniques discussed there would still be useful even after performing Yang-Baxter deformations. We hope that the dipole-like structure uncovered here would shed light on a new aspect of the 2D dilaton gravity system and further the holographic principle as well.
2017-04-24T18:42:24.000Z
2017-04-24T00:00:00.000
{ "year": 2017, "sha1": "97abbbdcd9755cdd88217cfc6e199aeb492c54e3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.nuclphysb.2017.07.013", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "97abbbdcd9755cdd88217cfc6e199aeb492c54e3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119254991
pes2o/s2orc
v3-fos-license
Amplification of Hypersound in Graphene with degenerate energy dispersion Hypersound amplification/absorption of acoustic phonons in Graphene with degenerate energy dispersion $\varepsilon(p)$ near the Fermi level was theoretically studied. For $k_B T \ll 1$ and $ql \gg 1$, the dependence of the absorption coefficient $\Gamma/\Gamma_0$ on $V_D/V_s$ was studied; the results satisfied the Cerenkov condition. That is, when $V_D/V_s > 1$, amplification was obtained, but for $V_D/V_s < 1$, absorption was obtained, which could lead to the Acoustoelectric Effect (AE) in Graphene. A linear dependence of $\Gamma/\Gamma_0$ on $\omega_q$ was observed; the result obtained qualitatively agreed with an experimentally observed acoustoelectric current in Graphene via the Weinreich relation. It is interesting to note from this study that frequencies above $10\,$THz can be attained for $V_D = 1.1V_s$. This study permits the use of Graphene as a hypersound phonon laser (SASER). Introduction Graphene, a member of the carbon allotropes, has exceptional properties for future nanoelectronics [1,2,3,4]. It is an ideal two-dimensional electron gas (2DEG) system made up of a single layer of carbon atoms, with a high electron mobility ($\mu$) at room temperature and high mechanical and thermodynamic stability [5]. Several unusual phenomena, such as the half-integer quantum Hall effect [6], non-zero Berry's phase [7], and minimum conductivity [8], have been observed experimentally in Graphene. The most interesting property of Graphene is its linear energy dispersion $E = \pm\hbar V_F|k|$ (the Fermi velocity $V_F \approx 10^8\,$cm s$^{-1}$) at the Fermi level with low-energy excitations. This makes graphene applicable in advanced electronic and optoelectronic devices such as sub-terahertz field-effect transistors [9], infrared transparent electrodes [10] and THz plasmonic devices [11]. Currently, among the various studies on Graphene attracting much attention is the generation and detection of hypersound amplification or absorption of acoustic phonons [12]. It is known that when an acoustic phonon passes through a semiconductor, it may interact with various elemental excitations, which may lead to amplification or absorption of the phonons. The idea of acoustic wave amplification in bulk material was theoretically predicted by Tolpygo (1956), Uritskii [13], and Weinreich [14], and demonstrated in N-Ge by Pomerantz [15]. Hypersound generation in bulk [16] and low-dimensional materials such as Superlattices [17,18,19,20], Cylindrical Quantum Wires [21], Quantum Wells [22] and Graphene Nanoribbons (GNR) [23] has been studied. Akin to Cerenkov acoustic-phonon emission, when the drift velocity $V_D$ of the electrons exceeds the sound velocity ($V_s$) of the host material, amplification of the acoustic phonons occurs [24], whereas $V_D < V_s$ causes absorption. This has been utilised experimentally to confirm the breakdown of the quantum Hall effect [25], the generation of coherent phonon-polariton radiation [26], and large acoustic gain in coherent phonon oscillators in semiconductors [27]. Furthermore, the emission and absorption of acoustic phonons are used to provide detailed information on the excitation and relaxation mechanisms in semiconductors via the deformation potential, where the effect of the interactions can be used to determine the physical properties of the material.
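The Cerenkov-type criterion invoked above can be stated compactly. For an acoustic phonon of frequency $\omega_q = qV_s$ propagating at an angle $\theta$ to the electron drift, net stimulated emission requires the drifted carriers to feed energy into the wave, which in the standard formulation reads

$$\omega_q - \mathbf{q}\cdot\mathbf{V}_D < 0 \;\;\Longleftrightarrow\;\; V_D\cos\theta > V_s,$$

so for propagation along the drift direction the threshold is simply $V_D > V_s$; this is the sign change of $\Gamma/\Gamma_0$ at $V_D/V_s = 1$ studied in this paper.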
In particular, acoustic phonons providing terahertz ($10^{12}\,$Hz) hypersonic sources can lead to the attainment of a phonon laser, or SASER [28,29], in graphene via the Cerenkov effect, which is an intense field of research. Following the works of Nunes and Fonseca [32], Zhao et al. [33] proposed the possibility of attaining Cerenkov acoustic-phonon emission in Graphene, whilst Insepov et al. [31] experimentally demonstrated surface acoustic wave amplification by a d.c. voltage supply in Graphene. In this paper, the Cerenkov effect in graphene is achieved, where $V_D/V_s > 1$ gives hypersound amplification and $V_D/V_s < 1$ gives absorption of acoustic phonons. The motivation for this work is to provide a theoretical framework that can lead to the attainment of a SASER in Graphene, for use as a phonon spectrometer, for the generation of high-frequency electric oscillations, and for non-destructive testing of microstructures and acoustic scanning systems. The paper is organised as follows: in the theory section, the theory underlying the amplification (absorption) of acoustic phonons via the Cerenkov effect is presented. In the numerical analysis section, the final equation is analysed and presented in graphical form. Lastly, the conclusion is presented in section 4. Theory We proceed following the works of [32]; here the acoustic wave is considered as phonons of frequency $\omega_q$ in the short-wave region $ql \gg 1$ ($q$ is the acoustic wave number, $l$ is the electron mean free path). The kinetic equation for the acoustic phonon population $N_q(t)$ in the graphene sheet is given by Eqn (1), where $g_s = g_v = 2$ account for the spin and valley degeneracies respectively, and $N_q(t)$ represents the number of phonons with wave vector $q$ at time $t$. The factor $N_q + 1$ accounts for the presence of $N_q$ phonons in the system when the additional phonon is emitted. The factor $f_k(1 - f_{k'})$ represents the probability that the initial state $k$ is occupied and the final electron state $k'$ is empty, whilst the factor $N_q f_k(1 - f_{k'})$ is that of the boson and fermion statistics. The unperturbed electron distribution function is given by the shifted Fermi-Dirac function $$f(\mathbf{p}) = f_p\big(\varepsilon(\mathbf{p}) - \mathbf{p}\cdot\mathbf{V}_D\big), \qquad (2)$$ where $f_p$ is the Fermi-Dirac equilibrium function, with $\chi$ being the chemical potential, $\mathbf{p}$ the momentum of the electron, $\beta = 1/kT$, $k$ the Boltzmann constant and $\mathbf{V}_D$ the net drift velocity relative to the ion lattice sites. In Eqn (1), the summation over $k$ and $k'$ can be transformed into integrals by the prescription $$\sum_{k} \;\rightarrow\; \frac{A}{(2\pi)^2}\int d^2k, \qquad (3)$$ where $A$ is the area of the sample; assuming that $N_q(t) \gg 1$ then yields Eqn (4), where $\Lambda$ is the deformation potential constant and $\rho$ is the density of the graphene sheet. At low temperature, $k_B T \ll 1$, the distribution function becomes $f(k) = \exp(-\beta\varepsilon(k))$. Eqn (4) can then be expressed as Eqn (5); using standard integrals, Eqn (5) can finally be expressed as Eqn (6). Numerical Analysis Eqn (6) is analysed numerically to give normalized graphs of $\Gamma/\Gamma_0$ against $V_D/V_s$ and $\omega_q$. The following parameters were used: $\Lambda = 9\,$eV, $T = 10\,$K, $V_s = 2.1 \times 10^6\,$cm s$^{-1}$ and $q = 10^5\,$cm$^{-1}$. In Figure 1, the dependence of $\Gamma/\Gamma_0$ on $\omega_q$ is plotted. The graph was obtained at $V_D/V_s < 1$. The inset shows an experimentally obtained graph of the acoustoelectric current for gate-controlled Graphene [37]. The hypersound absorption graph qualitatively agreed with the experimentally obtained graph via the Weinreich relation [36]. In Figure 2a, the dependence of $\Gamma/\Gamma_0$ on $V_D/V_s$ is analysed.
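Since the explicit form of Eqn (6) is not reproduced here, the following is only a schematic numerical illustration of the sign change in Figures 1-2: a gain coefficient proportional to $(1 - V_D/V_s)$, the factor that the Weinreich relation associates with the Cerenkov threshold, with all material prefactors lumped into an arbitrary constant. It is a stand-in, not the paper's Eqn (6).

```python
import numpy as np
import matplotlib.pyplot as plt

ratio = np.linspace(0.0, 2.0, 400)     # V_D / V_s

# Schematic model: Gamma/Gamma_0 ~ C * (1 - V_D/V_s).
# C lumps the deformation-potential and phase-space prefactors of Eqn (6).
C = 1.0
gamma_ratio = C * (1.0 - ratio)

plt.plot(ratio, gamma_ratio)
plt.axhline(0.0, color="k", lw=0.5)
plt.axvline(1.0, color="k", ls="--", lw=0.5)  # Cerenkov threshold V_D = V_s
plt.xlabel(r"$V_D/V_s$")
plt.ylabel(r"$\Gamma/\Gamma_0$ (arb. units)")
plt.title("Absorption (V_D < V_s) to amplification (V_D > V_s), schematic")
plt.show()
```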
From the graph, when $V_D/V_s < 1$, absorption was observed, but when $V_D/V_s > 1$, an amplification of hypersound was obtained, as indicated in the work of Nunes and Fonseca [32]. To further examine the observed amplification (absorption), a 3D graph of $\Gamma/\Gamma_0$ was plotted as a function of $\omega_q$ and $V_D/V_s$. Conclusion The generation of hypersound amplification (absorption) of acoustic phonons in a gate-controlled graphene is studied. The absorption obtained qualitatively agreed with an experimentally obtained acoustoelectric current in a gate-controlled graphene via the Weinreich relation. For $V_D/V_s > 1$, the hypersound amplification obtained is similar to that of Nunes and Fonseca. For a drift velocity of $V_D = 1.1V_s$, a field of $E = 11.5\,$V/cm was calculated. At a frequency of $0.2\,$THz, an amplification of $\Gamma/\Gamma_0 = -3.17$ is attained. From this work, hypersound studies in graphene offer a much better source of high phonon frequencies than homogeneous semiconductors, which permits the use of graphene as a hypersound phonon laser (SASER).
2015-06-21T15:26:15.000Z
2015-03-25T00:00:00.000
{ "year": 2015, "sha1": "5dbadf373f98d59b9e6330e4f2453e9e4f84ac34", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ef4e3fbfd82cc7ddbf0b6dcbd059193e3f85a84c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
218628631
pes2o/s2orc
v3-fos-license
Certain Classes of Univalent Functions With Negative Coefficients Defined By General Linear Operator In this study, a subclass of an univalent function with negative coefficients which is defined by a new general Linear operator have been introduced. The sharp results for coefficients estimators, distortion and closure bounds, Hadamard product, and Neighborhood, and this paper deals with the utilizing of many of the results for classical hypergeometric function, where there can be generalized to m-hypergeometric functions. A subclasses of univalent functions are presented, and it has involving operator which generalizes many well-known. Denote A the class of functions f and we have other results have been studied. Introduction Many researchers such as Mohammed and Darus [1], Aldweby and Darus [2],and others have used the mhypergeometric functions for studying certain families of mathematic viable functions in an open disk unit. The m-hypergeometric functions are generalized configuration of the classical hypergeometric functions. Then by assuming the limit m → 1, it would return to a classical hypergeometric function. The formal set of hypergeometric functions have been used and introduced by many famous researchers were started by Euler in (1748), Gauss (1813) and Cauchy (1852) see (Juma [3]). Also, it was converted a simple notation into a systematic theory of hypergemetric function in same trend of theory of Gauss hypergeometric function. Here, this study deals with the utilizing of many of the results for classical hypergeometric function, where there can be generalized to m-hypergeometric functions. In this work, a subclasses of univalent functions are introduced, and it has involving operator ( )which generalizes many well-known. Denote A the class of functions f of the form ( ) ∑ ( ) which are analytic and univalent in the open unit disk Ȗ={z ℂ |z|<1}. A function" f A is said to be starlike of complex order if the following" condition (see [4]) is satisfied: For complex* parameters c 1 ,…..c t and b 1 , where c any complex number and in terms of the Gamma function ( ) The study suggests that note that and by utilizing ratio test , the series (1.3)converges absolutely in open unit disk Ȗ, |m|<1 Is the m-Gauss hypergeometric function see [4], [5]. Recently Mohammed and Darus [1] defined the following: The Srivastava-Attiya operator T s,c : A → A is defined in [6] as: where z Ȗ , c ℂ {0, -1, -2, .….}, s ℂ and f A. This linear operator T S,C can be written as T S,C f (z)=G s,c (z) * *f (Z ) = (1+c) s ( (z,s,c)-c -s )*f (z), by utilizing the Hadamard product (convolution).Here, is the well-known Hurwitz -Lerch zeta function (see [6], [7]). It is also an important function of Analytic Number Theory such the De-Jonquiere function: We can define the linear operator (c i , b j )( f ) : A → A as follows: . 2."Confficients estimates and Other properties Sine |Re(z)|<|z| for all z , we have The above bounds are sharp. Proof. By theorem 1 , we have ∑ ( The result is sharp for function f (z),defined by Proof.l Let f i ( ) ∑ (i=1,2) belong to ( ) and let g(Z 1 )= 1 F 1( Z)+ 2 F 2 (Z) 1 with 1 and 2 no negative and 1 + 2 =1and we write ( ) The study " shall further try, to obtain the extreme " points in the following theorem". and we obtain In view of. 
theorem 1, this shows " that f(z) ( ) " " Conversely", and ∑ then we get ( ) ( ) ∑ ( ) □ Then Then , the Hadamrd product h (z) defiend by ( ) ∑ is in the sub class ( ) when 3."Neighbourhood and Hadamard product properties We get only to find the lagest μ 2 such that. Now by Cauchy -Schwarz inequality , we get We need only to show that Consequently , we also need to prove that 7) and Then the function ( ) defined as ( ) ∫ ( ) also belongs to ( ) Proof:: By virtue of G * (z) it follows" from (1.
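For reference, the m-analogue quantities that the displays in Section 1 presumably intend are the standard ones from the basic (q-)hypergeometric literature, written here with deformation parameter $m$; this is our reconstruction of the notation, not a verbatim restoration of (1.2)-(1.3):

$$(c;m)_n = \prod_{k=0}^{n-1}\bigl(1 - c\,m^{k}\bigr), \qquad (c;m)_0 = 1,$$

$$\;_2\Phi_1\!\left(c_1,c_2;\,b_1;\,m,z\right) \;=\; \sum_{n=0}^{\infty}\frac{(c_1;m)_n\,(c_2;m)_n}{(m;m)_n\,(b_1;m)_n}\,z^{n},\qquad |m|<1,\;\; z\in\mathbb{U},$$

which converges absolutely in the open unit disk by the ratio test and reduces to the classical Gauss hypergeometric series as $m\to 1$ (with the parameters specialized to powers of $m$).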
2020-05-15T01:00:38.277Z
2019-12-22T00:00:00.000
{ "year": 2020, "sha1": "546937fd2d2107eecd7f21850cc4cb65d2040786", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "546937fd2d2107eecd7f21850cc4cb65d2040786", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
265326172
pes2o/s2orc
v3-fos-license
The Perils of Parliamentarism in Contrast to Presidentialism in Democratic Transition: This paper raises doubts about the argument, found in parts of the academic literature, that parliamentarism is better than presidentialism for new democracies in the transitional period. It finds instead that parliamentarism can also pose critical perils to democratic transition, at least in some political situations, such as increasing the instability of the government, encouraging political speculation and polarization, and allowing ruling parties to benefit from a manipulated electoral system. Presidentialism, by contrast, can be conducive to addressing these perils of parliamentarism and can enable a more robust, stable, and successful transition from authoritarianism to democracy for many countries. This paper takes presidentialism in the Philippines and Taiwan as examples. Compared with presidentialism, parliamentarism has several key weaknesses. They include government instability under minority rule, polarization in countries with deep social cleavages, incentives for politicians to pursue political speculation and defect from their electorates, and the likelihood of electoral-system manipulation by ruling parties. Thus, for new democracies, presidentialism may perform better than parliamentarism, at least on some occasions. Introduction Nowadays, most political regimes have acknowledged democratic principles as the source of their legitimacy and claim to be democracies. However, not all self-proclaimed "democracies" have established robust institutions that can respect and defend democratic values. For many countries undergoing political change, the fate of their democratic transition usually hinges on whether their democratic institutions can balance stability and efficiency with inclusiveness and pluralism [1]. Therefore, it is crucial for these transitional regimes to make appropriate institutional choices and arrangements. One of the most decisive choices that many new democracies confront and debate is whether to adopt a parliamentary or a presidential system. In parliamentary democracies, the executive branch's legitimacy rests on the confidence of the parliament. Presidents or monarchs of parliamentary regimes are usually symbolic figures serving as head of state but with few substantial executive powers [2]. In contrast, under presidential systems, the president is both the head of state and the head of the government, and the president's legitimacy is independent of the legislature [2]. Both systems have advantages and disadvantages. For democracies in the transitional stage, many scholars like Linz [1] prefer parliamentarism to presidentialism out of considerations of stability and pluralism. However, this paper will discuss the potential perils of adopting parliamentarism in new democracies. The paper first discusses the potential risks of parliamentarism from both theoretical and empirical perspectives, comparing it with presidentialism in the cases of some transitional political entities, and then defends the advantages of presidentialism in contrast to parliamentarism by taking some new democracies as examples. 2.
The Perils of Parliamentarism Some literature argues that parliamentary system is better than presidentialism in maintaining political stability in democratic transitions.In the parliamentary system, the government is formed by the legislature's majority party.As Linz [1] argued, the system can function well to avoid the conflict between the executive and legislative power.It technologically could avoid the legitimacy conflict between the popularly elected president and parliament that may belong to divergent political camps.However, this advantage can become a source of governmental instability if the ruling party fails to enjoy an absolute majority in the parliament.In this circumstance, the incumbent party usually has to resort to forming a coalition government by seeking support from other minor parties to keep itself in power.On this occasion, the temporary coalitions could be very fragile.The possibility of such kind of government finishing their full terms then drops dramatically.Therefore, the potential conflict within the ruling coalition and between the weak government and strong opposition parties can bring uncertainties to countries at the critical transition stage [3]. Besides the possibility of the minority government, parliamentarism could also encourage another source of regime instability to emerge, which is political speculation behaviors.Parties defeated in the general elections by popular vote may use various means, like the promise of governmental positions, making political concessions, and granting economic benefits as enticements to lure ruling party representatives to turn to support them [3].This situation is more common and detrimental in those fragile democracies where political integrity and accountability are yet to be fully established and widely honored [2].Parliamentarism provides incentives and opportunities to ambitious politicians of opposition parties to make such political speculation because the executive power is generated from the legislative branch and can only be held accountable to the legislature.Thus, the defeated side could regain power through the "back door", as long as their parties get a majority of legislators' support.The people's will and democratic accountability mechanism could then be bypassed, as regime changes could happen regardless of previous election results [2]. 
The political instability in Malaysia from 2020 to 2022 is an example of the perils of parliamentarism. In 2018, the first democratic transfer of political power in the country took place, as Barisan Nasional (BN), the conservative coalition that had ruled for 61 years since Malaysia's independence, was defeated by Pakatan Harapan (PH), a more liberal and multi-ethnic coalition [4]. While Malaysia had been considered an electoral authoritarian state, with the BN regime criticized for implementing discriminatory policies against ethnic minorities and systematically repressing domestic opposition for years, this change was widely regarded as remarkable progress in Malaysia's democratization process [4]. According to the election results, the PH coalition enjoyed an absolute majority, with 113 out of 222 seats [5]. Unexpectedly, however, the reformist government collapsed merely two years after its victory. At the beginning of 2020, some PH representatives defected to the BN. The BN leaders, defeated by the popular vote in the 2018 election, returned to power by exploiting the intra-coalition conflicts of the PH and persuading enough legislators to defect from the PH-led government [6]. Although the power transition occurred overnight, regardless of the will of voters, it was consistent with the principle of parliamentarism, as the leader of the majority party or coalition can form the government regardless of whether the majority was gained through elections or defections [2]. If Malaysians could choose their head of government by popular vote under a presidential-style electoral system, the winner of the election would enjoy at least a five-year term even if his party lost control of the legislature. Defections would be meaningless, because the source of the executive branch's legitimacy would come directly from elections. The relative independence of the executive branch can also enable the head of government to carry out reforms during his term more conveniently and comprehensively, without fearing defections by allies or threats from the radical wing of the opposition parties. On the other hand, parliamentarism may also exacerbate political polarization. Linz [1] credited parliamentarism with avoiding zero-sum games and winner-take-all competition. There are also arguments that parliamentarism is better for democracies' consolidation because it isolates extremists and encourages consensus building [2]. But based on empirical observation, the arguments in favor of parliamentarism may hold for countries where the ideological cleavages between political forces are minor, yet they may not apply well to states struggling against divisiveness. In a political environment with a high degree of division, centrists are too weak to be influential, and political forces on both sides of the spectrum may have to seek allies among the extremists of their respective sides and shift further away from the center [7]. Under a presidential system, the president is expected to represent the whole country. The need for broad representation incentivizes candidates to expand their coalitions and appeal to more moderate voters. In contrast, a parliamentary system gives extreme parties more space to survive. It allows them to win seats and share political power by appealing only to their core electorates and consolidating their support, even at the cost of increasing polarization.
The breakdown of the Weimar republic can be an example to illustrate what may happen in a divisive state adopting parliamentarism.Although the Weimar Republic was not a typical parliamentary regime, it had many characteristics similar to parliamentarism, like the division of the head of state and government and the Chancellor was usually the leader of the parliament's majority party.Due to the socio-economic crisis and the proportional representative electoral system, the party system in the parliament became increasingly fragmented and divisive in the late 1920s [8].Both farleft and far-right parties gained tremendous ground during this period, and the space of centralists was significantly narrowed.This trend rendered the Weimar Republic to become a dysfunctional democracy as negotiations and compromises were almost impossible between parties at two extreme ends political spectrum, and finally led to the victory of the far-right Nazi Party in 1933 [8].Instead, a strong presidential system may save the democracy of the Weimar Republic as long as the president elected is a unifying figure who could exert his relative independence from partisan struggles in the parliament and make attempts to represent the divisive country as a whole. Moreover, for authoritarian states and new democracies, parliamentarism may have negative impacts on their democratization.Comparative research found that the electoral authoritarian regimes adopting parliamentarism enjoy longer life spans compared with presidential regimes [9].One of the main reasons is the parliamentary system enables governing parties to institutionalize themselves instead of centering on or being unduly influenced by the presidents.The institutionalization also undermines elites' incentives to oppose the governments by sharing power with them [9].For instance, although BN (the former governing coalition of Malaysia) was alleged to be dominated by the Malay power, it had successfully maintained its support from the elites of minorities by constantly sharing government positions with them for more than half a century, which is considered as a reason for why Malaysia's first transfer of political power came so late and arduous [4]. On the other hand, compared with presidential elections in which the whole country is a single voting unit, members of parliament are usually elected from their respective constituencies.This difference renders it easier for ruling parties to maneuver election results by various means like gerrymandering and setting electoral rules that are advantageous to them [9].Take Singapore as an example, its ruling party (People's Action Party, PAP) has a long historical record of manipulating election results by using a multiple magnitude plurality (MMP) electoral system and unfairly drawing the constituencies' boundaries.The MMP system is favorable to the PAP, which enjoys much more local resources for campaigns and candidates to run, while disadvantaging the opposition parties by raising the threshold to be elected.Meanwhile, gerrymandering ensures PAP distributes supportive electorates more equally in each district and dilutes the opposition parties' electoral base [10]. 
Therefore, overall, although parliamentarism solves the dual-legitimacy problem of presidential systems, its own legitimacy is more easily weakened, as the parliament is the only directly elected institution at the national level. Once the representativeness of the legislature is distorted or diminished, few approaches are available to remedy this and to enable the expression of the authentic opinion of the majority. I now turn to presidentialism, using the examples of two successful new democracies to illustrate its distinctive advantages for transitional regimes in comparison with parliamentarism. Presidentialism's Merits in the Philippines and Taiwan's Democratization The first example is the end of Marcos' dictatorship in the Philippines in 1986. In 1972, President Ferdinand Marcos declared martial law and became the dictator of the Philippines. Facing increasing domestic pressure for democratic reform, Marcos authorized a constitutional amendment in 1981 that allowed Philippine citizens to elect their president directly instead of having the president elected by the government-controlled parliament, which was seen as opening the door to democratization [9]. In the 1986 general election, Marcos encountered a challenge from the opposition parties' leader, Corazon Aquino. Although Marcos declared victory by winning 53.6% of the votes, the opposition parties refused to concede and claimed that large-scale electoral misconduct had occurred in the election. The anger led to a series of protests against Marcos, known as the People Power Revolution, which finally compelled him to flee to the US and ended his dictatorship in the Philippines [9]. In retrospect, if the head of government had been elected by the parliament instead of by voters, Marcos could easily have extended his dictatorship by unfairly drawing the constituency map or forming a coalition with minor conservative parties, even if he had failed to win the popular vote in a parliamentary election [9]. Presidentialism accelerated the Philippines' democratization by eliminating some of the advantages Marcos enjoyed and forcing him to compete relatively fairly with the opposition. Taiwan's democratization further illustrates the advantages of presidentialism from another perspective. The authoritarian regime of the Kuomintang (KMT) in Taiwan started in 1949 and continued for almost half a century. In the last years of the 1980s, forced by internal and external pressures, the regime's leader, Chiang Ching-kuo, had to loosen control of Taiwanese society and allow the formation of opposition parties. After the death of Chiang, his successor Lee Teng-hui continued to put forward reforms, ending martial law and dissolving the National Assembly, whose representatives had been seated for 44 years without elections [11]. However, these reforms not only met resistance from conservative factions of the Kuomintang but were also criticized by opposition parties as too slow and accompanied by corruption. In 1993, some conservative KMT politicians left the party and formed the New Party, criticizing Lee Teng-hui and the KMT establishment for acquiescing to corrupt "black gold" politics and conniving at the expansion of pro-independence political forces [11]. In the legislative election two years later, the Kuomintang almost lost its majority in the Legislative Yuan, while opposition parties won 79 of the 164 seats in total [12].
In this situation, if Taiwan had adopted parliamentarism instead of presidentialism, the incumbent KMT cabinet would likely not have survived a confidence vote against the objections of both the radical Democratic Progressive Party and the conservative factions of the KMT. Under a parliamentary system, even if the KMT had (improbably) held a slim majority, the incumbent cabinet might still have had to take collective responsibility for the loss of seats and give way to more conservative political figures. The pace of democratization might therefore have slowed or even stagnated. Alternatively, the opposition parties might have united to elect a new premier to take the place of the KMT and accelerate reforms, which could in turn have damaged the interests of the conservative KMT factions and the KMT-affiliated military, which still controlled vast resources and political clout despite the first steps of democratization. With their strong objections, the risk of political instability could have risen substantially. In either scenario, not only would political polarization have increased, but the prospect of Taiwan's democratization could have been put in serious question. Fortunately, presidentialism gave the incumbent president a fixed four-year term free of direct threats and challenges from both sides and enabled the reformist administration to carry out a moderate reform platform without causing uncontrollable political division. Therefore, in the critical transition period of democratization, presidentialism, at least in many cases, has a distinctive advantage in securing political stability compared with parliamentarism [2]. Conclusion In conclusion, and in contrast to some of the academic literature, parliamentarism has several key weaknesses compared with presidentialism. They include government instability under minority rule, polarization in countries with deep social cleavages, incentives for politicians to pursue political speculation and defect from their electorates, and the likelihood of electoral-system manipulation by ruling parties. Thus, for new democracies, presidentialism may perform better than parliamentarism, at least on some occasions. However, these analyses by no means indicate that parliamentarism is inferior to presidentialism for all democracies, or that presidentialism can guarantee the success of democracy. Many other factors influence the improvement and consolidation of democracy, including social consensus and public participation. Only by considering all these factors can we make wise choices and arrangements for our democracies and achieve the common good of our political communities.
2023-11-22T16:11:45.196Z
2023-11-20T00:00:00.000
{ "year": 2023, "sha1": "840bee772c91dcdb4fbd3787b8db1f05ac1759e8", "oa_license": "CCBY", "oa_url": "https://lnep.ewapublishing.org/media/717d9064f8fc4339a7e79669a88b4a37.marked.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "cf37c12b65ee1b5e04b6d31ea0a2c67cc462f7dd", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [] }
215769887
pes2o/s2orc
v3-fos-license
Design of a Monte Carlo model based on dual-source computed tomography (DSCT) scanners for dose and image quality assessment using the Monte Carlo N-Particle (MCNP5) code Abstract The purpose of this work was to develop and validate a Monte Carlo model for a Dual Source Computed Tomography (DSCT) scanner based on the Monte Carlo N-particle radiation transport computer code (MCNP5). The geometry of the Siemens Somatom Definition CT scanner was modeled, taking into consideration the x-ray spectrum, bowtie filter, collimator, and detector system. The accuracy of the simulation from the dosimetry point of view was tested by calculating the Computed Tomography Dose Index (CTDI) values. Furthermore, typical quality assurance phantoms were modeled in order to assess the imaging aspects of the simulation. Simulated projection data were processed, using the MATLAB software, in order to reconstruct slices, using a Filtered Back Projection algorithm. CTDI, image noise, CT-number linearity, spatial and low contrast resolution were calculated using the simulated test phantoms. The results were compared using several published values including IMPACT, NIST and actual measurements. Bowtie filter shapes are in agreement with those theoretically expected. Results show that low contrast and spatial resolution are comparable with expected ones, taking into consideration the relatively limited number of events used for the simulation. The differences between simulated and nominal CT-number values were small. The present attempt to simulate a DSCT scanner could provide a powerful tool for dose assessment and support the training of clinical scientists in the imaging performance characteristics of Computed Tomography scanners. Introduction Computed tomography (CT) is a valuable diagnostic tool used in modern health care. Due to the rising concerns about radiation exposure, every effort must be made to ensure that CT examinations are performed under optimum conditions, in order to obtain the necessary diagnostic information, while keeping radiation dose to the patient as low as reasonably achievable (ALARA). A number of technical innovations have been introduced over the last years to meet that challenge (Automatic Exposure Control system, kVp switching, Adaptive Dose Shield and beam filtration). In 2004, the introduction of z-Flying Focal Spot (z-FFS) contributed to the improvement of the spatial resolution and, hence, diagnostic accuracy. The FFS allows for a deflection of the focal spot both in the rotation direction (α-FFS) and in the z-direction (z-FFS), thus doubling the sampling density [1]. The challenge to improve temporal resolution remained and it was met by the introduction of the Dual Source CT Scanner (DSCT). This system has two X-ray tubes and two arrays of detectors. The acquisition of two projections for each angle of the gantry, one from the low-energy tube and the other from the high-energy tube, improves image quality without increasing dose [2]. Monte Carlo (MC) methods have been used to model a CT system in order to help evaluate the impact of the various parameters to image quality and estimate the absorbed dose according to different examination protocols. Most of the MC simulation studies focus on comparing measured and simulated organ absorbed doses from CT helical and axial scans [3,4]. More specifically, Jarry et al [5] simulated a Multi-Detector CT (MDCT) scanner using the MCNP code. 
In their study, the x-ray source and phantoms were accurately modeled for the estimation of the radiation dose. A complete MC simulation of a single-source, single-detector-row CT scanner was carried out by Ay and Zaidi [6], providing images from different phantoms. Kyriakou and Kalender [7] investigated scatter for a DSCT scanner, simulating the system geometry without the z-FFS technique. Wysocka-Rabin et al, in 2011 [8], and Qamhiyeh et al [9] developed a Monte Carlo model for the Siemens SOMATOM Emotion CT scanner, with the MC code BEAMnrc/EGSnrc, for producing CT images. This study was later extended to calculate CT numbers with accuracy. Until recently, an accurate Monte Carlo simulator of a DSCT was not available in the literature. Abadi et al. [10] introduced a realistic CT simulation platform that is compatible with high-resolution 3D voxel-based computational phantoms and accounts for the geometry and physics of a given commercial CT scanner. The aim of the present study was to create and validate a simulation code of a particular DSCT scanner using image quality parameters measured on simulated phantoms. Initially, an MC simulator of the scanner was developed using the software package MCNP5 [11]. All the elements of the CT scanner, i.e. the x-ray tube, bowtie filter and detector array, were thoroughly included in the simulation. Then, four different phantoms were simulated and scanned using the simulated scanner, in order to investigate image quality parameters such as image noise, CT-number linearity, and high contrast and low contrast resolution. The projections generated by the simulation were input into a reconstruction algorithm created using the MATLAB software (MathWorks Inc., Natick, MA, USA), for producing transaxial images of the phantoms. DSCT scanners The dual-source scanner that was simulated using MCNP was the Siemens Somatom Definition (Siemens Healthineers, Erlangen, Germany), which is equipped with two separate Straton X-ray tubes. Two slightly different versions of the scanner were simulated (Somatom Definition Flash and Somatom Definition AS), the technical specifications of which are presented in Table 1 [12][13][14]. Simulation of X-Ray Source and X-ray Spectrum The two different x-ray sources were simulated according to their technical characteristics (Table 1). One of the key elements in simulating the X-ray source of CT systems is the accurate representation of the x-ray energy spectrum. The energy spectra were calculated using the MCNP5 code. The simulations were run in photon and electron mode (mode: P, E), considering all bremsstrahlung and characteristic x-ray production during electron transport. In the input file, an electron source was defined as a surface source. The anode was a tungsten plate with an anode angle of 7°. The focal spot size on the target was 0.7 mm. The filter was placed in front of the exiting beam [13,14]. The F1 tally was used for calculating the energy spectrum [11]. MCNP5 tally outputs for the calculated spectrum were normalized to the total number of photons in the spectrum. To validate our X-ray tube model, we compared the MC-calculated photon spectra with the energy spectral distributions of the x-ray source produced using the public software from the Siemens website [15]; good agreement was achieved. The 80 kV, 100 kV, and 120 kV spectra were calculated with an anode angle of 7° and a filtration of 3.0 mm Al and 0.9 mm Titanium.
The 140 kV spectrum was calculated with an anode angle of 7° and a filtration of 3.0 mm Al, 0.9 mm Titanium and 0.4 mm Sn [16,17]. The Siemens Somatom Definition system is equipped with two Straton X-ray tubes, which have an electromagnetic beam deflection system for the focal spot [18]. The Flying Focal Spot (FFS) was simulated by defining two separate points in the z-direction at the location of the x-ray tube and by defining a "cookie cutter" cell in order to limit the direction of particles to a fan-angle covering the same detector elements in the z-direction. Considering the slice thickness S, the sampling distance at isocenter was S/2 [1]. The z-FFS was applied in Computed Tomography Dose Index (CTDI) measurements. Simulation of beam-shaping filter The scanner external filtration consists of the narrow (head) or standard (trunk) filter, which is used to reduce the dose across the lateral parts of the body. Since the manufacturer was unwilling to disclose the exact technical specifications (geometrical characteristics), the beam-shaping filters of the Somatom Definition were modeled using an indirect method: a single-source scanner was simulated with a beam-shaping filter created using the simplified shape of the basic Teflon bowtie filter as described by DeMarco et al [19] (Figure 1). A standard PMMA CTDI phantom was also simulated and was centered at the scanner isocenter. Changing the shape of the bowtie filter, namely the angle θ of the trajectory of a particle that does not enter the CTDI phantom, affected the MCNP5-calculated CTDI values at the center and the periphery of the CTDI phantom. Through trial and error, the angle θ was finally selected as the one for which the CTDI values coincided with published data [19,20]. Simulation of the detectors The detector elements were simulated across an arc of a circle with a diameter equal to the source-to-detector distance. In the Somatom Definition AS, detector A consists of 672x40 elements. Due to MCNP5 code limitations, the 672x40 elements were simulated in 21 lattices of 32 columns in the x-y direction and 40 rows in the z-direction. Detector B consists of 352x40 elements, which were simulated in 11 lattices of 32 columns in the x-y direction and 40 rows in the z-direction. In the Somatom Definition Flash, the 736x64 elements were simulated in 23 lattices of 32 columns in the x-y direction and 64 rows in the z-direction. Detector B consists of 480x64 elements, which were simulated in 15 lattices of 32 columns in the x-y direction and 64 rows in the z-direction. For both scanners, the detector material was gadolinium oxysulfide (GOS) with a density of 7.44 g cm⁻³ (Figure 2). CTDI test phantoms In this study, body and head CTDI dosimetry phantoms were used. Both phantoms were made of polymethyl-methacrylate (PMMA) with a density of 1.19 g·cm⁻³ and had a length of 15 cm. The diameter of the head phantom was 16 cm and of the body phantom 32 cm. Each phantom incorporated five air-filled holes in which pencil ion chambers were placed. The ion chamber was modeled as a set of four concentric cylinders with a length of 100 mm, with C552 air-equivalent walls and electrode, a polyacetal exterior cap, and a 3 cm³ active volume [21,22]. Image quality test phantoms Four image quality test phantoms were simulated and were used to produce test images in order to validate the CT scanner simulation. Homogeneous water phantom. A simple homogeneous cylindrical (30 cm in diameter, 6 cm long) water phantom was simulated, in order to measure image noise in a reconstructed transverse slice (Figure 3a). Low contrast phantom. A cylindrical (16 cm in diameter, 6 cm long) water phantom was initially simulated (Figure 3b). Two semi-cylindrical blocks of PMMA, one 2 mm thick and the other 20 mm thick, were then inserted inside the water phantom. Each of these blocks had four holes, measuring 1.5 cm, 1.0 cm, 0.5 cm and 0.2 cm in diameter, which were filled with water. Scanning this phantom with 10 mm slices would produce low contrast regions in the thin semi-cylindrical block. High contrast phantom. A cylindrical (16 cm in diameter, 6 cm long) PMMA phantom was simulated (Figure 3c) with six sets of air thru-holes (five holes per set). The diameters of the holes are 2.0, 1.4, 1.2, 1.0, 0.6, and 0.4 mm. The distance between two consecutive holes in each set was equal to the respective hole diameter. Water phantom with inserts of different materials. A cylindrical (16 cm in diameter, 6 cm long) water phantom was initially simulated (Figure 3d). Four cylindrical blocks (2 cm in diameter, 1 cm long) of different materials (air, PMMA, polyethylene and teflon) were then inserted towards the periphery of the water phantom.
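Returning to the trial-and-error bowtie calibration described above, the selection logic can be sketched in a few lines. The CTDI arrays below are placeholders standing in for repeated MCNP5 runs at different filter angles, and the reference values stand in for the published data of [19,20]:

```python
import numpy as np

# Placeholder: simulated CTDI (center, periphery) as a function of the bowtie
# angle theta, e.g. from repeated MCNP5 runs; values here are illustrative only.
thetas = np.linspace(5.0, 25.0, 21)                 # candidate angles (degrees)
ctdi_sim = np.column_stack([8 + 0.15 * thetas,      # dummy center values (mGy)
                            16 - 0.20 * thetas])    # dummy periphery values (mGy)

ctdi_published = np.array([10.4, 13.0])             # placeholder reference values

# Trial-and-error selection: the angle minimizing the mismatch to published data.
err = np.abs(ctdi_sim - ctdi_published).sum(axis=1)
best = thetas[np.argmin(err)]
print(f"selected bowtie angle: {best:.1f} degrees")
```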
The method of CTDI estimation used in this study is similar to previous works by Jarry et al [5] and DeMarco et al [19]. For the CTDI calculations (free-in-air, in the head phantom and in the body phantom), a single axial 360° scan was simulated by a rotating source placed on a circle with a radius equal to the distance from the focal spot to the isocenter. The rotation was performed in discrete 5° angular steps. The MC calculations with only an ionization chamber (IC) at the isocenter were used to compute the normalization factor, which is calculated by the following equation:

CF = CTDI(100, air, measured per 100 mAs, E) / CTDI(100, air, simulated per particle, E)    (Eq. 1)

where CTDI(100, air, measured per 100 mAs, E) is the air kerma per 100 mAs at the scanner isocenter given by ImPACT [12] for a given beam energy E and collimator width, and CTDI(100, air, simulated per particle, E) is the Monte Carlo calculated air kerma per particle obtained by simulating the ion chamber at the scanner isocenter for the same scanner settings [5].

The second set of calculations, in the body and head CTDI phantoms, was performed under the same technical parameters as with the ionization chamber. The absolute dose at the center and the periphery of the body phantom is calculated by the following equation:

D = D(S, kV) × CF    (Eq. 2)

where D(S, kV) is the Monte Carlo simulated dose in MeV/g per photon and CF is the normalization factor for the given kV [5].

For the normalization simulations and CTDI calculations, single axial scans were simulated using the scan parameters shown in Table 2. CTDI values in the head phantom were calculated and compared with measured values of the head phantom scanned at 120 kV (Dual Source mode). The CTDI Monte Carlo simulations were made using 2×10⁶ photon histories, resulting in tally uncertainties of less than 2% (Figure 4).
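As a minimal numerical illustration of Eqs. 1 and 2, the snippet below converts a raw tally value to an absolute CTDI via the measured-to-simulated normalization factor; all numbers are illustrative placeholders, not the values obtained in this study:

```python
# Eq. 1: normalization factor from the free-in-air ion-chamber runs
ctdi_air_measured = 9.5       # mGy per 100 mAs at isocenter, e.g. from ImPACT tables (placeholder)
ctdi_air_simulated = 3.1e-13  # MeV/g per source particle from the F6 tally (placeholder)
CF = ctdi_air_measured / ctdi_air_simulated  # (mGy/100 mAs) per (MeV/g/particle)

# Eq. 2: absolute dose in the phantom from the simulated tally value
d_sim = 4.2e-14               # MeV/g per photon at a phantom hole (placeholder)
ctdi_100 = d_sim * CF         # mGy per 100 mAs
print(f"CTDI100 = {ctdi_100:.2f} mGy/100 mAs")
```

The same factor CF, computed once per kV setting and collimation, rescales every phantom tally obtained under identical scanner settings.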
For the phantom images, an axial (sequential) scan was performed to give reconstructed slices. 360 views were simulated at 1° between each view. The *F8 tally was used for calculating the energy deposited in each detector element per tracked particle [11]. Because MCNP5 does not simulate gantry rotation, the geometry of each view is created in separate files. The initial intensity was calculated by a simulation without the phantom in the FOV. To improve the efficiency of the simulation, a variance reduction method was used: the Surface Source Write (SSW) option was used to increase the speed of the MC simulations [11] (Figure 5).

(Table 2: typical CT acquisition parameters used in the simulations.)

It should be noted that for the phantom images, the simulations ran separately for each X-ray source-detector system; therefore, no contributions from scattered radiation from the first X-ray source to the second detector were calculated. This was necessary in order to keep the required computation time for the simulation at acceptable levels.

Since reducing the mAs is expected to increase the noise (measured standard deviation) by 1/√(mAs), single axial scans of the 30 cm-diameter water phantom were simulated for a single source energy of 120 kV using the 3 mm beam collimation with the standard bowtie filter. In order to examine the impact of the number of particles on the final image, scans were performed with 2×10¹⁰, 4×10¹⁰, 8×10¹⁰ and 16×10¹⁰ particles. An image with a slice thickness of 3 mm was reconstructed.

Evaluation of image quality and validation of the simulation code were performed using a series of phantoms. Physical image quality parameters (image contrast, spatial resolution and noise) were measured in the simulated images of these test objects, and the results were compared with the expected ones. Axial scans were performed at appropriate locations on each phantom and transaxial slices were reconstructed. Both X-ray sources, at 100 kVp and 140 kVp, were used, without z-FFS. The total number of photons was 1×10¹⁰. An image with a slice thickness of 10 mm was reconstructed. In all simulations, the statistical uncertainty was less than 2%.

Image reconstruction
In computed tomography systems, the most widely used reconstruction method is filtered back-projection. For this study, a fast and powerful filtered back-projection algorithm for a non-helical fan-beam CT setup was implemented. First, a rebinning of the fan-beam geometry into parallel lines was performed using appropriate interpolation. Figure 6 shows the fan-beam and parallel-beam geometries. Denoting by β the source (gantry) angle, by γ the angle of a ray within the fan, and by R the source-to-isocenter distance, the relations between the fan-beam coordinates (β, γ) and the parallel-beam coordinates (θ, ρ) are:

θ = β + γ,    ρ = R sin γ

Then, the reconstruction algorithm performs a convolution of the parallel projection data, P_θ(ρ), with a ramp filter, H(ω), according to:

Q_θ(ρ) = F⁻¹{ F{P_θ(ρ)} · H(ω) }

where F and F⁻¹ denote the Fourier transform and the inverse Fourier transform, respectively [23].
Then back-projection is applied, which means smearing the filtered projection data over the image plane according to:

f(x, y) = ∫₀^π Q_θ(x cos θ + y sin θ) dθ

The input parameters required by the algorithm are the simulated projections, the source-to-isocenter distance, the detector element size at the isocenter, the angular step between projections, and the matrix size. The output is an image matrix representing the values of the attenuation coefficients of the imaged object. The following steps were followed for the reconstruction of the images of all test phantoms: first, the data were filtered in the spatial domain; next, the filtered data were back-projected. During reconstruction, additional filtering was utilized to eliminate ring artifacts in each projection. This filter was applied to the projection data before the back-projection algorithm.

The method adopted for processing the projection data acquired with the dual-source computed tomography (DSCT) imaging system comprised the following steps: (a) acquisition of two separate projection data sets, one from each X-ray tube; (b) insertion of the raw data structure into MATLAB for reconstruction; and (c) creation of the final image by appropriately weighting the two reconstructed images using the following relation:

f = w · f_low + (1 − w) · f_high

where w is the weighting factor, f denotes the CT value in the mixed image, and f_low and f_high are the CT values of the low and high kV images, respectively [13].

For the water phantom, projections were smoothed using a local average of the k-nearest neighbors, resulting in decreased noise; the value of k controls the smoothness. Image noise was evaluated as the standard deviation of Hounsfield unit values in a circular region of interest (ROI) positioned at the water phantom center, and results were expressed as the standard deviation (SD) of CT numbers. For the low-contrast phantom, the mean CT numbers of water and PMMA were calculated in a ROI 12 pixels in diameter.
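The reconstruction chain described above can be sketched compactly in code: fan-to-parallel rebinning, ramp filtering in Fourier space, back-projection, and the dual-energy mixing f = w·f_low + (1 − w)·f_high. This is a schematic NumPy sketch under simplifying assumptions (nearest-neighbour rebinning, detector sampling matched to the image grid, no ring-artifact or smoothing filters), not the MATLAB implementation used in the study:

```python
import numpy as np

def rebin_fan_to_parallel(fan, betas, gammas, R, thetas, rhos):
    """Nearest-neighbour rebinning: a fan ray (beta, gamma) contributes to the
    parallel ray at theta = beta + gamma, rho = R * sin(gamma)."""
    par = np.zeros((len(thetas), len(rhos)))
    hits = np.zeros_like(par)
    for i, b in enumerate(betas):
        for j, g in enumerate(gammas):
            ti = np.argmin(np.abs(thetas - (b + g)))
            ri = np.argmin(np.abs(rhos - R * np.sin(g)))
            par[ti, ri] += fan[i, j]
            hits[ti, ri] += 1
    return par / np.maximum(hits, 1)  # average where several rays land in one bin

def ramp_filter(proj):
    """Q_theta = F^-1{ F{P_theta} * H(w) } with the ramp filter H(w) = |w|."""
    H = np.abs(np.fft.fftfreq(proj.shape[-1]))
    return np.real(np.fft.ifft(np.fft.fft(proj) * H))

def backproject(filtered, thetas, size):
    """Smear each filtered projection over the image plane."""
    img = np.zeros((size, size))
    xs = np.arange(size) - size / 2.0
    X, Y = np.meshgrid(xs, xs)
    for q, th in zip(filtered, thetas):
        rho = X * np.cos(th) + Y * np.sin(th)  # detector coordinate of each pixel
        idx = np.clip(np.round(rho + size / 2.0).astype(int), 0, size - 1)
        img += q[idx]
    return img * np.pi / len(thetas)

def mix_dual_energy(f_low, f_high, w=0.5):
    """Dual-energy mixing: f = w * f_low + (1 - w) * f_high."""
    return w * f_low + (1.0 - w) * f_high
```

A production implementation would interpolate during both rebinning and back-projection rather than using nearest-neighbour lookups.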
X-ray spectrum calculations
The output photon energy spectra, which were simulated according to the technical characteristics, are shown in Figure 7. The energy spectra provided the probability of specific energy values for the MCNP code; the number of photons relates to the center of each 1 keV energy interval. The mean X-ray spectrum energies of the simulated spectra and of the spectra obtained from the Siemens website [15], for 80, 100, 120, 140 and 140 Sn kV (with an additional 0.4-mm tin filter), are compared in Table 3.

CTDI dose calculations
Initially, the calculation was made free-in-air for a single tube potential of 120 kVp and a beam collimation of 18 mm, for both (narrow and standard) beam-shaping filters. Table 4 displays the values reported by ImPACT (Imaging Performance Assessment of CT scanners) and the calculated values of the CTDI free-in-air. Table 4 also summarizes the conversion factors obtained with the MCNP code and used to convert the tally F6:p results of the MC simulation, given in units of MeV/g/source particle, to absorbed dose in units of mGy/100 mAs. Then, the center and peripheral CTDI100 calculations for the head and body phantoms were obtained under the same conditions as in the free-in-air calculation. All simulation results were normalized to 100 mAs using the conversion factor calculated above. Table 5 presents the values reported by ImPACT and the results simulated with the MCNP code for both beam-shaping filters.

Table 4. Conversion factors from MeV/g·particle to mGy/100 mAs, obtained from measurements and simulations for 18 mm collimation and the single tube potential of 120 kV.

Then, the calculation was made using the dual-source dual-energy mode (both tubes at 120 kV) and a beam collimation of 38.4 mm. Table 6 displays the experimental values obtained by the measurement of the CTDI free-in-air and those simulated with the MCNP code. Table 6 also shows the conversion factor obtained with the MCNP code and used to convert the tally F6:p results of the MC simulation, given in units of MeV/g/source particle, to absorbed dose in units of mGy/100 mAs for the dual-source dual-energy mode. Table 7 shows the center and peripheral CTDI100 calculations for the head phantom, obtained under the same conditions as in the free-in-air measurement; all simulation results were normalized to 100 mAs using the conversion factor calculated above. Table 7 presents the experimental results and the results simulated with the MCNP code for the narrow beam-shaping filter.

Validation of simulation results using image quality

Image noise
Figure 8 shows the simulated cylindrical water-filled phantom profiles obtained with the MCNP5-based CT simulator. Simulated profiles were divided by the corresponding blank scan and normalized at the central detector element. Table 8 shows the standard deviation (SD in a circular ROI) of CT numbers in a simulated reconstructed image of the water phantom, obtained with the traditional FBP reconstruction algorithm (3 mm slice thickness) at a single tube potential of 120 kV, when the simulation took into account 2×10¹⁰, 4×10¹⁰, 8×10¹⁰ and 16×10¹⁰ photons per projection. The image noise was inversely correlated with the number of particles.

Low-contrast resolution
The simulated reconstructed image of the low-contrast module is shown in Figure 9b. In the semicircular region of 2 mm width (the low-contrast section of the phantom) the two larger holes (1.5 and 1.0 cm in diameter) were visible, whereas in the semicircular region of 20 mm width the three larger holes (1.5, 1.0 and 0.5 cm in diameter) were visible. Table 9 lists the simulated CT numbers obtained from ROIs in water and PMMA in both semi-cylindrical blocks.

Spatial resolution
Figure 9c shows the simulated image obtained from the high-contrast resolution phantom using a filtered back-projection algorithm and a slice thickness of 10 mm. The X-ray sources were at 100 kVp and 140 kVp, without z-FFS, and the total number of photons was 1×10¹⁰. In this image, the 3rd group of air holes (1.2 mm in diameter) was marginally discernible.

CT number linearity
Figure 9d shows the simulated image obtained from the CT number linearity phantom using a filtered back-projection algorithm and a slice thickness of 10 mm. The X-ray sources were at 100 kVp and 140 kVp, without z-FFS, and the total number of photons was 1×10¹⁰. The simulated image is the mixed image of the low 100 kV image (50%) and the high 140 kV image (50%), which is equivalent to a 120 kV image.

Discussion
Monte Carlo techniques have proved to be a powerful tool for simulating the construction and performance of CT scanners, as well as for dose assessment in clinical procedures. The number of photons used in all the simulations corresponded to a very low mAs value, which had an important impact on our results; the time limitations imposed by the MC method do not permit the use of a larger number.
In the present study, the Somatom Definition CT scanner was simulated taking into consideration both structural and functional characteristics. Unavoidably, a few approximations regarding the geometry of the bowtie filter and the detectors were included. An equivalent source model was developed for the simulation of the z-FFS. A careful CTDI validation was performed, and the simulation results demonstrate good agreement with the expected data, as reported in the literature, and with the actual measurements. More specifically, the small discrepancy observed in the CTDI100 values for the body phantom can be explained by apparent (unavoidable) differences between the exact technical specifications of the scanner and those used in the simulation code. This discrepancy in the body phantom is smaller at the center than at the periphery, where it is close to -2.25%. In previously published works that performed Monte Carlo simulations for CTDI estimation, the discrepancies between measured and simulated values varied between -2.6% and 8.6% [5,19-21].

Furthermore, additional proof of the accuracy of the simulation code is the agreement between the simulated CT numbers and the nominal values (0 HU for water). We found a mean value of -8.78 HU and an SD of 17.19 HU. The high value of image noise is due to the small number of photons used for the simulation (1×10¹⁰ for each projection angle) in order to keep the simulation run-time at acceptable levels. The dependence of image noise on the number of particles, as illustrated in Table 8, agrees with the expected ~1/√2 reduction: doubling the number of particles reduces the image noise by a factor of 1/√2.

Only one portion of the low-contrast phantom, with the 2 mm semi-cylindrical PMMA block, can be used to assess low-contrast resolution, using a 10 mm slice thickness and taking advantage of the partial volume effect. Using a ROI 12 pixels in diameter inside the largest (1.5 cm in diameter) water cylinder, the measured CT number was -0.83 HU, whereas the PMMA CT number in a similar ROI was 12.41 HU. The corresponding measurements in the other portion of the phantom yielded values of -2.78 HU for water and 108.16 HU for PMMA.

In the high-contrast resolution phantom, the 3rd set of air-filled holes (1.2 mm) was visible, which is a reasonable result, taking into consideration the small number of photons used for the simulation (corresponding to very few mAs) and the smoothing filters applied during the reconstruction.

The deviations of the calculated CT numbers of each material from those reported by NIST were acceptable. Furthermore, in the CT number linearity test object some artifacts are present, which could affect the accuracy of the ROI CT number measurements; these artifacts may occur due to the simulated geometry of the detectors and the small number of photons used. The CT numbers estimated in the present study are very close to those reported by Gulliksrud et al [25], who performed measurements of the CATPHAN phantom. The Teflon CT number in the present simulation (976 HU) is also in agreement with the calculated value of 970 HU reported by Sharma et al [26] using a single source at 130 kV. Our calculated CT number of PMMA is 10% (12 HU) lower than the nominal value of 120 HU. The CT number of air (-967 HU) is well within the acceptable range of -960 to -994 HU for CT scanners of different manufacturers [26].
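As a compact check of the two quantitative points above, the sketch below evaluates the Hounsfield definition, HU = 1000·(μ − μ_water)/μ_water, for placeholder attenuation coefficients (not the NIST values used here), and the expected 1/√2 noise reduction per doubling of the photon count:

```python
import math

def hounsfield(mu, mu_water):
    """CT number in HU from linear attenuation coefficients."""
    return 1000.0 * (mu - mu_water) / mu_water

# Placeholder attenuation coefficients (cm^-1) at an effective CT energy
mu_water, mu_air, mu_teflon = 0.20, 0.0002, 0.39  # hypothetical values
print(f"air:    {hounsfield(mu_air, mu_water):7.0f} HU")
print(f"teflon: {hounsfield(mu_teflon, mu_water):7.0f} HU")

# Noise scaling: doubling the photon count should reduce SD by 1/sqrt(2)
sd_base = 60.0  # hypothetical SD at 2x10^10 photons
for doublings in range(1, 4):  # 4x10^10, 8x10^10, 16x10^10 photons
    print(f"{2 * 2**doublings}e10 photons -> expected SD ~ "
          f"{sd_base / math.sqrt(2)**doublings:.1f} HU")
```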
There were, however, some limitations to this study, arising primarily from the lack of the exact technical specifications of the DSCT scanner, including the newer iterative reconstruction algorithms (SAFIRE) available on Siemens CT scanners. Image reconstruction using filtered back-projection is generally inferior to iterative reconstruction, which can offer lower image noise and better low-contrast resolution. Another limitation of our filtered back-projection reconstruction is that it cannot be used with helical scan protocols. Also, the influence of cross scatter was not considered for the DSCT system evaluated in this study, because the image quality simulations of each X-ray source-detector system were run in separate files; consequently, the scattered radiation exchanged between the two tube-detector systems was not evaluated with the MC code.

Conclusion
This study presents a method for modeling a DSCT with z-FFS. The reported results validate the modeling and the MCNP code. Therefore, the present simulation could be extended to include more CT scanning protocols and computational anthropomorphic phantoms, in order to provide patient dosimetric information with reasonable accuracy. Work in progress includes Monte Carlo simulations with more image quality phantoms, which would further validate the code and could be used for the development of educational e-learning tools for medical physicists and trainee radiologists.
HERALD (Health Economics using Routine Anonymised Linked Data)

Background
Health economic analysis traditionally relies on patient-derived questionnaire data, routine datasets, and outcomes data from experimental randomised control trials and other clinical studies, which are generally used as stand-alone datasets. Herein, we outline the potential implications of linking these datasets to give one single joined-up data resource for health economic analysis.

Method
The linkage of individual-level data from questionnaires with routinely captured health care data allows the entire patient journey to be mapped both retrospectively and prospectively. We illustrate this with examples from an Ankylosing Spondylitis (AS) cohort by linking a patient-reported study dataset with the routinely collected general practitioner (GP) data, inpatient (IP) and outpatient (OP) datasets, and Accident and Emergency department data in Wales. The linked data system allows: (1) retrospective and prospective tracking of patient pathways through multiple healthcare facilities; (2) validation and clarification of patient-reported recall data, complementing the questionnaire/routine data information; (3) obtaining an objective measure of the costs of chronic conditions for a longer time horizon, and during the pre-diagnosis period; (4) assessment of health service usage, referral histories, prescribed drugs and co-morbidities; and (5) profiling and stratification of patients relating to disease manifestation, lifestyles, co-morbidities, and associated costs.

Results
Using the GP data system we tracked about 183 AS patients retrospectively and prospectively from the date of questionnaire completion to gather the following information: (a) the number of GP events; (b) the presence of GP 'drug' read codes; and (c) the presence of GP 'diagnostic' read codes. We tracked 236 and 296 AS patients through the OP and IP data systems respectively to count the number of OP visits, and the number and duration of IP admissions. The results are presented under several patient stratification schemes based on disease severity, function, age, sex, and the onset of disease symptoms.

Conclusion
The linked data system offers unique opportunities for enhanced longitudinal health economic analysis not possible through the use of traditional isolated datasets. Additionally, this data linkage provides important information to improve diagnostic and referral pathways, and thus helps maximise clinical efficiency and efficiency in the use of resources.

Background
Health services research has tended to rely on data generated from randomised controlled trials (RCT), observational studies based on patient-derived questionnaire data, and routinely assembled data abstracted from the primary and secondary care patient record, popularly known as routine data [1]. Data generated from these three routes are predominantly used as stand-alone datasets, alongside non-health data such as demographic and geographical data [1,2]. Health sector analyses are enriched when the patient-level data generated from these different sources are linked. Many countries worldwide already routinely capture health care data that can be used for such purposes.
For example, the Scottish Morbidity Linked Dataset, which encompasses Scottish Health Survey records linked to NHS acute and psychiatric hospital records, cancer registrations, and death registrations, provides a powerful research database (http://www.esds.ac.uk/government/shes/); in France, record linkage between a hospital database and the French national mortality database offers new prospects for large prognostic studies based on hospital data [3]; and in Norway, the Medical Birth Registry of Norway is routinely linked with the Central Population Register, and can be linked with the other central health registers. This paper discusses the potential methodological advantages in the conduct of health economic analyses using patient-derived questionnaire data linked with routinely collected information and secondary care clinical datasets available in Wales, United Kingdom, with examples from a research cohort.

SAIL databank
In order to realise the potential of electronically held, routinely collected information to conduct and support health-related research, the Health Information Research Unit (HIRU) at the College of Medicine at Swansea University, as part of the Welsh Assembly Government's commitment to the UK Clinical Research Collaboration (UKCRC), has set up the Secure Anonymised Information Linkage (SAIL) databank [4,5]. The SAIL databank brings together and links a wide range of person-based data. SAIL utilises a split-file approach to anonymisation to overcome issues of confidentiality and disclosure in health-related data warehousing by creating personal-level unique and encrypted identifiers for merging information from various sources [4,5]. The range of complementary sets of data includes clinical data from rheumatologists; existing routinely collected datasets such as the general practice (GP) records, outpatient (OP) clinical data, inpatient (IP) episodes, accident and emergency (A&E) department data, and pathology data; the NHS administrative register; breast and cervical cancer screening data; the all-Wales injury surveillance system; the all-Wales perinatal survey; the congenital anomaly register and information service; birth and death data from the Office for National Statistics; and social services databases.

Data linkage
HIRU uses the MACRAL (Matching Algorithm for Consistent Results in Anonymised Linkage) algorithm to create an encrypted Anonymised Linking Field (ALF) for each individual [4]. The ALFs are mainly created from the patient's NHS number; if the NHS number is absent in a dataset, a mixture of other identifying variables, such as forename, surname, gender, postcode of residence, and date of birth, is used for probabilistic matching, while maintaining complete anonymity for the end users [4]. This linkage allows us to follow the patient pathway through the NHS system both retrospectively and prospectively from a reference date (e.g. the questionnaire completion date). The system also allows linkage of data collected through patient questionnaires with other routinely collected datasets in the SAIL system.

Data linkage with the PAS cohort
As part of the Medical Research Council (MRC) Patient Research Cohort Initiative, a cohort of people with ankylosing spondylitis (AS), i.e. the Welsh population-based ankylosing spondylitis (PAS) cohort, has been developed using data collected from patient-completed questionnaires linked with routine data [6]. The study aims to recruit 1000 AS patients living in Wales; currently more than 500 AS patients are participating.
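To make the split-file anonymisation idea concrete, the sketch below derives a deterministic linking field with a keyed hash of the NHS number. This is a minimal illustration only: the function and field names are hypothetical, the key would in practice be held by a trusted third party, and MACRAL's probabilistic matching on demographics when the NHS number is absent is not reproduced.

```python
import hashlib
import hmac

SECRET_KEY = b"held-by-trusted-third-party"  # hypothetical key; never seen by researchers

def make_alf(nhs_number: str) -> str:
    """Derive an encrypted Anonymised Linking Field (ALF) from an NHS number.

    A keyed hash (HMAC) is one-way: the ALF cannot be reversed to the NHS
    number, but the same person always receives the same ALF, so records
    from different datasets can be joined on it.
    """
    return hmac.new(SECRET_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()

# Hypothetical records from two different sources
questionnaire = {"nhs_number": "9434765919", "basdai": 5.2}
gp_event = {"nhs_number": "9434765919", "read_code": "j...", "date": "2009-10-12"}

# Identifiers are replaced by ALFs before data reach the researcher
q_anon = {"alf": make_alf(questionnaire.pop("nhs_number")), **questionnaire}
gp_anon = {"alf": make_alf(gp_event.pop("nhs_number")), **gp_event}

# Records can now be linked on the ALF without exposing identity
assert q_anon["alf"] == gp_anon["alf"]
```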
This study has ethical approval from the London Multi-centre Research Ethics Committee, and the written consent of participants was obtained according to the Declaration of Helsinki. For the PAS cohort, the data collected from patients with a diagnosis of AS can be linked to other routinely collected datasets using the SAIL system. To highlight the potential benefits of using the linked routine data, this paper uses information on healthcare visits as reported by the AS patients through questionnaires and explores patient pathways in terms of actual events obtained from the linked GP, OP, IP, and A&E datasets in the SAIL databank.

Potential benefits of using linked data
This paper attempts to demonstrate the strength of data linkage in extracting complementary information from routine sources that is beyond the scope and time horizon of the study data; it does not intend to test hypotheses with regard to the data quality of particular variables of interest or of the individual datasets. The potential benefits of using linked data are discussed below.

Retrospective and prospective tracking of patient pathways
With data spanning multiple years and the ability to link records across several datasets, it is possible using SAIL to track the healthcare utilisation history of patients in receipt of some form of intervention for a given condition, across multiple healthcare sectors, both before and after their index/reference healthcare event. The SAIL data linkage system therefore allows tracking of patient pathways both retrospectively and prospectively. Linkage with the GP data system provides information about patients' primary care events going back many years, including previous diagnoses, referrals, presenting symptoms, investigation results and previous medications. This dataset can also be used to follow the patient at every visit to the GP and therefore to record the development of associated conditions and the use of co-medications. Linkage with IP data will record all hospital visits, surgery and hospital treatment. Linkage with the mortality datasets will ensure the dataset remains relevant and can examine the survival of included patients. Linkage with A&E datasets will give information on emergency visits.

Validation of patient-reported recall data
The use of linked routine data allows cross-checking of patient-reported recall data against actual health care events at the personal level. The inherent limitations (or strengths) of the data quality pertaining to survey questionnaires under the recall method can be flagged, and an assessment of the generalisability of the patient-reported data can be made. On the other hand, data obtained from routinely collected data systems often require careful interpretation with respect to their quality, validity, timeliness, bias, confounding and statistical stability [1]. With the triangulation of datasets in the SAIL system, the validity and reliability of single datasets can be assessed [6]. The triangulation process will at least flag the discrepancies, and we can then have an idea about any quality issue pertaining to both the routine and the questionnaire data. However, this paper does not intend to make assertions with regard to the data quality of individual datasets, and views the linking of datasets primarily as a source of complementary information beyond the scope and time horizon of traditional study data.
Objective measure of the cost of illness
Cost-of-illness studies are typically subject to a degree of scrutiny with regard to the sources and methods of estimating quantities and prices, the specification of the study perspective, and the identification of the timeframe to which the costs apply [7-9]. The use of linked data enhances the precision of the healthcare use information and of the timeframe within which the costs are incurred, and therefore helps provide an objective estimate of the cost and burden of diseases to the funders, the health service (the NHS in the United Kingdom), society and the individual at each stage of disease over a prolonged period of time.

In many conditions there is a delay between the onset of symptoms and establishing a diagnosis, during which period the patients still utilise healthcare resources. The linked routine data can provide information about the patients' visits to health care facilities during this symptomatic pre-diagnosis period, when the requirement for diagnostic investigations is often greatest. For example, within the SAIL data system, using the encrypted ALF, we can identify patients from a cohort of any particular disease who were diagnosed during a reference time (as indicated by the first appearance of a specific diagnostic read code), and link those with various datasets (e.g. GP data, IP hospital admissions, OP, A&E data etc.) to track their pre-diagnosis visits to healthcare facilities since the date of symptom onset (as reported by the patients or established from the GP or A&E records). This allows one to compare the extent of health service utilisation, and therefore the related costs, before and after the symptoms developed. In addition, one can also calculate the costs resulting from delayed diagnosis.

The linked healthcare analysis within SAIL need not be confined to deducing the extent of health service utilisation during the pre- and post-diagnosis illness periods, but can additionally be performed to ascertain the size of the direct medical costs associated with the index illness that are incurred across different healthcare sectors. This is possible, for example, with the combined use of the cost figures included in the Trust Financial Return 2 (TFR2) accounts [10], which incorporate expenditures relating specifically to A&E attendances, IP admissions and OP contacts, and the cross-sector (i.e. primary care, secondary care, IP, OP, A&E etc.) health services utilisation at the individual level obtained from the linked data system. A system such as SAIL therefore not only allows the index event for a given condition to be identified, but additionally introduces a longitudinal, temporal dimension to the analysis, as each of the healthcare sectors captured within SAIL can be searched for multiple years pre- and post-index healthcare event to determine the extent of health service utilisation and direct medical costs, made possible by the inclusion of an ALF within all of the SAIL datasets. This provides a more objective estimate of the actual costs of chronic conditions and of any interventions.
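A minimal sketch of the pre-/post-diagnosis costing idea, assuming a linked event table with one row per healthcare contact; the unit costs, column names and dates below are hypothetical placeholders, not TFR2 figures:

```python
import pandas as pd

# Hypothetical linked events for one anonymised patient (ALF)
events = pd.DataFrame({
    "alf": ["a1"] * 5,
    "date": pd.to_datetime(["2001-05-10", "2003-02-01", "2004-07-15",
                            "2005-03-20", "2006-01-11"]),
    "sector": ["GP", "A&E", "GP", "OP", "IP"],
})
diagnosis_date = pd.Timestamp("2004-01-01")  # first AS diagnostic read code

# Illustrative unit costs per contact by sector (not actual TFR2 values)
unit_cost = {"GP": 30.0, "OP": 120.0, "IP": 1500.0, "A&E": 90.0}
events["cost"] = events["sector"].map(unit_cost)

# Split utilisation and costs into pre- and post-diagnosis periods
events["period"] = (events["date"] >= diagnosis_date).map(
    {True: "post-diagnosis", False: "pre-diagnosis"})
summary = events.groupby(["period", "sector"])["cost"].agg(["count", "sum"])
print(summary)
```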
Healthcare pathways and referral history
A retrospective analysis of the patient's healthcare history can identify the types of referrals to healthcare services made at different points in time, thus giving an assessment of health service usage and recommendations for improving patient care pathways. In particular, the linked GP data would provide important information to improve diagnostic and referral pathways, and thus maximise clinical efficiency and efficiency in the use of resources. The temporal aspects of the linked datasets also support event history analysis, survival analysis and other relevant statistical and econometric models.

Profiling of patients
The linked SAIL data include diverse sets of information, which enables profiling and stratification of patients relating to disease manifestations and severity, lifestyle, co-morbidities, and associated costs. Additionally, given that many chronic conditions have heterogeneous manifestations with a variable course and unpredictable episodes of exacerbation, the analysis can be carried out under several person stratification schemes based on severity of disease, various demographic attributes, and socio-economic conditions. This stratification will facilitate early targeting of interventions to patients at highest risk, thereby improving the cost-effectiveness ratio of these interventions.

Results

An example using a patient with ankylosing spondylitis
Here we present an example of one AS patient's health service usage history by tracking the healthcare events through the linked datasets and comparing this with self-reported data. To preserve complete anonymity, the actual dates are replaced with fictitious dates. As part of the PAS cohort study, the patient completed a questionnaire during the first week of November 2009, in which s/he was asked to recall the number of visits to the GP, OP, IP, A&E, and to various health professionals during the three months before the questionnaire completion date. The patient reported 4 GP visits, 1 OP visit, 1 IP visit, no A&E visit, and visited a rheumatologist, a radiologist, and a chiropractor once each. Distances to the healthcare facilities were 1 mile, 3.5 miles, and 3 miles for GP, OP, and IP, respectively. In each case the patient used their own car, and was accompanied by someone during the GP and IP visits. The patient also reported having taken pain-reducing medicines (paracetamol, ibuprofen, and naproxen) and having undergone an MRI scan and blood and urine tests during the 3-month recall period.

Using the unique ALF, we tracked the patient's healthcare pathways through the routine data in the SAIL system. Figure 1 plots the patient's healthcare events from the OP, GP, and A&E datasets for a 2-year period (i.e. August 2008 to August 2010), which represents the timeline approximately one year before and one year after the completion date of the questionnaire. The linked routine data show 10 GP events, 2 OP visits and 1 A&E visit during the 3-month recall period. There is no IP visit recorded during this period; the self-reported IP visit in the questionnaire may therefore actually have been an A&E visit, which would correlate with the routine data. Not all of those 10 GP events were physical visits by the patient; they may include any event (e.g. a letter encounter, prescription collection, telephone conversation etc.). Further exploration of the GP read codes and descriptions for these 10 GP events yields information about medication, tests, and other GP-related encounters, as shown in Table 1. Retrospective and prospective tracking of events reveals that there were 51 such GP events during the 2-year period (Figure 1).
There are no OP, IP, or A&E visits recorded before or after the recall period, indicating the danger of extrapolating the patient-reported 3-month recall data in the questionnaire over a longer period (e.g. one year). However, going further back through the linked data system (not shown in the figure), it was found that the patient made 4 OP visits during August, October, and November of 2005, and 1 IP visit in the first week of June 1999.

Examples using the PAS cohort linked to the GP data system
Using the GP data we tracked about 183 AS patients from the PAS cohort retrospectively and prospectively from the date of questionnaire completion to gather the following information: (a) the number of GP events; (b) the presence of GP parent-family 'drugs' read codes; and (c) the presence of GP parent-family 'diagnostic' read codes.

GP events and visits from the linked routine and questionnaire data
Table 2 presents the average number of GP events for the AS patients grouped under several stratification schemes based on baseline disease severity score (low and high BASDAI); disease function score (low and high BASFI); age; sex; and the age of onset of first symptoms. Results are presented for the retrospective 3-month recall period, 1-year period, and 5-year period, the questionnaire completion date being the reference date for each patient. In the last column of Table 2, we present the self-reported number of GP visits during the 3-month period from the questionnaires.

As mentioned earlier, GP 'events' and 'visits' are to be construed differently. A GP event is defined as a unique date for a patient on which we can find GP read codes indicating administrative actions, referrals, visits, telephone conversations, prescription collections, symptoms, diagnoses, prescription drugs etc. pertaining to that patient. GP visits refer to physical visits to the GP by the patient, and in the questionnaire the patients were asked about visits to the GP. Given the numerous read codes, it is beyond the scope of this paper to identify 'visits' from within the 'events'. Nevertheless, the number of actual visits obtained from the questionnaire can be used as complementary information as to what proportion of the GP events were GP visits.

Table 2 shows that the patients with low disease severity have fewer GP visits and events than the patients with high disease severity. The patients with low disease severity had 2.81, 12.92, and 61.92 events recorded in the GP system during the 3-month, 1-year, and 5-year retrospective periods, as opposed to 4.25, 17.79, and 80.28 events for the high disease severity group. These ratios are consistent with those for the number of self-reported visits obtained from the questionnaires during the 3-month recall period, which are 1.31 visits for the low severity group and 1.78 for the high severity group. The 3-month GP events and self-reported GP visits in the high disease severity group were respectively 1.51 and 1.36 times higher than in the low disease severity group, and the between-group relative differences are largely consistent throughout the 5-year period (see Table 2). The same strategy can be applied to the other stratification models. The data in Table 2 indicate that the largest discrepancies between self-reported GP visits and routine-data GP events are in the groups stratified by age and gender.
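Counts such as those in Table 2 can, in principle, be derived by counting unique GP event dates in fixed look-back windows from each patient's questionnaire completion date. A minimal pandas sketch with hypothetical data and column names:

```python
import pandas as pd

def gp_events_in_window(gp: pd.DataFrame, ref: pd.Series, months: int) -> pd.Series:
    """Count unique GP event dates per patient within `months` before that
    patient's reference (questionnaire completion) date.

    `gp` has columns: alf, event_date; `ref` maps alf -> reference date.
    """
    df = gp.merge(ref.rename("ref_date"), left_on="alf", right_index=True)
    start = df["ref_date"] - pd.DateOffset(months=months)
    in_window = (df["event_date"] >= start) & (df["event_date"] < df["ref_date"])
    return (df[in_window]
            .groupby("alf")["event_date"].nunique()
            .reindex(ref.index, fill_value=0))

# Hypothetical data: two patients, a few GP events each
gp = pd.DataFrame({
    "alf": ["a1", "a1", "a1", "a2"],
    "event_date": pd.to_datetime(["2009-09-01", "2009-10-15",
                                  "2005-01-01", "2009-08-20"]),
})
ref = pd.Series(pd.to_datetime(["2009-11-01", "2009-11-01"]), index=["a1", "a2"])

for m in (3, 12, 60):  # 3 months, 1 year, 5 years
    print(m, "months:", gp_events_in_window(gp, ref, m).to_dict())
```

Stratified averages (by BASDAI, BASFI, age, sex) then follow from grouping these per-patient counts by the questionnaire variables.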
Returning to the age and gender discrepancies noted above, we postulate that older patients with AS (age ≥50) either tend to underreport GP visits or have more non-visit-related GP events (e.g. for prescriptions) compared to younger patients, or that the reverse is true for younger patients. Similar hypotheses can be generated for female and male patients. This may have important implications, as AS affects men more commonly than women, with onset in the late teens or early adult years.

Table 3 indicates the presence of parent drug families in the AS patients' GP history. GP read codes starting with the small letters a-z indicate the parent drug family to which the prescribed medicines belong. Starting from these parent letters, the drug-related GP read codes extend up to 4 more sub-digits to identify the specific drug; it is beyond the scope of this paper to go beyond the first parent groups. Table 3 shows, for different retrospective time spans, how many patients' GP read codes include a particular drug code at least once. It is evident that the highest number of patients are prescribed musculoskeletal and joint drugs (read code 'j'), followed by central nervous system drugs, which include analgesics ('d'). Other drug classes frequently recorded for the AS patients include gastro-intestinal system drugs ('a'), cardiovascular system drugs ('b'), and skin drugs ('m'), which is consistent with the association of these diseases with AS. One could go beyond the parent read codes and identify exactly which drug was prescribed on which date. The drug codes for the 3 months after (prospective to) the dates of baseline questionnaire completion for 103 AS patients are shown in the last column of Table 3.

Table 4 similarly shows the presence of parent disease diagnostic read codes (starting with the capital letters A-Z) in the GP data system for the AS patients at 10-year, 5-year, 1-year and 3-month retrospective, and 3-month prospective, time spans relative to the date of questionnaire completion. The most frequent disease groups indicated by the read codes fall under musculoskeletal/connective tissue (N); skin/subcutaneous tissue disease (M); nervous system/sense organ diseases (F); respiratory system disorders (H); digestive system disorders (J); symptoms, signs, and ill-defined conditions (R); and infectious/parasitic diseases (A). These are consistent with the conditions that are associated with AS (e.g. psoriasis, uveitis, colitis) or that complicate its treatment (e.g. respiratory infections in patients on immunosuppressive therapy). Again, further exploration of the parent read codes by going beyond the first digit would reveal the specific disease diagnosis.

Examples using the PAS cohort linked to the OP and IP data systems
We tracked 236 and 296 AS patients retrospectively through the OP and IP data systems respectively to derive the average number of OP visits and IP admissions made by the patients grouped under different stratification schemes. The results are presented in Table 5. The table has two panels: columns 1-4 relate to OP visits, and columns 5-8 relate to IP admissions.

OP visits
The estimates in columns 3 and 4 of Table 5 report the average number of visits for the 3-month recall period obtained from the routine data system and the questionnaires respectively. In principle, the numbers in these two columns should match, as they relate to OP visits only. It is observed that patients in all groups tend to overestimate the number of OP visits in the questionnaire.
The over-reporting of OP visits is most marked in those with high disease activity (BASDAI) and functional impairment (BASFI). We postulate that one reason for this may be that these patients find it more difficult to physically attend OP clinics because of their greater disease activity and disability. Again, this may have important implications when using patient-reported data to estimate the utilisation of healthcare resources.

Figure 2 shows the numbers of self-reported and recorded OP visits by the AS patients. The x-axis shows the number of visits during the 3-month recall period self-reported in the questionnaire, and the y-axis shows the corresponding number of visits obtained from the OP records. The numbers in brackets are the numbers of patients. For instance, out of 79 patients who reported in the questionnaire having visited the OP once during the 3-month recall period, we found 29 patients with zero visits, 34 patients with 1 visit, 9 patients with 2 visits, and 7 patients with more than 2 visits. It can be seen that, in general, the patients overestimated the number of OP visits when completing the questionnaires. 29 patients reported an OP visit that was not recorded during the 3-month recall period, highlighting issues with using recall data.

IP visits
In the second panel of Table 5 (i.e. columns 5-8), we tracked 296 AS patients through the IP data system to obtain the number of IP admissions. Again, in principle, columns 7 and 8 should match, and indeed these results for IP admissions are more similar than the corresponding data for OP visits (columns 3 and 4). The results in columns 7 and 8 suggest that, in contrast to OP visits, patients tend to underestimate the number of IP visits. Younger patients and those with less severe disease activity (BASDAI) and functional impairment (BASFI) were most likely to underestimate IP admissions. We postulate that this is because the admission in these patients was more likely to be for a reason unrelated to their AS, and was therefore overlooked and not reported in a questionnaire for an AS study.

Tracking through the retrospective data of those who had at least one IP admission, Table 6 provides additional complementary information on the number of days spent in hospital; this information was not captured in the questionnaire data. These data indicate that older patients, and those with longer disease duration, higher disease activity and greater functional impairment, spent significantly more days in hospital than their relevant comparison groups. This may also have been a contributory factor in the relative under-reporting of IP admissions by patients with lower disease severity or functional impairment.

Discussion
The above examples demonstrate that linked routine data enables the validation and clarification of patient-reported data, the retrospective and prospective tracking of patient healthcare utilisation and pathways, and the referral history in a cohort of patients with AS.
Such analysis makes it possible to deduce whether the anonymised individuals in question were suffering from any common co-morbidities or were in receipt of healthcare treatment prior to the occurrence of the reference event, while it also allows any frequent complications requiring medical attention in the days, months and years following the event (which could be an intervention or a questionnaire) to be identified. From a methodological perspective, any linkage system adds new dimensions and perspectives to traditional health-related research (e.g. complementing and enhancing the results of RCTs), serves as a resource for clinical audits, and supports a variety of health impact assessment exercises. For example, the longitudinal routine data would allow an assessment of the impact of specific healthcare interventions on subsequent healthcare utilisation (e.g. A&E visits or hospital admissions).

Important limitations of relying solely on questionnaire data include the reliance on accurate patient recall and the possibility that the healthcare events of interest occur not within the limited recall period (e.g. 3 months) but just before it, or after the completion date of the questionnaire. This makes extrapolation of the questionnaire data over an extended period of time unreliable. The longitudinal linked routine data comes to aid in this respect.

The linkage of the questionnaire data from the PAS patients with the GP data, as shown above, enhances and helps make sense of the rich information obtained from the GP read codes. This constitutes a rich health history for these AS patients, for whom we can carry out patient pathway analysis from various clinical and economic aspects. In particular, in keeping with other AS cohorts, these patients had an average lag of about 8 years from symptom onset to AS diagnosis, on which we can conduct pre- and post-diagnosis analyses of health care utilisation. Again, a matrix of traits based on the PAS questionnaire information linked with the SAIL data system will help in the profiling of AS patients for health and other related interventions. Table 7 summarises the types of information gathered through the PAS questionnaires, which could all be linked with the routine data sources as well as with various demographic, socio-economic, and environmental attributes of the patients.

The use of the HERALD methodology can stratify groups of patients to identify the early characteristics of patients who subsequently develop severe disease, thus enabling these patients to be targeted with early aggressive therapy in order to prevent severe damage and the need for surgery. This profiling can be used to estimate the potential resource savings of focusing treatment on those patients with patterns of disease suggestive of the development of a severe outcome. All of this will directly affect patient care for AS in terms of informing NHS service provision and NICE guidelines for the use of expensive biological therapies, and informing the process of cost-effectiveness assessment. In principle, the methods developed for the PAS cohort and described here can be extrapolated to other chronic disease conditions [6], thus improving patient care for all those conditions. Linked routine data provides many opportunities for enhanced healthcare research and allows evaluation of impacts beyond the limited primary outcomes of interventional studies.
As an example, the expanding SAIL databank in Wales already holds over a billion anonymised records from various databases, which can be anonymously linked at the individual record level. The combination of routine data with information from patients and RCTs allows the validation of real-life data and its application for clinical research. These linkable databases provide factual and continuous information with rich clinical and non-clinical details, which offers wide-ranging opportunities in the realm of conducting evaluative research, clinical epidemiology, trial recruitment, genetic research, basic research on biological markers, stratified medicine, post-trial surveillance, risk assessment, service delivery evaluation, resource use, decision analysis, identification of early disease predictors, and the identification of subjects for prospective studies [6,22,23]. This data system also offers the opportunity for post-marketing surveillance and pharmacovigilance of new, expensive, and often potentially dangerous healthcare interventions in real-life settings. Complementing this resource with targeted health economic analysis, as proposed in the HERALD methodology, offers a unique opportunity to deliver the level of health economic data required to evaluate and drive forward cost-effective modern healthcare services.

Table 7. Types of information gathered through the PAS questionnaires.
- Baseline: co-morbidities, family history, age of diagnosis and first symptoms, disease activity [11], function [12], quality of life (EQ-5D) [13], and visits to health professionals
- Not at work (3 months): previous occupation, activity impairment questionnaire
- At work (3 months): work questionnaire, including information about current and previous occupation, activity impairment and work limitations questionnaire (WLQ) [14], work productivity and activity impairment questionnaire (WPAI-SHP) [15,16]
- AS costs (9 months): AS cost questionnaire including detailed patient-level information about visits to health care facilities and professionals, AS-related pathology and other tests, other conditions, medications, and the costs of various aspects of treatment and disability
- Exercise and fatigue (15 months): International Physical Activity Questionnaire (IPAQ) [17], disease activity, function, Behavioural Regulation in Exercise Questionnaire (BREQ-2) [18], Pittsburgh Sleep Quality Index [19], and the Hospital Anxiety and Depression Scale [20,21]
- Medication (0, 3, 6, 9, 12, 15 months): medication
Legend: AS patients in the PAS cohort were asked to consent to completing questionnaires either online, if they have internet access, or by post. A website has been developed to give access to the questionnaires: http://www.ashealth.co.uk/

Conclusion
The linkage of routine data, patient-completed questionnaires and trial data offers unique opportunities for enhanced health economic analysis, including assessment of the validity, reliability and generalisability of health economic data, not possible through the use of traditional isolated datasets. The information obtained from the linked data system will help improve patient pathways, and thus maximise clinical efficiency and efficiency in the use of resources.
Complex Economic Activities Concentrate in Large Cities

Why do some economic activities agglomerate more than others? And why does the agglomeration of some economic activities continue to increase despite recent developments in communication and transportation technologies? In this paper, we present evidence that complex economic activities concentrate more in large cities. We find this to be true for technologies, scientific publications, industries, and occupations. Using historical patent data, we show that the urban concentration of complex economic activities has been continuously increasing since 1850. These findings suggest that the increasing urban concentration of jobs and innovation might be a consequence of the growing complexity of the economy.

Introduction
In the year 2000, the San Francisco Bay Area produced more than 139 patents per 100,000 people, or 12% of all patenting activity in the United States. Fifteen years later, the Bay Area had more than doubled its per capita patenting output, generating more than 340 patents per 100,000 people. The Bay Area now accounts for over 18% of all patenting activity in the U.S., more U.S. patents than any country in the world with the exception of Japan. But why is invention concentrated in places like the Bay Area? And why has this concentration increased so rapidly despite recent advances in communication and transportation technologies? Is the spatial concentration of economic activities a special characteristic of patents in high-tech industries, or is it a more general feature affecting all sectors of the economy?

In this paper we show that the more complex an economic activity is, the larger is its tendency to concentrate in large cities. We find this to be true for patents, research papers, industries, and occupations. Complex industries, such as biotech and semiconductors, exhibit a much greater degree of concentration in large cities than less complex industries such as apparel and furniture manufacturing. Using historical patent data, we show that the concentration in large cities of more complex inventions has increased continuously since at least 1850, while that of the least complex technologies has decreased since the 1970s.

The agglomeration of economic activities is considered a key ingredient of knowledge creation and economic growth (1,2). The standard hypothesis used to explain agglomeration is that firms care about sharing, learning, and matching (3). Firms seek locations where they can share inputs with other economic agents, learn from others, and match with the right employees. However, we still have much to learn about why some economic activities concentrate more than others. Two recently developed strands of literature can shed some light on the factors that explain the variation of spatial concentration across activities: the literature on urban concentration (4) and the literature on economic complexity (5-7). The literature on urban concentration shows evidence that economic outputs increase faster than city size (they scale super-linearly) (4). This super-linear scaling is known to vary across economic activities (8,9), but it is not clear why some activities scale more super-linearly than others. The literature on economic complexity has shown that economies engaged in more complex (more knowledge-intense) activities are richer and grow faster (5,10). Here, we show that the spatial concentration of economic activities increases with their complexity.
But why would complex economic activities concentrate more in large cities? More complex economic processes require a deeper division of knowledge (11) and thus operate more efficiently in large cities. This is consistent with the idea that knowledge complexity pushes individuals to narrow their expertise and specialize (12). This division of knowledge creates coordination costs that can be solved by the multiple interaction opportunities provided by cities (13,14). Here, we validate this idea by showing that differences in the urban concentration of economic activities, as measured by their scaling exponents (4), are largely explained by differences in their level of complexity. That is, technologies that recombine more recent knowledge, research fields that involve larger scientific teams, industries that hire more educated workers, and occupations that require more years of education are more concentrated in large cities than less complex technologies, research papers, industries and occupations. Moreover, using historical patent data going back to 1850, we show that the concentration of more complex forms of knowledge production has increased continuously over the last century and a half. These findings explain why some economic processes concentrate disproportionately in cities and contribute to our general understanding of the spatial organization of the economy.

DATA
We analyze the spatial distribution of patents, research papers, industries, and occupations in 353 Metropolitan Statistical Areas (MSAs) of the United States. For recent patents, we use the Patent Network Dataverse (15), providing longitude and latitude coordinates of inventor addresses for patents granted by the United States Patent and Trademark Office (USPTO) from 1975 to 2010. For historical patents (1850-1974), we use HistPat. HistPat was built using optically recognized and publicly available documents from the USPTO, combining text-mining algorithms with statistical models to provide geographical information for historical patent documents (16). We disaggregate patents into 30 technologies as defined by the National Bureau of Economic Research (2-digit sub-categories) (17). For scientific papers, we use publication data from Elsevier's Scopus database covering the time period 1996-2008 (8,18). Publications are disaggregated into 23 scientific disciplines as defined by the Scopus classification (2-digit major thematic categories). For industries, we use 2015 GDP data from the Bureau of Economic Analysis to quantify the economic output of MSAs in 18 industries as defined by the North American Industry Classification System (2-digit NAICS). For occupations, we use 2015 employment statistics from the Bureau of Labor Statistics (BLS) disaggregated into 22 occupations according to the Standard Occupational Classification system (2-digit SOC). Population data originate from the US Census. See the supplementary material (SM section 1) for additional information.

Figure 1 shows the urban concentration of patents (Fig. 1A), research papers (Fig. 1B), industries (Fig. 1C), and occupations (Fig. 1D) in the United States. Peaks are, respectively, proportional to the number of patents, the number of research papers, the GDP, and the total employment of each metro area. In all four cases we find economic activities to be highly concentrated, especially in large cities. Figures 1E-H characterize this urban concentration by showing the scaling laws followed by patents, research papers, industries, and occupations.
Scaling laws in cities are power-law relationships of the form y ~ x^β, where x is the population of a city, y is a measure of output (patents, papers, GDP, or jobs), and β is the scaling exponent. In the case of patents (Figure 1E), the number of patents granted to a city scales super-linearly with population, with an exponent of β = 1.26. In the case of research papers, the number of papers published by authors in a metro area grows as the β = 1.54 power of that metro area's population. GDP, on the other hand, grows as the β = 1.11 power of population, and total employment grows as the β = 1.04 power of the population in an MSA (these scaling exponents are in agreement with those reported in (7)).

RESULTS

Next, we repeat this exercise by studying the scaling laws followed by specific technologies, research areas, industries, and occupations. (Figure 1I-L caption: Scaling relationships for pairs of economic activities with large differences in their scaling exponents: (I) patents in "computer, hardware, and software" and "pipes and joints"; (J) research papers in "Neuroscience" and "Arts and Humanities"; (K) economic output (GDP) of "professional and scientific activities" and "retail trade"; (L) employment in "computer and mathematical" occupations and in "installation, maintenance, and repair.")

In Figure 2, we explain the urban concentration of economic activities using measures of their knowledge complexity. For technologies, we measure complexity using the vintage of the knowledge combined in the patent, measured as the average year of appearance of the subclasses in which the patent makes a knowledge claim. This assumes that patents that recombine more recent knowledge are, on average, more complex (19). For scientific fields, we use the average size of the team involved in a scientific publication. Team size is a direct interpretation of the idea that complex scientific activities require a finer division of knowledge (20). For industries, we use the average years of education of an industry's employees. For occupations, we use average years of education as a measure of the specialization required to participate in each activity. Because we compare the spatial concentration of economic activities with their economic complexity, we avoid using complexity measures that are derived from spatial information (8). For more information about these definitions and robustness analyses see section 3 of the SM.

In all cases, we observe that the spatial concentration of economic processes increases with their knowledge complexity. For technologies, it increases with the recency of the combined sub-classes (Pearson's r = 0.82, p < 1×10^-3); for scientific fields, it increases with the average number of authors of a paper in a field (Pearson's r = 0.72, p < 1×10^-3); for industries, it increases with the years of education of the workers employed in that industry (Pearson's r = 0.70, p < 1×10^-3); and for occupations, it increases with the average years of education of the workers within that occupational category (Pearson's r = 0.62, p < 1×10^-3). In all four cases, the more complex the economic activity, the more super-linearly it scales with population, meaning that more complex economic activities concentrate more in large cities. We confirm the statistical significance of this relationship using regression analysis and a variety of alternative measures of spatial concentration and economic complexity (see section 3 of the SM).
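Computationally, the scaling analysis described above reduces to fitting a slope on log-transformed data. The following minimal sketch shows one way to estimate β; the MSA population and patent figures in it are hypothetical placeholders, not the paper's data.

```python
# A minimal sketch of estimating the scaling exponent beta in y ~ x^beta
# by OLS on log-transformed data. The (population, output) pairs below
# are hypothetical placeholders for a handful of MSAs.
import numpy as np

def scaling_exponent(population, output):
    """Fit log(y) = alpha + beta * log(x) and return beta."""
    x = np.log(np.asarray(population, dtype=float))
    y = np.log(np.asarray(output, dtype=float))
    beta, alpha = np.polyfit(x, y, deg=1)  # slope is the scaling exponent
    return beta

pop = [2.5e5, 8.0e5, 1.6e6, 4.1e6, 9.5e6]      # hypothetical MSA populations
patents = [60, 260, 640, 2200, 6500]           # hypothetical patent counts
print(f"beta = {scaling_exponent(pop, patents):.2f}")  # > 1 means super-linear
```

With these toy inputs the fitted exponent is about 1.3, i.e., super-linear scaling of the kind reported for patents above.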
Next, we look at historical data to ask whether the concentration of economic activities has increased with the complexity of the economy. To explore this question, we use historical patent data, since it provides the longest time series (going back to 1850). Figure 3A shows the scaling exponent observed for the top 25% most complex patents granted each decade between 1850 and 2010 (red line). It shows that the urban concentration of complex technologies, those that recombine newer knowledge, has continuously increased for the past 150 years and has accelerated with each industrial revolution. Starting with the second industrial revolution (1870), urban scaling of complex knowledge became increasingly super-linear, growing from a scaling exponent of 1.15 in 1870 to 1.55 by the 1930s. The urban concentration of the most complex patents then plateaued, before increasing again after the 1970s IT revolution and reaching a scaling exponent of almost 1.8 in 2010. The least complex patents (light yellow line), on the other hand, have always been less concentrated than complex patents. After the 1970s, their urban concentration even started to decrease, with a scaling exponent falling to less than 1.2. The IT revolution has therefore been followed by an increasing concentration of the most complex technologies in cities, and a decreasing urban concentration of the least complex ones. Robustness analyses can be found in section 4 of the SM.

To further explore the evolution of the spatial concentration of patenting activity, we separate patents into their six main technological categories, as defined by the NBER: "Mechanical", "Chemical", "Electrical & Electronic", "Computers & Communication", "Drugs & Medical", and "Others". Figure 3B shows the scaling exponent observed for each of these technological categories for each decade between 1850 and 2010. "Mechanical" and "Others" are the technologies that exhibit the highest scaling in the mid-nineteenth century, meaning they are the ones most concentrated in large cities, with "Others" mostly composed of patents related to textiles during this period. The scaling exponents of these categories, however, do not grow substantially during the following decades, meaning that most of the rise in scaling observed after 1870 for all patents (Figure 3A) can be attributed to an increase in the urban concentration of "Electrical & Electronic" patents. Starting in 1950, "Computers & Communications" and "Drugs & Medical" become increasingly more concentrated, reaching the highest scaling exponents observed for all categories. Together, these results show that the urban concentration of patenting activity exhibits a long-term cycle, rising during the heyday of the technologies developed and then declining once the technologies mature.
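As a rough illustration of the decade-level analysis just described, the sketch below fits one exponent per decade for the top complexity quartile of patents. The column names and grouping choices are assumptions for illustration only; the authors' actual pipeline and complexity measure (knowledge vintage) are documented in the SM.

```python
# A hedged sketch of the decade-level analysis: per decade, keep the top
# complexity quartile of patents, count them by MSA, and fit a scaling
# exponent against MSA population. Column names are hypothetical.
import numpy as np
import pandas as pd

def decade_exponents(df: pd.DataFrame) -> pd.Series:
    """df has one row per patent with columns:
    decade, msa, msa_population, complexity."""
    out = {}
    for decade, grp in df.groupby("decade"):
        top = grp[grp["complexity"] >= grp["complexity"].quantile(0.75)]
        counts = top.groupby("msa").agg(
            n=("complexity", "size"),
            pop=("msa_population", "first"),
        )
        # Note: MSAs with zero top-quartile patents drop out of the fit,
        # which biases small cities; a fuller treatment would address this.
        beta, _ = np.polyfit(np.log(counts["pop"]), np.log(counts["n"]), 1)
        out[decade] = beta
    return pd.Series(out, name="beta_top_quartile")
```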
DISCUSSION

Why economic activities concentrate remains one of the longest-standing puzzles in economic geography, urban science, and economic development. Yet, while there are many theories that can be used to explain the general tendency for economic activities to agglomerate (e.g. matching, learning, and sharing), we still need a better understanding of (i) why some activities have a stronger tendency to agglomerate than others, and (ii) why economic activities continue to agglomerate despite recent advances in communication and transportation technologies. Here we use differences in complexity to explain variations in the degree to which economic processes agglomerate.

We argue that complex economic activities tend to be more concentrated in large urban areas because they require a deeper division of knowledge and labor. This also tells us that much of the (tacit) knowledge needed to perform these activities is embodied in social networks and does not travel well through digital communication channels (20). The increase in agglomeration for more complex economic processes is reflected in their respective scaling exponents. For patents, research papers, industries, and occupations, we find that the more complex, or more knowledge-intensive, the activity is, the more likely it is to exhibit super-linear scaling. Moreover, when we look at over a century of patenting activity in the U.S., we find that the dynamics of urban agglomeration are not static. On the contrary, the concentration of patenting activity in urban areas has increased during most of the last century and a half, especially during the second industrial revolution. During the IT revolution the concentration of the most and least complex activities diverged. The most complex technologies have reached unprecedented levels of urban agglomeration, while the least complex technologies experienced a decline in their agglomeration levels with the rise of communication technologies. This could explain why the world has become flatter for some activities (22) and spikier for others (23).

The finding that more complex economic activities agglomerate more strongly has important implications for spatial inequality. If complexity and agglomeration cannot be divorced, the spatial inequality observed among large and small cities will continue to increase with future technological progress. This would happen as firms working in the complex economic activities that drive economic growth, such as pharma, artificial intelligence, and data services, continue to concentrate in a few large cities. Policymakers must recognize that the forces generating growth and innovation may be the same forces that are contributing to increasing spatial inequality.
5-N-Carboxyimino-6-N-chloroaminopyrimidine-2,4(3H)-dione as a hypochlorite-specific oxidation product of uric acid

Although uric acid is known to react with many reactive oxygen species, its specific oxidation products have not been fully characterized. We now report that 5-N-carboxyimino-6-N-chloroaminopyrimidine-2,4(3H)-dione (CCPD) is a hypochlorite (ClO−)-specific oxidation product of uric acid. The yield of CCPD was 40-70% regardless of the rate of mixing of ClO− with uric acid. A previously reported product, allantoin (AL), was a minor product. Its yield (0-20%) decreased with decreasing rate of mixing of ClO− with uric acid, indicating that allantoin is less important in vivo. Kinetic studies revealed that the formation of CCPD required two molecules of ClO− per uric acid reacted. The identity of CCPD was determined from its molecular formula (C5H3ClN4O4), measured by LC/time-of-flight mass spectrometry, and a plausible reaction mechanism. This assignment was verified by the fact that all mass fragments (m/z −173, −138, −113, and −110) fit with the chemical structure of CCPD and its tautomers. Isolated CCPD was stable at pH 6.0-8.0 at 37°C for at least 6 h. The above results and the fact that uric acid is widely distributed in the human body at relatively high concentrations indicate that CCPD is a good marker of ClO− generation in vivo.

Introduction

Oxidative stress is associated with lipid peroxidation (1), DNA damage (2), and protein carbonylation (3), and thus can cause many diseases such as cancer (4), diabetes (5), Alzheimer's disease (6), and ischemia-reperfusion injury (7,8). Since oxidative stress is initiated by the formation of reactive oxygen species (ROS), identification of specific ROS in vivo is important in pathological studies. For identifying ROS in vivo, detection of ROS-specific oxidation products of endogenous antioxidants is a reasonable strategy. Uric acid (UA, Fig. 1) is a suitable substrate for this purpose. Uric acid, which is a terminal metabolite of purine in primates including humans, is widely distributed in body fluid at relatively high concentrations. It reacts with various ROS (9-11) to afford specific products (Fig. 1): e.g., free radical-induced oxidation gives allantoin (AL) (12), ONOO−-induced oxidation yields triuret (13), and nitric oxide (NO•) gives 6-aminouracil (14). Recently, we identified parabanic acid as a singlet oxygen-specific oxidation product of UA and demonstrated its formation on human skin surfaces after sunlight exposure (15). On the other hand, a hypochlorite (ClO−)-specific oxidation product of UA has not yet been characterized. ClO− oxidizes sulfide to sulfoxide (16), converts hydrogen peroxide (H2O2) to singlet oxygen (17), and chlorinates tyrosine to 3-chlorotyrosine (18). Myeloperoxidase released from activated neutrophils catalyzes the reaction of Cl− with H2O2 to form ClO−, showing strong microbicidal action against germs including bacteria and Norwalk virus. However, excess ClO− causes oxidative damage to living tissues, especially under acute inflammatory conditions. In this study, we focused on a ClO−-specific oxidation product of UA and identified it as 5-N-carboxyimino-6-N-chloroaminopyrimidine-2,4(3H)-dione (CCPD, Fig. 1) using time-of-flight mass spectrometry (TOFMS) and a plausible reaction mechanism. The yield of CCPD was 40-70%, and isolated CCPD was stable at pH 6.0-8.0 at 37°C for 6 h.
The above results and the fact that UA is widely distributed in the human body at relatively high concentrations indicate that CCPD is a good marker of ClO− generation in vivo.

Materials and Methods

Chemicals. UA, NaOCl, and other chemicals were purchased from Wako Pure Chemical Industries, Ltd. (Osaka, Japan) and used as received. The concentration of NaOCl was determined as 1.95 M by titration with 0.1 M sodium thiosulfate.

Reaction of UA and ClO−. The reaction of UA and ClO− was conducted at room temperature. UA (220-1,000 µM) was dissolved in 30 ml of 100 mM phosphate buffer solution (pH 7.4) and the solution was stirred with a magnetic stirrer. The NaOCl solution (19.5-195 mM) was introduced into the UA solution (30 ml) at a constant rate (0.25-2.08 µl/min) using a syringe pump (Harvard Apparatus, Holliston, Massachusetts) or added instantaneously to the UA solution. Decay of UA and formation of an unknown product (U1) were monitored by HPLC, LC/TOFMS, and LC/MS/MS, as described below.

HPLC analysis and isolation. UA, U1, and AL were measured by reversed-phase HPLC equipped with a UV detector monitoring the absorption at 210 nm. The mobile phase was aqueous ammonium acetate (40 mM) delivered at a rate of 1.0 ml/min. An ODS column (Capcellpak C18, UG80, Shiseido, Tokyo, Japan; 5 µm, 4.6 mm × 250 mm) was used for separation. Retention times for UA, U1, and AL were 7.8, 6.0, and 2.5 min, respectively. For the isolation of U1, a preparative HPLC system was used. The mobile phase and the separation column were aqueous ammonium acetate (40 mM) delivered at a rate of 3.0 ml/min and an ODS column (Supelcosil SPLC-18, Sigma-Aldrich Japan, Tokyo, Japan; 5 µm, 250 mm × 10.0 mm), respectively. The retention time of U1 was 6.0 min and the eluate containing U1 was collected. The U1 fraction was further purified by HPLC as follows. The mobile phase was 15% methanol delivered at 1.0 ml/min. The separation column was a Develosil C30-UG column (Nomura Chemical Co., Ltd., Tokyo, Japan; 5 µm, 250 mm × 4.6 mm). The fractionation was monitored by the absorption at 210 nm. Solvents of the U1 fractions were removed under N2 gas flow. U1 was then redissolved in water and stored at 4°C.

LC/TOFMS analysis. To obtain accurate mass-to-charge ratios (m/z) of U1, HPLC combined with TOFMS (JMS-T100LC, JEOL, Ltd., Tokyo, Japan) was used. Negative ionization was performed by electrospray ionization (ESI) at an ionization potential of −2,000 V. The optimized voltages applied to the ring lens, outer orifice, inner orifice, and ion guide were −5 V, −10 V, −5 V, and −500 V, respectively, for measurement of the U1 dominant ion. Fragmentation was carried out with a voltage of −50 V applied to the inner orifice. To obtain accurate m/z values, trifluoroacetic acid (TFA) was used as an internal calibration standard.

LC/MS/MS analysis. U1 and AL were quantified using an LC/MS/MS system (LCMS-8040, Shimadzu, Kyoto, Japan). Aqueous formic acid (0.2 ml/min, pH 3.5) was used as the mobile phase with a Develosil C30-UG column (Nomura Chemical Co., Ltd., Tokyo, Japan; 5 µm, 250 mm × 2.0 mm). Negative ionization was performed at −3.2 kV using an electrospray probe. For identification and quantification of each compound, multiple reaction monitoring measurements were obtained. Optimized precursor/product ion combinations for U1 and AL were determined as −217/−110 and −157/−97, respectively. Chromatographic retention times of U1 and AL were 35 and 4.5 min, respectively.
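As a sanity check on the formula assignment reported in the Results section below, the theoretical monoisotopic m/z of the deprotonated ion [M−H]− of C5H3ClN4O4 can be computed from standard atomic masses. This is only an illustrative back-of-the-envelope calculation, not the authors' calibration procedure (which used TFA as an internal standard); the computed values land within a few mDa of the reported ions.

```python
# Theoretical [M-H]- m/z for C5H3ClN4O4 and its 37Cl isotopologue,
# from standard monoisotopic atomic masses.
MONO = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915,
        "Cl35": 34.968853, "Cl37": 36.965903}
ELECTRON = 0.000549  # electron mass in u

def deprotonated_mz(cl_mass):
    # Neutral monoisotopic mass of C5H3ClN4O4 with the given Cl isotope.
    m = 5 * MONO["C"] + 3 * MONO["H"] + 4 * MONO["N"] + 4 * MONO["O"] + cl_mass
    # Deprotonation removes a proton (H atom minus its electron).
    return -(m - MONO["H"] + ELECTRON)

print(f"[M-H]- (35Cl): {deprotonated_mz(MONO['Cl35']):.5f}")  # ~ -216.977
print(f"[M-H]- (37Cl): {deprotonated_mz(MONO['Cl37']):.5f}")  # ~ -218.974
```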
Stability of CCPD in solution. The isolated CCPD was dissolved in phosphate-buffered solutions adjusted to various pHs (6.0, 7.0, 7.4, and 8.0). Each solution was stored at 37°C or at room temperature, and the change in the CCPD concentration was determined by HPLC for 6 h or 7 days, respectively.

Results and Discussion

Primary product of ClO−-induced oxidation of UA. When 100 mM phosphate buffer (pH 7.4) containing UA (230 µM) was mixed with NaOCl continuously (1.35 µM/min) using a syringe pump, an unidentified peak U1 was observed on the HPLC chromatogram 20 min after the beginning of NaOCl introduction (Fig. 2A). The peak increased over time with the concomitant decrease of UA, but no formation of AL was observed (Fig. 2B). The reaction mixture was analyzed by LC/TOFMS with negative ESI, and the MS spectrum of U1 is shown in Fig. 2C. The accurate m/z value of the dominant anion was determined to be −216.97421 using TFA as an internal standard. Therefore, the chemical formula of U1 was estimated as C5H3ClN4O4, and the presence of Cl was indicated by the monoisotopic m/z of the ³⁷Cl derivative (m/z = −218.97160). We next purified U1 using two different reversed-phase HPLC conditions as described in Materials and Methods. LC/TOFMS analysis of isolated U1 gave four fragment ions whose m/z values were −172.98242, −137.99124, −112.99384, and −109.99697 (Fig. 2D), and their molecular formulas were estimated accordingly [the formula assignments are truncated in the source].

Kinetic studies. Next, we compared the rate of NaOCl introduction (Ri) and the rate of UA decomposition (Rd), because the Ri/Rd ratio indicates the pseudo-stoichiometric number of the reaction (Table 1). The Ri/Rd values were approximately 2 at low Ri conditions (<1.35 µM/min), indicating that one molecule of UA reacted with two molecules of ClO−. In other words, two molecules of ClO− are required for the formation of one molecule of CCPD. When Ri was greater than 6.50 µM/min, AL was detected as a byproduct and the Ri/Rd values increased to ~2.7, indicating that the formation of one molecule of AL requires at least 3 molecules of ClO−. This was also the case with instantaneous mixing (Table 1). However, we will not go into details of this, since AL is not a ClO−-specific major oxidation product of UA. Therefore, HO− and Cl− can be eliminated from the reaction product. A proposed reaction scheme is shown in Fig. 3. The lactim (N=C-O-H) of UA and ClO− form a 6-membered ring; release of HO− and HCl yields intermediate (4). Cleavage at the C8-N9 bond results in the formation of intermediate (5), which is isomerized to a carboxyl anion (6) and then protonated to form 5-N-carboxyimino-6-N-chloroaminopyrimidine-2,4(3H)-dione (CCPD) (7). Thus, the release of HO− and HCl followed by protonation is equivalent to the elimination of HO− and Cl−. As expected, the molecular formula of CCPD is C5H3ClN4O4, which is the same as that of U1. CCPD has many tautomers, such as (8) and (9). To confirm that CCPD is the true ClO−-induced oxidation product of UA, matching of the four fragments with CCPD was examined. As shown in Fig. 3, all fragments can be found in CCPD and its tautomers (8 and 9). Based on the above results, we concluded that CCPD is the ClO−-specific oxidation product of UA. It should be noted that ¹H and ¹³C NMR spectroscopies were not useful to identify this type of compound, since there are few protons and the C=O and C=N structures are often repeated.
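The pseudo-stoichiometric argument above amounts to comparing the pump's introduction rate with the slope of the UA decay curve. A minimal sketch follows; the concentration time series is hypothetical and chosen to reproduce the Ri/Rd ≈ 2 case.

```python
# Illustrative estimate of the pseudo-stoichiometric ratio Ri/Rd from a
# UA decay time course under constant NaOCl introduction.
import numpy as np

R_i = 1.35  # NaOCl introduction rate, uM/min (syringe pump setting)

t = np.array([0, 20, 40, 60, 80])                   # time, min
ua = np.array([230.0, 216.5, 203.0, 189.5, 176.0])  # [UA], uM (hypothetical)

# UA decomposition rate from the slope of a linear fit to the decay.
R_d = -np.polyfit(t, ua, 1)[0]
print(f"Rd = {R_d:.3f} uM/min, Ri/Rd = {R_i / R_d:.1f}")  # ~2 => 2 ClO- per UA
```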
Stability of CCPD in aqueous solution at various pHs. The effect of pH on the stability of aqueous CCPD solution was examined next. CCPD was very stable at all pHs examined (6.0-8.0) at 37°C for 6 h (Fig. 4A) and relatively stable at room temperature for 7 days (Fig. 4B). These results indicate that CCPD is a good marker of ClO− formation in vivo. We plan to apply this probe to plasma samples from patients with acute inflammation such as sepsis.

Conclusions

A ClO−-specific oxidation product was produced from two molecules of ClO− and one molecule of UA. It was identified as CCPD by its mass number and a plausible reaction scheme, and the assignment was confirmed by the mass fragments. Aqueous CCPD was stable at physiological pH. These results suggest that CCPD can be a good indicator of ClO− generation in vivo.

Acknowledgments

We thank Dr. Kazutoshi Watanabe for his valuable comments.
Phosphate mineral fertilizers, trace metals and human health

Fertilizers, indispensable as they may seem, are nevertheless materials that also clearly cause serious environmental contamination, notably in agricultural soils. The dire necessity for increased food production has been more marked than ever before. Mineral fertilizers, which are indeed an important nutrient source used for enhanced food production, have unfortunately now become a 'necessary evil'. Excessive and continuous use of nitrogen and phosphorus fertilizers for decades has converted agricultural soils into virtual chemical time bombs. Phosphate rocks, by their very geological and mineralogical nature, contain a host of environmentally hazardous chemical elements such as Cd, Pb, Hg, U, Cr and As, among others. The superphosphates are particularly abundant in these hazardous elements, which contaminate agricultural soils through fertilizer use. The leachability and dispersion of some of these toxic elements are most pronounced in certain types of soils such as andisols. After the discovery of the dreaded disease 'Itai-Itai', cadmium was listed as one of the most potentially dangerous elements found in phosphate fertilizers. Uranium, apart from its radiotoxicity, is chemotoxic, and on account of these two properties it is considered a disease-causing element. The geochemical pathways lead these toxic elements into food crops, soil, water, air and ultimately the human body tissues via the food chain. Several diseases are known to be caused by the excessive presence of the toxic elements, among which gastrointestinal, pulmonary and kidney ailments are most noteworthy.

INTRODUCTION

This review provides information on the abundance of toxic metals in phosphate rocks and phosphate fertilizers and their impact on soil pollution, accumulation in plants and effects on human health. Two elements, namely cadmium (Cd) and uranium (U), are considered in detail due to their importance as potent toxic materials as well as the availability of much data.
Much of the world's phosphate fertilizers are produced from phosphate rocks, which contain the mineral apatite [Ca5(PO4)3(OH,F,Cl)]. The term phosphate rock (5), however, is rather vaguely defined and generally encompasses naturally occurring geological materials that contain one or more phosphate minerals suitable for commercial use. The term rock phosphate is also used, mainly in the field of agriculture. Mineralogically, the phosphate rocks have different origins and chemical and physical properties. The principal phosphate minerals in them are the apatites (Ca phosphates). Chemically, a pure fluorapatite would contain 42% P2O5, while francolite, another mineral found in phosphate rocks, has 34% P2O5. The five main types of phosphate deposits mined are:

i) marine deposits
ii) igneous deposits
iii) metamorphic deposits
iv) biogenic deposits
v) secondary deposits formed by weathering

It has been estimated that 75% of the world's phosphate resources are obtained from sedimentary, marine rock deposits, while 15-20% are obtained from igneous and weathered deposits. The biogenic resources account for only 1-2% (1). Fluorapatite [Ca5(PO4)3F] is found mainly in igneous and metamorphic deposits, and hydroxylapatite [Ca5(PO4)3OH] is found in biogenic deposits such as bone and teeth, in addition to igneous and metamorphic types. Francolite [Ca10-x-yNaxMgy(PO4)6-z(CO3)zF0.4zF2] is common in the marine phosphates and, to a lesser extent, in carbonatite, an igneous type of phosphate.

Phosphorus (P), like potassium and nitrogen, is an essential element for plant growth, and the phosphate fertilizer industry is therefore a major global concern. Very large phosphate deposits are mined in many parts of the world (Tables 1 and 2), while the small to medium deposits often lie dormant due to economic, geographic and technological reasons. In international trade, phosphate ranks just below coal and hydrocarbons, indicating its major importance in agriculture and industry. During the last two decades, 80-90% of world phosphate rock output has been used in the fertilizer industry. The phosphate fertilizer categories include basic slag, ground rock phosphate, the superphosphates, and the ammonium phosphates (DAP and MAP); triple superphosphate, for example, uses phosphoric acid for acidulation (Figure 1).

Contamination of phosphate fertilizers by toxic elements has been observed during their processing using the above methods. Fluorides and metals such as Cd, mercury (Hg), lead (Pb), U and chromium (Cr) have been found to be significantly high in some of the final products ready for marketing. Radionuclides are also often carried through from the phosphate rock and become accumulated. The quantities of these undesirable toxic materials can be dangerously high if the processing does not include adequate cleaner production methodology. Precipitation of heavy metals is highly desirable, particularly when the raw phosphate materials contain an abundance of toxic material. Due to the presence of these toxic elements, which have a major negative impact on the environment, many countries have enforced stringent laws on the maximum permissible levels of the toxic elements in fertilizer products.
Hazardous elements in phosphate fertilizers

The entry of various heavy metals into the human food chain via agricultural products has been given increased attention in recent years due to their possible health impacts. Some potentially toxic metals and trace elements present in agricultural soils enter the human body easily through the food chain. Low levels of all of these elements occur naturally in soils, and some are essential for plants or animals. Even the essential trace elements create a toxicity problem if high levels are present in the environment (2). Application of inorganic fertilizers frequently results in the addition of certain trace elements that are already present in soils as traces. Depending on their origin, inorganic fertilizers such as superphosphates and rock phosphates may also contain different quantities of potentially toxic heavy metals or compounds derived mainly from parent rock materials. They may also result from other sources, such as corroded equipment, catalysts, reagents and materials added to commercial preparations as fillers, coaters, conditioners, etc. (e.g., gypsum, kaolin, limestone) (3). Application of such fertilizers could lead to a modification of the natural geobiochemical equilibria, which may affect human health adversely. Application of potentially toxic metals to agricultural soils is of great concern because they do not degrade and remain in the soil indefinitely (3). Fertilization coupled with irrigation can cause substantial changes in the hydrology and chemistry of groundwater in agricultural areas. Trace substances in mineral fertilizers added to the soils could easily leach and contaminate groundwater resources.

It is now an established fact that mineral fertilizers are a major source of the inorganic elements that enter food materials. The raw materials used to produce the fertilizers are the ultimate sources of these elements, and the phosphate fertilizers are therefore particularly rich in toxic elements (Table 3). The global application of these phosphate fertilizers is enormous (Figure 2).

During the late 1960s, Japan experienced the disastrous effects of cadmium, when the dreaded "Itai-Itai" disease was discovered. The rice plant (Oryza sativa), which produces the staple diet of millions of people in Asia, was known to absorb cadmium readily, and any input into a rice field of a Cd-rich fertilizer (i.e., phosphates) was considered dangerous. Austria, Finland and Sweden enforced limits for Cd in fertilizers at the international level (5). As a result of this, the average Cd content of fertilizers in Sweden fell from 80 mg/kg to about 8 mg/kg P2O5.
The two main geochemical pathways of trace elements are:

i) raw materials (phosphate rocks) → fertilizer → soil → plant → food → human body
ii) raw materials (phosphate rocks) → fertilizers → water → human body

Figure 3 illustrates a model depicting these different geochemical pathways of toxic trace elements affecting human health. It is clearly seen that the toxic elements have their origin in the phosphate ores and in the phosphate fertilizer production processes. If stringent precautions are not taken and the proper regulations are not implemented, agricultural fields such as the rice fields of tropical Asia will become a sink for these hazardous elements. Considering the very large quantities of phosphate fertilizers added, often over several seasons annually, the danger posed to human health is very large.

Cadmium in soils and phosphate fertilizers

van Kauwenbergh (4) has considered about 16 elements that are associated with phosphate rocks and fertilizers and are potentially hazardous to human health. Mineralogically, the apatite structure is known to host more than 25 elements, which include hazardous elements such as Cd, As, Cr, Hg, Pb, Se, U and V (5). These elements are probably found as substituents within the apatite structure, substituting in other associated minerals within the phosphate rocks, or as adsorbates on the apatite surface. It is generally proposed that divalent calcium is replaced by many other divalent cations, including Hg or Pb. In the case of other ions such as V, Cr or U, coupled replacement can be expected.

Weathering processes, which bring about secondary mineral deposits, also contribute quite significantly to the enrichment of metals in the phosphate deposits. Uranium accumulations in the Florida phosphate deposits are known to be caused by such processes (6). U and rare earth elements (REE) in phosphate rocks of Morocco and Queensland in Australia are accumulated in this manner (7). An interesting study by van Kauwenbergh (8) showed that approximately 25% of the Cd in the highly weathered zone of the Togo phosphate deposit is associated with the calcite component, and that phosphate and cadmium tend to concentrate during the leaching of carbonate-bearing beds and removal of calcite.

Table 4 shows the enrichment (depletion) factors of some toxic elements. It is observed that Cr, Hg and V are considered normal in abundance in sedimentary phosphate rocks. The most significant feature is that Cd and U are the most enriched and potentially hazardous elements in sedimentary phosphate rocks, with enrichment factors of 69 and 30, respectively. Table 5 shows the phosphate and cadmium contents of some sedimentary phosphate rocks from countries with phosphate deposits.

Uranium in soil and phosphate fertilizers

Uranium has five main oxidation states (+2, +3, +4, +5 and +6). Of these, +4 and +6 are commonly found in the natural environment. Of the three naturally occurring isotopes (²³⁴U, ²³⁵U and ²³⁸U), 99% is ²³⁸U, which has a half-life of 4.46×10⁹ years. Its decay gives rise to alpha, beta and gamma emissions (9). Apart from the radioactive emissions, uranium is also chemotoxic, and hence it is considered an element of great environmental concern.
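Because fertilizer uranium contents are reported below both as mass fractions (mg/kg) and as activity concentrations (Bq/kg), it is useful to see how the two are linked through the ²³⁸U half-life quoted above. The minimal sketch below treats fertilizer uranium as essentially all ²³⁸U by mass (an approximation) and uses, as an illustrative input, the maximum Sri Lankan TSP value reported later in this review.

```python
# Convert a fertilizer's uranium mass fraction (mg/kg) into an
# approximate 238U activity concentration (Bq/kg).
import math

T_HALF_S = 4.46e9 * 3.156e7   # 238U half-life converted to seconds
N_A = 6.022e23
ATOMS_PER_G = N_A / 238.0     # 238U atoms per gram of uranium

# Specific activity A = lambda * N = (ln 2 / T_half) * N, in Bq per g U.
spec_activity = math.log(2) / T_HALF_S * ATOMS_PER_G
print(f"238U specific activity: {spec_activity:.3g} Bq/g")  # ~1.2e4 Bq/g

u_mg_per_kg = 364             # maximum U content of local TSP (see below)
bq_per_kg = u_mg_per_kg * 1e-3 * spec_activity
print(f"activity concentration: {bq_per_kg:.0f} Bq/kg fertilizer")  # ~4,500
```

The resulting figure of several thousand Bq/kg is of the same order as the elevated fertilizer activities cited later in this section.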
Mineralogically, uranium is found as uraninite (UO2), brannerite [(U,Ca,Ce)(Ti,Fe)2O6] and carnotite [K2(UO2)2(VO4)2·3H2O]. As an accessory element, it is found in the minerals apatite, zircon, allanite and monazite, as well as in complexes with organic matter and in phosphatic ironstone. Uranium is found in a variety of chemical forms in soils, food and drinking water (10). In the earth's crust, uranium is generally found as oxides (UO2, U3O8). In soils, about 80-90% of uranium is present in the +6 state as the uranyl cation (UO2²⁺) (11). It has been reported that global uranium concentrations in soils range from 0.3 to 11.7 mg/kg (12), the average background concentration being 2 mg/kg (13). In the soils, conditions such as high biological and chemical oxygen demand (BOD and COD), water saturation, carbonate content, organic matter, pH and parent material are particularly significant in uranium mobility. An increased soil cation-exchange capacity (CEC) will retain more uranium, while the presence of carbonates increases the mobility of uranium (14,15). Acidic soils with poor adsorption characteristics, alkaline soils with carbonate minerals, and the presence of chelates such as citric acid are known to increase uranium mobility and plant accumulation (16).

In water, the mobility of uranium depends on factors such as pH, redox status and the concentration of dissolved ions. Even as a metal, it shows high solubility, particularly in oxidizing, alkaline and carbonate-rich waters. It is also soluble in strongly acid waters. Under these conditions the main species in solution is the uranyl cation (UO2²⁺) (17). In neutral to alkaline oxidizing conditions, soluble uranyl-carbonate complexes, e.g. UO2(CO3)2²⁻, are most common (18,19). Under reducing conditions, insoluble UO2 can be observed. Uranium is known to be sensitive to redox conditions and occurs in the U⁶⁺ state under oxic conditions. It is found as a complex ion in solution, mostly with carbonate ligands as well as phosphate, fluoride and sulphates. In groundwater, uranium occurs at higher concentrations than in surface water, on account of the large solid/solution ratios in aquifers and the greater influence of water-rock interactions (20). The affinity of uranium for organic matter and phosphates is marked; hence the importance of uranium contamination in agricultural fertilizers.

Takeda et al. (21) showed the accumulation of uranium in a cultivated Andisol in Japan subjected to long-term application of fertilizers. The large quantities of uranium contained in phosphate fertilizers were expected to enrich agricultural soils in uranium after application of the fertilizer (22-24). Guzman et al. (25) noted that the uranium concentration in the phosphate fertilizers used on Mexican lands ranges between 70 and 200 mg/kg. They were of the view that it is capable of generating toxic effects at all trophic levels if 2.1 mg/L is surpassed in soil and 20 μg/L in the water supply (26-28).

Under flooded conditions such as those observed in rice fields, virtually all water-soluble uranium in fertilizers was absorbed by Andisols within 24 hours of application (29). Soil type, fertilizer quality and the application rate are among the factors that influence the accumulation of uranium in surface soils. In view of the fact that about 90% of the uranium input to the field was attributable to superphosphate,
Takeda et al. (21) estimated that the annual input of uranium from the application of fertilizer material was 3.0 mg/(m²·y). Ahmed and El-Arabi (30) carried out a study on the natural radioactivity in farm soil and phosphate fertilizer in Qena, Upper Egypt. Since phosphate fertilizers are used extensively in agriculture, uranium concentrations in phosphate fertilizers obtained from different parts of the world are most useful. This is particularly important for those developing countries of the tropical belt where laws pertaining to the radio- and chemo-toxicity of the metals in fertilizers may not be rigidly enforced. Further, in the tropical countries, in view of the high rainfall and leaching, the dissolution of metals may be intense, and their geochemical distributions are such that there may be a great impact on the environment and the food chain.

It has been shown that the concentration of uranium correlates with the P2O5 concentration of fertilizers (31,32). It was noted that the ²³²Th series contributes only in a minor way to the radioactivity in phosphates compared to the uranium series (33,34). The naturally occurring ⁴⁰K is also known to be present in soils and phosphate fertilizers. Figure 4 shows the distribution of ²²⁶Ra, ²³²Th and ⁴⁰K in phosphate fertilizers, farm soils and Nile island soils in Upper Egypt (30). A study by Makweba and Holm (35) in Arusha, Tanzania, had shown that superphosphate, triple superphosphate and phosphogypsum had activity concentrations as high as 400 Bq/kg. Phosphate fertilizers from Pakistan had ²³⁸U activity concentrations of 799 Bq/kg, while that from Jordan was 4.28 Bq/kg (36). Further, the activity concentration of Ra in the SSP fertilizer was found to be 1043 Bq/kg, which is significantly higher than normal background values. The uranium concentrations in Brazilian phosphate fertilizers ranged from 5.17 to 54.3 mg/kg and were in good agreement with the results reported for similar fertilizers in other countries (Table 6).

The use of phosphate fertilizers was also considered a uranium enrichment factor in soils and groundwater (37). After investigating the effect of decades-long application of uranium-rich fertilizer on the uranium concentration of irrigation drainage, Zielinski et al. (38) showed that there was a minimal impact of fertilizer-derived U compared to natural uranium leached from local soils.

Heavy metals entering the food chain from fertilizers

As mentioned earlier, Cd has been intensively studied for its impact on the environment and human health. Plants are known to show highly variable capacities to absorb and translocate metals from vegetative tissues to grain and subsequently to the human body. Further, this is also of great importance to grazing animals, which may quite easily ingest Cd and other heavy metals from a phosphate fertilizer source.
Phosphate fertilizers contain toxic elements such as Cd, U, Hg, Pb, Fe, Mo, Ra, rare earth elements and Cr, among others, and these tend to accumulate in agricultural soils over many years. Many experiments have been carried out on the labile nature of toxic elements, notably Cd. Cd accumulation poses a threat of contamination of agricultural soil from the more soluble phosphate fertilizers such as triple superphosphate. Because the metal leaches out slowly, bioaccumulation is considered by some workers to be not too intensive, though other experiments suggest a significant accumulation of the metal in plant tissues. Soil conditions, notably pH, and the plant varieties, however, play a major role in the bioaccumulation of cadmium.

The accumulation of heavy metals in some vegetables after phosphate fertilizer application was studied by Oyedel et al. (39). They showed that the Cd, Pb and Hg contents of the soils had increased significantly with the addition of fertilizer, by 14-60% over the control soil. Root and shoot accumulation of the heavy metals by the plants had also increased after fertilizer application, with Cd and Pb being particularly high. Among the metals, Cd showed the highest transfer ratio from soil to plant tissues (Figure 5). Guzman et al. (40), in their study of the contamination of corn-growing areas due to intensive fertilizer application in the high plain of Mexico, had pointed out that the occurrence of phosphate was approximately 100 times greater in the agricultural areas compared to the non-agricultural areas. They quantified the entry of phosphate and uranium into the vadose zone, where the quantity of phosphate was 443 g/kg and that of uranium was 198 mg/kg. They observed that, of the total phosphate, a fraction of 15-20% was assimilated by crops while the rest remained at the vadose zone-water interface.

Mendes et al. (41) studied the bioavailability of cadmium and lead in a soil amended with phosphorus fertilizers in Brazil (Table 7). In spite of the relatively high concentration of Pb in the fertilizers, this element was not detected in the shoots of the velvet bean plants. They attributed this result probably to the low Pb translocation in plants and its preferential accumulation in roots. However, the low availability of this metal in alkaline soils such as the one used in the work of Mendes et al. (41), along with the low solubility of Pb phosphates (42), seemed to be a significant factor. The application of agronomic rates of the phosphate fertilizers may not increase the Pb concentration above the levels naturally found in soils. They emphasized, however, that monitoring Pb uptake by plants needs to be done in the long term, since Pb availability can increase due to chemical alterations in the soil, particularly in soils with low pH.

Plant uptake of heavy metal contaminants in phosphate fertilizers

As discussed by Mortvedt and Beaton (43), plant species differ considerably in their ability to take up Cd. Leafy vegetables absorb more Cd than grasses, and only 12-18% of the Cd in cereal plants was translocated into the grain. However, soil application of TSP containing Cd resulted in increased Cd concentrations in both cereal grains and the edible portions of vegetables. Top-dressing pastures with TSP also resulted in increased Cd contents of pasture species, especially subterranean clover (Trifolium subterraneum L.) (44).
Reuss et al. (45) had also found greater Cd uptake by radish (Raphanus sativus L.), lettuce (Lactuca sativa L.) and peas from soil applications of TSP containing 870 mg Cd/kg P than from Ca(H2PO4)2, which is the main P compound in TSP. In plants, the uptake of Cr, Ni and Pb was quite variable and was not directly related to their concentrations in P fertilizers (46). It should, however, be mentioned that there are significant differences among plant species in their ability to take up Cd and other heavy metals.

Mortvedt and Beaton (43) report that the average weekly per capita Cd intake in the USA was estimated at about 100 µg, compared to the maximum weekly Cd intake of 400-500 µg approved by the World Health Organization (47). The estimated per capita weekly Cd intake in Australia was 125-225 µg (48). While Cd uptake by crops might be somewhat higher on P-fertilized acid soils, doubt has been expressed about the weekly Cd intake by humans increasing significantly.

Health aspects

As shown in the model depicted earlier, the geochemical pathways of metals originating from the source phosphate rocks and the phosphate fertilizers produced from them ultimately lead into human tissues, resulting in disease thereafter. Even though some studies have shown that there is only a low entry of these hazardous metals into human tissues from soils and plants fed with phosphate fertilizers, the cumulative effects over a long period of time are certainly a matter of concern. This is particularly so when low-quality phosphate fertilizers (generally known to contain an array of trace metals) are applied over several years. Cd, U, Hg and Pb have been studied extensively for their health effects.

Even though uranium is rather abundant in the environment, it has no known metabolic functions in animals and hence it is regarded as non-essential (49). The absorption of uranium from the gastrointestinal tract depends upon the solubility of the uranium compound (49), previous food consumption (50,51), and the concomitant administration of oxidizing agents such as the Fe³⁺ ion and quinhydrone. The average human gastrointestinal absorption of uranium is 1-2% (52). After ingestion, uranium is rapidly taken up into the bloodstream (51), where it becomes associated mainly with red cells (53). A non-diffusible uranyl-albumin complex is known to form in equilibrium with a diffusible ionic uranyl hydrogen carbonate complex (UO2HCO3⁺) in the plasma (54). The uranyl compounds are known to show a high affinity for phosphate, carboxyl and hydroxyl groups and therefore combine readily with proteins and nucleotides to form stable complexes. Removal of uranium from the bloodstream takes place rapidly, and it accumulates in the kidneys and skeleton. The latter is the major site of uranium accumulation, the uranyl ion replacing calcium in the hydroxyapatite complex of bone crystals. Chemically, the main effect of uranium in humans is nephritis (55).
The nephrotoxicity of uranium has been studied by Kurttio et al. (56), who measured uranium concentrations in drinking water and urine in 325 persons who had drilled wells for drinking water. They observed that the median uranium concentration in drinking water was 2 µg/L and in urine 13 ng/mmol creatinine, resulting in a median daily intake of 39 µg. They concluded that uranium exposure is weakly associated with altered proximal tubular function, suggesting that even low uranium concentrations in drinking water can cause nephrotoxic effects. Figure 6 shows the correlation between the uranium concentration in drinking water and that in urine.

The effect of Cd on human health has been studied intensively, particularly after the discovery of the Itai-Itai disease in Japan. Cadmium has no particular physiological function within the human body and, as shown in Figure 7, cadmium poisoning can lead to kidney, bone and pulmonary damage. There are three main routes of cadmium absorption into the human body, namely gastrointestinal, pulmonary and dermal. The uptake of Cd through the human gastrointestinal system is about 5% of the total amount ingested, depending on the exact dose and nutritional composition (57). It has been reported that an average German citizen has a daily intake of 3.0-35 µg of Cd, 95% of which comes from food and drink. Of particular importance was the fact that people with low iron stores showed a 6% higher uptake of Cd than those with normal iron contents. This may have an important impact on anaemic people, notably in the developing countries, exposed to excessive cadmium ingestion. Within the body, once taken up by the blood, most of the Cd is bound to proteins such as albumin and metallothionein (Figure 8). The main organ for long-term Cd accumulation is the kidney, where the half-life of Cd is about 10 years. Extensive accumulation of Cd in the kidney results in tubular cell necrosis. Accordingly, the blood concentration of Cd serves as a reliable indicator of recent exposure, while the urinary concentration reflects past exposure, body burden and renal accumulation. There is a correlation between urinary Cd excretion and the degree of Cd-induced kidney damage: a urinary excretion of 2.5 µg Cd per g of creatinine reflects a renal tubular damage degree of 4%. Cadmium poisoning and bone damage are well known, the Itai-Itai disease being a good example. Several studies have concluded that environmental exposure to Cd can cause skeletal demineralization. Even though the exact manner in which Cd affects bone mineralization is not known, there appears to be a direct influence on osteoblast and osteoclast function via renal dysfunction.

Metals in phosphate fertilizers used in Sri Lanka

Phosphate fertilizers are widely used in agricultural activities in Sri Lanka to supply crops with adequate amounts of P for growth and development. Mainly two types of phosphate fertilizers are available in the Sri Lankan market. The Eppawala rock phosphate produced in Sri Lanka has very low solubility and hence is used only for long-term crops such as tea, rubber and coconut. The other variety is imported granular TSP, which is used mainly for seasonal crops such as rice and vegetables.
Chandrajith et al. (58) described the heavy metals and the activity concentrations of radionuclides such as ⁴⁰K, ²²⁶Ra and ²³²Th in rice field soils and commonly used fertilizers in Sri Lanka. Table 8 shows the heavy metal contents of the TSP available in the Sri Lankan market. The results indicate that the trace element levels vary widely in the TSP available in the local market, probably depending on the country of origin. However, identification of the origin of the fertilizers is not possible with the samples collected from the market. In some samples extremely high amounts of uranium were recorded. It is also noted that in some cases the trace metal contents exceed the standard values recommended by the Sri Lanka Standards Institute. The results obtained from their study indicated that the U content in TSP collected from the local market varies from 5.8 to 364 mg/kg. However, the U content of rice field soil samples was below 7 mg/kg, with an average of 3.6 mg/kg. In Sri Lanka, farmers use TSP twice a year (three times in some cases) for their rice cultivations, and the average amount of TSP applied to rice fields is 85 kg/ha per season. Similarly, high ⁴⁰K levels were noted in both rice paddy soils and the TSP available in the local market.

The activity values of ⁴⁰K in rice paddy soils collected from Anuradhapura, Giradurukotte and Kandy ranged from 542 to 680, 308 to 637 and 296 to 846 Bq/kg, respectively. These values are greater than the typical world average value of 370 Bq/kg (UNSCEAR 1993) (12). The radioactivity of ⁴⁰K at two reference sites (undisturbed natural forests) from the dry zone and the wet zone was 258 and 138 Bq/kg, respectively; these ⁴⁰K values are lower than those of paddy soils, which are highly modified by anthropogenic activities such as puddling, submerging and artificial fertilizer applications. The activity values of ⁴⁰K obtained from different fertilizers collected from the Sri Lankan market were 103-15,606 Bq/kg.

The effect of fertilizer applications to rice fields in Sri Lanka should be investigated fully in terms of the accumulation of metals in soils and their transfer into drinking water and finally into the human body, particularly because of the widespread chronic kidney disease of uncertain aetiology in the dry zone region.

Setting regulations for heavy metals in fertilizers

Curtis and Smith (59), who developed a model for setting up regulations for heavy metals in fertilizers, stated that a mathematical model for fertilizer application (a fertilizer risk model) has three principal components:

i) a description of metal accumulation in soil;
ii) a description of exposure pathways to humans;
iii) a description of the toxicity risk associated with exposure.

Each part of the model represents an approximation of what might happen in an actual agricultural setting. The selection of model parameters is always intended to overestimate the actual potential risk to human health in order to provide maximum health protection. An acceptable risk level is given, and a corresponding maximum 'safe' threshold concentration of a heavy metal in a fertilizer is assigned. Therefore, concentrations higher than this threshold will result in risk levels higher than those deemed acceptable.
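A toy version of the three-component risk model outlined above can make the back-calculation of a 'safe' threshold concrete. Every parameter below is a hypothetical placeholder, not a value from Curtis and Smith's model; the sketch merely illustrates the chain soil accumulation, then exposure, then tolerable intake, run in reverse.

```python
# A toy fertilizer risk model: back-calculate the maximum metal
# concentration in fertilizer that keeps daily intake below a tolerable
# level. All parameter values are hypothetical placeholders.

APPLICATION = 85 * 2      # kg fertilizer per ha per year (two seasons)
SOIL_MASS = 2.0e6         # kg of plow-layer soil per ha
YEARS = 50                # accumulation horizon; no soil loss (worst case)
TRANSFER_RATIO = 0.1      # (mg/kg plant) per (mg/kg soil), crop uptake
DIET = 0.4                # kg of the crop eaten per person per day
TOLERABLE_INTAKE = 60.0   # ug metal per person per day deemed acceptable

def max_safe_fertilizer_conc():
    """Highest metal concentration (mg/kg fertilizer) keeping intake safe."""
    # 1 mg/kg in fertilizer -> soil concentration after YEARS of application
    soil_per_unit = APPLICATION * YEARS / SOIL_MASS       # mg/kg soil
    # -> crop concentration -> daily intake (ug/day) per unit fertilizer conc
    intake_per_unit = soil_per_unit * TRANSFER_RATIO * DIET * 1000
    return TOLERABLE_INTAKE / intake_per_unit

print(f"max 'safe' concentration: {max_safe_fertilizer_conc():.0f} mg/kg")
```

With these placeholder parameters the toy model returns a threshold of roughly 350 mg/kg; the point is only that each of the three model components enters the back-calculation, with deliberately conservative assumptions.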
Figure and table captions:

Figure 1: Relationship of phosphate rock and phosphate fertilizers (triple superphosphate, wet-process phosphoric acid (WPA), diammonium phosphate (DAP), monoammonium phosphate (MAP)) (4)
Figure 4: Distribution of ²²⁶Ra, ²³²Th and ⁴⁰K in phosphate fertilizers, farm soils and Nile island soils (30)
Figure 5: The magnitude of shoot-to-soil ratios for the heavy metals (Cd, Pb, Hg) and phosphorus (39)
Figure 6: Correlation of the uranium concentration in drinking water with that in urine (caption garbled in the source; description per the text) (56)
Figure 7: Effects of cadmium on several organ systems (57)
Figure 8: Metabolism, storage and excretion of cadmium in the human body (57)
Table 3: Concentrations of hazardous elements in phosphate rocks (mg/kg) (62)
Table 4: Potentially hazardous trace element abundances in sedimentary phosphorites, sedimentary phosphate rock, and average shale (4)
Table 5: Phosphate and cadmium contents of sedimentary phosphate rocks (4)
Table 6: Comparison of the intervals of uranium concentrations in phosphate fertilizers produced in different countries (63)
Table 7: P and Cd contents in the phosphate fertilizers used and metal quantities incorporated at the highest rate (41)
Table 8: Analytical results of triple superphosphates collected from different locations in Sri Lanka (in mg/kg) (58)
Investigating algorithm-oriented flexibility and structure-informed flexibility in mathematics learning

Procedural flexibility is a crucial element of deep procedural knowledge involving the selection of "the most appropriate strategy." However, there exists unavoidable nuance when one attempts to define "the most appropriate" strategy, which leads to two types of procedural flexibility: (a) structure-informed flexibility (Struct-Flex), which refers to a preference for situationally appropriate strategies; and (b) algorithm-oriented flexibility (Algo-Flex), which refers to a preference for standard algorithms. The current study investigated the distinction between these two flexibility types and tested potential predictors for both types of flexibility. The study data were collected from 412 Grade 9 through 12 students from 19 math classrooms in the southeastern region of the U.S. Chi-squared tests and regression models revealed that: (a) problem type was strongly correlated with and predictive of a preference for using algorithms in algebra; (b) students' current high school math course grade was associated with a preference for standard algorithms; (c) students' current high school math course level and their classroom assignment were related to a preference for situationally appropriate strategies. The results suggest that, in this sample, Algo-Flex was as prevalent as Struct-Flex, and the two flexibility types had different predictors. This paper can advance our practical knowledge of U.S. students' strategy choices. The identified predictors of Algo-Flex and Struct-Flex can also inform the design of relevant educational interventions. Future studies may explore longitudinal datasets to untangle the relationship between course level, age, and flexibility, as well as examine flexibility in Asia in comparison to Europe and North America.

Introduction

Flexibility in mathematical problem-solving is an important component of mathematical proficiency. Flexibility, or procedural flexibility, is often considered to have two core components: knowledge of multiple solution strategies and the ability to apply "the most appropriate" strategy for a given problem or problem-solving circumstance (e.g., Star, 2005). The identification of a strategy as "the most appropriate" has traditionally been determined by disciplinary experts, based on the extent to which there is a close match between the affordances of a given strategy and the structural features of the problems for which it can be used, such that the strategy offers an efficient means for reaching a correct solution. Here we use the phrase "situationally appropriate" to highlight the connection between strategy appropriateness and a particular problem-solving circumstance and problem structure. A "standard" strategy (or standard algorithm), meanwhile, refers to a generally applicable solution procedure that is commonly (and often explicitly) taught in many textbooks (Star & Seifert, 2006). For example, for the linear equation 3(x + 1) = 15, a standard algorithm for solving this equation involves distributing the 3 first. At the same time, a more situationally appropriate strategy would begin by dividing both sides of the equation by 3. A flexible solver not only knows both strategies but also may elect to use the latter one for this problem. Both conceptual and procedural knowledge independently support procedural flexibility (Schneider et al., 2011).
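To make the two strategies for 3(x + 1) = 15 concrete, the following minimal sketch encodes each as a sequence of solution-preserving transformations and counts the steps. The encoding and step counts are illustrative and are not taken from any study's coding scheme.

```python
# Each step is (description, lhs_function, rhs_value); a step is sound if
# the rewritten equation still balances at the true solution x = 4.
SOLUTION = 4

standard = [  # standard algorithm: distribute first
    ("original equation",          lambda x: 3 * (x + 1), 15),
    ("distribute the 3",           lambda x: 3 * x + 3,   15),
    ("subtract 3 from both sides", lambda x: 3 * x,       12),
    ("divide both sides by 3",     lambda x: x,            4),
]

shortcut = [  # situationally appropriate: divide both sides by 3 first
    ("original equation",          lambda x: 3 * (x + 1), 15),
    ("divide both sides by 3",     lambda x: x + 1,        5),
    ("subtract 1 from both sides", lambda x: x,            4),
]

for name, steps in [("standard", standard), ("shortcut", shortcut)]:
    assert all(lhs(SOLUTION) == rhs for _, lhs, rhs in steps)
    print(f"{name}: {len(steps) - 1} transformations")
```

The shortcut needs one fewer transformation here, which is one (objective) sense in which it is "more appropriate" for this particular problem structure.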
This capacity to implement methods accurately, efficiently, and flexibly is pivotal to achieving mathematical fluency (National Council of Teachers of Mathematics [NCTM], 2014b). Prior studies on procedural flexibility have explored theoretical aspects of this construct (e.g., Star, 2005), methods for assessing flexibility (e.g., Xu et al., 2017), and instructional interventions for promoting procedural flexibility (e.g., Star et al., 2015). Concerning the assessment of flexibility, Star et al. (2022) identified three criteria for flexibility: (a) demonstrated use of a standard algorithm; (b) displayed competence in the alternative strategies most suited to a given problem-solving situation; and (c) verified recognition of circumstance-specific (situationally appropriate) methods as the best for solving a given problem. They also identified potential flexibility as an intermediate step toward the achievement of flexibility (where a student has achieved two of the three criteria above), as well as spontaneous flexibility, where a solver uses a situationally appropriate strategy on the first attempt at a problem.

Flexibility in arithmetic, linear algebra, calculus, and other domains of mathematics has been investigated in past studies (Maciejewski & Star, 2016; Maciejewski & Star, 2019; Shaw et al., 2020; Star & Rittle-Johnson, 2008). Additional research studies have investigated constructs closely related to flexibility, including strategy choice and adaptability (DeCaro, 2016; Liu et al., 2018; Star et al., 2015; Star et al., 2022; Star & Rittle-Johnson, 2009; Wang et al., 2019). In addition, cross-national studies of flexibility (e.g., Hästö et al., 2019) have found that flexibility, particularly students' strategy choices, differs across countries and that the notion of flexibility may have nuanced cultural influences. For example, Star et al. (2022) summarized how Finnish, Swedish, and Spanish students differed in the diversity of their strategy usage and reasoning, as well as how curriculum differences among these three countries could influence the formation of procedural flexibility.

Concerning students' selection of the most appropriate strategies, prior research suggests that students do not consistently elect to use a situationally appropriate strategy, even when they have knowledge of multiple strategies. For example, Newton et al. (2010) found an inconsistency between students' knowledge and their use of the strategies most suited to problem contexts. Other studies have also documented relatively low spontaneous use of situationally appropriate strategies (e.g., Xu et al., 2017). Furthermore, students with knowledge of multiple strategies do not consistently indicate a preference for situationally appropriate strategies when they are asked to identify which of several given strategies is the "best" (Star et al., 2015; Star & Rittle-Johnson, 2008; Wang et al., 2019). However, prior research has also begun to grapple with the unavoidable nuance in even defining which strategy is the most appropriate for a given problem, recognizing that students (and experts) might disagree about which strategy is best for a given problem (Star & Madnani, 2004). For the example linear equation 3(x + 1) = 15, some students and experts might argue that the standard algorithm (distributing first) is a better approach, perhaps because this algorithm is broadly applicable, well-practiced, and easily implemented.
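The three criteria from Star et al. (2022) summarized at the start of this section lend themselves to a simple classification rule: all three criteria met counts as flexible, two of three as potentially flexible. The sketch below is one hypothetical encoding; the field names and the Algo-Flex example at the end are our illustrative assumptions, not the instrument itself.

```python
# A hedged sketch of the three flexibility criteria as a classifier.
from dataclasses import dataclass

@dataclass
class StudentRecord:
    uses_standard_algorithm: bool      # criterion (a)
    knows_appropriate_strategy: bool   # criterion (b)
    rates_appropriate_as_best: bool    # criterion (c)

def classify_flexibility(r: StudentRecord) -> str:
    met = sum([r.uses_standard_algorithm,
               r.knows_appropriate_strategy,
               r.rates_appropriate_as_best])
    if met == 3:
        return "flexible"
    if met == 2:
        return "potentially flexible"
    return "not flexible"

# Example: knows both strategies but prefers the standard algorithm,
# i.e., the profile this paper labels algorithm-oriented (Algo-Flex).
algo_flex = StudentRecord(True, True, False)
print(classify_flexibility(algo_flex))  # -> potentially flexible
```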
Disciplinary considerations about strategy appropriateness, both those that appear more objective, such as counting the number of steps or operations to determine strategy efficiency (Star & Seifert, 2006), and those that appear more subjective, such as elegance (Hardy, 1940), may suggest that a particular strategy is more appropriate, as is arguably the case for the "divide by 3 first" strategy in the above example. But personal and individual student considerations may push strongly against this determination (e.g., Star et al., 2022; Verschaffel et al., 2009). For example, in a recent cross-cultural study on the procedural flexibility of secondary school students in Spain, Finland, and Sweden (Star et al., 2022), Spanish students demonstrated knowledge of multiple strategies but infrequent use of situationally appropriate ones. In contrast to students from Finland and Sweden, Spanish students tended to rely upon, and identify as "best," standard algorithms for equation solving, even when they exhibited knowledge of strategies that the authors proposed as being situationally appropriate. Star et al. (2022) characterized the Finnish and Swedish students as being generally more flexible than Spanish students, as a result of their greater use of situationally appropriate strategies. But the authors acknowledged that Spanish students may instead be exhibiting a different kind of flexibility, in that their preference for and reliance upon the standard algorithm may indicate both fluency and flexibility. Although this profile (students with knowledge of multiple strategies who indicate a strong preference for the standard algorithm) was present in both Liu et al. (2018) and Xu et al. (2017), in neither study did the authors consider whether the nuance in which strategies were the most appropriate might necessitate a reformulation of the construct of procedural flexibility. The current paper builds on Star et al. (2022) to consider this issue more systematically. Prior conceptions of flexibility universally rely upon the identification of expert-defined most appropriate strategies, where disciplinary considerations, such as the structural and mathematical features of a problem and which strategies are optimally matched to this structure, are used to identify the "best" strategy. Alternative conceptions of the appropriateness of strategies, such as a preference for the standard algorithm, could constitute a different form of flexibility, which points to a previously underexplored group of multiple-strategy problem-solvers. Here we explore, in the context of American high school math classrooms, the prevalence and characteristics of these two different concepts of procedural flexibility: one relying on structural features of problems that lead to expert-identified situationally appropriate strategies, and the other relying on a strong preference for the standard algorithm.

1.1 Related literature

1.1.1 Procedural flexibility in algebra problem-solving. Algebraic thinking and skills are foundational knowledge for many professions and careers (NCTM, 2014a). Prior studies on procedural flexibility in algebra problem-solving have examined its definition and measurement, experts' and students' strategy preferences, and other factors impacting flexibility, as well as interventions that may promote its development. Previously, flexibility has been defined as the ability to solve a given mathematical problem with multiple methods and to select the most appropriate strategy (e.g., Star, 2005).
The identification of an appropriate strategy in algebra can vary considerably, both from experts' and students' perspectives. In Star and Newton's (2009) study of eight school algebra experts, the experts agreed that situationally appropriate strategies could enable students to solve a given problem efficiently, could reduce strategy complexity (which has implications for speed and accuracy), and could be attentive to the problem-dependent structure. In addition to these criteria, strategy preferences can also reflect individual experiences (Verschaffel et al., 2009). Star and Madnani (2004) interviewed 23 sixth graders who were learning strategies for solving linear equations. Students identified several criteria that they used for determining which strategy was "best," including the strategy's length and complexity, solution accuracy, and execution time, as well as their own self-confidence in implementing the strategy. Similarly, Newton et al. (2010) interviewed six high school algebra students on their choices of strategies. The emergent theme from the qualitative data suggested that these students' preferences were based on their familiarity with the strategy as well as their perceptions of the strategy's understandability and efficiency. Jiang et al. (2022) also suggest that students' prior familiarity with a method could affect their strategy choice. They referred to strategy choice models (e.g., Shrager and Siegler, 1998), which state that individuals adapt their strategy choices by recalling both the costs and benefits of using a strategy from their previous experiences. When two individuals make different choices in terms of which strategy they believe is optimal, strategy choice models suggest that this may be due to either (a) the relative associative strength of the strategies, which relates both to the ease of retrieving strategies from long-term memory and the strength of the connection between a strategy and relevant problem structures; or (b) the confidence criterion associated with each strategy, which is an internal benchmark for a strategy's usefulness and correctness. Thus, there is strong empirical and theoretical support in the prior literature for differences among students and experts in the determination of which strategies are better than others. In the current study, we are particularly interested in exploring students whose flexibility results from the use of situationally appropriate strategies, as opposed to students with knowledge of multiple strategies but strong preferences for standard algorithms. Our work draws heavily on prior studies on how procedural flexibility is assessed and operationalized. A common distinction made in assessments of flexibility contrasts knowledge of strategies with the use of strategies. For example, Rittle-Johnson and Star's (2007) assessment of flexibility included three subcomponents: the ability to generate multiple strategies for two given problems, recognition of possible solution steps for two linear equations, and evaluation of unconventional solutions for two algebra problems. The generation questions related to the use of strategies, while the recognition and evaluation questions were based on knowledge of strategies. Similarly, Xu et al. (2017; see also Star et al., 2022) developed the tri-phase flexibility assessment for measuring students' flexibility.
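To make the two mechanisms in (a) and (b) concrete, the following minimal Python sketch (our own illustration; the parameter values and function names are invented, and this is not Shrager and Siegler's actual model) shows how associative strength and a confidence criterion can jointly produce a preference for a well-practiced standard algorithm:

```python
# Minimal sketch of the strategy-choice idea: strategies are retrieved in
# order of associative strength, and a strategy is executed only if its
# strength clears the solver's confidence criterion for that strategy.
# All numbers below are hypothetical.

strategies = {
    # name: (associative strength, confidence criterion)
    "standard_algorithm": (0.9, 0.3),    # heavily practiced, low bar to use
    "situational_strategy": (0.4, 0.6),  # weaker memory trace, higher bar
}

def choose_strategy(strategies):
    # Try strategies from strongest to weakest memory trace.
    for name, (strength, criterion) in sorted(
            strategies.items(), key=lambda kv: kv[1][0], reverse=True):
        if strength >= criterion:
            return name
    return None  # no strategy is trusted enough; the solver may guess

print(choose_strategy(strategies))  # -> "standard_algorithm"
```

On this toy account, even a solver who "knows" the situational strategy (strength 0.4) will rarely deploy it, because its memory trace is both weaker and held to a higher confidence bar.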
In the first phase of this assessment, students were asked to solve 12 linear equation problems accurately and efficiently in the order presented. In phase II, students were required to re-solve the same set of 12 problems, but using a strategy different from the one used in phase I. In phase III, students referred to their prior solutions in phases I and II and indicated their choice (among the strategies that they produced) as to which strategy was the best. In phase IV, Xu et al. (2017) implemented an additional strategy identification task where students were shown a set of several strategies and were asked to choose the best one from among the provided options. This assessment was validated with high internal consistency coefficients (both Cronbach's alpha > 0.88), significant criterion-related validity between strategy identification and potential flexibility (r = 0.38, p < 0.01), high composite reliability (CR = 0.998), and high convergent validity (AVE = 0.98). Prior work has also explored predictors of flexibility in algebra equation solving. For example, Star et al. (2015) collected flexibility assessment data from 39 teachers and 841 students and fitted two-level models of the relationship between students' flexibility scores and student demographic factors, student propensity factors, and teacher opportunity factors. The results showed prior knowledge as a strong predictor and gender as a correlated variable for flexibility. Star et al. (2015) also found that asking open-ended questions was common in the classrooms of teachers whose students showed high flexibility gains, though other teacher opportunity factors were not as significant. Similarly, Newton et al. (2020) assessed 66 eighth graders from five Algebra I classes in a single Midwestern suburban school. Their results indicated a significant effect of prior knowledge on students' flexibility. As another example, Hattikudur et al. (2016) addressed the predictor question as well, but in an intervention setting, using a sample of 112 undergraduate students from a single non-math course. They found that greater prior knowledge was an effective predictor of better flexibility. In addition to challenging existing conceptions of "the most appropriate strategy" and "procedural flexibility," the present study builds on these prior studies in identifying predictors of procedural flexibility and investigates how problems' structural characteristics and students' prior math knowledge relate to flexibility.

1.1.2 Structure-informed flexibility and algorithm-oriented flexibility. As noted above and consistent with past research (e.g., Star & Seifert, 2006), we make a distinction between standard algorithms and situationally appropriate strategies for solving mathematics problems. First, we define standard strategies as generalized and conventional algorithms that can be used for solving a particular class of math problems. For example, in linear equations such as 3(x + 1) = 15, a standard algorithm involves using distribution first. Alternatives to the standard algorithm can be used to solve math problems; among these possible alternative strategies, some are situationally appropriate, meaning that the features of the strategy are matched to structural features of the problem such that the situationally appropriate strategy may afford certain advantages to the solver.
These advantages may include that situationally appropriate strategies may take fewer steps to implement, may be faster to execute, may reduce the likelihood of error, and/or could be considered more elegant mathematically. Prior research has indicated that experts often show preferences for situationally appropriate strategies over standard algorithms (Star & Newton, 2009). While standard algorithms are more generally applicable and can often be used without taking actions that may be necessary as a result of particular features of a given problem, situationally appropriate strategies are applicable in a more limited range of problems and leverage the linkage between problem-specific structural features and the affordances offered by the strategy with respect to these structural features. For example, consider the equation 4(x + 2) + 3(x + 2) = 21. The standard algorithm involves distributing as a first step. This algorithm is applicable not only to this problem but to variants such as 4(x + 2) + 3(x + 1) = 21. However, a situationally appropriate strategy for the former problem involves combining the common factor x + 2 as a first step, to obtain 7(x + 2) = 21. This situationally appropriate strategy leverages a structural feature of the problem, namely the repeated x + 2 variable terms. Thinking more about the differences between standard algorithms and situationally appropriate strategies, we next consider two types of procedural flexibility among students who exhibit knowledge of both types of strategies. We distinguish between (a) the student who generally exhibits a preference for situationally appropriate strategies (we refer to this type of flexibility as structure-informed flexibility, or Struct-Flex), and (b) the student who generally exhibits a preference for the standard algorithm (we refer to this type of flexibility as algorithm-oriented flexibility, or Algo-Flex). Prior literature has only considered students who exhibit Struct-Flex as actually being flexible (e.g., Star et al., 2022). But other literature has suggested the importance of both types of flexibility. For example, in Maciejewski and Star's (2019) study, two types of students' justifications for procedural actions in problem-solving were identified. Algorithmic justifications defended a problem-solving step without attention to how and whether the step might help later in the solution, while anticipatory justifications were tailored to a particular problem and problem-solving situation. Struct-Flex students would seem to engage more in anticipatory justification, while Algo-Flex students would engage more in algorithmic justification. Similarly, Star et al. (2022) and Jiang et al. (2022) discuss the case of Spanish middle and high school students, many of whom exhibit knowledge of both standard and situationally appropriate strategies but choose to use and/or express a preference for standard algorithms. In this prior work, these Spanish students were deemed to be lacking in (structure-informed) flexibility, but here we suggest that they instead may be exhibiting Algo-Flex. Prior definitions of flexibility have placed an emphasis on expert-defined situational appropriateness. However, the choice to use the standard algorithm as one's "go-to" strategy when one knows multiple strategies is not necessarily a sign of inflexibility in mathematical problem-solving.
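The two routes for the equation 4(x + 2) + 3(x + 2) = 21 discussed above can likewise be written out; again, this side-by-side derivation is our own illustration of the strategies described in the text:

```latex
% Standard algorithm: distribute first
\[\begin{aligned}
4(x+2) + 3(x+2) &= 21 \\
4x + 8 + 3x + 6 &= 21 \\
7x + 14         &= 21 \\
7x              &= 7  \\
x               &= 1
\end{aligned}\]
% Situationally appropriate: combine the common (x+2) factor first
\[\begin{aligned}
4(x+2) + 3(x+2) &= 21 \\
7(x+2)          &= 21 \\
x + 2           &= 3  \\
x               &= 1
\end{aligned}\]
```

Note that the variant 4(x + 2) + 3(x + 1) = 21 mentioned above breaks the repeated-factor structure, so the combining shortcut no longer applies there while the standard route still does.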
In other words, choosing the standard algorithm can simply be an indication of personal preference, including the belief that the standard algorithm is the best strategy for a given problem. Thus, it is important to broaden conceptions of flexibility to include both strategy choices based on problem structure and those based on standard algorithms. At the same time, we recognize that Algo-Flex, while a possible outcome, may not always be considered a desired outcome. For curricula that prioritize recognizing and responding to problem structures, Struct-Flex may be more ideal than Algo-Flex. In such cases, understanding the predictors of each type of flexibility may help in designing relevant instructional interventions.

Research questions

The current study examines the prevalence of, features of, and relationship between structure-informed flexibility (Struct-Flex) and algorithm-oriented flexibility (Algo-Flex) in algebra and arithmetic problem-solving. We investigate whether types of problems and/or structural features of problems are associated with either type of flexibility, as well as the characteristics of students who exhibit each type of flexibility. The following research questions are explored in this study: (1) To what extent do high school students exhibit Algo-Flex or Struct-Flex on an assessment containing algebra and arithmetic tasks? (2) Does the prevalence of Algo-Flex and Struct-Flex depend upon the encountered problem type (algebra versus arithmetic)? (3) Does the prevalence of Struct-Flex and Algo-Flex among high school students depend on student characteristics such as current high school math course level (algebra vs. geometry; high vs. low) and/or current math course grade (high vs. low)? Based on the prior literature, for research question 1, we would expect rates of Struct-Flex to be relatively low. For example, in a sample of 791 middle school and high school students in Finland, Sweden, and Spain, Star et al. (2022) found that students exhibited Struct-Flex on only 4.2% of the assessed problems. It is more difficult to estimate the prevalence of Algo-Flex, as prior work did not consider this category of students to be flexible. But among 82 Spanish high school students, Star et al. (2022) found that 45.1% of students exhibited knowledge of both standard and situationally appropriate strategies on at least one problem in the assessment, yet only 6.1% of students were found to have (structure-informed) flexibility, suggesting that the remaining 39.0% of Spanish high school students in this sample had Algo-Flex. For Research Question 2, while no a priori hypothesis is made, there may be an association between mathematical domain and strategy choices. Prior studies have examined flexibility in various mathematical domains, including linear equation solving (e.g., Newton et al., 2020; Star & Rittle-Johnson, 2008; Star & Seifert, 2006), fractions (Newton et al., 2010), and calculus (Maciejewski & Star, 2016). The mathematical differences among these domains and the ages at which students encounter them are two important factors that could influence students' problem-solving accuracy and strategy choices. As for Research Question 3, we hypothesize that current high school math course level may be correlated with the prevalence of Struct-Flex.
Previous findings emphasize the effect of prior algebra knowledge on (structure-informed) flexibility (Schneider et al., 2011; Star & Seifert, 2006) and the growth in (structure-informed) flexibility as students age and gain more knowledge in math. Based on this, we would expect levels of Struct-Flex to be positively associated with students' age and high school math course level. But we have no a priori hypotheses about the relationship between Algo-Flex and student characteristics such as gender, math course grade, or current math course level.

Participants

Participants in this study were a convenience sample of 412 students from 19 math classrooms in one large high school in the Southeastern United States. As illustrated in Table 1, 163 (39.56%) students were 15-year-olds, 114 (27.67%) were 14-year-olds, 82 (19.90%) were 16-year-olds, and the remaining 53 (12.86%) were older (17 or 18 years old). Of the students, 328 (79.61%) were in the 9th or 10th grade, while 84 (20.39%) were in the 11th or 12th grade. With respect to high school math courses, 153 (37.14%) were in an Honors Geometry course, and 123 (29.85%) were in an Honors Algebra 2 course. The Geometry and AP Statistics courses had the fewest students. Looking at self-reported course grades, 304 (73.79%) of the participants indicated their belief that their course grade would be an A or B, while a much smaller group of 108 (26.21%) indicated their belief that their grade would be a C or below. The gender ratio was quite balanced, with 196 (47.57%) of students self-reporting as female and 216 (52.43%) as male. District policies did not allow for the collection of demographic information about students such as race, ethnicity, or socioeconomic status.

Measures

A modified tri-phase mathematical flexibility assessment (Xu et al., 2017) was used in this study. In the first phase (Part 1) of the assessment, participants were asked to complete five problems and show their work. Problems 1, 4, and 5 were from the domain of arithmetic, while problems 2 and 3 were from the domain of algebra. All five problems were taken from or closely adapted from those used in prior literature (e.g., Newton et al., 2010; Star et al., 2022; Star & Seifert, 2006). In addition, all five problems could be completed using standard algorithms or situationally appropriate strategies (see Table 2). For example, in problem 1, the standard algorithm involves first finding a common denominator for all fractions and then adding them. The situationally appropriate strategy takes advantage of the structural features of the problem, particularly that both the first and third terms, as well as the second and fourth terms, have the same denominator and add to 2 and 1, respectively. Students were given 15 min to complete Part 1 of the assessment. When instructed to do so, students moved to Part 2 of the assessment, where they were asked to generate an additional strategy for each of the same five problems. (During Part 2 of the assessment, students were instructed that they could not look back at their Part 1 work.) Students were given 15 min to complete Part 2. The procedures for Parts 1 and 2 of the assessment in the present study were identical to those used by Xu et al. (2017), although some of the math problems used here differed from those used by Xu et al. (2017).
After completing Parts 1 and 2 of the modified tri-phase assessment and when instructed to do so, students moved to Part 3 of the assessment, which was newly designed for the present study. In Part 3, students were presented with a list of three correct methods labeled A, B, and C for each problem (see Appendix). One provided method was the standard algorithm, and one was the situationally appropriate strategy for that problem. The third provided method was either a less efficient variant of the standard algorithm or a less efficient variant of the situationally appropriate strategy. Students were asked to evaluate the similarity of each method as compared to the method they had used in Part 1 for that problem, on a five-point Likert scale ranging from Very Different to Very Similar. In addition, students were asked to rank the goodness of each method on a five-point Likert scale ranging from Not Very Good to Very Good.

Coding

Two graduate students in mathematics education independently coded all of the students' strategies, as described below. All disagreements were subsequently resolved. Table 2 lists examples of standard and situationally appropriate strategies. For problem 1, the standard algorithm involves first using a common denominator of 9 (or 27) for the four fractions. As noted above, the situationally appropriate strategy adds the first and third terms, and the second and fourth terms, as these pairs of fractions add to 2 and 1 respectively. For problem 2, the standard algorithm uses distribution as the first step, while the situationally appropriate strategy divides both sides by 3 as a first step. In problem 3, the standard algorithm again uses distribution first, while the situationally appropriate strategy combines the "like" x + 2 terms. In problem 4, the standard algorithm involved multiplying the two pairs of fractions in the order provided and then adding the results, while the situationally appropriate strategy involved factoring out the common 13/10 term, leaving two fractions that easily add to 1. In problem 5, the standard algorithm involved adding terms from left to right, while the situationally appropriate strategy leverages the fact that, by commuting terms, the easy-to-compute pairs 146 − 46 and 12 + 88 can be obtained. Students' strategies were coded into one of the following categories: situationally appropriate, standard algorithm, blank, and other, where "other" was a broad category that generally included incomplete, incoherent, or idiosyncratic strategies that were judged by coders to be substantially less efficient than the standard algorithm and the situationally appropriate strategy. Note that the strategy categorization in this study was expert-defined and did not focus on the correctness of solutions. Of particular interest in the present analysis is students' use of situationally appropriate and standard strategies. Recall that students were asked to solve each problem twice, once in Part 1 and again in Part 2. Also, in Part 3 students were shown three strategies for each problem (the standard algorithm, a situationally appropriate strategy, and a third strategy that was judged by experts to be either worse than the standard algorithm or an improvement on the standard algorithm but not as good as the situationally appropriate strategy) and asked to provide a similarity rating and a goodness rating.
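As a concrete illustration of the problem 5 contrast, an arithmetic expression consistent with the description above can be evaluated both ways. The exact problem statement is not reproduced in the text, so this arrangement of the named terms is our reconstruction:

```latex
% Standard algorithm: add terms from left to right
\[146 + 12 - 46 + 88 = 158 - 46 + 88 = 112 + 88 = 200\]
% Situationally appropriate: commute terms into easy pairs
\[146 + 12 - 46 + 88 = (146 - 46) + (12 + 88) = 100 + 100 = 200\]
```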
In cases where a student exhibited knowledge of both the standard algorithm and a situationally appropriate strategy on a given problem in Parts 1 and 2, a third coder (also a graduate student in mathematics education) assigned the student a Struct-Flex score (0 or 1) and an Algo-Flex score (0 or 1) for that problem, as described below (see Table 3). Students were assigned a Struct-Flex score of 1 (and an Algo-Flex score of 0) to indicate knowledge of both the standard algorithm and the situationally appropriate strategy for that problem, along with a preference for the situationally appropriate strategy. Students were assigned an Algo-Flex score of 1 (and a Struct-Flex score of 0) to indicate knowledge of both the standard algorithm and the situationally appropriate strategy for that problem, along with a preference for the standard algorithm. Knowledge of both the standard algorithm and the situationally appropriate strategy was indicated by the use of these strategies in Parts 1 and 2. In addition, strategy knowledge was confirmed by examining whether the student correctly identified which of the three provided strategies in Part 3 was most similar to the student's Part 1 strategy for that problem, by using a similarity rating of 4 or 5 (Similar or Very Similar). Strategy preference was determined as follows. A strategy was deemed to be the student's preferred strategy for a problem if the strategy met the following two conditions: (a) the strategy was used by the student in Part 1 (i.e., on their first attempt at the problem, as has been done in prior studies such as Star et al., 2022); and (b) the Part 1 strategy was deemed by the student to be the best strategy in the Part 3 goodness ratings, by virtue of receiving the highest goodness rating score. These problem-specific Struct-Flex and Algo-Flex scores were used to generate the following additional codes: (1) a total Struct-Flex score (0 to 5) and a total Algo-Flex score (0 to 5) for the collection of five problems, computed from the total of the five problem-specific Struct-Flex and Algo-Flex scores; (2) looking at the three arithmetic problems (problems 1, 4, and 5), an arithmetic Struct-Flex score (0 to 3) and an arithmetic Algo-Flex score (0 to 3), computed from the sum of the Struct-Flex and Algo-Flex scores for problems 1, 4, and 5; (3) looking at the two algebra problems (problems 2 and 3), an algebra Struct-Flex score (0 to 2) and an algebra Algo-Flex score (0 to 2), computed from the sum of the Struct-Flex and Algo-Flex scores for problems 2 and 3. Finally, each student was assigned a global flexibility category (global Algo-Flex or global Struct-Flex) indicating whether their flexibility on the assessment as a whole was more consistent with Algo-Flex or Struct-Flex, as determined by whether the total Algo-Flex score or the total Struct-Flex score was greater. In cases where students' Struct-Flex and Algo-Flex scores for the whole assessment were equal, they were assigned both global flexibility categories, to indicate that they exhibited Algo-Flex and Struct-Flex in equal amounts overall. The problem-level decision rule is sketched below.
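The sketch that follows summarizes the problem-level rule; the function and field names are our own, and only the decision logic follows the coding scheme just described:

```python
def flexibility_scores(used_standard, used_situational, similarity_ok, preferred):
    """Return (struct_flex, algo_flex) for one student on one problem.

    used_standard / used_situational: whether each strategy appeared in the
        student's Part 1 and Part 2 work
    similarity_ok: whether a Part 3 similarity rating of 4 or 5 confirmed
        recognition of the student's own Part 1 strategy
    preferred: the strategy that was both used in Part 1 and given the
        highest Part 3 goodness rating ("standard", "situational", or None)
    """
    knows_both = used_standard and used_situational and similarity_ok
    if not knows_both:
        return 0, 0
    if preferred == "situational":
        return 1, 0   # Struct-Flex for this problem
    if preferred == "standard":
        return 0, 1   # Algo-Flex for this problem
    return 0, 0       # knowledge of both, but no clear preference

# A student who used both strategies and rated the situationally
# appropriate one best would score:
print(flexibility_scores(True, True, True, "situational"))  # (1, 0)
```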
For example, consider a student who received a Struct-Flex score of 1 for each of problems 1, 2, and 3, and an Algo-Flex score of 1 for each of problems 4 and 5. This student would have a total Struct-Flex score of 3 (from problems 1, 2, and 3), a total Algo-Flex score of 2 (from problems 4 and 5), an arithmetic Struct-Flex score of 1 (from problem 1), an arithmetic Algo-Flex score of 2 (from problems 4 and 5), an algebra Struct-Flex score of 2 (from problems 2 and 3), and an algebra Algo-Flex score of 0. The student would receive a global flexibility category of Struct-Flex, because the total Struct-Flex score of 3 is greater than the total Algo-Flex score of 2.

Method of analysis

Research Question 1 asked about the extent to which high school students exhibited Struct-Flex or Algo-Flex. To answer this question, we looked at the percentage of students showing Algo-Flex and Struct-Flex by problem number and problem type (arithmetic vs. algebra). Descriptive statistics were first used to identify the numbers of students demonstrating each flexibility type on any given problem. Then, we calculated ratios of the number of students with global Algo-Flex to the number with global Struct-Flex for the assessment as a whole. We also compared the mean total Algo-Flex score with the mean total Struct-Flex score to examine the prevalence of each flexibility type in the sample. Research Question 2 probed the relationship between problem characteristics and flexibility types. For problem type, we started with a contingency table and conducted Pearson's Chi-squared tests to test our null hypothesis that there is no difference in flexibility type between the problem types. We then proceeded to conduct simple logistic regressions on the problem types. We chose the simple logistic regression model because both the independent variable (problem type) and the dependent variable (flexibility type) are categorical. Specifically, we used the following population model: logit(Pr(Algo-Flex = 1)) = β0 + β1 × Algebra, where Algo-Flex indicates that a student exhibits Algo-Flex for the problem and Algebra indicates that the problem type is algebra.
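For readers who want to mirror this analysis, a minimal version of the fit might look as follows; the data frame here is a hypothetical fragment, and the column names are ours, not the authors':

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per (student, problem) observation where a preference was shown:
# algo_flex = 1 for Algo-Flex, 0 for Struct-Flex; algebra = 1 for algebra problems.
df = pd.DataFrame({
    "algo_flex": [1, 1, 0, 0, 1, 0, 0, 1],
    "algebra":   [1, 1, 1, 0, 0, 0, 0, 1],
})

# Simple logistic regression mirroring the population model above.
model = smf.logit("algo_flex ~ algebra", data=df).fit(disp=False)
print(model.params)  # Intercept = beta_0; algebra = beta_1 (a log-odds ratio)
```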
Results

In this section, we begin by providing descriptive information about students' strategy choices on all problems in the assessment, their similarity rankings, and their goodness ratings. Then, we look specifically at the subset of students who showed knowledge of both standard and situationally appropriate strategies to address students' Algo-Flex and Struct-Flex performance on the assessment as a whole, on the arithmetic problems, and on the algebra problems. Next, we answer the proposed research questions on the prevalence of, the features of, and the relationship between Algo-Flex and Struct-Flex in algebra and arithmetic problem-solving.

Descriptives

3.1.1 Students' strategy choices. Tables 4 and 5 below show the strategy distribution for the problems in Parts 1 and 2 of the assessment, for the entire sample (n = 412). With respect to the percentage of students who used each type of strategy, our results indicated the following. In Part 1 of the assessment, problem 5 had the highest percentage of situationally appropriate strategy users, where 176 (42.72%) students used the situationally appropriate strategy, while problems 3 and 4 had the lowest percentage of situationally appropriate strategy users (<4%). With respect to standard algorithm users in Part 1, problem 3 had the highest percentage (89%), while problem 4 had the lowest percentage (43%). In Part 2, problem 5 continued to have the highest percentage of situationally appropriate strategy users (54%), while problem 4 continued to have the lowest percentage of situationally appropriate strategy users (9%). In Part 2, problem 3 still had the highest percentage of standard algorithm users (30.10%), and problem 5 had the lowest percentage of standard algorithm users (13.59%). In general, there were more standard algorithm users on all problems as compared to users of other strategies. In Part 2, more students used the situationally appropriate strategy for problem 2 (173 (41.99%)) and problem 5 (223 (54.13%)), while problem 1 (161 (39.08%)), problem 3 (189 (45.87%)), and problem 4 (310 (75.24%)) had more students leaving the problem blank or adopting other strategies. Looking separately at the two problem types, a majority of students used the standard strategy on the algebra problems (problem 2: 87.86%; problem 3: 89.56%) in Part 1, while students did not show a strong inclination toward either strategy on the arithmetic problems (problems 1, 4, and 5) in both Parts 1 and 2. Table 5 shows the relationship between students' strategies in Parts 1 and 2. In Part 1, there were 389 instances of students using situationally appropriate strategies and 1276 instances of students using standard algorithms, across all five problems. Among the 389 instances where the situationally appropriate strategy was used in Part 1, in 123 (31.62%) cases, students continued to use situationally appropriate strategies in Part 2, while in 160 (41.13%) cases, students switched to using the standard algorithm. Among the 1276 instances of students using standard algorithms across all five problems, in 269 (21.08%) cases, students continued to use standard algorithms in Part 2, while in 488 (38.24%) cases, students switched to using situationally appropriate strategies. In sum, across all five problems, switching between strategy types from Part 1 to Part 2 was common in both directions.

3.1.2 Similarity ratings. In Part 3 of the assessment, students were shown three strategies for each problem and asked to indicate the degree of similarity between each given strategy and the strategy that they had used for that problem in Part 1. Overall, in determining the similarity between their implemented Part 1 strategy and the Part 3 provided strategies, 298 out of 344 (86.63%) students correctly rated similarity for problem 1, 378 out of 391 (96.68%) students did so for problem 2, 340 out of 384 (88.54%) did so for problem 3, 147 out of 191 (76.96%) did so for problem 4, and 219 out of 355 (61.69%) did so for problem 5. (Correctness of a similarity rating was determined by whether students gave a rating of 4 or 5 (out of 5) for the similarity of the strategy in Part 3 (situationally appropriate or standard algorithm) that corresponded to what they had used in Part 1.) For users of the situationally appropriate strategy, the ability to recognize the similarity between this strategy when presented in Part 3 and their own use of it in Part 1 varied considerably across problems, as follows: problem 1 (123 out of 157 (78.34%)), problem 2 (26 out of 29 (89.66%)), problem 3 (7 out of 15 (46.67%)), problem 4 (5 out of 12 (41.67%)), and problem 5 (77 out of 176 (43.75%)).

Goodness rankings.
As one component of the Algo-Flex/Struct-Flex determination for each problem, students were coded as to whether they preferred the standard algorithm, the situationally appropriate strategy, or neither, based in part on how they ranked the goodness of the presented situationally appropriate and standard strategies for each problem in Part 3 (i.e., to which strategy they gave the highest goodness rating). In ranking the given strategies on their goodness, the largest percentage of students preferring the standard algorithm was 71.60% (295 out of 412) in problem 3, and the lowest percentage was 9.22% (38 out of 412) in problem 2. Regarding a preference for the situationally appropriate strategy, most students (357 out of 412 (86.65%)) expressed such a preference in problem 2, and few of them (66 out of 412 (16.02%)) did so in problem 3 (see Figure 1). More students rated situationally appropriate strategies as better than standard algorithms in problems 1, 2, and 4, and more students saw the standard algorithm as better than the situationally appropriate strategy in problem 3. Note that some students assigned the highest goodness ratings to both standard algorithms and situationally appropriate strategies (problem 1: 40 (9.71%); problem 2: 17 (4.13%); problem 3: 26 (6.31%); problem 4: 42 (10.19%); problem 5: 24 (5.83%)).

Prevalence of Algo-Flex and Struct-Flex

Note that we were centrally interested in learning about the relationship between flexibility type, problem type, and student characteristics. Our investigations centered on the following three questions: (1) To what extent do high school students exhibit Algo-Flex or Struct-Flex on an assessment containing algebra and arithmetic tasks? (2) Does the prevalence of Algo-Flex and Struct-Flex depend upon the encountered problem type (algebra versus arithmetic)? (3) Does the prevalence of Struct-Flex and Algo-Flex among high school students depend on student characteristics such as current high school math course level (algebra vs. geometry; high vs. low) and/or current math course grade (high vs. low)?

Research question 1. To understand the degree to which students showed Algo-Flex and Struct-Flex, we looked at the descriptive data for both flexibility types. While 78.64% of all students (324 out of 412) in our sample showed knowledge of both the situationally appropriate and standard strategies on at least one problem, only 23.06% of the total sample (95 out of 412) showed Struct-Flex on at least one problem, and only 24.27% of the total sample (100 out of 412) showed Algo-Flex on at least one problem. In other words, about a quarter of students used both types of strategies on at least one problem on the assessment and exhibited a preference for situationally appropriate strategies on at least one problem, while another quarter of students used both types of strategies but exhibited a preference for standard algorithms on at least one problem. Turning to an examination of only those students who showed knowledge of both the standard and the situationally appropriate strategies across Parts 1 and 2 on at least one problem, and the extent to which these students exhibited Algo-Flex and/or Struct-Flex, our results indicated the following.
Looking first at Algo-Flex at the problem level (see Table 6), the percentage of students who used both the situationally appropriate and standard strategies and indicated a preference for the standard algorithm was around 3.16% on problem 1, 0.73% on problem 2, 14.32% on problem 3, 0.73% on problem 4, and 9.22% on problem 5. The average percentage of students showing Algo-Flex on the algebra problems was about 7.52% (31 out of 412), while the average percentage showing Algo-Flex on the arithmetic problems was approximately 4.37% (18 out of 412). The average algebra Algo-Flex score was 0.15 (out of 2), and the average arithmetic Algo-Flex score was 0.13 (out of 3). The average total Algo-Flex score across all 412 students was 0.28 out of 5. For Struct-Flex at the problem level, the percentage of students who used both the situationally appropriate and standard strategies and indicated a preference for situationally appropriate strategies was 16.99% on problem 1, 4.37% on problem 2, 0.49% on problem 3, 0.24% on problem 4, and 4.37% on problem 5. The average percentage of students with Struct-Flex was about 2.43% (10 out of 412) for algebra problems, as compared to around 7.20% (29.67 out of 412) for arithmetic problems. The algebra Struct-Flex score had an average of 0.05 (out of 2), and the arithmetic Struct-Flex score had an average of 0.22 (out of 3). The average total Struct-Flex score was 0.27 (out of 5). Across all problems, 23.54% of students (97 out of 412) showed global Algo-Flex, while 22.33% of students (92 out of 412) showed global Struct-Flex, for the assessment as a whole.

Research question 2. Our second research question examined the association between problem type (arithmetic vs. algebra) and flexibility type. We first looked at problem-level nuances for Algo-Flex in algebra. Here, we used Pearson's Chi-squared tests and logistic regression models to examine the significance of each flexibility type's frequency for arithmetic and algebra. As shown in Figure 2, we found that more students exhibited Algo-Flex on the algebra problems, while more students exhibited Struct-Flex on the arithmetic problems. Among students exhibiting Algo-Flex, more did so in algebra than in arithmetic. Conversely, a greater proportion of students exhibited Struct-Flex on the arithmetic problems than on the algebra problems. A Pearson's Chi-squared test with Yates' continuity correction on all students who exhibited multiple strategies and showed a preference for a strategy type (n = 212) indicated a significant association, with a moderate effect, between problem type and flexibility type (χ2 = 28.51***, Cohen's w = 0.37). An additional logistic regression model confirmed that problem type was a significant predictor of flexibility type (β = 1.70***, McFadden's pseudo R2 = 0.11). We fitted the following model: logit(Pr(Algo-Flex = 1)) = −0.47 + 1.70 × Algebra. For arithmetic problems, the log(odds) that a student exhibited Algo-Flex was −0.47, which means students were less likely than not to show Algo-Flex on arithmetic problems. For algebra problems, on the other hand, there was an increase of 1.70 in the log(odds) that a student exhibited Algo-Flex, which means that the log(odds) that a student showed Algo-Flex was 1.23 for algebra problems. Both p-values were far below 0.05, and thus the log(odds) and the log(odds ratio) were both statistically significant.
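As a quick check (our own back-calculation from the reported coefficients), converting these log-odds to probabilities shows what the fitted model implies for the subgroup analyzed here:

```python
import math

def inv_logit(x):
    """Convert a log-odds value to a probability."""
    return 1 / (1 + math.exp(-x))

b0, b1 = -0.47, 1.70                 # coefficients reported above
print(round(inv_logit(b0), 2))       # ~0.38: P(Algo-Flex) on arithmetic problems
print(round(inv_logit(b0 + b1), 2))  # ~0.77: P(Algo-Flex) on algebra problems
```

These implied probabilities line up with the raw Algo-Flex to Struct-Flex ratios reported in the Discussion (51:82 on arithmetic and 61:18 on algebra).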
In summary, the algebra problem type was a significant predictor of Algo-Flex.

Research question 3. Our third research question explored how the prevalence of global Algo-Flex and Struct-Flex depended on student characteristics. In the previous section, we discussed how each student was given a global flexibility category indicating whether they exhibited more Struct-Flex or more Algo-Flex (or both equally) on the assessment as a whole. With respect to global Algo-Flex, the histogram of the percentage of students with global Algo-Flex in Figure 3 highlights the difference between students with a high course grade (e.g., a grade of A or B; 26.97% (82 out of 304) global Algo-Flex rate) and those with a low course grade (e.g., a grade of C or below; 13.89% (15 out of 108) global Algo-Flex rate). Figure 4 shows the rate of global Algo-Flex for each math course, showing that Algebra I Honors has the lowest global Algo-Flex rate of 12.50% (5 out of 40) while AP Statistics has the highest global Algo-Flex rate of 34.62% (9 out of 26). Figure 3 also shows the percentage of students with global Algo-Flex for the assessment as a whole by gender (female: 24.10%; male: 23.04%) and by age group (28.95% for 14-year-olds, 25.77% for 15-year-olds, 14.63% for 16-year-olds, 14.63% for 17-year-olds, and 33.33% for 18-year-olds; 23.46% on average). On the other hand, global Struct-Flex for the assessment as a whole does not seem to differ as much by current math course grade (high course grade: 25.00% (76 out of 304); low course grade: 14.81% (16 out of 108)), as shown in Figure 3. Of the female students, 22.05% (43 out of 195) exhibited global Struct-Flex for the assessment as a whole, compared to 22.58% of male students (49 out of 217). The age difference was small among 14-year-olds (22.81%), 15-year-olds (23.31%), 16-year-olds (19.51%), and 17-year-olds (28.27%). (There were only 12 students aged 18, so their 0% global Struct-Flex rate could be biased.) Yet, by current math course level in Figure 4, there is a visible difference between high-level math courses (Geometry Honors: 24.84% (38 out of 153), Algebra II Honors: 28.46% (35 out of 123), AP Statistics: 30.77% (8 out of 26); high-level average: 28.02%) and low-level courses (Algebra I Honors: 5.00% (2 out of 40), Geometry: 10.00% (2 out of 20), Algebra II: 14.00% (7 out of 50); low-level average: 9.67%). These descriptive results suggest that the prevalence of Algo-Flex and Struct-Flex varied based on students' course grades, course levels, gender, and age. To further examine how the prevalence of Struct-Flex and Algo-Flex might be related to student and course characteristics, we performed three-level (courses containing classrooms, which in turn contained students) logistic regression analyses, using current math course level (high versus low), gender (female versus male), age (between 14 and 18), and course grade (high versus low) as independent variables. The dichotomous categorical dependent variables were global Algo-Flex and global Struct-Flex for the assessment as a whole. In Model 0, we used a fully unconditional three-level null model (see Table 7) without including any predictors. The intercept of 0.31 indicated the estimated odds of exhibiting global Algo-Flex for the assessment as a whole for all students in all courses. Within-subject variance at Level 1 (student) was 3.29, between-subject variance at Level 2 (classroom) was 0.09, and between-subject variance at Level 3 (course) was negligible. This means that 2.66% of the overall variance in global Algo-Flex for the assessment as a whole could be accounted for by classroom-level factors, while the greatest proportion of the variance (97.34%) was related to differences between students within classrooms within courses.
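The variance partition reported here follows the usual latent-variable convention for multilevel logistic models, in which the student-level residual variance is fixed at π²/3 ≈ 3.29; a short check (ours) reproduces the reported percentages:

```python
import math

level1_student = math.pi ** 2 / 3   # ~3.29, the fixed latent-scale residual
level2_classroom = 0.09             # reported classroom variance (Model 0)
level3_course = 0.0                 # reported as negligible

total = level1_student + level2_classroom + level3_course
print(f"classroom share: {level2_classroom / total:.2%}")  # ~2.66%
print(f"student share:   {level1_student / total:.2%}")    # ~97.34%
```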
This null model revealed that the differences between students in Algo-Flex far outweighed the differences between classroom and course groups. Nevertheless, classroom-level differences could still explain part of the variance in global Algo-Flex for the assessment as a whole. Next, variables were added at each level. In Model 1, controlling for age and gender, the significant student-level predictor was current math course grade (low versus high), with an odds ratio of 0.48, indicating that students with lower course grades were less likely to show global Algo-Flex for the assessment as a whole. In Model 2, the course-level predictor level (low versus high) was not found to be significant, but the low course grade variable remained significantly associated with lower odds of exhibiting global Algo-Flex for the assessment as a whole. The fitted Model 2 was logit(Algo_ijk) = −0.54 − 0.02 × male_ijk − 0.16 × age_ijk − 0.65 × lowgrade_ijk + 0.49 × lowlevel_k. (Note that these coefficients are in logits.) Comparing the full Model 2 and the null Model 0, a majority of the overall variance (99.70%) was still related to student-level factors, while 0.30% remained explainable by classroom-level differences. Similarly, in Table 8, we show the results of the same logistic regression models for global Struct-Flex for the assessment as a whole. The null Model 3 contained no predictors at any level. The student-level within-subject variance was 3.29, the classroom-level between-subject variance was 0.21, and the course-level variance was 0.16. In other words, 5.74% of the total variance was explainable by classroom variables, 4.37% of the total variance lay at the course level, and the remaining 89.89% was attributable to individual differences. Adding student-level factors in Model 4, none of the variables was found to be a significant predictor. However, the added course-level factor (level) was shown to be significant in Model 5. The odds ratio of 0.33 indicated that students from low-level courses had about one-third the odds of exhibiting global Struct-Flex across all problems compared to students from high-level courses. The fitted Model 5 was logit(Struct_ijk) = −0.99 + 0.07 × male_ijk + 0.01 × age_ijk − 0.49 × lowgrade_ijk − 1.11 × lowlevel_k. (Note that these coefficients are in logits.)

Discussion

This paper reports the prevalence and characteristics of two types of procedural flexibility: Struct-Flex, where students have knowledge of both standard algorithms and situationally appropriate strategies and indicate a preference for using expert-identified situationally appropriate strategies; and Algo-Flex, where students have knowledge of both types of strategies but indicate a preference for standard algorithms. Understanding the prevalence and characteristics of these two types of flexibility can further our understanding of mathematical proficiency, especially procedural flexibility.
Findings on factors that induce Algo-Flex or Struct-Flex can also advance our practical knowledge of students' strategy choices and may help with the design of relevant educational interventions. These two types of flexibility were previously mentioned in Star et al. (2022), where Spanish students were shown to exhibit a kind of flexibility that relied heavily on the standard algorithm. Similar patterns of students showing a preference for the standard algorithm were found in Liu et al. (2018) and Xu et al. (2017). By identifying and defining these two flexibility types, the present study more formally posits a reformulation of the construct of procedural flexibility based on the unavoidable nuances in determining situationally appropriate strategies. We also assessed several relevant course-level and student-level factors to explore the characteristics of Algo-Flex and Struct-Flex. Our first research question examined the frequency with which high schoolers exhibited Algo-Flex or Struct-Flex on algebra and arithmetic tasks. Consistent with our hypotheses, the rate of Struct-Flex was relatively low in algebra. Our results indicated that about three quarters of students had knowledge of both standard and situationally appropriate strategies, but about a third of them preferred to use standard algorithms and thus exhibited Algo-Flex. The ratio of students who exhibited Algo-Flex to those who exhibited Struct-Flex was notably higher on algebra problems (61:18), whereas this ratio was only 51:82 on arithmetic problems. Yet at the same time, and contrary to our hypotheses, the proportion of students exhibiting global Struct-Flex for the assessment as a whole was approximately the same as the proportion exhibiting global Algo-Flex. In other words, for the assessment as a whole, neither flexibility type was more prevalent, but on algebra tasks, Algo-Flex was common. This result indicates that the present U.S. sample was similar to the Spanish sample in Star et al. (2022) with regard to Algo-Flex in algebra problem-solving but was dissimilar to the Spanish sample with respect to the rates of Algo-Flex and Struct-Flex across all problems. This noticeable difference in flexibility prevalence based on problem type leads to our next research question. In our second research question, examining the relationship between flexibility type and problem type (algebra versus arithmetic), statistical tests revealed that algebra problems were linked to the presence of Algo-Flex, while arithmetic problems were linked to the presence of Struct-Flex. Specifically, for the algebra problems, more students exhibited Algo-Flex as compared to Struct-Flex, while the opposite was true for the arithmetic problems. These results suggest that students preferred the standard algorithm for linear equations, even when they showed knowledge of both standard and situationally appropriate strategies for solving these types of algebra problems. Students' solving behaviors were quite different on arithmetic problems, where the number of students who knew both the situationally appropriate and standard strategies was twice that for algebra. Among these students, most exhibited Struct-Flex on the arithmetic problems. In sum, problem type was correlated with flexibility type, and students who knew both types of strategies were less likely to prefer situationally appropriate strategies on algebra problems.
Students' preference for the standard algorithm when solving linear equation problems could be connected to their familiarity and confidence in applying this strategy. According to the strategy choice models described by Jiang et al. (2022), if students had greater exposure to standard algorithms (as compared to situationally appropriate strategies) in their math classes, this would help explain their tendency to use the standard strategy more frequently. The process of repeatedly seeing and using standard algorithms, and then storing these experiences in long-term memory, would have led to a strengthening of the accessibility of standard algorithms. As a result, when students encountered an unfamiliar linear equation problem, they were more likely to recall the standard algorithm. Thus, the present results, and students' tendency to prefer the standard algorithm when solving algebra problems, suggest that these students have had much greater exposure to the standard algorithm in their algebra classes, as compared to other, situationally appropriate strategies. These results align with prior studies, particularly the finding from Newton et al. (2010) showing that knowledge of multiple strategies does not guarantee students' preference for the more efficient ones, especially when students are relatively more fluent in using the standard strategy. In the current sample, our analysis of students' strategy preferences, among those who possessed knowledge of multiple strategies, indicated that one-third of such students preferred standard algorithms. However, interestingly, our results diverged from another of Newton et al.'s (2010) findings. They described how students in their sample were more likely to use situationally appropriate strategies when a given problem task was difficult or from a more advanced topic. In contrast, our results suggest that as problems involved more advanced mathematical ideas (i.e., algebra as compared to arithmetic problems), multiple-strategy problem solvers exhibited a stronger preference for the standard algorithm. These differences in findings could be due to differences in students' mathematics knowledge levels in the two samples. While Newton et al. (2010) suggested that weakness in students' knowledge may have been an important force behind students' use of alternate strategies in their study, in the present sample students exhibited greater facility in the use of multiple strategies and thus could have been more attentive to solution accuracy and the familiarity of the strategies that they chose to use. Our third research question concerned the relationships between the prevalence of Algo-Flex and Struct-Flex and student characteristics such as current math course level and math course grade. We found that higher course grades predicted the prevalence of global Algo-Flex for the assessment as a whole. In other words, students who self-reported earning a higher course grade and who showed knowledge of multiple strategies were more likely to exhibit a preference for the standard algorithm. Age, gender, and current course level did not explain global Algo-Flex in this analysis. Since higher course grades are generally related to greater accuracy on course assessments, this finding could be interpreted as providing further support for the link between use of the standard algorithm and higher accuracy in problem solving.
One prior study found that the use of standard algorithms had a greater influence on problem-solving accuracy than the use of situationally appropriate strategies (see also Garcia Coppersmith & Star, 2022). In addition, accuracy is frequently mentioned as an influential criterion that students use when identifying which strategies are "best" (Star & Madnani, 2004). Our results also indicated that students' current math course level was a significant predictor of global Struct-Flex for the assessment as a whole. In other words, there was an association between high school math courses and global Struct-Flex prevalence, such that students in relatively higher-level courses (Algebra II Honors, Geometry Honors, and AP Statistics) were more likely to exhibit global Struct-Flex than were students in relatively lower-level courses (Algebra I Honors, Algebra II, and Geometry). This result aligns with prior work (e.g., Schneider et al., 2011) on how high prior math knowledge correlates with the likelihood that students will gain the ability to use multiple strategies and choose situationally appropriate ones. Moreover, our results show classroom-level differences as well. Even within a single high school math course, the prevalence of Struct-Flex problem solvers varied substantially. This variation could be a product of classroom-level factors, such as teachers' teaching experience as well as their own math knowledge and flexibility. However, we note that our models did not indicate that age was a factor that explained the prevalence of global Struct-Flex. Future studies might benefit from the use of a longitudinal design to untangle the relationship between course level, age, and flexibility. Limitations of the current study include the following. (1) We conducted the flexibility assessment on a convenience sample, which restricts our ability to generalize the present results beyond our sample. (2) We were not able to collect classroom-level data, including teacher characteristics or measures of instructional quality, which would have been useful for exploring the relationship between flexibility and instruction that prior research has proposed. (3) Our cross-sectional design limits our ability to make claims about the development of flexibility; for a more comprehensive analysis of the development of both types of flexibility, future studies may explore longitudinal data.

Conclusion

The present study addressed an aspect of flexibility that has been only implicitly referred to in prior research (e.g., Star et al., 2022) but has not been named or formalized to date: namely, algorithm-oriented flexibility, or a tendency for students who know and use multiple strategies to exhibit a preference for the use of the standard algorithm. While structure-informed flexibility reflects a more traditional form of flexibility, with a preference for situationally appropriate strategies, students who exhibit algorithm-oriented flexibility appear to be making a rational, informed choice to use the standard algorithm, given its wide applicability, ease of execution, and the solution accuracy it affords. Core to algorithm-oriented flexibility is the preference for using the standard algorithm, even when one knows alternative strategies.
Our identification of this second form of procedural flexibility helps to explain the gap found in many prior studies between the number of students who exhibit knowledge of multiple strategies and the much smaller subset of those students who demonstrate spontaneous flexibility by using situationally appropriate strategies in their first attempt on a problem. Furthermore, we found that a given problem type (algebra or arithmetic) as well as students' course grade and course level helped to explain the relative frequency of algorithm-oriented or structure-informed flexibility. In particular, our results suggest that students tend to show Algo-Flex in algebra problem solving, that higher course grades are associated with a preference for standard algorithms, and that there is an increasing trend toward the use of situationally appropriate strategies that utilize problem structure as students' math knowledge grows. These results from a U.S. high school student sample can advance the field's current understanding of procedural flexibility in mathematical problem solving. Future studies could explore the prevalence and characteristics of Algo-Flex in other countries and regions, observing whether there are similarities or differences in procedural flexibility among European, North American, and Asian students. At the same time, these different predictors may inform the design of relevant educational interventions targeting Struct-Flex.
Ropinirole hydrochloride remedy for amyotrophic lateral sclerosis – Protocol for a randomized, double-blind, placebo-controlled, single-center, and open-label continuation phase I/IIa clinical trial (ROPALS trial)
Introduction: Amyotrophic lateral sclerosis (ALS) is an intractable and incurable neurological disease. It is a progressive disease characterized by muscle atrophy and weakness caused by the selective vulnerability of upper and lower motor neurons. In disease research, it has been common to use mouse models carrying mutations in genes responsible for familial ALS as pathological models of ALS. However, no model has reproduced the actual conditions of human spinal cord pathology. Thus, we developed a method of producing human spinal motor neurons using human induced pluripotent stem cells (iPSCs) and an innovative experimental technique for drug screening. As a result, ropinirole hydrochloride was eventually discovered after considering such results as its favorable penetration into the brain and its tolerability, including possible adverse reactions. We therefore explore the safety, tolerability, and efficacy of ropinirole hydrochloride as an ALS treatment in this clinical trial.
Methods: The ROPALS trial is a single-center, double-blind, randomized, parallel-group-controlled trial of the safety, tolerability, and efficacy of the ropinirole hydrochloride extended-release tablet (Requip CR) at 2- to 16-mg doses in patients with ALS. Twenty patients will be recruited: fifteen in the active drug group and five in the placebo group. All patients will be able to receive the standard ALS treatment of riluzole, provided the dosage is not changed during the trial. The primary outcome will be safety and tolerability at 24 weeks, defined from the date of randomization. The secondary outcome will be efficacy, including any change in the ALS Functional Rating Scale-Revised (ALSFRS-R), change in the Combined Assessment of Function and Survival (CAFS), and a composite endpoint computed as a sum of Z-transformed scores on various clinical items. Notably, we will perform an explorative drug effect evaluation using patient-derived iPSCs to prove this trial concept. Eligible patients will have clinically possible and laboratory-supported, clinically probable, or clinically definite ALS according to the revised El Escorial criteria, with a disease duration of 60 months or less, an ALSFRS-R score ≥2 points on all items, and an age of 20 to 80 years.
Conclusion: Patient recruitment began in December 2018, and the last patient is expected to complete the trial protocol in November 2020.
Trial registration: Current controlled trials UMIN000034954 and JMA-IIA00397. Protocol version 1.6 (date: 5/Apr/2019).
Introduction
Amyotrophic lateral sclerosis (ALS) is an intractable and incurable progressive neurological disease characterized by muscle atrophy and weakness caused by the selective vulnerability of upper and lower motor neurons. Patients with ALS develop symptoms such as gait difficulty, dysarthria, dysphagia, and respiratory disorder, which restrict their independence and ability to communicate. However, their consciousness and perception are completely normal, and this feature of the disease significantly reduces their quality of life (QOL) [1]. The crude prevalence and incidence rates per 100 000 people per year were 9.9 (95% CI 9.7-10.1) and 2.2 (95% CI 2.1-2.3), respectively, in Japan.
The male-to-female ratio was approximately 1.5, and the age group with the highest prevalence and incidence was 70-79 years [2]. ALS develops mainly in middle age and later, which impedes patients' participation in society. Therefore, the psychological and financial burdens on patients and their families are serious. While the clinical course varies among patients, the median time from onset to death or to the use of respiratory support has been reported to be 20-48 months. Familial ALS (FALS) accounts for 5%-10% of all ALS cases, and the remaining cases are classified as sporadic ALS (SALS), whose genetic background and etiologic factors have not been clearly elucidated. More than 100 point mutations spanning the SOD1 sequence have been identified in patients with FALS (gain-of-function type). In addition, at least 25 responsible genes have also been reported [1]. Therefore, to develop treatment options for ALS, including SALS, which represents the majority of ALS cases, an approach to treating pathological conditions that are common to FALS and SALS is required. Specific loss and degeneration of upper and lower motor neurons and their nerve fibers are present in both FALS and SALS; thus, preventing motor neuron degeneration and death is key in developing treatment options that are common to both forms of ALS. The pathological conditions of ALS have been studied, and the cellular processes that follow its onset (mitochondrial dysfunction, protein aggregation, oxidative stress, excitotoxicity, inflammatory response, and apoptosis) have been partly elucidated. Mitochondrial abnormalities can occur as an initial event of neurodegeneration or secondary to other cellular processes, and may also be the cause of oxidative stress, excitotoxicity, and apoptosis in some cases [1]. Riluzole, which is considered to exert neuroprotective effects by reducing glutamate toxicity, has been approved for the treatment of ALS. Riluzole was shown to potentially increase survival in some clinical trials and is therefore used widely in Japan [3-8]. However, the effect of riluzole is not completely satisfactory for patients. Furthermore, edaravone injection solution was approved for the additional indication of inhibiting the progression of functional disorder in ALS. However, no study has been conducted to confirm the impact on survival, and the beneficial effect on survival has not yet been verified [9,10]. Under these circumstances, development of treatment options that promote motor neuron survival is anticipated. Ropinirole hydrochloride (trade name: Requip Tablets 0.25 mg, 1 mg, and 2 mg) is a dopamine receptor agonist with a non-ergot alkaloid chemical structure, which was synthesized and developed based on the structure of dopamine by GlaxoSmithKline Ltd. Co (UK). Ropinirole hydrochloride is selective for the D2 subtype of dopamine receptors. It was first approved for the indication of Parkinson's disease in the UK in July 1996 and was later approved worldwide. The drug in this trial (Requip Controlled-release (CR) Tablets 2 mg and CR Tablets 8 mg) is an extended-release formulation of ropinirole hydrochloride. This product was first approved in the Slovak Republic in 2006 and is now approved worldwide.
Ropinirole hydrochloride not only improves the motor symptoms of Parkinson's disease by stimulating dopamine receptors (particularly D2-like receptors) but also exhibits the following neuroprotective properties in animal models: 1) preventing 6-OHDA-induced decreases in striatal dopamine levels [11], 2) increasing glutathione, SOD, and catalase activities [11,12], 3) promoting neurotrophic factor production in the ventral mesencephalon [13], and 4) promoting neural stem cell proliferation in the subventricular zone [14]. Pramipexole hydrochloride (PPX), a D2-like dopamine receptor agonist that shares dopamine-agonist activity with ropinirole hydrochloride, has been demonstrated to have a protective effect on mitochondria and a free radical scavenging effect. Therefore, with hopes of improving the pathological conditions of ALS, a clinical study was conducted using dexpramipexole (RPPX), the R(+) enantiomer of PPX. RPPX does not have dopamine receptor agonist activity and therefore causes no adverse drug reactions (ADRs) attributable to that activity. A phase I clinical study of RPPX was conducted as a randomized, double-blind, placebo-controlled study in 54 healthy volunteers. In that study, RPPX was well tolerated at doses up to 300 mg/day [15]. In the historically controlled phase II study that followed, RPPX was administered to 30 ALS patients at a dose of 30 mg/day for 6 months. It was tolerated and improved the slope of decline on the ALS Functional Rating Scale-Revised (ALSFRS-R) score by 13% [16]. In a dose escalation study in 10 ALS patients, the dose of RPPX was increased to a maximum of 300 mg/day, which was confirmed to be safe and tolerable with no dopaminergic ADRs reported. This study was continued as an extension study, in which RPPX was administered at doses of 30 mg/day and 60 mg/day for 6 months for comparison. As a result, the decline (exacerbation) of the slope of the ALSFRS-R score was smaller at 60 mg/day than at 30 mg/day [16]. Next, a randomized, double-blind, placebo-controlled, phase II study was conducted, and the safety and tolerability of RPPX were evaluated in ALS patients. This study was divided into two parts: at Stage 1, 102 subjects were randomized to receive either RPPX 50 mg/day, 150 mg/day, 300 mg/day, or placebo for 12 weeks. At Stage 2, 92 subjects who underwent a 4-week washout were randomized to receive either 50 mg/day or 300 mg/day for 24 weeks. RPPX was generally safe and well tolerated. The slope of decline in the ALSFRS-R score was markedly reduced in the higher-dose group at both Stages 1 and 2, and the hazard ratio of mortality was reduced by 68% in the 300 mg/day group compared with the 50 mg/day group at Stage 2 (p = 0.07, log-rank test). Treatment at 300 mg/day was significantly more beneficial in terms of the integrated outcome of the changes in ALSFRS-R and mortality (p = 0.046, joint-rank test) [17]. Based on these results, a phase III, multicenter, randomized, double-blind, placebo-controlled study of RPPX (EMPOWER) was conducted in ALS patients in the US, Canada, Australia, and Europe; regrettably, however, the results did not show a clinically significant benefit [18]. Nevertheless, there is still plenty of room for improvement in clinical study design, including the selection of patients, the treatment method, and the evaluation methods (especially methods other than the ALSFRS-R).
In ALS research, it is common to use mouse models carrying mutations in genes responsible for FALS as pathological models of ALS. However, the mutant SOD1 transgenic mouse model that has been used most frequently in previous preclinical studies does not show aggregation of phosphorylated TDP-43, the most typical pathological feature of human ALS. While the recently reported TDP-43 and FUS transgenic/knock-in mouse models show some human ALS-like pathology [19], such as aggregation of TDP-43/FUS proteins, these model mice have not yet led to the successful development of new drugs for ALS. Thus, we developed a method of producing human spinal motor neurons using human iPSCs and an innovative experimental technique for drug screening [20]. Using this system, spinal motor neurons were produced from iPSCs from healthy individuals as well as from patients with familial ALS (TDP-43 and FUS mutations) and/or SALS. Then, drug screening was carried out with existing drug libraries, searching for compounds that improved ALS-related phenotypes in patient-derived spinal motor neurons in a dish. As a result, several candidate drugs emerged, and ropinirole hydrochloride was eventually selected after considering such factors as blood-brain barrier permeability and tolerability. As mentioned above, the previous phase III clinical trial for ALS (the EMPOWER study) used RPPX (the R(+) enantiomer of PPX, with no D2 receptor agonist activity) [18]. Notably, by using an in vitro model, we showed that ropinirole hydrochloride had significantly superior anti-ALS therapeutic activity compared with the already-approved drugs for ALS (riluzole and edaravone) and with PPX and RPPX [20], providing the rationale for using ropinirole hydrochloride in the present clinical trial. In the present clinical trial (the ROPALS trial), we explore the safety, tolerability, and efficacy of ropinirole hydrochloride in ALS.
Study objectives
Primary Objective: To exploratively assess the safety (type, frequency, and severity of adverse events [AEs], and the time course of laboratory test values) and tolerability of the ropinirole hydrochloride extended-release tablet in ALS patients. Secondary Objective: To exploratively assess the efficacy of the ropinirole hydrochloride extended-release tablet, compared with placebo, in terms of delay in the progression of ALS.
Subject population
Patients affected by probable (clinically or laboratory-supported) or definite ALS [21] must satisfy all the inclusion and exclusion criteria (Table 1) upon interim registration during the 28-day screening period, through clinical evaluation and laboratory and instrumental assessment. Screening assessments include general and neurological examinations, ALSFRS-R, blood sampling, biochemical and pregnancy evaluations (for fertile females), urinalysis, ECG, and spirometry. Moreover, patients must also meet all the inclusion and exclusion criteria (Table 1) upon official registration after the 3-month run-in period.
Preparation of written information and informed consent form
The investigator will prepare written information for subjects and an informed consent form (hereinafter collectively referred to as the informed consent document). The informed consent document is an all-in-one document or a set of documents, and will be revised, as appropriate. The prepared informed consent document will be submitted to the head of the study site to obtain approval of the IRB prior to the start of the study.
Table 1 Inclusion and exclusion criteria for the ROPALS trial.
Inclusion criteria:
[Interim Registration]
1) Patients who have a diagnosis of "clinically possible and laboratory-supported ALS," "clinically probable ALS," or "clinically definite ALS" according to the criteria for the diagnosis of ALS (El Escorial revised, World Federation of Neurology) and who are within 60 months after onset of the disease.
2) Grade 1 or 2 according to the ALS Severity Classification (Specific Disease Research Survey, Ministry of Health, Labour and Welfare, January 1, 2007).
3) Japanese patients between 20 and 80 years of age at the time of informed consent.
4) ALSFRS-R score ≥2 points on all items ("Handwriting" and "Eating motion (1)" should be scored ≥2 points on each side).
5) Forced vital capacity (%FVC) ≥70%.
6) Written informed consent for participation in the study provided by the patients themselves.
7) Ability to be treated in outpatient settings (partially under hospitalization) during the study.
[Official Registration]
8) Change in ALSFRS-R score within the range of -2 to -5 points during the 12-week run-in period.
9) Have not started riluzole treatment, reduced the dose of riluzole, or discontinued riluzole treatment after the start of the run-in period.
10) Have not used edaravone or high-dose mecobalamin (25 mg or 50 mg) after the start of the run-in period.
11) Ability to be treated in outpatient settings (partially under hospitalization) during the study.
Matters to be contained in the informed consent document
The items listed below must at least be contained in the written information for subjects. (1) That the study involves research. (2) The purpose of the study. (3) The name, title, and contact information of the investigator. (4) The study procedure(s) (including experimental aspects of the study, subject inclusion criteria, and the probability of random assignment to each treatment). (5) Reasonably expected benefits, and foreseeable risks or inconveniences to subjects (when there is no intended clinical benefit to subjects, the subjects should be made aware of this). (6) The presence/absence of alternative courses of treatment and, if present, their expected notable benefits and risks in a study in patients. (7) The expected duration of the subject's participation in the study. (8) That the subject's participation in the study is voluntary and that the subject can withdraw from or refuse participation in the study, or his/her legal representative can withdraw the subject from or refuse his/her participation in the study, at any time, without penalty or loss of benefits to which the subject is otherwise entitled. (9) That individuals involved in the study, including monitors, auditors, the IRB, etc., and regulatory authorities may request direct access to source documents, without violating the subject's confidentiality, and that, by signing or sealing the informed consent form, the subject or his/her legal representative authorizes such access. (10) That the subject's identity remains confidential even when the study results are published. (11) The person(s) to contact at the study site for further information about the study and the subject's rights or in the event of a study-related health injury. (12) Compensation and/or treatment available to the subject in the event of a study-related health injury. (13) The type of the IRB that reviews the appropriateness etc.
of the study, items to be reviewed at each IRB meeting, and other IRB-related matters in the study. (14) The planned number of subjects involved in the study. (15) That the subject or his/her legal representative will be informed immediately when information is obtained that may affect the subject's or his/her legal representative's willingness to continue participation in the study. (16) Conditions or reasons for withdrawing the subject from his/her participation in the study. (17) The anticipated financial burden, if any, on the subject for participation in the study. (18) The anticipated prorated payment, if any, to the subject for participation in the study (e.g., agreement on payment estimation). (19) Matters to be adhered to by the subjects.
Method of obtaining informed consent
(1) Prior to the start of the study, the investigator will distribute the informed consent document approved by the IRB to patients as prospective study subjects and provide them with an adequate explanation of the contents of the study. A study collaborator may provide a supplementary explanation. Explanations should be provided in as plain language as possible so that patients can understand them, based on the informed consent document for the study, and patients' questions must be adequately answered. After confirming that the patients have fully understood the contents of the explanation, the investigator will obtain their voluntary written informed consent for participation in the study. Interim registration will take place within 28 days of obtaining informed consent. The presence of cognitive impairment is not listed in the exclusion criteria (Table 1) because as many ALS patients as possible are to be recruited for the evaluation of the safety profile of ropinirole hydrochloride. However, we will be very careful in obtaining informed consent from ALS patients with possible cognitive impairment. First, when the cognitive impairment is too severe for them to perform "writing" or "using chopsticks", they are to be excluded by the ALSFRS-R criteria (Table 1). Second, when the patients do not fully understand the protocol, informed consent can be obtained from their close proxies. Importantly, however, these patients can also be excluded based on the investigator's judgment (Table 1). (2) The investigator who provides the explanation and the patient will affix their names/seals or signatures to the informed consent form, with the date. The study collaborator who provides a supplementary explanation will also affix his/her name/seal or signature to the informed consent form, with the date. (3) If the patient is unable to sign the informed consent document due to a loss of upper limb function caused by ALS symptoms, the investigator will provide an adequate explanation in the presence of a fair witness and obtain the patient's voluntary written informed consent to participate in the study. The witness will also affix his/her name/seal or signature to the informed consent form, with the date, and state his/her relationship to the patient. If the patient is physically unable to sign, the witness will record on the informed consent form the reason the patient cannot give an authentic signature. (4) The investigator will issue the signed and dated informed consent document to the subjects before their participation in the study. The original informed consent form will be appropriately retained in accordance with the regulations of the study site.
Revision of the informed consent document
(1) If new important information that could be relevant to the subject's willingness to continue is obtained, the investigator will immediately decide whether or not to revise the informed consent document based on the obtained information. (2) If it is deemed necessary to revise the informed consent document, the investigator must revise the document and forward it to the IRB to reobtain its approval. (3) In the case of (2) above, the investigator will immediately notify the subjects already participating in the study of the matter verbally, confirm their willingness to continue participation in the study, and record the result in the medical record. (4) The investigator will provide subjects already participating in the study with an explanation using the informed consent document reapproved by the IRB, and obtain the subjects' voluntary written informed consent for continued participation in the study. (5) As in the case of obtaining the initial informed consent, the investigator who provides the information and the subject will affix their names/seals or signatures to the informed consent form, with the date. The study collaborator who provides a supplementary explanation will also affix his/her name/seal or signature to the informed consent form, with the date. (6) The investigator will issue the signed and dated informed consent document to the subjects. The original informed consent form will be appropriately retained in accordance with the regulations of the study site.
Caregivers
The Zarit Caregiver Burden Interview is set as an endpoint in this study. Because this assessment will be conducted by caregivers of the subjects, written informed consent must also be obtained from caregivers. The caregiver assessment will be made wherever possible, and subjects are able to participate in the study even if their caregivers do not provide informed consent. Subjects will designate their caregivers involved in the assessment. Subjects are allowed to designate two or more caregivers or to change caregivers during the study. If the study for the subject is discontinued, the caregiver assessment will end upon completion of the assessment at the time of discontinuation (wherever possible).
Preparation of written information and informed consent form
The investigator will prepare the informed consent document for caregivers. The informed consent document is an all-in-one document or a set of documents, and will be revised, as appropriate. The prepared informed consent document will be submitted to the head of the study site to obtain the IRB's approval prior to the start of the study.
Matters to be contained in the informed consent document
The items listed below must at least be contained in the informed consent document.
(1) Qualifications required for caregivers involved in the assessment. (2) Roles of caregivers. (3) That the caregiver's participation in the study is voluntary and that the caregiver can withdraw from or refuse participation in the study at any time, without penalty or loss of benefits to which the subject is otherwise entitled. (4) Information collected. (5) Use of study data and protection of privacy. (6) The name, title, and contact information of the investigator.
Method of obtaining informed consent
(1) Prior to the start of the study, the investigator will distribute the informed consent document approved by the IRB to caregivers of prospective study subjects and provide them with an adequate explanation of the contents of the study. A study collaborator may provide a supplementary explanation. Explanations should be provided in as plain language as possible so that caregivers can understand them, based on the informed consent document for the study, and caregivers' questions must be adequately answered. After confirming that the caregivers have fully understood the contents of the explanation, the investigator will obtain their voluntary written informed consent for participation in the study. (2) The investigator who provides the explanation and the caregiver will affix their names/seals or signatures to the informed consent form, with the date. The study collaborator who provides a supplementary explanation will also affix his/her name/seal or signature to the informed consent form, with the date. (3) The investigator will issue the signed and dated informed consent document to the caregivers before their participation in the study. The original informed consent form will be appropriately retained in accordance with the regulations of the study site.
Study design
The flow of this study is shown in Tables 2a and 2b. This study consists of the following periods.
Phase and type of the study
(1) Screening period (from informed consent to interim registration)
(2) Run-in period: 12 weeks (from interim registration to official registration)
(3) Double-blind period: 24 weeks
(4) Tapering treatment period: 0-2 weeks
(5) Continued treatment period (open-label) (only for subjects willing to receive continued treatment): 4-22 weeks
(6) Tapering treatment period (after the continued treatment period): 0-2 weeks
(7) Follow-up period (after the double-blind period, the continued treatment period, or the time of discontinuation): within 28 days
[Screening period] After obtaining informed consent, the necessary tests/observations will be performed. Eligibility assessment will then be conducted, and interim registration will take place. Interim registration will be performed within 28 days of obtaining informed consent. [Run-in period] After interim registration, eligibility will be reconfirmed during the run-in period (12 weeks ± 7 days), and official registration will take place. In addition to the criteria for interim registration, subjects must have a change in ALSFRS-R score within the range of -2 to -5 points during the 12-week run-in period to be eligible for official registration. This criterion will be confirmed to complete official registration. [Double-blind period] After the first dose of the study drug, the dose will be increased once weekly. Treatment with the study drug (study treatment) will be started at a first dose of 2 mg, followed by increases in the dose to a maximum of 16 mg, and subjects will be monitored until Week 24.
Study treatment will be started within 15 days after official registration. The last dose of study treatment during the double-blind period will be administered on the day preceding Week 25. If the study proceeds to the continued treatment period, the double-blind period is defined as the period before study drug administration at Week 25. In principle, subjects will be monitored under hospitalization for approximately 1 week from the day preceding the first dose of the study treatment (subjects are allowed to be temporarily discharged during the specified test period if their hospital discharge is considered valid by the investigator). Subsequently, a once-weekly dose increase (allowable range: ±3 days), treatment, and monitoring will be conducted in outpatient settings. [Tapering treatment period] After the double-blind period, the dose of the study drug will be tapered in accordance with the Study Drug Tapering Protocol (Table 4). If the study does not proceed to the continued treatment period, the study treatment will be completed. [Continued treatment period] Upon completion of the double-blind period, subjects can choose whether to complete the study or continue treatment with the active drug under an open-label design (continued treatment period). The continued treatment period is 4-22 weeks; if any of the criteria listed in "12.1 Discontinuation Criteria for Subjects" are met, the study for the relevant subject should be discontinued even before the 22-week period is attained. For subjects who are unable to stay in the study for at least 4 weeks after proceeding to the continued treatment period, the study will be discontinued at the end of the double-blind period without proceeding to the continued treatment period. When proceeding to the continued treatment period, the dose of the study drug will be tapered (it will take 2 weeks in the case of reducing the dose from the maximum of 16 mg) for both the active drug and placebo groups in accordance with the Study Drug Tapering Protocol (Table 4) to maintain blinding. Subsequently, treatment with the active drug will be started at a dose of 2 mg, followed by increases in the dose to a maximum of 16 mg in accordance with the Study Drug Titration Protocol (Table 3). [Tapering treatment period (after the continued treatment period)] After the end of the continued treatment period, the dose of the study drug will be tapered in accordance with the Study Drug Tapering Protocol (Table 4), and the study treatment will be completed. [Follow-up period] The final observation will be performed within 28 days after the end of the tapering treatment period.
Table 2a The flow of this study. (*2: When the study does not proceed to the continued treatment period, the procedure during the follow-up period should be performed.)
Table 2b The flow chart of this study.
Method of blinding
The study drug randomization manager will confirm the indistinguishability in appearance and packaging form among the ropinirole hydrochloride extended-release 2 mg tablet, the ropinirole hydrochloride extended-release 8 mg tablet, and placebo before drug assignment. The study drug randomization manager will prepare the treatment code and emergency code in accordance with the procedural document separately prepared.
Methods of randomization and assignment
The investigator will enter the information required for registration in an electronic data capture (EDC) system. Subjects who are eligible for the study will be randomized to either the active drug or placebo group on the EDC system. The result of treatment assignment and the registration number will be transmitted automatically via e-mail to the unblinded pharmacist of the study site. Subjects will be randomly assigned to either the active drug or placebo at a 3:1 ratio by dynamic allocation incorporating probabilistic elements, with prespecified variables as allocation adjustment factors.
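The protocol states only that assignment uses dynamic allocation with probabilistic elements at a 3:1 ratio; the algorithm itself is not reproduced here. As an illustration, the following is a minimal Pocock-Simon-style sketch. The factor names, the 0.8 biased-coin probability, and all other implementation details are assumptions, not part of the protocol.

```python
import random
from collections import defaultdict

ARMS = {"active": 3, "placebo": 1}   # target 3:1 allocation ratio
P_BEST = 0.8                          # assumed biased-coin probability

# counts[(factor, level)][arm] = subjects already assigned with that factor level
counts = defaultdict(lambda: {arm: 0 for arm in ARMS})

def imbalance(subject, arm):
    """Total imbalance across adjustment factors if `subject` joined `arm`.
    Counts are divided by the ratio weights, so an exact 3:1 split scores
    as perfectly balanced."""
    total = 0.0
    for key in subject.items():                  # key = (factor, level)
        cell = dict(counts[key])
        cell[arm] += 1
        weighted = [cell[a] / w for a, w in ARMS.items()]
        total += max(weighted) - min(weighted)
    return total

def assign(subject):
    best = min(ARMS, key=lambda arm: imbalance(subject, arm))
    if random.random() < P_BEST:
        arm = best                               # usually take the balancing arm
    else:                                        # probabilistic element:
        arm = random.choices(list(ARMS), weights=list(ARMS.values()))[0]
    for key in subject.items():
        counts[key][arm] += 1
    return arm

# Example: allocate a subject with two hypothetical adjustment factors.
print(assign({"riluzole_use": "yes", "onset_site": "bulbar"}))
```

Dividing each arm's count by its ratio weight is the standard way to extend minimization to unequal allocation, since it makes an exact 3:1 split score as balanced.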
Primary endpoints
Type, frequency, and severity of AEs; the time course of laboratory test values; and the intergroup difference in the proportion of discontinued subjects during the 24-week double-blind period (from official registration to the final observation at Week 24 of the double-blind period).
Secondary endpoints
(1) Ratio of the change in the ALSFRS-R score every 4 weeks between pre-treatment and post-treatment assessments. The change in the ALSFRS-R score every 4 weeks during the run-in period and the change in the ALSFRS-R score every 4 weeks during the 24-week double-blind period will be calculated, and the latter-to-former ratio will be determined as the delta (Δ) ALSFRS-R ratio:
ΔALSFRS-R ratio = (change in ALSFRS-R score every 4 weeks during the 24-week double-blind period) / (change in ALSFRS-R score every 4 weeks during the run-in period)
The change in the ALSFRS-R score every 4 weeks during the 24-week double-blind period will be calculated using a simple linear regression model with the measured ALSFRS-R score as a response variable and the number of days from the treatment start day at each measurement time point as an explanatory variable (a computational sketch of this calculation follows the endpoint list below). The ratio between the treatment groups will be tested for comparison.
(2) Intergroup difference in the change in the ALSFRS-R score (ΔALSFRS-R) during the 24-week double-blind period (from Day 1 to Week 24 of the double-blind period). The ALSFRS-R score will be assessed according to the specified schedule. The difference (ΔALSFRS-R) between the treatment groups in the change in the ALSFRS-R score from the day of the first dose of the study treatment to Week 24 of the double-blind period will be tested for comparison.
(3) Change in the ALSFRS-R score during the continued treatment period (from the assessment at the start to the final assessment of the continued treatment period) and during the overall treatment period (from Day 1 of the double-blind period to the final assessment of the continued treatment period) (ΔALSFRS-R).
(4) Combined Assessment of Function and Survival (CAFS) score [22] during the 24-week double-blind period (from Day 1 to Week 24 of the double-blind period), during the continued treatment period (from the assessment at the start to the final assessment of the continued treatment period), and during the overall treatment period (from Day 1 of the double-blind period to the final assessment of the continued treatment period).
(5) Composite endpoint as a sum of Z-transformed scores on the following items [23]:
- ALSFRS-R sub-score of each domain (bulbar function, limb function, and respiratory function)
- ALS severity classification
- Simple respiratory function test (FEV1, FEV6)
- Detailed respiratory function test (VC, %FVC, FEV1%)
- Manual muscle testing (MMT) score (limb and trunk muscles)
- 1) Quantitative muscle strength (the same muscles as for the MMT assessment should be used)
- 2) Grip strength and pinch strength
- 3) Modified Norris Scale (Bulbar Symptom Score)
- 4) Tongue pressure
- 5) Body weight
- 6) Amount of physical activity and number of steps
- 7) Objective muscle mass determined using computed tomography (CT) of skeletal muscle
- 8) Amyotrophic Lateral Sclerosis Assessment Questionnaire-40 (ALSAQ-40) score (QOL assessment)
Table 4 Study drug tapering protocol for proceeding to the continued treatment period.
(6) Time to death or time to a specified state of disease progression. The time to onset of any of the following events from the day of the first dose of treatment will be assessed: death; inability of independent ambulation; loss of unilateral upper limb function (a); tracheostomy; respiratory support (b); tube feeding (c); loss of vocal conversation (d); and inability of oral administration (e). a) Loss of unilateral upper limb function: a condition where the subject is unable to grip a pen in one hand, as a guide. b) Respiratory support: noninvasive respiratory support during all-day hours (generally, at least 22 h except for meal hours) or invasive respiratory support. c) A condition where "Swallowing" on the ALSFRS-R is scored 0 points: "nothing by mouth (NPO); exclusively parenteral or enteral feeding." d) Loss of vocal conversation: barely able to speak to express emotions, or unable to speak. e) Inability to take oral medications: the disease progresses for reasons other than a) to c) above, rendering the subject incapable of taking the medication orally.
(7) Time to a %FVC of 50%. The length of time until %FVC decreases to 50%, counted from the day of the first dose of the study treatment, will be assessed.
(8) Time to a decrease of at least 6 points in the ALSFRS-R score. The time to a decrease of at least 6 points in the ALSFRS-R score (ΔALSFRS-R) from the measurement on the day of the first dose of the study treatment will be assessed.
(9) Proportion of patients who discontinued the treatment (discontinuation rate) during the period from Day 1 of the double-blind period to the final assessment of the continued treatment period.
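As a reading aid for secondary endpoint (1), the following is a minimal sketch of the slope calculation and the ΔALSFRS-R ratio; the visit days and scores are invented for the example and are not trial data.

```python
# Fit a simple linear regression of ALSFRS-R score on days since the
# start of the period, convert the daily slope to points per 4 weeks,
# and form the ratio of the double-blind slope to the run-in slope.
import numpy as np

def slope_per_4_weeks(days, scores):
    """Least-squares slope (points/day), scaled to points per 28 days."""
    slope_per_day = np.polyfit(days, scores, deg=1)[0]
    return slope_per_day * 28

# Invented measurements: (days from period start, ALSFRS-R scores)
run_in = ([0, 28, 56, 84], [44, 43, 42, 41])           # -1 point per 4 weeks
double_blind = ([0, 28, 56, 84, 112, 140, 168],
                [41, 40.5, 40, 39.5, 39, 38.5, 38])    # -0.5 point per 4 weeks

ratio = slope_per_4_weeks(*double_blind) / slope_per_4_weeks(*run_in)
print(f"Delta ALSFRS-R ratio: {ratio:.2f}")  # 0.50, i.e., slower decline on treatment
```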
Exploratory endpoints
(1) Comparison of the in vitro drug effect evaluation and the clinical effect using patient iPSC-derived motor neurons. Blood samples will be collected from subjects who have provided separate informed consent, and iPSCs will be established at the Department of Physiology, Keio University School of Medicine. These iPSCs will be directed to differentiate into motor neurons to reproduce the pathological conditions of ALS. The cells will then be treated with ropinirole hydrochloride and assessed for a delay in the progression of ALS. The correlation between these results and the change in phenotype of subjects treated with the medication will be examined.
(2) Explorative search for new biomarkers for diagnosis, pathology, and drug effect evaluation. 9) Measurement of biomarkers related to ALS pathology, including TDP-43 and NfL, in blood and spinal fluid: proteins such as TDP-43 and NfL, which are biomarkers related to ALS pathology, in blood and spinal fluid collected from subjects will be measured using single molecule arrays (Simoa™) or an immunomagnetic reduction (IMR) assay. 10) RNA expression analysis before and after treatment with ropinirole hydrochloride: total and micro RNAs will be extracted from exosomes in blood and spinal fluid collected from subjects and analyzed using microarrays or RNA-seq. RNA extracts will be used for network analysis etc. to identify variable factors associated with disease progression and hub genes that may contribute to the therapeutic effect of ropinirole hydrochloride.
(3) Search for known familial ALS genes. Blood samples collected from subjects who have provided informed consent will be transported to the Department of Neurology, Tohoku University School of Medicine, and mutations in known FALS-related genes will be searched for using a targeted resequencing panel for ALS screening.
Zarit Caregiver Burden Interview
This assessment will be conducted for subjects who can be assessed by caregivers. Subjects will designate a caregiver involved in the assessment. Subjects are allowed to designate several caregivers, but it is preferable that the assessment be conducted by the same caregiver as much as possible. Caregivers who are designated as the rater will fill in the Zarit Caregiver Burden Interview (Assessment of Caregivers' Burden), place it in an envelope to keep it out of the subject's sight, and submit it to the investigator.
Target sample size and sample size calculation
Twenty subjects for official registration (15 subjects for the active drug group and 5 subjects for the placebo group). Up to 24 subjects (18 subjects for the active drug group and 6 subjects for the placebo group) can be registered. The target number of subjects enrolled in this study was set at 20, taking feasibility into consideration. Considering the seriousness of the disease, the ratio of subjects treated with the active drug and placebo is 3:1 (15 subjects:5 subjects). A summary of biostatistical considerations related to the safety assessment for the design of this study is given below. In this study, the sample size of the placebo group is limited because of ethical considerations; a comparison between the active drug group and the control group will therefore be made in an explorative manner, and statistical assessment will be conducted mainly for each treatment group. As for the safety assessment, the primary objective of this study, if any clinically significant AE occurs with an incidence of approximately 10% in the active drug group, the scale of this study is sufficient to detect such an AE with an 80% probability. In other words, clinically significant AEs with relatively low incidences can be detected with a certain probability in this study. As for the efficacy assessment, the secondary objective of this study, the change from Day 1 in the ALSFRS-R score during the 24-week double-blind period (exacerbation of symptoms) will be assessed as the primary endpoint. In two past confirmatory studies of edaravone in ALS patients [10,26], the weighted mean change in the ALSFRS-R score at Week 24 in the placebo group (n = 99 and n = 66) was -6.8 points. Assuming that the true value of the change in the ALSFRS-R score and its standard deviation (SD) in the active drug group are -5.5 points (similar to the value in the edaravone group) and 6 points, respectively, the probability that the point estimate of the mean change in the active drug group does not exceed the threshold (-6.8 points) is 80% with a sample size of 15 subjects in the active drug group. The efficacy will be exploratively assessed using the point estimate of the mean score, and information to plan a next-phase clinical study will be collected.
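The two design probabilities quoted above can be checked with a back-of-the-envelope calculation; this sketch simply reproduces the arithmetic and is not the protocol's statistical analysis plan.

```python
from math import erf, sqrt

# Safety: probability of observing at least one case of an AE among
# n = 15 active-drug subjects when the true per-subject incidence is 10%.
n, p = 15, 0.10
print(1 - (1 - p) ** n)  # ~0.79, i.e., roughly the stated 80% detection probability

# Efficacy: probability that the observed mean ALSFRS-R change in the
# active group (true mean -5.5 points, SD 6, n = 15) does not fall below
# the -6.8-point threshold, under a normal approximation.
mu, sd, threshold = -5.5, 6.0, -6.8
z = (threshold - mu) / (sd / sqrt(n))
upper_tail = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # P(sample mean > threshold)
print(upper_tail)  # ~0.80
```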
Seven tablets of the study drug will be packaged in a press-through package (PTP) sheet, and 20 PTP sheets will then be packed in a small box.
Study period
(2) Labeling: The study drug labeling contains information including a statement of "For clinical study use," the study drug code, the manufacturing number, the storage method, the expiry date, and the name, affiliation, title, and address of the sponsor-investigator.
Storage method
The study drug should be stored at room temperature.
Methods of study drug handling, storage, and management
The study drug manager will store and manage the study drug in accordance with the "Procedure for Study Drug Management" prepared by the sponsor-investigator. The study drug manager will dispose of unused study drugs after the end of the study. Use of the study drug is not allowed for purposes other than this study (another clinical study, animal studies, basic experiments, etc.).
Emergency code breaking
If it becomes necessary to urgently identify the study drug for a subject for his/her treatment and safety assurance, the investigator may request the study drug randomization manager to break the emergency code. The detailed procedure for emergency code breaking will be specified in the procedural document separately prepared.
Preparation of the subject screening list
The investigator will prepare a subject screening list, list all subjects who have received an explanation for informed consent, and assign subject identification (ID) codes to subjects who have provided informed consent. The investigator will manage the registration numbers and other information pertaining to the registered subjects (including those who discontinue or suspend treatment).
Registration of subjects
(1) Informed consent to interim registration: The investigator will perform the tests/examinations that are required to assess the eligibility of subjects during the screening period after obtaining informed consent. The investigator will confirm that the subjects satisfy the inclusion and exclusion criteria upon interim registration, and fill in the items that are required for interim registration in the EDC system. Interim registration will take place within 28 days of obtaining informed consent. (2) Official registration: The investigator will perform the tests/examinations and observations for the run-in period that are required for official registration, confirm that the subjects satisfy the inclusion and exclusion criteria upon official registration, and fill in the items that are required for official registration in the EDC system. After official registration, the registration number will be automatically assigned by the EDC system. The investigator will confirm that official registration has been completed and then prescribe the study drug. Study treatment will be started within 15 days after official registration. There must be no concern about administration in the assessment of the test/examination and observation items, general conditions, and vital signs on the day of administration.
Dose and dosage regimen
(2) Criteria for dose adjustment of the study drug: Study treatment will be started at a dose of 2 mg once daily, followed by increases in weekly increments of 2 mg (to a maximum of 16 mg) (Table 3). If side effects (drowsiness, vertigo, dizziness, etc.) that are objectively tolerable but interfere with ADL appear, the same dose will be maintained at the discretion of the investigator, or the dose will be reduced every week until the side effects disappear, with the maintenance dose set at an amount free of such side effects. If the dose has been reduced to 2 mg and side effects that are objectively tolerable but impair ADL are not alleviated, the study drug will be discontinued.
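Table 3 itself is not reproduced above, so the following is only an illustrative sketch of the weekly dose adjustment rule as described; in particular, the 2-mg step-down size is an assumption.

```python
# Weekly dose update: titrate up by 2 mg to a 16 mg cap; on ADL-limiting
# side effects, either hold the dose (investigator's discretion) or step
# down; discontinue if effects persist at the 2 mg floor.
def next_dose(current_mg, adl_limiting_side_effects, hold=False):
    """Return next week's once-daily dose in mg; None means discontinue."""
    if adl_limiting_side_effects:
        if hold:
            return current_mg          # maintain at investigator's discretion
        if current_mg <= 2:
            return None                # cannot reduce below 2 mg: discontinue
        return current_mg - 2          # assumed step-down size
    return min(current_mg + 2, 16)     # otherwise titrate toward the maximum

# Example: an uneventful titration reaches the 16 mg maximum after
# seven weekly increases.
dose = 2
for _ in range(7):
    dose = next_dose(dose, adl_limiting_side_effects=False)
print(dose)  # 16
```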
Subjects who satisfy all of the following criteria are eligible to proceed to the continued treatment period. (1) Subjects are voluntarily willing to receive continued treatment. (2) Subjects do not meet any of the criteria listed in "12.1 Discontinuation Criteria for Subjects." (3) Subjects can receive study treatment for at least 4 weeks after proceeding to the continued treatment period. (4) Subjects can proceed to the continued treatment period in the judgment of the investigator.
Method of proceeding to the continued treatment period
(1) The investigator will explain the details of the continued treatment period to subjects who satisfy "7.4.1 Criteria for Continued Treatment" by Week 24 of the double-blind period, confirm their willingness, and obtain their written informed consent. (2) When proceeding to the continued treatment period, the dose of the study drug will be tapered to 2 mg (it will take 3 weeks in the case of reducing the dose from the maximum of 16 mg) for both the active drug and placebo groups in accordance with the Study Drug Tapering Protocol (Table 4) to maintain blinding. Subsequently, treatment will be started, followed by increases in the dose to a maximum of 16 mg in accordance with the Study Drug Titration Protocol (Table 3). The period of open-label treatment with the active drug will be extended within the range of a maximum of 48 weeks from the first dose of study treatment. In the continued treatment period, subjects who were assigned to receive placebo in the double-blind period will be exposed to the active drug and may thus be at risk of developing AEs. This matter should therefore be adequately explained to the subjects before the start of treatment.
Prohibited concomitant drugs
Use of edaravone is prohibited, regardless of dose and treatment regimen, during the period from interim registration to the end of the study (to the end of continued treatment for subjects who receive continued treatment) or to the time of discontinuation. Although edaravone is an important therapeutic choice for ALS patients, daily administration of edaravone at Keio University Hospital is practically difficult, because visiting our hospital every day would be a heavy burden on subjects living in distant regions of Japan. Edaravone can be administered in local clinics or hospitals; however, we think that 1) the efficacy of ropinirole hydrochloride (the secondary outcome of this trial) might be obscured by edaravone, because edaravone works as a ROS scavenger, which is also one of the possible underlying mechanisms of ropinirole hydrochloride, and 2) management of participants by a single institution, Keio University Hospital, is preferable. Thus, we explain these reasons very carefully and obtain informed consent only from patients who are not receiving edaravone, excluding patients who want to continue edaravone treatment.
Restricted concomitant drugs
Concomitant use of riluzole (brand name: Rilutek Tablets 50 mg or Riluzole Tablets 50 mg "AA") is allowed during the period from obtaining informed consent to the end of the study (to the end of continued treatment for subjects who receive continued treatment) or to the time of discontinuation. Subjects who are not receiving riluzole before providing informed consent are not allowed to start treatment with riluzole after providing informed consent. Use of riluzole is not a requirement.
Descriptions of concomitant drugs and therapies
The investigator or the study collaborator will enter the following information on concomitant drugs and therapies used during the period from obtaining informed consent to the end of the follow-up period or the time of discontinuation into the concomitant drug and therapy pages of the EDC system.
(1) Instructions for administration: The investigator, the study collaborator, or the study drug manager (or the person in charge) will provide subjects with instructions for administration, keeping the following in mind. 1) Subjects must take the drug as instructed by the physician. 2) Subjects must bring unused drugs (including spare drugs) and empty PTP sheets to the subsequent visit.
(2) Instructions for lifestyle: The investigator or the study collaborator will provide subjects with instructions for lifestyle, keeping the following in mind. 1) Subjects must undergo the medical examination and other tests/examinations on the designated days. When a subject cannot make a visit on the scheduled day, he/she must contact the investigator and seek his/her instructions. 2) Subjects must carry the Clinical Study Participation Card with them and present it when receiving medical attention at another hospital or at other departments of this hospital. Subjects who are using drugs prescribed by doctors other than the investigator of this study or drugs purchased at pharmacies are required to inform the investigator or the study collaborator. Subjects who start using an additional drug during the study are also required to contact the investigator or the study collaborator before beginning use. 3) Subjects must try not to modify their lifestyle (daily exercise, meals, etc.) as much as possible. 4) Subjects must contact the study staff if they notice an abnormal condition in their body. 5) Subjects must use an effective form of birth control (e.g., condom, pill, diaphragm, intrauterine device (IUD), implantable contraceptive, spermicide) during the study period if they are sexually active. 6) Subjects must not engage in potentially hazardous activities, including car driving, machine operation, or working in high places.
(3) Instructions on how to fill in the dosing diary: The investigator or the study collaborator will distribute a rainy weather information form at the start of the run-in period and a dosing diary at the start of study treatment to the subjects. At this time, the investigator will explain how to fill in the diary and instruct them to fill in the diary every day during the run-in period and the study drug treatment period. The investigator will also instruct them to record rainy weather information. In addition, if the upper limb function of a subject deteriorates and writing becomes difficult, a substitute can write for the subject. In the case of an allograph, the investigator or the study collaborator will instruct the subjects to indicate where the entry was written by the substitute and to write the substitute's name and relationship with the subject in the margin of the dosing diary.
(4) Contacting another attending doctor by subjects: The investigator will check whether the subject is receiving medical attention other than that in this study. If the subject is receiving medical attention from another physician, the investigator will contact the relevant physician, with the subject's consent, to inform the physician that the subject is participating in the study. In addition, the investigator or the study collaborator will issue the Clinical Study Participation Card etc.
to the subjects and instruct them to present it at another hospital or at other departments of this hospital to inform other physicians that they are participating in the study. The following subject characteristics will be investigated during the screening period: age (date of birth), date of informed consent, gender, race, and the presence or absence of allergies (drug, food, and others).
Medical history and concomitant diseases
Medical history (previous diseases, including history of surgery, in the past 5 years, in principle; a definite time frame will not be established for the history of cancers and other diseases that may affect the assessment in this study, in the judgment of the investigator etc.) and concomitant diseases will be investigated at 12 weeks after interim registration. Events that occur during the investigation, from 12 weeks after interim registration to the day preceding the first dose of study treatment, will be handled as follows: 11) events that have resolved before the day of the first dose of study treatment will be handled as previous diseases; 12) events that persist on the day of the first dose of study treatment will be handled as concomitant diseases.
Investigations of concomitant drugs and therapies
Concomitant drugs and therapies that are used during the period from obtaining informed consent to the end or discontinuation of observation will be investigated for the following items. (1) Concomitant drugs: name of drug, dose, delivery route, start date, end date, and reason for use. (2) Concomitant therapies: name/content of therapy, start date, end date, and reason for use.
Investigation of the primary disease
The primary disease will be investigated for the following items during the screening period: classification of ALS (sporadic, familial), previous treatment, time of onset, criteria for the diagnosis of ALS (El Escorial revised, World Federation of Neurology), ALS Severity Classification (Specific Disease Research Survey, Ministry of Health, Labour and Welfare, January 1, 2007), family history (second-degree relatives), and initial symptoms (bulbar paralysis, upper limb symptoms, lower limb symptoms, respiratory muscle paralysis).
Height
Height will be measured during the screening period.
8.1.6. Body weight
Body weight will be measured during the screening period, at 12 weeks after interim registration, before the start of the first dose of the study treatment (from 3 days before to the day of the first dose), before study drug administration at Weeks 5, 9, 13, 17, 21, and 24 after the start of study treatment, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, body weight will be measured before study drug administration at each of the following time points, in addition to the above time points. Measurement will also be conducted, when possible, at the time of discontinuation, even after the continued treatment period. Study treatment compliance will be investigated for the following items by checking the dosing diary filled out by the subjects during the treatment period: date of administration, dose, and time of the final administration before each visit day.
General conditions
General conditions (physical findings) will be examined during the period from the start of the screening period to the end or discontinuation of observation.
Vital signs
(1) Blood pressure, body temperature, and pulse rate: Measurement will be performed under the same conditions throughout the study period.
Blood pressure, body temperature, and pulse rate will be measured during the screening period, at 12 weeks after interim registration, before the start of the first dose of the study treatment (from 3 days before to the day of the first dose), on the day following the first dose of the study treatment, before study drug administration at Weeks 2, 3, 5, 9, 13, 17, 21, and 24 after the start of study treatment, during the follow-up period, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, blood pressure, body temperature, and pulse rate will be measured before study drug administration at each of the following time points, in addition to the above time points and during the follow-up period. Measurement will also be conducted, when possible, at the time of discontinuation, even after the continued treatment period.
(2) Respiratory rate: Respiratory rate will be measured during the screening period, at 12 weeks after interim registration, before the start of the first dose of the study treatment (from 3 days before to the day of the first dose), on the day following the first dose of the study treatment, before study drug administration at Weeks 5, 9, 13, 17, 21, and 24 after the start of study treatment, during the follow-up period, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, respiratory rate will be measured before study drug administration at each of the following time points, in addition to the above time points and during the follow-up period. Measurement will also be conducted, when possible, at the time of discontinuation, even after the continued treatment period. [Dose at the end of the double-blind period: 2 mg-4 mg] Weeks
Twelve-lead electrocardiography (ECG) will be performed during the screening period, at 12 weeks after interim registration, before study drug administration at Week 24 after the start of the study treatment, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, 12-lead ECG will be performed before study drug administration at each of the following time points, in addition to the above time points. Measurement will also be conducted, when possible, at the time of discontinuation, even after the continued treatment period. [Dose at the end of the double-blind period: 2 mg-4 mg] Week 46 after the start of the study treatment, or the time of termination. [Dose at the end of the double-blind period: 6 mg] Weeks 26 and 47 after the start of the study treatment, or the time of termination. [Dose at the end of the double-blind period: 8 mg-16 mg] Weeks 27 and 48 after the start of the study treatment, or the time of termination.
Screening for infections
Screening for infections will be performed using serum samples during the screening period and at 12 weeks after interim registration: HTLV-1 antibody test, HIV antibody test, HBs antigen test, HCV antibody test, and TPHA (only during the screening period).
Conventional laboratory tests
Blood and urine samples will be collected during the screening period, at 12 weeks after interim registration, on the day following the first dose of the study treatment, before study drug administration at Weeks 5, 9, 13, 17, 21, and 24 after the start of study treatment, during the follow-up period, and at the time of discontinuation (when possible).
For subjects who proceed to the continued treatment period, blood and urine samples will be collected before study drug administration at each of the following time points and at the follow-up period, in addition to the above time points. Blood and urine samples will also be collected, when possible, at the time of discontinuation, even after the continued treatment period.

Specific laboratory tests
Blood samples will be collected before the start of the first dose of the study treatment (3 days before to the day of the first dose), before study drug administration at Week 24 after the start of study treatment, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, blood samples will be collected before study drug administration at the following time points, in addition to the above time points. Blood samples will also be collected, when possible, at the time of discontinuation, even after the continued treatment period.
[Dose at the end of the double-blind period: 2 mg–4 mg] Week 46 after the start of the study treatment, or the time of termination
[Dose at the end of the double-blind period: 6 mg] Week 47 after the start of the study treatment, or the time of termination
[Dose at the end of the double-blind period: 8 mg–16 mg] Week 48 after the start of the study treatment, or the time of termination
[Tests]
(1) Blood biochemistry: four fractions of fatty acids
(2) Urinalysis: 8-OHdG (CRE-corrected)

8.1.14. Blood ropinirole concentrations
Blood samples will be collected at 12 weeks after interim registration, at Week 2 after the start of the study treatment, at visits in the week following a dose increase, before study drug administration at Weeks 13 and 24, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, blood samples will be collected before study drug administration in the week following a dose increase after the start of study treatment and at each of the following time points, in addition to the above time points. Blood samples will also be collected, when possible, at the time of discontinuation, even after the continued treatment period.
[Dose at the end of the double-blind period: 2 mg–4 mg] Weeks

Subjects of childbearing potential will be tested for pregnancy by the urine human chorionic gonadotropin (HCG) test during the screening period. The presence of pregnancy will also be confirmed by the serum HCG test at 12 weeks after interim registration. The presence of pregnancy will be further confirmed by the urine HCG test at the time of discontinuation (when possible). For subjects of childbearing potential who do not proceed to the continued treatment period, the urine HCG test will be performed at Week 24, in addition to the above time points. For subjects of childbearing potential who proceed to the continued treatment period, the urine HCG test will be performed at the following time points, in addition to the above time points. The test will also be performed, when possible, at the time of discontinuation, even after the continued treatment period.
[Dose at the end of the double-blind period: 2 mg–4 mg] Week 46 after the start of the study treatment, or the time of termination
[Dose at the end of the double-blind period: 6 mg] Week 47 after the start of the study treatment, or the time of termination
[Dose at the end of the double-blind period: 8 mg–16 mg] Week 48 after the start of the study treatment, or the time of termination
Both urine and serum HCG tests that are specific for the beta subunit (HCG-β) will be used in this study. A pregnancy test is not required for men, surgically sterile women, hysterectomized or bilaterally ovariectomized women, and women with at least 1 year elapsing after their last menstruation, because the possibility of pregnancy can be ruled out in these subjects.

Cerebrospinal fluid (CSF) test
The CSF test will be performed using lumbar puncture before the start of the first dose of the study treatment (3 days before to the day of the first dose), before study drug administration at Week 24 after the start of the study treatment, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, the CSF test will be performed before study drug administration at each of the following time points, in addition to the above time points. The test will also be performed, when possible, at the time of discontinuation, even after the continued treatment period.
[Dose at the end of the double-blind period: 2 mg–4 mg] Week 46 after the start of the study treatment, or the time of termination
[Dose at the end of the double-blind period: 6 mg] Week 47 after the start of the study treatment, or the time of termination
[Dose at the end of the double-blind period: 8 mg–16 mg] Week 48 after the start of the study treatment, or the time of termination
[Tests] CSF pressure, appearance, cell count, quantitative protein, albumin, quantitative glucose, LDH, Cl, IgG, CRP, hypersensitive CRP, and 8-OHdG

Ropinirole concentration in CSF
CSF samples will be collected using lumbar puncture before the start of the first dose of study treatment (3 days before to the day of the first dose), before study drug administration at Week 24 after the start of the study treatment, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, CSF samples will be collected before study drug administration at each of the following time points, in addition to the above time points. CSF samples will also be collected, when possible, at the time of discontinuation, even after the continued treatment period.
[Dose at the end of the double-blind period: 2 mg–4 mg] Week 46 after the start of the study treatment, or the time of termination
[Dose at the end of the double-blind period: 6 mg] Week 47 after the start of the study treatment, or the time of termination
[Dose at the end of the double-blind period: 8 mg–16 mg] Week 48 after the start of the study treatment, or the time of termination

ALS Functional Rating Scale-Revised
Assessment by ALSFRS-R will be conducted during the screening period (within 7 days before interim registration), at 4, 8, and 12 weeks after interim registration, before the start of the first dose of study treatment (3 days before to the day of the first dose), before study drug administration at Weeks 5, 9, 13, 17, 21, and 24 after the start of treatment, and at the time of discontinuation (when possible).
For subjects who proceed to the continued treatment period, the assessment by ALSFRS-R will be conducted before study drug administration at each of the following time points, in addition to the above time points. The assessment will also be conducted, when possible, at the time of discontinuation, even after the continued treatment period.

Assessment by the ALS Severity Classification (Specific Disease Research Survey, Ministry of Health, Labour and Welfare, Japan) will be conducted during the screening period, at 12 weeks after interim registration, before the start of the first dose of study treatment (3 days before to the day of the first dose), before study drug administration at Weeks 5, 9, 13, 17, 21, and 24 after the start of treatment, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, the assessment by ALS severity classification will be conducted before study drug administration at each of the following time points, in addition to the above time points. The assessment will also be conducted, when possible, at the time of discontinuation, even after the continued treatment period.
[Dose at the end of the double-blind period: 2 mg–4 mg] Weeks

The simple respiratory function test will be performed during the screening period, at 12 weeks after interim registration, before the start of the first dose of study treatment (3 days before to the day of the first dose), before study drug administration at Weeks 5, 9, 13, 17, 21, and 24 after the start of treatment, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, the simple respiratory function test will be performed before study drug administration at each of the following time points, in addition to the above time points. The test will also be performed, when possible, at the time of discontinuation, even after the continued treatment period.

The detailed respiratory function test will be conducted during the screening period, at 12 weeks after interim registration, before study drug administration at Weeks 13 and 24 after the start of treatment, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, the detailed respiratory function test will be performed before study drug administration at each of the following time points, in addition to the above time points. The test will also be performed, when possible, at the time of discontinuation, even after the continued treatment period.
[Dose at the end of the double-blind period: 2 mg–4 mg] Weeks

Blood gas analysis (PaCO2, PaO2, pH, and HCO3−) will be performed if %FVC is 50% or less after the start of the run-in period. In addition, a specified prediction formula will be used for the "forced vital capacity prediction value" when calculating %FVC.

Assessment by the ALSAQ-40 [10,27,28] will be conducted at 12 weeks after interim registration, before the start of the first dose of study treatment (3 days before to the day of the first dose), before study drug administration at Weeks 5, 9, 13, 17, 21, and 24 after the start of treatment, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, the assessment by ALSAQ-40 will be conducted before study drug administration at each of the following time points, in addition to the above time points. The assessment will also be conducted, when possible, at the time of discontinuation, even after the continued treatment period.

Muscle strength will be quantitatively determined using an instrument for measuring muscle strength [29].
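For reference on the respiratory function criterion above, %FVC is conventionally the measured forced vital capacity expressed as a percentage of a demographically predicted value; the protocol's own prediction equation for the predicted value may differ from any particular published reference formula and is therefore not reproduced here:

%FVC = (measured FVC / predicted FVC) × 100

On this conventional definition, the blood gas trigger above corresponds to a measured FVC at or below half of the predicted value.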
Neurological assessment will be conducted at 12 weeks after interim registration, before study drug administration at Weeks 5, 9, 13, 17, 21, and 24 after the start of the study treatment, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, the neurological assessment will be conducted before study drug administration at each of the following time points, in addition to the above time points. The assessment will also be conducted, when possible, at the time of discontinuation, even after the continued treatment period.

Tongue pressure will be quantitatively determined using an instrument for measuring tongue pressure, in addition to the Modified Norris Scale (bulbar symptom score) [30,31]. Assessment by the Modified Norris Scale (bulbar symptom score) will be conducted during the screening period, at 12 weeks after interim registration, before study drug administration at Weeks 5, 9, 13, 17, 21, and 24 after the start of treatment, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, the assessment by the Modified Norris Scale (bulbar symptom score) will be conducted before study drug administration at each of the following time points, in addition to the above time points. The assessment will also be conducted, when possible, at the time of discontinuation, even after the continued treatment period.

Amount of physical activity and number of steps
The amount of physical activity and the number of steps in daily living will be quantitatively determined using the Active style Pro manufactured and distributed by OMRON Corporation [32]. Prior to the assessment, the rater will check the rainy weather information form and the dosing diary filled out by subjects. The data will be confirmed at 4, 8, and 12 weeks after interim registration, before study drug administration at Weeks 5, 9, 13, 17, 21, and 24 after the start of study treatment, during the follow-up period, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, the data will be confirmed before study drug administration at each of the following time points and during the follow-up period, in addition to the above time points. The data will also be confirmed, when possible, at the time of discontinuation, even after the continued treatment period.

Confirmation of death, inability of independent ambulation, loss of unilateral upper limb function, tracheostomy, respiratory support, tube feeding, loss of vocal conversation, and inability to take oral medications
Whether any of the defined events are present will be confirmed before the start of the first dose of study treatment (3 days before to the day of the first dose), and during the period from the day of the first dose to the end of observation or the time of discontinuation. Loss of unilateral upper limb function, respiratory support, tube feeding, loss of vocal conversation, and inability to take oral medications are defined as the conditions shown below.
(1) Loss of unilateral upper limb function: a condition where the subject is unable to grip a pen in one hand, as a guide.
(2) Respiratory support: noninvasive respiratory support during all-day hours (generally, at least 22 h except for meal hours) or invasive respiratory support.
(3) Tube feeding: a condition where "Swallowing" on ALSFRS-R is scored 0 points: "nothing by mouth (NPO); exclusively parenteral or enteral feeding."
(4) Loss of vocal conversation: barely able to speak to express emotions, or unable to speak.
(5) Inability to take oral medications: the subject's condition deteriorates for reasons other than (1) to (3) and oral administration becomes impossible.

Skeletal muscle computed tomography
Skeletal muscle CT scanning will be performed during the screening period, at Week 1 after the start of study treatment (3 days before the first dose to Day 7 after the start of treatment), before study drug administration at Week 24, and at the time of discontinuation (when possible).

Exploratory endpoints
(1) Comparison of the in vitro drug effect evaluation and clinical effect using subjects' iPSC-derived neurons
Blood samples will be collected from subjects who have provided informed consent for iPSC production during the period from after interim registration to before the start of the first dose of study treatment (3 days before to the day of the first dose). Motor neurons will be induced from iPSCs produced from peripheral blood cells. The motor neurons will be analyzed for (1) confirmation that the ALS pathology has been represented; (2) assessment of disease improvement after treatment with ropinirole hydrochloride; (3) exploration of the action mechanism of ropinirole hydrochloride; and (4) comparison with clinical outcomes [19].
(2) Explorative search for new biomarkers for diagnosis, pathology, and drug effect evaluation
1) Testing of TDP-43, NfL, etc. in blood and CSF
Blood samples will be collected at 12 weeks after interim registration, before the start of the first dose of the investigational drug (3 days before to the day of administration of the first dose), and at Weeks 13 and 24 after the start of the study treatment. CSF samples will also be collected at 12 weeks after interim registration, before the start of the first dose of the investigational drug (3 days before to the day of administration of the first dose), and at Week 24 after the start of the study treatment. Blood and CSF samples will be collected, when possible, at the time of discontinuation. For subjects who proceed to the continued treatment period, blood and CSF samples will be collected at each of the following time points, in addition to the above time points. Blood and CSF samples will also be collected, when possible, at the time of discontinuation, even after the continued treatment period.
[Dose at the end of the double-blind period: 2 mg–4 mg] Weeks 35 (blood) and 46 after the start of the study treatment, or the time of termination (blood and CSF)
[Dose at the end of the double-blind period: 6 mg] Weeks 36 (blood) and 47 after the start of the study treatment, or the time of termination (blood and CSF)
[Dose at the end of the double-blind period: 8 mg–16 mg] Weeks 37 (blood) and 48 after the start of the study treatment, or the time of termination (blood and CSF)
2) RNA expression analysis in blood and CSF
Blood samples will be collected from subjects who have provided informed consent for RNA expression analysis at 12 weeks after interim registration, before the start of the first dose of the investigational drug (3 days before to the day of administration of the first dose), and at Weeks 13 and 24 after the start of the study treatment. CSF samples will be collected at 12 weeks after interim registration, before the start of the first dose of the investigational drug (3 days before to the day of administration of the first dose), and at Week 24 after the start of the study treatment.
Blood and CSF samples will be collected, when possible, at the time of discontinuation. For subjects who proceed to the continued treatment period, blood and CSF samples will be collected at each of the following time points, in addition to the above time points. Blood and CSF samples will also be collected, when possible, at the time of discontinuation, even after the continued treatment period.
[Dose at the end of the double-blind period: 2 mg–4 mg] Weeks 35 (blood) and 46 after the start of the study treatment, or the time of termination (blood and CSF)
[Dose at the end of the double-blind period: 6 mg] Weeks 36 (blood) and 47 after the start of the study treatment, or the time of termination (blood and CSF)
[Dose at the end of the double-blind period: 8 mg–16 mg] Weeks 37 (blood) and 48 after the start of the study treatment, or the time of termination (blood and CSF)
(3) Examination of known familial ALS genes
Blood samples will be collected from subjects who have provided informed consent for the gene test at 12 weeks after interim registration.
(4) Zarit Caregiver Burden Interview
Assessment by the Zarit Caregiver Burden Interview [24,25] will be conducted for subjects who can be assessed by caregivers who have provided informed consent for the assessment, at 12 weeks after interim registration, before the start of the first dose of the study treatment (3 days before to the day of the first dose), at Weeks 5, 9, 13, 17, 21, and 24 after the start of the study treatment, and at the time of discontinuation (when possible). For subjects who proceed to the continued treatment period, the assessment by the Zarit Caregiver Burden Interview will be conducted at each of the following time points, in addition to the above time points. The assessment will also be conducted, when possible, at the time of discontinuation, even after the continued treatment period.

Adverse events
An AE is any unfavorable and unintended sign (including a laboratory abnormality), symptom, disease, or disorder in a subject administered a study drug, whether or not it is related to the study drug. Overdosage and improper use of the study drug are also handled as AEs. Events that occur during the period from the day of the first dose of study treatment to the end of the follow-up period (within 28 days after the end of treatment) are handled as AEs. The following events are not handled as AEs:
- Weight loss, muscular weakness, arthralgia, myalgia, motor disorder, dyslalia, respiratory disorder, dysphagia, cognitive dysfunction, anxiety disorder, and depression symptoms, which are considered to be symptoms resulting from an exacerbation of the primary disease in the judgment of the investigator
- Procedures performed only for the purpose of testing (e.g., endoscopy)
- Modification of concomitant diseases that is considered to be within the predictable/foreseeable range by the investigator
- Progression of the primary disease that is considered to be within the predictable/foreseeable range
- Hospitalization in the absence of any unfavorable medical occurrence (e.g., protocol-specified hospitalization, hospitalization already scheduled at the time of obtaining informed consent, hospitalization for non-medical but social reasons, hospitalization to enhance the convenience of visits for treatment/testing, etc.)
Changes in laboratory test values that meet any of the following are handled as AEs:
(1) If any action for study treatment (cessation, discontinuation) becomes necessary owing to a change in laboratory test values
(2) If use of any drug or procedure for treatment becomes necessary owing to a change in laboratory test values
(3) If any surgical intervention has been introduced owing to a change in laboratory test values
(4) If none of the above applies but the change in laboratory test values is an event of medical concern in the judgment of the investigator, etc.

Action taken for adverse events
If an AE occurs, the investigator should perform appropriate action(s) or treatment. Reported AEs should be followed until the following conditions are attained; AEs that are persisting at the scheduled end day of observation will be followed in the same manner:
- The AE has resolved or is resolving (or stable)
- In the case of sequela(e), the symptom has become fixed
- The AE has been followed adequately and further follow-up is no longer necessary in the judgment of the investigator
The investigator etc. will enter the details of all reported AEs, including the date of onset, severity, causal relationship with the study drug, presence/absence of treatment and, if any, its content, and outcome, in the EDC system. The investigator may seek the opinion of the Independent Data Monitoring Committee about these AEs.

Definition of serious adverse events
A serious adverse event (SAE) is any reported AE that:
1) results in death or is life-threatening,
2) requires inpatient hospitalization or prolongation of existing hospitalization (except for protocol-specified hospitalization, hospitalization already scheduled at the time of obtaining informed consent, hospitalization for non-medical but social reasons, hospitalization to enhance the convenience of visits for treatment/testing, etc.),
3) results in disability/incapacity,
4) may result in disability/incapacity,
5) is comparable in seriousness to the above 1) to 4), or
6) is a congenital anomaly/birth defect in the next generation.

Reporting of serious adverse events
If information on an SAE that occurs during the period from the day of informed consent to the end of the follow-up period (within 28 days after the end of treatment) is obtained, the investigator will immediately report it to the head of the study site and the study drug supplier. Upon receiving a request for further necessary information from the study drug supplier, the head of the study site, or the IRB, the investigator should respond to it. If it becomes necessary to break the emergency code, the investigator will carry out code breaking in accordance with the procedural document. The investigator will report all SAEs to the Independent Data Monitoring Committee. If the relevant SAE needs to be reported immediately to the regulatory authorities, the investigator will report it within the timeframe specified by the regulatory authorities according to the content of the SAE.

Assessment of severity of adverse events
The investigator will assess the severity of all AEs as mild, moderate, or severe using a 3-grade rating scale based on the grades specified in the "Criteria for Seriousness Classification of ADRs, etc. [Notification No. 80 of the Safety Division, Pharmaceutical Affairs Bureau (PAB), dated June 29, 1992]." For events that are not listed in the "Criteria for Seriousness Classification of ADRs, etc. [Notification No. 80 of the Safety Division, Pharmaceutical Affairs Bureau (PAB), dated June 29, 1992]," the investigator will determine the severity by referring to the following criteria:
1) Mild: easily tolerable without intervention;
2) Moderate: requires intervention but does not preclude post-treatment tests/examinations or observations; or
3) Severe: severely interferes with the activities of daily living (ADL).
If the severity of an AE changes during the study period, the highest grade observed during the period will be entered in the EDC system.

Assessment of causal relationship
The investigator will assess the causal relationship with the study drug for all AEs in accordance with the following categories.
Related: The AE resolves after discontinuation of treatment, the AE recurs after resumption of treatment, a statement that the AE could be related to the study drug is provided in the investigator's brochure, there is no confounding risk factor, the AE is consistent with the amount and/or duration of exposure, the potential relationship with concomitant disease(s), etc. is ruled out, etc.
Unrelated: A reasonable causal relationship between the study drug and the AE is unlikely.

Assessment of outcome
The outcome of AEs will be assessed on the following 6-grade rating scale.

Significant adverse events
Non-specific significant AEs are not defined in this study.

Action taken in the case of pregnancy
Investigators will explain at the start of the study that subjects should immediately inform the investigator if any sign of pregnancy is found due to a failure of birth control, e.g., a delayed period for female subjects or male subjects' partners. If a female subject or a male subject's partner is suspected of being pregnant, the investigator should not provide the study treatment until the potential pregnancy is ruled out based on a pregnancy test result. If a female subject or a male subject's partner is found to be pregnant, the investigator will discontinue the study for the relevant subject and identify the type of study drug by breaking the code of the study drug. If the drug administered to the subject is the active drug, the investigator will immediately report the matter in writing to the head of the study site and the study drug supplier. The investigator will follow the relevant subject until completion of the delivery or pregnancy. Pregnancy-related SAEs (miscarriage, abortion, birth defect/congenital anomaly) will be handled in accordance with the same procedures as those in "8.3.3 Reporting of Serious Adverse Events."

Independent Data Monitoring Committee
The investigator may seek the opinion of the Independent Data Monitoring Committee about the study progress and the evaluation of safety data, as well as efficacy data, if necessary. Even in this case, the sponsor-investigator is responsible for the final decision-making. The responsibilities of the Independent Data Monitoring Committee are shown below.

Safety monitoring
(1) The Committee will examine the details of SAEs reported in this study and conduct risk assessment for the study. The Committee will recommend whether to further continue the study, as well as protocol revisions, including a change in the inclusion criteria to reduce the risk of AEs, as appropriate.
(2) For SAEs that are difficult to differentiate from exacerbation of the primary disease, among those related to the events defined as "death or a specified state of disease progression" assessed as a secondary endpoint, the Committee will assess the risk that these events were caused by the active drug.

Monitoring of the implementation status of the study
Data related to the implementation status of the study will be monitored to guarantee the quality of this study. The data include the status of subject registration, validity of study subjects, status of withdrawals/dropouts, and protocol compliance status.

Risks
(1) As with other dopamine receptor agonists, sudden onset of sleep and somnolence occurring during activities of daily living (ADL), e.g., when driving, have been reported in patients receiving this product or the ropinirole hydrochloride tablet. Some of these events were associated with accidents. In addition, some patients who experienced sudden onset of sleep had no warning symptoms, such as somnolence, beforehand, or experienced such events for the first time after 1 or more years had elapsed from the start of treatment with this product.
(2) Psychiatric symptoms such as hallucinations and delusions are considered to be associated with excessive dopamine receptor stimulation [33]. Treatment with dopamine receptor agonists, including this product, may potentially exacerbate these psychiatric symptoms.
(3) Dopamine D2 receptor agonists, including this product, may cause a decrease in heart rate through inhibition of norepinephrine release from peripheral nerve endings [34].
(4) In a study in the UK, the pharmacokinetics of this product was compared in patients with Parkinson's disease who were divided into three age groups: <65 years, 65–75 years, and >75 years. Oral clearance (CL/F) decreased with increasing age, with a prolonged elimination half-life (T1/2) observed [35]. In a Japanese clinical study, the incidence of psychiatric symptoms, including hallucination, was reported to be higher in older adult patients (≥65 years) than in younger patients (<65 years).

Benefits
The therapeutic effect of this product for ALS has been confirmed in in vitro models. However, whether this product is effective in human patients with ALS will be exploratively assessed for the first time in this study. This study therefore does not guarantee a therapeutic effect of this product in treating ALS patients. Nevertheless, considering that there is currently no truly effective established approach to the treatment of ALS, it is deemed quite meaningful to assess the safety, tolerability, and efficacy of this product.

11. Discontinuation criteria for subjects and the procedure
Discontinuation criteria for subjects
If any of the following apply, the study will be discontinued for the subject.
(1) Subject's request for discontinuation of the study
(2) Subject's withdrawal of consent. Subjects who withdraw informed consent for participation in the study while participating in the study will be handled as discontinued subjects.
(3) Unable to start the first dose of the study drug.
(4) Marked decrease in respiratory function. If any of the following apply, the study will be discontinued:
- %FVC is ≤50% and PaCO2 in blood is ≥50 mmHg
- Tracheostomy is performed
- Noninvasive respiratory support is required during all-day hours (generally, at least 22 h except for meal hours) or invasive respiratory support becomes necessary
This refers to the subject being unable to visit the hospital to receive the study drug and/or undergo observations due to his/her death or significant progression of disease.
(7) Unable to visit the hospital twice in a row during the double-blind period, or a total of five times during the study. These subjects are deemed inappropriate for efficacy assessment and are therefore handled as discontinued subjects. If a similar situation occurs during the continued treatment period, the study for the subject should not be discontinued immediately; if the subject continues to be unable to visit the hospital, the investigator should then determine whether to continue the study for the subject.
(8) If the study drug assigned to a subject is identified by emergency code breaking, the subsequent treatment of the subject should be discontinued.
(9) Discontinuation based on the decision of the investigator etc. If any of the following apply, the investigator etc. may discontinue the study:
1) The investigator etc. determine that continued study is inappropriate as a result of safety assessment of the study drug based on the subject's clinical symptoms, laboratory test values, vital signs, ECGs, etc.
2) The investigator etc. determine that the subject is unable to comply with the protocol.
3) The investigator determines that the subject is ineligible for the study because he/she is found not to satisfy the inclusion or exclusion criteria after interim or official registration, or for other reasons.
4) Other circumstances in which the study should be discontinued in the judgment of the investigator etc.
Upon deciding to discontinue the study, the investigator will provide the subject with an explanation about the discontinuation, the reason for it, and the required tests/examinations, treatment, etc., and will then proceed with these tests/examinations, treatment, etc. This does not apply to cases where the subject withdraws consent for these tests/examinations etc.

Procedure for discontinuation
If the study for a subject is discontinued after the start of study treatment, the investigator will take appropriate measures for the relevant subject. The investigator will perform the tests/examinations and observations scheduled at the time of discontinuation, when possible. The assessment at the time of discontinuation will be performed within 12 weeks of discontinuation. Rapid dose reduction or discontinuation of this drug can cause malignant syndrome, with symptoms such as high-grade fever, consciousness disturbance, hypermyotonia, dyskinesia, and shock. Therefore, the investigator should consider the condition of the subject at the time of discontinuation and determine the need for gradually tapering the dose. If the investigator performs gradual tapering, the "Drug Tapering Protocol (Table 4)" will be followed. The investigator will enter information relevant to the discontinuation, including the date, reason, details, background information, and action taken, in the EDC system. If the study is discontinued because of AEs, the name of the AE leading to discontinuation will be entered on the discontinuation page of the EDC system. The date of discontinuation is defined as the day when the assessment of discontinuation is performed; if this assessment cannot be conducted, the day on which discontinuation is determined is used as the date of discontinuation.
Subjects who have not undergone the observations and tests/examinations scheduled at the time of discontinuation, or who have no visit scheduled after discontinuation, will be followed by letter (mail) or phone to collect information on the reason, subsequent course, etc. The collected information will be entered on the discontinuation page of the EDC system. The investigator or the study collaborator will make every effort to collect the dosing records of subjects who have no visit scheduled after discontinuation, by mail and other means.

Statistical analysis
Details of the statistical analysis will be documented in the separately prepared Statistical Analysis Plan. The Statistical Analysis Plan will be finalized by the time of code breaking.

(1) Efficacy
The following two analysis sets are defined in this study, and analysis will be performed in each analysis set.
1) Full Analysis Set (FAS)
The full analysis set (FAS) is based on the intention-to-treat (ITT) principle. The FAS is a subset of all subjects enrolled in the study but excludes the subjects listed below:
- Subjects in violation of the eligibility criteria (subjects who failed to satisfy major registration criteria for this study)
- Subjects who have not received any dose of the study drug
- Subjects who have no data at baseline or during the treatment period
- Subjects who withdrew informed consent in the course of the study and refused the use of all of their data
2) Per Protocol Set (PPS)
The per protocol set (PPS) is a subset of subjects who are included in the efficacy assessment according to the criteria for case handling prepared before data lock. For endpoints that are measured over time, the case and data inclusion/exclusion criteria will be prepared for each time point. The FAS is the primary analysis set for efficacy assessment in this study. Statistical analysis in the PPS will be performed only on the change from baseline in ALSFRS-R score at Week 24, which is an important secondary endpoint.
(2) Safety
The safety analysis set is the subset excluding the following subjects from all subjects included in this study:
- Subjects who have not received any investigational drug
- Subjects who have withdrawn consent during this study and refused the use of all of their data

Data handling
(1) Imputation of missing data
Imputation of missing data will be performed for efficacy endpoints. Details of the imputation method and the items for imputation will be documented in the separately prepared Statistical Analysis Plan.
(2) Case handling criteria
The case handling criteria will be determined before data lock.

Baseline patient characteristics
Summary statistics of patient characteristics (age, sex, body weight, etc.) and baseline characteristics will be calculated by treatment group.

Statistical analysis of efficacy
The change from baseline (Day 1) in ALSFRS-R score at Week 24 will be analyzed as an important secondary efficacy endpoint. Summary statistics of the measured value and the change from baseline, and the two-sided 95% confidence interval (CI), will be calculated by treatment group. The null hypothesis that the change from baseline at each time point is 0 will be tested by treatment group using a one-sample t-test. The least squares means will be compared between the treatment groups using contrasts in an analysis of covariance (ANCOVA) model with the baseline value as a covariate. The least squares mean difference and the two-sided 95% CI will be calculated.
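A minimal sketch of the efficacy analysis just described, under stated assumptions: the data frame, its column names ("group", "baseline", "change"), and the values are hypothetical, and this is not the trial's actual analysis code, which follows the separately prepared Statistical Analysis Plan.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic stand-in data: one row per subject.
rng = np.random.default_rng(0)
n = 40
df = pd.DataFrame({
    "group": ["active"] * (n // 2) + ["placebo"] * (n // 2),
    "baseline": rng.normal(38.0, 4.0, n),   # ALSFRS-R total score at Day 1
})
df["change"] = rng.normal(-4.0, 3.0, n)     # change from baseline at Week 24

# One-sample t-test per group: H0 that the mean change from baseline is 0.
for name, g in df.groupby("group"):
    res = stats.ttest_1samp(g["change"], popmean=0.0)
    print(name, res.statistic, res.pvalue)

# ANCOVA: change modeled by treatment group with baseline as a covariate.
# The group coefficient is the least squares mean difference between groups.
model = smf.ols("change ~ C(group) + baseline", data=df).fit()
print(model.params["C(group)[T.placebo]"])  # LS mean difference
print(model.conf_int(alpha=0.05))           # two-sided 95% CIs
```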
For the secondary endpoints listed below, continuous data will be analyzed in the same manner as the change in ALSFRS-R score. If a significant deviation is found in the distribution of the dependent variables, a non-parametric approach will be used. For binary data, the point estimate of the ratio will be calculated by treatment group, and the two-sided 95% CI for the ratio will be calculated using the Clopper–Pearson method. The two-sided 95% CI for the difference in the ratio between the treatment groups will also be calculated using the normal approximation method. For survival time data, Kaplan–Meier plots will be generated and the survival function will then be estimated. The number and proportion of subjects at each maintenance dose (a maximum of 16 mg) will be summarized by treatment group.

Secondary endpoints
(1) Ratio of change in ALSFRS-R score every 4 weeks between pre-treatment and post-treatment assessments.
Furthermore, analysis will be performed on the following exploratory endpoints:
(1) The number of AEs and ADRs and the number of subjects with AEs and ADRs will be tabulated by treatment group, and the two-sided 95% CI for the incidence will be calculated using the Clopper–Pearson method.

Laboratory test values and vital signs
For continuous safety variables, summary statistics of the measured value at each time point and the change from baseline, and the two-sided 95% CI, will be calculated by treatment group. For discrete variables, a cross table of data at baseline and at each time point will be prepared.

Level of significance and multiplicity
All the analyses in this study will be performed at a two-sided 5% significance level and with a two-sided 95% confidence level. Efficacy analysis is the secondary objective, and adjustment for multiplicity of tests among the endpoints or time points will not be performed. For safety analysis, statistical power will be prioritized, and adjustment for multiplicity among the endpoints or time points will not be performed.

Primary analysis
Data will be locked after the end of the double-blind period for all subjects, but before the end of the continued treatment period for all subjects, and will then be analyzed.

Deviations from originally planned statistical analyses
If any analysis is performed using a method different from that originally specified in the protocol, all changes will be reported in the clinical study report.

Quality control and assurance of the study
The sponsor-investigator must conduct quality control and quality assurance of the study in accordance with the separately prepared procedural document to maintain the quality and reliability of the study. The study site must cooperate with the quality control and assurance of the study by the sponsor-investigator. In the conduct of quality control of the study, the monitor will confirm that the study is conducted in accordance with the operating procedure for clinical studies prepared by the study site, the latest protocol, and GCP, through direct access as appropriate. The monitor will also confirm that the descriptions in the CRF reported by the investigator are accurate and complete and that they are verifiable against study-related records, including source documents. To guarantee that the study is conducted in accordance with the protocol and GCP, an auditor will conduct audits in accordance with the procedural document and confirm that quality control is conducted appropriately. The CROs for this trial are CTD Inc. and DOT WORLD Co., Ltd.
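As a second illustrative sketch, the interval and survival estimates described in the secondary-endpoint analyses above might be computed as follows; the counts and durations are hypothetical, statsmodels' method="beta" selects the Clopper–Pearson interval, and lifelines is one option for Kaplan–Meier estimation (neither library is mandated by the protocol).

```python
import numpy as np
from statsmodels.stats.proportion import proportion_confint
from lifelines import KaplanMeierFitter

# Exact (Clopper-Pearson) two-sided 95% CI for a per-group event ratio.
events, n = 7, 20
lo, hi = proportion_confint(events, n, alpha=0.05, method="beta")
print(events / n, (lo, hi))

# Normal-approximation 95% CI for the between-group difference in ratios.
p1, n1 = 7 / 20, 20
p2, n2 = 4 / 20, 20
diff = p1 - p2
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
print(diff, (diff - 1.96 * se, diff + 1.96 * se))

# Kaplan-Meier estimate of the survival function for time-to-event data.
weeks_on_study = [24, 18, 24, 9, 24, 13]   # synthetic durations (weeks)
event_observed = [0, 1, 0, 1, 0, 1]        # 1 = event, 0 = censored
kmf = KaplanMeierFitter().fit(weeks_on_study, event_observed=event_observed)
print(kmf.survival_function_)
```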
Ethical conduct of the study
This study must be conducted in consideration of the ethical principles based on the Declaration of Helsinki, and in adherence to the Pharmaceutical and Medical Device Act (PMD Act), GCP, and standard protocols.

Institutional review board
The IRB of Keio University Hospital reviewed whether to conduct and continue the study from the standpoints of its ethical, scientific, and medical validity, based on the descriptions in the investigator's brochure, protocol, informed consent document, and sample CRF, and approved this trial.

Confidentiality of subjects
The subject identification code will be used for subject registration and subject identification in the CRF. Personnel involved in this study must protect the confidentiality of subjects at times of direct access to source documents for study procedures, publication in medical journals, submission of materials to regulatory authorities, etc.

Retention of records etc.
(1) Records etc. retained at the study site
The archiving manager designated by the head of the study site will retain those study-related documents and records that should be retained at the study site until the date defined in 1) or 2) below, whichever comes later. However, if the sponsor-investigator deems it necessary to retain them for a longer period, the study site will discuss the specific period and method of retention with the sponsor-investigator. If it is decided that data related to the clinical study results collected in the study are not to be included in the application dossier, the study drug supplier should notify the head of the study site of the matter and the reason in writing.
1) Date of marketing approval for the study drug (date of approval for partial changes in the approved items in the case of additional indications), or the date when 3 years have passed since the notification that development of the drug has been discontinued or that the clinical study results are not included in the application dossier
2) Date when 3 years have passed since discontinuation or completion of the study
If marketing approval for the study drug is obtained, or discontinuation of development is decided due to a failure to obtain approval, the study drug supplier will notify the head of the study site of the matter in writing.
(2) Records etc. retained by the sponsor-investigator (investigator)
The sponsor-investigator (investigator) will retain study-related documents and records that should be retained by the sponsor-investigator until the date defined in 1) or 2) below, whichever comes later. The sponsor-investigator (investigator) will discuss the handling of records after the end of the retention period with the study drug supplier.
1) Date of marketing approval for the study drug (or the date of approval for partial changes in the approved items in the case of additional indications), or the date when 3 years have passed since the notification that development of the drug has been discontinued or that the clinical study results are not included in the application dossier
2) Date when 3 years have passed since discontinuation or completion of the study
If marketing approval for the study drug is obtained, or discontinuation of development is decided due to a failure to obtain approval, the study drug supplier will notify the head of the study site of the matter in writing.

Financial source and conflicts of interest
This study will be conducted under the sponsorship of the Japan Agency for Medical Research and Development (AMED) (JP 18ek0109329h0001) and K Pharma, Inc.
As for the study drug, all test drugs and part of the comparator will be supplied free of charge by GlaxoSmithKline K.K. The Conflicts of Interest Management Committee of Keio University determined that these arrangements do not constitute conflicts of interest.

Compensation for study-related health injuries
If a subject suffers any study-related health injury, the study site will provide the relevant subject with treatment and other necessary measures.

Medical care provision system
The investigator and the study site will organize systems sufficient to allow the provision of medical care for the treatment of ADRs of the study drug etc., and will make every effort to provide the best possible treatment for relevant health injuries.

Purchase of insurance
The investigator shall purchase insurance to guarantee the execution of compensation for subjects' health injuries and is responsible for compensation according to the regulations on clinical study insurance. The investigator (or sub-investigator) or the study collaborator will distribute written information on compensation to subjects when providing explanations about informed consent for participation in the study.

Study registration
This study was registered in the two databases listed below.

19. Protocol compliance, deviation, or change
Protocol compliance
The investigator must comply with this protocol.

Protocol deviation or modification
The investigator must not deviate from the protocol or modify the protocol without written approval based on the IRB's prior review. However, the investigator is allowed to do so in unavoidable medical situations, including cases where such an action is required to avoid an urgent risk to the subject, without prior approval of the IRB. In such a case, the investigator confirms that the content of and reason for the deviation or modification and the subsequent protocol revision are appropriate, and submits the draft to the head of the study site and the IRB as soon as possible to gain approval. The agreement of the head of the study site is also required. The investigator must record all protocol deviations. For protocol deviations that arise in unavoidable medical situations, including cases where such an action is required to avoid an urgent risk to subjects, the investigator will prepare a written document explaining the deviation and the reason, and immediately submit it to the head of the study site. The investigator (or sub-investigator) will retain a copy of the document. The investigator will immediately submit a report on all changes in the study procedures that may significantly affect the conduct of the study or increase the risk to subjects to the head of the study site and the IRB.

Protocol revision
If it is deemed necessary to modify the protocol in the course of the study, the sponsor-investigator will revise the protocol. The sponsor-investigator will immediately notify the head of the study site of the content of the revision in writing, and obtain the IRB's approval through the head of the study site. If a revision of the protocol is proposed by the head of the study site based on the IRB's opinion, the sponsor-investigator will decide whether the changes are valid and revise the protocol, if necessary. The sponsor-investigator will immediately notify the head of the study site of the content of the revision in writing, and obtain the IRB's approval through the head of the study site.

Ownership and publication of results
Intellectual property rights etc. arising from this study shall belong to the researchers. The researchers and drug suppliers shall use part or all of the clinical study results for the purpose of application for marketing approval of the study drug. In doing so, the clinical study results will be partially disclosed in accordance with applicable laws and regulations; however, each subject's personal information remains protected.

Conclusion
We believe that this study will provide proof of concept for iPSC-based drug discovery if ropinirole hydrochloride is effective in ALS patients. Patient recruitment began in December 2018, and the last patient is expected to complete the trial protocol in November 2020.

Ethics approval and consent to participate
The study protocol was approved by the Ethics Committee of Keio University based on the Declaration of Helsinki. Written informed consent was obtained from all participants of this trial.
Strategic modulation of response inhibition in task-switching

Residual activations from previous task performance usually prime the system toward response repetition. However, when the task switches, the repetition of a response (RR) produces longer reaction times and higher error rates. Some researchers have assumed that these RR costs reflect strategic inhibition of just-executed responses and that this serves to prevent perseveration errors. We investigated whether the basic level of response inhibition is adapted to the overall risk of response perseveration. In a series of 3 experiments, we presented different proportions of stimuli that carry either a high or a low risk of perseveration. Additionally, the discriminability of high- and low-risk stimuli was varied. The results indicate that individuals apply several processing and control strategies, depending on the mixture of stimulus types. When discriminability was high, control was adapted on a trial-by-trial basis, which presumably reduces mental effort (Experiment 1). When trial-based strategies were prevented, RR costs for low-risk stimuli varied with the overall proportion of high-risk stimuli (Experiments 2 and 3), indicating an adaptation of the basic level of response inhibition.

INTRODUCTION
The environment is often ambiguous about the appropriate response for a given task. For instance, different features of a stimulus might be associated with different actions, so that stimulus processing activates competing responses, which can result in suboptimal performance or even errors (cf. Desimone and Duncan, 1995). One mechanism to prevent such errors is selective attention, which can be used to filter out irrelevant stimulus information (cf. Kahneman and Treisman, 1984; Bundesen, 1990; Hübner et al., 2010). However, in some situations perceptual filtering can be difficult or even impossible (e.g., Stroop, 1935; Simon, 1969; Eriksen and Eriksen, 1974). In these cases, suppression of irrelevant response activation might be applied as an alternative mechanism for limiting the error rate (e.g., Ridderinkhof, 2002). In addition to activation produced by irrelevant features of the current stimulus, residual activation left over from previous task performance can also bias responding. For instance, when participants switch between overlapping tasks that share mental representations, persistent activation of the representations that were involved in performing the previous task interferes with current task processing, which usually impairs performance (e.g., Allport et al., 1994; Masson et al., 2003; Yeung and Monsell, 2003; see Kiesel et al., 2010, for a review). The interference increases the risk of erroneously re-executing either the previous task (task perseveration errors) or the previous response (response perseveration errors). To control such perseverations, it has been assumed that individuals are equipped with inhibitory mechanisms (e.g., Mayr and Keele, 2000; Hübner and Druey, 2006; Juvina and Taatgen, 2009). The basic idea is that task representations that were active on the previous trial are inhibited, in whole or in part, in order to control the error rate by reducing their perseverative influence on current processing. Of the different components of a task representation that could be inhibited, the current study is concerned with the inhibition of response representations (Hübner and Druey, 2006; Cooper and Marí-Beffa, 2008). For simplicity, we will call this type of inhibition response inhibition.
Given that response inhibition is an anti-perseverative mechanism in task switching, an important question is how flexibly its strength can be adjusted to the risk of response perseveration, which is related to the degree of irrelevant response activation. For instance, stronger inhibition seems advantageous in task contexts where irrelevant stimulus features frequently reactivate the previous (but now wrong) response. This would increase the overall risk of perseveration, compared to conditions where such activations occur less frequently. Thus, a reasonable hypothesis is that the strength of response inhibition is strategically adjusted to the overall risk of response perseveration errors (Hübner and Druey, 2006; Steinhauser et al., 2009). Up to now, however, evidence for this strategic-adaptation hypothesis has been inconclusive (Grzyb and Hübner, 2013a). In typical task-switching studies investigating the adaptability of response inhibition, the ratio of high-risk to low-risk trials is manipulated, i.e., the proportion of trials with a stimulus that increases the risk of response perseveration is varied. If the strategic-adaptation hypothesis is correct, then response inhibition should increase with the proportion of high-risk stimuli. However, in a previous study (Grzyb and Hübner, 2013a), no proportion effect was found. Yet, to conclude that there is no strategic adaptation might be premature, because in that study high-risk stimuli could easily be discriminated from low-risk stimuli perceptually. As a consequence, participants could have adjusted response inhibition to the current stimulus type. If such specific processing of different stimulus types is applicable on a trial-by-trial basis, then an overall strategic adaptation of response inhibition to the proportion of high-risk stimuli might be unnecessary. Therefore, the aim of the present study was to investigate how trial-based strategies affect the overall adaptation of response inhibition. As our results show, strategic adaptation to overall control demands takes place only when trial-based strategies are prevented. But before we report our results in detail, we review the relevant literature on response inhibition in task-switching studies.

RESPONSE INHIBITION IN TASK-SWITCHING
In task-switching studies, a characteristic interaction can be observed between the transition of tasks and responses (e.g., Rogers and Monsell, 1995; Kleinsorge and Heuer, 1999; Meiran, 2000; Meiran et al., 2000; Schuch and Koch, 2004; Hübner and Druey, 2006, 2008; Cooper and Marí-Beffa, 2008; Druey and Hübner, 2008a; Koch et al., 2011). When comparing performance on trials where the response of the previous trial repeats with performance on trials where the response shifts (RS), response repetition (RR) benefits can be found on task-repetition trials and RR costs on task-switch trials. Several ideas have been proposed for explaining this interaction (e.g., Rogers and Monsell, 1995). Here we focus on the idea that responses are inhibited after their execution to prevent perseveration errors. The idea of response inhibition as an anti-perseverative mechanism has a long tradition (e.g., Smith, 1968), but it recently gained additional attention in the area of task switching. Cooper and Marí-Beffa (Cooper and Marí-Beffa, 2008; Marí-Beffa et al., 2012), for instance, argued that in natural contexts a switch from one task to another is normally accompanied by a shift from one response or effector to another (see also Mayr and Bryck, 2007).
In these cases, inhibiting a response after its execution would facilitate a switch from one action to another by inducing an RS bias. In task-switching studies, however, response mappings often overlap between tasks, such that the same response is part of different tasks (e.g., judging the parity of numerals by pressing one of two response keys, and categorizing letters as consonants or vowels by pressing the same keys). With such stimulus-response mappings, the response can repeat even if the task switches. As a result, RR usually leads to performance costs, presumably because the inhibition has to be overcome to re-execute the previous response (Hübner and Druey, 2006). The situation is different on task-repetition trials. Here, RR occurs together with a repetition of the stimulus category (cf. Pashler and Baylis, 1991), so that the episodic features of the previous and current trials match (Altmann, 2011). The corresponding positive effects usually outweigh the negative effect of response inhibition (but see, e.g., Cooper and Marí-Beffa, 2008). In sum, RR produces benefits on task-repetition trials but costs on task-switch trials, which explains the observed interaction between the transition of tasks and responses in task-switching studies.

STRATEGIC ADAPTATION OF RESPONSE INHIBITION
If inhibition is considered a control mechanism, then an important question is whether its strength can be modulated strategically. For the Simon task, for instance, where response inhibition also plays an important role in control, it has been shown that the strength of inhibition can be strategically adapted to different demands, but only when sufficient information about the corresponding condition is provided (Hübner and Mishra, 2013). Note that such a strategic adaptation need not be based on a deliberate choice of a certain strength of response inhibition. It is also conceivable that the strength results from a more abstract feedback loop that simply controls the error rate. The specific mechanisms might remain unconscious. Here, we simply mean by "strategy" any top-down influence on performance that depends on the conditions of the specific task context. In task switching, for instance, the inhibition of a just-abandoned task (backward inhibition; Mayr and Keele, 2000) is assumed to be stronger in blocks where tasks always switch compared to blocks where the frequency of task switches is lower (e.g., Dreisbach and Haider, 2006; Philipp and Koch, 2006). This inhibition seems to be adaptive, because frequent task switches increase the interference between tasks, increasing the difficulty of task performance. This means that the risk of an erroneous re-execution of the just-performed task (task perseveration error) is increased, which would be counteracted by stronger backward inhibition. Similarly, it has been hypothesized that the strength of response inhibition is strategically adapted to the risk of an erroneous re-execution of the last response (response perseveration error; Hübner and Druey, 2006). The risk should be especially high if stimulus features frequently activate the previous but now wrong response. Unfortunately, evidence for a strategic adaptation of response inhibition in task switching is inconclusive. Studies supporting the strategic-adaptation hypothesis usually compared RR effects between low- and high-risk task-switching contexts (e.g., Lien et al., 2003; Hübner and Druey, 2006).
In a study by Hübner and Druey (2006), for instance, univalent and bivalent stimuli served as low- and high-risk stimuli, respectively (a description of univalent and bivalent stimuli can be found in Table 1). The risk of perseveration is low for univalent stimuli, because they activate only the relevant task and the correct response. Bivalent stimuli, in contrast, activate both tasks and, thus, also a stimulus category and an associated response of the irrelevant task. Accordingly, Hübner and Druey (2006) reasoned that the latter stimuli should pose a higher risk of response perseveration errors than univalent stimuli. Consequently, if the proportion of bivalent stimuli is increased, response inhibition should strategically be increased in order to control response perseverations. Stronger inhibition, however, should also increase the costs (or reduce the benefits) if a response has to be repeated. Indeed, in line with this reasoning, Hübner and Druey (2006) observed larger RR costs on task-switch trials and smaller RR benefits on task-repetition trials in conditions with 100% high-risk stimuli, compared to conditions with 100% low-risk stimuli. A recent study in which different proportions of high-risk stimuli were used (Grzyb and Hübner, 2013a), however, questions whether Hübner and Druey's (2006) findings are best explained by a strategic adaptation of response inhibition. In that study, Grzyb and Hübner used bivalent-incongruent stimuli as high-risk stimuli (see Table 1). These stimuli pose a rather high risk of response perseveration, because they activate not only the wrong task (due to bivalency) but also the wrong response (due to incongruency). Therefore, on an RS trial, the activation of the wrong response adds to the activation carried over from the previous trial, thereby increasing the risk of an erroneous RR. For comparison, univalent stimuli served as low-risk stimuli. Replicating the results of Hübner and Druey (2006), Grzyb and Hübner (2013a) found larger RR costs in conditions with 100% high-risk stimuli than in conditions with 100% low-risk stimuli. Unexpectedly, however, RR costs for the respective stimulus types remained the same when the stimulus types were mixed (50% low-risk, 50% high-risk) within a block of trials. This trial-based variation in RR costs cannot be explained by an overall response-inhibition strategy that depends on the proportion of the stimulus types. Rather, the result suggests that some trial-based mechanism, related to the current stimulus type, modulated the RR costs. To explain the stimulus-type dependent RR costs, Grzyb and Hübner (2013a) proposed the amplification of response conflict (ARC) account. According to this idea, RR costs vary not only with the strength of response inhibition but also with the current stimulus type. Given a certain strength of response inhibition, different RR costs result for high- and low-risk stimuli, because response inhibition modulates response conflict differently depending on the overlap between the inhibited response and the correct response. On RR trials, for instance, the inhibited and the correct response fully overlap. Thus, for a bivalent-incongruent stimulus the response conflict on RR trials is amplified, because the correct response is inhibited, while the activation of the competing wrong response remains unaffected. On RS trials, in contrast, the response conflict is smaller, because response inhibition now exclusively reduces the activation of the wrong response. Note that these effects are not the consequence of varying degrees of response inhibition. Nonetheless, this pattern results in larger RR costs for bivalent-incongruent stimuli compared to low-risk (e.g., neutral) stimuli, which do not elicit a response conflict (for the effect of ARC on RR benefits on task-repetition trials, see Grzyb and Hübner, 2013b).
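To see how a constant amount of inhibition can nevertheless produce stimulus-type dependent RR costs, consider the following toy simulation of the ARC logic (our own illustration; the activation values and the inverse mapping from activation gap to response time are arbitrary assumptions, not the authors' computational model):

    def rt(correct, wrong, k=100.0):
        """Toy response time: the smaller the activation gap between the
        correct and the competing wrong response, the slower the response."""
        return k / (correct - wrong)

    inhibition = 0.3  # identical inhibition strength for all stimulus types
    target = 1.0      # activation of the correct response by the target
    conflict = 0.5    # activation of the wrong response by an incongruent distractor

    for label, wrong_base in [("neutral", 0.0), ("bivalent-incongruent", conflict)]:
        # RR trial: the previous (inhibited) response IS the correct one
        rr = rt(target - inhibition, wrong_base)
        # RS trial: the inhibition now falls on the competing wrong response
        rs = rt(target, wrong_base - inhibition)
        print(f"{label}: RR cost = {rr - rs:.0f} ms")
    # prints: neutral: RR cost = 66 ms
    #         bivalent-incongruent: RR cost = 375 ms

With identical inhibition, the RR cost comes out far larger for the conflict-inducing stimulus; this is the amplification the ARC account describes, and it vanishes when the inhibition strength is set to zero.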
Do the results of Grzyb and Hübner (2013a) imply that there is no strategic adaptation of response inhibition? Such a conclusion might be premature. One reason is that Grzyb and Hübner mixed only neutral (e.g., "#A#") and bivalent-incongruent (e.g., "3A3") stimuli. Because these two stimulus types can easily be discriminated perceptually, participants might have applied a stimulus-type specific inhibition strategy in a trial-by-trial manner, especially as bivalency was perfectly coupled with response conflict (Koch et al., 2010). As a consequence, an overall strategy might not have been necessary. Moreover, because such stimulus-type specific response inhibition and ARC would affect the size of RR costs similarly, Grzyb and Hübner's (2013a) trial-based effect might have been, at least partially, the result of stimulus-type specific response inhibition and not only of ARC.

OBJECTIVE OF THE CURRENT STUDY
The aim of the present study was to test again the idea that response inhibition can strategically be adapted to the overall risk of response perseveration. This time, however, we tried to prevent trial-based strategies by including a further stimulus type that makes perceptual discrimination rather difficult. As in Grzyb and Hübner (2013a), we used two-task sequences in which a task switch was required on every trial (cf. Figure 1). To control for effects of previous-trial congruency on RR costs (cf. Druey and Hübner, 2008b; Grzyb and Hübner, 2012), we kept the stimulus type for the first task constant and varied only the type for the second task. For both tasks, compound stimuli were used, consisting of a target item and a distractor item (see Table 1). The strength of response inhibition was assessed by the RR costs for responses to stimuli in the second task. In a first step, we tested the effect of perceptual discriminability on RR costs. Therefore, in Experiment 1, we replicated the results of Grzyb and Hübner (2013a) with an even lower proportion of high-risk stimuli. Then, in Experiment 2, we decreased the perceptual discriminability between high- and low-risk stimuli by uncoupling bivalency and incongruency. This was achieved by including bivalent-congruent stimuli in the second task. As a result, trial-based effects were indeed reduced. Finally, in Experiment 3, we tested the strategic-adaptation hypothesis by mixing the same three stimulus types as in Experiment 2, but further reducing the proportion of high-risk stimuli. The results clearly show that the overall strength of response inhibition can be gradually adapted to the proportion of high-risk stimuli.

Figure 1 | (A) Mapping of stimulus categories to responses for the two tasks. (B) Schematic examples of trials in different conditions. A cue indicates the relevant judgment for Task 1. Task 2 is always a switch to the alternative judgment. In the depicted example, Task 1 is the even-odd judgment (the cue "g/u" abbreviates the German category words "gerade" (even) and "ungerade" (odd)). RR, response repetition; RS, response shift. For details see text.

EXPERIMENT 1
Experiment 1 was intended to replicate the main results of Grzyb and Hübner (2013a), i.e., larger RR costs for bivalent-incongruent stimuli than for neutral ones, and to provide a baseline for Experiment 2. Whereas in Grzyb and Hübner (2013a) the proportion of bivalent-incongruent stimuli was 1/2, it was reduced to 1/3 in the present experiment.
Nonetheless, we expected the same pattern of RR costs as in Grzyb and Hübner (2013a). According to the ARC account, RR costs for bivalent-incongruent stimuli should be increased, because response inhibition amplifies the response conflict elicited by these stimuli only on RR trials. Moreover, because bivalency was easily discriminable and uniquely coupled with incongruency, it was again possible to use stimulus-type specific response inhibition. If such a strategy were indeed applied, it would also increase RR costs specifically for bivalent-incongruent stimuli.

Participants
Thirty-four students of the Universität Konstanz participated in the experiment (6 male; M = 22 years). All participants had normal or corrected-to-normal vision and were either paid 8 Euro per hour or fulfilled a course requirement.

Apparatus and stimuli
The stimuli were presented on a 19-inch color monitor with a resolution of 1280 × 1024 pixels and a refresh rate of 60 Hz. A PC running the software package Presentation (Neurobehavioral Systems, Albany, CA, USA; www.neurobs.com) controlled stimulus presentation and response registration. The two buttons of a regular computer mouse served as response buttons. The stimuli were constructed using letters (G, K, R, A, E, U) and numerals (4, 6, 8, 3, 5, 7) as stimulus items. There were also three neutral symbols (*, &, %) that were unrelated to any task. Each stimulus array (S1 for Task 1, S2 for Task 2) consisted of three items. Similar to a flanker stimulus, two identical items were presented on both sides of a central item. For S1 the target item was always the central item. For S2, it was randomly determined on each trial whether the central item or the flanker items were the target. The spatial uncertainty of the target item should allow for a strong effect of the distractor item, which should increase bivalency and incongruency effects. The items in S1 were always univalent-congruent, i.e., target and distractor items were related to the same task (letters or numerals) and were associated with the same response (cf. Table 1). S2 was either neutral or bivalent-incongruent. Neutral stimuli were composed of the target item and a neutral symbol as distractor items. Bivalent-incongruent stimuli consisted of target and distractor items that were related to different tasks (a letter and a numeral) and were associated with different responses. A stimulus pattern subtended a visual angle of approximately 5.5° in width and 2.1° in height. The stimuli were displayed in white on a black background.
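The stimulus logic of Table 1 can be summarized in a brief sketch, including the bivalent-congruent type introduced later in Experiment 2 (an illustrative reconstruction; the function name, the random sampling of items, and other details are our assumptions):

    import random

    # Category-to-response mapping used in the experiments:
    # even and consonant -> left; odd and vowel -> right.
    LETTERS  = {"G": "left", "K": "left", "R": "left",      # consonants
                "A": "right", "E": "right", "U": "right"}   # vowels
    NUMERALS = {"4": "left", "6": "left", "8": "left",      # even
                "3": "right", "5": "right", "7": "right"}   # odd
    NEUTRAL = ["*", "&", "%"]

    def make_s2(target_task, s2_type):
        """Build a three-item, flanker-like S2 array; the target is randomly
        either the central item or the two identical flanking items."""
        own = NUMERALS if target_task == "numeral" else LETTERS
        other = LETTERS if target_task == "numeral" else NUMERALS
        target = random.choice(list(own))
        if s2_type == "neutral":
            distractor = random.choice(NEUTRAL)
        elif s2_type == "bivalent-congruent":   # other-task item, same response
            distractor = random.choice([i for i, r in other.items() if r == own[target]])
        else:                                   # bivalent-incongruent: other-task item, other response
            distractor = random.choice([i for i, r in other.items() if r != own[target]])
        if random.random() < 0.5:               # spatial uncertainty of the target
            return distractor + target + distractor
        return target + distractor + target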
Procedure
At the beginning of each trial a cue was presented for 800 ms that indicated the relevant judgment for Task 1 (see Figure 1). Cues were abbreviations of the indicated judgment, i.e., "g/u" (odd/even judgment; German words "gerade" (even) and "ungerade" (odd)) and "k/v" (consonant/vowel judgment; German words "Konsonant" (consonant) and "Vokal" (vowel)). After a blank screen of 200 ms, the stimulus S1 for Task 1 was presented and remained visible until the response. The stimulus S2 for Task 2 was displayed 1500 ms after S1 or, if the response time for S1 was longer, after that response. The result of a judgment had to be indicated by pressing one of two response buttons (left, right), which were the same for each task. The categories even and consonant were mapped to the left button, odd and vowel to the right button. After an error a short feedback tone (500 Hz, 100 ms) was presented. The next trial started 1000 ms after the second response. Participants were instructed to prepare for the upcoming tasks and to respond as fast as possible while keeping accuracy above 90%. The experiment consisted of 12 blocks, each encompassing 72 trials. The first two blocks served as training blocks and were not analyzed.

Design
In all experiments the dependent variables were the response latencies to S1 (RT1) and to S2 (RT2) and the corresponding error rates ER1 and ER2. From these measures we calculated RR costs as the mean performance on RR trials minus that on RS trials. The experiment followed a within-participant design with response transition (repetition, shift) and S2 type (neutral, bivalent-incongruent) as independent variables. Although, due to the two-task sequence procedure, we included only task-switch trials, inter-trial sequences were random. Therefore, there could be task repetitions as well as task switches from Task 2 on one trial to Task 1 on the next trial. These inter-trial transitions were not analyzed.

RESULTS
Trials with RT1 > 1500 ms were excluded from the analysis (2.04% of all trials).

RT1
The mean latency for correct responses to S1 was 581 ms (SE = 18.38 ms).

RT2
Anticipatory errors (RT2 < 150 ms) and extreme outliers (RT2 > 3500 ms) were excluded from the analysis of the second response (together, less than 0.3% in each condition), as were trials with incorrect responses to S1. Mean latencies of correct responses were entered into a two-way ANOVA with the independent variables response transition (repetition, shift) and S2 type (neutral, bivalent-incongruent) realized within participants. Results are depicted in Figure 2.

Figure 2 | Mean response times and error rates in the conditions of Experiment 1. "RR" and "RS" denote response repetitions and response shifts, respectively. "Bi-inc S2" denotes bivalent-incongruent stimuli in Task 2 (see Table 1 for details of the stimulus classification). The percentages indicate the relative proportions of the respective stimulus types. Error bars represent standard errors of the mean.

DISCUSSION
As expected, we found substantially larger RR costs for bivalent-incongruent stimuli than for neutral ones in both response times and error rates, which replicates and generalizes the findings of Grzyb and Hübner (2013a). It seems that the difference in RR costs between the two stimulus types is independent of their proportion, which is in line with the ARC account (Grzyb and Hübner, 2013a). On RR trials, the inhibition of the last response reduces the activation of the correct response, which increases the response conflict elicited by bivalent-incongruent stimuli. As a consequence, RR costs are larger for bivalent-incongruent stimuli than for neutral ones. However, the current experimental condition might represent a special case, because bivalency was uniquely coupled with incongruency. The resulting high perceptual discriminability between the two stimulus types also enabled trial-based strategies, e.g., stimulus-type specific response inhibition. Thus, it is open whether the observed differences in RR costs were exclusively due to amplification or also to stimulus-type specific response inhibition. To test this question, we conducted the next experiment.
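For concreteness, the preprocessing and the RR-cost measure that recur in all three experiments can be expressed as a short analysis sketch (our reconstruction; the data-frame layout and column names are assumptions, not the authors' actual code):

    import pandas as pd

    def rr_costs(trials: pd.DataFrame) -> pd.DataFrame:
        """trials: one row per trial with columns 'participant', 'rt1', 'rt2',
        'correct1', 'correct2' (booleans), 'transition' ('RR' or 'RS'),
        and 's2_type'."""
        t = trials[trials["rt1"] <= 1500]               # slow Task-1 trials dropped
        t = t[t["correct1"]]                            # S2 analysed only after a correct S1
        t = t[(t["rt2"] >= 150) & (t["rt2"] <= 3500)]   # anticipations and extreme outliers
        cells = (t[t["correct2"]]
                 .groupby(["participant", "s2_type", "transition"])["rt2"]
                 .mean()
                 .unstack("transition"))
        # RR cost = mean RT on response repetitions minus response shifts
        return (cells["RR"] - cells["RS"]).rename("rr_cost_ms").reset_index()

Computing the costs per participant, as here, yields the cell values that would then enter the reported ANOVAs.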
EXPERIMENT 2
In this experiment we tried to prevent stimulus-type specific response inhibition. We hypothesized that this might be achieved by also presenting bivalent-congruent stimuli as S2. Because these stimuli are perceptually similar to bivalent-incongruent stimuli (cf. Table 1), participants cannot easily "see" whether or not a stimulus is incongruent, i.e., whether it poses a high risk of perseveration. Consequently, the strategy of increasing response inhibition when a high-risk stimulus is presented should be difficult to apply. Thus, to test whether our hypothesis is valid, we mixed neutral, bivalent-congruent, and bivalent-incongruent S2 in equal proportions. Assuming that this procedure prevents stimulus-type specific response inhibition (i.e., response inhibition is the same for all stimulus types), we can formulate the following hypotheses. First, if the pattern of RR costs in Experiment 1 was exclusively due to an automatic ARC by response inhibition (i.e., stimulus-type specific response inhibition was irrelevant in Experiment 1), then we should observe the same results in the present experiment. Second, if the pattern of RR costs in Experiment 1 was exclusively due to stimulus-type (i.e., univalent vs. bivalent) specific response inhibition, then we should find similar RR costs for all stimulus types in the present experiment. Moreover, RR costs for bivalent-incongruent stimuli should be smaller than in Experiment 1. Third, if both ARC and stimulus-type specific response inhibition contributed to the pattern of RR costs in Experiment 1, then we should again find an increase of RR costs for bivalent-incongruent stimuli, but this increase should be smaller than in Experiment 1 (the increase should be reduced by the amount that stimulus-type specific response inhibition contributed to the effect in Experiment 1). Finally, the inclusion of bivalent-congruent stimuli also allowed us to test a prediction of the ARC account (Grzyb and Hübner, 2013a). It follows from this account that RR costs should not be larger for bivalent-congruent stimuli than for neutral ones, because bivalent-congruent stimuli induce no response conflict that could be amplified. Thus, for both bivalent-congruent and neutral stimuli, the only factor relevant to the size of RR costs is the strength of response inhibition. Because the strength of response inhibition should be the same for both stimulus types, we expected similar RR costs for neutral and bivalent-congruent stimuli.

Participants
Thirty-six students of the Universität Konstanz participated in the experiment. All participants had normal or corrected-to-normal vision and were either paid 8 Euro per hour or fulfilled a course requirement. Four participants were excluded from the analysis because of poor performance on the task (final sample: 8 males; M = 23 years); poor performance was defined as RT2 or ER2 larger than two standard deviations above the group mean (RT2 > 1165 ms, ER2 > 18.2%). (The exclusion of participants in this and in the following experiment changed neither the pattern of results nor the conclusions of the study.)

Stimuli and procedure
In addition to the two stimulus types of Experiment 1, S2 could also be bivalent-congruent. Similar to bivalent-incongruent stimuli, bivalent-congruent ones consisted of stimulus items of both tasks (a letter and a numeral), which, however, both activated the same response.
The procedure was identical to that of Experiment 1, except that neutral, bivalent-congruent, and bivalent-incongruent S2 were each presented on one third of the trials.

RESULTS
Trials with RT1 > 1500 ms were not analyzed (2.18% of all trials). Results are depicted in Figure 3.

Figure 3 | Mean response times and error rates in the conditions of Experiments 2 (red) and 3 (blue). "RR" and "RS" denote response repetitions and response shifts, respectively. "Bi-con S2" and "bi-inc S2" denote bivalent-congruent and bivalent-incongruent stimuli in Task 2, respectively (see Table 1 for details). The percentages indicate the relative proportions of the respective stimulus types in the experiments. Error bars represent standard errors of the mean.

RT1
The mean latency for correct responses to S1 was 603 ms, SE = 15.32 ms.

RT2
Anticipatory errors (RT2 < 150 ms) and extreme outliers (RT2 > 3500 ms) were excluded from the analysis of the second response (together less than 0.3% in each condition), as were trials with incorrect responses to S1. Mean latencies of correct responses to S2 were entered into a two-way ANOVA with the independent variables response transition (repetition, shift) and S2 type (neutral, bivalent-congruent, bivalent-incongruent) realized within participants. The analysis revealed significant main effects of S2 type, F(2, 62) = 79.7, p < 0.001, and response transition, F(1, 31) = 26.5, p < 0.001. Responses to neutral stimuli were faster than those to bivalent-congruent and bivalent-incongruent ones (M = 647 ms, SE = 12.58 ms vs. M = 746 ms, SE = 18.12 ms and M = 742 ms, SE = 17.53 ms), and RRs were slower than RSs (M = 734 ms, SE = 14.94 ms vs. M = 689 ms, SE = 12.71 ms). Concerning the interaction between the two variables, there was only a small trend, F(2, 62) = 2.23, p = 0.15.

COMPARISON WITH EXPERIMENT 1
We also compared the performance in the present experiment with that in Experiment 1. To this end, we calculated three-way ANOVAs with the independent variable experiment (Experiment 1, Experiment 2) realized between participants and the independent variables response transition (repetition, shift) and S2 type (neutral, bivalent-incongruent) realized within participants. We report only significant results involving the between-participant variable experiment. The analyses of RT2 revealed a significant two-way interaction between experiment and S2 type, F(1, 64) = 23.9, p < 0.001. The interaction showed that the slowing of responses to bivalent-incongruent compared to neutral S2 was more pronounced in Experiment 1 (neutral M = 625 ms, SE = 15.72 ms; bivalent-incongruent M = 828 ms, SE = 27.03 ms) than in Experiment 2 (neutral M = 647 ms, SE = 12.58 ms; bivalent-incongruent M = 742 ms, SE = 17.53 ms). The three-way interaction between experiment, S2 type, and response transition was also significant, F(1, 64) = 8.46, p < 0.01. This reflects the finding that the RR costs for bivalent-incongruent S2 were reliably larger than those for neutral S2 in Experiment 1, but only by trend in the present one. Put differently, whereas RR costs were larger in Experiment 1 than in Experiment 2 for bivalent-incongruent S2, F(1, 64) = 4.48, p < 0.05, they did not differ for neutral S2, F(1, 64) < 1. In a corresponding analysis of ER2 there were no significant main effects or interactions.
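For readers who wish to reproduce this kind of analysis, the within-participant ANOVAs reported above correspond to a standard repeated-measures model. The sketch below is illustrative only: the data frame of per-participant cell means is hypothetical, and the between-experiment comparisons would additionally require a mixed design with experiment as a between-participant factor, which is not shown here.

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical per-participant cell means (in a real analysis these come
    # from averaging rt2 within participant x transition x s2_type)
    cell_means = pd.DataFrame({
        "participant": [p for p in range(1, 5) for _ in range(4)],
        "transition":  ["RR", "RR", "RS", "RS"] * 4,
        "s2_type":     ["neutral", "bi-inc"] * 8,
        "rt2":         [650, 830, 610, 740, 660, 820, 615, 735,
                        640, 845, 605, 750, 655, 835, 620, 745],
    })

    anova = AnovaRM(data=cell_means, depvar="rt2", subject="participant",
                    within=["transition", "s2_type"]).fit()
    print(anova)  # F tests for transition, s2_type, and their interaction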
Finally, to see whether the basic level of response inhibition differed between the experiments, we compared RR costs for neutral stimuli, because they represent a relatively direct measure of response inhibition. This analysis revealed that RR costs in the error rates for neutral stimuli were larger in Experiment 2 than in Experiment 1, F(1, 66) = 4.07, p < 0.05.

DISCUSSION
In the latencies, the increase in RR costs for bivalent-incongruent compared to neutral stimuli was again reliable, although, this time, it was significantly smaller than in Experiment 1 (19 vs. 89 ms). In the error rates, the increase in RR costs for bivalent-incongruent stimuli was also reliable, but did not differ between experiments. This pattern of results is in line with our third hypothesis and indicates that in both experiments response inhibition amplified response conflict on RR trials, which increased RR costs. The fact that RR costs for high-risk stimuli were smaller in the present experiment than in Experiment 1 suggests that some trial-based strategy must also have been effective in our first experiment (significantly increasing RR costs in RT for high-risk stimuli). Owing to the inclusion of bivalent-congruent stimuli, this strategy had little or no effect in the present experiment. Do our data support the assumption that participants in Experiment 1 had specifically increased response inhibition on the fly for high-risk (bivalent-incongruent) stimuli? In our first experiment, high-risk stimuli could easily be discriminated perceptually from low-risk stimuli. By including bivalent-congruent stimuli, however, which are low-risk, discriminability was considerably reduced in the present experiment. Consequently, high-risk stimuli could not be detected quickly, which prevented stimulus-type specific response inhibition. Unfortunately, although the assumption of stimulus-type specific response inhibition explains why RR costs were much smaller for bivalent-incongruent stimuli in the present experiment, it cannot account for the fact that the reduction of RR costs occurred only in the latencies. Thus, it seems that some other trial-based strategy was also involved. A possible additional trial-based strategy in this respect could be to prepare the upcoming task endogenously only if necessary. By including bivalent-congruent stimuli we altered not only stimulus discriminability but also the proportion of bivalent stimuli. In Experiment 1 only 1/3 of the trials were bivalent, whereas in the present experiment their proportion was 2/3. On bivalent trials the relevant task set has to be selected endogenously on the basis of internal representations (e.g., memory content about the last task). In contrast, on univalent trials the stimulus activates only the correct task set, so that little or no endogenous control is necessary. Consequently, in univalent contexts participants can reduce their internal control efforts by outsourcing task control to the stimuli (cf. Mayr and Bryck, 2007). Thus, because bivalent stimuli were relatively rare in Experiment 1, a favorable trial-based strategy would have been to outsource control, i.e., to rely on stimulus-driven control for task selection if the stimulus is neutral, and to increase top-down control only if a high-risk stimulus is detected. Such stimulus-dependent task preparation would result in delayed responses to bivalent-incongruent stimuli, because the correct response can only be selected after the relevant task set has been implemented endogenously. Interestingly, delayed responding to bivalent-incongruent stimuli can also explain why RR costs were larger in Experiment 1: simply because response inhibition had more time to bias response selection.
The effect of delayed processing on error rates is less clear. On the one hand, more time for response inhibition should also increase RR costs in the error rates. On the other hand, accuracy generally increases with response time in flanker-task-like paradigms (cf. Hübner et al., 2010). It is difficult to predict how these effects add up. However, it is possible that they cancel each other out, which would explain why the RR costs in the error rates did not differ between our experiments. Thus, stimulus-dependent task preparation might explain the relatively large increase in RR costs for bivalent-incongruent stimuli in the latencies in Experiment 1. We will come back to task preparation in the General Discussion. Our results clearly indicate that different processing styles were applied in our first two experiments. Was inhibitory control adapted accordingly? The comparison of Experiments 1 and 2 suggests that this was indeed the case. RR costs for neutral stimuli, which represent a relatively direct measure of response inhibition, were larger in Experiment 2 than in Experiment 1. This result indicates that the basic level of response inhibition was higher in Experiment 2, and it suggests that overall control strategies (e.g., inhibitory control) were more important in Experiment 2, presumably because trial-based strategies were more difficult to apply. Another important finding of Experiment 2 is that RR costs were larger for bivalent-incongruent stimuli than for bivalent-congruent ones. This result was predicted by the ARC account. According to this account, RR costs were smaller for bivalent-congruent stimuli because they do not activate the wrong response; consequently, inhibition and response conflict cannot amplify each other. Our finding is also important for refuting a possible objection. One might have argued that the increased RR costs for bivalent-incongruent stimuli are, at least partly, the result of a scaling effect: because response times are longer for those stimuli, RR costs also increase. However, mean response times for bivalent-congruent stimuli were similar to those for bivalent-incongruent ones, but RR costs nevertheless differed substantially between these stimulus types. Thus, the increase in RR costs for bivalent-incongruent stimuli is not simply the result of longer response times. Why was the increase in RR costs for bivalent-incongruent stimuli in Experiment 2 much stronger in accuracy than in the latencies? Notably, an analogous difference holds for the congruency effect, i.e., better performance for bivalent-congruent stimuli compared to bivalent-incongruent ones. The congruency effect was practically absent in response times but substantial in error rates (cf. Figure 3). However, this is not unusual for studies applying compound stimuli (cf. Rogers and Monsell, 1995). Thus, if the effect of incongruency is more pronounced in error rates, and if this effect is amplified by response inhibition (ARC), one should expect that the increase in RR costs is also more pronounced in error rates. Taken together, the results of Experiments 1 and 2 show that, if stimulus-type dependent trial-based strategies are possible, then there is little or no overall strategic control. Moreover, it seems that the summed effects of several processing strategies make it difficult to assess the actual strength of response inhibition.
Such effects might also have limited the validity of previous studies (Grzyb and Hübner, 2013a) that were conducted to provide evidence for a strategic adaptation of response inhibition to the overall risk of perseveration. Our present results indicate that applying both bivalent-congruent and bivalent-incongruent stimuli is more appropriate for such an objective.

EXPERIMENT 3
The results of our first two experiments suggest that strategies of adapting overall response inhibition to the risk of perseveration might be applied only if trial-based strategies are prevented, as in the previous experiment. Therefore, we conducted a similar experiment in which the proportion of high-risk stimuli was reduced even further. In Experiment 2, neutral, bivalent-congruent, and bivalent-incongruent stimuli occurred in equal proportions. In the present experiment, though, bivalent-incongruent stimuli occurred on only about 10% of the trials, whereas the other stimulus types occurred in equal proportions (45%). Because the overall risk of response perseveration was rather low (only about 10% high-risk, i.e., bivalent-incongruent, stimuli), and because trial-based processing was prevented (due to the inclusion of bivalent-congruent stimuli), we expected that the basic level of response inhibition would be adapted to this low risk. As a result, RR costs for neutral and bivalent-congruent stimuli should be substantially smaller than in Experiment 2. Predicting the results for bivalent-incongruent stimuli was more difficult. Because their proportion was rather low, it could be expected that the congruency effect would be relatively large (e.g., Hübner et al., 2010). According to the ARC account (Grzyb and Hübner, 2013a), response inhibition should amplify the negative effects of incongruency only on RR trials, thereby increasing RR costs. Thus, it was possible that both effects, i.e., reduced response inhibition and an increased effect of incongruency, would counterbalance each other.

Participants
Thirty-four students of the Universität Konstanz participated in the experiment. All participants had normal or corrected-to-normal vision and were either paid 8 Euro per hour or fulfilled a course requirement. Two participants were excluded from the analysis because of poor performance on the task (final sample: 9 males; M = 23 years), where poor performance was defined as RT2 or ER2 larger than two standard deviations above the group mean (RT2 > 1073 ms; ER2 > 13.4%).

Stimuli and procedure
The stimuli and procedure were identical to those of Experiment 2, except that bivalent-incongruent S2 occurred on 11.1% of the trials (8/72), whereas neutral and bivalent-congruent S2 each occurred on 44.4% of the trials (32/72).
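For concreteness, these proportions can be expressed as a minimal trial-list generator (our sketch; the original experiment's randomization constraints, if any, are not documented here, so plain shuffling is an assumption):

    import random

    def make_block(n_trials=72, n_high_risk=8):
        """One block of Experiment 3: 8/72 bivalent-incongruent (high-risk)
        trials, with the remaining trials split evenly between neutral and
        bivalent-congruent S2 (32/72 each)."""
        n_low = (n_trials - n_high_risk) // 2
        s2_types = (["bivalent-incongruent"] * n_high_risk
                    + ["neutral"] * n_low
                    + ["bivalent-congruent"] * n_low)
        random.shuffle(s2_types)
        return s2_types

Under the same scheme, Experiment 2 corresponds to make_block(72, 24), i.e., 24 trials of each stimulus type per block.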
RESULTS
Trials with RT1 > 1500 ms were not analyzed (1.37% of all trials). Results are depicted in Figure 3.

RT1
The mean latency for S1 was 541 ms, SE = 19.72 ms.

RT2
Anticipatory errors (RT2 < 150 ms) and extreme outliers (RT2 > 3500 ms) were excluded from the analysis of the second response (together less than 0.3% in each condition), as were trials with incorrect responses to S1. Mean latencies of correct responses were entered into a two-way ANOVA with the independent variables response transition (repetition, shift) and S2 type (neutral, bivalent-congruent, bivalent-incongruent) realized within participants.

COMPARISON WITH EXPERIMENT 2
The performance in the present experiment was also compared with that in Experiment 2. We subjected RT2 and ER2 to two separate three-way ANOVAs with the independent variable experiment (Experiment 2, Experiment 3) realized between participants and the independent variables response transition (repetition, shift) and S2 type (neutral, bivalent-congruent, bivalent-incongruent) realized within participants. We report only significant results involving the variable experiment.

DISCUSSION
RR costs were again reliable and differed between the stimulus types. However, as expected, RR costs for neutral and bivalent-congruent stimuli were significantly smaller than in Experiment 2. This result supports our hypothesis that the basic level of response inhibition is strategically controlled if trial-based strategies cannot be applied. Compared to Experiment 2, the smaller proportion of high-risk stimuli in the present experiment reduced the risk of perseveration errors; consequently, response inhibition was generally weaker. For bivalent-incongruent stimuli, the weaker response inhibition did not lead to smaller RR costs. This confirms the idea that the size of RR costs for bivalent-incongruent stimuli depends on at least two factors: the strength of response inhibition and the magnitude of the response conflict, the latter passively increasing RR costs (ARC). While response inhibition was reduced in the present experiment, response conflict was larger, which can be seen in a larger congruency effect even on RS trials (cf. Figure 3). Therefore, the finding of similar RR costs for bivalent-incongruent stimuli in Experiments 2 and 3 is in line with our assumption that the effect of reduced overall inhibition on the size of RR costs was compensated for by the larger amplification effect due to the increased response conflict on trials with bivalent-incongruent stimuli (Grzyb and Hübner, 2013a).

GENERAL DISCUSSION
The present study investigated to what extent response inhibition can strategically be adjusted to the overall demands of a task context. According to the response-inhibition account of RR effects in task switching (Hübner and Druey, 2006; see also Marí-Beffa et al., 2012), responses are strategically inhibited to control the error rate in task-switching contexts, where perseveration errors are likely to occur due to residual activations left over from previous task performance. Because the risk of committing such errors is relatively high for bivalent-incongruent stimuli, conditions with a high proportion of these stimuli pose a higher overall risk of perseveration errors than conditions with a small proportion. Therefore, it is likely that individuals strategically increase the basic level of response inhibition under such conditions. In a previous study, however, no such adaptation effect was found (Grzyb and Hübner, 2013a). Although RR costs were larger for bivalent-incongruent stimuli than for neutral ones, this effect was independent of their proportion. However, in Grzyb and Hübner's (2013a) study, low- and high-risk stimuli could easily be discriminated perceptually. Thus, instead of an overall inhibition strategy, a trial-based strategy could have been applied. For instance, response inhibition could have been increased on the fly after a high-risk stimulus was detected. To test the strategic-adaptation hypothesis more strictly, we therefore had to establish a condition in which trial-based strategies are hard to apply. This was realized in Experiment 2 by also presenting bivalent-congruent stimuli in addition to neutral and bivalent-incongruent ones.
Bivalent-congruent stimuli also pose a low risk of response perseveration, but are difficult to discriminate perceptually from bivalent-incongruent stimuli. Accordingly, trial-based strategies should be hard to apply with this mixture of stimulus types. For comparison, however, we first (Experiment 1) collected data in a similar way to Grzyb and Hübner (2013a). Indeed, comparing the results of our first two experiments revealed that the difference in RR costs between bivalent-incongruent and neutral stimuli was smaller in Experiment 2. This result indicates that some trial-based strategy is applied if high- and low-risk stimuli can easily be discriminated, and that this strategy further increases RR costs for bivalent-incongruent stimuli. Importantly, RR costs for neutral stimuli were larger in Experiment 2 than in Experiment 1. Because these costs can be considered a relatively pure measure of response inhibition (e.g., Grzyb and Hübner, 2013a), this result shows that the basic level of response inhibition was generally higher in Experiment 2. This finding supports our idea that the basic level of inhibition is strategically adapted when trial-based strategies cannot be applied. If our idea holds, then the proportion of high-risk stimuli should have an effect on RR costs in conditions where stimulus types are mixed as in Experiment 2. This hypothesis was tested in Experiment 3. In comparison to Experiment 2, we reduced the proportion of bivalent-incongruent stimuli by two thirds. This reduction indeed led to smaller RR costs, which strongly supports the strategic-adaptation hypothesis of response inhibition. Previous studies yielded only indirect evidence for a strategic adaptation of response inhibition to the risk of response perseveration errors. After comparing the effects in pure and mixed task contexts (where only one task or several tasks are performed, respectively), several authors argued for an all-or-none adaptation of response inhibition and suggested that the last response might be inhibited only in mixed but not in pure task contexts (Steinhauser et al., 2009; Marí-Beffa et al., 2012). Our study extends this view by demonstrating a gradual adaptation of response inhibition in mixed contexts to the overall risk of response perseveration errors. The comparison of Experiments 1 and 2 suggests that some trial-based strategy was applied in Experiment 1. One possible strategy would have been that participants increased response inhibition on the fly when a high-risk stimulus was detected. The stronger response inhibition should affect RR costs for high-risk stimuli in both response times and error rates. However, we observed effects on RR costs only in the latencies and not in the error rates. Therefore, we concluded that a different trial-based strategy must have been applied. Because two thirds of the stimuli in Experiment 1 were neutral, exogenous activation was largely sufficient to select the correct task and response on the corresponding trials. Only on trials with bivalent stimuli did the task have to be selected endogenously. Moreover, the different stimulus types could easily be discriminated. Therefore, a possible strategy was to prepare the required task only if necessary, i.e., when a bivalent-incongruent stimulus or a conflict was detected. Such a strategy presumably minimized mental effort by outsourcing task control (cf. Mayr and Bryck, 2007).
Its drawback, however, was that on bivalent-incongruent trials the task had to be selected after stimulus onset, which increased response time and interference (e.g., Rogers and Monsell, 1995; Steinhauser and Hübner, 2007). If we assume that the effects of response inhibition increase with stimulus processing time, then such stimulus-type dependent task preparation also explains why RR costs in the response times for bivalent-incongruent stimuli were larger in Experiment 1 than in Experiment 2. In the error rates, there was no difference in RR costs between the experiments, because the effect of the increased response inhibition was presumably counterbalanced by the fact that accuracy generally increases with response time (e.g., Hübner et al., 2010).

IMPLICATIONS FOR ALTERNATIVE ACCOUNTS OF RR COSTS
The present results are also relevant with respect to other accounts of RR costs in task switching. For example, one class of accounts explains RR costs in task switching as a result of binding and strengthening. According to this idea (Meiran, 2000), a category-response (C-R) rule is strengthened after a response has been selected by this rule, whereas other rules associated with the same response are weakened. As a consequence, if the task switches and the same response needs to be selected, it has to be activated by the just-weakened rule, which explains the costs (see also Schuch and Koch, 2004). Closely related is the idea that partial matches between the previous and the current processing episode lead to interference with current processing, because the previous episode is automatically retrieved if any of its features repeats (Altmann, 2011). On task-switch trials where the response switches, there is no overlap between the previous and the current episode and, therefore, no interference. In contrast, if the response repeats, then some episodic features (i.e., the response) overlap between the episodes. Hence, the previous episode is retrieved, eliciting interference with current processing, which worsens performance. These alternative accounts share the common assumption that RR costs are caused exclusively by non-strategic, bottom-up mechanisms. As a consequence, they have difficulties in explaining a modulation of RR costs by the proportion of high-risk stimuli. The response-inhibition account, in contrast, explains this context effect by the strategic inhibition of the last response in order to prevent response perseveration errors. Thus, the proportion effect observed in the present study strongly suggests that, even if binding and retrieval mechanisms may partly account for RR effects in task switching, an additional mechanism that can be controlled strategically has to be assumed. An obvious candidate in this respect is response inhibition (cf. Marí-Beffa et al., 2012; Grzyb and Hübner, 2013a).

CONCLUSION
The present study supports the idea that the strength of response inhibition can strategically be adapted to the overall risk of perseveration errors, e.g., to the proportion of high-risk stimuli. However, such a strategy is mainly applied when trial-based strategies are not feasible, for instance because low- and high-risk stimuli are difficult to discriminate.
SOCIAL MEDIA AND PURCHASE INTENTION: FINDINGS FOR FUTURE EMPIRICAL DIRECTIONS

Purpose: The purpose of this paper is to underline the importance of social media for enhancing purchase intentions, particularly in the automobile industry. The paper highlights how social media advertising, brand image, and brand equity developed through social media can enhance purchase intentions.

Design/Methodology/Approach: The paper offers a critical appraisal of the literature available on the topic and on the predictor and outcome variables studied in this regard. The automobile business is very lucrative, and it has been noticed that such practices have a major impact on boosting customers' buying intentions.

Findings: The outcome of the paper is a conceptual framework highlighting the potential of brand image, social media advertising, and brand equity for boosting purchase intentions. The paper concludes with a framework that future scholars can use to build on the concept of social media for the achievement of organizational goals and objectives.

Originality/Value: The current study is based mainly on a critical review of the prominent literature and offers a detailed understanding of the variables under consideration.

INTRODUCTION
For every car dealership it is a challenging task to find new ways to increase the number of customers and to understand what motivates them, especially because different people are motivated by different things. What managers actually need to do is improve their companies' sales volume. The main challenge that car dealers face today is reaching their companies' goals. Fortunately, managers have the power to influence the key environmental factors that are important for increasing the number of customers. Today, organizational results depend heavily on new forms of advertisement through social media networks. Hence, it is vital for a car company to identify the most influential social media so that it can plan a suitable advertising strategy and achieve better results. The right combination of immaterial and material advertising can boost the number of customers and increase the car company's profit. The most important factor that a manager directs is his or her relationship with every customer. The second most significant factor is the manager's capability to promote effective advertising through social media. Communication between management and customers should be frequent and transparent, customers should be informed with true and valid information, and the company should use every channel, such as the internet and particularly social media, to advertise to a larger number of customers. A well-planned and efficient advertising system for increasing the number of customers is important. The correct type of advertising is developed in harmony with the car company's advertising philosophy, strategies, and procedures. The importance of social media has reached a peak. Online information has developed rapidly through the so-called Social Networking Websites (SNW) (Statistics, 2011; Takele, 2018; Ukwueze et al., 2018). The main reason social media developed so fast is the development of Web 2.0 technologies, which increased communication between people and enabled online forums, blogs, and mobile and web applications such as Facebook, Snapchat, Myspace, Instagram, and Twitter (Wirtz et al., 2013).
Promotion on social media has greatly decreased car companies' expenditures compared to classical forms of advertising. Managers in most car companies have started to understand the importance of social media for increasing interactivity with existing customers and finding new ones. Social media has helped car companies to get fast feedback, to take orders faster, and to improve products and services much faster than would be possible without it. Hence, research interest has increased recently regarding its effects on private life, the culture of the young generation, education, and identity (Lipsman). Seeing the influence of social media on society, car companies are looking for ways to increase customers' "likes" and "shares" for profit purposes (Andriole, 2010). Hence, it is very difficult to find a company that does not have a single social media account. The meaning of brand image is the subjective perception of a certain brand that comes to a customer's mind. It is a belief, an impression, an idea; in general, it is what one thinks about a certain brand. The image of a brand is not static and can develop over time. This image forms in customers' minds after they interact with the brand or have any kind of experience with it. The interaction can take many forms and does not only come after purchasing or using a certain product or service. Brand equity describes the value of a brand. This value comes from customers' perceptions of a certain product or service, or from an experience with it. If customers think highly of a certain product, they assign it a high value and it acquires positive brand equity, which can increase profits, and vice versa. Brand equity is related to the brand name and symbol. It can be created by making products memorable, easily recognizable, and of better quality and reliability than competitors' products. The focus of this research is to explore and study the influence of social media advertisements, especially on Facebook and Instagram, in enhancing the brand image of car companies. The strong intention behind the topic selection is that many car companies in the Kingdom of Bahrain rely on social media advertisements for success and competitiveness.

RESEARCH AIM AND OBJECTIVES
The aim of this study is to find out the direction of new-media changes in advertising through social media and their effects on car companies and services. Besides this, the research addresses the following objectives:
• To recognize how web-based social networking advertising can improve exchange between automobile organizations and clients through multi-way communication.
• To identify customer realization of the social media advertising approach and relationship in order to develop a brand image for car companies.
• To distinguish the positive or negative actions shoppers take toward advertising on social media.
• To observe how Instagram advertising influences users in generating further information from a brand.

THEORETICAL FRAMEWORK
This section reviews the main theoretical framework that might be appropriate to the study and to a better understanding of social media. The framework is given by Jodi (2013) of the JC social media agency, based in the UK. Content forms the foundation of social media for the car business. Having solid content contributes to numerous objectives of social media marketing and is the key to making use of the platforms' all-important algorithms.
Broadly, social media content contains three distinct components. Each piece of content shared on social media has a varying level of self-promotion, value-adding, and interaction (see Figure 1):
• Value-adding: engaging the audience in some way and creating a positive response.
• Self-promotion: directly selling products or promoting the brand to the audience.
• Interaction: aiming to create a genuine two-way discussion with people online.
"Content is everything" is the well-established expression with regard to social media marketing. Social media content provides the foundation of a flourishing social presence, especially if the company is not a very famous brand. For example, Twitter is the most famous social media network for business. On Twitter, companies can interact with current and prospective customers and collaborate. On other networks, companies can interact once the other person has interacted with them, for example by commenting on advertisements on Facebook. Therefore, content in social media is everything: it creates interaction between people. A self-promotional post constitutes anything that is more promotional than interactive and value-adding. The large number of posts can fall anywhere on the triangle, but the company should target its right side, the desired point with high value-adding and interaction. Out of 20 posts, one might be self-promotional.
The first step before a company implements social media advertising is to know its potential usage and effectiveness. According to Vemuri (2012), the values of social media can be grouped as follows: deepening customer relationships, accelerating awareness, fostering innovation, and driving transactions. Social networking sites (SNS) give users the possibility of creating public profiles on a certain web page and building relationships with other users who have accounts on the same page. Social networking sites can serve many purposes, such as online discussion and chat rooms (Beal, n.d.). The sites give users many opportunities; besides text, they make it possible to add videos, graphics, and pictures. There are many social networking sites, but the most popular ones are Facebook, Twitter, MySpace, and LinkedIn. Facebook is the most popular social medium in the world; its number of monthly active users reached 2 billion as of the third quarter of 2017, doubling from 1 billion as of the third quarter of 2012 (Statista, 2017b). The purposes of these social media differ slightly. Facebook is more for friends and users who know each other in real life, whereas Twitter and MySpace involve more limited groups of friends. The leader in social media is Facebook, as it received the highest rank from marketers based on its popularity and the huge number of users from the young generation. This also creates possibilities for advertising, such as for events, games, applications, and fan pages, and offers the possibility of direct messaging (Lin and Utz, 2015). Two other very popular social media are Instagram and Snapchat. Instagram is a social networking application that became trendy very quickly even though it was established recently, in 2010, and was bought by Facebook in 2012. It is considered one of the most influential social networks in the world.
Its main purpose is to share and edit photos and videos via smartphones; these are displayed on one's profile, and followers can see the posts and vice versa. The application is very user-friendly, and the number of users increased rapidly to 800 million monthly active users as of September 2017. Snapchat was created more recently, in 2011, but very quickly became one of the most popular social media applications in the world. The platform has a similar purpose, sharing images and videos through smartphones, but the difference from Instagram is that these pictures and videos can disappear after some time. It is a serious competitor to Facebook and Instagram, since it offers similar products and turned down a 3 billion USD purchase offer from Facebook in January 2014 (Molloy, 2017). The number of daily active users worldwide has increased constantly since its creation, reaching its highest usage of 178 million as of the third quarter of 2017 (Statista, 2017d).
The 5th Arab Social Media Report studies the impact of social media on businesses. According to the study, the main drivers of social media usage in business are: business growth, improving company image, social media as a marketing tool, job opportunities, becoming more consumer-centric, training employees, improving inter-office relations, improving service operations, driving entrepreneurship, innovation and new technologies, globalization, and high marketing and advertising spend. Despite the above-mentioned positive impacts, social media also brings negative impacts to businesses, such as inaccurate information in business strategy planning, fake products and brands that undermine social media users' trust in companies, and a decrease in direct communication between employees. Social media as a marketing tool is driven by cheap product advertising, direct targeting of customers, the very large size of the customer base, the fast distribution of messages, and the need to improve reliability in order to increase companies' sales.

Advertising on Social Media
Advertising on social media is a very new possibility for organizations because it is much more interactive for users; an example is Facebook, the dominant social medium (Logan et al., 2012). Advertising on Facebook gives users the chance to actively interact with each other around the advertisements on a page by clicking the "like" option or the "share" option to pass an advertisement to their friends, and to check who else has liked or shared it. The effect of advertising is linked to credibility, since through social media users can give organizations very good feedback on the reliability of the advertised product or service. Providing entertaining and informative content in the food and beverage category increases online visits to Facebook brand pages. Rewards for commenting make pages even more attractive to users. Interactivity between users decreases for posts made by a moderator, while vividness increases it, the most attractive type of post being a picture. Another interesting finding is that the number of comments increases when posts are published during the day, but not during peak working hours, when comments would be much lower (Cvijikj and Michahelles, 2013).
One study suggests that Snapchat is a very important social medium that marketers may have mistakenly neglected and from which comments should also be gathered. The research finds that 45% of college students might check a snap from an unpopular brand, while 73% might check a snap from a very popular brand. Also, 69% answered that they might add a famous brand as a friend on Snapchat, which shows that users are more drawn to known brands. When asked about the type of promotion, 67% favored sales offers and 58% preferred coupons as a way of getting brand information.
Social media can be considered an important tool for purchases in Finland, and it plays a crucial role in informing customers, especially about sales campaigns. Consumers consider the speed of information through social media to be very fast compared to traditional media, although the content of the information about a certain product might be false. It gives consumers a chance to increase communication with companies. The findings show that consumers are not interested in sharing the information with their friends or peers, the so-called word of mouth, even though word of mouth plays a crucial role in purchase decision making. Consumer behavior does not change even after advertising on social media: once they receive the advertisement on social media, consumers again follow the traditional process in order to make a straightforward decision (Lee, 2013).
Social media helps users to access information through online communities, reviews, and suggestions. Through social media, consumers can get online support from their peers, and thereby trust in networks increases. This trust turns into motivation for purchasing online, and social media becomes more useful (Tahir et al., 2019). Trust plays a crucial role in e-commerce. Social media has made it possible for consumers to access other consumers' information and thereby to share content easily. All these indications are good factors for the number of online users to increase.
The presence of a brand on social media is important for consumers to trust the product more. Communicating the brand through social media improves brand image. A study done in Finland found that the influence of social media on brand image matters more to younger generations and to females, and that there are huge differences between genders and age groups in the time spent on social media in relation to brand image. Companies should give importance to both methods of advertising, traditional and social media (Jokinen, 2016). The findings of Gorgani (2016) are that electronic word of mouth on social media improves brand image, brand awareness, brand attitude, and brand equity. For small and medium enterprises to sustain their market positions, they need to use social media networks effectively. The study was done on an Iranian jewellery design company, and the author finds that electronic word of mouth is crucial for brand equity; however, the study analysed only Facebook users' comments and not those of other social media users. Electronic word of mouth has a strong influence on customers in Bahrain, because they believe online advertisements and share the information as trustworthy with others. The information they get is important for customers' purchase decision making. Information posted on social media brings additional value to businesses in terms of brand awareness and a positive brand image.
It is also highlighted that content advertised on social media increases customers' sense of safety, interest, and perceived quality (Shuqair et al.). The study by Elmasri and Hilal (2015) on the social media e-marketing campaign of the e-government of the Kingdom of Bahrain shows that Bahraini citizens do not perceive any changes delivered by the e-government through social media. Government bodies using online networking lack settled long-term objectives for their communication with Bahraini nationals, and the use of social media advertising in e-government varies according to social culture and type of government.

The importance of social media is studied in both the foreign and the local literature. The foreign literature starts with the definition of social media as online media that gives users the possibility to talk, participate, share, network, and bookmark online; in this sense, social media is closely related to Web 2.0 technology. Spending on social media marketing has doubled in the last five years, and there is a need to examine its potential usage and effectiveness. Social networking sites (SNSs) give users the chance to open public accounts online and build relationships with others. The most famous SNSs are Facebook, Twitter, Myspace, and LinkedIn; Facebook has the highest number of users, reaching 2 billion in 2017. These SNSs differ slightly from each other in their application, but they share the common purpose of creating relationships between people. The foreign studies find that advertising on social media is a new opportunity for companies. Since Facebook has the largest number of users, it offers the best chances for advertising: users have the option to like or share an advertisement and also to comment on it. Studies show that advertising on social media is an effective source of information for companies. An interesting finding by Cvijikj and Michahelles (2013) is that posts published during the day receive more comments, but not during peak working hours.

Brand image refers to the set of brand associations held in customers' memory, and it expresses the quality of products and services. Brand equity relates to the differential effect that brand awareness, built through marketing, has on customers' response to a certain product or service. Brand loyalty means repeatedly buying a certain brand within a range of products. The local studies find that electronic word of mouth has a strong influence on customers in Bahrain, because they believe online advertisements and share the information with others as trustworthy, and that Bahraini citizens do not perceive any changes delivered by the e-government through social media.

CONCEPTUAL FRAMEWORK

Based on the review and critical appraisal of the literature, the current paper forwards the following framework for future scholars:

Figure 4: Conceptual Framework

Based on this, the present study tested the following hypotheses (a simple illustrative sketch of these three paths follows below):
H1: There will be a positive relationship between brand image and purchase intention.
H2: There will be a positive relationship between social media advertising and purchase intention.
H3: There will be a positive relationship between brand equity and purchase intention.
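To make the hypothesized relationships concrete, the following is a minimal sketch, in Python, of how the three paths could be approximated with an ordinary least squares regression. It is an illustration only, not the PLS-SEM estimation actually used in the study, and the file name and column names (survey.csv, brand_image, sm_advertising, brand_equity, purchase_intention) are hypothetical placeholders.

```python
# Minimal sketch: OLS approximation of the three hypothesized paths
# (H1-H3). This is NOT the PLS-SEM estimation reported in the study;
# it only illustrates the direction of the hypothesized relationships.
# File name and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey.csv")  # hypothetical: one row per respondent,
                                # columns holding construct scores

# Predictors: the three social-media-related constructs; outcome:
# purchase intention.
X = sm.add_constant(df[["brand_image", "sm_advertising", "brand_equity"]])
y = df["purchase_intention"]

model = sm.OLS(y, X).fit()
print(model.summary())  # positive, significant coefficients would be
                        # consistent with H1, H2, and H3 respectively
```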
SAMPLING

The automobile sector was chosen for the present study: a major car trading company was selected to target respondents, and employees at all four of the company's branches were approached. A total of 250 questionnaires were distributed between January and March 2019, of which 219 were found to be correctly filled and were used for the final analysis. Convenience sampling was used for the study.

DATA ANALYSIS

Structural equation modelling using SmartPLS 2.0 was deployed for the study (Ringle et al., 2005); several studies have used structural equation modelling through SmartPLS 2.0 (Ahmed, Majid, Al-Aali and Mozammel, 2019). Following scholarly recommendations, a two-step assessment was performed: assessment of the measurement model followed by assessment of the structural model (Hair et al., 2013).

MEASUREMENT MODEL

The measurement model examined individual item reliability, convergent validity, and average variance extracted (AVE) to ensure that the model was sound for final analysis. In this step, the study assessed AVE scores and composite reliability. As per Fornell and Larcker (1981), the AVE score for each construct should be above 0.50 and the composite reliability score above 0.70. The results of the measurement model in Table 1 and Figure 5 show that the study achieved acceptable scores for both.

Figure 5: Measurement Model

Following this, the study also assessed discriminant validity: as per Fornell and Larcker (1981), the square root of each construct's AVE should exceed its correlations with the other constructs. Table 2 shows that the constructs have adequate discriminant validity. (A worked sketch of these computations follows the Discussion section.)

STRUCTURAL MODEL

Following the examination of the measurement model, the study moved to stage two to assess the significance of the hypothesized relationships. Table 3 and Figure 6 report a significant positive relationship between social media based brand image and purchase intention, supporting hypothesis one. The study also reports a significant positive relationship between social media advertising and purchase intention, lending support to hypothesis two. Finally, the structural equation modelling reports significant positive results between social media based brand equity and purchase intention, supporting hypothesis three.

DISCUSSION

The results confirm that social media has a significant impact on boosting individual purchase intentions. In particular, the present study finds that social media based brand image is significantly related to purchase intentions, per the results for hypothesis one; that social media advertising can attract the target audience and persuade them toward purchase, per hypothesis two; and that brands with high brand equity can make a major impact on their customers' purchase intentions, per hypothesis three. The results thus confirm that social media has a great role to play for businesses in the current era. By using social media platforms, businesses can effectively boost their sales figures. This also indicates that businesses should develop strategies, including a dedicated team or unit working on social media, for better financial prospects. The study also carries implications for management in the automobile industry: capitalizing on the availability of social media to reach more people by enhancing advertising, brand equity, and brand image. Since the present study focused on the automobile sector, further research is needed to establish how viable social media is for organizations in other sectors.
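To illustrate the Fornell and Larcker (1981) criteria referenced in the Measurement Model section, the following is a minimal sketch of how AVE, composite reliability, and the discriminant-validity check can be computed from standardized indicator loadings. The numerical values below are invented placeholders, not the study's actual results.

```python
# Minimal sketch of the Fornell-Larcker (1981) criteria used above.
# Loadings and correlations below are invented placeholders, not the
# study's actual results.
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings)
    return np.mean(lam ** 2)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    num = lam.sum() ** 2
    return num / (num + np.sum(1 - lam ** 2))

# Hypothetical standardized loadings for one construct (e.g., brand image)
bi_loadings = [0.78, 0.81, 0.74, 0.69]
print(f"AVE = {ave(bi_loadings):.3f}")                    # should exceed 0.50
print(f"CR  = {composite_reliability(bi_loadings):.3f}")  # should exceed 0.70

# Discriminant validity: sqrt(AVE) of a construct should exceed its
# correlations with every other construct.
corr_with_others = [0.55, 0.48]  # hypothetical inter-construct correlations
print(np.sqrt(ave(bi_loadings)) > max(corr_with_others))  # True if valid
```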
CONCLUSION

The present study concludes that social media, through its different marketing-related facets, has a significant positive relationship with purchase intentions. In particular, the study confirms the notable role of social media advertising, brand image, and brand equity in boosting the purchase intentions of customers in the automobile industry in Bahrain.