The mediation role of sleep quality in the association between the incidence of unhealthy movement behaviors during the COVID-19 quarantine and mental health

Background: Our aim was to investigate the mediating role of worsening sleep quality in the association of the incidence of physical inactivity, high TV-viewing, and high computer/tablet use with loneliness, sadness, and anxiety.

Methods: Data of 45,161 Brazilian adults from a nationwide behavior survey, conducted between April 24th and May 24th (2020), were used. Participants reported physical inactivity (PI; <150 min/week), high TV-viewing (TV; ≥4 h/day), and high computer/tablet use (PC; ≥4 h/day) before and during the COVID-19 quarantine (exposures). For the incidence indicators, we only considered participants without the risk behavior before quarantine. Changes in sleep quality during the quarantine period (maintained/got better or worsened) were treated as a mediator. Elevated frequencies of feelings of loneliness, sadness (feeling sad, crestfallen, or depressed), and anxiety (feeling worried, anxious, or nervous) during the pandemic period were the study outcomes. Analyses were adjusted for sex, age group, highest academic achievement, working status during quarantine, skin color, previous diagnosis of depression, and adherence to quarantine. Mediation models were created using the Karlson-Holm-Breen method.

Results: The incidence of PI, high TV, and high PC use was associated with feelings of loneliness, sadness, and anxiety. Worsening sleep quality partly mediated the association of the incidence of PI, high TV, and high PC use with loneliness (PI: 30.9%; TV: 19.6%; PC: 30.5%), sadness (PI: 29.8%; TV: 29.3%; PC: 39.1%), and anxiety (PI: 21.9%; TV: 30.0%; PC: 38.5%).

Conclusion: The association of the incidence of physical inactivity and sedentary behaviors with mental health indicators is partly mediated by worsening sleep quality during the COVID-19 pandemic quarantine.

Introduction

The new coronavirus (COVID-19) spread quickly worldwide and reached Brazil in February 2020. To contain COVID-19, social isolation measures such as quarantine are recommended [1]. Although beneficial for containing COVID-19, quarantine measures frequently affect other health risk factors, leading to an increase in the adoption of unhealthy behaviors, especially those related to human movement behaviors (physical activity and sedentary behaviors), as well as affecting sleep and mental health indicators [2-4]. Previous studies in China and Italy reported that the pandemic affected different indicators of mental health, including well-being and psychological distress, as well as increasing symptoms of depression and anxiety [2-7]. Similarly, sleep quality was highly affected by the COVID-19 pandemic [2,5], especially in the areas most affected by COVID-19 [5]. Before the COVID-19 pandemic, physical inactivity and sedentary behaviors were associated with both lower sleep quality and poorer mental health indicators [8-12]. Therefore, in the COVID-19 quarantine scenario, beyond the negative effects of quarantine itself on sleep quality and mental health [2,3,13], reductions in physical activity and increases in sedentary behavior could be associated with a greater degree of negative effects. For example, previous findings from Austria and the UK showed that lower physical activity during the COVID-19 quarantine was associated with lower well-being and with symptoms of depression, anxiety, and insomnia [14,15].
However, the association of the incidence of unhealthy movement behaviors with mental health, as well as the role of worsening sleep quality in this association, remains unknown. In this sense, cross-sectional and prospective studies found that sleep quality can mediate the association between movement behaviors (especially sedentary behavior) and mental health indicators [16-18]. Therefore, reductions in physical activity and increases in sedentary behavior during the COVID-19 quarantine can affect sleep quality, which, in turn, can be associated with poorer mental health. Thus, we investigated the mediating role of worsening sleep quality in the association of the incidence of physical inactivity, high TV-viewing, and high computer/tablet use with loneliness, sadness, and anxiety.

Sample

This was a national cross-sectional health survey with retrospective information. Data collection was conducted between April 24th and May 24th, 2020. Participants were invited through a chain sampling procedure. In the first stage, the 15 researchers involved in the study chose a total of 200 other researchers from different states in Brazil. Next, each of the 200 researchers chose 20 people from their social network, for a total of 4000 people. The people chosen in the first stage were called the seeds of the chain recruitment. These seeds sent the survey link to at least 12 people from their social networks, following a stratification by sex, age range (18-39; 40-59; 60+), and educational level (incomplete high school or less; complete high school or more). In addition, information about the survey was circulated through press releases, social communications from participating research institutions, state health departments, and social media. All procedures were approved by the National Research Ethics Commission (CONEP) (process: 30598320.1.0000.5241). The total sample was composed of 45,161 participants. The sample was weighted according to characteristics from the National Household Sample Survey (2019), considering the population of each state, education, age, sex, and prevalence of chronic diseases, aiming for a nationally representative sample.

Physical activity and sedentary behavior

For physical activity, participants were asked "Before the COVID-19 pandemic, how many days a week did you practice any type of physical exercise or sport? (do not consider physical therapy)" and "During the COVID-19 pandemic, how many days a week do you practice any type of physical exercise or sport? (do not consider physical therapy)". Possible answers were: 1) less than 1 day/week; 2) 1-2 days/week; 3) 3-4 days/week; or 4) 5 or more days/week. For those reporting physical activity, we also asked: "How long does this activity last?". Possible answers were: 1) less than 30 min; 2) 30-45 min; 3) 46-60 min; or 4) more than 1 h. We defined "before the pandemic" as before the initial restraint measures adopted in Brazil in the middle of March, and "during the pandemic" as the period in which the participants were completing the questionnaire. We classified physical inactivity using the recommendation of 150 min/week [19]. For our analysis purposes, we created an incidence indicator of physical inactivity, considering only participants who were active before quarantine (those who remained physically active vs. those who became physically inactive during quarantine).
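For concreteness, the construction of such an incidence indicator can be sketched as follows. This is a minimal pandas sketch under stated assumptions: the column names, the category-to-midpoint mappings, and the weekly_minutes helper are illustrative inventions, not the survey's actual coding rules.

```python
import pandas as pd

# Assumed midpoints for the categorical answers (illustrative only):
DAYS_MID = {1: 0.0, 2: 1.5, 3: 3.5, 4: 5.0}      # <1, 1-2, 3-4, 5+ days/week
MINS_MID = {1: 20.0, 2: 37.5, 3: 53.0, 4: 70.0}  # <30, 30-45, 46-60, >60 min

def weekly_minutes(days_cat: int, mins_cat: int) -> float:
    """Approximate weekly physical activity volume from the two survey items."""
    return DAYS_MID[days_cat] * MINS_MID[mins_cat]

def incidence_of_inactivity(df: pd.DataFrame) -> pd.Series:
    """1 = became inactive (<150 min/week) during quarantine, 0 = remained active.
    Restricted to participants who were active before quarantine."""
    before = df.apply(lambda r: weekly_minutes(r["days_before"], r["mins_before"]), axis=1)
    during = df.apply(lambda r: weekly_minutes(r["days_during"], r["mins_during"]), axis=1)
    return (during < 150).loc[before >= 150].astype(int)
```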
For TV-viewing, participants were asked: "Usually, before the pandemic, how many hours a day did you use to spend watching television?" and "During the pandemic, how many hours a day have you been watching television?". Possible answers for both were: 1) none; 2) less than 1 h/day; 3) between 1 and less than 2 h/day; 4) between 2 and less than 3 h/day; 5) between 3 and less than 4 h/day; 6) between 4 and less than 5 h/day; 7) between 5 and less than 6 h/day; 8) 6 h/day or more. In addition, computer/tablet use was assessed using two questions, "Usually, before the pandemic, how many hours a day did you use to spend using a computer or tablet?" and "During the pandemic, how many hours a day do you usually spend using a computer or tablet?", with open answers. TV-viewing and computer/tablet use were classified using the cut-off point of 4 h/day at both moments (before and during quarantine). For our analysis purposes, we considered only the incidence of high TV-viewing and high computer/tablet use, calculated among those without high TV-viewing or high computer/tablet use before quarantine (those who maintained low TV-viewing/computer use vs. those who changed to high TV-viewing/computer use during quarantine).

Worsening sleep quality

Worsening sleep quality was assessed through the question "Has the pandemic affected the quality of your sleep?", with the possible answers: 1) "It has not affected anything, I still sleep well"; 2) "With the pandemic, I have started having sleep problems"; 3) "I already had sleep problems and they have persisted during the pandemic"; 4) "I already had sleep problems and they have got worse"; or 5) "I already had sleep problems, but they have decreased". We considered as worsening sleep quality those who reported starting to have sleep problems during the pandemic and those reporting worsening of pre-existing sleep problems.

Mental health

As mental health indicators, we adopted three questions regarding feelings of loneliness, sadness, and anxiety during the pandemic only. The difference from the behavioral dimensions assessed is explained by the fact that psychological states are less stable and more difficult to recall [20,21]. For loneliness, participants were asked: "During the pandemic period, how often have you felt isolated or alone?"; for sadness: "During the pandemic period, how often have you felt sad, crestfallen, or depressed?"; and for anxiety: "In the period of the pandemic, how often have you felt worried, anxious, or nervous?". Possible answers for each question were: 1) "Never"; 2) "A few times"; 3) "Often"; or 4) "Always". We classified as positive for loneliness, sadness, and anxiety those participants who answered "often" or "always".

Covariates

We used sex, age group, highest academic achievement, working status during quarantine, skin color, previous diagnosis of depression, and adherence to quarantine as covariates. The highest academic achievement was classified as incomplete high school, complete high school, and college education or more. Working status during quarantine was classified as currently not working, working in a normal routine, and home office. Skin color was classified as white or other. Adherence to quarantine was classified as positive for those only going out to grocery stores and pharmacies, or staying strictly at home and leaving only for health care needs; and negative for those reporting that they continued a normal life or tried to stay away from people, reducing contact a little, not visiting the elderly, but carrying on working and leaving home as usual.
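Under the assumption that the answer options are coded 1-5 (sleep) and 1-4 (mental health items) in the order listed above, which is an assumption about the questionnaire's coding rather than a documented fact, the mediator and outcome dichotomizations reduce to simple recodes:

```python
def sleep_worsened(answer: int) -> int:
    """1 if sleep problems started (option 2) or got worse (option 4), else 0."""
    return int(answer in (2, 4))

def elevated_frequency(answer: int) -> int:
    """1 if the feeling was reported 'often' (3) or 'always' (4), else 0."""
    return int(answer in (3, 4))
```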
Statistical procedures

We used weighted frequencies and 95% confidence intervals for descriptive statistics, and non-overlapping 95% confidence intervals as indicative of differences between groups [22]. Mediation analysis was conducted to assess the influence of worsening sleep quality on the associations of the incidence of physical inactivity, high TV-viewing, and high computer/tablet use (those who began to present the risk behavior during quarantine) with mental health indicators. The association between the exposures and the mediator was assessed using crude and adjusted logistic regression models. The Karlson-Holm-Breen (KHB) method was used for the mediation analysis [23]. This method, applied here with logistic regression models, decomposes the total effect of a variable (estimated without the mediator) into a direct effect (the association of the incidence of physical inactivity, high TV-viewing, and high computer/tablet use with mental health indicators, accounting for the potential mediator, worsening sleep quality) and an indirect effect (the mediation effect). The estimation also provides the percentage of the total effect explained by the mediator (mediated percentage). We previously tested for potential exposure × mediator interactions, which were not significant [24]. The theoretical mediation model is presented in Fig. 1. All analyses were conducted in STATA 15.1.

Results

Due to missing data, and after excluding participants who already presented the unhealthy behaviors before the COVID-19 quarantine (for the incidence indicators), 16,059 individuals composed the sample for the incidence of physical inactivity, 40,903 for the incidence of high TV-viewing, and 20,752 for the incidence of high computer/tablet use. Characteristics of the sample are presented in Table 1. Worsened sleep quality and feelings of loneliness, sadness, and anxiety were more frequent among participants with incident physical inactivity, high TV-viewing, and high computer/tablet use. Table 2 shows the association between the exposures (incidence of unhealthy movement behaviors) and the mediator (worsening sleep quality). In the adjusted models, the incidence of physical inactivity (OR: 1.51; 95%CI: 1.18-1.94), high TV-viewing (OR: 1.63; 95%CI: 1.42-1.87), and high computer/tablet use (OR: 1.91; 95%CI: 1.61-2.27) was associated with higher odds of worsening sleep quality. The mediation models for the influence of worsening sleep quality on the association between the incidence of unhealthy movement behaviors and mental health indicators are presented in Table 3. The incidence of physical inactivity, high TV-viewing, and high computer/tablet use was associated with feelings of loneliness, sadness, and anxiety. In addition, worsening sleep quality mediated part of the association of the incidence of physical inactivity, high TV-viewing, and high computer/tablet use with loneliness, sadness, and anxiety, with the largest mediation effect for the incidence of high computer/tablet use.
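As a concrete illustration of the KHB decomposition applied above, the sketch below residualizes the mediator with respect to the exposure and covariates, so that the reduced and full logistic models are on the same scale, and takes the difference of the exposure coefficients as the indirect effect. It is a minimal statsmodels-based sketch with hypothetical column names, assuming numerically coded variables and ignoring the survey weights; it is not the STATA khb implementation used in the study.

```python
import statsmodels.api as sm

def khb_decomposition(df, exposure, mediator, outcome, covariates):
    """KHB-style decomposition of a logistic total effect into direct and
    indirect parts; returns log-odds effects and the mediated percentage."""
    X_base = sm.add_constant(df[[exposure] + covariates])
    # Residualize the mediator on exposure + covariates (KHB rescaling step).
    med_resid = sm.OLS(df[mediator], X_base).fit().resid
    # Reduced model: the mediator enters only through its residuals.
    b_total = sm.Logit(df[outcome], X_base.assign(med_resid=med_resid)) \
                .fit(disp=0).params[exposure]
    # Full model: the mediator enters directly.
    b_direct = sm.Logit(df[outcome], X_base.assign(mediator=df[mediator])) \
                 .fit(disp=0).params[exposure]
    b_indirect = b_total - b_direct
    return {"total": b_total, "direct": b_direct, "indirect": b_indirect,
            "mediated_pct": 100.0 * b_indirect / b_total}
```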
Discussion

We aimed to investigate whether changes in sleep quality mediate the associations between the incidence of unhealthy movement behaviors and mental health during the COVID-19 quarantine. Our main finding was that worsening sleep quality mediated part of the associations of the incidence of physical inactivity, high TV-viewing, and high computer/tablet use with mental health. In addition, the mediation effect was largest for the association of the incidence of high computer/tablet use with sadness and anxiety.

The COVID-19 quarantine measures promoted several changes in movement behaviors in different countries, reducing physical activity levels and increasing sedentary behavior [25]. In addition, quarantine measures were associated with higher psychological distress, poorer mental health indicators, and sleep disturbances [2,13]. Before the COVID-19 pandemic, movement behaviors were associated with mental health [8] as well as with sleep quality [10,12], and sleep quality was prospectively associated with mental health [26]. In this sense, sleep quality could act as a mediator of the association between movement behaviors and mental health [16-18]. Therefore, our findings agree with studies from before the quarantine [16-18]. Moreover, these results suggest that the association between the incidence of unhealthy movement behaviors and mental health can be mediated by worsening sleep quality even over relatively short periods of movement deprivation and quarantine. Even though the specific mechanisms by which sleep problems act in the association between unhealthy movement behaviors and mental health have not been specifically studied, there are several shared mechanisms. The associations of movement behaviors with sleep quality can occur through different mechanisms, such as regulation of the circadian rhythm, increased body temperature, improved physical fitness, changes in melatonin release, and increased time spent outdoors with light exposure [27-29]. Furthermore, physical inactivity can be associated with disorders such as sleep apnea, which is associated with lower sleep quality [30]. Similarly, higher sedentary behavior, especially screen time, can be associated with higher exposure to blue light, which can suppress melatonin release and is detrimental to sleep quality [31]. In this sense, poorer sleep quality due to the incidence of unhealthy movement behaviors could explain part of the association between the incidence of unhealthy movement behaviors and mental health. In addition, the mediation can be explained by shared mechanisms, as poorer sleep quality can be associated with higher inflammation, which in turn is associated with poorer mental health indicators [32-34]. We highlight that, to our knowledge, this is the first study to explore the mediating role of worsening sleep quality in the association of the incidence of physical inactivity, high TV-viewing, and high computer/tablet use with mental health during the COVID-19 quarantine. Therefore, the promotion of physical activity and the reduction of sedentary behavior, as recommended by the World Health Organization [35], could be an important strategy to mitigate part of the negative effect of the COVID-19 quarantine on mental health.

[Table 2. Association of the incidence of unhealthy movement behaviors with worsening sleep quality during the COVID-19 quarantine, OR (95%CI), crude and adjusted. Reference groups: maintained physically active (physical inactivity analysis), maintained low TV-viewing (TV-viewing analysis), and maintained low computer/tablet use (computer/tablet use analysis). Adjusted models: sex, age group, highest academic achievement, working status during quarantine, ethnicity, previous diagnosis of depression, and adherence to quarantine.]

[Table 3. Mediation by sleep quality changes of the association between the incidence of unhealthy movement behaviors and mental health.]
Furthermore, interventions aiming to improve mental health during the COVID-19 quarantine should address movement behaviors and sleep quality in an integrated manner, as worsening sleep quality mediated part of the effect of the incidence of unhealthy movement behaviors on mental health indicators. Some limitations should be considered in the interpretation of our findings. First, the present study used a retrospective design for the questions related to behaviors before quarantine, which may introduce recall bias. Second, as this was a web-based survey, our sample underrepresents people with low socioeconomic status and those without internet access, which may represent a bias even with a weighted sample. Third, the questionnaire only included questions about the leisure-time domain of physical activity, which is the domain most associated with mental health [36], but it is possible that reductions in other domains, such as transport, may also be detrimental to mental health. Fourth, the lack of standardized questionnaires for the exposures, sleep quality, and mental health indicators could limit the extrapolation of our findings. However, we present data from more than 40,000 Brazilian adults, weighted to represent the national population distribution during the COVID-19 pandemic, which we consider a strength. In conclusion, the association of the incidence of physical inactivity and high sedentary behaviors with mental health indicators was partly mediated by worsening sleep quality during the COVID-19 quarantine. Policies addressing the increase in unhealthy behaviors as well as sleep quality and mental health are important during quarantine, with physical activity representing an effective and affordable non-pharmacological option to improve sleep and mental health.
Whole-genome sequencing to explore nosocomial transmission and virulence in neonatal methicillin-susceptible Staphylococcus aureus bacteremia

Background: Neonatal Staphylococcus aureus (S. aureus) bacteremia is an important cause of morbidity and mortality. In this study, we examined whether methicillin-susceptible S. aureus (MSSA) transmission and genetic makeup contribute to the occurrence of neonatal S. aureus bacteremia.

Methods: A retrospective, single-centre study was performed. We included all patients who suffered from S. aureus bacteremia in the neonatal intensive care unit (NICU) of Erasmus MC-Sophia, Rotterdam, the Netherlands, between January 2011 and November 2017. Whole-genome sequencing (WGS) was used to characterize the S. aureus isolates, also in comparison to reference genomes. Transmission was considered likely in the case of genetically indistinguishable S. aureus isolates.

Results: Excluding coagulase-negative staphylococci (CoNS), S. aureus was the most common cause of neonatal bacteremia. Twelve percent (n = 112) of all 926 positive blood cultures from neonates grew S. aureus. Based on core genome multilocus sequence typing (cgMLST), 12 clusters of genetically indistinguishable MSSA isolates were found, containing 33 isolates in total (2–4 isolates per cluster). In seven of these clusters, at least two of the identified MSSA isolates were collected within a time period of one month. Six virulence genes were present in 98–100% of all MSSA isolates. In comparison to S. aureus reference genomes, the toxin genes encoding staphylococcal enterotoxin A (sea) and toxic shock syndrome toxin 1 (tsst-1) were present more often in the genomes of the bacteremia isolates.

Conclusion: Transmission of MSSA is a contributing factor to the occurrence of S. aureus bacteremia in neonates. Sea and tsst-1 might play a role in neonatal S. aureus bacteremia.

Introduction

Staphylococcus aureus (S. aureus) is a well-established nosocomial pathogen that causes multiple types of neonatal infections [1,2]. Invasive S. aureus infections in neonates (e.g. bacteremia) are common in very low birth weight (VLBW) infants, which makes this bacterial species one of the most important pathogens in neonatal intensive care units (NICU) [3-5]. A significant risk factor for S. aureus bacteremia in VLBW infants is the presence of intravascular catheters, which are frequently required [6-8]. In addition, S. aureus bacteremia can result in severe complications such as endocarditis and osteomyelitis [5,9,10]. All-cause mortality among neonates suffering from S. aureus bacteremia varies between 10 and 20% [7,11]. There is thus an urgent need to prevent this infection. To do so, it is important to know the factors contributing to its high frequency and severity. Previously, the virulence factors tsst-1 and sea were implicated to play a role in S. aureus bacteremia [12-14]. Furthermore, transmission of S. aureus might contribute to the high frequency of bacteremia. Outbreaks of methicillin-resistant S. aureus (MRSA) at the NICU have been described and are relatively easy to detect [15-18]. Meanwhile, the detection of methicillin-susceptible S. aureus (MSSA) outbreaks seems to be more difficult, except for outbreaks in patients who suffer from a skin infection [19-22].
In this study, whole-genome sequencing (WGS), the typing method with the highest discriminatory power, was used to determine whether MSSA transmission and genetic makeup contribute to the occurrence of neonatal S. aureus bacteremia.

Population

The NICU of Erasmus MC-Sophia, Rotterdam, the Netherlands, is a level IV, 27-bed facility. It is divided into four units with six to eight beds each. About 750 neonates are admitted per year. Nearly 40% of them are below 32 weeks of gestation, and the majority were born in this hospital.

Screening

We included neonates with a presumed infection whose blood cultures, obtained between January 2011 and November 2017, were positive for S. aureus. Clinical data concerning gender, gestational age, birth weight, and survival were obtained from patient records.

S. aureus isolates

Blood from neonates was cultured in BACTEC Plus PEDS aerobic bottles and incubated in the Bactec FX (BD, Heidelberg, Germany). In the case of positive blood cultures, plates were inoculated and, after 16-24 h of incubation at 37°C, screened for S. aureus based on colony morphology. Identification was performed by means of a latex agglutination test (Slidex Staph Plus, bioMérieux, Marcy-l'Etoile, France) and/or via matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS system, Bruker). S. aureus isolates were stored at −20°C or −80°C until use. The VITEK 2 system (bioMérieux) was used for antimicrobial susceptibility testing (AST).

Whole-genome sequencing: transmission

S. aureus isolates were processed according to the bioMérieux EpiSeq cs V1 programme and sent to LGC Genomics GmbH (Berlin, Germany) for next-generation sequencing (NGS). We used Illumina chemistry, which generated paired-end 2 × 150 bp reads. Sequences were assembled using the proprietary built-in assembler from the CLC Genomics Workbench v11 software (Qiagen, Hilden, Germany) with default parameters. We analysed them by means of the available S. aureus core genome multilocus sequence typing (cgMLST) scheme [23] in BioNumerics 7.6.3 (bioMérieux, Sint-Martens-Latem, Belgium), which contains 1861 loci. Allele calling was performed using two algorithms, one based on the assembly using a BLAST approach (assembly-based calling) and one based on the trimmed sequencing data using a k-mer-based approach (assembly-free calling). A consensus of both algorithms was used to assign final allele calls: when both algorithms were in agreement, or when an allele call was made by only one of the algorithms, the allele call was considered in the consensus; when the two algorithms were in disagreement, the allele call was not considered in the consensus. Both allele calling algorithms were executed using default parameters. Conventional MLST types were inferred in silico from the WGS data. To this end, the seven MLST loci were identified using the sequence extraction tool and the MLST plugin from BioNumerics 7.6.3, which is synchronized to the pubMLST.org public repository (accession date: April 5, 2019). To visualise the genetic relatedness between the isolates, we used a minimum spanning tree (MST) for the cgMLST data. The MST was generated using default parameters, and no re-sampling was performed. Isolates with fewer than 12 allelic differences in the S. aureus core genome were considered genetically indistinguishable [23]. We defined a cluster as two or more genetically indistinguishable isolates and, within a cluster, considered transmission of S. aureus likely.
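To illustrate the clustering logic (a sketch of the underlying computation, not the BioNumerics implementation), pairwise allelic distances and single-linkage clusters at the fewer-than-12-differences threshold can be computed as follows. The profiles dict, mapping isolate IDs to tuples of allele calls with None for missing calls, is a hypothetical input format:

```python
from itertools import combinations

def allelic_distance(a, b):
    """Count loci with differing allele calls; loci with a missing call
    (None) in either profile are ignored."""
    return sum(1 for x, y in zip(a, b)
               if x is not None and y is not None and x != y)

def clusters(profiles, threshold=12):
    """Single-linkage clusters of isolates with < threshold differences,
    via a small union-find structure."""
    ids = list(profiles)
    parent = {i: i for i in ids}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i, j in combinations(ids, 2):
        if allelic_distance(profiles[i], profiles[j]) < threshold:
            parent[find(i)] = find(j)
    groups = {}
    for i in ids:
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= 2]
```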
To further validate the results of the cgMLST approach, we additionally evaluated transmission events using a SNP-based approach (Additional file 1: Table S1).

Virulence

The presence of virulence genes was assessed using the sequence extraction tool in BioNumerics 7.6.3. Extraction parameters (percentage coverage and identity) were individualised to accommodate the different levels of sequence diversity within and between the virulence genes. Anticipating problems upon assembling virulence genes containing repetitive motifs (sdrA, -B and -C, clfA and -B, cna, sasG) from the short-read sequence data, only the largest non-repetitive part of these genes was used for querying. In order to obtain data from a general S. aureus population, the prevalence of the virulence genes was also assessed in the genomic sequences available in the Refseq Genome Database, using the BLAST interface (https://blast.ncbi.nlm.nih.gov/Blast.cgi). This database contained 10,288 S. aureus genomes at the time of analysis. Virulence gene-specific search parameters were used as discussed above. The roles and functions of the S. aureus virulence genes were described in more detail earlier [12,24]. An overview of the analysed virulence genes, their roles, search parameters, and query sequences is shown in Additional file 2: Table S2.

Patient characteristics

After coagulase-negative staphylococci (CoNS), MSSA was the most frequent causative pathogen of bacteremia in neonates. Several species of CoNS were isolated from neonatal blood, but they were considered as one group. Twelve percent (n = 112) of the 926 positive blood cultures from neonates (one blood culture per episode per patient), taken in the period January 2011 - November 2017, were positive for MSSA. Fifty-nine of the 112 neonates (52.7%) with MSSA bacteremia were male. The median (interquartile range) gestational age and birth weight were 26 3/7 (25 1/7-30) weeks and 880 (680-1150) grams, respectively. The onset of all episodes of MSSA bacteremia occurred beyond 72 h after birth, at a median postnatal age of 10 (7-19) days. The overall mortality among the 112 included patients was 20.5% (23 neonates), and 11 of these 23 neonates died of MSSA septicaemia.

Genetic relatedness

One hundred and four of the 112 neonatal MSSA bloodstream isolates (93%) were available and therefore included for WGS (including only the first isolate per patient). Based on WGS, a total of 23 classical MLST types were identified. The most predominant MLST types were ST5 and ST45 (n = 16 for both). For 11 MSSA isolates, a novel MLST type was found. To assess the genetic relatedness between the 104 isolates based on the more discriminatory cgMLST scheme, we visualised the numbers of allelic differences between the isolates in Fig. 1. Twelve cgMLST clusters of genetically indistinguishable isolates were observed, containing a total of 33 isolates (2-4 isolates per cluster). In seven of these cgMLST clusters, at least two of the identified MSSA isolates were collected within a time period of one month. In two cgMLST clusters, all MSSA isolates were found within a time period of one year, but the shortest time interval between the isolates of two neonates was forty days. In the other three cgMLST clusters, there was a time interval of more than one year between the culturing of the MSSA bloodstream isolates of two neonates. The SNP approach confirmed the results of the cgMLST approach (Additional file 1: Table S1).
S. aureus virulence genes

An overview of the virulence genes present in the 104 MSSA isolates is provided in Table 1. Of the immunomodulatory proteins, staphylococcal complement inhibitor (scin) was present in 100% of all bloodstream isolates. Alpha-hemolysin (hla) was present in 99% of the isolates. We also found a 98-100% presence of the MSCRAMMs clumping factors A and B (clfA, clfB), immunodominant surface antigen A (isaA), and iron-responsive surface determinants A and H (isdA, isdH). When compared to a reference population of S. aureus genomes, a few observations stand out. Remarkably, staphylococcal enterotoxin A (sea) and toxic shock syndrome toxin 1 (tsst-1) were, respectively, 2.6 and 3.2 times more prevalent among the 104 neonatal bloodstream isolates than among the reference genomes. Likewise, staphylococcal enterotoxin H (seh) was 3.4 times more prevalent, although in absolute numbers this involved only a few isolates (6/104 versus 173/10,288). For the other virulence genes, no such increases were detected (Table 1).

Discussion

At our level IV neonatal intensive care unit, as in many centres [3-5], S. aureus is a frequent cause of neonatal bacteremia. In our study, we explored the role of MSSA transmission and the possible contribution of virulence genes. Using WGS, 12 different cgMLST clusters of MSSA isolates were found. Seven of these twelve cgMLST clusters included at least two MSSA isolates cultured from the blood of neonates within one month, indicative of transmission. Transmission should therefore be considered a contributing factor to the frequent occurrence of neonatal S. aureus bacteremia, as was recently described by Rouard et al. [13]. Although it seems reasonable to assume that transmission, irrespective of the source, can only occur through the hands of healthcare workers (HCWs), we did not prove this, since we did not culture the environment, the HCWs, or the parents. Still, general measures such as improvement of the current (daily) cleaning and disinfection procedures, as well as hand hygiene, are likely to help. It has already been shown that neonatal hospital-acquired infections can in part be prevented by strict infection control measures [8,25,26]. In addition, reinforcement of the implementation of central-line bundles has the potential to reduce the incidence of central line-associated bloodstream infections (CLABSIs); although these bundles are already implemented, compliance can still be improved and additional measures can be explored [27]. Besides transmission, we determined whether the presence of certain virulence factors is associated with neonatal S. aureus bacteremia. Since it was difficult to define a suitable control population of neonates, we chose to compare the neonatal S. aureus bacteremia isolates to all available S. aureus genomes in the Refseq Genome Database (N = 10,288 at the time of analysis). Remarkably, the genes sea and tsst-1 were found 2.6 and 3.2 times more often, respectively, in the MSSA bloodstream isolates than in the reference genomes of the Refseq Genome Database. The overrepresentation of tsst-1 could not be explained by the frequent presence of MLST ST5 and ST45 in our isolate collection, since tsst-1 was not associated with these sequence types.

[Fig. 1. Minimum spanning tree based on the core genome of 104 S. aureus isolates. Colours indicate the classical MLST sequence types (ST). Twelve cgMLST clusters containing at least two isolates with a maximum of eleven allelic differences are indicated with a grey background.]
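The enrichment factors reported above are simple prevalence ratios. The sketch below reproduces that arithmetic and adds a two-sided Fisher exact test for the study-versus-reference comparison; the significance test is our illustrative addition, not an analysis performed in the study.

```python
from scipy.stats import fisher_exact

def enrichment(hits_study, n_study, hits_ref, n_ref):
    """Prevalence ratio of a gene in study isolates vs. reference genomes,
    with a two-sided Fisher exact p-value (illustrative addition)."""
    ratio = (hits_study / n_study) / (hits_ref / n_ref)
    contingency = [[hits_study, n_study - hits_study],
                   [hits_ref, n_ref - hits_ref]]
    _, p_value = fisher_exact(contingency, alternative="two-sided")
    return ratio, p_value

# seh counts reported in the text: 6/104 bloodstream vs. 173/10,288 Refseq.
ratio, p = enrichment(6, 104, 173, 10288)
print(f"fold-enrichment = {ratio:.1f}, p = {p:.3g}")  # ratio ≈ 3.4
```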
Regarding sea, 11 of the 25 isolates carrying this gene belonged to ST5. Still, this cannot fully explain the association between sea and neonatal MSSA bacteremia. Many studies have examined S. aureus toxins and their pathogenic roles, particularly sea and tsst-1. It was previously described that antibody responses to these two specific toxins were higher in patients with S. aureus bacteremia than in control patients [12]. In addition, in a recent publication on a NICU MSSA outbreak, tsst-1 and especially sea were found more often in bloodstream isolates than in colonisation isolates [13]. Another review article describes the association of these toxins with bacteremia [14]. This suggests that sea and tsst-1 might play a role in the pathogenesis of S. aureus bacteremia. The other virulence genes were present in virtually all study isolates, but also in virtually all reference genomes (Table 1). Our study has its limitations. It was performed retrospectively, in a single centre. Furthermore, we considered fewer than 12 allelic differences in the core genome as the cut-off for genetic indistinguishability.

Conclusions

In conclusion, transmission of MSSA seems to be a contributing factor to the occurrence of S. aureus bacteremia in neonates. The possibility of MSSA transmission in neonatal intensive care should be explored to prevent this invasive and serious infection. The exact role of sea and tsst-1 warrants further investigation.

Authors' contributions

BS contributed to the study design, collected, analysed and interpreted the data, and drafted the initial manuscript. NV conceptualised and designed the study, collected, analysed, interpreted and supervised the data collection, and critically revised and reviewed the manuscript for intellectual content. MV contributed to the study design, while MV WB collected and interpreted the data and critically revised and reviewed the manuscript for intellectual content. RK IR contributed to the study design, collected data, and critically revised and reviewed the manuscript for intellectual content. DDC CK contributed to the study design, collected, analysed and interpreted the data, and critically revised and reviewed the manuscript for intellectual content. AVB WG conceptualised and designed the study, interpreted the data, and critically revised and reviewed the manuscript for intellectual content. The author(s) read and approved the final manuscript.

Funding

bioMérieux funded this study. bioMérieux designs, develops and sells diagnostic tests in the domain of infectious diseases. The company had no influence on the design and execution of the current study.

Availability of data and materials

The data generated and analysed in this study are included in the current article.

Ethics approval and consent to participate

Because this was a retrospective observational study in which anonymised patient data were used, collected during routine clinical practice, informed consent was not mandatory according to the Dutch Medical Research Involving Human Subjects Act (WMO). The Institutional Ethics Review Board of the Erasmus MC reviewed the study protocol and provided an exemption from formal ethical assessment (MEC-2015-306), based on the non-interventional design. The study was carried out in accordance with the current ethical guidelines for epidemiological research.

Consent for publication

Not applicable.

Competing interests

All other authors declare that there are no competing personal or institutional interests.
Second harmonic generation from metallo-dielectric multilayer photonic band gap structures

We experimentally and theoretically investigate the second order nonlinear optical response of metallo-dielectric multilayer structures composed of Ag and Ta2O5 layers, deposited by magnetron sputtering. Second harmonic generation measurements were performed in reflection mode as a function of incidence angle, using femtosecond pulses originating from a Ti:Sapphire laser system tuned to 800 nm. The dependence of the generated signal was investigated as a function of pump intensity and polarization state. Our experimental results show that the conversion efficiency of a periodic metallo-dielectric sample may be enhanced by at least a factor of 30 with respect to the conversion efficiency of a single metal layer, thanks in part to the increased number of active surfaces and to pump field localization and penetration inside the metal layers. The conversion efficiency maximum shifts from 70 degrees for the single silver layer down to approximately 55 degrees for the stack. The experimental results are found to be in good agreement with calculations based on coupled Maxwell-Drude oscillators under the action of a nonlinear Lorentz force term.

Introduction

The study of second order nonlinear optical effects in centrosymmetric media has intrigued researchers since the early days of nonlinear optics, because such media display peculiar dynamical characteristics with respect to more conventional, non-centrosymmetric media. These peculiarities arise because the electric dipole term vanishes when inversion symmetry is present in the lattice structure. This situation applies to most metals, since they possess centrosymmetric cubic crystal structures. The linear optical susceptibility in metals thus typically includes contributions from conduction [1] and bound [2] electrons. In 1964, using a classical electron oscillator model, Adler pointed out [3] that in centrosymmetric media the SH source terms consist of a magnetic dipole term, originating from the Lorentz force on the electrons, and of an electric quadrupole contribution, through the Coulomb force. Subsequently, Jha [4] used a free electron gas model to show that the quadrupole source term is equivalent to a nonlinear surface contribution. He was the first to propose that SHG in metals could be explained by two phenomenological contributions of the form $\mathbf{P}^{2\omega} = \alpha\,\mathbf{E}^{\omega}(\nabla\cdot\mathbf{E}^{\omega}) + \beta\,\mathbf{E}^{\omega}\times\mathbf{H}^{\omega}$, where α and β are predetermined, frequency-dependent coefficients [4] that multiply a surface and a volume (or bulk) contribution, respectively. The first experimental results, outlined by Brown and coworkers, appeared to confirm the existence of the two SH source terms through excitation of a silver layer by a pump linearly polarized either normal or parallel to the plane of incidence [5,6], which can in turn excite volume or surface sources, respectively. Later, Bloembergen and Shen [7] noted that the SHG reported in references [5,6] was most likely due to contributions from core electrons, as agreement with theory is obtained only when both free and bound electron contributions are considered [8]. Meanwhile, more experimental progress was made as additional metals and configurations were explored.
For example, SHG was reported in total internal reflection from a film immersed in a denser medium [9], from opaque films deposited on glass prisms [10], and from thin metal films sandwiched between two dielectric layers [11], where coupling with surface plasmons and enhanced SHG was also observed. Several phenomenological approaches were proposed in order to fit the experimental data, as exemplified by the work of Rudnick and Stern [12], who also used two parameters to describe the nonlinear SH source currents. Recently, the use of metals for applications in the optical range has grown, together with interest in their nonlinear optical applications. It has been shown that thin metal films can be included in multilayer structures to achieve high transmittance in the visible range and beyond, despite the large imaginary part of the refractive index typical of metals [13,14]. These metallo-dielectric multilayer structures, also known as transparent metals, comprise both periodic and symmetric structures composed of alternating metallic and dielectric or semiconductor layers. Ordinarily, light can propagate inside a thick metal layer only up to a small distance (the skin depth, which for typical metals ranges between 5 and 10 nm in the visible range), beyond which it is mostly attenuated. In transparent metals, the skin depth limit is overcome: a resonant tunneling mechanism renders hundreds of nanometers of metal transparent, and allows both TE- and TM-polarized fields to become localized inside both the metal and dielectric layers, without the usual detriment of absorption associated with the large imaginary index component. These structures thus turn out to be an extraordinary instrument to access and enhance the nonlinear optical response of nonlinear layers [15], and in particular the second [16] and third [17,18] order optical nonlinearities of metals. This latter feature is particularly interesting when investigating second order nonlinear effects, because most metals have centrosymmetric crystal structures, so that the SH source terms arise from magnetic dipole and electric quadrupole contributions [19]. In the case of bulk metals, surface effects play a dominant role and are responsible for most of the generated signal. On the other hand, in non-centrosymmetric crystals surface effects may become significant and contribute to SHG only when the films are either amorphous or very thin. Thus the opportunity of including several metal layers in stacks where the light can become strongly localized opens new vistas, and may broaden the range of likely applications due to the possibility of increasing the number of active surfaces and volume contributions for the enhancement of SHG.
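The resonant-tunneling transparency described above can be reproduced with a standard characteristic-matrix (transfer-matrix) calculation. The sketch below is a minimal normal-incidence model with scalar refractive indices; the layer thicknesses and the fixed complex Ag index near 800 nm are rough illustrative values, not the parameters of the actual samples, which would require dispersive material data.

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of a homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmittance(layers, lam, n_in=1.0, n_out=1.5):
    """Intensity transmittance of a stack of (index, thickness) layers,
    from an incidence medium n_in into a substrate n_out."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    denom = (n_in * M[0, 0] + n_in * n_out * M[0, 1]
             + M[1, 0] + n_out * M[1, 1])
    t = 2 * n_in / denom
    return (np.real(n_out) / n_in) * abs(t) ** 2

# 5-period Ta2O5/Ag stack with illustrative thicknesses (assumed values):
n_ag, n_ta = 0.15 + 5.3j, 2.1  # rough indices near 800 nm (assumed)
stack = [(n_ta, 120e-9), (n_ag, 20e-9)] * 5 + [(n_ta, 120e-9)]
print(transmittance(stack, 800e-9))
```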
In what follows, we describe the details of sample preparation and realization, the experimental setup used to conduct the SHG measurements, and the theoretical model adopted for the analysis of the experimental data.

Sample Preparation

In our experiments, we measured the second harmonic signal in the blue spectral region (400 nm) generated by different Ag/Ta2O5 multilayer structures. The pump consisted of pulses approximately 150 fs in duration, originating from a Titanium:Sapphire pulsed laser system centered at a wavelength of 800 nm and having a repetition rate of ~1 kHz. The samples were grown on glass substrates by means of a magnetron sputtering system [10]. Magnetron sputtering is a well-established thin-film deposition technique that allows the deposition of several materials without breaking vacuum, and is thus well-suited for the fabrication of multilayer structures.

Second Harmonic Generation

The second harmonic signal was measured to evaluate the conversion efficiency [23] of the multilayer samples, which was compared to the conversion efficiency of a single metal layer. The technique consists of measuring the reflected SH signal for a given polarization state of both the fundamental input and SH output beams. A schematic representation of the experimental setup is shown in Figure 2. The main beam was focused onto the sample by a lens with a 150-mm focal length. The polarization states of both the fundamental and the generated beams may be set via a half-wave plate placed before the lens, while a polarizer for signal analysis is placed before the detector. In order to remove the SH signal produced by the wave plate's crystal, owing to the short pulse duration, a long-pass filter (GG495, Thorlabs) was placed after the half-wave plate. The sample was placed on a rotational stage that allowed setting of the incidence angle with a resolution of 0.5 degrees. The transverse profile of the fundamental beam was measured and found to be Gaussian, with a waist ranging from 400 to 700 μm depending on the sample-to-focus distance. This distance could be varied by adjusting the lens's position and by repositioning the rotational sample holder in the center of the beam. After being reflected by the sample, the fundamental and second harmonic beams were sent through a glass prism and thus separated. A set of dichroic filters was then used to further suppress any residual and scattered FF light, ensuring that only the SH beam was directed to the photomultiplier tube and then analyzed by a 500 MHz digital oscilloscope. The photomultiplier output was fed into a box-car averager, increasing the signal-to-noise ratio. The calibration curve of the photomultiplier response was accurately measured with a reference BBO crystal. When necessary, detector saturation was prevented by using a set of linear neutral density filters whose transmittance values were taken into account in the data processing. The incident FF light was strictly plane polarized, and the polarization state introduced by the half-wave plate was checked by a preliminary calibration carried out with a second, crossed polarizer used to analyze the polarization of the FF beam before the sample, in order to avoid undesired components of the FF electric field. Experimental measurements show that the largest signal is recorded when the polarization of the fundamental beam is set to p, while the SH signal is always p-polarized. A first set of measurements was taken with increasing FF peak power, in order to check for a quadratic dependence of the SH signal on the FF peak power. We investigated a number of periodic and symmetric samples (the latter having dielectric entry and exit layers), and in all cases we found that the generated power at 400 nm depends quadratically on the FF peak power.
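Such a check amounts to verifying that the slope of log(SH signal) versus log(pump power) is close to 2; a minimal sketch, where the data arrays are placeholders for measured values:

```python
import numpy as np

def power_law_exponent(pump_power, sh_signal):
    """Least-squares slope of log(SH) vs. log(pump); ~2 confirms SHG."""
    slope, _intercept = np.polyfit(np.log(pump_power), np.log(sh_signal), 1)
    return slope

# Placeholder data: a perfect quadratic dependence gives exactly 2.
pump = np.array([1.0, 2.0, 4.0, 8.0])
sh = 3e-11 * pump**2
print(power_law_exponent(pump, sh))
```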
The next step was to investigate the dependence of the SH signal on the laser light polarization, by varying the angle φ between the FF polarization direction and the plane of incidence. According to the arguments presented in, e.g., reference [6], when the FF electric field component in the plane of incidence is zero (s-polarized pump), there should be only a bulk nonlinear contribution, excited through the Lorentz force. This contribution, which is directed longitudinally, in the same direction as the wave vector (radiation pressure), can still propagate in the presence of a boundary, i.e. for nonzero incidence angle. On the other hand, when the FF is polarized in the plane of incidence, the SH contribution is predominantly of surface origin, arising from the induced nonlinear currents, or equivalently from longitudinal field discontinuities. In order to measure the SH dependence on the FF polarization direction, the polarization of the FF was varied between 90° and 0° at a fixed incidence angle of 45°. By analyzing the SH polarization direction we found that the SH light is always polarized in the plane of incidence, i.e. p-polarized. In Fig.(3) we report the curves of the SH signal as a function of the polarization direction φ of the FF beam. The figure shows that the SH signal does not go to zero when the FF is polarized normally with respect to the plane of incidence, indicating that in the multilayer structure the nonlinear process is also excited via the bulk term of the nonlinearity. We thus introduce the parameter M, defined as the ratio of the SH signals measured for φ=90° and φ=0°, respectively; for a single Ag layer, M was found to be in the range 0.02 to 0.06. This result means that SHG from volume contributions is not negligible, and that volume sources can be excited in multilayer stacks by choosing suitable dielectric layer thicknesses between two consecutive metal layers to form a transparent metallo-dielectric photonic band gap structure.

Theoretical Model

Before discussing the measured angular dependence of second harmonic generation, we describe the theoretical model adopted to predict second harmonic generation from centrosymmetric materials. Although Sipe's hydrodynamic model [24] is widely used to analyze experimental data [10,11,25], we assume that the metal consists of a free electron gas described by the Drude model, under the action of a driving electromagnetic field [1,26]. Under these conditions, longitudinal and transverse nonlinear currents arise under the action of the nonlinear Lorentz force [26]. It is widely known that metallic data cannot generally be fitted throughout the visible and near-IR ranges by a single set of (γ, ω_p) parameters, which stand for the damping coefficient and plasma frequency, respectively. Actual metal data, as exemplified in Palik's handbook [22], display core electron contributions well into the visible range, so that a more complex system of equations must be used. One possible way to proceed is to supplement the simple Drude model with one or more Lorentz oscillator equations that describe the core electrons. Since for the moment we are interested in two frequencies only, FF and SH, a simpler way forward consists of fitting the data using the Drude model and two different sets of (γ, ω_p) parameters, each set fitted to the frequency of interest. In doing so we also seek to match the slope of the complex dielectric function at each frequency, in order to impart the correct group velocities to both the FF and SH frequencies.
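This two-frequency fit can be set up as a small least-squares problem: choose (ω_p, γ) so that the Drude dielectric function $\varepsilon(\omega) = 1 - \omega_p^2/(\omega^2 + i\gamma\omega)$ and its frequency derivative simultaneously match the tabulated complex values at one frequency. A hedged sketch follows; the target values would come from tabulated data such as Palik's, and the initial guess (in eV-like frequency units) is an assumption:

```python
import numpy as np
from scipy.optimize import least_squares

def drude_eps(w, wp, g):
    """Drude dielectric function, eps = 1 - wp^2 / (w^2 + i*g*w)."""
    return 1 - wp**2 / (w**2 + 1j * g * w)

def drude_deps(w, wp, g):
    """Frequency derivative d(eps)/dw of the Drude dielectric function."""
    return wp**2 * (2 * w + 1j * g) / (w**2 + 1j * g * w)**2

def fit_drude(w, eps_target, deps_target, guess=(9.0, 0.05)):
    """Fit (wp, g), in the same frequency units as w, so that both the
    complex dielectric function and its slope match at frequency w."""
    def residuals(p):
        wp, g = p
        r1 = drude_eps(w, wp, g) - eps_target
        r2 = drude_deps(w, wp, g) - deps_target
        return [r1.real, r1.imag, r2.real, r2.imag]
    return least_squares(residuals, guess).x
```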
In principle, this procedure can be repeated for an arbitrary number of harmonics, but it becomes more difficult to simultaneously fit both the dielectric function and its derivative in the proximity of the plasma frequency. We note, however, that for structures only a fraction of a wavelength thick, fitting the group velocity is not as important as fitting the dielectric function, since propagation distances are extremely small [18]. In Gaussian units, the system of equations we solve consists of the Maxwell-Drude equations of motion, with the free-electron dynamics driven by the full Lorentz force, $m\ddot{\mathbf{r}} + m\gamma\dot{\mathbf{r}} = -e\mathbf{E} - (e/c)\,\dot{\mathbf{r}}\times\mathbf{H}$, coupled to Maxwell's equations for the fields. We assume a right-handed coordinate system and p-polarized (TM) pump and second harmonic fields, with the corresponding macroscopic polarizations defined at the fundamental and SH frequencies. We have chosen λ_r = 1 μm as the reference wavelength, and have scaled coordinates, time, and fields accordingly. It is known that the effective electron mass in silver is close to the bare electron mass [27]; in the context of Eqs. (4)-(5), and for simplicity, we set m equal to the bare electron mass. The linear dielectric response of silver is assumed to be Drude-like, $\varepsilon(\omega) = 1 - \omega_p^2/(\omega^2 + i\gamma\omega)$. We use the propagation scheme of reference [18] to propagate the fields, and a simple, second-order accurate predictor-corrector algorithm to advance the temporal solutions of the currents and polarizations.

Results and Discussion

In Fig.(4) we depict typical incident and scattered pump and SH pulses. Details about grid size and other discretization parameters are found in the caption. In Fig.(5) we report the predictions of our model for the SH conversion efficiency η vs. incidence angle, for a single 20 nm-thick silver layer. We define conversion efficiency as the ratio of either the transmitted or the reflected SH energy to the incident pump energy. The results are consistent with those found throughout the literature; that is, maximum conversion efficiency for Ag occurs at approximately 70° in reflection. Our results suggest that the transmitted SH signal also peaks at 70°. For a peak pump intensity of approximately 6 GW/cm², the predicted conversion efficiency upon reflection is ~1.4×10⁻¹¹. The calculated conversion efficiencies converge quickly for pulses only a few tens of femtoseconds in duration and having a relatively small spot size because, unlike the multilayer stack, the single metal layer presents no significant structure in the transmission function. In Fig.(6) we show the predicted transmitted and reflected SHG efficiencies η as a function of incidence angle, for the 5-period metallo-dielectric stack depicted in Fig.(1). The incident field is Gaussian in space and time, with a spot size approximately 30 μm wide and 150 fs duration (~1/e width). The input peak intensity is taken to be roughly 6 GW/cm². A maximum conversion efficiency of ~2.8×10⁻¹⁰ (an improvement by a factor of ~20 compared to the single metal layer) is thus predicted for the multilayer stack, and it occurs at ~55° for both the transmission and reflection coefficients. These results suggest that SHG is most efficient for pulses that are long enough to resolve the features of the transmission resonances shown in Fig.(1), when pump penetration depth inside the metal and absorption are maximized, and when the longitudinal component of the electric field displays the largest discontinuities. In Fig.(7) we show a plot of the corresponding forward and backward SH conversion efficiencies as a function of pulse duration. Longer, narrower-bandwidth pulses tend to localize better inside the stack, leading to higher local field intensities.
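As an illustration of the temporal integrator mentioned above, the sketch below applies a second-order predictor-corrector (Heun) step to the linear Drude current equation alone, $\dot{J} = -\gamma J + (\omega_p^2/4\pi)E$ in Gaussian units, with a prescribed driving field on a fixed time grid. The full model adds the nonlinear Lorentz-force terms and couples the currents back to Maxwell's equations, which this fragment does not attempt.

```python
import numpy as np

def heun_drude_current(E, dt, gamma, omega_p):
    """Advance J in dJ/dt = -gamma*J + (omega_p**2 / (4*pi))*E with a
    predictor-corrector (Heun) scheme; E is sampled on the same grid."""
    c = omega_p**2 / (4 * np.pi)
    J = np.zeros_like(E)
    for n in range(len(E) - 1):
        f_n = -gamma * J[n] + c * E[n]               # slope at t_n
        J_pred = J[n] + dt * f_n                     # Euler predictor
        f_next = -gamma * J_pred + c * E[n + 1]      # slope at t_{n+1}
        J[n + 1] = J[n] + 0.5 * dt * (f_n + f_next)  # trapezoidal corrector
    return J
```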
The findings on pulse duration and field localization are consistent with the results discussed in reference [16], where an illustrative, simplified model of SHG from a metallo-dielectric stack was discussed in the context of normal incidence and uniformly distributed nonlinear dipoles. Field localization, the bandwidth of the incident pulse, and phase matching conditions are usually of central importance in the study of SHG in structures of finite length [28,29]. It has been demonstrated that either phase matching conditions [29] or field localization effects [28] can dominate the conversion process, depending on structure size and field overlap. In typical symmetric and asymmetric transparent metallo-dielectric stacks that are less than one wavelength thick, the fields become localized inside both the metal and the dielectric layers [16,17,30]. The calculations consistently show that linear pump and nonlinear SH currents and dipoles are present at each metal surface, with relatively high field intensities inside each layer and longitudinal field discontinuities at every interface. We illustrate this in Fig.(8). Our results suggest that a number of factors combine to yield enhanced SHG, namely: (i) field localization inside the metal layers; (ii) pulse duration, which is intimately connected to the first point; (iii) tuning at frequencies near the long-wavelength band edge, where field penetration depth inside the metal and linear absorption are maximized; and (iv) the ability to establish field discontinuities and nonlinear dipole distributions throughout the stack. All these factors may be termed volume contributions, which have no counterpart in isolated, relatively thick metal layers. It is relatively easy to establish that in isolated metal layers surface effects are directly responsible for most of the observed SHG. One may show this by calculating the field profiles, and by monitoring the difference between the intensities just inside and just outside the entry surface. In Fig.(9) we plot such a field discontinuity as a function of incidence angle for an incident field of unit amplitude. An examination of the figure and even a cursory comparison with Fig.(5) suffice to confirm a direct correlation between surface effects and large SH conversion efficiencies, as both display the same angular dependence. Thinner metal layers display a similar response. We now examine a simple example that illustrates how volume effects may indeed dominate over surface effects in the case of transparent metal stacks. We examine the field profiles for the same periodic structure considered above, except that now we turn the structure around, so that the field is incident on the metal rather than on the dielectric layer. Of course, the linear transmittance of the stack does not change, regardless of the direction of approach. However, if light is incident on the metal layer, a large field discontinuity is recorded at the metal interface rather than at the dielectric interface. All things being equal, the large field discontinuity present in the reversed stack does not at all improve SHG conversion efficiencies. In fact, reversing the stack yields slightly lower reflected SH efficiency, with even smaller transmitted SHG. As a result, one might surmise that volume contributions must compensate the evident surface effect that characterizes the sample when it is positioned so that light is incident on the metal side. Similar results were obtained for a variety of stacks.
These results seem to suggest that volume contributions may indeed play a role more pronounced than one may be able to presently discern. However, to arrive at such a definitive conclusion, one should construct a model where it is possible to selectively isolate surface from volume contributions, and then integrate the equations of motion to record the effect. Unfortunately, the model we use suggests that inside the transparent metallo-dielectric stacks surface and volume contributions may be inextricably linked, thus making it difficult to distinguish their relative importance as the fields actually penetrate and are relatively intense inside the metal layers. There are several issues that one must take into consideration when adding, subtracting, thickening, or thinning metal or dielectric layers. For example, adding periods generally increases reflections, shifts and narrows the transmission resonances, and the field becomes better localized inside the dielectric layers because the metal layers act as better mirrors. Thinning the metal layers and increasing their number requires adjustment of the dielectric layer thickness in order to keep the resonance tunneling mechanism operating within a desired wavelength range, and to keep both fields tuned inside a pass band. One might think that having as many metal layers as possible can increase conversion efficiency. This is generally not the case, because volume contributions also come in the form of enhanced linear absorption (as a result of field localization inside the metal), which can overwhelm any nonlinear gain. Therefore, structures that contain many layers actually may perform worse than a single metal layer, as the FF, the SH or both fields may slide into their respective gaps. Finally, it is noteworthy that for relatively large incident angles, such as those we are considering, the scattered SH fields are generated as they propagate sideways along the length of the metal layers, for several tens of microns before they exit the structure, as Fig.4 suggests. This naturally translates into a great deal of effective instantaneous losses, which combine with an instantaneous gain large enough to yield the modest conversion efficiencies that we observe. These results thus generally suggest that although there is a strong hint that volume contributions may in effect play a role far more important than surface discontinuities, the examples we have investigated, which include periodic and a variety of symmetric, more transmissive stacks, at present suggest that it is difficult to extract their relative importance. When designing the stacks one should be make judicious choices in the selection of the number of layers, their relative thickness, and tune the fields at the long wavelength band edge, at a place of relatively high linear absorption, where the fields are still well-localized inside the metal. At the same time, one should avoid tuning where there is strong feedback, such as at the peak of narrow resonances, which have a tendency to kick the field back into the dielectric layers and reduce nonlinear gain. One can easily see that the subject is extremely complex and interesting, primarily because it brings us full circle to fundamental questions and issues explored during the early history of nonlinear optics. For this reason alone the subject deserves to be investigated further. 
Suffice it to say here that the newly acquired ability of the fields to penetrate and dwell inside metal layers, combined with the ability to excite multiple metal surfaces, changes the dynamical characteristics of SHG in metals, with competing surface and volume contributions. This statement is reflected in our simulations and in our findings, as reported above. One final point worthy of note should be made about spot size. Although the spot size used in the calculations (~30μm) is significantly smaller than that used in the experiments (upward of 500μm), a 30-μm beam width corresponds to a fairly narrow bandwidth of transverse k-vectors that tends to resolve well all features found in the transmission function of Fig.1, for example. In other words, plane wave results are quickly approached provided the beam waist is taken to be at least several wavelengths wide. In Fig.(10) we report the measurements performed in reflection mode as a function of the incidence angle for the 5-period Ta2O5/Ag sample, for an input FF intensity of ~6 GW/cm^2. The polarization of both the fundamental and generated beams lies in the plane of incidence. As a comparison, we also plot the measurements obtained for a single Ag layer 20nm thick, obtained under similar experimental conditions. Just as predicted by our model in Figs. (5)-(6), the signal arising from the multilayer structure displays a maximum value at an incidence angle of ~55°, instead of 70° for the single metal layer. The theoretical predictions for reflection are also plotted in Fig.(10). The experimental data reported in the figure suggest that the SH signal generated inside the metallo-dielectric stack is enhanced by approximately a factor of 30 relative to the maximum conversion efficiency of the single 20nm-thick Ag layer. Given the extreme complexity of the model, as exemplified by Eqs. (4)-(5), and some uncertainty about the precise peak intensity that reaches the stack, one may objectively state that the agreement between our theory and our experiment is quite good, especially considering that the theoretical model has no adjustable parameters. Other possible sources of uncertainty include small deviations in layer thicknesses, and third order effects inside the metal layers, which may lead to band shifts and nonlinear absorption [17,18] and which the current model does not take into account. Further studies will focus on extending the model to include a third harmonic frequency, third order effects, and the evaluation of conversion efficiency for other geometrical configurations and metals, all of which might further clarify the relative importance and interplay between surface and volume contributions.

Conclusions

In summary, we have theoretically and experimentally investigated second harmonic generation from metallo-dielectric, Ta2O5/Ag multilayer stacks.

[Figure caption fragment] SH signal vs. pump beam polarization state, measured in reflection mode, at an incidence angle of 45°, for the periodic sample described in Figure 1. φ represents the angle between the pump beam polarization direction and the plane of incidence, i.e., when φ=0° the pump beam is p-polarized, while for φ=±90° the pump beam is s-polarized. The SH signal was found to be always polarized in the plane of incidence (p).

[Figure caption fragment, referring to the structure of Fig.(1)] Incident pulse duration is ~150 fs, with a spot size approximately 30 microns wide and peak intensity ~6 GW/cm^2. In this case, field localization and pump penetration inside the sample cause nearly 50% of the incident pump energy to be absorbed. Maximum conversion efficiency and maximum absorption angles nearly coincide. On the right axis we show the remaining pump energy (▲, normalized to unity).
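A quick back-of-the-envelope check of the spot-size remark above: for a Gaussian beam the far-field divergence half-angle is roughly λ/(π w_0), so a 30-μm waist spans only a fraction of a degree of transverse k-vectors. The wavelength below is an assumption based on the 1 μm reference wavelength used in the scaling; the numbers are illustrative only.

import math

lam = 1.0e-6    # assumed pump wavelength in meters (the reference wavelength of the model)
w0 = 30.0e-6    # Gaussian beam waist used in the simulations
half_angle = lam / (math.pi * w0)     # far-field divergence half-angle in radians
print(math.degrees(half_angle))       # ~0.6 degrees: narrow compared with the degrees-wide angular features of Figs.(5)-(6)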
2019-04-12T17:05:00.652Z
2008-01-04T00:00:00.000
{ "year": 2008, "sha1": "0b21197ea0b58a1fe68334e5f40b215d0516cc51", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0801.0637", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0b21197ea0b58a1fe68334e5f40b215d0516cc51", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
42867834
pes2o/s2orc
v3-fos-license
Raman scattering from a superconductivity-induced bound state in $MgB_2$

It is shown that the sharp peak in the $E_{2g}$ Raman spectrum of superconducting $MgB_2$ is due to a bound state caused by the electron-phonon coupling. Our theory explains why this peak appears only in the spectra with $E_{2g}$ symmetry and only in the $\sigma$ but not $\pi$ bands. The properties of the bound state and the Raman spectrum are investigated, also in the presence of impurity scattering. PACS numbers: 74.70.Ad, 74.25.Nf, 74.25.Kc

Electronic Raman scattering in superconductors probes the density and, to some extent, the momentum dependence of quasi-particle excitations across the gap. As a result the spectra are usually dominated by broad pair-breaking peaks reflecting the magnitude and anisotropy of the gap as well as scattering processes of the quasi-particles. Experimental spectra in the two-gap superconductor MgB_2 show the typical features of a dirty s-wave superconductor [1,2]. A new and surprising feature in the E_2g spectrum is a sharp peak near the larger of the two gaps which, in our opinion, cannot be explained as a pair-breaking peak because of the large impurity scattering rate of quasi-particles in the present MgB_2 samples. Below we will argue that this peak is the analogue of the magnetic resonance peak observed in high-T_c cuprates [3], where the electron-phonon interaction takes over the role of the Heisenberg interaction and strongly scatters the excitations across the gap. Neglecting the momenta of the incident and scattered photons, the differential Raman cross section can be written in terms of e, the charge of an electron, n(ω), the Bose distribution function, ω, the difference in frequency between incident and scattered light, and χ'', the imaginary part of a retarded Green's function χ at zero momentum. The corresponding Matsubara function is expressed in terms of β, the inverse temperature 1/T, the bosonic Matsubara frequencies ω_n = 2nπT, the time-ordering operator T_τ, and an effective density operator ρ with momentum zero. In the following we will be interested in spectra related to the E_2g symmetry of D_6h, the point group of MgB_2. ρ is then written in terms of c†_σn(k) and c_σn(k), the creation and annihilation operators for electrons with momentum k, spin direction σ, band label n, and energy ε_n(k). MgB_2 has only one Raman-active q = 0 phonon, and this phonon has E_2g symmetry. Its frequency is denoted by Ω, its creation and annihilation operators by b† and b, respectively, and the corresponding element of the phononic Raman tensor by R. Without loss of generality we assume that this phonon transforms as the first basis vector of the two-dimensional E_2g representation in accordance with Eq.(4). For the evaluation of χ we use a Hamiltonian H built from the following ingredients: ∆_n is the gap parameter for s-wave superconductivity in band n, Ω_j(q) the frequency of a general phonon with momentum q and branch label j, g the coupling constant for intraband electron-phonon scattering, and V a random potential for intraband impurity scattering.
Interband phonon scattering can be neglected in H ′ because only zero momentum transfers occur in the approximation used below. In a first step we perform an infinite summation over bubble diagrams by introducing the irreducible Green's functionχ.χ contains all diagrams to χ which cannot be decomposed into two parts by cutting one phonon line. The average over impurities yields impuritiy lines with similar properties as phonon lines. However, bubble diagrams connected by impurity lines are not possible in this case so that the above definition of irreducibility is appropriate. Analytically, one obtains χ =χ 11 + (χ 12 + R)D(R +χ 21 ), The omitted frequency and momentum arguments in the Green's functions in Eqs. (7) and (8) are iω n and 0, respectively.χ 11 denotes the irreducible Green's function associated with the two vertices γ n (k). Similarly, the vertices in the functionsχ 12 andχ 22 are γ n (k) and g n (k0), and two times g n (k0), respectively. The free phonon progator D (0) is given by −2Ω/(ω 2 n + Ω 2 ). A sensible approximation for the evaluation ofχ is the ladder approximation plus the corresponding self-energy corrections where the interaction lines are due to phonons or impurities. Only that part of the interaction can contribute in the ladder diagrams which transforms in the same way as the vertices γ and g. This means in our case that only the E 2g component of the phonon-mediated interaction, which usually is considered to be neglegible in a s-wave superconductor, would enter. We therefore will evaluateχ only in the presence of random impurities using the Born approximation and the dirty limit for each band. Assuming that the vertices and the interaction can be evaluated right on the Fermi surface, expanding γ and g in terms of Fermi surface harmonics [4] and assuming that the interaction is diagonal in L we obtaiñ is the density of states at the Fermi surface for one spin direction due to the band n and 1/τ n an effective scattering rate. After an analytic continuation iω n → ω + iη the imaginary part of the Green's functioñ χ, which is independent of L, becomes [5] Θ is the theta function and F , E, and Π are complete elliptical integrals of the first, second, and third kinds, respectively. The existence of only two gaps in the experimental spectra [1,2,6] as well as theoretical arguments [7] suggest that the dirty limit applies even within the twoband complex of σ and π bands (denoted by the index ρ in the following) and that the interband impurity scattering between σ and π bands is neglegible. As a result ∆ n and τ −1 n can be considered to be the same within the manifold of σ or π bands. Introducing then the effective couplings we obtainχ Choosing for the first function Φ (n) 1 (k) the properly normalized function ∼ γ n (k), the summations over L collapse to one term L = 1 if either i or j is equal to 1. In the clean limit 1/τ → 0 Eq.(12) reduces to the analytical formula Eq.(16) of Ref. ( [8]). Using the tight-binding fit to the band structure of Ref. [9], eV as energy and the lattice constant a as length units, we find N (1) F = 0.104, where n = 1, 2 denote the light and heavy σ bands and n = 3, 4 the π bands. In the case of π bands we have scaled the published tight-binding parameters slightly in order to reproduce the correct densities. Furthermore, we obtain α 1 , and λ (σ) 11 reflect the fact that the σ bands are rather isotropic in the ab-plane near their narrow cylindrical Fermi surfaces and thus cannot contribute much in the E 2g channel. 
As a consequence we may safely work with only the coupling λ and the scattering rate 1/τ_σ. We will first consider the σ contribution to χ̃ and drop the index σ everywhere in order to simplify the notation. Fig.1 shows the real (dashed lines) and imaginary (solid lines) parts of χ̃ for the gap 2∆ = 110 cm^-1 and two scattering rates, 1/τ = 0 and 1/τ = 200 cm^-1. In the clean case the imaginary part χ̃'' exhibits a square-root singularity as ω approaches 2∆ from above. Impurity scattering transforms this singularity into a step at 2∆ with height 2π∆τ, and produces, for sufficiently strong scattering, a very broad minimum near 1/τ. The real part χ̃' shows in the clean case a square-root singularity below 2∆ and finite and positive values above 2∆. Taking impurity scattering into account, χ̃' becomes much more symmetric around 2∆ compared to the clean case.

[Fig. 2 caption] Energy ω_b/2∆ (solid curves) and spectral weight Z_b (dashed curves) of the bound state for two impurity scattering rates τ^-1 as a function of the electron-phonon coupling constant λ.

The curves in Fig. 1 suggest that the phonon Green's function D will develop a bound state inside the gap [10], and that this will occur for all values of λ and 1/τ. The frequency of the bound state, ω_b, is determined by the vanishing of the denominator of D. Expanding the denominator of D around ω_b, one finds the spectral weight Z_b. Fig.2 shows ω_b and Z_b as a function of λ for the two scattering rates 1/τ = 0 and 1/τ = 200 cm^-1. ω_b approaches zero at λ_CDW = 0.50 and 0.47, respectively. This means that for λ > λ_CDW the superconducting state is unstable against the formation of a charge density wave with E_2g symmetry. For λ < λ_CDW, ω_b and Z_b increase and decrease rapidly with decreasing λ, approaching their limiting values 1 and 0 in an exponential (for 1/τ = 0) or power-law-like (for 1/τ ≠ 0) manner. One peculiar feature of D is that any background imaginary part in its self-energy, for instance due to π electrons, will be diminished by the factor 1/Z_b near 2∆. This means that the bound state will not be broadened and will not disappear in χ'' for small λ's, even in the presence of strong impurity scattering, but instead will sharpen and lose weight when ω_b approaches 1. To make the situation more realistic one should allow for broadening effects due to inhomogeneities etc., which may be taken into account by folding χ̃' with a Gaussian of width δ [2,5]. Fig.3 shows χ'' for three different λ and a width of δ = 10 cm^-1. The employed large scattering rate τ^-1 = 200 cm^-1 has completely wiped out the usual pair-breaking peak due to the square-root divergence of χ'' at 2∆. On the other hand, the electron-phonon coupling accumulates spectral weight near 2∆ and produces a pronounced bound state inside the gap at larger couplings. Figs. 2 and 3 allow us to estimate a realistic value for λ. Identifying ω_b with the observed sharp peak in the E_2g spectrum, we have ω_b = 104 ± 1 cm^-1, whereas a recent tunneling experiment [6] gave for the gap 2∆ = 114 ± 6 cm^-1. This suggests that ω_b is different from 2∆, i.e., the sharp peak should not be identified with the gap, and the ratio ω_b/2∆ is 0.91 ± 0.06. Fig.2 then indicates that λ must be smaller than 0.3. Comparing the intensities of the bound state and the phonon line in Fig.3 with the experimental curve [1], one finds that λ must lie in the interval between 0.15 and 0.20. These values are substantially smaller than the value 0.38 obtained from band structure calculations [11].
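The numerics behind Fig. 2 can be mimicked with a toy calculation: the bound-state frequency is the root, inside the gap, of the dressed-phonon denominator, and its spectral weight follows from the slope of that denominator at the root. The sketch below uses an ad hoc model for the real part of χ̃ that merely reproduces its divergence below 2∆; the signs, prefactors, and parameter values are illustrative assumptions, not the elliptic-integral expressions of the paper.

import numpy as np
from scipy.optimize import brentq

gap2 = 1.0                 # 2*Delta, arbitrary units
Omega = 5.0                # bare phonon frequency, well above the gap
lam = 0.2                  # electron-phonon coupling strength (toy value)

def chi_real(w):
    # toy real part of the response: positive and diverging as w approaches 2*Delta from below
    return 1.0 / np.sqrt(gap2**2 - w**2)

def denominator(w):
    # denominator of the dressed phonon propagator D = D0 / (1 - D0 * lam * chi);
    # D0(w) = 2*Omega / (Omega**2 - w**2) is positive inside the gap
    D0 = 2.0 * Omega / (Omega**2 - w**2)
    return 1.0 - D0 * lam * chi_real(w)

w_b = brentq(denominator, 1e-6, gap2 - 1e-6)      # bound-state frequency inside the gap
dw = 1e-6
slope = (denominator(w_b + dw) - denominator(w_b - dw)) / (2 * dw)
Z_b = 1.0 / abs(slope)                            # spectral weight from the linear expansion around w_b
print(w_b, Z_b)

Because the toy chi_real diverges at the gap edge, a root always exists, mirroring the statement above that the bound state appears for all values of λ and 1/τ.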
Additional evidence for the importance of the electronphonon coupling in the E 2g spectrum comes from the experimental result that the two A 1g and the E 1g spectra do not show any pronounced peak near 2∆. We explain this by the fact that, according to group theory, M gB 2 has no q = 0 phonons with such symmetries so that no bound states can be formed. Further support for the present theory comes from the absence of a peak near the gap of about 47 cm −1 in the π bands. LDA calculations show that the coupling of π electrons to the E 2g phonon is by a factor 3 or 4 smaller than for σ electrons [11]. We find that the bound state structure near the π gap is invisible for such a coupling which explains its absence in the experimental spectra. Using λ 11 /R 2 = 1/400, λ = 0.16, Ω = 620cm −1 , δ = 5cm −1 , a π gap of 43cm −1 and a somewhat reduced σ gap of 96.5 cm −1 , accounting for the finite temperature of the experimental data, the resulting Raman cross section is shown as the dashed line in Fig.4, together with the experimental E 2g spectrum [1]. The dashed line reproduces well the main features of the experimental curve, especially at small frequencies. The quantitative discrepancy in the phonon region suggests that only part of the phonon broadening is caused by the electron-phonon coupling and that anharmonicity and deviations from the assumed constant density of states may play a role. The theoretical superconductivity-induced hardening of the phonon frequency in Fig.4 is 5 cm −1 and thus somewhat smaller than the experimental value [1] of 7 − 10 cm −1 . In conclusion, we have shown that the E 2g spectrum in superconducting M gB 2 can be understood as a superposition of a phonon line coupled strongly to σ electrons creating hereby a bound state in the gap, and a background due to rather uncorrelated π electrons. This means that M gB 2 is to our knowledge the first s-wave superconductor where a bound state in the superconducting gap due to residual interactions has been observed and identified. The obtained electron-phonon coupling constant λ ∼ 0.2 for σ electrons is only half of the band structure value, a discrepancy, which presently is not well understood. The author thanks O. Dolgov, J. Kortus and I. Mazin for useful discussions.
2018-04-03T05:25:16.805Z
2003-02-11T00:00:00.000
{ "year": 2003, "sha1": "5af2e841510c8f6613b72c4cfb0017f24b57bc7f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0302215", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "60e7aabf152809c81d2ff182e63b84338754dd2e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine", "Materials Science" ] }
6771462
pes2o/s2orc
v3-fos-license
Simultaneous strain and temperature fiber grating laser sensor based on radio-frequency measurement We propose and experimentally demonstrate a novel simultaneous strain and temperature fiber optic sensor. The sensing head is formed by two concatenated ultra-short distributed Bragg reflector lasers that operate in single longitude mode with two polarization modes. The total length of the sensing head is only 18 mm. The two lasers generate two polarization mode beat notes in the radio-frequency range which show different frequency response to strain and temperature. Simultaneous strain and temperature measurement can be achieved by radio-frequency measurement. This approach has distinctive advantages of ease of interrogation and avoidance of expensive wavelength detection. ©2011 Optical Society of America OCIS codes: (060.2370) Fiber optics sensors; (060.2840) Heterodyne; (060.3510) Lasers, fiber; (060.3735) Fiber Bragg gratings. References and links 1. S. W. James, M. L. Dockney, and R. P. Tatam, “Simultaneous independent temperature and strain measurement using in-fibre Bragg grating sensors,” Electron. Lett. 32(12), 1133–1134 (1996). 2. P. M. Cavaleiro, F. M. Araújo, L. A. Ferreira, J. L. Santos, and F. Farahi, “Simultaneous measurement of strain and temperature using Bragg gratings written in germanosilicate and boron-codoped germanosilicate fibers,” IEEE Photon. Technol. Lett. 11(12), 1635–1637 (1999). 3. H. B. Liu, H. Y. Liu, G. D. Peng, and P. L. Chu, “Strain and temperature sensor using a combination of polymer and silica fibre Bragg gratings,” Opt. Commun. 219(1-6), 139–142 (2003). 4. M. G. Xu, J. L. Archambault, L. Reekie, and J. P. Dakin, “Discrimination between strain and temperature effects using dual-wavelength fibre grating sensors,” Electron. Lett. 30(13), 1085–1087 (1994). 5. M. Sudo, M. Nakai, K. Himeno, S. Suzaki, A. Wada, and R. Yamauchi, “Simultaneous measurement of temperature and strain using PANDA fiber grating,” in Proc. 12th International Conference Optical Fibre Sensors, pp. 170–173, Williamsburg, Virginia, USA, October 28–31, 1997. 6. B. O. Guan, H. Y. Tam, X. M. Tao, and X. Y. Dong, “Simultaneous strain and temperature measurement using a superstructure fiber Bragg grating,” IEEE Photon. Technol. Lett. 12(6), 675–677 (2000). 7. H. F. Lima, P. F. Antunes, J. D. L. Pinto, and R. N. Nogueira, “Simultaneous Measurement of Strain and Temperature With a Single Fiber Bragg Grating Written in a Tapered Optical Fiber,” IEEE Sens. J. 10(2), 269– 273 (2010). 8. B. O. Guan, H. Y. Tam, S. L. Ho, W. H. Chung, and X. Y. Dong, “Simultaneous strain and temperature measurement using a single fibre Bragg grating,” Electron. Lett. 36(12), 1018–1019 (2000). 9. H. J. Patrick, G. M. Williams, A. D. Kersey, J. R. Pedrazzani, and A. M. Vengsarkar, “Hybrid fiber Bragg grating/long period fiber grating sensor for strain/temperature discrimination,” IEEE Photon. Technol. Lett. 8(9), 1223–1225 (1996). 10. T. Lui, G. F. Fernando, L. Zhang, I. Bennion, Y. J. Rao, and D. A. Jackson, “Simultaneous strain and temperature measurement using a combined fibre Bragg grating/extrinsic Fabry-Perot sensor,” in Proc. 12th International Conference Optical Fibre Sensors, pp. 40–43, Williamsburg, Virginia, USA, October 28–31, 1997. 11. D. P. Zhou, L. Wei, W. K. Liu, Y. Liu, and J. W. Y. Lit, “Simultaneous measurement for strain and temperature using fiber Bragg gratings and multimode fibers,” Appl. Opt. 47(10), 1668–1672 (2008). 12. B. Dong, J. Z. Hao, C. Y. Liaw, B. Lin, and S. C. 
Tjin, “Simultaneous strain and temperature measurement using a compact photonic crystal fiber inter-modal interferometer and a fiber Bragg grating,” Appl. Opt. 49(32), 6232– 6235 (2010). 13. L. Y. Shao, X. Y. Dong, A. P. Zhang, H. Y. Tam, and S. L. He, “High-resolution strain and temperature sensor based on distributed Bragg reflector fiber laser,” IEEE Photon. Technol. Lett. 19(20), 1598–1600 (2007). #148869 $15.00 USD Received 7 Jun 2011; revised 5 Aug 2011; accepted 22 Aug 2011; published 3 Oct 2011 (C) 2011 OSA 10 October 2011 / Vol. 19, No. 21 / OPTICS EXPRESS 20650 14. O. Hadeler, E. Rønnekleiv, M. Ibsen, and R. I. Laming, “Polarimetric distributed feedback fiber laser sensor for simultaneous strain and temperature measurements,” Appl. Opt. 38(10), 1953–1958 (1999). 15. R. I. Crickmore, M. J. Gunning, J. Stefanov, and J. P. Dakin, “Beat frequency measurement system for multiple dual polarization fiber DFB lasers,” IEEE Sens. J. 3(1), 115–120 (2003). 16. B. O. Guan, Y. N. Tan, and H. Y. Tam, “Dual polarization fiber grating laser hydrophone,” Opt. Express 17(22), 19544–19550 (2009). 17. Y. Zhang, B. O. Guan, and H. Y. Tam, “Ultra-short distributed Bragg reflector fiber laser for sensing applications,” Opt. Express 17(12), 10050–10055 (2009). Introduction There has been considerable interest in developing simultaneous strain and temperature fiber optic sensors.This is not only because cross sensitivity is a key issue for the practical applications of fiber optic sensors, but also because multi-parameter sensors can reduce the complexity of the sensing systems in situations requiring multi-parameter and multi-point measurement.The principle of simultaneous strain and temperature sensors are usually based on the detection of two physical parameters which have different sensitivities to strain and temperature.Fiber Bragg gratings have been of great interest in sensing technology in recent years because of their small size, wavelength-encoding and multiplexing capability.Many techniques based on fiber Bragg gratings have been reported for simultaneous strain and temperature measurement.A simple and straightforward approach is to employ two independent Bragg gratings with the first one subjected to strain and temperature and the second one isolated from strain.The concept of a sensing head formed by two Bragg gratings with different strain and temperature response has been explored.Examples include configurations based on two gratings in different diameter fibers [1], in different dopant fibers [2], in different base material fibers [3], and operating at different wavelengths [4].Several approaches based on a single Bragg grating for simultaneous strain and temperature measurement was also demonstrated, such as utilization of a single Bragg grating in birefringent fibers [5], superstructure Bragg grating [6], a single Bragg grating in tapered fiber [7], and a single Bragg grating straddling over the junction of two fibers [8].A number of schemes based on a sensing head formed by Bragg gratings in combination with other fiber optic devices have also been demonstrated.The configurations include the combination of two Bragg gratings and a long period grating [9], the combination of a Bragg grating and Fabry-Perot interferometer [10], the combination of a Bragg grating and Mach-Zehnder interferometer [11], and the combination of a Bragg grating and photonic crystal fiber based inter-modal interferometer [12].Recently, a sensing head formed by a dual-polarization fiber grating laser was 
demonstrated [13], where the mean wavelength and polarization mode beat frequency of the laser were utilized to discriminate strain and temperature.All above approaches can be divided into two categories.The first category is based on the detection of two separate wavelengths which have different response to strain and temperature.The second category is based on the detection of wavelength and intensity (or spectrum bandwidth).The disadvantage of the second category is, the intensity detection undercuts multiplexing capability of fiber Bragg grating sensors.For all above simultaneous strain and temperature sensors, wavelength detection is necessary.However, it is know that, complex and expensive optical systems are required to achieve accurate wavelength measurement.The high cost of the wavelength detection unit impedes further applications of fiber Bragg grating sensors.It will be highly desirable if we can develop a simultaneous strain and temperature sensor which not only shares the advantages of fiber Bragg grating sensors but also avoids expensive wavelength detection. Polarimetric fiber grating laser sensor converts the measurrand into a corresponding change in the beat frequency between the two polarization modes from the laser [14][15][16].Because the beat frequency is in the radio-frequency range, this type of sensor has distinctive advantages of ease of interrogation and avoidance of expensive wavelength detection that is required in the passive fiber Bragg grating sensors.In this paper, we present a novel simultaneous strain and temperature fiber optic sensor based on radio-frequency measurement.The sensing head is formed by two concatenated ultra-short distributed Bragg reflector (DBR) lasers.The total length of the sensing head is only 18 mm.Both lasers operate in robust single longitude mode with two polarization modes.Each laser generates a polarization mode beat note at radio-frequency range.The two lasers have different beat frequencies which exhibit different response to strain and temperature.Simultaneous strain and temperature measurement can be achieved by monitoring the two beat frequencies. 
Principle

Figure 1 shows the schematic diagram of the proposed simultaneous strain and temperature sensor. The sensor head is formed by two concatenated DBR fiber lasers, with the first one fabricated in Er-doped fiber and the second one fabricated in Er/Yb co-doped fiber. Typical DBR fiber lasers are a few cm long, leading to a laser longitudinal mode spacing much smaller than the grating reflection bandwidth. As a result, there are multiple modes that meet the conditions for lasing. The dominant mode oscillates first and other modes are suppressed, so normally the lasers can operate in a single longitudinal mode. However, mode hopping will occur when the laser is subjected to external perturbations that distort the grating spectrum, such as a temperature or strain gradient or a localized perturbation to a subsection of the Bragg gratings. This is a key problem limiting the practical applications of DBR fiber lasers. To address this problem, ultra-short DBR fiber lasers, which have a longitudinal mode spacing comparable to the grating reflection bandwidth, were employed here. Because the ultra-short cavity supports only one longitudinal mode, it eliminates the possibility of mode hopping when the laser is subjected to any external perturbation. Here the DBR fiber lasers operate in a single longitudinal mode with two polarization states. When the laser output is monitored with a high-speed photodetector, a beat note will be generated by the two polarization lines. The beat frequency is given by Δν = cB/(n_0 λ_0), where c is the speed of light in vacuum, λ_0 is the laser wavelength, and n_0 and B are the average index and birefringence of the optical fiber, respectively. Typically, the beat frequency is in the range from several hundred MHz to several GHz. When the DBR laser is subjected to a strain or temperature perturbation, the birefringence will change. As a result, the beat frequency will shift and can therefore be considered as an effective signal output. The response of the beat frequency to strain and temperature can be expressed in terms of p_e, α, and ξ, the strain-optic coefficient, thermal expansion coefficient, and thermo-optic coefficient of the optical fiber [Eq. (2)]. Because of the different dopants and slightly different structural parameters, the DBR lasers in Er-doped fiber and Er/Yb co-doped fiber exhibit different beat-frequency responses to strain and temperature. When temperature and strain change simultaneously, Eq. (2) can be written in matrix form, relating the two beat-frequency shifts to the strain and temperature changes through a coefficient matrix K. The coefficient matrix K can be defined by separately measuring the strain and temperature responses of the polarization beat frequency of the two lasers. Then the strain and temperature can be determined simultaneously by measuring the beat frequencies of the Er-doped fiber laser and the Er/Yb co-doped fiber laser.
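As a numerical illustration of the beat-frequency relation above (using representative values quoted later in the paper, so the numbers are only indicative): solving Δν = cB/(n_0 λ_0) for the birefringence at the Er-doped laser's ~2.664 GHz beat note and ~1536 nm wavelength, with an assumed average index n_0 ≈ 1.45, gives B on the order of 2x10^-5.

c = 3.0e8            # speed of light in vacuum (m/s)
lam0 = 1536e-9       # laser wavelength (m), from the reported Er-doped laser output
n0 = 1.45            # assumed average refractive index of the fiber
dnu = 2.664e9        # measured polarization beat frequency (Hz)
B = dnu * n0 * lam0 / c      # fiber birefringence inferred from dnu = c*B/(n0*lam0)
print(B)                     # roughly 2e-5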
Experiment and results The ultra-short DBR fiber lasers were fabricated by directly inscribing two wavelengthmatched Bragg gratings in active fibers using the setup described in [17].A 193 nm excimer laser and phase mask method were used.Because the 193 nm UV light induces index change by two-photon excitation process, it does not require hydrogen loading to photosensitize the fiber.This avoids the laser efficiency degradation due to hydrogen-induced loss at pump wavelength and excited-state lifetime reduction of Er 3+ ions.The DBR laser in Er/Yb codoped fiber consisted of 2.2-mm-long low reflectivity grating, 4-mm-long high reflectivity grating, and 2 mm grating spacing.The total length of the Er/Yb co-doped fiber laser was 8.2 mm.The DBR laser in Er-doped fiber consisted of two 3-mm-long gratings and 2-mm grating spacing.The entire length of the Er-doped DBR fiber laser was only 8 mm.The two DBR lasers were concatenated in a single fiber as the sensing head.The total length of the sensing head was only 18 mm.The 980 nm pump light was launched into the laser array from the Erdoped fiber laser side through a wavelength division multiplexer (WDM).The backward laser output was launched into a high speed photodetector (PD) through a polarization controller (PC) and an in-line polarizer.A radio-frequency spectrum analyzer was used to monitor the beat notes of the lasers. Figure 2 shows the output spectrum of the laser array with pump power setting to 187 mW.The Er-doped fiber laser operated around 1536.12 nm with signal-to-noise ratio of ~55 dB.The Er/Yb co-doped fiber laser operated around 1539 nm with signal-to-noise ratio of ~60 dB. Figure 3 shows the beat note spectrum of the laser array measured with the radiofrequency spectrum analyzer.The Er-doped fiber laser generated a beat note at 2.664 GHz with signal-to-noise ratio better than 50 dB.The Er/Yb co-doped fiber lasers generated a beat note at 1.336 GHz with signal-to-noise ratio better than 60 dB.The beat frequency of the Erdoped fiber laser is much higher than that of the Er/Yb co-doped fiber.This denotes that the Er-doped fiber has much higher birefringence than the Er/Yb co-doped fiber.In spite of the Er-doped fiber laser in the front, the Er/Yb co-coped fiber laser had higher laser output and stronger beat note.This is because the Yb ions have strong absorption at 980 nm and transfer their energy to the Er ions with high efficiency, significantly increased the laser efficiency.The strain response was investigated by bonding both sides of the sensing head onto two translation stages with epoxy.While the sensing head was stretched with the translation stage, the laser beat frequencies were monitored with the radio-frequency spectrum analyzer.The environment temperature was kept at 15 °C during the strain response measurement.Applied strain was calculated from the elongation of the stretched fiber divided by the original length.During the strain response characterization, environment temperature was kept constant.Figure 4 shows measured sensor response to strain in the range from 0 to 1200 με.It is clear that the beat frequencies increase with strain, and the strain sensitivity of Er-doped fiber laser is higher than the Er/Yb co-doped fiber laser.The strain coefficients of the Er-doped fiber laser and Er/Yb co-doped fiber laser were estimated, using linear regression fits, as k Er,ε = 8.75 ± 0.104 KHz/με(R 2 = 0.9986), and k Er/Yb,ε = 6.42 ± 0.068 KHz/με (R 2 = 0.9989), respectively.The temperature response was investigated by putting 
the sensing head into a tube oven.A thermocouple was placed near the sensing head for measurement of the temperature.The sensing head was kept unstrained.Figure 5 shows the measured beat frequency shifts as functions of temperature in the range from 15 °C to 100 °C.As shown in Fig. 5, the beat frequencies decrease with temperature, and the temperature sensitivity of the Er/Yb co-doped fiber laser is higher than the Er-doped fiber laser.The temperature coefficients of the Erdoped fiber laser and Er/Yb co-doped fiber laser were estimated, using linear regression fits, as k Eε,T = 678 ± 5.52 KHz/ °C (R 2 = 0.9995), and k Er/Yb,T = 1142 ± 4.11 KHz/°C (R 2 = 0.9999), respectively. By taking the inverse matrix of K and the measured coefficients in ( where the units of δε, δT, δ(Δν Er ), and δ(Δν Er/Yb ) are με, °C, KHz and KHz, respectively.One can thus employ the coefficient matrix above to simultaneously determine strain and temperature by measuring the two fiber lasers' beat frequency shifts of the sensor.In our experiments, the sensor was interrogated with a RF spectrum analyzer (Anritsu MS2661C) with resolution of 10 kHz, which denotes resolutions of 1.56 με and 0.015 °C for strain and temperature measurement, respectively. Conclusion We reported a novel fiber-optic sensor for simultaneous strain and temperature measurement based on radio-frequency detection.The sensor head was formed by two concatenated ultrashort DBR fiber lasers with the first one fabricated in Er-doped fiber and the second one fabricated in Er/Yb co-doped fiber.The total length of the sensing head was only 18 mm.The two lasers generate two polarization mode beat notes at radio-frequency range, which show different frequency response to strain and temperature.Simultaneous strain and temperature measurement can be achieved by monitoring the two beat frequencies.The distinctive advantages of the proposed simultaneous strain and temperature sensor are ease of interrogation and avoidance of expensive wavelength detection.Other advantages include absolute frequency encoding and capability to multiplex a number of sensors on a single fiber by use of frequency division multiplexing technique. Fig. 1 . Fig. 1.Schematic diagram of the proposed simultaneous strain and temperature sensor. Fig. 4 . Fig. 4. Strain response of the proposed simultaneous strain and temperature sensor. Fig. 5 . Fig. 5. Temperature response of the proposed simultaneous strain and temperature sensor.
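To make the matrix-inversion step described above concrete, the two-parameter interrogation amounts to solving a 2x2 linear system. In the minimal sketch below, the coefficient magnitudes are taken from the reported calibration, the negative signs encode the observed decrease of beat frequency with temperature (a sign convention assumed here), and the "measured" shifts are hypothetical values used only to show the arithmetic.

import numpy as np

# Rows: Er-doped laser, Er/Yb co-doped laser; columns: strain (kHz/microstrain), temperature (kHz/degC).
K = np.array([[8.75,  -678.0],
              [6.42, -1142.0]])
dnu = np.array([500.0, -800.0])      # hypothetical measured beat-frequency shifts (kHz)
d_strain, d_temp = np.linalg.solve(K, dnu)
print(d_strain, d_temp)              # recovered strain (microstrain) and temperature (degC) changes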
2018-04-03T03:34:41.602Z
2011-10-10T00:00:00.000
{ "year": 2011, "sha1": "4c44716080e197c31cf19b92e3ec16df23b3e9cd", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.19.020650", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "569d509e1b2ef456322cc6eee3213752a7a14e53", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
8285128
pes2o/s2orc
v3-fos-license
Non-traumatic myositis ossificans mimicking a malignant neoplasm in an 83-year-old woman: a case report Introduction Myositis ossificans is a benign, self-limiting condition that usually affects young, athletically active men. To the best of our knowledge, this case report describes the oldest recorded patient with myositis ossificans. Case presentation Our patient was an 83-year-old Japanese woman who presented with a one week history of a palpable mass in the left thigh. She had a history of surgery for transverse colon cancer and lung cancer at the ages of 73 and 80, respectively. Clinical and radiological examinations suggested a malignant neoplasm such as metastatic carcinoma or extraskeletal osteosarcoma. A diagnosis of myositis ossificans was made by core needle biopsy. Our patient was asymptomatic and had no recurrence at one year follow-up. Conclusion Clinicians should consider myositis ossificans as a possible diagnosis for a soft tissue mass in the limb of an older patient, thereby avoiding unnecessarily aggressive therapy. Introduction Myositis ossificans (MO) is a benign lesion of heterotopic ossification that chiefly affects active adolescents and young adults, with a slight male predominance. Any part of the body may be involved, but the anterior thigh is the most common site. This lesion is clearly related to trauma in 60% to 75% of cases [1]. Despite a clinically and histologically distinct entity, MO still causes considerable difficulties in diagnosis. We report a case of MO arising in the thigh of an older patient without any history of trauma. Case presentation An 83-year-old Japanese woman was referred to our hospital with a one week history of a palpable mass in the anteriomedial aspect of the left thigh. There was no history of antecedent trauma, but our patient had a history of surgery for transverse colon cancer and lung cancer at the ages of 73 and 80, respectively. Physical examination revealed a tender, firm, and non-mobile mass that was 7 × 6 cm in size. Laboratory data were within the normal limits, including erythrocyte sedimentation rate, C-reactive protein and white blood cell counts. A plain radiograph did not show any alteration. A magnetic resonance imaging (MRI) scan revealed a 6 × 5 cm poorly defined mass in the left vastus medialis muscle ( Figure 1). On T1-weighted and T2-weighted images, the mass showed isointense and heterogeneous hyperintense signals, respectively. After intravenous gadolinium injection, the mass was enhanced significantly. Surrounding muscle edema was identified. Tc-99 m hydroxymethylenediphosphonate bone scintigraphy showed dense uptake in the medial soft tissue of the left thigh ( Figure 2). The possibility of a malignant neoplasm was proposed, and a core needle biopsy was performed. Microscopically, the lesion was composed of a proliferation of fibroblasts admixed with foci of bone trabeculae lined by plump osteoblasts (Figure 3). Abnormal mitotic figures and nuclear pleomorphism were absent. These features were considered compatible with a diagnosis of MO. Our patient underwent a clinical and radiological follow-up. At three weeks after onset, a computed tomography (CT) scan demonstrated peripheral ossification of the lesion, thus further confirming MO ( Figure 4). The symptoms resolved completely within two months. At one year follow-up, she was asymptomatic and had no recurrence. Discussion MO, a benign condition, is commonly defined as a heterotopic ossification of soft tissues. 
MO can be seen at any age, but rarely occurs in babies or older patients [1]. To the best of our knowledge, the youngest documented patient was a five-month-old girl [2] and the oldest an 81-year-old woman [3]. The pathogenesis of MO is still uncertain. In cases with an apparent history of traumatic injury, it can be assumed that the process commences with tissue necrosis or hemorrhage followed by exuberant reparative fibroblastic and vascular proliferation with eventual ossification. In a small number of cases, etiologies may include burns, infections or drug abuse. However, nontraumatic cases have been documented in the literature [4,5]. In most of these cases, repetitive minor mechanical injuries, ischemia or inflammation have been implicated as possible causative factors [1]. Our case seems to belong to the non-traumatic MO group. The zoning phenomenon of peripheral maturation is the most important diagnostic feature. Various radiological techniques have been applied for the detection and follow-up of MO [6]. Plain radiographs are usually normal at onset. In later stages, mineralization is present at the periphery and has a ring-like configuration. CT is the best imaging modality for diagnosing MO. MRI is a sensitive technique for identifying small, early lesions but is non-specific. Extensive muscle edema may be seen. Bone scintigraphy is very sensitive in the early detection of MO, demonstrating increased uptake in damaged muscle. Differential diagnostic problems may arise in both early and late stages. In the earlier stages, the differential diagnoses should include extraskeletal osteosarcoma and synovial sarcoma when peripheral ossification is incomplete. In later stages, MO must be distinguished from parosteal or extraskeletal osteosarcoma and chondrosarcoma [6,7]. However, osteosarcoma usually lacks a zoning pattern of peripheral maturation. The differential diagnosis may also include metastatic carcinoma in our case. Skeletal muscle metastasis is relatively rare. The most frequent affected sites include the abdominal wall, back, thigh, chest wall, and shoulder. The most common primary tumor is located in the lung and the most common histological diagnosis is adenocarcinoma [8][9][10]. Not surprisingly, ossifying skeletal muscle metastases have been reported in the literature [11,12]. In most cases, ossification is produced by osteoblasts originating by metaplasia from stromal fibroblasts. The clinical distinction between metastatic carcinoma to skeletal muscle and primary soft tissue tumor is critical because treatment and prognosis are markedly different. However, we were unable to eliminate the possibility of a metastatic carcinoma on the basis of clinical and radiological features. The treatment of MO is usually conservative because of its self-limiting character and spontaneous regression. However, surgical excision is advised when joint function is impaired, neurovascular impingement is encountered, or the lesion is unusually large or painful. Surgery should only be undertaken on mature lesions.
2016-05-15T06:45:48.179Z
2010-08-12T00:00:00.000
{ "year": 2010, "sha1": "30d952a0e01fd2c5d3ace058cd772b069d19065c", "oa_license": "CCBY", "oa_url": "https://jmedicalcasereports.biomedcentral.com/track/pdf/10.1186/1752-1947-4-270", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1b9d83e3f7d8d941ad4efba2d833db06090b67dd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270656547
pes2o/s2orc
v3-fos-license
Extracellular Microenvironment Alterations in Ductal Carcinoma In Situ and Invasive Breast Cancer Pathologies by Multiplexed Spatial Proteomics Ductal carcinoma in situ (DCIS) is a heterogeneous breast disease that remains challenging to treat due to its unpredictable progression to invasive breast cancer (IBC). Contemporary literature has become increasingly focused on extracellular matrix (ECM) alterations with breast cancer progression. However, the spatial regulation of the ECM proteome in DCIS has yet to be investigated in relation to IBC. We hypothesized that DCIS and IBC present distinct ECM proteomes that could discriminate between these pathologies. Tissue sections of pure DCIS, mixed DCIS-IBC, or pure IBC (n = 22) with detailed pathological annotations were investigated by multiplexed spatial proteomics. Across tissues, 1,005 ECM peptides were detected in pathologically annotated regions and their surrounding extracellular microenvironments. A comparison of DCIS to IBC pathologies demonstrated 43 significantly altered ECM peptides. Notably, eight fibrillar collagen peptides could distinguish with high specificity and sensitivity between DCIS and IBC. Lesion-targeted proteomic imaging revealed heterogeneity of the ECM proteome surrounding individual DCIS lesions. Multiplexed spatial proteomics reported an invasive cancer field effect, in which DCIS lesions in closer proximity to IBC shared a more similar ECM profile to IBC than distal counterparts. Defining the ECM proteomic microenvironment provides novel molecular insights relating to DCIS and IBC. Introduction Ductal carcinoma in situ (DCIS) represents approximately 20% of breast cancers currently diagnosed in US women and is considered a non-obligatory pathway to invasive breast cancer (IBC).The incidence rate of DCIS has seen a relatively recent rise, which is attributed to increased mammographic screening efforts [1,2].While the expansion in screening has undoubtedly had many positive effects, it has often led to overtreatment of DCIS, since DCIS patients who do not undergo treatment may have between a 14 and 53% risk of developing IBC [1].Despite the variable risk of disease progression, the standard of care remains local excision for all patients, often combined with radiation therapy and hormone therapy dependent on receptor status.This therapeutic plan is not without risks, including the development of secondary cancers, coronary events, and pulmonary dysfunction [1].To avoid overtreatment and its complications, improved biological markers are needed to better stratify patients into low-and high-risk categories. 
While there has been a continued effort to discover reliable prognosticators, very few have been clinically integrated.Oncotype DX is a 21-gene assay with demonstrated predictive value in IBC [3] that has been adapted into a 12-gene DCIS recurrence score.This recurrence score has some predictive value with low DCIS scores correlating with a lower risk of IBC recurrence [3].However, this modality necessitates further validation to be integrated widely into clinical practice [4].Challenges to the development of effective prognosticators in DCIS have included, but are not limited to, the similar copy number alterations and gene expression patterns between DCIS and invasive ductal carcinoma (IDC), an invasive breast cancer type arising within the mammary ducts [5].Other risk stratification models have focused on clinical characteristics.Namely, a clinical risk score consisting of ER status (a clinically used biomarker) [6], presence of comedo necrosis, and age at diagnosis was found to be associated with increased ipsilateral recurrence risk [7] yet has not been widely adopted due to controversy regarding the prognostic accuracy of recurrence.While histopathological evaluation for features such as high nuclear grade and the architectural pattern of comedo necrosis has demonstrated prognostic potential, consistent assessment of these features has been clinically difficult to achieve [1].Thus, contemporary efforts have begun to focus on the tumor microenvironment to identify predictive markers in DCIS [8,9]. To expand upon recent work on the DCIS microenvironment, we focused our investigation on the extracellular matrix (ECM) proteome of DCIS, mixed IBC-DCIS, and IBC.It is well-documented that alterations to the ECM occur throughout breast cancer progression, including stiffening and increases in density [10].Compared to normal breast tissue, invasive and noninvasive breast cancers have increased deposition, thickening, and linearization of collagen fibers with malignant transformation [10].Various collagen types have been investigated and demonstrated to have increased expression in DCIS by immunohistochemistry [11,12].In addition to these global alterations in collagen expression, post-translational dysregulation of collagen fibers drives the cellular-matrix interface to influence cell signaling.Prolyl-4-hydroxylases, which hydroxylate proline residues within collagens, have been shown to have increased expression in breast cancer invasion and metastases [13,14], yet sites of proline hydroxylation remain largely unmapped.Peptidelevel alterations including differences in post-translational modifications can be spatially explored with high-resolution mass spectrometry imaging [15][16][17][18][19][20].While collagen is known to have potential as a prognosticator in DCIS [11], our approach is novel in its ability to spatially define multiple peptide alterations to specific pathological regions.Within this study, we used ECM-targeted MALDI-QTOF imaging to further our understanding of the spatial regulation of the collagen proteome in DCIS and IBC. 
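A small point of arithmetic behind the hydroxylation mapping discussed above: hydroxylation of a proline residue adds a single oxygen atom, shifting a peptide's monoisotopic mass by about +15.9949 Da, which is what lets imaging mass spectrometry separate modified from unmodified collagen domains. The sketch below only illustrates the expected m/z offsets for low charge states; the peptide mass value is a placeholder, not one of the peptides reported in this study.

delta_oh = 15.994915                 # monoisotopic mass of one oxygen atom (Da) added per hydroxylation
peptide_mass = 1083.49               # placeholder neutral monoisotopic mass of an unmodified peptide (Da)
proton = 1.007276
for z in (1, 2):
    mz_unmod = (peptide_mass + z * proton) / z
    mz_hyp = (peptide_mass + delta_oh + z * proton) / z
    print(z, round(mz_unmod, 4), round(mz_hyp, 4), round(mz_hyp - mz_unmod, 4))   # offset = 15.9949/z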
Study Overview The primary aim of this study was to evaluate the spatial regulation of the fibrillar collagen proteome between high nuclear grade DCIS and IBC pathologies.Twentytwo specimens from seventeen patients were annotated by pathologists as DCIS (n = 9), IBC (n = 4), or containing both DCIS and IBC lesions (n = 8) (Table 1, Figure S1, Tables S1-S3).Of the patients with relevant medical history documentation, the median age of the patients was 58.4 years.A majority of samples were from lumpectomies, with a smaller proportion from mastectomies (Table 1).Tissues were mapped for translational and post-translational collagen regulation using ECM-targeted mass spectrometry imaging across tissue sections to encompass all histopathologic features.A four-sample subset was imaged at high spatial resolutions and used for a detailed investigation of the ECM proteomic profiles of individual DCIS lesions.Proteomic sequencing demonstrated complex variations in the post-translational regulation of fibrillar collagens that could be spatially defined.Additional studies were performed to explore other proteomic features of DCIS pathology using targeted enzymatic approaches coupled to sequencing proteomics (Figure 1).The main finding is that there exists proteomic modulation of fibrillar collagen between DCIS and IBC pathologies, providing strong evidence for larger studies defining the extracellular pathologies related to DCIS.Study workflow.H&E slides were annotated by a breast pathologist with regions defined as DCIS (blue) or IBC (red or orange).On a subsequent tissue section, slides were prepared for collagenase digestion to target the extracellular matrix (ECM).Mass spectrometry imaging (MSI) was performed with matrix-assisted laser desorption/ionization-quadrupole time-of-flight (MALDI-QTOF) imaging.Four samples were annotated per pathological lesion type and underwent the collagenase MSI workflow with high-resolution imaging at an individual lesion level.From the remaining eighteen-sample subset, specific slides were selected for further proteomic analysis multiplexing either tryptic or elastase digestion followed by mass spectrometry imaging.This schema was created in Biorender.com. Spatial Mapping of the Extracellular Proteome Defines DCIS Histopathology DCIS pathologies are linked to alterations in collagen organization, which are associated with patient outcomes [9,[21][22][23].However, the discrete regulation of the collagen proteome within its heterogeneous pathologies remains unexplored.To understand the spatial proteomic modulation of collagen domains and associated extracellular proteins in DCIS, targeted spatial proteomics was completed on an eighteen-sample cohort.Collagenase was used to digest extracellular matrix proteins into peptides that were detected for their spatial relationship to tissue pathologies using mass spectrometry imaging (Figure 2A).This targeted approach has the capability to report triple helical collagen domain regulation, which modulates crucial cell and protein interactions that span cellular responses to clinical outcomes [22,[24][25][26][27]. 
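The spatial segmentation reported below groups hundreds of thousands of pixels by their full peptide-peak profiles. A common way to approximate such a segmentation is k-means clustering of the pixel-by-peak intensity matrix; the sketch below is a generic illustration on random placeholder data, not the specific heuristic segmentation implemented in the imaging software used in this study.

import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(2)
pixels = rng.lognormal(size=(10_000, 1005))          # placeholder: pixels x peptide peaks (small stand-in for ~843,000 pixels)
pixels = pixels / pixels.sum(axis=1, keepdims=True)  # simple per-pixel normalization

km = MiniBatchKMeans(n_clusters=14, random_state=0).fit(pixels)
cluster_map = km.labels_             # one cluster label per pixel; reshape to the image grid to display
print(np.bincount(cluster_map))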
Within this eighteen-sample cohort, invasive breast cancer regions were specifically defined as IDC by a pathologist. Spectral comparison of DCIS to IDC regions revealed complex peptide signatures with significantly different intensity profiles (Figure 2B). Heuristic spatial segmentation of 843,210 pixels and 1,005 putatively identified peptide peaks showed 14 primary clusters uniquely localized to histopathological features across the cohort. Largely, clusters defined IDC and DCIS regions, mapped to adjacent stromal tissues, or surrounded a subset of the invasive lesions (Figure S2). An example mixed DCIS-IDC specimen with spatially distinct DCIS and IDC pathological regions had five unique proteomic clusters represented (Figure 2C,D). Notably, proteomic cluster 5 was over-represented in IDC and surrounding stromal regions. To identify the proteomic composition, liquid chromatography-tandem mass spectrometry (LC-MS/MS) was performed. Approximately 56% of the peptides identified were collagens, with over 79% mapping to fibrillar collagens (Figure 2E, Table S4). Distinct localization of specific peptides to pathological regions was noted. A putatively identified collagen α2(I) sequence showed high-intensity patterns localized within ductal regions, including those containing DCIS lesions. In contrast, another putatively identified peptide from filamin C [28,29] circumscribed stromal regions surrounding ducts and localized within the IDC region (Figure 2F). A comparison of averaged intensity patterns of the putatively identified ECM peptides demonstrated a unique ECM proteomic signature between DCIS, mixed DCIS-IDC, and IDC specimens (Figure 2G). To discern whether putatively identified peptide peaks could separate samples by the pathology present, a Sparse Partial Least Squares Discriminant Analysis (sPLS-DA) was performed on annotated regions of DCIS or IDC, defined by their specimen classification of DCIS, mixed DCIS-IDC, or IDC. Distinct clustering patterns between the three specimen classifications suggested pathology-dependent proteomic variations between DCIS, mixed DCIS-IDC, and IDC samples (Figure 2H). A subset of peptide peaks was selected as most predictive in driving these specimen classifications (Figure 2I), suggesting a distinct extracellular matrix proteome across specimen classifications. In summary, a complex spatially mapped extracellular proteome was defined within DCIS and IDC pathologies; it was composed predominantly of fibrillar collagens that included post-translationally modified domains.

Figure 2 (legend, panels B-I). (B) Spectra from pathologist-defined lesions, with DCIS shown in blue and IDC in red, demonstrate different relative peak intensity profiles. R. int. denotes the normalized relative intensity of peaks computed in mMass®. (C) Hematoxylin and eosin-stained image of a mixed DCIS-IDC specimen demonstrates DCIS (blue) and IDC pathology (red). (D) Spatial segmentation analysis was used to define five main proteomic clusters. Cluster 1 (dark blue) annotates to adipocyte regions; Cluster 2 (green) defines borders between adipocytes and stroma; Cluster 3 (pink) localizes to stroma that includes DCIS lesions; Cluster 4 (blue) is localized to stroma and adipocytes primarily between tumor and adjacent tissue; Cluster 5 (yellow) annotates to the cancer region with diminishing detection distant from the tumor. (E) Pie chart depicting the proportion of peptide sequences identified from select protein classifications; the collagen fraction is further divided into collagen structural categories. (F) Spatial heat maps of a ColIα2 peptide (red) show distinct localization to DCIS lesions and surrounding ductal regions compared to the filamin-C peptide, which borders ductal regions and localizes to IDC. INPPL1 denotes inositol polyphosphate phosphatase-like 1. Images were normalized to an internal peptide standard. Putative identifications were made by matching imaging data to an ECM database; numbers following an identification indicate the amino acid positions within the entire protein sequence. (G) Extracellular matrix peptides distinguished between DCIS, IDC, and DCIS-IDC; the heatmap shows the average peptide expression detected across tissue images. (H) Sparse Partial Least Squares Discriminant Analysis (sPLS-DA) of pathological regions depicts distinct clustering of regions by specimen classifications of DCIS (n = 9), mixed DCIS-IDC (n = 6), and IDC (n = 4). (I) Loadings plot from the sPLS-DA depicts the top ten peptide peaks that discriminate between specimen types. Mass differences between MALDI-QTOF imaging and LC-MS/MS peaks were within 5 ppm mass accuracy. sPLS-DA and heat map analyses were performed with MetaboAnalyst 5.0.
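The spatial segmentation above was performed in SCiLS Lab using the k-bisecting method with the Manhattan metric on natural-log-transformed intensities (see Methods). Purely to illustrate that logic — this is not the vendor implementation — here is a minimal bisecting k-medians sketch in Python (the median being the L1-optimal centroid); array names and the subsampling note are assumptions for illustration:

```python
import numpy as np

def bisecting_kmedians(X, n_clusters=14, n_iter=50, seed=0):
    """Toy bisecting clustering with the Manhattan (L1) metric.

    X: (n_pixels, n_peaks) array of natural-log-transformed peak
    intensities, one row per imaging pixel. Repeatedly bisects the
    most populous cluster with a 2-medians step. For ~843k pixels,
    subsample or chunk the distance computation to limit memory use.
    """
    rng = np.random.default_rng(seed)
    labels = np.zeros(len(X), dtype=int)
    for new_label in range(1, n_clusters):
        target = np.bincount(labels).argmax()      # cluster to split
        idx = np.flatnonzero(labels == target)
        pts = X[idx]
        # initialize two centroids from random members of the cluster
        c = pts[rng.choice(len(pts), size=2, replace=False)]
        for _ in range(n_iter):
            # L1 (Manhattan) distance of every point to both centroids
            d = np.abs(pts[:, None, :] - c[None, :, :]).sum(axis=2)
            assign = d.argmin(axis=1)
            for j in (0, 1):                       # update L1 centroids
                if np.any(assign == j):
                    c[j] = np.median(pts[assign == j], axis=0)
        labels[idx[assign == 1]] = new_label
    return labels

# e.g. labels = bisecting_kmedians(np.log(peak_matrix + 1e-9), n_clusters=14)
```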
Fibrillar Collagen Domains Define Pathological Regions of DCIS and IDC
To investigate specific ECM peptides that might be differentially represented between DCIS and IDC pathologies, the relative intensities of the 1,005 putatively identified peptide peaks were compared between DCIS and IDC lesions. Forty-three putatively identified ECM peptides were found to have significantly different intensity patterns between DCIS and IDC pathologies (Figure 3A). Seven peptide peaks were found to have altered fold changes between DCIS and IDC pathologies (Figure 3B). Of the LC-MS/MS-identified sequences from the 1,005-peptide list, eight collagen peptides had significantly different peak intensities between DCIS and IDC lesions. Notably, these peptides could discriminate between lesion types (Figure 3C, Table S5) and were identified as sequences within fibrillar collagens, specifically the collagen α1(I), collagen α2(I), collagen α1(II), collagen α1(III), and collagen α2(V) chains. Given the importance of hydroxylation of proline residues for triple-helical stability and its influence on cell function, it was not surprising that many of the differentially expressed collagen sequences contained hydroxylated proline residues and lay within the annotated triple-helical segment [30] (Figure 3D). Importantly, not all prolines were hydroxylated, and the probability of the modification at each site is shown as a numerical value in parentheses. Certain peptides, such as the collagen α1(I) peptide m/z 1084.498 (GPSGASGERGP(0.06)P(0.94)), demonstrated high intensities within DCIS regions, while others, including a different collagen α1(I) peptide m/z 1458.701 (GLQGM(1)P(1)GERGAAGLP(1)), exhibited high intensities within IDC regions and adjacent stroma (Figure 3E). Altogether, this suggests that distinct post-translationally modified collagen sequences contained within the triple-helical segment can discriminate between DCIS and IDC pathologies.
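To make the selection criteria concrete, the sketch below mirrors the style of tests reported here and in the Figure 3 legend: a per-peak Mann-Whitney comparison with an AUROC screen (p < 0.05, AUROC > 0.75) and a volcano-style cut (|fold change| > 0.5, −log(p) ≥ 1.5). Treating the fold change as a log2 ratio of means, and the array layout, are assumptions of this sketch; the original analyses used GraphPad Prism and VolcaNoseR.

```python
import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind
from sklearn.metrics import roc_auc_score

def discriminating_peaks(dcis, idc, alpha=0.05, auroc_cut=0.75):
    """dcis/idc: (n_lesions, n_peaks) mean intensities per lesion.
    Returns indices of peaks that differ (Mann-Whitney, p < alpha)
    and separate the classes (direction-agnostic AUROC > auroc_cut)."""
    y = np.r_[np.zeros(len(dcis)), np.ones(len(idc))]
    keep = []
    for j in range(dcis.shape[1]):
        p = mannwhitneyu(dcis[:, j], idc[:, j], alternative="two-sided").pvalue
        auc = roc_auc_score(y, np.r_[dcis[:, j], idc[:, j]])
        if p < alpha and max(auc, 1.0 - auc) > auroc_cut:
            keep.append(j)
    return keep

def volcano_selection(dcis, idc, fc_cut=0.5, logp_cut=1.5):
    """Volcano-style cut as in Figure 3B; fold change is assumed to
    be log2 of the ratio of means (the base is not stated), with p
    from an unpaired two-tailed t-test."""
    fc = np.log2(idc.mean(axis=0) / dcis.mean(axis=0))
    p = ttest_ind(dcis, idc, axis=0).pvalue
    sig = (np.abs(fc) > fc_cut) & (-np.log10(p) >= logp_cut)
    return sig & (fc > 0), sig & (fc < 0)   # up in IDC, down in IDC
```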
Extracellular Microenvironment Contributes to Intra-Tumoral Heterogeneity
Intra-tumoral heterogeneity is a well-known characteristic of DCIS that contributes to clinical challenges in pathological evaluation for risk assessment of later breast cancer events [31,32]. An initial investigation of proteomic intra-tumoral heterogeneity examined four samples with 59 individual lesions defined by architectural pattern and nuclear grade (Figures 4A and S3-S5, Table S1). Between individual lesions, 41 sequenced peptides linked to the high-spatial-resolution imaging data were compared for differences in proteomic expression. A spatial segmentation analysis of individual regions of interest demonstrated unique proteomic clusters between regions of interest; regions that were closer together appeared to share more similar protein clusters than those farther apart (Figures 4C and S3-S5). An example peptide identified from the collagen α1(I) chain near its cell-interaction domain showed differential intensity patterns between regions of interest (Figure 4B,C). Hierarchical clustering of the 41 peptides per lesion demonstrated diversity in ECM proteomic patterns across architecture types (Figure 4D). The sPLS-DA analysis revealed both regions of overlap and regions of distinction between architectural patterns (Figure 4E). Taken together, this suggested that the ECM proteome surrounding individual lesions varied and could not be entirely explained by different archetypes. When each lesion was stratified by nuclear grade, the ECM proteomic profile exhibited some similarity within the same nuclear-grade classification (Figure 4F). Moreover, multivariate analysis demonstrated a region of overlap and an area of distinction between nuclear grades 2 and 3 in this specimen (Figure 4G). Altogether, this case demonstrated some variability in expression patterns between lesions of the same nuclear grade and archetype. Analysis of individual DCIS lesions supports the contribution of the extracellular microenvironment to intra-tumoral heterogeneity in DCIS.
Distinct Tryptic Peptide Profiles Define Pathological Regions
To further test the potential for multi-omic studies in assessing DCIS, we extended our investigation of the proteomic niche of DCIS and IDC using trypsin, which provides untargeted, primarily cellular proteomic information. After the collagenase data collection described previously, a tryptic digest was performed on five samples within the eighteen-sample cohort (Figure 5A,B). A segmentation analysis of 214,558 spectra and 1,104 LC-MS/MS-identified tryptic peaks revealed distinctly localized proteomic clusters. DCIS076 was the most distinct specimen, containing proteomic clusters not represented within the other samples. Additionally, discrete proteomic groups spatially overlaid ductal compartments within pathologically annotated regions and localized to the adjacent stroma (Figure 5C). To further understand the tryptic proteomic profiles of these samples, a gene ontology (GO) analysis was performed on the LC-MS/MS-identified proteins (Figure 5D, Table S6). Notably, many extracellular matrix terms ranked among the top categories, highlighting the importance of extracellular matrix alterations in DCIS and IBC previously described in the literature [8,9,21]. To assess pathology-specific proteomic variations, tryptic peptide intensity profiles were compared between pathological regions and normal adjacent ductal regions, yielding 47 peptides with significantly different intensities (Figure 5E, Table S7). An sPLS-DA analysis demonstrated distinct clustering of tumor lesions and normal adjacent ductal regions (Figure 5F). Specific peaks, such as m/z 1045.564, identified as a desmoplakin peptide, were spatially distributed outside the pathological annotations and within the adjacent microenvironment. Others, such as m/z 958.566, a peptide from nicotinate phosphoribosyltransferase, localized primarily to the cellular compartments, whereas m/z 1240.671, a collagen α1(I) chain peptide from the triple-helical segment, surrounded cellularly dense regions and localized to adjacent stromal tissues (Figures 5G and S6). It is interesting to note that both desmoplakin, an important constituent of desmosomes [34], and nicotinate phosphoribosyltransferase, involved in NAD+ biosynthesis [35], have been linked to breast cancer. Overall, the data demonstrate pathology-dependent proteomic alterations within the surrounding ECM and cellular compartments.
Serial Enzymatic Digest Reveals Pathology-Specific Proteomes and Proteomic Field Cancerization
Field cancerization is defined as a similarity of molecular alterations between carcinomas and their surrounding tissues. In both DCIS and IBC, cancer field effects are thought to contribute to local recurrence following surgical resection [36]. While field cancerization at the epigenetic and genetic levels has been well studied [37-39], little is understood about proteomic modulation of DCIS regions adjacent to the IBC field. Given its lesion heterogeneity, DCIS076 was selected as a case study to explore the influence of architectural pattern and distance from IDC on the DCIS proteomic niche. Serial enzymatic digestion coupled with MALDI-QTOF imaging was performed to capture an expanded spatial view of the DCIS-IDC proteome (Figure 6A, Tables S6 and S8). DCIS lesions were divided into categories based on distance from the IDC region: DCIS regions within IDC, DCIS regions adjacent to IDC (0 µm from the invasive border), DCIS regions distal to the IDC (over 0 µm but within 1.0 mm of the invasive region), and DCIS regions farthest from IDC (greater than 1.0 mm from the invasive region) (Figure 6B). Individual lesions were defined as solid, cribriform, or comedo necrosis architectural patterns. A segmentation analysis of collagenase-digested peptides reported proteomic clusters localized to IDC- and DCIS-annotated regions (Figures 6C and S7). Average intensity patterns of 53 identified ECM peptides from the collagenase digest revealed differences between distance classifications, with DCIS lesions in the IDC region showing the most distinct ECM signature by hierarchical clustering (Figure 6D). The sPLS-DA plot demonstrated a similar trend, with DCIS lesions inside the IDC region displaying the least overlap with lesions of other distance classifications (Figure S7). Spatial localization demonstrated increased intensities of certain peaks within the IDC region, such as m/z 1291.664, which corresponded to a collagen α6(VI) chain peptide located within the non-triple-helical region [30]. Interestingly, gene expression of the collagen α6(VI) chain has been reported to be upregulated in triple-negative primary tumors compared to axillary lymph node metastases, which could suggest its importance within the primary tumor microenvironment [40]. Other peaks showed increased intensities outside the IDC region, such as m/z 1588.781, a fibronectin peptide within domain 17 [30], and m/z 1458.701, corresponding to a collagen α1(I) chain peptide near the cell-interaction domain [24] (Figures 6E and S8).
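The distance binning just described reduces to a simple rule. The sketch below restates it as a function; the convention of encoding lesions lying inside the IDC region with a negative distance is an assumption made here for illustration, not something specified in the text.

```python
def classify_dcis_distance(d_um):
    """Bin a DCIS lesion by its distance (in µm) from the invasive
    border, mirroring the four categories used above. d_um < 0 is
    used here to encode lesions within the IDC region (an assumed
    convention for this sketch)."""
    if d_um < 0:
        return "within IDC"
    if d_um == 0:
        return "adjacent to IDC"
    if d_um <= 1000:                # over 0 µm but within 1.0 mm
        return "distal to IDC"
    return "farthest from IDC"      # greater than 1.0 mm

# e.g. classify_dcis_distance(350) -> "distal to IDC"
```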
To capture the cellular proteomic niche, a tryptic digest was then performed on DCIS076. A segmentation analysis of 128 putatively identified tryptic peptide peaks similarly reported distinct proteomic clusters localized to DCIS and IDC regions (Figures 6F and S6). As with the collagenase proteomic profiles, average intensity patterns of tryptic peptides demonstrated that the DCIS lesions within the IDC region had a unique proteome compared to the DCIS lesions of other distance classifications (Figure 6G). Specific tryptic peptides, such as m/z 955.566, identified from nicotinate phosphoribosyltransferase, exhibited high-intensity profiles within the IDC and DCIS regions, while other tryptic peptides, such as m/z 1550.809 from the collagen α2(I) chain and m/z 1797.841 from the collagen α1(III) chain, localized primarily to regions outside the DCIS and IDC lesions (Figures 6H and S9). In contemporary literature, a transcriptional signature including COL1A2 was associated with reduced overall survival in breast cancer [41]. Similarly, COL3A1 has been reported to have an important role in breast cancer progression, as its knockdown in triple-negative breast cancer cell lines has been linked to reduced invasion and proliferation [42]. Taken together, these data support a proteomic field effect from the invasive breast cancer region.

Given the significance of the extracellular matrix and associated proteins within our tryptic digest, a subsequent elastase digest was performed to target elastin, which has been associated with breast cancer invasion [43]. The elastase digest of DCIS076 primarily targeted ECM-associated proteins, with collagens and elastin reported as the top proteomic hits. A segmentation analysis of LC-MS/MS-identified elastin and elastin-associated peptides reported distinct clusters within pathologically annotated regions and others in surrounding adjacent regions (Figures 6I and S6). Average intensity patterns between distance classifications reported proteomic variations between DCIS lesions at varying distances from the IDC region (Figures 6J and S1). A spatial investigation of elastin peptides demonstrated distinct peaks with increased intensity profiles within the IDC region and DCIS lesions, such as m/z 906.472, while peaks such as m/z 1240.669 and m/z 854.462 reported increased relative intensity patterns outside IDC and DCIS lesions (Figure 6K). Furthermore, these data highlight the utility of serial enzymatic digestion to expand the number of peptides with distinct intensity patterns between pathologies and to enhance the characterization of the DCIS proteomic niche. Importantly, these proteomic findings are supported by contemporary literature in the breast cancer field. DCIS lesions within the invasive cancer field have a distinct proteomic signature, which suggests that the spatial regulation of the DCIS proteome is influenced by the invasive breast cancer field.
Discussion
Transitions from benign to invasive breast cancer are marked by progressive changes in the structure and composition of the stroma, yet there are limited reports of how proteomic alterations contribute to the pathology of the breast microenvironment. This study establishes that dynamic collagen proteomic regulation occurs throughout the breast tissue microenvironment. The use of a collagen-targeting proteomic imaging method applicable to clinically archived tissue provided novel insight into the preinvasive breast microenvironment and transitions to invasive cancer. A major finding was that distinct collagen proteomic profiles could distinguish DCIS, mixed DCIS-IDC, and IDC. Intriguingly, multiplexing cellular and extracellular proteomic approaches revealed a field cancerization effect that demarcated tumor site localization, with gradients extending to select DCIS lesions outside of the primary tumor site. The field effect included a contrasting proteome that differentiated adjacent normal ductal tissue. Spatially resolved proteomic analysis further showed that heterogeneity exists within the regional microenvironment of DCIS lesions, defining the boundaries of lesion pathology. While lesion heterogeneity at the genomic level has been demonstrated [44], the current study offers new insights into the proteomic composition of the local ECM microenvironment at the individual-lesion level. The spatially driven multiplexed approach supports the view that phenotypic heterogeneity in nuclear grade and archetype [45] extends to the localized proteome and may provide a molecular differentiator for noninvasive versus invasive cancer pathologies.

The current literature depicts collagen fiber regulation as a distinguishing signature within the breast microenvironment that is predictive of recurrent DCIS [46], prognostic of early breast cancer [21,47], and altered with progressive breast cancer [48]. This study found that specific fibrillar collagen domains, including post-translationally modified ones, showed altered intensity distributions between DCIS and IDC. These collagen domains acted as strong single classifiers that differentiated DCIS from IDC, supporting previous antibody-staining work showing that fibrillar collagens differentiate the tumor microenvironment in DCIS and IDC [8,9]. The current study advances this concept by reporting the amino acid sequences of the regulated domains that distinguish DCIS from IDC.
Notably, many of the collagen sequences that differentiated DCIS and IDC included the post-translational modification of proline hydroxylation. Proline hydroxylation was reported at specific residues within the collagen sequences, highlighting that the breast microenvironment is marked by dynamic, yet site-specific, post-translational regulation of collagen structure. Hydroxylated proline residues in collagen constitute cell-binding domains [24,25], previously linked to the regulation of tumor dormancy [49] and the control of immune cell localization [50]. Prolyl-4-hydroxylase, the enzyme that primarily hydroxylates proline residues within collagen, modifies tumor progression [14,51], is essential for metastasis [52], and is linked to poor survival outcomes in breast cancer [53]. We propose that site-specific collagen hydroxylation of the DCIS microenvironment is an important priming component for the evolution of IBC and represents a potential breast cancer prognosticator. Further incorporation of ECM peptides into a classification algorithm may lead to a novel model for DCIS risk stratification or could improve existing classification systems. Integration with clinically utilized biomarkers, such as estrogen receptor status or pathological grading, could strengthen the predictive value of these identified ECM peptides.

Cancer field effects were observed in the breast collagen proteome. Field cancerization is a paradigm by which a normal cell can acquire pro-tumorigenic features and influence surrounding areas, or fields, to promote cancer [54], a concept that inherently involves the extracellular microenvironment. Field cancerization is considered relative to cancer evolution, whereby cancer cells acquire mutations that allow them to adapt to the microenvironment [55]. In the current study, collagen proteome gradients were observed surrounding the tumor and extending to certain DCIS lesion sites with some distance dependence. Thus, the surrounding proteomic microenvironment appears to play a significant role and could, at minimum, form a connective pathway of chemical biology between invasive cancer sites and DCIS. Further, the lesion-specific investigation showed common proteomic signatures between cancer sites and only certain DCIS lesions, suggestive of differing cellular origins for field effects. This is supported by literature showing that up to 75% of DCIS lesions are true invasive cancer precursors and up to 18% of invasive cancers arise from independent lineages [56]. Although much of the focus in cancer field effects and cancer evolution is on cellular morphology and genetic mutations, it is unknown how the extracellular microenvironment contributes to the promotion and emergence of breast cancer. It is likely that maladaptation of the extracellular microenvironment produces aberrant chemical gradients, creating cancerous field effects with mismatched cell interfaces that stabilize mutational adaptation and allow cancer evolution. We hypothesize this to be a compounding feed-forward effect, currently under investigation with the described proteomic approaches in larger cohorts.
This foundational study supports that DCIS lesion pathologies are marked by heterogeneity that includes unique collagen proteomic variation. In a patient-specific tissue with highly localized cancer sites, it was expected that comedo necrosis lesions, associated with invasive cancer risk [57], would show patterns similar to the invasive cancer site. However, only certain comedo necrosis lesions clustered with collagen signatures from the invasive cancer site. Solid pathologies in the same patient often clustered with nearby comedo necrosis, further implicating an underlying proteomic field effect. Additionally, primarily high-nuclear-grade DCIS pathologies, considered at increased risk for progression to IDC [58], were investigated. Within high-nuclear-grade pathologies, significant heterogeneity of the collagen proteome was also observed. While lesion heterogeneity is increasingly viewed as a pathological feature of DCIS [45], it remains unclear how these signatures contribute to emergent invasive cancer. Further investigations multiplexing the spatial proteome, as shown by this study, and expanding to intra- and inter-patient DCIS lesions are expected to provide insight into the origins of heterogeneity.

There were limitations to this study. The sample size was limited; this study sought to build foundational examples to demonstrate the potential of multiplexed spatial omics, and larger, highly annotated cohorts must be analyzed with these approaches to build a comprehensive portrait of proteomic changes in DCIS and IDC. It is also important to note that genetic ancestry plays a role in disparities in progression to IDC [59] that has yet to be investigated. To develop predictive biomarkers for recurrence, DCIS pathologies must be studied at primary diagnosis and linked to outcomes as well as ancestry data.

Materials and Methods

Patient Cohort
Eighteen breast tissue samples were obtained from the Department of Surgery at Duke University. Samples were scored as nuclear grade 2 or 3 by a pathologist. The average age at diagnosis within the cohort was 58.4 years (SD = 13.6). Samples were annotated by a pathologist as DCIS only, DCIS and IDC, or IDC only according to the College of American Pathologists "Protocol for the Examination of Specimens From Patients With Ductal Carcinoma In Situ (DCIS) of the Breast" [60]. Architectural patterns of specimens were classified as solid, cribriform, or comedo necrosis (Table S1). An additional four specimens were acquired from the Department of Surgery at Duke University for higher-resolution imaging of individual pathological lesions. Architectural patterns of these DCIS lesions were defined as solid, cribriform, comedo necrosis, or a combination of patterns, and a nuclear grade was assigned to each region of interest (Table S1). The type of invasive breast cancer was not specified in this four-sample subset and is referred to as invasive breast cancer (IBC).

Histological Staining
Formalin-fixed paraffin-embedded slides were obtained directly from collaborators and stained with hematoxylin (Gill 2) and eosin-Y (Fisher Scientific, Hampton, NH, USA) following the manufacturer's instructions. Each slide was then imaged on a high-resolution scanner (NanoZoomer, Hamamatsu, Japan) to obtain a whole-tissue image.

Antigen retrieval was performed in 10 mM Tris-HCl at pH 9 for 20 min at 95 °C in a Decloaker chamber.
COLase3, elastase, or trypsin was applied to slides using an M3 or M5 TM-Sprayer Tissue MALDI Sample Preparation System (HTX Technologies, LLC, Chapel Hill, NC, USA) with the following settings: 40 °C, 10 psi, 25 µL/min, 1200 velocity, and 15 passes. Following a 5 h incubation at 37 °C at ≥80% humidity, tissues were sprayed with a MALDI matrix consisting of 7 mg/mL α-cyano-4-hydroxycinnamic acid dissolved in 50% acetonitrile/1% trifluoroacetic acid with 0.15 picomoles of Glu-1-Fibrinopeptide-1 as an internal standard. Matrix was sprayed at 79 °C, 10 psi, 70 µL/min, and 1300 velocity for a total of 14 passes. Following matrix application, slides were quickly immersed in cold 5 mM ammonium phosphate monobasic and allowed to dry in a desiccator prior to imaging.

MALDI-MSI
A timsTOF fleX imaging mass spectrometer (Bruker, Bremen, Germany) with matrix-assisted laser desorption/ionization (MALDI) capabilities was used to analyze tissue sections. Images were acquired in positive-ion mode within an m/z range of 700-2500. The laser was set to fire 300 shots per pixel, with 60-80 µm between pixels for the eighteen-sample cohort and 20-40 µm between pixels for the higher-resolution studies. Transfer time was 75.0 µs and pre-pulse storage was 20.0 µs. FlexImaging v. 7 and SCiLS Lab 2023c Pro software (Bruker Scientific, LLC, Bremen, Germany) were used to visualize and analyze the data. Collagenase peptide spectral data were normalized to the root mean square for the eighteen-sample cohort unless otherwise specified, and to an internal peptide standard for the higher-resolution imaging of the four-sample cohort. Tryptic and elastase peptide spectral data were normalized to the internal peptide standard unless otherwise specified. Spectral data were manually analyzed to putatively identify ECM peptide peaks, especially those spatially expressed in regions of pathology. The analysis focused on the mean-spectrum statistic of maximum peak intensity, with the internal processing mode set to peak maximum and the peak interval width set to ±20 ppm. Segmentation analysis was performed in SCiLS Lab 2023c Pro using the k-bisecting method with the Manhattan metric. Prior to analysis, peak intensities were transformed using the natural logarithm.

Sample Preparation for LC-MS/MS Proteomics
Following MALDI-TOF MSI, samples were stained with hematoxylin and eosin to confirm the localization of pathological regions relative to annotations completed on another tissue section. Tissue sections were de-stained with a series of xylene and ethanol washes interspersed with Carnoy's solution [20]. A razor blade was used to macrodissect a selected subset of the slides, yielding four samples with primarily DCIS lesions and four with primarily IBC. For the COLase3 workflow, samples were placed in Eppendorf tubes and underwent COLase3 digestion overnight at 38 °C and 450 rpm [15]. The following day, samples were sonicated and underwent a second, 5 h COLase3 digestion to increase the abundance of peptides. For the tryptic and elastase workflows, samples underwent only a 5 h enzymatic digestion. For DCIS076, which underwent an elastase digest followed by a tryptic digest, the sample was pelleted and washed extensively prior to the tryptic digest. To remove undigested proteins, enzyme, and salts, a C18 StageTip (Thermo Fisher Scientific, Waltham, MA, USA) was used, followed by a ZipTip (Millipore Sigma, Burlington, MA, USA), before loading samples on the column.
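The two normalization schemes named in the MALDI-MSI subsection above (per-spectrum root-mean-square scaling for the eighteen-sample collagenase data, and division by the internal peptide standard elsewhere) can be expressed compactly. The following sketch illustrates both; the array layout is an assumption, and in the study these steps were carried out inside SCiLS Lab.

```python
import numpy as np

def normalize_spectra(peaks, mode="rms", std_intensity=None):
    """peaks: (n_pixels, n_peaks) intensity matrix.

    mode="rms": divide each spectrum by its root-mean-square value
    (used here for the eighteen-sample collagenase data).
    mode="standard": divide each spectrum by the per-pixel intensity
    of the internal peptide standard (std_intensity, length n_pixels),
    as used for the tryptic/elastase and high-resolution data.
    """
    if mode == "rms":
        scale = np.sqrt((peaks ** 2).mean(axis=1, keepdims=True))
    elif mode == "standard":
        scale = np.asarray(std_intensity)[:, None]
    else:
        raise ValueError("unknown normalization mode")
    return peaks / scale
```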
LC-MS/MS Peptide Sequencing
Peptide sequencing information was acquired using an EASY-nLC 1200 system (Thermo Fisher Scientific, Waltham, MA, USA) coupled to an Orbitrap Exploris 480 mass spectrometer (Thermo Fisher Scientific, Waltham, MA, USA). Prior to chromatographic separation, 2 µg of peptide was resuspended in solvent A (5% acetonitrile, 0.1% formic acid). Peptides were loaded onto a C18 reversed-phase column (Acclaim™ PepMap™ RSLC, 75 µm × 25 cm, 2 µm, 100 Å) with increasing solvent B (80% acetonitrile, 0.1% formic acid) from 0 to 35% over a 180 min gradient for collagenase digests and a 170 min gradient for tryptic/elastase digests. Samples were run at a flow rate of 300 nL/min. The Orbitrap was used to acquire MS1 data (60,000 resolution; maximum injection time: 25 ms; normalized AGC target: 300%). For collagenase MS2 data, charge states between 2 and 7 with a dynamic exclusion window of 20 s were analyzed. For tryptic and elastase MS2 data, charge states between 1 and 7 with a dynamic exclusion window of 20 s and a cycle time of 3 s were analyzed. The ion trap with HCD fragmentation (isolation window: 1.4-2 m/z; collision energy: 33%; maximum injection time: 40 ms; normalized AGC target: 100%) was used for MS2 scans. Thermo Scientific Xcalibur 4.5 software was used for data recording. MaxQuant was used for database searching for peptide identifications; peptides with MaxQuant scores below 70 were filtered out. The probability of a post-translational modification at a given site is denoted by a numerical value between zero and one in parentheses. For putative peptide identifications, mass spectrometry imaging peaks were matched, within 5 ppm mass accuracy, to extracellular matrix peptides identified in previously published databases [18,20].

Proteomic Analysis
MSFragger v. 20.0 was used for peptide identification for protein-level classifications with a false discovery rate of 0.01. Database search results from MSFragger were uploaded into Scaffold v. 5.3.0 for quantification and total spectrum counts. For collagenase digests, results were filtered with a protein threshold of 99.9% [64], a minimum of 3 peptides, and a peptide threshold of 98% [65]. For the tryptic and elastase digests of DCIS076, the protein threshold was set to 99.0% [64], with a minimum of 2 peptides and a peptide threshold of 99.0% [65].
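The 5 ppm matching of imaging peaks to LC-MS/MS-identified ECM peptides amounts to a tolerance search over a sorted mass list. A minimal sketch follows; function and variable names are illustrative, and the actual matching in the study was performed against the published ECM databases cited above.

```python
import bisect

def match_within_ppm(imaging_mz, library_mz, tol_ppm=5.0):
    """For each imaging m/z value, return the closest library m/z
    (e.g., an LC-MS/MS-identified ECM peptide mass) lying within
    tol_ppm, or None if no candidate falls inside the tolerance."""
    lib = sorted(library_mz)
    matches = []
    for mz in imaging_mz:
        tol = mz * tol_ppm / 1e6          # absolute tolerance in Da
        i = bisect.bisect_left(lib, mz - tol)
        best = None
        while i < len(lib) and lib[i] <= mz + tol:
            if best is None or abs(lib[i] - mz) < abs(best - mz):
                best = lib[i]
            i += 1
        matches.append(best)
    return matches

# e.g. match_within_ppm([1084.498], [1084.495, 1458.701]) -> [1084.495]
```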
Statistics
Data were summarized graphically and numerically for exploratory data analysis using descriptive statistics (e.g., mean, standard deviation, frequency, and relative frequency). Heatmaps and clustering analyses were performed with MetaboAnalyst 5.0, ClustVis, or Multiple Experiment Viewer. The natural logarithms of the peak intensities were imported into MetaboAnalyst 5.0 [66]. Multiple Experiment Viewer was used to create a Pearson correlation heatmap distinguishing DCIS from IDC pathologies. Otherwise, Euclidean distance was used to produce heatmaps, with clustering by the Ward method performed in ClustVis or MetaboAnalyst 5.0. Sparse Partial Least Squares Discriminant Analysis (sPLS-DA) was performed to discern whether peptide peaks could classify specimens based on different pathological features, including lesion architectural types [33]. The VolcaNoseR web tool was used to generate the volcano plot comparing the relative intensities of peptide peaks between DCIS and IDC pathologies [67]. Box plots and receiver operating characteristic (ROC) curves were generated using GraphPad Prism 10.0.2. Mann-Whitney tests (p < 0.05) were used to assess the significance of box plots, while Wilson/Brown tests (p < 0.05) were used for ROC analysis.

Gene Ontology (GO) Analysis
The Database for Annotation, Visualization, and Integrated Discovery (DAVID) was used for functional analysis of the tryptic proteomic data. Proteomic hits were imported into DAVID, and cellular component, molecular function, and biological process analyses were completed. Pathways were assessed for redundancy of proteomic hits, and the top ten descriptive terms with the least redundancy were reported.

Conclusions
DCIS is a noninvasive breast disease with the potential to progress to invasive cancer, and its management results in a significant amount of overtreatment. New biomarkers are needed that can identify lesions likely to progress to invasive cancer; identification of such markers will greatly improve patient management. Breast stroma forms the basis for clinical care throughout breast health. The current study reports that signatures from multiplexed proteomic imaging approaches can differentiate breast pathologies and highlights the potential of the collagen proteome to distinguish between DCIS and IDC. The data support that field cancerization is observed in the underlying extracellular proteome within the breast microenvironment and provide novel insight into breast heterogeneity. Overall, spatial, multiplexed proteomic analysis of the breast stroma microenvironment presents significant utility for understanding breast biology throughout breast health. The collagen proteome has high potential for clinical utility in differentiating breast pathologies and may be a novel avenue for markers that improve patient care.
Figure 1. Study workflow. H&E slides were annotated by a breast pathologist, with regions defined as DCIS (blue) or IBC (red or orange). On a subsequent tissue section, slides were prepared for collagenase digestion to target the extracellular matrix (ECM). Mass spectrometry imaging (MSI) was performed with matrix-assisted laser desorption/ionization-quadrupole time-of-flight (MALDI-QTOF) imaging. Four samples were annotated per pathological lesion type and underwent the collagenase MSI workflow with high-resolution imaging at an individual-lesion level. From the remaining eighteen-sample subset, specific slides were selected for further proteomic analysis, multiplexing either tryptic or elastase digestion followed by mass spectrometry imaging. This schema was created in Biorender.com.

Figure 2. Spatial mapping of the extracellular proteome defines DCIS histopathology. (A) The eighteen-sample cohort underwent the workflow depicted, beginning with pathological annotation followed by extracellular matrix (ECM)-targeted mass spectrometry imaging and ECM peptide identification. (B-I) See the continuation of this legend in the Results section above.

Figure 3. DCIS specimens report distinct fibrillar collagen profiles in pathological regions. (A) A total of 43 extracellular matrix peptides identified across tissue images distinguished between DCIS and IDC by an unpaired, two-tailed t-test (p < 0.01). (B) A volcano plot of peaks identified via LC-MS/MS reports the most significantly differentially expressed peaks between DCIS and IDC pathologies. Peaks with an absolute fold change greater than 0.5 between DCIS and IDC and −log(p-value) greater than or equal to 1.5 are shown in orange if expression was increased in IDC and in blue if decreased in IDC. The volcano plot was created with VolcaNoseR. (C) Box-and-whiskers plots of fibrillar collagen sequences differentially expressed between DCIS (n = 13) and IDC (n = 10) lesions in eighteen samples by the Mann-Whitney test (p < 0.05). ROC analyses of peaks (AUROC > 0.75 and p < 0.05 by the Wilson/Brown test) are shown adjacent to the box-and-whiskers plots. Ox denotes oxidation and HYP denotes hydroxylation of proline residues. (D) Location of each differentially expressed peptide (Mann-Whitney test, p < 0.05) within the protein sequence. (E) Spatial heatmaps of MALDI-QTOF imaging of m/z 1084.498 and m/z 1458.700 from two representative samples. Black annotations encircle IDC regions, while white annotations delineate DCIS regions. Mass differences between MALDI-QTOF imaging and LC-MS/MS were within 5 ppm mass accuracy.

Figure 4. Extracellular microenvironment contributes to intra-tumoral heterogeneity. (A) Pathologist-defined lesions are circled in black. Orange delineates an IBC lesion, while blue indicates the DCIS lesions. The red asterisk demarcates comedo necrosis and the blue asterisk delineates solid archetypes. For additional information on pathological evaluation, see Supplemental Table S1. (B) Peptide sequences are shown within the protein schema. (C) The top row shows a segmentation analysis demonstrating proteomic clustering across architectural patterns derived from extracellular matrix-targeted proteomics; clusters vary with spatial distance from a discrete invasive cancer site (region 34). The bottom row shows a representative collagen α1(I) chain peptide detected almost uniformly across the invasive cancer region (34) compared to the expression patterns surrounding the DCIS lesions (for example, regions 41 and 45). (D) Heatmap of LC-MS/MS-identified peptides using Euclidean distance reports differences and similarities across architectural patterns.

Figure 5. Distinct tryptic peptide profile defines pathological regions. (A) Workflow for tryptic digestion; this schema was created in Biorender.com. (B) Hematoxylin and eosin-stained images showing normal adjacent tissue and pathological annotations. (C) A segmentation analysis of tryptic peptides from 5 specimens, comprising 214,558 pixels and 1,104 peaks, demonstrates 10 uniquely localized proteomic clusters. (D) Top ten significant GO terms associated with differentially expressed peptides, covering cellular components (red), molecular functions (light blue), and biological processes (dark blue). (E) Differential expression patterns of tryptic peptides between normal adjacent tissue (NAT; n = 5) and tumors (n = 6) by a two-tailed t-test (p < 0.01); the tryptic digest targets both cellular and extracellular components. (F) Normal adjacent tissue and tumor separate based on sPLS-DA analysis of tryptic peptides. (G) Spatial heatmaps of three tryptic peptide peaks depict discrete localization to IDC, DCIS, and surrounding normal adjacent tissue.

Figure 6. Serial enzymatic digest reveals pathology-specific proteomes and proteomic field cancerization. (A) Workflow for serial enzymatic digestion, with the cellular localization of LC-MS/MS proteomic hits from each enzymatic digestion shown. Tissue was digested with collagenase to define stromal composition, with trypsin to capture cellular features and additional extracellular proteins, and with elastase to target elastin.

Table 1. Summary of patient characteristics. A total of 13 patients were used for the first part of the study for ECM-targeted mass spectrometry imaging; five patients had two specimens used. For a full list of the samples used, see Supplemental Figure S1. Within this table, patients were stratified according to pathology. Architectural patterns were determined by at least one pathologist. SD denotes standard deviation; NA denotes not applicable. * indicates that some DCIS tumor sizes were characterized as percentages or qualitatively (see Supplemental Tables for more information).
Photosensitized and Photothermal Stimulation of Cellular Membranes by Organic Thin Films and Nanoparticles

Conjugated polymers are increasingly exploited for biomedical applications. In this work, we explored the optical characteristics of conjugated polymers of variable chemical structures at multiple levels relevant to biological interfacing, from fluorescence yield to their influence on cellular membrane potential. We systematically compared the performance of conjugated polymers as cast thin films and as nanoparticles stabilized with amphiphilic polyethylene glycol-poly(lactic-co-glycolic acid) (PEG-PLGA). We assessed, in both the dark and under illumination, the stability of key optoelectronic properties in various environments, including air and biologically relevant physiological saline solutions. We found that photoreduction of oxygen correlates with nanoparticle and film degradation in physiologically relevant media. Using patch-clamp recordings in cell lines and primary neurons, we identified two broad classes of membrane potential response, corresponding to photosensitizer- and photothermal-mediated effects. Last, we introduced a metric named OED50 (optical energy for 50% depolarization), which conveys the phototoxic potency of a given agent and thereby its operational photo-safety profile.

INTRODUCTION
Organic semiconductors (OSCs) have emerged as a versatile class of materials with a wide range of applications in biophotonics as photoemitters and phototransducers. They are employed in two ways: 1) encapsulated in solid-state devices that avoid direct contact with the biological environment, and 2) open to the environment, with the semiconductor material in direct contact with the biological medium. Examples of the former approach include organic light-emitting diodes (OLEDs) for optogenetic stimulation (Steude et al., 2015; Matarèse et al., 2019; Morton et al., 2019), while the latter approach, which is the focus of this study, uses predominantly passive devices. As phototransducers, OSCs in contact with aqueous systems or biological tissue have been studied for photothermal cellular stimulation (Martino et al., 2015; Feyen et al., 2016), photosensitized cellular signaling (Abdel Aziz et al., 2020), and as photosensitizer interfaces for optically targeted cell death/ablation (Li et al., 2020). As photoemitters, their most widespread application is in combination with optical imaging microscopy for anatomical and sub-cellular structural studies in medicine and biological research, including single-molecule imaging (Jin et al., 2018). Many of the listed applications exploit nanoparticles based on organic conjugated semiconductors (CNPs, conjugated nanoparticles), which have attracted increasing attention for their desirable properties, including their nanoscale size, good photostability, high fluorescence efficiency in the visible or near-infrared (NIR) regions of the electromagnetic spectrum, biocompatibility, and absence of toxic heavy-metal ions (Abelha et al., 2020). The light-induced applications of OSCs described above exploit the relaxation processes following photoexcitation, such as photoluminescence, heat generation due to internal conversion of the excited state, or electron transfer to a nearby molecule leading to the generation of radical chemical species. These relaxation processes are coupled and in dynamic competition, which can lead to non-linear responses to light exposure (Peters et al., 2016).
On the biological side, the effects of undesired relaxation pathways must be carefully considered to pair OSCs effectively and reliably with their intended applications. Due to the complexity of the biochemical systems in which OSCs operate, and the absence of 100% efficiency for any relaxation process, it is important to investigate the multiple effects that OSCs exert on their target system. For example, it is usually undesirable for photoemitter platforms intended for live-cell imaging to act as efficient oxygen photosensitizers, as the resultant photogenerated radical species may impair cellular viability. Conversely, it is critical to assess the effect of the biological environment on the OSCs, to ensure a suitable OSC is chosen for a given clinical, diagnostic, or research application (e.g., resistance to sterilization methods and stability in aqueous systems). In this work, we investigated the photophysical properties of several OSCs and evaluated their biological effects on the cell membrane to determine their suitability for bioelectronic applications such as live-cell imaging, photo-stimulation, and photo-ablation. In particular, we studied conjugated polymers of varied chemical structures, based on polyfluorene (PF) and polyphenylene vinylene (PPV), in both solid thin-film form and as nanoparticle dispersions stabilized by polyethylene glycol-poly(lactic-co-glycolic acid) (PEG-PLGA). We assessed the photostability of the conjugated polymers in these two forms under various environmental conditions, ranging from air to biologically relevant cell culture media, and over a wide range of irradiation times. Patch-clamp recordings of human embryonic kidney cell lines (HEK293T) and primary hippocampal neurons revealed two distinct membrane voltage responses to light stimulation: (i) reversible multiphasic photothermal responses and (ii) irreversible depolarization driven by photosensitization of molecular oxygen. The observed responses of cellular membranes to varied material classes, and as a function of irradiation power and duration, highlight the importance of broadly characterizing and evaluating optical probes such as conjugated polymers for biological applications.

MATERIALS AND METHODS
Thin-Film and Nanoparticle Preparation. Solid-state thin films were prepared from 5 mg/ml solutions using a spin speed of 1,200 rpm. Conjugated polymer nanoparticles were prepared by the solvent displacement technique, with a weight ratio of 1:10 (conjugated polymer:PEG5K-PLGA55K) (Abelha et al., 2019). A total of 4 ml of THF solution (0.9 mg/ml polymers) was added dropwise to 20 ml of water at room temperature, and the mixture was stirred until the THF had completely evaporated. Formulations containing 100% PEG5K-PLGA55K and 100% CN-PPV were also prepared at the same polymer concentration. The resulting solutions had a total solid content of 0.2 mg/ml and were concentrated to a minimum of 1.0 mg/ml using Amicon® Ultra-15 100K centrifugal filter devices (Merck Millipore, Tullagreen, Ireland).

Hydrodynamic Diameter and Zeta Potential Characterization. Hydrodynamic diameters were assessed by dynamic light scattering (DLS) using a Zetasizer NanoZS (Malvern Instruments Ltd., United Kingdom) at 25 °C, with a scattering angle of 173° and a final polymer concentration of 50 μg/ml. The zeta potential at 25 °C, after sample dilution in 10 mM NaCl to a final polymer concentration of 20 μg/ml, was measured using a Zetasizer NanoZS (Malvern Instruments Ltd., United Kingdom).
The product yield and spectroscopic characterization were performed according to previously described methods for similar nanoparticles. Absorption spectra of solid-state thin films were measured using a spectrophotometer with a photomultiplier tube (PMT) detector for the ultraviolet and visible regions, with a blank glass substrate as a reference. Absorption spectra of conjugated polymers in THF and of organic semiconductor nanoparticles (CNPs) in distilled water were measured using a Shimadzu UV-3101 NIR-Vis-NUV spectrophotometer. Emission spectra of conjugated polymers in THF and of CNPs in distilled water were measured using an FLS980 photoluminescence spectrophotometer with a xenon arc lamp covering the range 230-1,000 nm; samples were excited at their absorbance peak values, and signals were captured using a single-photon-counting PMT detector. Photoluminescence quantum yield (PLQY) measurements of solid-state thin films and of CNPs in distilled water were performed in an integrating sphere (De Mello et al., 1997) under N2 flow using an Andor Shamrock spectrometer and an Andor iDus DU420A-BVF Si detector. Time-resolved PL lifetime measurements of solid-state thin films and CNPs in distilled water were performed with a time-correlated single photon counting (TCSPC) module SPC830 (Becker & Hickl, Germany). The module was synchronized to a Ti:Sapphire pulsed laser source (680-1,080 nm, 80 MHz, 140 fs, Chameleon Vision II, Coherent Inc., Germany). TCSPC data were analyzed by fitting to a two-exponential decay.

Cell Cultures. Human embryonic kidney (HEK293) cells (from ATCC) and human epithelial adenocarcinoma (HeLa) cells were cultured in Dulbecco's modified Eagle medium (DMEM) supplemented with 2 mM L-glutamine. SV40-immortalized human microglial cells were cultured at 37 °C/5% CO2 in RPMI-1640 with sodium bicarbonate (Sigma-Aldrich). All cell culture media were supplemented with 10% heat-inactivated fetal bovine serum (FBS; Life Technologies), 2 mM L-glutamine, 100 μg/ml penicillin, and 100 μg/ml streptomycin (Life Technologies). Cells were initially sub-cultured in polystyrene flasks and were detached using 0.25% trypsin (2.5 g/L; Sigma). Primary cultures of hippocampal neurons were prepared from embryonic day 18 rat embryos (Charles River). Briefly, hippocampi were dissociated by a 15-min incubation with 0.25% trypsin at 37 °C, and cells were plated on poly-L-lysine-coated substrates (0.1% PLL in Borax solution overnight) in Neurobasal medium supplemented with 2 mM L-glutamine, 2% B27, 100 μg/ml penicillin, and 100 μg/ml streptomycin, with an additional 10% horse serum (Life Technologies) for the first 4 h after plating. Glass coverslips were thermally sterilized at 120 °C prior to overnight incubation in 0.1% PLL solution before the transfer of HEK293, HeLa, and SV40 cell lines. Cultures were maintained at 37 °C in a humidified atmosphere containing 5% CO2.

Reactive Oxygen Species (ROS) Analysis. ROS generation by HeLa, HEK, and SV40 cell lines was measured by confocal microscopy and a fluorescence TECAN Spark microplate reader. ROS measurements were performed in duplicate by staining cells with the Cellular ROS Assay Kit (Deep Red) (ab186029) according to the manufacturer's instructions (Abcam, Cambridge, United Kingdom); Ex/Em 650/675 nm. Cells were grown on glass (negative control) or on PFO, F8BT, CN-PPV, SY-PPV, SO-PPV, and MEH-PPV surfaces in 5% CO2/95% O2 at 37 °C for 48 h.
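The two-exponential decay fit mentioned for the TCSPC data can be reproduced with a standard least-squares routine. The sketch below ignores deconvolution of the instrument response function and assumes a background-subtracted histogram — simplifications relative to a full TCSPC analysis; variable names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Two-exponential PL decay model used for the TCSPC data."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def fit_lifetimes(t_ns, counts):
    """Fit a background-subtracted TCSPC histogram (counts vs. time
    in ns) to the biexponential model and return the fit parameters
    plus an amplitude-weighted mean lifetime (a common summary)."""
    p0 = (counts.max() * 0.7, 0.5, counts.max() * 0.3, 2.0)
    popt, _ = curve_fit(biexp, t_ns, counts, p0=p0,
                        bounds=(0, np.inf), maxfev=10000)
    a1, tau1, a2, tau2 = popt
    tau_mean = (a1 * tau1 + a2 * tau2) / (a1 + a2)
    return popt, tau_mean
```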
ROS measurements were carried out in the dark and following photoexcitation at 0.8 or 5 mW/mm2 using narrow-bandwidth Lumencor LEDs with peak emission at 390 nm for PFO, 473 nm for CN-PPV, F8BT, SY-PPV, and SO-PPV, and 548 nm for MEH-PPV.

Irradiation Procedures for Patch-Clamp Recordings. Photo-irradiation of HEK293T cells and neurons grown on thin films of conjugated polymers, or in the presence of CNPs, was carried out on a Nikon FN1 upright microscope (Nikon Instruments) using a Spectra X LED system (Lumencor) to target the absorption maxima of the conjugated polymer under test via appropriate dichroic mirrors and a ×16 water-immersion objective. The peak illumination wavelength was 390 nm for PFO, 473 nm for CN-PPV, F8BT, SY-PPV, and SO-PPV, and 548 nm for MEH-PPV. Gate timing was controlled by the digital outputs of the HEKA patch-clamp amplifier.

Safety. No unexpected significant hazards or risks were associated with the reported work. All animal manipulations and procedures were performed in accordance with the guidelines established by the European Community Council (Directive 2012/63/EU of 22 September 2010) and were approved by the Italian Ministry of Health.

Statistical Analysis. Statistical tests were selected based on the data distribution. Normal distribution of the data was assessed using the D'Agostino and Pearson omnibus normality test (p = 0.05). The sample size is indicated as the number of recorded cells or culture replicates (n). Analyses were carried out with GraphPad Prism (GraphPad Software Inc.) and custom Matlab routines (MathWorks).

RESULTS
Light-Emitting Nanoparticle and Solid-State Thin-Film Conjugated Polymer Properties. We selected six widely studied, commercially available light-emitting conjugated polymers that exhibit efficient photoluminescence, with optical band gap energies (Eg) ranging from 2.11 to 2.8 eV. Two of the conjugated polymers were PF derivatives (F8BT and PFO) and four were PPV derivatives (SY-PPV, SO-PPV, MEH-PPV, and CN-PPV). For each polymer, thin films were prepared by spin coating, while organic semiconductor nanoparticles (CNPs, conjugated nanoparticles) were prepared by solvent displacement. For all the preparations, cellular responses during photostimulation were recorded using patch-clamp recordings (see schematic in Figures 1A,B). Key photophysical properties, including ROS-generating capacity and fluorescence stability, were also evaluated. The toolkit of green- to red-light-emitting CNPs stabilized with PEG5K-PLGA55K was prepared via the nano-precipitation method (Abelha et al., 2019). Five types of CNPs were prepared: F8BT, SY-PPV, and SO-PPV were encapsulated with PEG-PLGA, and two variants of CN-PPV nanoparticles were made, with and without PEG-PLGA (100% CN-PPV, neat CNPs) (Figures 1C-F). Due to low product yield, we did not investigate PFO and MEH-PPV in nanoparticulate form. Nanoparticles containing SY-PPV and SO-PPV presented the highest product yields (≥95% by weight), followed by CN-PPV (66%), F8BT (56%), and CN-PPV without PEG5k-PLGA55k (46%). Yields closely matched previous reports for CN-PPV and F8BT CNPs produced with PEG5k-PLGA55k (Kemal et al., 2017).
Except for the F8BT CNPs, the CNPs presented a red shift in emission compared to the same conjugated polymer in tetrahydrofuran, indicating interactions between segments of the polymer chain and intramolecular energy transfer within the particles (Abelha et al., 2019; Chua et al., 2005; Hwang and Kahn, 2005; Thompson et al., 2005; Chasteen et al., 2006; Nevil et al., 2012; Walker et al., 2014). Encapsulation of conjugated polymers within PEG5K-PLGA55K significantly increased the CNP hydrodynamic diameter in comparison to PEG5K-PLGA55K alone, with the increase depending on both the conjugated polymer chemical structure and molecular weight. SO-PPV/PEG-PLGA and SY-PPV/PEG-PLGA CNPs showed the largest hydrodynamic diameters of 150 nm (Figure 1C), possibly because their higher molecular weight leads to a higher viscosity of the organic solution. Embedding CN-PPV polymer within a PEG5K-PLGA55K matrix reduced the zeta potential of the nanoparticles compared to pure PEG-PLGA, while 100% CN-PPV CNPs were more electronegative than their PEG-PLGA counterparts, consistent with previous reports.

In the form of both solid-state thin films and CNP suspensions, all polymers featured large, vibrationally resolved absorption bands and efficient visible emission (Figure 2). Key photophysical properties of all thin films and CNPs, namely absorption (Abs) spectra, photoluminescence (PL) spectra, PL lifetime (LT), and photoluminescence quantum yield (QY), are summarized in Supplementary Tables 1-2. To gain an overview of the materials' behavior in cell-culture systems, we studied the stability of fluorescence in culture medium, in darkness, and under illumination. Atmospheric impurities in air and components of physiological media, such as essential ions, peptides, lipids, enzymes, growth factors and other additives, and products of metabolism, can play an important role in static quenching (Seidel et al., 1996; Papadopoulou et al., 2005). Although there were no observable changes in the shape of either the absorbance or photoluminescence spectrum after exposure to cell-culture media for 672 h in the dark, all materials showed a reduction in PL intensity in the range of 20%-80% (Supplementary Figure 1). Degradation was dramatically accelerated by photo-irradiation, and the materials could be split into two groups based on their photodegradation rates. Group 1, composed of PFO, F8BT, and CN-PPV, showed the largest decreases in PL intensity: 54% for F8BT, 58% for PFO, and 61% for CN-PPV when exposed to 5 h of irradiation at 0.1 mW/mm2 (see Methods for details). Group 2, composed of SY-PPV, SO-PPV, and MEH-PPV, was moderately stable, with decreases in photoluminescence of 11% for SY-PPV, 22% for SO-PPV, and 33% for MEH-PPV over the same observation window. The difference in degradation rates between the groups largely persisted across all tested irradiation durations and powers, as shown in the Supporting Information (Supplementary Figure 1). The two groups could also be distinguished by their different photodegradation rates in nanoparticle suspensions (Supplementary Figure 1), with the CN-PPV- and F8BT-based nanoparticles again showing faster loss of photoluminescence intensity.
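The group-1/group-2 split above is reported as fractional PL loss at fixed exposure times. If one additionally assumes mono-exponential photobleaching — an assumption the text does not make — the loss can be converted into a rate constant for comparing conditions:

```python
import numpy as np
from scipy.optimize import curve_fit

def photobleach_rate(t_h, pl):
    """Fit PL(t) = PL0 * exp(-k t) and return the rate constant k (1/h).
    A mono-exponential bleaching model is an assumption of this sketch;
    the study reports only fractional PL losses at fixed time points."""
    pl0 = pl[0]
    model = lambda t, k: pl0 * np.exp(-k * t)
    (k,), _ = curve_fit(model, t_h, pl, p0=(0.1,))
    return k

# Under this model, the 61% loss over 5 h reported for CN-PPV would
# correspond to k = -ln(0.39)/5 ≈ 0.19 h^-1.
```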
Grouping Conjugated Polymers by ROS Generation of Thin Films and Nanoparticles The photoexcitation of OSCs in physiological systems can lead to the formation of reactive oxygen species (ROS) that can interact with proteins, fatty acids, or nucleic acids, leading to oxidative damage. These processes can be exploited for targeted cell ablation, in which focused light delivery is employed to impair cell viability in a spatially restricted manner, for example, in cancerous tumors (Li et al., 2020). At low concentrations, ROS species act as intracellular signaling molecules, opening the possibility of using conjugated polymer nanoparticles to modulate signal cascades with subcellular resolution (Moros et al., 2018; Antognazza et al., 2019). To track photogenerated oxygen species in physiological settings, we employed a near-infrared ROS detection probe with high sensitivity to superoxide (O2−) and hydroxyl (OH·) radicals and monitored the photosensitizing potential of the OSC films and CNPs using microglia, HEK, and HeLa cell lines as human cell models. ROS generation was studied by applying a near-infrared fluorescent detection assay (see Methods) to live HEK293, HeLa, and SV40 cells grown on different surfaces (uncoated glass and glass coated with PFO, F8BT, CN-PPV, SY-PPV, SO-PPV, and MEH-PPV). Representative images overlaying bright-field transmission images of SV40 cells with confocal fluorescence microscopy of the fluorescent ROS indicator, following sample irradiation, are shown in Figure 3A. Figure 3B shows, for the three cell lines on the various substrates, the fluorescence intensity measured after exposing the samples for 1 min to 0 (dark), 0.8, or 5.0 mW/mm2 narrowband LED illumination (see Methods). Comparative data are also provided for cells grown in a culture medium containing 30 μg/ml and 300 μg/ml of CNPs. In the absence of illumination, cells grown on glass, cells grown on the polymer films, and cells incubated with nanoparticles showed broadly equivalent fluorescence signals, implying similar baseline ROS levels. Irradiating the cells on the polymer films with 0.8 and 5.0 mW/mm2 led to a significant increase in the fluorescence signal for all conjugated polymers, indicating a rise in ROS levels. [Figure 3 caption: fluorescent assay quantifying ROS intensity in the dark (gray bars) and after irradiation at 0.8 mW/mm2 (blue) or 5 mW/mm2 (pink) at the respective peak excitation wavelengths (390 nm for PFO, 473 nm for CN-PPV, F8BT, SY-PPV, and SO-PPV, and 548 nm for MEH-PPV), for HEK293, HeLa, and SV40 cells grown on conjugated polymer thin films, incubated with PEG-PLGA-encapsulated CNPs at 30 or 300 μg/ml, with 100% neat CN-PPV nanoparticles, or with controls (glass, di-water, neat PEG-PLGA); some bars are magnified by a factor of five or 10 (red insets).] However, the effect was substantially stronger for the group-1 polymers, with 0.8 mW/mm2 (blue columns) and 5 mW/mm2 (pink columns) irradiation of PFO, F8BT, and CN-PPV causing a ~15-42-fold increase in ROS indicator intensity, compared to a 0.98-1.09-fold change for measurements on glass. For the group-2 polymers, increases in indicator intensity were less than three-fold.
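A minimal sketch of the fold-increase metric used above is shown below: the mean indicator fluorescence after irradiation is divided by the dark (baseline) mean for the same substrate. The per-cell intensity values are hypothetical placeholders, not measured data.

```python
import numpy as np

def ros_fold_increase(dark, irradiated):
    """Fold increase of ROS indicator fluorescence relative to the dark baseline."""
    dark, irradiated = np.asarray(dark, float), np.asarray(irradiated, float)
    return irradiated.mean() / dark.mean()

# Hypothetical per-cell fluorescence (a.u.) at 5 mW/mm^2 on two substrates.
print(ros_fold_increase([102, 98, 95, 110], [3500, 4100, 3900, 3700]))  # group-1-like film
print(ros_fold_increase([100, 97, 104, 101], [150, 180, 160, 170]))     # glass control
```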
The cells incubated with CNPs exhibited broadly similar behavior, and samples incubated with 300 μg/ml F8BT and CN-PPVbased nanoparticles showed respective increases in assay fluorescence of 6.2-19 (F8BT/PEG-PLGA), 6.6-23.9 (CN-PPV/PEG-PLGA), and 7.2-24 (CN-PPV neat), compared to 1.06-1.14 for cells seeded with pure PEG-PLGA nanoparticles, while those incubated with 300 ug SY-PPV/PEG-PLGA and SO-PPV/PEG-PLGA showed weaker increases in the range of 1.26-1.44 and 1.24-1.61, respectively. (No data are provided for cells incubated with PFO-based nanoparticles since we could not obtain unaggregated particles using the preparation method described). Weaker increases in the fluorescence signal were observed at a 30 μg/ml incubation concentration. In all cases, the response to photostimulation was found to be broadly independent of the cell types used in the present study. Microglial (SV40), epithelial (HEK293), and human epithelial adenocarcinoma (HeLa) cells all presented ROS increases upon photoexcitation when in contact with OSC thin films and nanoparticles. We concluded that all the tested materials lead to significant ROS formation (in both thin-film and nanoparticulate form) when illuminated at their peak absorption wavelengths, with polymers containing fluorene and cyano-terephthalylidene generating the highest ROS levels. Divergent Responses of Cell Membrane Potential to Photoexcitation Next, we investigated the effects of photo-stimulation on the cell membrane potential of HEK293T cells. This cell line does not express the ion channels required to generate rapid electrical currents across its plasma membrane, allowing observations of effects that translate across mammalian cell types. We first cultured HEK293T cells on coverslips that had been spincoated with one of the six polymers. Forty-eight hours after plating the cells, light-evoked responses were assessed by the whole-cell patch-clamp, in current-clamp (I = 0) configuration ( Figure 4). The conjugated polymers were excited at their respective absorbance peaks (see Methods) for 500 ms, allowing for direct comparison to our previous work on photothermal stimulations with polythiophene-type materials (Martino et al., 2015;Feyen et al., 2016) (Figures 4A,B). Two types of responses could be identified across the materials, with a grouping of responses following the ROS production capacity of the materials inferred from Figure 1B. Photostimulation of PFO, F8BT, and CN-PPV led to a rapid depolarization of the membrane potential which did not reverse when illumination ceased (group 1; Figure 4A). The photo-response of cells grown on the group 2 materials (SY-PPV, SO-PPV, and MEH-PPV) was reversible and characterized by an initial depolarization followed by sustained hyperpolarization during illumination (group 2; Figure 4B). For both response types, the magnitude of the membrane potential variation during light stimulation scaled positively with the irradiance level ( Figures 4C-E). For group-2 OSCs, the kinetics and waveform of the membrane potential changes (initial depolarization followed by a sustained membrane hyperpolarization) qualitatively matched those we have previously reported for HEK293T cells grown onto polythiophene-type thin films (Martino et al., 2015;Feyen et al., 2016). We have previously shown that this multiphasic membrane response to intense and prolonged light stimuli is attributable to thermal stimulation following non-radiative recombination of photo-excited electrons and holes (Martino et al., 2015). 
On the cellular side, the response is mediated by increased membrane capacitance, the temperature dependence of the transmembrane potential, and temperature-mediated activations of membrane currents. To directly assess the relationship between the evoked responses and the peak surface temperature elicited by photostimulation, we measured the temperature variation at the film surface for each of the tested materials using the calibrated pipette resistance method (Yao et al., 2009). Surface temperature measurements were carried out using the highest irradiance power tested per film during cellular photostimulation. Light-evoked increases in surface temperature were present for all materials ( Figure 4F) with CN-PPV films showing the lowest (0.28 ± 0.02°C) and SY films the largest (5.29 ± 0.01°C) temperature rise. The variance in surface temperature measured across the materials can be largely accounted for by the absorbance of the OSC films and the employed irradiance power (Supplementary Figure 2). In agreement with our previous reports, we found that the magnitude of depolarization and hyperpolarization of cells plated on the group-2 materials scales as a function of the measured surface temperature increase (see Supplementary Figure 3; SO-PPV (3.34 ± 0.01°C), SY-PPV (5.29 ± 0.01°C), and MEH-PPV (0.71 ± 0.02°C)). In contrast, given the intermediate to low temperature increase measured for PFO (1.54 ± 0.01°C), F8BT (1.45 ± 0.02°C), and CN-PPV films (0.28 ± 0.01°C), we deemed it unlikely that thermal membrane ablation contributes to the depolarizations observed with group-1 OSCs. An interesting question is how far a generated ROS can diffuse before decaying via a unimolecular process or being scavenged by a target molecule. Given that all tested materials generated ROS, albeit to substantially varying levels, we sought to assess if stimulus duration or power would modulate the membrane response. We hypothesized that under prolonged illumination, ROS species could accumulate to high concentrations in the cleft between the cell membrane and the film, increasing the probability of interaction events with membrane components, and thus potentially leading to an alteration of the photo-mediated response. To analyze the consequences of prolonged illumination and better measure the kinetics of the depolarizations observed with the group-1 materials, PFO, F8BT, and CN-PPV films, we employed 1-min light stimuli at varying irradiances, while continuously recording cellular membrane potential ( Figure 5A). For the group-1 materials, we observed that prolonged illumination leads to a rapid depolarization of the membrane potential with a response plateau near 0 mV, indicating a loss of membrane integrity and a resulting drop in membrane resistance. For PFO, this response was apparent during 1-min stimulation with an irradiance power density of 0.02 mW/mm 2 , which is close to typical daytime irradiances. The 0-mV plateau response became observable within the 1-min illumination time for F8BT and CN-PPV for irradiances ≥0.8 mW/mm 2 . Interestingly, during prolonged illumination at high irradiance levels (>1 mW/mm 2 ), the membrane potential responses of cells grown on the group 2 OSCs displayed irreversible depolarization components in their responses ( Figure 5A; bottom rows). 
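For reference, the calibrated pipette resistance thermometry mentioned above (Yao et al., 2009) can be sketched as follows. The calibration pairs, the resistance readings, and the assumption of a log-linear dependence of resistance on temperature over this small range are illustrative choices of this sketch, not the exact calibration used in the study.

```python
import numpy as np

# Calibration: bath temperature (deg C) vs. measured open-pipette resistance (MOhm).
# Hypothetical values; resistance falls as electrolyte conductivity rises with temperature.
T_cal = np.array([25.0, 30.0, 35.0, 40.0, 45.0])
R_cal = np.array([5.00, 4.62, 4.28, 3.98, 3.71])

# Assume ln(R) is approximately linear in T over this range and fit the calibration.
slope, intercept = np.polyfit(T_cal, np.log(R_cal), 1)

def temperature_from_resistance(r_mohm):
    """Invert the calibration to estimate temperature from pipette resistance."""
    return (np.log(r_mohm) - intercept) / slope

# Hypothetical readings before and during illumination at the film surface.
R_baseline, R_light = 5.00, 4.82
dT = temperature_from_resistance(R_light) - temperature_from_resistance(R_baseline)
print(f"Estimated light-evoked surface temperature rise: {dT:.2f} deg C")
```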
Although shorter 500-ms illumination resulted in transient and reversible changes in membrane potential ( Figure 4A), prolonged illumination at higher irradiance levels led to irreversible depolarization, similarly to the behavior observed for cells grown on PFO, F8BT, and CN-PPV. To directly compare the kinetics of the irreversible depolarization across irradiance power and materials, we quantified the cumulative optical energy delivered at the time the membrane potential reached 50% depolarization relative to the resting potential and 0 mV (OED 50 ). The values were extracted based on empirical observation or linear extrapolation of the observed response during illumination. Recordings with depolarizations of less than 5% of RMP were considered to show a safe response and were not included in OED 50 analysis. The OED 50 value metric is intended to capture the phototoxic potential for different material classes, and to provide an indication of a safe operating range. The OED 50 value calculated per material and irradiance is reported in Figure 5B. The cells grown on PFO showed the fastest rates of depolarization, and correspondingly the lowest OED 50 values. The OED 50 values of the group-1 materials (PFO, F8BT, and CN-PPV) were separated by several orders of magnitude from the group-2 materials (SY-PPV, SO-PPV, and MEH-PPV). Furthermore, group-2 materials only displayed observable OED 50 at irradiances >0.86 mW/mm 2 for SO-PPV and SY-Frontiers in Bioengineering and Biotechnology | www.frontiersin.org July 2022 | Volume 10 | Article 932877 8 PPV, and MEH-PPV for irradiance powers >5.7 mW/mm 2 , yielding upper bound estimates for the safe operating window of these materials. Response amplitudes in mV across the tested conditions are reported in Supplementary Table 3. A 'close-up' of the initial 10 s of a cell grown on SY-PPV and irradiated with 14 mW/mm 2 light is presented in Supplementary Figure 4. We found that the mean OED 50 measured across irradiance powers shows an inverse power law relationship to the ROS generation capacity of the films inferred from the reporter fluorescence in HEK293 cells ( Figure 5C; linear regression fit log(OED 50 ) vs. log(fold increase in ROS fluorescence); r 2 = 0.9143, p-value = 0.003), denoting the observed positive relationship between ROS generation capacity of a material and the rate at which it induces membrane depolarization. Observation of an inverse power law relationship between measured ROS evolution (fold increase in reporter fluorescent intensity) and mean OED50 value obtained across irradiance powers per material (OED50 [mJ/mm 2 ] mean ± SEM; PFO (0.37 ± 0.02), F8BT (12.69 ± 0.81), CN-PPV (8.35 ± 0.93), SY-PPV (9.6x10 3 ± 1.52x10 3 ), SO-PPV (1.15x10 3 ± 0.18x10 3 ), and MEH-PPV (6.98x10 3 ± 3.76x10 3 ). Frontiers in Bioengineering and Biotechnology | www.frontiersin.org July 2022 | Volume 10 | Article 932877 9 We next investigated the spectral dependence of the irreversible stimulations observed with PFO and F8BT films. For both materials, cell membrane responses increased as the excitation wavelength was shifted toward the respective absorption maxima (Supplementary Figure 5). Focusing on F8BT, we carried out two series of experiments to directly assess the involvement of oxygen in mediating the responses. In the first one, we coated a 2 µm layer of SU8 photoresist onto F8BT, reducing oxygen availability and generating a physical barrier for quenching generated radical species. This fully abolished the photo-response. 
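Returning to the OED 50 metric defined earlier in this section, the sketch below shows one way the cumulative optical energy at 50% depolarization and the log-log regression against ROS generation could be computed. The interpolation-based extraction and all numerical values are illustrative assumptions, not the authors' exact procedure or data.

```python
import numpy as np
from scipy import stats

def oed50(time_s, vm_mv, irradiance_mw_mm2, v_rest=-70.0, v_target=0.0):
    """Cumulative optical energy (mJ/mm^2) delivered when Vm reaches 50% depolarization."""
    v50 = v_rest + 0.5 * (v_target - v_rest)   # halfway between rest and 0 mV
    t50 = np.interp(v50, vm_mv, time_s)        # assumes vm_mv rises monotonically
    return irradiance_mw_mm2 * t50             # mW/mm^2 * s = mJ/mm^2

# Hypothetical 1-min recording at 0.8 mW/mm^2: Vm rises from rest toward 0 mV.
t = np.linspace(0, 60, 7)                              # s
vm = np.array([-70, -55, -40, -25, -12, -5, -1.0])     # mV
print(f"OED50 = {oed50(t, vm, 0.8):.1f} mJ/mm^2")

# Hypothetical per-material mean OED50 (mJ/mm^2) and ROS fold increase, fitted on log axes.
oed = np.array([0.4, 12.0, 8.0, 9.6e3, 1.2e3, 7.0e3])
ros = np.array([40.0, 18.0, 22.0, 1.3, 1.5, 2.5])
fit = stats.linregress(np.log10(ros), np.log10(oed))
print(f"power-law exponent = {fit.slope:.2f}, r^2 = {fit.rvalue**2:.2f}")
```

A negative log-log slope with high r^2 corresponds to the inverse power-law relationship reported above between ROS generation capacity and OED 50.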
Given previous reports on singlet oxygen production by F8BT in water (Spada et al., 2018), we assessed whether the introduction of the singlet oxygen scavenger sodium azide (NaN 3 ) in the extracellular solution would slow the kinetics of the irreversible depolarization. Indeed, the addition of 100 mM NaN 3 significantly slowed cellular depolarizations with respect to a NaN 3 -free control, as reflected by the higher OED 50 values of 16.67 ± 5.5 mJ/mm 2 versus 4.08 ± 2.3 mJ/mm 2 for recordings with and without NaN 3 , respectively (see Supplementary Figure 5; Mann-Whitney U-test, p < 0.01). The results indicate that singlet oxygen production, following photoexcitation of F8BT films, contributes to the measured cellular membrane response. Together with ROS detection, which has the sensitivity to superoxide and hydroxyl evolution, multiple photochemical processes may be active at the polymer-cell interface. On the basis of the OED 50 values, we consider group-2 materials preferable for imaging applications due to reduced ROS generation. In contrast, notwithstanding their high fluorescence quantum yields (see Supplementary Table 1), the low OED 50 values of group 1 suggest suitable applications in cell ablation, sterilization, or photocatalysis. Dose-dependent Light-Induced Depolarization in HEK293 and Irreversible Depolarization and Firing in Primary Neurons by CNPs We next checked the functional consequences of CNP illumination on the membrane electrical properties. Using patch-clamp recordings, we sought to assess whether the OED 50 values obtained in thin films hold predictive validity for the phototoxicity of the OSCs in nanoparticle form. We plated HEK293T cells on glass coverslips and, 12 h after cell Frontiers in Bioengineering and Biotechnology | www.frontiersin.org July 2022 | Volume 10 | Article 932877 seeding, incubated the cells for 24 h with either 30 or 300 μg/ml of CNPs. Using whole-cell patch-clamp recordings, we tracked the membrane potential response to illumination at various power densities of incident light (0.8, 3.4, and 5 mW/mm 2 ; 1 min). In response to light, two distinct responses were readily observed. While cells incubated with PEG-PLGA controls, SO-PPV/PEG-PLGA or SY-PPV/PEG-PLGA nanoparticles were broadly unaffected by light, the membrane potentials of cells incubated with F8BT/PEG-PLGA, CN-PPV/PEG-PLGA, or 100% CN-PPV nanoparticles displayed depolarizations that were time-locked to the light onset ( Figure 6A). No consistent relationships could be identified between light intensity and cell responses in this data set ( Figure 6A), likely reflecting a variable quantity of membrane bound and internalized CNPs by different cells. To quantify the depolarization effect across the tested CNP concentrations, we grouped the recorded cells across irradiance per CNP type. This analysis captured a dose-dependent increase in depolarization for F8BT/PEG-PLGA and CN-PPV/PEG-PLGA nanoparticles (Mann-Whitney U-test; p < 0.05, depolarizations 30 vs. 300 μg/ml) and a similar trend for CN-PPV nanoparticles (mean ± std; 6.02 ± 5.28 mV vs. 10.39 ± 7.91, Mann-Whitney U-test; p = 0.19) ( Figure 6B). Importantly, for all three CNPs that induced a depolarizing response, the membrane potential did not recover in the 30-s after illumination ended. The materials with the lowest OED 50 values in thin-film form also modulated the cell membrane potential in a similar irreversible fashion when they were in nanoparticle form. 
Given their nanoscale size and propensity for uptake by cells, a potential application of CNPs is in the labeling and manipulation of neural circuits. We examined how the depolarization observed in cell lines would translate to electrically excitable neuronal cells. To this end, we cultured primary hippocampal neurons on glass coverslips and, after maturation of the neural network, incubated these cells with CNPs and recorded them in whole cell configuration during 1min light stimulation. As with HEK293 cells, this illumination protocol had no effect on the membrane potential of neurons grown in culture media containing PEG-PLGA control CNPs, or group-2 CNPs (SY-PPV/PEG-PLGA or SO-PPV/PEG-PLGA). By contrast, upon illumination of neurons bearing F8BT/PEG-PLGA, CN-PPV/PEG-PLGA, or non-encapsulated CN-PPV nanoparticles, the neuronal membrane potentials rapidly depolarized, resulting in action potential discharges, a progressive decrease in action potential amplitude, and reductions in the membrane resistance ( Figure 6C). We, therefore, suggested that SY-PPV/PEG-PLGA and SO-PPV/ PEG-PLGA could be further investigated for imaging applications, including retrograde labeling of neural circuits. F8BT/PEG-PLGA and CN-PPV/PEG-PLGA could be explored in a similar context and be further exploited in vivo for their ability to lesion neural circuit elements through focused light illumination. CONCLUSION Using a multi-level comparison involving both thin films and nanoparticles, ranging from photophysical to membrane potential measurements, we have investigated the processes caused by the photostimulation of OSC materials in the vicinity of cellular plasma membranes. Two types of membrane responses were observed: (i) reversible multiphasic photothermal responses and (ii) irreversible depolarization driven by photosensitization of molecular oxygen. Using thin-film OSCs we further found that-under intense and/or prolonged illumination-rapid photothermal effects can give way to irreversible processes, which are likely driven by accumulation of photogenerated ROS that disrupt membrane integrity, yielding progressive, and irreversible depolarizations. In the case of F8BT and CN-PPV, these irreversible membrane modifying properties are also observed when they are formulated as PEG 5k -PLGA 55k encapsulated CNPs. In contrast, formulation of SO-PPV and SY-PPV as nanoparticles resulted in "safer" ROS levels under illumination that did not irreversibly change cellular polarization. In no case, the photosensitized stimulation was found to reverse in the recorded time window. From recordings on HEK293, we found that the depolarization amplitude is dependent on both light intensity and nanoparticle concentrations. In neurons, stimulation led to intense action potential firing. The amplitude of the action potentials decreased over the illumination period, before firing of the cells was silenced. The persistence of the electrophysiological responses in HEK293 and neurons (progressive depolarization), suggests that key mechanisms are generalizable across cell types. The observed phenomena are likely to be mediated by the modulation of transmembrane currents and the fundamental electrical properties of the cell membrane (e.g., lipid peroxidation, which is known to generate ion and water-conducting pores in the plasma membrane). 
Previous reports on photosensitizers including photofrin II, rose Bengal, and protoporpyrin IX have yielded similar results; illumination gave rise to depolarization, inactivation of calcium-dependent K + channels, and increased leak conductances (Killig et al., 2001). Action potential firing and progressive decrease in action potential amplitude due to the progressive membrane depolarization have been described in the case of PPa-sensitized production of singlet oxygen in neurons (Breitenbach et al., 2010). It is interesting to note that group-1 materials possess the lowest HOMO values: −5.8 eV for PFO, −5.9 eV for F8BT, and −5.9 eV for CN-PPV compared to −5.2 eV for SY-PPV, −5.1eV for SO-PPV, and −5.4 eV for MEH-PPV. The group-2 materials, SY-PPV, SO-PPV, and MEH-PPV, show higher stability and lower ROS levels. Hence, molecules of materials in group-1 (PFO, F8BT, and CN-PPV) may be easily ionized (oxidized) at their surface from bioenvironmental contacts compared to the other polymers studied. From an application perspective, we have generated a multicolor toolset of PEG-PLGA encapsulated nanoparticles presenting high processing yields, with diameters between 85 and 145 nm, and emissions spanning the visible spectrum. Their respective properties are ideally suited for a range of biomedical applications including, but not limited to, subcellular scale imaging applications, drug delivery, cellular ablation, and localized ROS production. The behavior of PFO, F8BT, and CN-PPV films as effective ROS generators, potentially opens up applications of these OSCs as sterilization agents, in biodegradation, treatment of large area infections, or flow-cell photocatalysis applications using low-dose irradiation. In particular, PFO stands out, having obtained an OED 50 value of 0.37 mJ/mm 2 and yielding depolarizations at irradiance values equivalent to common daytime and indoor ambient light Frontiers in Bioengineering and Biotechnology | www.frontiersin.org July 2022 | Volume 10 | Article 932877 conditions. In separate work, we have used the same light-emitting polymers in active devices for photo-stimulation of opsin-expressing neurons (Matarèse et al., 2019) and for optical brain-imaging (Matarèse et al., 2020) with evidence of their use in biocompatible organic light-emitting diodes (Matarese, 2021). This work provides evidence that group-2 light-emitting polymers, such as SY-PPV and SO-PPV, are more suitable for the next generation of passive photoemitters, due to their reduced ROS-mediated side effects. While for some applications, such as neural prosthetic applications or live-cell imaging, nanoparticles should remain intact over long periods of time, in other applications, such as photodynamic therapy, the general concept is that the nanoparticles should be biodegradable. Thus, an investigation of nanoparticle stability under light irradiation in aqueous systems needs to be investigated more closely for each individual polymer to identify the suitability toward clinical in vivo applications, and the extent to which the photo-mediated degradation can be exploited to beneficial ends. In sum, broad evaluation of photonics tools is critical to obtaining a stable and specific biological interface. The non-linear response properties of cellular membranes to varied illumination duration and power which we have observed in this study underscore the potential for "off-target" effects of photoexcitation, which should be accounted for in experimental design and medical applications. 
At the same time, the tested polymers and CNP counterparts showed reliable stimulation of cellular membrane potentials, the ability to generate ROS, and robust emissive properties over prolonged aqueous exposure, suggesting many avenues for tailored applications of the materials characterized herein. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors. ETHICS STATEMENT The animal study was reviewed and approved by the Ethics Committee of the Istituto Italiano di Tecnologia. AUTHOR CONTRIBUTIONS PF and BM conceptualized the project, collected, analyzed, interpreted data, and wrote the manuscript. PF designed patch-clamp experiments and performed data analysis, carried out requisite cultures, recordings, and contributed to material characterization. BM prepared thin films, synthesized nanoparticles, performed photophysical material characterization, and ROS assays. HR aided in cultures for ROS detection and for acquisition ROS assay data. FB and JdM contributed to experimental planning, data interpretation, and analysis. FB, JdM, LD, and MG contributed to data interpretation and discussion and provided funding, materials, and equipment. LU, TA, and BM synthesized and contributed to the characterization of CNPs. All authors contributed to manuscript revision and approved the final manuscript.
2022-07-08T13:06:04.504Z
2022-07-07T00:00:00.000
{ "year": 2022, "sha1": "0123017675e00bb13fe52d4108beede9fec02be3", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "0123017675e00bb13fe52d4108beede9fec02be3", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
248338508
pes2o/s2orc
v3-fos-license
Is There a Better Biomaterial for Dental Implants than Titanium?—A Review and Meta-Study Analysis This article focuses on preclinical studies and reviews the available evidence from the literature on dental implant and abutment materials in the last decade. Specifically, different peri-implantitis materials and how surface modifications may affect the peri-implant soft-tissue seal and subsequently delay or hinder peri-implantitis are examined. This review analyzed more than 30 studies that were Randomized Controlled Trials (RCTs), Controlled Clinical Trials (CCTs), or prospective case series (CS) with at least six months of follow-up. Meta-analyses were performed to make a comparison between different implant materials (titanium vs. zirconia), including impact on bone changes, probing depth, plaque levels, and peri-implant mucosal inflammation, as well as how the properties of the implant material and surface modifications would affect the peri-implant soft-tissue seal and peri-implant health conditions. However, there was no clear evidence regarding whether titanium is better than other implant materials. Clinical evidence suggests no difference between different implant materials in peri-implant bone stability. The metal analysis offered a statistically significant advantage of zirconia implants over titanium regarding developing a favorable response to the alveolar bone. Introduction Tooth loss usually occurs in patients who suffer from oral diseases and traumas. It impairs masticatory function and may also cause gradual resorption of the alveolar bone [1]. Removable prostheses were conventionally used to restore functional defects and improve esthetics without teeth. In comparison, dental implants possess various advantages over unfixed dentures, such as preservation of adjacent teeth and long-term success. With the continuous optimization of dental implants and improvement of clinical techniques, the survival rate of implants has been reported to be up to 90-95% over periods of 5-10 years [2]. Dental implants have become a popular alternative to conventional prostheses during the past decade. However, implant failures do occur due to different factors such as postoperative infections and occlusal overload. The inflammatory process at the peri-implant bone region, namely peri-implantitis, is one of the significant concerns jeopardizing the longterm efficacy of implants. Peri-implantitis is a destructive inflammatory process affecting the soft and hard tissues surrounding dental implants [3]. Peri-implantitis involves severe complications at the implant site featured by the presence of pus, bleeding upon probing, deep pocket, and bone resorption [4]. Without successful treatment, peri-implantitis could lead to severe destruction of the supporting bone and eventual implant loss and is becoming a serious challenge in implantology [3]. Peri-implant tissue has been well discussed in terms of healing and clinical outcomes, showing a clear benefit for the patients' masticatory efficiency and quality of life [5]. The were used depending on the study's nature to determine the studies to be included in the meta-analysis. The following inclusion criteria were adopted for the animal studies: (1) the number and type of tested animals need to be mentioned clearly; (2) the sample size of test animals needs to be less than 4 in each treatment category, and (3) the test and control groups need to be included. 
The clinical studies were included if they reported a pronounced effect on the experiment, the control group for titanium implant or a piece implant, and at least six months of follow-up analysis. The results were considered explicit if the study reported a soft-tissue seal. Dental Implants Dental implants have become widely accepted and implemented in the last decade to replace missing teeth and support fixed and partially removable prostheses. Despite overall high long-term survival rates (96.1% after ten years and 83.8% after 25 years) and intensive periodontal and prosthetic maintenance, implant failures can still occur. In the past decade, substantially increased evidence has been provided to indicate that bacterial biofilm-induced peri-implant inflammation could affect both soft and hard tissues, which have eventually brought about implant failure. This inflammatory condition is distinguished as peri-implant mucositis and peri-implantitis. Peri-implantitis was first described by Mombelli et al., as an infectious disease with several common characteristics with periodontitis [8,9]. From the clinical perspective, there was a lack of consensus on a clear definition for peri-implantitis due to complex etiological and clinical factors associated and was often case-by-case. Specifically, there was confusion between peri-implantitis and peri-implant mucositis: the former was mainly defined as an inflammatory response of the peri-implant mucosa with marginal bone loss, while the latter was focused on soft tissue inflammation. To begin with, Berglundh et al. [10] defined peri-implantitis as a plaque-related inflammatory condition that occurs around the dental implant(s). At the same time, peri-implant mucositis refers to inflammation in the adjacent gingival tissues and has no signs of loss of supportive bones after the first placement. The primary symptom of inflammation associated with peri-implantitis and peri-implant mucositis is bleeding on probing. On the other hand, inflammation of the mucosa that surrounds the dental implant(s) and progressive loss of bone tend to be the main clinical signs of both peri-implant mucositis and peri-implantitis [10]. In the recent past, peri-implant diseases (peri-implantitis and mucositis) have been extensively researched, and different treatments for these diseases were discussed. Specifically, the selection of implant materials and various material surface modifications has been studied to improve treatment outcomes and prevent inflammatory conditions that are usually associated. The material used to fabricate dental implants is critical in providing ideal mechanical properties, such as stiffness and tensile strength, and preventing the onset of inflammation surrounding the implant. Among various options, titanium (Ti), alloys, and ceramic-based zirconia (ZrO 2 ) have been examined with the highest efficacy to manage inflammation and prevent peri-implant diseases. Another critical aspect is material surface modifications, such as chemical and biological treatment of the material surface. Implants with surface modifications would prefer roughness and prevent inflammatory cell recruitment. Figuero et al. [11] examined the interventions used to manage peri-implantitis and peri-implant mucositis. 
Based on controlled clinical trials, it was found that the utilization of antiseptics, antibiotics, and mechanical debridement of the implant surfaces by curettes, air-abrasive devices, lasers, and ultrasonic devices could help to treat peri-implantitis and peri-implant mucositis. These interventions decreased the probing pocket depth and bleeding and inflammation of the lesions [11]. Similarly, Chala et al. [12] examined the efficacy of using lasers in treating periimplantitis and peri-implant mucositis. Based on the evaluation of findings from nine Randomized Controlled Trials, Chala et al. found that adjunct laser application, a nonsurgical therapy/treatment, had a significant impact on the treatment of peri-implantitis and peri-implant mucositis over a shorter-term period. The authors also recommended surgical intervention when a non-surgical approach is not clinically significant/practical [12]. In the following sections, implant materials and promising surface modification methods will be introduced and discussed. Titanium Titanium, a lustrous transition metal with atomic number 22, is widely used to manufacture dental implants [13]. Its biocompatibility due to inert behavior in the living tissue was already documented in 1951 by Gottlieb Leventhal [14]. Bengt Kasemo further elaborated on the biocompatibility of titanium and connected its superior properties as an implant material to the 2-10-nm-thick oxide layer, which instantly formed on titanium in the presence of oxygen [15]. Due to this oxide layer, titanium features high polarization resistance, protecting the metal against corrosion, hindering the release of metallic ions into the human body [16][17][18]. As a result of titanium oxide's high dielectric constant, the surface oxide film was an attractive site for establishing chemical bindings and the attachment of a large spectrum of biomolecules [19,20]. The bioactivity, osseointegration, and biocompatibility features of titanium play an essential role in promoting bone formation in direct contact with the metal surface after dental implant placement; therefore, titanium dental implants have shown an excellent survival rate and effectiveness [21,22]. In addition, titanium promotes osseointegration, which is crucial in the success of the dental implant material. During osseointegration, the interfacial zone between the living bone and the titanium/titanium alloy dental implant materials, between 21 and 50 nm, plays a vital role since bone cells release critical growth factors into this interfacial zone for bone formation around the titanium dental implants. Furthermore, blood plasma proteins are deposited onto the surface oxide layer found on the surfaces of titanium dental implants after implantation, leading to the development of fibrin matrices, which act as a scaffold for the bone-forming cells to reside and therefore promote bone formation to anchor the dental implants [23,24]. One example of a titanium dental implant is the OsseoSpeed implant (DENTSPLY Implants, Mannheim, Germany), which came into the market in 2004. This implant's surface texture results from two subtractive, sequential manufacturing steps: titanium oxide blasting and subsequent hydrofluoric etching. Titanium oxide blasting produces the microscale surface roughness, and subsequent etching with the hydrofluoric acid influences the nanostructure of the implant [25][26][27]. Ellingsen et al. 
have examined the biomechanical features and histomorphometric characteristics of osseointegration with OsseoSpeed implants using a rabbit model. For the treatment group that received Os-seoSpeed implants, significantly greater removal torque and shear strengths and higher levels of bone to implant contact were observed after 1 and 3 months of healing compared to the controls [28][29][30]. In a healing chamber model, the amount of bone formation around OsseoSpeed implants was superior to the bone quantity around the precursor implant. Moreover, Choi et al. compared OsseoSpeed implants with TiUnite implants in a rabbit model and noted similar findings of osseointegration [31]. In prospective clinical trials, Mertens and Sterling examined the long-term clinical outcome of OsseoSpeed implants by evaluating 42 implants in 15 patients over five years. The overall survival rate was 97%, and the mean marginal bone loss was 0.1 mm. Such results seemed independent of immediate or conventional loading. In addition, Raes et al. reported a one-year survival rate of 98% in a prospective clinical trial using immediately professionalized OsseoSpeed implants placed in the anterior maxilla of 48 patients [32]. A 2-year prospective clinical trial by Collaert et al. examined the clinical outcomes of 25 edentulous patients; each was treated with five OsseoSpeed mandibular implants professionalized with the loaded screwretained restoration. In this study, the two-year survival rate was 100%, and the mean crestal bone loss was measured as 0.11 mm [33]. Titanium Alloy Despite the successful application of titanium implants, research has constantly aimed to develop advanced titanium alloying techniques to optimize biocompatibility and mechanical properties. However, Ti implants usually cannot be placed in narrow bones such as the anterior alveolar ridge [34]. In addition, close proximity between the implant and neighboring teeth could cause bone loss [35]. Thus, different titanium alloys have been developed to improve the mechanical strength for applications requiring small-diameter implants (≤3.5 mm) [36]. Titanium-6aluminum-4vanadium is one of the most commonly used titanium alloys. Ti alloy's most commonly used product in dental implants is Ti-6Al-4V, known as Grade V titanium alloy, composed of 6 and 4% aluminum and vanadium with the addition of a maximum of 0.25% of iron and 0.2% of oxygen. Ti-6Al-4V yields better strength and fatigue features, excellent corrosion resistance, and an improved elastic modulus compared to cp-Ti. Specifically, vanadium has been demonstrated with high cytotoxicity, and aluminum might play a role in inducing senile dementia [37]. However, a safety risk is posed due to the release of toxic vanadium and aluminum ions [38,39]. Titanium-nickel is also limited due to nickel hypersensitivity [40]. In comparison, titanium alloys with other beta-phase stabilizers such as tantalum, molybdenum, niobium, and zirconium are non-toxic and non-allergenic and thus have received more interest as materials for medical applications [41,42]. Zirconium has the same crystal structure as Ti and exhibits unlimited mutual solubility in Ti [43]. Titaniumzirconium alloys (TiZr) have demonstrated increased corrosion resistance [44], improved tensile and fatigue strength [45,46], and similar biocompatibility as Ti [36,47,48]; as titanium and zirconium are the only metals that do not show osteoblast growth inhibition, a combination of both is well suited for implants [18]. 
TiZr alloy, known as Roxolild ® , Straumann AG (Basel, Switzerland), contains 13 to 17% zirconium. Its surfaces are pretreated with large-grit (0.25-0.5 mm) aluminum oxide sand blasting and acid etching in hydrochloric and sulfuric acid [49]. In this context, Gottlow et al. could show significantly higher removal torque and bone area in vivo for a titanium-zirconium alloy compared to commercial pure (cp) titanium [50]. Furthermore, it was observed that the oxides on titanium-zirconium alloy surfaces are more stable and have favorable corrosion resistance [51]. Moreover, the alloying of titanium with zirconium improves the mechanical strength, especially for applications in small-diameter implants [36]. While the mechanical strength is high for titaniumzirconium alloys, they are well suited for implantation in the cortical bone due to a low Young's modulus, which prevents stress shielding [52]. The effect of Zr on the increase in mechanical properties and its ability to influence the etching process were identified as causes for these differences [53]. Increased mechanical properties were responsible for fewer structural changes on TiZr during sand blasting [49]. TiZr increased integrin-beta3 mRNA and protein levels compared with Ti in an in vitro study by Gomez et al. Cells on TiZr surfaces showed higher MMP1 protein levels than Ti surfaces, although similar TIMP1 protein production was observed [54], suggesting that TiZr is a potential clinical candidate for soft tissue integration [55,56]. Furthermore, the alloying of zirconium was reported to influence the corrosion resistance of titanium alloys and acted as a catalytic agent for the formation of hydrogen during etching and hydridation [51,53]. In addition, the mechanical properties of titaniumzirconium alloys allow the placement of small-diameter implants in critical implantation sites, such as the front of the lower jaw, where bone is scarce and the crestal bone is thick [57]. An alternative alloy could consist of Ti, Ta, Nb, and Zr, which showed similar cytocompatibility to cpTi, but with a lower inflammatory response, and also osseointegrated [58], e.g., Ti-Ta-Nb-Zr(-Si)(-Fe) displayed improved cytotoxicity when compared to Ti-6Al-4V alloy [59]. Even though the side effects of these components have not been observed when they are used in the format of Ti alloy as dental implants, extra caution should be taken, and long-term evaluations should be conducted for safety concerns. Animal studies have shown the superior mechanical properties of titanium alloy compared with titanium alone when used as an implant material for a tooth implant. Biological responses to the alloys have been characterized in vitro [60][61][62]. It has been noted that the form of alloy has beneficial influences on its microstructure and, as a result, its mechanical properties. Randomized, controlled clinical trials on alloying with titanium are still scarce. A review of the available studies by Wennerberg et al. noted little clinical evidence so far to demonstrate a preference of alloying with titanium over zirconia or titanium alone. In a split-mouth study, alloying with titanium was compared with titanium alone, with early loading protocols in irradiated patients. One hundred and two implants were placed in 20 patients in both jaws. One-year follow-up showed excellent yield strength and fatigue properties for all implants, which translated to higher survival rates and low crystal bone loss <0.4 mm in all patients, with no significant difference. 
Accordingly, alloying with titanium was found to have low wear resistance, a high elastic modulus approximately 4-10 times that of human bone, and less shear strength, which could impair the usage as implants and in screw form. Zirconia Zirconia-based dental bioceramics are chemically inert materials with no adverse effects on oral tissues [63]. Zirconia can exist in several different crystal structures; however, the three molar percentage yttrium-stabilized tetragonal zirconia polycrystal (3Y-TZP) is the most commonly used for dental implants [64]. Zirconia has been increasingly used in dental implantology because of its ideal physical, aesthetic, and biological properties [64]. One of the selling points for manufacturers of zirconia implants is that its white color has advantages over metallic implants in narrow ridges. Zirconia, being white in color, avoids "black line" for Ti dental implants in patients with gingival and bone recession [65]. Unlike titanium, zirconia is bioceramic, which offers superior biological and anti-corrosive properties but also makes it more brittle. Zirconia implants are found to have higher survival and marginal bone loss than titanium dental implants after ten years or more from implantation [66]. Moreover, zirconia implant material has shown considerably higher cell spreading and cell viability and improved biocompatibility over titanium [64]. The other advantage of using zirconia is its high corrosion resistance, low infection rate, and plaque formation. Increasing success and survival rates and high biocompatibility make zirconia an ideal dental implant material candidate [67]. In a recent prospective cohort study, Balmer et al. evaluated a single zirconia implant's radiological and clinical results with fixed dental prostheses or restored with single crowns for 60 months. Seventy-one zirconia implants were placed on the 61 patients' posterior, anterior, and sites, and in a 60-month follow-up, the results indicated that the zirconia dental implants had a mean bone loss of 0.70 ± 0.60 mm after 60 months [68]. The authors also found that zirconia dental implants had a survival rate of approximately 98.40% (95.0% C.I. = 91.6, 99.90). Furthermore, the statistical analysis revealed no significant marginal bone level after the 60 months (p = 0.458), implying that zirconia dental implants had a lower/marginal bone level [68]. Therefore, it was concluded that zirconia implant material has mucosal margin levels, highly stable marginal bone, and higher survival rates. Moreover, it may serve as a reliable and safer implant material for dental implant applications. Improvement of Tissue Integration on Implant Various surface treatments have been commonly applied to improve the surface properties of dental implants. These physiochemical modifications can change the dental implants' surface topography, morphology, and chemistry. Furthermore, additive processes are known to improve the physiological reaction of implants [33] potentially. Morphology and topographical surface modifications can improve the interaction between implant and tissue [69,70]. In this context, rough implant surfaces promote osseointegration more than smooth surfaces [26,71]. Mechanical or chemical methods, or a combination of both, can be used to optimize the surface morphology and topography [72,73]. 
For example, one recognized method to modify titanium implant surface roughness is through blasting with titanium dioxide (TiO 2 ) particles, and the resulting roughness can be controlled via the mesh size of the blasting particles. It has been found previously that the optimal surface roughness for titanium dental implants lies in the range of 0.3-2.2 µm, which is the surface roughness range of commercial dental implants [74]. This would result in the highest improvement in bioactivity. Besides blasting, dental implant surfaces may be classified into different groups, according to surface roughness: (1) Moderately rough implants include SLA, TiUnite, OsseoSpeed, TiOblast, and the Southern Implants, whereas IMZ, TPS, Ankylos, Friadent, and Xive represent rough surfaces [74]. Etching and multistep etching are frequently used to roughen the surface of titanium dental implants [76][77][78][79]. Implant surface roughness alteration involves mechanical modification of the surfaces to improve the integration of abutment into the soft tissues. Various mechanical processes and treatments have been tested to alter the surface roughness of implant materials. These mechanical processes/treatments, such as machining, grinding, polishing, and blasting, improve the adhesion and clean the surfaces of dental implant materials simultaneously [80,81]. Implant Surface Chemistry Alteration Nevertheless, hardly any modification of the surface morphology can be done without inducing changes in the chemical surface composition and vice versa. The etching processes used on titanium for surface modifications increase the amount of hydrogen on the Ti surfaces, as liberated hydrogen ions are attached to titanium's outer surface layer in the form of titanium hydride [69,82,83]. This process can be influenced by the molar strength of the acid and the etching time. Several studies suggest that the hydrogen content induces faster healing and better osseointegration. Therefore, cathodic polarization (hydridation) was applied to increase titanium hydride's layer thickness and concentration [84]. Videm et al. showed that hydride surfaces with increased hydrogen content had 60% higher retention in an in vivo model [69]. In addition, the hydridation process can be used to improve the attachment of ampholytic biological molecules, which bind to the surface during hydridation together with hydrogen [84]. As the oxide layer on titanium is the most prominent material feature, modifications of this layer were tested. Nevertheless, the approaches to increase titanium's biocompatibility by the sheer increase in oxide layer thickness by anodic oxidation (hydroxylation) in acidic solutions did not show increased biocompatibility [85][86][87]. Nevertheless, if hydroxylation is used with alkaline solutions, an increase in hydroxide (OH) groups on the surface can be achieved [88,89]. Alteration of the implant surface chemistry involves various chemical processes to achieve higher physical and mechanical properties. As a result, chemistry alteration of the implant surface will lead to improved performance of the dental implant material, and improved survival and success rates of the dental implants for several years [90]. Chemical treatments used for the surface chemistry alteration of dental implant materials' surfaces can be categorized into acid treatment, alkali treatment, and the use of hydrogen peroxide and anodic oxidation. 
For example, anodic oxidation aims to increase the thickness of titanium (IV) oxide on the surfaces of dental implant materials. Similarly, hydrogen peroxide adds a porous outer layer and dense inner oxide layer on the surfaces of dental implant materials to improve the corrosion resistance features of the surfaces of the dental implant materials. On the other hand, alkali and acid treatments used in implant surface chemistry alteration focus on improving the biocompatibility features of the dental implant materials (Nicholson, 2020). In the following section, the surface chemistry alteration of titanium and titanium alloy dental implant materials will be discussed as titanium is most widely used in dental implants. Surface modification of titanium and titanium alloy dental implant materials, such as Ti-6Al-4V and cpTi (commercially pure titanium), is performed through the oxidization of titanium (IV) [91] (Figure 1). Changes in the dental material surface significantly promote the adhesion of osteoblasts and the oxide layer, which further enhances their biological properties, making them suitable for dental implantology applications [22]. Nevertheless, this implant surface chemistry alteration could potentially induce an immune response and develop fibrosis in the region surrounding the dental implants because the body more easily recognizes the chemically modified surface as an invader, and the release of various fibrotic factors will occur [92].
Biomolecules Besides physical and chemical modifications to the implant surface, various bioactive molecules are also developed to treat the implant surface to increase biocompatibility [94,95]. By conjugating bioactive molecules, such as proteins, enzymes, or peptides, to the implant surface [95], the goal is to mitigate the host response that the implant will otherwise elicit after surgery and improve the interaction between the implant and cells at the implant site. Cell attachment factors such as fibronectin could improve the attachment and spread of cells on the implant surfaces, while the application of growth factors such as bone morphogenetic proteins could directly influence the development of osteoblasts on the surfaces [96]. Theoretically, the possibilities of suitable molecules are extensive, yet the process is limited by the conditions that molecules are exposed to during the coating and solubility and degradation problems or simply destruction due to the attachment process. Furthermore, polarization of the molecule (ampholyte) can be used for attachment to titanium in an electrochemical process. For example, two biomolecules, doxycycline and simvastatin, as coating candidates with such a technique have been reported [97][98][99][100][101]. Doxycycline is a semi-synthetic broad-spectrum antibiotic from the group of tetracycline antibiotics, which is used to treat various infections and works by inhibiting bacterial protein biosynthesis [102,103]. Tetracycline enhances bone formation, mostly based on general knowledge about the interaction with collagen formation and calcium incorporation [104]. It has been proven to promote bone growth and treat periodontal disease and peri-implantitis in vitro [105][106][107]. Other studies further verified doxycycline's applications in controlling osteogenic differentiation in genetically engineered mesenchymal stem cells [108]. Currently, doxycycline is mainly applied alone through drug delivery systems for periodontal disease and peri-implantitis treatment [107,[109][110][111]. Combining doxycycline with dental implants and its direct incorporation in the implant system could be favorable as local administration would reduce interference with the patient's body and optimize the area of drug administration to the bone directly surrounding the implant (Figure 2). However, successful binding of doxycycline directly to an implant has not been reported in the literature yet, but this could be achieved with the process of cathodic reduction in acidic electrolytes. However, one must be aware of the delicate dose relation between enhancing and inhibitory effects on the osteogenic differentiation of this biomolecule [112].
Statins benefit various medical conditions and are commonly used as cholesterol-lowering drugs. Statins have also been researched for their tumor inhibition potential and are used as anti-inflammatory drugs [113-115]. Moreover, the applications of statins could be further broadened to bone growth promotion [113,116-118]. Many studies have pointed out the capability of statins to reduce bone resorption by inhibiting osteoclast activity, which is essential for osteoporosis treatment [115,117,119-121]. Mechanistically, statins contribute to bone formation and stimulate bone growth by regulating bone morphogenetic protein-2 (BMP-2); the upregulation of BMP-2 increases osteoblast differentiation and bone formation, as documented in various works in the literature [113,117,118,122,123]. Another feature of statins is their ability to enter the cell membrane through passive diffusion and active uptake by osteoblasts [114,124,125].

It is worth noting that the surface modification of zirconia, including both physical and chemical modifications, is largely similar to that of titanium and titanium alloys (including the TiZr alloy). Specifically, sand blasting and acid etching are applied to increase roughness, while coating strategies such as hydroxyapatite and calcium phosphate coatings (applied, for example, by electrophoretic deposition) and biofunctionalization with the arginine-glycine-aspartate (RGD) peptide are used to improve biocompatibility and reduce inflammation around zirconia dental implants [126-128].

Meta-Study Analysis

The initial article pool comprised 40 articles. Standard reviews were excluded due to the possibility of study selection bias, and in vitro studies were excluded due to their limited clinical relevance.
Subsequently, 30 publications were subjected to additional evaluation, including six animal studies with dog and monkey models, four human studies, and 20 clinical studies. The clinical studies can be further categorized as follows: seven RCTs (level 1b), one prospective controlled study (level 2a), seven prospective uncontrolled studies (level 2b), one case series, and four case reports (level 4). This detailed examination yielded a final sample of nine articles, including three animal studies, two human studies, and four RCTs.

The meta-analysis of marginal bone loss (MBL) across all implant materials showed that the 30 included studies reported interproximal marginal bone loss. The mean bone loss ranged from 0.2-0.4 mm to 1.05-1.48 mm for the zirconia implants and from 0.3-0.5 mm to 0.67-1.43 mm for the titanium implants. Distal and mesial marginal bone loss was reported in some of the articles. The meta-analysis was conducted to examine the same intervention and outcomes across the 30 included studies. The mean difference for the continuous outcome (MBL) was computed using a random-effects model in RevMan 5.3 (2014).

Evaluation of Heterogeneity

The Cochran test examined discrepancies in the estimated treatment effects across the different RCTs, and heterogeneity was considered significant if p < 0.1. The statistic describing the proportion of total variation across the 30 trials (I²) was used to quantify heterogeneity, and values above 50% were viewed as moderate to high heterogeneity. All findings from the included studies were pooled using the random-effects model, as the statistical heterogeneity among the studies was significant (93%, p < 0.00001). The mean difference for marginal bone loss between zirconia and titanium implants across all pooled findings was −0.20 (95% CI: −0.32 to −0.08). The overall estimate was statistically significant, with p < 0.0009. The meta-analysis was conducted with the continuous outcome using the random-effects model (Table 1).

On the Effect of Implant Materials on Probing Depth

Overall, 22 of the 30 studies recorded pocket probing depth (PPD). De Albornoz et al. measured the PPD at six sites, and the other eight papers measured it at four areas. After a one-year follow-up (112,15), the mean pocket depth for the titanium abutment was 3.3 mm, while the mean pocket depth for the all-ceramic zirconia implants ranged from 2.9 to 3.5 mm. De Albornoz et al. noted that after a year of follow-up, an increase of 0.2 mm from baseline was recorded around the zirconia implant, while the pocket probing depth around the titanium implant was not affected. In recent years, the mean pocket depth around zirconia abutments was found to be 3.38 mm, and the mean pocket depth around the titanium alloy was 3.33 mm. After six months of follow-up, the zirconia abutments showed a pocket probing depth of 3.2 mm versus 3.4 mm at the sites of the titanium abutments. Two studies reported outcomes after two years of follow-up. Zembic et al. indicated that the mean pocket probing depth around the zirconia implant was 3.3 mm, an increase of 0.4 mm from baseline, while the titanium abutment showed 3.6 mm, an increase of 0.5 mm from baseline. Lops et al. reported 2.6 mm for the zirconia abutment and 2.7 mm for titanium. All the included studies showed no significant differences between zirconia and titanium implants.
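Both outcomes in this meta-analysis (marginal bone loss above and pocket probing depth below) were pooled with a random-effects model, with heterogeneity assessed via Cochran's test and the >50% threshold described in the previous subsection. The following is a minimal sketch of that kind of pooling, assuming each study contributes a mean difference (zirconia minus titanium, in mm) and a standard error; the per-study values are hypothetical placeholders rather than data from the included articles, and the DerSimonian-Laird estimator is used as one common random-effects implementation rather than an exact reproduction of RevMan's output.

```python
# Minimal random-effects pooling sketch (DerSimonian-Laird) with Cochran's Q and I^2.
# The study inputs below are hypothetical placeholders, not data from this review.
import numpy as np
from scipy import stats

def random_effects_pool(md, se):
    """Pool per-study mean differences (md) with standard errors (se)."""
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2                                   # inverse-variance (fixed-effect) weights
    md_fixed = np.sum(w * md) / np.sum(w)
    q = np.sum(w * (md - md_fixed)**2)                # Cochran's Q
    df = len(md) - 1
    p_het = 1.0 - stats.chi2.cdf(q, df)               # heterogeneity significant if p < 0.1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0   # % variation due to heterogeneity
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
    pooled = np.sum(w_re * md) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)   # 95% confidence interval
    return pooled, ci, q, p_het, i2

md = [-0.25, -0.10, -0.30, -0.15]    # hypothetical mean differences (mm)
se = [0.06, 0.08, 0.05, 0.07]        # hypothetical standard errors (mm)
print(random_effects_pool(md, se))
```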
The pocket probing depth mean difference used in this meta-analysis was −0.10 (95% CI: −0.25 to 0.05). The overall estimate was not statistically significant (p = 0.18). The meta-analysis was again performed for a continuous outcome with the random-effects model [130] (Table 2).

Abrahamsson et al. compared the peri-implant tissues on titanium and gold alloys. In total, 32 titanium implants were placed in five dogs, and the distance from the abutment-implant junction to the first bone-implant contact was taken to indicate the actual bone loss. Histometric observations indicated that bone loss was 0.78 mm around the titanium (control) implant, 0.80 mm around the alloy, 1.80 mm around the zirconium, and 1.26 mm around the dental porcelain implant. The clinical evaluation indicated marked soft tissue recession around the alloy implant [137]. According to Piattelli et al., there was a difference in peri-implant tissue stability between the titanium abutment and the gold alloy, zirconia, and aluminum oxide implants [138]. The research was conducted through various methods, including searches of databases and of dental implant, prosthetic, and periodontal journals. The research showed that the measurement of soft tissue suffered from accuracy problems, and peri-implant tissues around zirconia and titanium were characterized in histologic and animal studies only. As a result of the differences in research type, follow-up time, and outcome variables, it was not easy to perform the meta-analysis. For example, the titanium abutment did not show higher bone level maintenance than the gold alloy, aluminum oxide, or zirconia abutments [139], and there was no information on the clinical performance of zirconia and the alloys compared with titanium.

Implant-supported restorations require crestal bone stability and healthy soft tissues, and both factors should be considered to determine a practical treatment approach for patients. The peri-implant tissues have long been challenged by several problems; studies show that bone loss is observed during the first year of treatment. Implant materials are often regarded as factors that affect the stability of the mucosa and crestal bone. In addition, several papers report similar peri-implant tissue reactions to titanium and aluminum oxide implants [140]. A comparison of the peri-implant tissue reaction to titanium and alloy implants was studied in dogs. Bone loss was measured as the distance from the abutment-implant junction to the first bone-implant contact; it was 0.78 mm around the titanium implant and 1.80 mm around the alloy implant [48]. Zirconia and titanium implants were compared by placing 12 implants in six monkeys, and no difference was observed between the treatment groups receiving either implant material. The ability to form stable peri-implant tissues was tested using one-piece alloy and titanium implants. The report showed a vertical extension of the soft peri-implant tissues around the implants from the mucosal margin to the first bone-implant contact [141].

Clinical Studies

A histological study of the soft tissue response to titanium and zirconium healing caps/abutments was carried out in five patients. After six months of healing, gingival biopsy specimens were taken from the test and control implant sites. It revealed that inflammation prevailed in the titanium specimens compared with the zirconium specimens.
In addition, the soft tissue around one-piece aluminum oxide and titanium implants was compared in twenty patients. The biopsies revealed the exact composition of the peri-implant tissue around the tested abutments [48]. A randomized trial with a split-mouth design was conducted over four years, comparing titanium and gold alloy implants restored with metal-ceramic crowns in 20 patients. Each patient received two implants, one gold alloy and one titanium. Four years later, the peri-implant tissues showed no difference in response to the gold alloy or titanium implants. In a randomized controlled multicenter clinical study, aluminum oxide and titanium implants were compared. Patients received 34 test sintered aluminum oxide abutments and 35 control abutments and were observed for one year; a further 15 patients received ten test and ten control abutments and were followed up for three years. In the first group, bone loss was absent around the ceramic implant, while the second group showed a loss of 0.3 mm after one year and a gain of 0.1 mm after three years [142]. A 5-year study was performed to observe the difference between ceramic and titanium. Thirty-two patients received 103 implants, and fifty-three aluminum oxide ceramic abutments were connected. The soft tissue around the implants and the teeth was healthy. Regarding bleeding of the peri-implant mucosa, there was no difference between ceramic and titanium implants. Less bone loss was observed with titanium abutment implants than with ceramic implants [142].

Discussion

The clinical trials were designed correctly and employed randomization, but proper control groups were often omitted. Randomized controlled clinical studies provide reliable evidence but have inherent drawbacks compared to other study types. There is a tendency to favor randomized trials and avoid lower-rank evidence. Therefore, it is essential to compare the results with those of studies that failed to meet the inclusion criteria. This should not be considered a means to increase the review's integrity but rather a way to determine whether the results of included and excluded research differ. The clinical format was regarded as the most reliable, while the animal studies were the least reliable. Lops et al. reviewed the effects of implant materials on peri-implant tissues, but no inclusion or exclusion criteria were used, so readers cannot rely on the authors' subjective selection of the studies [132]. Moreover, no clinical trials were accounted for, and the recommendations were based on in vitro and animal studies. The formation of a stable peri-implant seal by a prosthetic implant material is assessed using two parameters: the presence or absence of bone loss and of gingival recession. An animal study showed that titanium and oxide ceramic implants can develop stable soft-tissue seals, whereas the soft tissues adjacent to gold and porcelain-fused-to-metal implants showed recession and crestal bone loss [134]. Another study showed no difference in soft and hard tissue integration around gold alloy and titanium one-piece implants. These two studies differ methodologically: the first study used a two-piece implant, and the other used a custom-made one-piece implant. There was evidence of implant disconnection, second-stage surgery with flap elevation, and soft tissue recession. Soft tissue extension and bone apposition did not differ among the compared specimens. The study showed similar biocompatibility between zirconia and titanium.
The physiology of animals and humans is similar and forms the basis for animal studies; the outcomes are relevant to humans but cannot be generalized directly to the clinical environment. Clinicians are therefore often left to rely on data collected from animals. However, simple case reports can be more clinically valid than randomized, well-controlled animal experiments. Animal data should be interpreted carefully when applied in the clinical environment, particularly when the clinical evidence being relied on is unavailable. For example, the evidence that gold alloy implants do not maintain stable peri-implant tissue relied on animal studies even though there were contradictory data from clinical reports. Therefore, this concept should be reassessed using clinical and histologic evidence. Three published prospective randomized controlled clinical trials show stable soft and hard tissue around aluminum oxide implants. Bone loss occurred, but there was no difference from the control titanium implants, whose biocompatibility was established long ago. All studies showed bone loss, but pooling the data was not possible since the follow-up periods ranged from one to five years. Oxide ceramic implants can achieve stable marginal bone in a clinical situation. We can conclude that titanium and alloy show no difference in crestal bone stability. Unfortunately, there is no clinical trial comparing zirconia and titanium implants; therefore, a conclusion regarding the superiority or inferiority of zirconia over titanium is challenging. Some data can be drawn from tooth-controlled investigations. A four-year study provided clear information showing that zirconia implants elicit a favorable reaction in the peri-implant tissues. Therefore, a clinical trial comparing zirconia and titanium implants is needed. All studies failed to include an exact measurement of gingival recession; the clinical studies only reported observations of the status of the peri-implant mucosa. Animal and histologic experiments may give insights into the structure and dimensions of the soft tissues in contact with the different implants. Therefore, analyzing other studies is critical to understanding whether the implant material is essential for soft tissue behavior.

One of this investigation's goals was to assess the impact of various implant materials on bone changes, probing depth, plaque levels, and peri-implant mucosal inflammation. The authors focused on the biological outcomes (pocket depth and plaque levels). The authors also aimed to exclude studies in which the implant was compared with tooth-supported restorations rather than with other implants. Hence, some studies with a follow-up of less than six months were omitted. This can be considered appropriate, as it helps avoid the patient bias introduced by uncontrolled prospective clinical trials. The longest follow-up was three years. Generally, the results for both implant materials showed only minor statistically significant differences. The evidence-based review examined the outcome of implant materials on bone loss. Based on the animal, human, and various clinical studies, the implant materials (zirconia and titanium) indicated no differences in bone stability. However, the current review does not show significant differences in pocket probing depths between the various implant materials. Further, it is essential to note that Agustín-Panadero et al. showed a significantly lower pocket probing depth around the zirconium implant than around the titanium implant [143].
This study provided a complete picture of zirconia and titanium implants [135]. New in vitro studies indicate that the surface roughness of the various implant materials plays a crucial role in cell behavior on those materials. Zirconia surfaces offer a better adhesion medium for epithelial cells than titanium surfaces, and a reduced pocket probing depth around the implant is strongly related to the adherence of the gingival cells. It is rather challenging to examine the impact of the implant material on plaque accumulation because the implant material is largely not exposed to the oral cavity. The included articles largely overlooked biological or mechanical complications; the most notable report was in one study [136]. It is important to note that a fistula triggered by excess cement has been documented as a cause of biological complications. The findings were explained by the implant design: the superstructure margin is located subgingivally, approximately 1-1.5 mm below the gingival crest, and the implant-supported fixed partial dentures were cemented with dual-cured resin cement. Removal of excess adhesive was therefore challenging, contributing to the biological complications. Accordingly, the complete removal of excess resin cement is essential, even with a customized implant.

Conclusions

There is no clear evidence indicating that titanium is better than other implant materials. Clinical evidence suggests little difference between the different implant materials in peri-implant bone stability, and the studies show no statistically significant difference in crestal bone loss. Animal histologic studies show the same peri-implant soft and hard tissue reaction to titanium and zirconium. There is an indication of a better response of the human mucosa to zirconia implants than to titanium. While evidence-based research does not offer a definitive decision on using ceramic or metallic implants with respect to the alveolar bone response, some studies do not show better mechanical or biological performance for zirconia implants over titanium implants. The meta-analysis, however, showed a statistically significant advantage of zirconia implants over titanium in terms of developing a favorable response in the alveolar bone.
Validation of Reef-Scale Thermal Stress Satellite Products for Coral Bleaching Monitoring

Satellite monitoring of thermal stress on coral reefs has become an essential component of reef management practice around the world. A recent development by the U.S. National Oceanic and Atmospheric Administration's Coral Reef Watch (NOAA CRW) program provides daily global monitoring at 5 km resolution, at or near the scale of most coral reefs. In this paper, we introduce two new monitoring products in the CRW Decision Support System for coral reef management: Regional Virtual Stations, a regional synthesis of thermal stress conditions, and Seven-day Sea Surface Temperature (SST) Trend, describing recent changes in temperature at each location. We describe how these products provided information in support of management activities prior to, during and after the 2014 thermal stress event in the Commonwealth of the Northern Mariana Islands (CNMI). Using in situ survey data from this event, we undertake the first quantitative comparison between 5 km satellite monitoring products and coral bleaching observations. Analysis of coral community characteristics, historical temperature conditions and thermal stress revealed a strong influence of coral biodiversity in the patterns of observed bleaching. This resulted in a model based on thermal stress and generic richness that explained 97% of the variance in observed bleaching. These findings illustrate the importance of using local benthic characteristics to interpret the level of impact from thermal stress exposure. In an era of continuing climate change, accurate monitoring of thermal stress and prediction of coral bleaching are essential for stakeholders to direct resources to the most effective management actions to conserve coral reefs.

Introduction

Global, near real-time satellite monitoring of environmental conditions linked to coral bleaching has supported coral reef management efforts for nearly 20 years. Throughout this period, the U.S. National Oceanic and Atmospheric Administration's Coral Reef Watch (NOAA CRW) program developed and released coral-specific satellite-based tools and successfully monitored thermal stress causing mass bleaching events around the world [1-8]. These products have been instrumental in aiding reef managers and other stakeholders to prepare for and respond to coral bleaching events. Bleaching is a stress response of corals whereby the symbiotic zooxanthellae, which under usual conditions provide up to 90% of the energy requirements of corals, are expelled from the coral host [9]. Zooxanthellae contain colorful pigments; their departure leaves the white calcium carbonate skeleton of the coral visible through the translucent tissue, and the coral appears "bleached". Environmental stressors including low salinity (fresh water), unusually cold temperature and increased exposure to light can result in localized coral bleaching. However, mass coral bleaching events have been linked to warm oceanic temperature anomalies, which occur on the scale of hundreds to thousands of kilometers [10].
Initially developed in the mid-1990s, CRW's heritage coral bleaching Decision Support System (DSS) consists of a suite of operationally supported (i.e., 24/7 production/delivery/maintenance) products in near real-time twice each week [11,12]. The basis of the satellite product suite is a global Sea Surface Temperature (SST) field at 0.5° (~50 km) resolution. Comparison of this SST field with a long-term monthly climatology provides the SST Anomaly product, identifying conditions that differ from the expected temperatures for each location at that time of year. The first coral-specific product released by CRW was the Coral Bleaching HotSpot, reporting positive temperature anomalies above the warmest monthly climatology value and therein indicating the current magnitude of thermal stress. The Degree Heating Week (DHW) is the accumulation of HotSpots of 1 °C or greater through a rolling 12-week period and has been the strongest predictor of mass coral bleaching (e.g., [2]). Summarizing the information of the HotSpot and DHW products into a single management-oriented product, the Bleaching Alert Area provides reef stakeholders with a categorized stress level on a reef, indicating presence or absence of bleaching thermal stress and predicted coral bleaching severity. Virtual Stations at over 200 reef locations provide managers with summarized information on current thermal conditions accompanied by historical time series of CRW SST and DHW products. These underpin the free, automated email system providing Satellite Bleaching Alerts to subscribers whenever thermal conditions traverse established thermal stress thresholds. These 50 km products have served the global coral reef community for well over a decade.

A recent major development has been the release of next-generation global thermal stress products at 5 km (0.05°) spatial and daily temporal resolution, which resulted in a dramatic improvement in near-shore reef coverage and responded to the most-frequent request from reef managers and other stakeholders for higher resolution [13]. Underpinning the suite is a 5 km SST field that blends data from instruments on multiple geostationary and polar-orbiting satellites, with current input streams resulting in as much as a 50-fold increase in the amount of data for most of the global ocean each day, as compared with the heritage SST. Derived products, matching those described for the 50 km resolution, are calculated by comparing the 5 km SST field with a customized, long-term climatology [14] derived using the current NOAA Climate Data Record for SST, the Pathfinder dataset [15]. The one distinction is that the 5 km Bleaching Alert Area product reports the maximum alert from the prior seven-day period, updated on a daily basis. This was necessitated by rapid daily fluctuations that result from the increased spatial and temporal resolution. This next-generation 5 km DSS was initially released in July 2012, with updated versions culminating in the official release in February 2015.
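The HotSpot and DHW definitions above can be expressed as a short routine for a single pixel. This is a minimal sketch assuming a daily SST series and a precomputed maximum monthly mean (MMM) climatology; for daily data the accumulation is expressed here by summing qualifying HotSpots over a trailing 84-day (12-week) window and dividing by 7 to give °C-weeks, a common convention that does not reproduce CRW's operational processing (e.g., climatology construction, quality masking) exactly.

```python
# Minimal HotSpot / Degree Heating Week sketch for one pixel of daily SST.
# The synthetic SST series and MMM value below are illustrative only.
import numpy as np

def hotspot(sst_daily, mmm):
    """Coral Bleaching HotSpot: positive anomaly above the warmest monthly climatology (MMM)."""
    return np.maximum(np.asarray(sst_daily, float) - mmm, 0.0)

def degree_heating_weeks(sst_daily, mmm, window_days=84):
    """Accumulate HotSpots of at least 1 C over the trailing 12 weeks, in degree C-weeks."""
    hs = hotspot(sst_daily, mmm)
    hs = np.where(hs >= 1.0, hs, 0.0)          # only HotSpots of 1 C or greater accumulate
    dhw = np.full(hs.shape, np.nan)
    for i in range(window_days - 1, len(hs)):
        dhw[i] = hs[i - window_days + 1:i + 1].sum() / 7.0   # daily values -> C-weeks
    return dhw

sst = 28.5 + 1.5 * np.sin(np.linspace(0.0, np.pi, 120))      # synthetic daily SST (C)
print(round(np.nanmax(degree_heating_weeks(sst, mmm=29.0)), 2))
```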
Since its inception, the CRW 5 km DSS has monitored thermal stress globally and identified reef locations at risk of bleaching. CRW has received anecdotal and, in some cases, quantitative reports of coral bleaching from partner observers at several of these locations. The 5 km DSS showed that the Commonwealth of the Northern Mariana Islands (CNMI) was exposed to high levels of thermal stress in both 2013 and 2014 [13]. Other reef locations where the 5 km DSS reported thermal stress and where mass bleaching was observed in the past 2-3 years include the Hawaiian archipelago, the central and South Pacific, the Coral Triangle, the Florida Keys and Bermuda.

In this paper we introduce two new monitoring products developed within the 5 km DSS that support management efforts prior to, during, and after a bleaching event: Regional Virtual Stations and Seven-day SST Trend. Using these and other CRW products, we describe the development of thermal stress in CNMI (Figure 1a) during 2014; discuss how the CRW 5 km DSS was used to inform local managers and other stakeholders as stress developed; present data from in situ observations of coral bleaching during this event; and undertake the first quantitative comparison of the 5 km DSS products with observations of coral bleaching.

Regional Virtual Stations

Reef managers and other stakeholders have benefited from the capacity to track localized conditions through time afforded by the heritage Virtual Stations. CRW has developed a set of 5 km Regional Virtual Stations (211 at the time of writing) to replace the heritage Virtual Stations and take advantage of higher resolution data. These provide comprehensive information for reefs in a jurisdiction or predetermined sub-region. The Regional Virtual Stations represent a change in methodology from the heritage Virtual Stations. Rather than constructing each Virtual Station using a single pixel, as in the heritage 50 km Virtual Stations [12], Regional Virtual Stations were based on data from all of the 5 km pixels within each regional jurisdiction (e.g., CNMI, Guam; polygons in Figure 1b). While data from a single 5 km pixel provide much higher spatial detail, they may not be generally representative of thermal conditions for reefs across each jurisdiction. The new Regional Virtual Stations are more representative of regions at the expense of any localized spatial variability.
In a further enhancement from the heritage Virtual Stations and because of the regional nature, the new Regional Virtual Stations include all coral reef locations around the world. Global coral reef locations were compiled from several data sources. The multi-source compilation by the United Nations Environment Programme-World Conservation Monitoring Centre (UNEP-WCMC) and the WorldFish Centre, in collaboration with the World Resources Institute (WRI) and The Nature Conservancy (TNC) [17], includes the Millennium Coral Reef Mapping Project and the World Atlas of Coral Reefs. This was augmented using other local marine atlases (e.g., refs [18,19]) and several in-house reef location sources (i.e., where reef observation surveys had been reported). Reef-containing 5 km pixels were identified and augmented with a 20 km buffer around each 5 km reef pixel to define the extent for each Regional Virtual Station (black polygons in Figure 1b). The product provides regionally representative statistics based on all pixels contained within the Regional Virtual Station. The number of water pixels contained within each Regional Virtual Station varies due to the geo-political definition of the jurisdictions, ranging from 39 (Easter Island) to 12,014 (Papua New Guinea), with an average of 1156. The examples in Figure 1b contain 813 (CNMI) and 275 water pixels (Guam).

The Regional Virtual Stations are used in a series of products including new Regional Bleaching Thermal Stress Gauges; a Satellite Bleaching Alert email system; time series graphs; interactive Google Maps and Google Earth interfaces showing locations of Regional Virtual Stations; and associated data. Bleaching Thermal Stress Gauges use the 90th percentile value among pixels in the designated region to report the regional thermal stress alert level (No Stress, Bleaching Watch, Bleaching Warning, Alert Level 1, and Alert Level 2). For example, if 5% of the pixels for CNMI were at Alert Level 2 and a further 8% at Alert Level 1, the status for CNMI would be Alert Level 1. This methodology alerts users to regional thermal stress exposure while preventing exaggeration of bleaching risk. Satellite Bleaching Alerts for a region are emailed when the alert level changes, prompting users to look at CRW's map products for details on which specific locations within the region are affected by thermal stress. The "current" Bleaching Thermal Stress Gauge is augmented with three further gauges showing the predicted stress level for the coming one-to-three months (Figure 1c), based on CRW's Four-Month Coral Bleaching Thermal Stress Outlook product [20].
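The 90th percentile rule used by the Regional Bleaching Thermal Stress Gauges can be written compactly. The sketch below assumes pixel-level Bleaching Alert Area values coded 0-4 (No Stress through Alert Level 2); the pixel distribution is a hypothetical 100-pixel example constructed to match the CNMI illustration in the text, in which 5% of pixels at Alert Level 2 and a further 8% at Alert Level 1 yield a regional status of Alert Level 1.

```python
# Minimal sketch of the Regional Virtual Station alert: the 90th percentile alert
# level among pixels in the region. Alert coding and pixel counts are illustrative.
import numpy as np

ALERT_NAMES = ["No Stress", "Bleaching Watch", "Bleaching Warning",
               "Alert Level 1", "Alert Level 2"]

def regional_alert(pixel_alert_levels):
    """Return the 90th percentile Bleaching Alert Area level for a region."""
    levels = np.sort(np.asarray(pixel_alert_levels, dtype=int))
    idx = int(np.ceil(0.9 * len(levels))) - 1      # rank of the 90th percentile pixel
    return ALERT_NAMES[levels[idx]]

pixels = [4] * 5 + [3] * 8 + [0] * 87              # 5% Alert Level 2, 8% Alert Level 1
print(regional_alert(pixels))                      # -> "Alert Level 1"
```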
Time series reveal the temporal evolution of SST and thermal stress metrics for each Regional Virtual Station. Time series data of the 90th percentile value of SST, SST Anomaly, HotSpot, DHW and Bleaching Alert Area level within each region are published online. In addition, the temperature at the location of the 90th percentile HotSpot value from among 5 km pixels of each Regional Virtual Station is provided each day, corresponding to the thermal stress indicated by the HotSpot value for that day. It is this SST value that is shown on the time series figure for each Regional Virtual Station (Figure 2), along with representative monthly climatological SST values for each region (the average of climatology values across the pixels within each region) and the 90th percentile DHW within the region. Regional Virtual Station time series summary information and graphs are accessible directly from the interactive Google Maps interface on the CRW website by clicking on the Google Maps pins (Figure 1b), the color of which reflects the current bleaching alert level for each jurisdiction. The new Regional Virtual Stations provide an indication of regional conditions pertaining to entire reef jurisdictions; however, the method lessens the geographic specificity of the data for monitoring individual islands and reefs. Information from the Regional Virtual Stations is intended to lead users to the CRW product maps, where spatial patterns of thermal stress are found.

Seven-Day SST Trend

Dramatic and rapid changes in SST, particularly during summer months, can alert reef managers and other stakeholders to increased likelihood of ecosystem impacts. The Seven-day SST Trend product at 5 km resolution was recently developed, providing reef managers with the capacity to track the rate of SST change (slope of the linear regression) during the prior seven days. The seven-day period was chosen as it is within the spring-neap tidal cycle (typically ~14 days), while providing sufficient values (n = 7) for testing trend significance. A two-tailed Student's t-test with five degrees of freedom at the 20% significance level was incorporated to test for trend significance. Updated daily, the product was designed to mask trends insignificant at the 20% level, as well as those within the range −0.2 to 0.2 °C per week. SST Trend products can provide distinctive information on short-term changes associated with significant weather events (e.g., the passing of a tropical storm); change in upwelling strength; and warming caused by persistent doldrum conditions. While describing the changes in SST during the past seven days, trend products can also point to the trajectory of SST changes in the upcoming days. This product is useful to distinguish short-term thermal variations from the longer-term signals, lasting on the order of weeks to months, that lead to mass bleaching.
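The Seven-day SST Trend logic (a regression slope over the prior seven daily values, a two-tailed significance test with five degrees of freedom at the 20% level, and masking of weak trends) can be sketched as follows. The example series is synthetic, and the exact masking and unit conventions are assumptions based on the description above rather than a reproduction of CRW's operational code.

```python
# Minimal Seven-day SST Trend sketch: slope over the last seven daily SST values,
# masked if not significant at the 20% level or weaker than 0.2 C per week.
import numpy as np
from scipy import stats

def seven_day_trend(sst_last7):
    """Return the weekly SST trend (C/week), or NaN if the trend is masked."""
    y = np.asarray(sst_last7, float)
    x = np.arange(7, dtype=float)                # day index 0..6
    res = stats.linregress(x, y)                 # slope in C per day; p-value uses n - 2 = 5 df
    trend = res.slope * 7.0                      # express as C per week
    if res.pvalue >= 0.20 or abs(trend) < 0.2:   # two-tailed test at the 20% level, plus magnitude mask
        return np.nan
    return trend

print(seven_day_trend([28.1, 28.4, 28.6, 29.0, 29.3, 29.7, 30.0]))   # warming of roughly +2.2 C/week
```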
CNMI Field Work

The Mariana Archipelago consists of nine emergent volcanic islands to the north and five geologically older, raised limestone islands to the south (Figure 1a). The CNMI consists of all islands of the archipelago north of Guam (a separate US commonwealth from CNMI). Reef structure increases from north, where the underlying benthos consists of boulders surrounding the volcano, to south, where more-developed fringing reef areas are present. Reef managers and other stakeholders monitored the 5 km CRW DSS products during the development of thermal stress in 2014. From 26 June to 20 July 2014 members of the CNMI Bureau of Environmental and Coastal Quality's (BECQ) marine monitoring team collected bleaching data at 62 shallow (2-6 m), inshore sites across seven of the remote northern volcanic islands. The islands visited, from north to south (Figure 1a), were Uracas, Maug and Asuncion in June; and Pagan, Guguan, Sarigan and Anatahan in July. For each island, survey sites were selected using a stratified random sampling design with stratification based on distance along the coastline. At each site, surveys were conducted on snorkel with an average of 79 (range: 66-104) 0.25 m² photoquadrats from 1 m above the substrate taken across an approximately 200 m × 10 m belt transect. The number of photoquadrats depended upon the availability of hard bottom habitat within the prescribed depth range and ocean conditions. Using the computer program CPCe v4.1 [21], five random points (after [22]) were digitally overlaid on each photoquadrat frame and the substrate or biota under each point was recorded. Hard corals were identified to the genus level (allowing determination of generic richness at each site) following the Corals of the World taxonomy [23]. Bleaching and mortality were noted for each recorded coral point.

A separate, collaborative project provided the opportunity to revisit the island of Maug from 10-13 August 2014 and survey three relatively deeper (7-10 m) reef sites on SCUBA. For these surveys, three to five 50 m transects were laid out sequentially along the depth contour, along which 0.25 m² photoquadrats were taken every meter. Photoquadrats were processed as described above.

Analysis of Field Observations and Comparison with Satellite Products

Field data for the northern section of the Mariana Archipelago provided an opportunity to undertake a quantitative comparison between satellite thermal stress monitoring and observed bleaching. Time series of the CRW products were extracted for satellite pixels containing or directly adjacent to the field survey sites. For each survey site and date, thermal stress values were extracted. While the 5 km products are at higher resolution than previously available, sub-pixel variability can remain due to localized effects (e.g., bathymetry, turbidity, shading). To reduce effects of between-site variability in the comparisons, survey and satellite data were averaged for each island (island-scale), with the two Maug surveys in June and August kept distinct. The two periods of field observations at Maug (24-27 June and 12-13 August 2014) provided an opportunity to evaluate the progression in bleaching at that island as the thermal stress continued to develop.
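The point-count records described above lend themselves to a simple aggregation: percent coral cover and the percentage of coral points recorded as bleached at each site, followed by island-scale means and standard deviations. The sketch below uses a handful of hypothetical point records purely to illustrate the bookkeeping; it is not the BECQ survey data and omits details such as genus identification and mortality flags.

```python
# Minimal sketch: site-level percent cover and percent bleached from point counts,
# then island-scale summaries. All records below are hypothetical.
import pandas as pd

points = pd.DataFrame({
    "island":   ["Maug"] * 10 + ["Pagan"] * 10,
    "site":     ["M1"] * 5 + ["M2"] * 5 + ["P1"] * 5 + ["P2"] * 5,
    "category": ["coral", "coral", "sand", "coral", "algae",
                 "coral", "rock", "coral", "coral", "sand",
                 "coral", "sand", "sand", "algae", "coral",
                 "coral", "coral", "rock", "sand", "coral"],
    "bleached": [1, 0, 0, 1, 0,  1, 0, 0, 1, 0,  0, 0, 0, 0, 1,  1, 0, 0, 0, 0],
})

def site_summary(df):
    coral = df[df["category"] == "coral"]
    cover = 100.0 * len(coral) / len(df)                      # % of points that are coral
    bleached = 100.0 * coral["bleached"].mean() if len(coral) else 0.0
    return pd.Series({"pct_cover": cover, "pct_bleached": bleached})

site_stats = points.groupby(["island", "site"]).apply(site_summary)
island_stats = site_stats.groupby(level="island").agg(["mean", "std"])   # island-scale values
print(island_stats)
```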
To investigate any influence of site characteristics on bleaching, island-scale coral cover, generic richness and bleaching susceptibility were also determined. Coral taxa bleaching susceptibilities were extracted from the summary for the CNMI in [24], averaging the five-point scale of specific susceptibility (1-5, low-high) to give genus-level values (Table 1). Site and island-scale susceptibilities were determined as the weighted average of genus-level susceptibilities, based on the predominant coral taxa (i.e., those genera with at least 1% benthic cover) at each site. Generic richness, the number of coral genera present, is an effective predictor variable for coral species richness [25]. Historical temperature conditions for individual sites were also considered. The 5 km products for 2013 revealed substantial thermal stress levels across the CNMI. The maximum DHW value in 2013 for each island was used in the analysis to incorporate impacts of the prior year's thermal stress. Bleaching has also been linked to the SST variability for the warm season [26] and frequency of past thermal disturbance. Past temperature variability has also been identified as a key factor for reef resilience [27]. Due to the short temporal domain of the 5 km products, each of these metrics was calculated using the Pathfinder version 5.2 SST dataset (1985-2012), an official NOAA Climate Data Record for SST [15]. SST variability was calculated as the standard deviation about the mean from temperatures during the warmest three-month period, reflecting the likely acclimation to extreme warm temperature and, therefore, capacity for reduced impact during extremes. This parameter was recently included in a resilience assessment for the southern islands of CNMI [24]. Past thermal disturbance is represented here by the number of thermal stress events of 4 °C-weeks or greater (corresponding with the established CRW threshold for ecologically significant bleaching [3]).

Direct relationships between bleaching, thermal stress, coral community characteristics and historical temperature conditions were investigated through linear regression and correlation analysis to demonstrate which variables affect bleaching response. Combined linear effects from several factors were investigated using multiple correlation analysis, which led to multi-factor modeling of bleaching response. The form of the model was dependent upon which factors showed the strongest relationship with bleaching. Model fit was assessed using linear regression of modeled predictions with observed bleaching.
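The correlation analyses described above can be illustrated with a short script: pair-wise Pearson correlations among the island-scale variables, plus a multiple correlation computed as the square root of R² from an ordinary least squares fit of bleaching on DHW and generic richness. The eight island/month rows below are hypothetical placeholders with the same structure as the study data; they are not the values behind Tables 2 and 3.

```python
# Minimal sketch of the pair-wise and multiple correlation analysis.
# The eight rows are hypothetical island/month values, not the survey results.
import numpy as np
import pandas as pd

data = pd.DataFrame({
    "bleach":   [5, 15, 30, 45, 55, 60, 75, 90],                  # % coral bleached
    "dhw":      [3.4, 4.0, 5.5, 6.0, 7.2, 8.1, 9.0, 12.3],        # degree C-weeks
    "richness": [1.0, 1.5, 2.0, 2.2, 2.8, 3.0, 3.5, 3.9],         # coral genera with >=1% cover
    "cover":    [2, 5, 8, 10, 12, 15, 18, 22],                    # % coral cover
})

print(data.corr())                       # pair-wise (Pearson) correlation matrix

def multiple_correlation(df, y, xs):
    """Multiple correlation r = sqrt(R^2) from a least squares fit of y on predictors xs."""
    X = np.column_stack([np.ones(len(df))] + [df[x].to_numpy() for x in xs])
    coef, *_ = np.linalg.lstsq(X, df[y].to_numpy(), rcond=None)
    resid = df[y].to_numpy() - X @ coef
    r2 = 1.0 - resid.var() / df[y].to_numpy().var()
    return float(np.sqrt(max(r2, 0.0)))

print(multiple_correlation(data, "bleach", ["dhw", "richness"]))
```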
Results

Field observations were analyzed for each site individually and grouped by island and survey month (June-August). Coral cover varied between 0 and 34% at the sites surveyed. When averaged by island and month ("island-scale") the cover ranged from 2%-22%, with variability (standard deviation) between sites on the same order as and scaling with the average value (Figure 3a). Bleaching at sites ranged from 0%-94% of coral present. Island-and-month average bleaching varied within 0%-90%, with variability about the averages within 1%-34% (Figure 3b). Spatial variation in bleaching was particularly apparent for Maug, for which the June survey sites were distributed inside (six sites) and outside (seven sites) the volcanic caldera. While the coral cover (average ± SD) was fairly consistent inside and outside (11.1% ± 8.7% and 7.1% ± 3.3%, respectively), the observed bleaching was markedly greater but with less variability inside the caldera (52.2% ± 22.0%) than outside (13.8% ± 33.1%).

Generic richness of corals (with at least 1% benthic cover) ranged from 0-9 genera across the surveyed sites and 0.86-3.88 genera when averaged for island/month, with variability on the same order as and scaling with average values (Figure 3c). Bleaching susceptibility at sites ranged within 1.25-3.96 (on a scale from 1 to 5); susceptibilities were 1.75-3.44 at the island-scale with fairly consistent variability across the islands (Figure 3d).

Comparisons between the field survey data and satellite thermal stress revealed that observed bleaching increased with increasing DHW (Figure 4). Comparisons for each site individually (dots in Figure 4) were weakly correlated (r² = 0.142, linear regression), as was anticipated given that localized effects that may have influenced bleaching were not included. Grouping the surveys by island and month (squares in Figure 4) resulted in only one group having less than six observations (Maug, August; see Figure 3). This island-scale grouping reduced the effect of between-site variability and considerably enhanced the goodness-of-fit of the linear relationship (r² = 0.411), demonstrating with these data the established link between thermal stress and bleaching [2].
Consideration of the influence of benthic coral characteristics (cover, generic richness, susceptibility) and historical temperature conditions (number of bleaching-level stress events during 1985-2012, warm-season SST variability from 1985-2012, maximum DHW from 2013) was incorporated with thermal stress exposure (DHW) and bleaching response through pair-wise correlations of variables (Table 2). These revealed that cover, generic richness and DHW (r = 0.9423, 0.8166 and 0.6410, respectively) had the strongest correlations with percent bleaching. Strong positive correlations were observed between coral cover and generic richness (r = 0.7345) and between bleaching susceptibility and the number of bleaching-level stress events (r = 0.8293), the latter being the second highest correlation calculated. Multiple correlations of bleaching response with DHW exposure and various other parameters were used to identify the best combination of predictors for the 2014 northern CNMI bleaching (Table 3). With the small size of the island-scale data (n = 8), the number of contributing variables in determining the multiple correlations was limited to a maximum of three (in addition to DHW). The high correlation between bleaching and coral cover suggested the importance of cover; however, the combination of generic richness with DHW exposure provided the highest two-factor correlation with bleaching (r = 0.9455). Unsurprisingly, using all three further increased the correlation but not greatly (r = 0.9660), suggesting considerable overlap in the explanatory power of these variables. Adding susceptibility resulted in small improvements to each of the multivariate models. Combining current thermal stress with historical temperature parameters also improved the predictive capacity for bleaching from DHW exposure alone (Table 3).

Discussion

The new Regional Virtual Station (Figure 2) provided BECQ reef managers and other stakeholders in CNMI with the capacity to easily track the development of thermal stress in 2014. Regional long-term monthly climatology values illustrate the normal seasonal cycle of temperatures, with the warmest period for CNMI typically in July-September (Figure 2). The SST trace was above the regional monthly climatology values from the beginning of 2014. SST briefly exceeded the regional maximum monthly mean (MMM) climatology in April and was consistently above the MMM from the first week of May until November. A rapid increase in regional SST in the first week of June resulted in accumulation of thermal stress (i.e., DHW) in at least 10% of CNMI pixels. Regional SST values stayed above the Bleaching Threshold through June and July, with regional stress accumulation exceeding 4 °C-weeks (associated with ecologically significant coral bleaching [3]) in late June and 8 °C-weeks (associated with widespread bleaching and mortality [3]) in mid-July. Stress accumulation resulted in the 90th-percentile DHW value for CNMI peaking at 13.9 °C-weeks in early September. SST values returned below the regional Bleaching Threshold by the end of September and below the MMM by December. As Regional Virtual Stations represent the 90th-percentile conditions in each region, they do not necessarily reflect conditions at individual satellite pixels but rather the upper range of thermal stress conditions within the region. This event was part of a large-scale warm anomaly affecting the northwestern Pacific Ocean, whose epicenter was located just to the north of the CNMI [13].
The rapid increase in regional SST in early June was reflected in the Seven-day SST Trend product for 4-10 June 2014 (Figure 5a), with some locations at the northern extent of the archipelago experiencing an increase of more than 2.5 °C during this single week. This was the most rapid large-scale increase in thermal stress during the event and occurred while final preparations for the planned monitoring expedition to the northernmost islands of the CNMI were being undertaken [28]. A second rapid increase in SST peaked on 21 June (Figures 2 and 5b), exacerbating thermal stress. Based on the CRW satellite products, the field monitoring team was anticipating coral bleaching in the survey area and re-assessed their methods to ensure that bleaching information was captured. Throughout their surveys, BECQ staff provided regular status updates to CRW. A rapid decrease in SST occurred at the end of June (Figure 5c) during the first stages of the fieldwork, possibly related to an intensifying storm or frontal system. While SST trends that occurred in the northern CNMI were large in magnitude, the trend magnitudes were consistently less in the southern CNMI during the entire event. Spatial and temporal patterns in the SST Trend appeared to be associated with short-term weather events in the region, emphasizing the weather-related nature of bleaching events [7] superimposed upon broad-scale warm ocean anomalies [10].

By the end of June 2014, coincident with the first group of BECQ field surveys, thermal stress along the entire Mariana Archipelago (Figure 6a) had exceeded the DHW threshold of 4 °C-weeks that has historically been connected with ecologically significant bleaching [3]. By mid-July (Figure 6b), when most of the sites were surveyed, thermal stress had intensified to the level where widespread bleaching and significant mortality could be expected (8 °C-weeks [3]). By the mid-August visit to Maug, thermal stress there had increased to 12.3 °C-weeks (Figure 6c).

While the remote and unpopulated nature of the surveyed locations in CNMI minimizes localized anthropogenic impacts, it also greatly reduces the capacity for pro-active and responsive management actions. However, having information on the development of thermal stress meant that field observers could anticipate likely conditions during the surveys and target monitoring efforts and protocols accordingly. Targeting research opportunities is a significant benefit afforded by satellite remote sensing and predictive models of thermal stress. In more accessible and populated places, the capability of predicting and monitoring thermal stress can lead to implementation of management actions that enhance reef resilience, minimize impacts from other potential stressors and support recovery of reefs following disturbance. In all locations, satellite monitoring of thermal stress provides effective support for communication with governments and other stakeholders, including the public, about potential ecosystem impacts.
Variability in bleaching among sites at each island is unsurprising given that variations in physical exposure (i.e., windward vs. leeward sides) and coral community structure (i.e., coral diversity) across sites are likely to influence thermal response characteristics. This is exemplified by the volcanic caldera at Maug. The semi-enclosed nature of the caldera and reduced circulation may have resulted in a greater temperature increase within the caldera as compared with temperature in the open ocean during the 2014 event, with a corresponding increase in average bleaching observed inside the caldera. A volcanic vent inside the caldera acidifies the waters, affecting the abundance and diversity of corals [29] and may also have influenced the susceptibility of corals to thermal stress; however, recent evidence indicates little effect of acidified waters on bleaching sensitivity [30]. The increased bleaching variability outside the caldera is driven by one site that had distinctly different coral composition and was predominated by corals of greater susceptibility (80% Pocillopora spp.) than were present at the other outer caldera sites. This example emphasizes the importance of local knowledge in interpreting the broad-scale (even at 5 km resolution) satellite products for impacts on specific reef sites.

That observed bleaching increased with increasing DHW was consistent with past analyses using the CRW heritage (50 km) products (e.g., [2]). Acknowledging the conspicuous variation among site conditions in and around the Maug caldera, the island-scale bleaching for the June survey was re-calculated excluding sites inside the caldera. This revised the data point to just below the determined linear regression (at DHW = 3.43 °C-weeks, % Coral Bleached = 13.8% ± 33.1%) and resulted in a marginal improvement to the correlation (r² = 0.465). Even with this adjustment, it is clear that there remains considerable scatter of the island-and-month values about the line of best fit.

Performance evaluation of how the satellite products monitored the development of bleaching through time was hampered in that a return visit during the event occurred only at one island. Furthermore, the August survey at Maug consisted of only three sites and only one of these was at the same location (albeit deeper) as the June survey. This lack of replication makes direct comparisons between the June and August surveys and the analysis of event development difficult. At the one repeated site, inside the caldera, bleaching had increased from 74% (June) to 88% (August).
It is important to note that the 2014 thermal stress and bleaching was not an isolated event. The CNMI experienced significant thermal stress during 2013 [13] with multiple reports of significant coral bleaching and mortality in the field [31]. Based on the 2014 field observations, corals in Anatahan, Sarigan and Guguan appeared to have suffered extremely high mortality of bleaching-susceptible corals during 2013 (Figure 3d), consistent with the highest observed thermal stress in the region during 2013 [13]. This may explain why the bleaching response for these three islands was substantially lower than the linear regression in Figure 6. Comparing the proportion of high susceptibility (≥3 in Table 1) taxa observed at Anatahan, Sarigan and Guguan (42.8% ± 18.5%) with those from Uracas, Maug (June survey), Asuncion and Pagan (82.9% ± 7.4%) revealed they were distinct (Student's t = 4.01, df = 5, p = 0.0102). Colonies present at Anatahan, Sarigan and Guguan in 2014 were either new recruits or had tolerated and survived thermal stress in 2013. The presence of more tolerant corals may explain why the observed bleaching response in 2014 was below what might otherwise have been expected. More susceptible coral taxa or genotypes were probably the first to die in 2013. Considering only data from the four northernmost islands in June/July (i.e., excluding locations with prior effects of thermal stress) considerably strengthened the correlation between thermal stress and bleaching (r² = 0.871). While it is useful to examine this relationship in the absence of prior effects, bleaching prediction may be improved by considering other relevant factors.

The influence of benthic coral characteristics and historical temperature conditions demonstrated the importance of using coral community characteristics in interpreting bleaching likelihood due to thermal stress (Table 2). The positive correlation of generic richness with bleaching was somewhat surprising, given that coral diversity has been described as an important indicator of reef resilience, associated with both resistance to and recovery from bleaching [27]. This correlation may have resulted from the mortality of the most-susceptible taxa from thermal stress in 2013 leaving only the hardiest corals to be surveyed in 2014 (though there is no correlation between generic richness and maximum DHW from 2013). All else being equal, higher susceptibility should result in higher bleaching; however, the correlation between percent bleaching and susceptibility was unexpectedly low (r = 0.2980). We note that thermal stress levels varied across the surveys (Figure 4), so the true contribution of susceptibility to bleaching response may not have been realized in all locations. The strong positive correlation between cover and generic richness suggested that greater coral cover generally included more taxa for these locations.
Historical temperature conditions were generally less correlated with bleaching than coral characteristics. However, the correlation of bleaching with the number of bleaching-level stress events during 1985-2012 suggests the potential for this metric to combine with thermal stress exposure (DHW) to improve the prediction of bleaching. The association of past exposure with bleaching may be a result of past disturbance influencing coral characteristics, demonstrated by the correlations between the number of events and each of the coral characteristics. Notably, the strong positive correlation between number of stress events and susceptibility suggested that disturbance history had increased thermal tolerance of susceptible taxa for these CNMI locations. This is consistent with patterns between prior stress exposure and current bleaching impacts previously reported [26,32]. One possible explanation is that the higher growth rates and shorter generation times of the more susceptible genera infer a relatively higher capacity to adapt and/or acclimatize to thermal stress (i.e., a locally reduced susceptibility). Prior-year DHW exposure showed little correlation with coral characteristics, indicating that short-term disturbance history was less informative. The negative correlation between bleaching and warm-season SST variability supports past findings that, ceteris paribus, reefs acclimated to more variable summer temperatures experience less bleaching [26,27]. Additionally, the negative correlation between DHW (in both 2013 and 2014) and warm-season SST variability (r = −0.4715 and −0.5526, respectively) appears to suggest that less thermal stress is accumulated in highly variable locations; physically this could result from more variable SST being more likely to drop below the accumulation threshold for DHW, adding a stress mitigation mechanism to the reduced sensitivity coral acclimation. Multi-variable correlations (Table 3) of bleaching with combinations of thermal stress and benthic characteristics confirmed the value in interpreting thermal stress using knowledge of reef conditions. While the multi-variable correlations using historical temperature parameters were not as high as observed with the benthic characteristics, the combined effects of temperature history could prove useful in enhancing the capability of remotely sensed prediction of bleaching. This is especially so for the number of past stress exposure events, which might be considered a proxy for benthic characteristics (noting the correlations in Table 2).
Based on multiple correlations, a model for bleaching response was developed using only the thermal stress accumulation (DHW) and generic richness (GR). That bleaching occurred where both DHW and generic richness were high suggested an interactive model for these factors. Furthermore, no bleaching would be expected for zero thermal stress (irrespective of coral biodiversity). This led to a model that multiplied these factors to predict percent bleaching (B):

B = a × DHW × GR + c, (1)

where a and c are constants. This model format represents the linear relationship of each independent variable with bleaching (as used in the multiple correlation), for a fixed value of the other variable. A least squares fit of this model to the island-and-month data (a = 2.818, c = −4.935) provided a strong predictor for bleaching (Figure 7a, r² = 0.8788). The negative intercept suggests that bleaching does not occur below a threshold of DHW exposure (as a function of generic richness). A generalised form allowing each factor its own exponent,

B = a × DHW^d × GR^g + c, (2)

was also fit. The least squares fit of this model (Figure 7b, a = 0.4640, d = 0.9995, g = 2.509, c = 1.920, r² = 0.9696) revealed the exponent of DHW was very close to unity, the presumptive value in Equation (1). This supports the past use of linear comparisons between thermal stress and resultant bleaching (e.g., [2,33]). In contrast, the magnitude of the exponent g is substantially different from unity (as was presumed in Equation (1)). This stronger weighting of variations in the generic richness identifies its importance in describing the bleaching during this event.

The results here provide the first validation of the performance of reef-scale monitoring using satellite products, showing the clear relationship between thermal stress and bleaching. Furthermore, the study demonstrates that complex relations exist among various coral characteristics in describing bleaching. However, it does not automatically ensue that all mass bleaching events will necessarily follow the presented models for bleaching that incorporate generic richness and thermal stress accumulation. Given the limited geographic domain and the relatively small number of observations analyzed, these results should be tested more broadly. The analysis does emphasize the importance of interpreting thermal stress exposure using locally specific reef conditions.
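Both model fits can be reproduced with a generic nonlinear least-squares routine. The sketch below uses scipy's curve_fit with the two functional forms given in Equations (1) and (2); the data arrays are synthetic placeholders, so the fitted constants will not match the values quoted above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative island-level data (replace with the observed DHW, generic
# richness and percent-bleaching values to recover the quoted fits).
rng = np.random.default_rng(1)
DHW = np.array([2.0, 3.5, 4.0, 5.0, 5.5, 6.5, 7.0, 8.0])
GR = np.array([12.0, 18.0, 15.0, 25.0, 22.0, 30.0, 28.0, 35.0])
B = 0.028 * DHW * GR + rng.normal(0.0, 0.5, 8)

def eq1(X, a, c):            # Equation (1): B = a * DHW * GR + c
    D, G = X
    return a * D * G + c

def eq2(X, a, d, g, c):      # Equation (2): B = a * DHW**d * GR**g + c
    D, G = X
    return a * D**d * G**g + c

p1, _ = curve_fit(eq1, (DHW, GR), B)
p2, _ = curve_fit(eq2, (DHW, GR), B, p0=[0.03, 1.0, 1.0, 0.0], maxfev=20000)
print("Eq (1): a = %.4f, c = %.4f" % tuple(p1))
print("Eq (2): a = %.4f, d = %.4f, g = %.4f, c = %.4f" % tuple(p2))
```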
Conclusions

The 5 km Decision Support System continues the legacy of Coral Reef Watch products in providing timely information about thermal stress conducive to mass coral bleaching events. The new Regional Virtual Stations enable reef managers and other stakeholders to track the potential for mass bleaching events and underpin the automated emails notifying managers of changes in the Bleaching Alert level. Spatial patterns in stress are revealed through CRW's mapped products, including the newly released Seven-day SST Trend. The analysis of bleaching observations from the 2014 CNMI event, undertaken here, has validated the use of reef-scale monitoring products in monitoring conditions likely to result in bleaching. Analysis of benthic data and historical thermal conditions has shown the importance of interpreting thermal stress exposure using locally specific information. A model for observing bleaching based only on accumulated thermal stress (explaining 41% of variance in bleaching) was greatly improved by the inclusion of generic richness (97%). Where available, benthic coral characteristics can be used to refine the vulnerability of reef areas to bleaching and lead to improved robustness of the models demonstrated here. The potential for historical temperature conditions to prove useful in fine-tuning the level of impact on corals, particularly in remote and/or infrequently visited areas, was also indicated. Together with CRW's Four-Month Thermal Stress Outlook product, the 5 km DSS provides reef managers and other stakeholders with tools to respond to potential and apparent thermal stress exposure, taking appropriate management action to minimize ecosystem impacts before, during and following bleaching. In an era of changing climate, improved understanding and monitoring of threats to coral reef ecosystems will guide the management and conservation of reef resources.

Figure 1. (a) Map of the study region identifying islands at which surveys were undertaken; (b) Spatial coverage of the Regional Virtual Stations for CNMI (north of dashed line) and Guam include satellite pixels within 20 km of reef-containing pixels (outlined by polygons). The color of customized Google Maps pins (inverted teardrops) indicates the 90th percentile Bleaching Alert Area level for pixels within each Regional Virtual Station. The background image shows the Bleaching Alert Area level on 13 August 2014 displayed in Google Maps [16]; (c) Regional Virtual Station gauges for CNMI showing the Bleaching Alert level on 13 August 2014 (top gauge) and the forecast stress levels in the subsequent three months. Grey arrows indicate change from the previous gauge reading.

Figure 2. Regional Virtual Station time series for CNMI in 2014. The purple SST trace is the temperature at the location of the 90th percentile HotSpot value from among 5 km pixels for CNMI (see Figure 1b) for each date. Similarly, the red DHW trace is the 90th percentile DHW value and the color under this trace reflects the 90th percentile Bleaching Alert Area value. For each pixel, DHW accumulates when the SST value exceeds the maximum (blue dashed, MMM) of the monthly mean climatology values (blue plus) by at least 1 °C (blue solid, Bleaching Threshold); the time series shows the spatial average of each of these values. DHW thresholds of 4 and 8 °C-weeks (red dashed) have been associated with significant coral bleaching, and widespread bleaching and significant mortality, respectively.

Figure 3.
Observed (a) coral cover; (b) percent of coral bleached (white shading); and (c) number of coral genera present; with calculated (d) bleaching susceptibility, compiled by island and survey month. Bars show the island-and-month average, while whiskers show 1 SD. Numbers of sites for each island/month grouping are shown. Grey shading in all panels indicates the survey month (light-dark: June-August); white shading in (b) indicates percent bleached. Note that Maug was surveyed twice (in June and August 2014).

Figure 4. Comparison of percent coral bleached against Degree Heating Weeks (DHW). Site data (dots) were averaged by island and date (filled square), with the whiskers showing 1 SD. Symbol shading indicates the month of observation. The dashed line shows the linear regression of the island-scale data (y = 7.0177x − 7.5183, r² = 0.411).

Figure 5. Maps of 5 km Seven-day SST Trend for the Mariana Archipelago during June 2014. Regional SST peaked on (a) 10 June 2014 and (b) 21 June 2014, with a rapid cooling event culminating on (c) 30 June 2014 (see also Figure 2).

Figure 6. Maps of 5 km Degree Heating Week (DHW) for the Mariana Archipelago when bleaching observation surveys were conducted. (a) 30 June 2014; (b) 19 July 2014; (c) 13 August 2014. Arrows indicate locations of surveys taken within 1-11 days prior to the date shown. Shade of arrows corresponds to the different survey months (also in Figures 3 and 4).

Figure 7. Modeled predictions of percent bleaching, as functions of thermal stress accumulation (DHW) and generic richness. Models based on (a) the product of DHW and generic richness (Equation (1)); and (b) the product of exponential factors of DHW and generic richness (Equation (2)). Symbol color indicates the month of observation. The line of unity is also shown (dashed).

Table 2. Correlations (r) between percent Bleaching (B), Degree Heating Week (DHW) (thermal stress accumulation, D), Coral cover (C), Generic richness (G), Susceptibility (S), Number of DHW ≥ 4 °C-week events during 1985-2012 (N), Warm-season sea surface temperature (SST) variability during 1985-2012 (W) and the Maximum 5 km DHW from 2013 (M). All parameters were calculated for each island-and-month grouping (n = 8). Emboldened numbers are referenced in the text. Where no plausible correlation should be expected, values were not calculated.

Table 3. Multiple correlations between bleaching (dependent variable) and combinations of location-specific variables.
2016-03-01T03:19:46.873Z
2016-01-12T00:00:00.000
{ "year": 2016, "sha1": "a06f4d7d709cd76277b68b58398a025721025c21", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/8/1/59/pdf?version=1452593350", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "a06f4d7d709cd76277b68b58398a025721025c21", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Computer Science", "Geology" ] }
14446327
pes2o/s2orc
v3-fos-license
Small Designs for Path Connected Spaces and Path Connected Homogeneous Spaces

We prove the existence of designs of small size in a number of contexts. In particular our techniques can be applied to prove the existence of $n$-designs on $S^{d}$ of size $O_d(n^{d}\log(n)^{d-1})$.

Introduction

Given a measure space (X, µ) and a set of functions f_1, . . . , f_m : X → R, [9] defines an averaging set to be a finite set of points p_1, . . . , p_N ∈ X so that

(1/N) Σ_{i=1}^N f_j(p_i) = ∫_X f_j(x) dµ(x)

for all 1 ≤ j ≤ m. The authors of [9] show that if X is a path-connected topological space, µ has full support, and the f_i are continuous, then such sets necessarily exist. In this paper, we study the problem of how small such averaging sets can be. In particular, we define a design problem to be the data of X, µ and the vector space of functions on X spanned by the f_j. For a design problem, D, we show that there exist averaging sets (we call them designs) for D with N relatively small. Perhaps the best studied case of the above is that of spherical designs, introduced in [5]. A spherical design on S^d of strength n is defined to be an averaging set for X = S^d (with the standard measure) where the set of f_j is a basis for the polynomials of degree at most n on the sphere. It is not hard to show that such a design must have size at least Ω_d(n^d) (proved for example in [5]). It was conjectured by Korevaar and Meyers that designs of size O_d(n^d) existed. There has been much work towards this Conjecture. Wagner proved in [11] that there were designs of size O_d(n^{12d^4}). This was improved by Korevaar and Meyers in [6] to O_d(n^{(d^2+d)/2}), and by Bondarenko and Viazovska in [4] to O_d(n^{2d(d+1)/(d+2)}). In [3], Bondarenko, Radchenko, and Viazovska recently announced a proof of the full conjecture. In this paper, we develop techniques to prove the existence of small designs in a number of contexts. In greatest generality, we prove that on a path-connected topological space there exist designs to fool any set of continuous functions on X of size roughly MK, where M is the number of linearly independent functions, and K is a measure of how badly behaved these functions are. We also show that if in addition X is a homogeneous space and the linear span of functions we wish to fool is preserved under the symmetry group of X, then K ≤ M. For example, this immediately implies strength-n designs of size O(n^{2d}/(d!)^2) on S^d. It also implies the existence of small Grassmannian designs (see [1] for the definition). Generally, this result proves the existence of designs whose size is roughly the square of what we expect the optimal size should be. With a slight modification of our technique, we can also achieve better bounds in some more specialized contexts. In particular, in Section 6 we produce designs of nearly optimal size for beta distributions on the interval [−1, 1], and in Section 7, we prove the existence of strength-n designs on S^d of size O_d(n^d log(n)^{d−1}), which is optimal up to a polylog factor. In Section 2, we describe the most general setting of our work and some of the fundamental ideas behind our technique. In Section 3, we handle our most general case of path-connected spaces. In Section 4, we produce an example in which the upper bound for sizes of designs in the previous section is essentially tight. In Section 5, we study the special case of homogeneous spaces. In Section 6, we provide nearly optimal bounds for the size of designs for beta distributions on the interval.
In Section 7, we prove our bounds on the size of spherical designs.

Basic Concepts

We begin by defining the most general notion of a design that we deal with in this paper.

Definition. A design problem is a triple (X, µ, W) where X is a measure space with a positive measure µ, normalized so that µ(X) = 1, and W is a vector space of L¹ functions on X. Given a design problem (X, µ, W), a design of size N is a list of N points (not necessarily distinct) p_1, p_2, . . . , p_N ∈ X so that for every f ∈ W,

(1/N) Σ_{i=1}^N f(p_i) = ∫_X f(x) dµ(x). (2)

A weighted design of size N is a set of points p_1, p_2, . . . , p_N ∈ X and a list of weights w_1, w_2, . . . , w_N ∈ [0, 1] so that Σ_{i=1}^N w_i = 1 and so that for each f ∈ W,

Σ_{i=1}^N w_i f(p_i) = ∫_X f(x) dµ(x). (3)

For example, if (X, µ) is the d-sphere with its standard (normalized) measure, and W is the space of polynomials of total degree at most n restricted to X, then our notion of a design (resp. weighted design) corresponds exactly to the standard notion of a design (resp. weighted design) of strength n on the d-sphere. Note that a design is the same thing as a weighted design in which all the weights are 1/N. Notice that if we set f(x) to be any constant function, then the formulas in Equations 2 and 3 will hold automatically. Hence for a design problem it is natural to define the vector space V of functions on X to be the space of functions, f, in W + 1 so that ∫_X f(x) dµ(x) = 0.

Lemma 1. For a design problem (X, µ, W) with V as defined above, p_1, p_2, . . . , p_N is a design (resp. p_1, p_2, . . . , p_N, w_1, w_2, . . . , w_N is a weighted design) if and only if for all f ∈ V, Σ_{i=1}^N f(p_i) = 0 (resp. Σ_{i=1}^N w_i f(p_i) = 0).

Proof. Since any design can be thought of as a weighted design, it suffices to prove the version of this Lemma for weighted designs. First assume that Σ_{i=1}^N w_i f(p_i) = 0 for each f ∈ V. For every g ∈ W, letting f(x) = g(x) − ∫_X g(y) dµ(y), we have f ∈ V. Hence

Σ_{i=1}^N w_i g(p_i) = Σ_{i=1}^N w_i f(p_i) + ∫_X g(y) dµ(y) = ∫_X g(y) dµ(y).

Hence p_i, w_i is a weighted design. If on the other hand p_i, w_i is a weighted design and f ∈ V, then f(x) = g(x) + c for some g ∈ W and constant c, and Σ_{i=1}^N w_i f(p_i) = ∫_X g(y) dµ(y) + c = ∫_X f(y) dµ(y) = 0.

It will also be convenient to associate with the design problem (X, µ, W) the number M = dim(V). We note that there is a natural map E : X → V^*, where V^* is the dual space of V. This is defined by (E(p))(f) = f(p). This function allows us to rephrase the idea of a design in the following useful way:

Lemma 2. Given a design problem (X, µ, W) along with V and E as described above, p_i is a design (resp. p_i, w_i is a weighted design) if and only if Σ_{i=1}^N E(p_i) = 0 (resp. Σ_{i=1}^N w_i E(p_i) = 0).

Proof. Again it suffices to prove only the version of this Lemma for weighted designs. Note that for f ∈ V,

(Σ_{i=1}^N w_i E(p_i))(f) = Σ_{i=1}^N w_i f(p_i).

This is 0 for all f ∈ V if and only if Σ_{i=1}^N w_i E(p_i) = 0. This, along with Lemma 1, completes the proof.

To demonstrate the utility of this geometric formulation, we present the following Lemma:

Lemma 3. For a design problem with M finite, there exists a weighted design for this problem of size at most M + 1.

Proof. For each f ∈ V we have ∫_X (E(x))(f) dµ(x) = ∫_X f(x) dµ(x) = 0. Therefore ∫_X E(x) dµ(x) = 0. Therefore 0 is in the convex hull of E(X). Therefore 0 can be written as a positive affine linear combination of at most M + 1 points in E(X). By Lemma 2, this gives us a weighted design of size at most M + 1.

Unfortunately, our notion of a design problem is too general to prove many useful results about. We will therefore work instead with the following more restricted notion:

Definition. A topological design problem is a design problem (X, µ, W) in which X is a topological space, the σ-algebra associated to µ is Borel, the functions in W are bounded and continuous, and W is finite dimensional.
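As a concrete illustration of these definitions (not taken from the paper), one can check numerically that N equally spaced points on the unit circle form a design when W is the span of the trigonometric functions of degree at most n < N, with µ the normalized arc-length measure; each such function integrates to 0, so by Lemma 1 it suffices that the point sums vanish:

```python
import numpy as np

# N equally spaced points on S^1 versus W = span{cos(k t), sin(k t) : 1 <= k <= n}.
# Each basis function has integral 0 over the normalized circle, so by
# Lemma 1 the points form a design iff all of the sums below vanish.
N, n = 7, 5  # any n < N works
theta = 2 * np.pi * np.arange(N) / N
for k in range(1, n + 1):
    assert abs(np.cos(k * theta).sum()) < 1e-9
    assert abs(np.sin(k * theta).sum()) < 1e-9
print(f"{N} equally spaced points fool all trig polynomials of degree <= {n}")
```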
We call a topological design problem path-connected if the topology on X makes it a path-connected topological space. We call a topological design problem homogeneous if for every x, y ∈ X there is a measure-preserving homeomorphism f : X → X so that f^*(W) = W and f(x) = y. We will also want a measure on the complexity of the functions in W for such a design problem.

Definition. Let (X, µ, W) be a topological design problem. Associate to it the number

K = sup_{f ∈ V\{0}} sup(f)/|inf(f)|.

Notice that since sup(f)/|inf(f)| is invariant under scaling of f by positive numbers, and since V\{0} modulo such scalings is compact, K will be finite unless there is some f ∈ V\{0} so that f(x) ≥ 0 for all x. Since ∫_X f(x) dµ(x) = 0 this can only be the case if f is 0 on the support of µ. Throughout the rest of the paper, to each topological design problem (X, µ, W) we will associate V, E, M, K as described above.

The Bound for Path Connected Spaces

In this Section, we prove the following Theorem, which will also be the basis for some of our later results.

Theorem 4. Let (X, µ, W) be a path-connected topological design problem. If M > 0, then for every integer N > (M − 1)(K + 1) there exists a design of size N for this design problem.

Throughout the rest of this Section, we use X, µ, W, V, E, M, K, N to refer to the corresponding objects in the statement of Theorem 4. Our proof technique will be as follows. First, we construct a convex polytope P given by the convex hull of points of E(X), that also contains the origin. Next, we construct a continuous function F : P → V^* so that every point in the image of F is a sum of N points in E(X), and so that for each facet, T, of P, F(T) lies on the same side of the hyperplane through the origin parallel to the one defining T as T does. Lastly, we show, using topological considerations, that 0 must be in the image of F. We begin with the construction of P.

Proposition 5. For every ε > 0, there exists a polytope P ⊂ V^* spanned by points of E(X) such that for every linear inequality satisfied by the points of P of the form ⟨x, f⟩ ≤ c, with f ∈ V and c > 0, we have sup(|f|) ≤ c(K + ε).

Proof. Suppose that P is the convex hull of some set of points E(p_i) for some points p_i ∈ X. Then it is the case that ⟨x, f⟩ ≤ c for all x ∈ P if and only if this holds for all x = E(p_i), or if f(p_i) ≤ c for all i. Hence it suffices to find some finite set of p_i ∈ X so that for each f ∈ V\{0}, sup(|f|) ≤ sup_i f(p_i)(K + ε). Notice that this condition is invariant under scaling f by a positive constant, so it suffices to check for f on the unit sphere of V. Notice that by the definition of K, for each such f there is a p ∈ X so that sup(|f|) ≤ f(p)K. Notice that for such a p, sup(|g|) ≤ g(p)(K + ε) for all g in some open neighborhood of f. Hence these p define an open cover of the unit ball of V, and by compactness there must exist a finite set of p_i so that for each such f, sup(|f|) ≤ f(p_i)(K + ε) for some i. This completes our proof.

Throughout the rest of this section we will use ε and P to refer to a positive real number and a polytope in V^* satisfying the conditions from Proposition 5. We now construct our function F.

Proposition 6. There exists a continuous function F : P → V^* with the following properties:
• For each Q ∈ P, F(Q) is a sum of N elements of E(X)
• For each facet, T, defined by the equation L(x) = c for some linear function L on V^* and some c ∈ R^+, L(F(T)) ⊂ R^+

Proof. For a real number x, let ⌊x⌋ denote the greatest integer less than or equal to x and let {x} = x − ⌊x⌋ denote the fractional part of x. Let p_i be points in X so that P_i = E(p_i) are the vertices of P.
Let p_0 be some particular point in X. Since X is path-connected, we can produce continuous paths γ_i : [0, 1] → X so that γ_i(0) = p_0 and γ_i(1) = p_i. For r ∈ [0, 1] a real number, we use [rP_i] to denote E(γ_i(r)). We let [0] := [0P_i] = E(p_0). We also note that [P_i] := [1P_i] = P_i and that [rP_i] is continuous in r. Next pick a triangulation of P. Our basic idea will be as follows: for any Q ∈ P, if Q is in the simplex in our triangulation defined by P_{n_0}, P_{n_1}, . . . , P_{n_d} for some n_i and d ≤ M, we can write Q uniquely as Q = Σ_{i=0}^d x_i P_{n_i} with x_i ∈ [0, 1] and Σ_i x_i = 1 (here we think of the sum as being a sum of points in V^*). The idea is that F(Q) should be approximately NQ = Σ_{i=0}^d N x_i [P_{n_i}]. If the N x_i are all integers, this is just a sum of N points. Otherwise, we need to smooth things out some, and define F as follows. Let S be the set of i ∈ {0, . . . , d} so that {N x_i} ≥ 1 − 1/(3M), and define F(Q) from the integer parts ⌊N x_i⌋, distributing the remaining mass along the paths [rP_{n_i}]. We have several things to check. First, we need to check that F is well defined. Next, we need to check that F is continuous. Finally, we need to check that F has the desired properties.

We must first show that F is well defined. We have defined it on each simplex of our triangulation, but we must show that these definitions agree on the intersection of two simplices. It will be enough to check that if Q is in the simplex defined by P_{n_0}, . . . , P_{n_d} and the simplex defined by P_{n_0}, . . . , P_{n_d}, P_{n_{d+1}}, then our two definitions of F(Q) agree (because then all definitions of F(Q) agree with the definition coming from the minimal simplex containing Q). In this case, if we write Q = Σ_{i=0}^d x_i P_{n_i} = Σ_{i=0}^{d+1} y_i P_{n_i}, then it must be the case that x_i = y_i for i ≤ d and y_{d+1} = 0. It is easy to check that our two definitions of F on this intersection agree on Q.

To prove continuity, we need to deal with several things. Firstly, since F can be defined independently on each simplex in our decomposition of P in such a way that the definitions agree on the boundaries, we only need to check that F is continuous on any given simplex. In this case, we may write F(Q) = F(x_0, . . . , x_d). We also note that we can write F as a sum of functions F_i depending on the individual coordinates. We now have to check continuity of the F_i. Note that F_i is clearly continuous except possibly where y is either an integer or an integer minus 1/(3M). For integer n, as y approaches n from below, the two branches of the definition agree in the limit. Hence F is continuous.

Next we need to check that for any Q, F(Q) is a sum of N elements of E(X). From the definition it is clear that F(Q) is a sum of elements of E(X) with integer coefficients that add up to N. Hence, we just need to check that all of these coefficients are positive. This is obvious for all of the coefficients except those arising from the rounding.

Finally, suppose that T is some facet of P defined by L(x) = c > 0 and that Q lies on T. Since (V^*)^* = V, there is a function f ∈ V so that L(x) = ⟨x, f⟩ for all x ∈ V^*. Let Q be in the simplex defined by P_{n_0}, . . . , P_{n_d} where P_{n_i} ∈ T and d ≤ M − 1. We need to show that L(F(Q)) > 0. Recall by the construction of P that for any p ∈ X, |f(p)| ≤ c(K + ε). Equivalently, |L(E(p))| ≤ c(K + ε). Note also that since the P_{n_i} are in T, L(P_{n_i}) = c for each i; combining these estimates with the bound N > (M − 1)(K + 1 + ε) shows that L(F(Q)) > 0. This completes our proof.

To finish the proof of Theorem 4 we will use the following:

Proposition 7. Let Q ⊂ U = R^n be a convex polytope containing 0, and let F : Q → U be a continuous function such that for each facet, T, of Q defined by L(x) = c > 0, we have L(F(T)) ⊂ R^+. Then 0 ∈ F(Q).

Proof. We may assume that Q spans U = R^n, since otherwise we may replace U by the span of Q and replace F by its composition with a projection onto this subspace. Suppose for sake of contradiction that 0 ∉ F(Q).
Consider the map f : B^n → Q defined by letting f(0) = 0 and otherwise f(x) = m_x x, where m_x is the unique positive real number so that m_x x/|x| ∈ ∂Q. Next consider g : Q → S^{n−1} defined by g(x) = F(x)/|F(x)|. Composing, we get a map g ∘ f : B^n → S^{n−1}. Since the map extends to the whole ball, its restriction g ∘ f : S^{n−1} → S^{n−1} must be contractible. We use our hypothesis on F to show that this map is actually degree 1 and reach a contradiction. First, we claim that for no x ∈ S^{n−1} is g(f(x)) = −x. Given such an x, f(x) lies on some facet T of Q defined by L(y) = c > 0, and since f(x) is a positive multiple of x, we have L(x) > 0. We also have that L(g(f(x))) > 0 because g(f(x)) is a positive multiple of a point in F(T). Since L(x) > 0 and L(g(f(x))) > 0, it cannot be the case that g(f(x)) = −x. Finally, we claim that any map h : S^{n−1} → S^{n−1} that sends no point to its antipodal point is degree 1. This is because there is a homotopy from h to the identity by moving each h(x) at a constant rate along the arc from −x to h(x) to x.

Finally, we can prove Theorem 4.

Proof. We construct the polytope P as in Proposition 5 with ε < N/(M − 1) − K − 1, and F as in Proposition 6. Then by Proposition 7 we have that 0 is in the image of F. Since every point in the image of F is a sum of N points of E(X), we have a design of size N by Lemma 2.

Tightness of the Bound

In this Section, we demonstrate that, in the generality in which it is stated, the lower bound for N in Theorem 4 is tight. First, we note that although it is possible that K is infinite, this can be indicative of the non-existence of designs of any size.

Proposition 8. Let α ∈ (0, 1) be an irrational number. Consider the topological design problem. Then there is no unweighted design for this problem of any size. For each i, we must have that g(p_i) is either 0 or 1; hence (1/N) Σ_i g(p_i) is a rational number and cannot be α, which is irrational.

We show that even when K is finite, a path-connected topological design problem may require that its designs be nearly the size mentioned in Theorem 4. In particular, we show:

Proposition 9. Let m > 1 be an integer and k ≥ 1, ε > 0 real numbers. Then there exists a path-connected topological design problem with M = m and K ≤ k + ε that admits no design of size (m − 1)(k + 1) or less.

Proof. First note that by increasing the value of k by ε/2 and decreasing ε by a factor of 2, it suffices to construct such a design problem that admits no design of size strictly less than (m − 1)(k + 1). We construct such a design problem as follows. Let X = [0, 1] and let µ be the Lebesgue measure. Let F : X → R be a continuous function with ∫_X F dµ = 0 that equals k on [0, 1/(2k)] and satisfies F ≥ −1 elsewhere. Notice that such F are not difficult to construct. Next pick δ > 0 a sufficiently small real number (we will discuss how small later). Let φ_i for 1 ≤ i ≤ m − 1 be continuous real-valued functions on X with disjoint supports contained in [0, 1/(2k)]. It is not hard to see that this is possible to arrange as long as δ is sufficiently small. From the φ_i define functions f_i that are negative only on the supports of the corresponding φ_i; it is easy to see that ∫_X f_i(x) dµ(x) = 0. We let W be the span of F and the f_i. Since all elements of W already have 0 integral, we have that V = W, so M = dim(W). The F and the f_i are clearly linearly independent, and hence M = m. We now need to bound K. Consider an element of V of the form G = aF + Σ_i a_i f_i. It is easy to see that G's values on [1/(2k), 1 − 1/(4k)] are sandwiched between its values on the rest of X. Hence G attains its sup and inf on the complement of this interval. Suppose for sake of contradiction that sup(G)/|inf(G)| > k + ε. This means that sup(G) + (k + ε) inf(G) > 0. If sup(G) = ak + sδ − min(a_i, 0), this is at most a quantity which is non-positive for δ sufficiently small.
If on the other hand sup(G) takes the other possible form, the corresponding bound is likewise non-positive for δ sufficiently small, yielding a contradiction. Hence, if we picked δ sufficiently small, sup(G)/|inf(G)| ≤ k + ε for every such G, and so K ≤ k + ε.

Next suppose that we have a design x_1, . . . , x_N for this design problem. Since Σ_i f_j(x_i) = 0 and since f_j is negative only on the support of φ_j, we must have at least m − 1 of the x_i each in a support of one of the φ_j, and hence there must be at least m − 1 of the x_i in [0, 1/(2k)]. Next we note that we must also have Σ_i F(x_i) = 0. At least m − 1 of these x_i are in [0, 1/(2k)] and therefore F of these x_i equals k. Therefore, since F(x_j) ≥ −1 for each other j, there must be at least k(m − 1) other points in our design. Hence N must be at least (m − 1) + k(m − 1) = (m − 1)(k + 1).

The Bound for Homogeneous Spaces

In this Section, we show that there is a much nicer bound on the size of designs if we have a homogeneous, path-connected, topological design problem. We will show that K ≤ M − 1, where the inequality is strict unless X has a design of size M. An application of Theorem 4 then yields our result:

Theorem 10. Let X be a homogeneous, path-connected topological design problem with M > 0. Then K ≤ M − 1.

We begin with a Lemma.

Lemma 11. If X is a homogeneous topological design problem, and if p_i, w_i is a weighted design for X, then K ≤ (1 − max(w_i))/max(w_i).

Proof. Without loss of generality, w_1 = max(w_i). Suppose for sake of contradiction that K > (1 − w_1)/w_1. Then there is an f ∈ V with inf(f) = −1 and a point p ∈ X with f(p) > (1 − w_1)/w_1. Since X is homogeneous, there is a g : X → X preserving all properties of the design problem so that g(p_1) = p. Since g preserves µ and W, g(p_i), w_i must also be a weighted design for X. Therefore, Σ_i w_i f(g(p_i)) = 0. But on the other hand this is at least w_1 f(p) − (1 − w_1) > 0, yielding a contradiction.

We note the following interesting pair of Corollaries.

Corollary 12. If X is a homogeneous topological design problem, and p_i, w_i a weighted design for X, then max(w_i) ≤ 1/(K + 1).

Corollary 13. If X is a homogeneous topological design problem, X admits no weighted design of size less than K + 1.

We will also need one more Lemma.

Lemma 14. If X is a path-connected topological design problem and M > 0, X has a weighted design of size at most M.

Proof. Suppose for sake of contradiction that there is no such weighted design. Then it must be the case that there are no p_i ∈ X and w_i ≥ 0, not all zero, for 1 ≤ i ≤ M so that Σ_i w_i E(p_i) = 0. This means that whenever a non-negative linear combination of M + 1 values of E(p_i) equals 0, the weights must be all 0 or all positive. By Lemma 3 there must be some M + 1 points for which some non-negative linear combination equals 0. As we deform our set of points, it will always be the case that some linear combination equals 0 by a dimension count. Furthermore, the coefficients of this combination will vary continuously. Since, by assumption, it is never possible to write 0 as a non-negative linear combination with at least one coefficient equal to 0, it must be the case that no matter how we deform the p_i, there will always exist a linear combination equal to 0 with strictly positive coefficients. But this is clearly not the case if all of the p_i are equal to some point p on which not all of the functions in V vanish.

We can now prove Theorem 10.

Proof. By Lemma 14, there is a weighted design for X of size at most M. If all of the weights are equal, this is a design of size M, and by Lemma 11, K ≤ (1 − 1/M)/(1/M) = M − 1. Otherwise max(w_i) > 1/M, and Lemma 11 gives K < M − 1.

Examples

We provide several Corollaries of Theorem 10.

Corollary 15. There exists a spherical design of strength n on the d-dimensional sphere of size O(n^{2d}/(d!)^2).

Corollary 16. There exists a design of strength n on the Grassmannian G(m, k) of size O_{m,k}(n^{2k(m−k)}).
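To make Corollary 15 concrete: for the sphere, M is one less than the dimension of the space of polynomials of degree at most n on S^d (the constants are quotiented out in V), and that dimension can be computed from the standard dimensions of the spherical-harmonic spaces. A short sketch using standard formulas (an illustration, not code from the paper):

```python
from math import comb, factorial

def dim_poly_sphere(n: int, d: int) -> int:
    """Dimension of the polynomials of degree <= n restricted to S^d,
    as a sum of spherical-harmonic dimensions."""
    def harm(k: int) -> int:
        if k == 0:
            return 1
        return comb(k + d, d) - comb(k + d - 2, d)
    return sum(harm(k) for k in range(n + 1))

# Sanity check: on S^2 the dimension is (n + 1)^2.
assert dim_poly_sphere(10, 2) == 121
# The leading behaviour is ~ 2 n^d / d!, so Theorem 10 gives designs of size
# O(M^2) = O(n^{2d}/(d!)^2), matching Corollary 15.
print(dim_poly_sphere(10, 2), 2 * 10**2 // factorial(2))
```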
Conjecture

Although we prove a bound of size O(M²) for homogeneous path-connected topological design problems, it feels like the correct result should be O(M), since that is roughly the number of degrees of freedom that you would need. We can rephrase the problem for homogeneous path-connected spaces a little though. First, we may replace X by E(X), which is a bounded subset of V^*. Next, we note that the L² measure on V is preserved by the symmetries of X. Hence the symmetry group G of X (which is transitive by assumption) is a subgroup of O(V^*), and hence compact. Since X is a quotient of the identity component G_0 of G, we may pull our design problem back to one on G_0 (using the pullbacks of µ and W). Since G_0 is also a path-connected subgroup of O(V^*), it must be a Lie group. Hence we have reduced the problem of finding a design in a path-connected homogeneous topological design problem to finding one in a design problem of the following form: X = G is a compact Lie Group; µ is the normalized Haar measure for G; W is a left-invariant, finite dimensional space of functions on G. Since L²(G) decomposes as a sum ⊕_{ρ_i ∈ Ĝ} ρ_i ⊗ ρ_i^*, W must be a sum of the form ⊕_{ρ_i ∈ Ĝ} ρ_i ⊗ W_i, where W_i is a subspace of ρ_i^* and all but finitely many W_i are 0. Note that although we have all this structure to work with, proving better bounds even for the circle seems to be non-trivial. This Conjecture says in that case that given any M distinct non-zero integers n_i, there should exist O(M) complex numbers z_j with |z_j| = 1 so that Σ_j z_j^{n_i} = 0 for all i. Here and throughout the paper, O_a(N) denotes a quantity bounded above by N times some absolute constant depending only on a, and Θ_a(N) denotes a quantity bounded above and below by positive multiples of N that depend only on a.

Here I = [−1, 1] and µ_{α,β} denotes the normalized beta measure on I, with dµ_{α,β} proportional to (1 − x)^α(1 + x)^β dx. Several others have considered the problem of finding designs for this design problem. Bernstein proved in [2] the existence of such designs of size O(n²) for α = β = 0. This work was later extended by Kuijlaars, who proved asymptotically optimal upper bounds for α = β ≥ 0 in [8] and for α, β ≥ 0 in [7]. Theorem 17 extends these results to the case of α and β negative. In order to prove this Theorem, we will first need to review some basic facts about Jacobi polynomials. We will use [10] as a guide.

Definition. We define the Jacobi polynomials inductively as follows: for n a non-negative integer and α, β ≥ −1/2, P_n^{(α,β)}(x) is the unique degree n polynomial with P_n^{(α,β)}(1) equal to the binomial coefficient (n+α choose n) and so that P_n^{(α,β)} is orthogonal to P_k^{(α,β)} for k < n with respect to the inner product ⟨f, g⟩ = ∫_I f(x)g(x) dµ_{α,β}(x).

Lemma 18. Let µ be a normalized measure on I. Let R_n^µ be the sequence of orthogonal polynomials for µ (i.e. R_n^µ is a polynomial of degree n, and {R_0^µ, R_1^µ, . . . , R_n^µ} is an orthonormal basis for P_n with the inner product ⟨f, g⟩ = ∫_I f g dµ). Let r_1, . . . , r_n be the roots of R_n^µ and let w_1, . . . , w_n be the associated Gaussian quadrature weights. Then (w_i, r_i) is a weighted design for (I, µ, P_{2n−1}). (A numerical check of this classical quadrature fact is sketched at the end of the paper.)

We are now prepared to show that all designs for (I, µ_{α,β}, P_n) are reasonably large.

Proposition 19. Any design for (I, µ_{α,β}, P_n) has size Ω_{α,β}(n^{2α+2}).

Proof. We increase n by a factor of 2, and instead prove bounds on the size of designs for (I, µ_{α,β}, P_{2n}). Let r_n be the biggest root of R_n^{(α,β)}, and let p be R_n^{(α,β)}(x) times a polynomial of degree less than n, chosen so that p is positive outside of [r_n, 1]. Then ∫_I p dµ_{α,β} = 0, and since p(x) is positive outside of [r_n, 1], any design must have a point in this interval. Therefore any design must have at least one point in [1 − O_{α,β}(n^{−2}), 1]. If such a point is written as cos θ, then θ = O_{α,β}(n^{−1}).
For c a sufficiently small constant (depending on α and β), define f = (Σ_{cn≤i≤2cn} a_i R_i^{(α,β)})² with Σ_i a_i² = 1. It is clear that f(x) ≥ 0 for all x, and clear from the orthonormality that ∫_I f dµ_{α,β} = 1. On the other hand, for c sufficiently small and cn ≤ i ≤ 2cn, Equation 5 tells us that f = Ω(n^{2α+2}) on [r_n, 1]. Therefore if p_1, . . . , p_N is a design for (I, µ_{α,β}, P_n), we may assume that p_1 ∈ [r_n, 1], and since f ≥ 0 we have that N = Σ_j f(p_j) ≥ f(p_1) = Ω(n^{2α+2}). Therefore N = Ω(n^{2α+2}).

In order to prove the upper bound, we use a slightly more sophisticated version of our previous techniques. First, we need to define some terminology.

Definition. For a design problem (X, µ, W) and a map γ : [0, 1] → X, we define Var_γ(f) to be the total variation of f ∘ γ on [0, 1], and

K_γ = sup_{f ∈ V\{0}} Var_γ(f)/sup_{x ∈ [0,1]} f(γ(x)).

It should be noted that, as a consequence of this definition, if there are f ∈ V\{0} that are non-positive on γ([0, 1]), this will cause K_γ to be infinite. It should be noted that in such cases, it will usually not be the case that there will be any design supported only on the image of γ. If no such f exists, a compactness argument shows that K_γ is finite. We note that, by replacing f with a suitable normalization g, or equivalently scaling g by an arbitrary positive constant, K_γ admits an alternative description as a supremum of Var_γ(g) over suitably normalized g ∈ W + 1 that are non-negative on γ([0, 1]).

Proposition 20. Let (X, µ, W) be a topological design problem with M > 0. Let γ : [0, 1] → X be a continuous function with K_γ finite. Then for any integer N > K_γ/2 there exists a design for (X, µ, W) of size N.

Proof. Let 0 < ε < 2N/K_γ − 1. For every f ∈ V\{0}, there exists an x ∈ [0, 1] so that K_γ f(γ(x))(1 + ε) > Var_γ(f). Since this property is invariant under scaling of f by positive real numbers, and since it must also hold for some open neighborhood of f, by compactness we may pick finitely many x_i so that for every f ∈ V\{0} there is some i with K_γ f(γ(x_i))(1 + ε) > Var_γ(f). Let P be the polytope in V^* spanned by the points E(γ(x_i)). We will define a function F : P → V^* with the following properties:

• F is continuous
• For each x ∈ P, F(x) can be written as Σ_{i=1}^N E(γ(y_i)) for some y_i ∈ [0, 1]
• For each facet T of P defined by L(x) = c > 0, L(F(T)) ⊂ R^+

Once we construct such an F, we will be done by Proposition 7. Suppose that our set of x_i is x_1 < x_2 < . . . < x_R. We first define a continuous function C : P → R^R whose image consists of points with non-negative coordinates that add to 1. This is defined as follows. First, we triangulate P. Then for y ∈ P in the simplex spanned by, say, {E(γ(x_{i_1})), E(γ(x_{i_2})), . . . , E(γ(x_{i_k}))}, we can write y uniquely as Σ_{j=1}^k w_j E(γ(x_{i_j})) for w_j ≥ 0 and Σ_j w_j = 1. We then define C(y) to be w_j on its i_j coordinate for 1 ≤ j ≤ k, and 0 on all other coordinates. This map is clearly continuous within a simplex and its definitions on two simplices agree on their intersection. Therefore, C is continuous. For w ∈ R^R with w_i ≥ 0 and Σ_i w_i = 1, we call w a set of weights for the x_i. Given such a set of weights, define u_w : [0, 1] → [0, N + 1] to be the increasing, upper semi-continuous function built from the partial sums of the weights, let p_1(w), . . . , p_N(w) be the points at which u_w crosses the successive integer values, and define F(y) = Σ_i E(γ(p_i(C(y)))). This function clearly satisfies the first two of our properties; we need now to verify the third. Suppose that we have a face of P defined by the equation ⟨y, f⟩ = 1 for some f ∈ V. We then have that sup_i(f(γ(x_i))) = 1. Therefore Var_γ(f) < K_γ(1 + ε). Let this face of P be spanned by E(γ(x_{i_1})), . . . , E(γ(x_{i_M})) for i_1 < i_2 < . . . < i_M. It is then the case that f(γ(x_{i_j})) = 1 for each j. Letting w = C(y), it is also the case that w_k is 0 unless k is one of the i_j. Note that lim_{x → x_{i_1}^-} u_w(x) < 1 and u_w(x_{i_M}) > N. This implies that none of the p_i lie outside [x_{i_1}, x_{i_M}], and that there is at most one p_i in (x_{i_n}, x_{i_{n+1}}) for each n.
For a point x in this interval we have that |f(γ(x)) − 1| is at most half of the total variation of f ∘ γ on [x_{i_n}, x_{i_{n+1}}]. All other p_i(w) must be one of the x_{i_j}. Therefore, summing over all p_i(w), we get that Σ_i |f(γ(p_i(w))) − 1| is at most half of the variation of f ∘ γ on [x_{i_1}, x_{i_M}]. This in turn is at most K_γ(1 + ε)/2 < N. Therefore f(F(y)) = Σ_i f(γ(p_i(w))) > 0. This proves that F has the last of the required properties and completes our proof.

Lemma 21. Let γ parameterize the interval I. Then for the design problem (I, µ_{−1/2,−1/2}, P_n), K_γ = O(n).

Proof. We will use the alternative definition of K_γ, namely the sup over f ∈ W + 1, non-negative on γ([0, 1]) and with ∫ f dµ_{−1/2,−1/2} = 1, of Var_γ(f). If f ≥ 0 on γ([0, 1]) = I, then f must be a sum of squares of polynomials of degree at most n/2 + 1 plus (1 − x²) times a sum of such polynomials. Since ∫_I f dµ is linear and Var_γ(f) sublinear, it suffices to check for f = g² or f = (1 − x²)g². Note that µ_{−1/2,−1/2} is the projected measure from the circle to the interval. Therefore, we can pull f back to a function on the circle either of the form g(cos θ)² or (sin θ g(cos θ))². In either case, ∫_{S¹} f(θ) dθ = 1 and f(θ) = h(θ)² for some polynomial h of degree O(n). It suffices to bound the variation of f on the circle. In particular, it suffices to show that ∫_{S¹} |f′(θ)| dθ = O(n). We note that ∫_{S¹} |f′(θ)| dθ = ∫_{S¹} |2h(θ)h′(θ)| dθ ≤ 2|h|_2|h′|_2. We also note that |h|_2 = O(1). Hence it suffices to prove that for h a polynomial of degree m, |h′|_2 = O(m)|h|_2. This follows immediately after noting that the orthogonal polynomials e^{ikθ} diagonalize the derivative operator.

We now relate this to functions for arbitrary α and β. We have that Σ_i w_i f(r_i) = 1. Therefore, since f(r_i) ≥ 0, we have that w_i f(r_i) ≤ 1 for each i. For c a sufficiently small positive constant, let I_R be the indicator function of the set R.

Spherical Designs

In this Section, we will focus on the problem of designs on a sphere. In particular, for integers d, n > 0 let D^d_n denote the design problem given by the d-sphere with its standard, normalized measure, and W the space of polynomials of total degree at most n. We begin by proving lower bounds:

Theorem 24. Any weighted design for D^d_n is of size Ω_d(n^d).

Proof. Let U be the space of polynomials of degree at most n/2 on S^d. Note that dim(U) = Ω_d(n^d). We claim that K ≥ M′ − 1, where M′ := dim(U). Pick x ∈ S^d. Let φ_1, . . . , φ_{M′} be an orthonormal basis of U. Let f(y) = (Σ_i φ_i(x)φ_i(y))²/M′, and let g(x) = Σ_i φ_i(x)². The function g is clearly invariant under the action of SO(d+1), and is therefore constant. Furthermore, ∫_{S^d} g dµ = M′. Therefore g(x) = M′. Therefore f is non-negative, f(x) = M′, and ∫_{S^d} f dµ = 1. Hence f − 1 lies in V with sup(f − 1) ≥ M′ − 1 and |inf(f − 1)| ≤ 1, so K ≥ M′ − 1. Therefore, since the action of SO(d+1) makes D^d_n a homogeneous design problem, Corollary 13 implies that any weighted design for D^d_n must have size at least M′ = Ω_d(n^d).

We also prove a nearly matching upper bound. Namely:

Theorem 25. There exists a design for D^d_n of size O_d(n^d log(n)^{d−1}).

The proof of Theorem 25 again uses Proposition 20, but the choice of γ is far less obvious than it is when applied in Theorem 17. In fact, we will want to introduce a slight generalization of the terminology first.

Definition. For (X, µ, W) a design problem, G a graph, and γ : G → X a continuous function restricting to γ_e on each edge e, define Var_γ(f) = Σ_e Var_{γ_e}(f). Note that for an embedded graph G, we will often simply refer to Var_G(f).

Definition. For (X, µ, W) a design problem, G a graph, and γ : G → X a function, define K_γ as before, using this notion of Var_γ. Note that we have alternative definitions of K_γ in the same way as we did before. We will often ignore the function γ and simply write K_G for G an embedded graph in X.

We note the following version of Proposition 20:

Proposition 26. Let (X, µ, W) be a topological design problem. Let G be a connected graph and γ : G → X a continuous function. If K_G is finite, and N > K_G is an integer, then (X, µ, W) admits a design of size N.

Proof.
Note that if we double all of the edges of G, the resulting multigraph admits an Eulerian circuit. This gives us a continuous map γ′ : [0, 1] → X that covers each edge of G exactly twice. Therefore for every function f, sup_G(f) = sup_{γ′([0,1])}(f) and Var_{γ′}(f) = 2Var_G(f). Hence K_{γ′} = 2K_G, and the result follows from Proposition 20.

We will now need to prove the following:

Proposition 27. For d, n ≥ 1, there exists a connected graph G for the design problem D^d_n so that K_G = O_d(n^d log(n)^{d−1}). Furthermore this can be done in such a way that the total length of all the edges of G is n^{O_d(1)}.

The basic idea of the proof of Proposition 27 is as follows. First, by projecting S^d down onto its first d − 1 coordinates, we can think of it as a circle bundle over B^{d−1}. We construct our graphs by induction on d. We pick a number of radii r_i, and place our graphs for various strength designs on the spheres of radius r_i in B^{d−1}. We also add the loops over the points on these graphs given by the corresponding designs. The first step is to show that the average value of f over our loops in G is roughly the average value over the sphere (see Lemma 33). Naively, this should hold since the average value of f on the sphere of radius r_i in B^{d−1} should equal the average value of f over the appropriate loops (because the loops are arranged in a design). Our radii will themselves be arranged in an appropriate design, so that the value of f on the sphere will equal the average of the values at these radii. Unfortunately, our component designs will be of insufficient strength for this to hold. This is fixed by showing that the component of f corresponding to high degree spherical harmonics at small radius r_i in B^{d−1} is small (this is shown in Lemma 29). The bound on K_G comes from noting that the variation of f along G is given by the sum of variations on the subgraphs. These in turn are bounded by the size of f on these subgraphs, and the appropriate sum of variations is bounded by the size of f on the whole sphere. Before we proceed, we will need the following technical results:

Lemma 28. Let f be a polynomial of degree at most n on S^d. Then sup |∇f| ≤ n√M |f|_2.

Proof. Let φ_i (1 ≤ i ≤ M) be an orthonormal basis of the polynomials of degree at most n on S^d, so that each of the φ_i is a spherical harmonic. Note that Σ_i |∇φ_i(u)|² is invariant under rotations and hence constant, with average Σ_i ∫_{S^d} φ_i △φ_i dµ, where △φ_i(u) = k²φ_i(u) for some k ≤ n. Therefore, this is at most n²M, and the claim follows by Cauchy-Schwarz.

Lemma 29. For n ≥ d, k ≥ 1 integers, and f a polynomial of degree at most n on the d-disk, D, we have sup_D |f| ≤ √M (∫_D f² dµ)^{1/2}, where µ is the measure defined below and M is the dimension of the space of polynomials of degree at most n on S^{d+k−1}.

Proof. Let µ be the measure proportional to (1 − |r̄|²)^{(k−2)/2} dr̄ on D. Note that µ is the projected measure from the (d+k−1)-sphere onto the d-disk. Rescaling f so that ∫_D f² dµ = 1 and pulling f back onto the (d+k−1)-sphere, we get that ∫_{S^{d+k−1}} f²(x) dx = 1, where dx is the normalized measure on S^{d+k−1}. We need to show that |f(x)| ≤ √M for each x. Let φ_1, . . . , φ_M be an orthonormal basis of the space of polynomials of degree at most n on S^{d+k−1}. We can write f(y) = Σ_i a_i φ_i(y). It must be the case that Σ_i a_i² = 1 and f(x) = Σ_i a_i φ_i(x). By Cauchy-Schwarz this is at most (Σ_i φ_i(x)²)^{1/2}. The quantity Σ_i φ_i(x)² is clearly invariant under SO(d+k) (since it is independent of the choice of basis φ_i). Therefore this function is constant. Furthermore its average value on S^{d+k−1} is clearly M. Therefore f(x) ≤ √M. This completes our proof.

Lemma 30. Let f be a real-valued polynomial of degree at most n on S¹. Suppose that f ≥ 0 on S¹. We can write f in terms of a Fourier series as f(θ) = Σ_{k=−n}^{n} a_k e^{ikθ}. Then a_0 is real and a_0 ≥ |a_k| for all k.

Proof.
The fact that f can be written in such a way comes from noting that e^{±ikθ} are the spherical harmonics of degree k on S¹. Since f is real valued, it follows that a_{−k} = ā_k for all k. We have that |a_k| = (1/2π)|∫_{S¹} f(θ)e^{−ikθ} dθ| ≤ (1/2π)∫_{S¹} f(θ) dθ = a_0.

Lemma 31. If f is a polynomial of degree at most n on S¹, and if f is non-negative on S¹, then Var_{S¹}(f) = O(n) ∫_{S¹} f.

Proof. Consider f = f(θ) as above. For an angle φ, let g_φ(θ) = f(φ + θ) + f(φ − θ). Clearly g_φ is non-negative, and ∫_{S¹} g_φ = 2∫_{S¹} f. Furthermore, using the fact that ∫_0^{2π} f′(ρ) dρ = 0 and that the absolute value function is convex, we have that the average over φ of Var_{S¹}(g_φ) is at least Var_{S¹}(f). Hence for some φ, Var_{S¹}(g_φ) ≥ Var_{S¹}(f). Therefore, we may consider g_φ instead of f. Noting that g_φ(θ) = g_φ(−θ), we find that g_φ can be written as p(cos θ) for some polynomial p of degree at most n. Our result then follows from Lemma 21.

Lemma 32. Let d ≥ 0 be an integer. Consider the design problem given by X = [0, 1], µ = (d + 1)r^d dr, and W the set of polynomials of degree at most n in r². Then there exists a weighted design (w_i, r_i), 1 ≤ i ≤ h, for this problem with h = O(n), w_i = Ω(n^{−1}√(1 − r_i²)), min(r_i) = Ω(n^{−1}), and max(r_i) = 1 − Ω(n^{−2}).

Proof. For any such polynomial p(r²) we have that ∫_X p(r²) dµ = ((d+1)/2) ∫_0^1 p(s) s^{(d−1)/2} ds, so the substitution s = r² reduces this to a design problem on the interval of the type already considered.

We are now ready to prove Proposition 27. We prove by induction on d ≥ 1 that for any n, there exists a graph G^d_n on S^d with K_{G^d_n} = O_d(n^d log(n)^{d−1}) and so that the total length of the edges of G^d_n is n^{O_d(1)}. For d = 1, we let G¹_n = S¹. This suffices by Lemma 31. From this point on, all of our asymptotic notation will potentially depend on d. In order to construct these graphs for larger d, we will want to pick a convenient parametrization of the d-sphere. Consider S^d ⊂ R^{d+1} as {x : |x| = 1}. We let r = (Σ_{i=1}^{d−1} x_i²)^{1/2}. We let u ∈ S^{d−2} be the coordinate so that (x_1, x_2, . . . , x_{d−1}) = ru. We let θ be the coordinate so that (x_d, x_{d+1}) = √(1 − r²)(cos θ, sin θ). Note that u is defined except where r = 0 and θ is defined except where r = 1. Note that in these coordinates, the normalized measure on S^d is given by ((d−1)/2π) r^{d−2} dr du dθ. We also note that if the φ^m_i are an orthonormal basis for the degree m spherical harmonics on S^{d−2}, then an orthonormal basis for the polynomials of degree at most n on S^d is given by suitable normalizations of the functions e^{ikθ}(1 − r²)^{|k|/2} r^m φ^m_i(u) P^{k,m,d}_ℓ(r²), where k, m, ℓ are integers with m, ℓ ≥ 0 and |k| + m + 2ℓ ≤ n, and where the P^{k,m,d}_ℓ(r²) are orthogonal polynomials for the measure (d−1) r^{m+d−2}(1 − r²)^{|k|/2} dr on [0, 1] and functions in r², or, equivalently, the P^{k,m,d}_ℓ(s) are the orthogonal polynomials for the measure ((d−1)/2) s^{(m+d−3)/2}(1 − s)^{|k|/2} ds on [0, 1].

We construct G^d_n as follows. Our construction will depend on the graph given by our inductive hypothesis for d − 2. Since our Theorem does not hold for d = 0, this means that our construction will need to be slightly altered in the case d = 2. On the other hand, there is a disconnected graph G on S⁰ with K_G = O(1) that has total length n^{O(1)} and supports a design of size 2 (this graph of course being the union of two loops, one at each point of S⁰). This will turn out to be a sufficient inductive hypothesis to prove our d = 2 case with only minor modification. We now proceed to explain the construction of G^d_n. Let (w_i, r_i) (1 ≤ i ≤ h) be the design for the measure (d−1) r^{d−2} dr on [0, 1] for polynomials of degree at most 2n in r², as described in Lemma 32. We first consider the construction for d > 2. Let N = An^{d−2}(log(n))^{d−2} for A a sufficiently large constant.
For each r_i, let N_i = [r_i^{d−2} N] and k_i = B r_i n log(n)/log(n r_i log(n)), where B is a constant chosen so that both B and A/B are sufficiently large. We inductively construct G_i = G^{d−2}_{k_i}. By the inductive hypothesis for the design problem D^{d−2}_{k_i}, K_{G_i} < N_i if A was sufficiently large compared to B. Therefore, by Proposition 26 there is a design u_{i,j}, 1 ≤ j ≤ N_i, for the design problem D^{d−2}_{k_i} so that each of the u_{i,j} lies on G_i. Let r_1 be the smallest of the r_i. By rotating G_i, u_{i,j} if necessary, we can guarantee that r_i u_{i,1} = (r_1, √(r_i² − r_1²), 0, . . . , 0) for all i. We now define our graph G = G^d_n as follows in (r, u, θ) coordinates. First we define H to be the union of:

• The circles (r_i, u_{i,j}, θ) for θ ∈ [0, 2π], for 1 ≤ i ≤ h and 1 ≤ j ≤ N_i
• The graphs (r_i, u, 0) for u ∈ G_i, for 1 ≤ i ≤ h

We note that H is not connected. Its connected components correspond to the r_i, since each G_i connects all of the circles at the corresponding u_{i,j}. We let G = H ∪ H′, where H′ is the image of H under the reflection that swaps the coordinates x_2 and x_d. We note that H union the circle in H′ corresponding to u_{1,1} is connected, since this circle, parameterized as (r_1, √(1 − r_1²) sin θ, 0, 0, . . . , 0, √(1 − r_1²) cos θ), intersects each of the circles over the u_{i,1} in H. Similarly H′ union the circle over u_{1,1} in H is connected. Hence G is connected. It is also clear that the total length of all the edges of G is n^{O(1)}. We now only need to prove that K_G = O(n^d log(n)^{d−1}), and we note that it suffices to prove that K_H = O(n^d log(n)^{d−1}).

For d = 2, we need to make a couple of small modifications to the above construction. The graphs G⁰_n are of course trivial. In this case, it will be sufficient to let N = N_i = 2 and k_i = B r_i n log(n)/log(n r_i log(n)) for B a sufficiently large constant. We still have a design of size N_i on S⁰ (of unlimited strength) given by {−1, 1}. The graph H is now given by a union of latitude lines of our sphere supported on the latitudes ±r_i. H now has two connected components for each r_i (instead of the one we see in other cases). On the other hand, it is still the case that if H′ is the rotation of H by 90 degrees, then the most central of the circles in H′ meets each connected component of H (and vice versa), and hence G = H ∪ H′ is connected. The remainder of our argument will hold identically for the d = 2 and d > 2 cases.

Let v_i = w_i/N_i. We note that v_i = Ω(n^{−1}N^{−1}√(1 − r_i²)). We claim that the circles in H with weights given by v_i form an approximate design in the following sense.

Lemma 33. Let C be any real number. Then if B/C is sufficiently large and f ∈ P_{4n}, we have that

Σ_{i,j} (v_i/2π) ∫_0^{2π} f(r_i, u_{i,j}, θ) dθ = ∫_{S^d} f dµ + O(n^{−C}) |f|_2. (7)

Proof. We note that after increasing C by a constant, it suffices to check our Lemma for f in an orthonormal basis of P_{2n}. Hence we consider f of the form e^{ikθ}(1 − r²)^{|k|/2} r^m φ_m(u) P^{k,m,d}_ℓ(r²), and we need to show that Equation 7 holds for such f. First we note that if m = 0, φ_m(u) = 1. In this case the θ-integrals over the circles are computed exactly, and summing over i and j leaves Σ_i w_i times the radial part of f evaluated at r_i, which equals ∫_{S^d} f dµ exactly, where we use above the fact that (w_i, r_i) is a weighted design. Hence we are done for the case m = 0. For m > 0, the integral of f over S^d is 0. Furthermore, for k_i ≥ m, Σ_j φ_m(u_{i,j}) = 0 (since the u_{i,j} are a design). Therefore in this case, the left hand side of Equation 7 receives contributions only from those i with k_i < m. By results in the proof of Lemma 29, we have that φ_m(u_{i,j}) = n^{O_d(1)}. Furthermore v_i = O(1) and there are n^{O_d(1)} many pairs of i, j in the sum. Therefore, this is at most n^{O_d(1)} times the largest of the remaining radial factors. The fact that |f|_2 = 1 implies that these factors are at most n^{O_d(1)}(O(n r_i/k_i))^m. Since for B sufficiently large O(n r_i/k_i) would be less than 1/2, this is at most its value at m = k_i. Hence we need to know that

n^{O_d(1)}(O(n r_i/k_i))^{k_i} = O(n^{−C}). (8)

If n r_i ≤ log(n), then Equation 8 follows from the choice of k_i, where we use the fact that n r_i = Ω(1).
If on the other hand n r_i ≥ log(n), then k_i = Ω(B log(n)) and the left hand side of Equation 8 is at most n^{O_d(1)} 2^{−Ω(B log(n))} = O(n^{−C}). This completes our proof.

For f a polynomial on S^d, let

A(f) = Σ_{i,j} (v_i/2π) ∫_0^{2π} f(r_i, u_{i,j}, θ) dθ

denote its weighted average over the circles of H.

Lemma 34. For f ∈ P_{2n}, f ≥ 0 on H, |f|_2 = n^{O(1)} A(f).

Proof. Since f(r_i, u_{i,j}, θ) is a non-negative polynomial of degree at most 2n on the circle, its supremum over the circle is O(n) times its average over the circle. The bound follows, where the last step holds since v_i = Ω(n^{−1}N^{−1}√(1 − r_i²)) and 1 − r_i² = Ω(n^{−2}) for all i.

We now prove a more useful version of Lemma 33.

Lemma 35. If B is sufficiently large, and if f is a polynomial of degree at most 2n on S^d that is non-negative on H, then A(f) and ∫_{S^d} f dµ agree up to a factor of 2.

Proof. By Lemma 33 applied to f², we have that A(f²) = ∫_{S^d} f² dµ + O(n^{−C})|f²|_2. On the other hand, we have that sup_{S^d}(|f|) = n^{O(1)}|f|_2, so the error term is n^{O(1)−C}|f|_2². If the above holds for sufficiently large C (which by Lemma 33 happens if B is sufficiently large), this implies that A(f²) and ∫_{S^d} f² dµ agree up to small relative error. Therefore, for B sufficiently large, we have the claimed comparison.

Corollary 36. Assuming B is sufficiently large, if f is a polynomial of degree at most 2n on S^d and f is non-negative on H, then (1/2) ∫_{S^d} f dµ ≤ A(f) ≤ 2 ∫_{S^d} f dµ.

We will now try to bound K_H based on a variant of one of our existing criteria. In particular, we would like to show that if f is a degree n polynomial with ∫_{S^d} f dµ = 1 and f ≥ 0 on H, then Var_G(f) = O(n^d log(n)^{d−1}). Replacing f by (f + 1)/A(f + 1) and noting by Corollary 36 that A(f + 1) ≤ 4, we can assume instead that f ≥ 1/4 on H and that A(f) = 1. We first bound the variation of f on the circles over the u_{i,j}. Define f_{i,j}(θ) := f(r_i, u_{i,j}, θ). We will prove the following:

Proposition 37. Let B be sufficiently large. Let f be a degree n polynomial with f ≥ 1/4 on H and A(f) = 1. Then Σ_{i,j} Var_{S¹}(f_{i,j}) = O(n^d log(n)^{d−1}).

This would follow immediately if f_{i,j} were of degree at most n log(n)√(1 − r²). We will show that the contribution from higher degree harmonics is negligible. We define, for integers k, a_k(r, u) to be the e^{ikθ} component of f at (r, u, θ). We note that a_k(r, u) = (1 − r²)^{|k|/2} P_k(r̄), where r̄ = ru is a coordinate on the (d − 1)-disc and P_k(r̄) is some polynomial. We first show that |a_k(r, u)| is small for k > n log(n)√(1 − r²).

Lemma 38. Let C be a real number so that B/C is sufficiently large. Let f be a degree n polynomial with f ≥ 0 on H and A(f) = 1. Then for |k| > n log(n)√(1 − r_i²), |a_k(r_i, u)| = O(n^{−C}).

Proof. We have that |a_k|_2 ≤ |f|_2 = n^{O(1)} by Lemma 34. Therefore P_k has L² norm n^{O(1)}. Applying Lemma 29, we find that |a_k(r_i, u)| ≤ n^{O(1)}(1 − r_i²)^{|k|/2}. Since |k| = Ω(log(n)) (because √(1 − r_i²) = Ω(n^{−1})), this is O(n^{−C}).

Proof of Proposition 37. Let f^l_{i,j} be the component of f_{i,j} coming from Fourier coefficients of absolute value at most n log(n)√(1 − r_i²). By Lemmas 28 and 38, we have that for B sufficiently large, f_{i,j} − f^l_{i,j} is less than 1/8 everywhere and has variation O(1). But since f^l_{i,j} is non-negative and has bounded Fourier coefficients, we have by Lemma 31 that Var_{S¹}(f^l_{i,j}) = O(n log(n)√(1 − r_i²)) ∫_{S¹} f^l_{i,j}. This means that Σ_{i,j} Var_{S¹}(f_{i,j}) is controlled by the corresponding weighted averages. Again, for B sufficiently large, this is O(n^d log(n)^{d−1}). Now let F(θ) = (1/N_i) Σ_j f(r_i, u_{i,j}, θ). We have that F is a polynomial of degree at most n and that F ≥ 1/4. Let F^l be the component of F consisting of Fourier coefficients with |k| ≤ n log(n)√(1 − r_i²). By Lemmas 28 and 38, if B is sufficiently large, |F − F^l| < 1/8. It is clear that F^l ≥ 1/8. Note that by Lemma 31, Var_{S¹}(F^l) = O(n log(n)√(1 − r_i²)) ∫_{S¹} F^l.

We can finally prove Proposition 27.

Proof. We proceed by induction on d. For d = 1 the circle S¹ suffices as discussed. Assuming that we have the graph for d − 2, we construct G as described above. Clearly G is connected and has total length n^{O(1)}. We need to show that K_H = O(n^d log(n)^{d−1}). To do so it suffices to show that for any f ∈ P_n with f ≥ 1/4 on H and A(f) = 1, Var_H(f) = O(n^d log(n)^{d−1}).
We have that Var_H(f) is the sum of the variations over the circles and over the graphs G_i; the former is O(n^d log(n)^{d−1}) by Proposition 37, and the latter is bounded using the inductive hypothesis. This completes the proof. Theorem 25 now follows from Proposition 27 and Proposition 26.

Acknowledgements

This work was done while the author was an intern at Microsoft Research.
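As flagged after Lemma 18, the Gaussian-quadrature designs are easy to test numerically. The sketch below is an illustration only (it uses scipy's Jacobi quadrature for the weight (1 − x)^α(1 + x)^β and is not code from the paper): n nodes integrate every polynomial of degree at most 2n − 1 exactly against the normalized measure.

```python
import numpy as np
from scipy.special import roots_jacobi

# Numerical check of Lemma 18: the n Gauss-Jacobi nodes/weights integrate all
# polynomials of degree <= 2n-1 exactly against (1-x)^a (1+x)^b on [-1, 1].
# Illustrated here for the Chebyshev-type case a = b = -1/2.
a, b, n = -0.5, -0.5, 6
nodes, weights = roots_jacobi(n, a, b)
weights = weights / weights.sum()          # normalize the measure to mass 1

rng = np.random.default_rng(0)
p = np.polynomial.Polynomial(rng.normal(size=2 * n))  # random degree-(2n-1) poly

# "Exact" normalized integral via a much finer quadrature rule.
big_nodes, big_w = roots_jacobi(200, a, b)
exact = (big_w @ p(big_nodes)) / big_w.sum()

assert abs(weights @ p(nodes) - exact) < 1e-10
print("Gauss-Jacobi nodes form a weighted design for P_{2n-1}")
```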
2012-06-25T23:49:02.000Z
2011-12-21T00:00:00.000
{ "year": 2011, "sha1": "95a7855bdc3234a77cbbf2744dcf673a04b24ab3", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1112.4900", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "86493ee76cdafce604a54e176c7a10e33741b09e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
119279046
pes2o/s2orc
v3-fos-license
A Large Sieve Inequality for Euler Products

An inequality of Large Sieve type, efficacious in the analytic treatment of Euler products, is obtained. In this paper we establish an inequality of Large Sieve type that, besides its own interest, lends itself to the study of Dirichlet series with attached Euler products. Obstacles to sharpenings are discussed.

Theorem. For each positive real B there is a real c such that, with s = σ + it, σ = Re(s), L = Σ_{D<p≤x} p^{−1}, the inequality holds uniformly for a_p in C and distinct Dirichlet characters χ_j (mod D). The constant c may be made explicit.

The following result is vital.

Lemma. Given B > 0, Re Σ_{w<p≤y} χ(p)p^{−s} is bounded above in terms of B alone, uniformly for σ ≥ 1, |t| ≤ D^B, y ≥ w ≥ D and all non-principal characters χ (mod D), D ≥ 1.

A proof of this lemma employing analytic properties of Dirichlet L-series is given in Elliott [10], Lemmas 1, 4; an elementary proof, via the Selberg sieve and which yields a better dependence upon B, is given in Elliott [11], Lemma 4; the case σ = 1 follows by continuity.

Proof of the Theorem. Since the sum Σ_{D<p≤x} |a_p|p^{−σ} approaches zero as σ → ∞, the innermost maximum may be taken over a bounded rectangle. In view of the uniformity in y, Abel summation allows us to restrict to the case σ = 1. Let the distinct characters be χ_j, j = 1, . . . , k, and consider the inequality in which the b_j are for the moment real and non-negative. An appeal to the Lemma, followed by an application of the Cauchy-Schwarz inequality to the expanded sum, shows that we may take Δ = L + (k − 1)c_1 for a certain c_1 depending at most upon B. If now b_j is complex, we represent it as a sum and correspondingly partition the inner sum over j. Since the coefficients in each subsum all have the same argument, a second application of the Cauchy-Schwarz inequality allows us to conclude that with Δ = 4(L + c_1(k − 1)) the above inequality holds for all complex b_j. Dualising, and replacing a_p by a_p p^{−1/2}, completes the proof.

Remarks. Presumably the inequality in the Theorem is valid with the coefficient 4 replaced by 1. With the present argument that appears to require the sum in the Lemma, with σ = 1, w = D, to be uniformly bounded not only above but also below. Such a bound seems currently out of reach. It would, in particular, guarantee a lower bound L(1, χ) ≥ c_2(log D)^{−1} for quadratic characters (mod D) and eliminate Siegel zeros. Without an adjustment to the term (k − 1)c, the restriction D < p in the sums over the primes cannot be altogether removed. We may identify Dirichlet characters χ of order m to prime moduli q with mth-power residue symbols and view them in terms of characters on ideal class groups, as in Elliott [6], where appropriate references to works of Eisenstein, Landau, Furtwängler, Artin and Hasse may be found. Employing the uniform distribution of prime ideals in ideal classes, in particular Fogels' generalisation of Linnik's theorem on the size of the least prime in a rational residue class, cf. Elliott [1], [2], [4], Fogels [12], one may arrange an infinitude of moduli q for which χ(p), with (p, m) = 1 and p up to a certain constant multiple of log q, may be given individually any value available to the character. As an example, if min_{1≤r≤m} cos(2πr m^{−1}) ≤ β ≤ 1, then by choosing the successive values of χ(p) to be complex conjugates we may arrange that Σ_{p≤q} χ(p)p^{−1} = β log log log q + O(1). Moreover, with β = ±1, separately, the estimate may be required to hold for every character of order m or even order m, respectively.
Via the construction of finite probability spaces, these methods allow the successful study of the values of series $L(s, \chi)$ formed with Dirichlet characters of order $m$ to prime moduli provided $\sigma > 1 - c_3 > \tfrac{1}{2}$, reaching part-way into the critical strip, cf. Elliott [8]. More generally, taking imprimitive characters into account, for almost all moduli $D$, in a strong quantitative sense, the sums in the Lemma are indeed bounded below and the inequality of the Theorem is valid with 4 replaced by 1. Variant inequalities also allow the constant 4 to be reduced. For example, replacing $\chi_j(p) p^{-s}$ by $\operatorname{Re}(\chi_j(p) p^{-s})$ we may replace $4(L + (k-1)c)$ by $2(L + kc)$. Note that summands corresponding to a complex character $\chi_j$ may then appear twice in the bounded sum.
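The dualising step invoked in the proof is an instance of the standard duality principle for bilinear forms; as a hedged sketch of that general statement (the paper's own displays are not reproduced here), for any finite array of coefficients $x_{jp}$ and constant $\Delta$, the two inequalities
$$\sum_{p} \Big| \sum_{j=1}^{k} b_j x_{jp} \Big|^2 \le \Delta \sum_{j=1}^{k} |b_j|^2 \ \ (\forall\, b_j \in \mathbb{C}) \qquad \Longleftrightarrow \qquad \sum_{j=1}^{k} \Big| \sum_{p} a_p \overline{x_{jp}} \Big|^2 \le \Delta \sum_{p} |a_p|^2 \ \ (\forall\, a_p \in \mathbb{C})$$
hold with the same $\Delta$, each implying the other. Applied with $x_{jp} = \chi_j(p) p^{-s}$, the bound obtained for the sums over $p$ transfers to the sums over the characters $\chi_j$ appearing in the Theorem.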
Zoonotic risks of pathogens from dairy cattle and their milk-borne transmission

Dairy products are major sources of high-quality protein and bioavailable nutrients, and dairy production contributes to local, regional and national-level economies. Consumption of raw milk and raw milk products does, however, carry a zoonotic risk, as does direct contact with cattle by farm husbandry staff and other employees. This review will mainly focus on the latter, and deal with it from the standpoint of a well-developed dairy industry, using the example of the Netherlands. With regard to dairy cattle, the main bacterial pathogens are Salmonella spp., Listeria monocytogenes and Leptospira hardjo, as well as Brucella abortus and Chlamydia abortus. The main viral pathogens associated with dairy are Rift Valley fever virus, rabies virus, cowpox virus and vaccinia virus. The main parasitological infections are Echinococcus granulosus, Cryptosporidium parvum and Giardia duodenalis; the last-mentioned, however, have swimming pools as their main sources of human infection. Finally, ectoparasites such as lice and mites, and Trichophyton verrucosum, may affect employees. Some pathogens may cause health problems due to contamination. Bacterial pathogens of importance that may contaminate milk are Campylobacter jejuni, Escherichia coli, Mycobacterium avium subsp. paratuberculosis, Leptospira hardjo and Salmonella typhimurium. Excretion of zoonotic viruses in milk is negligible in the Netherlands, and the endoparasite Toxocara vitulorum is mainly found in suckling and fattening calves, so the risk in dairy cattle is limited. Excretion of transmissible spongiform encephalopathies (TSEs) or mycoses in milk is not expected, and these are, therefore, not of importance here. Being aware of the risks and working according to hygiene standards can substantially limit zoonotic risks for employees. Additionally, diseased employees are advised to limit their contact with cattle and to indicate that they work with cattle when consulting a physician. To prevent zoonotic risks through excretion of pathogens in milk, standard hygiene measures are necessary. Further, using only pasteurised milk for consumption and/or processing can considerably limit the risks. If these measures are not possible, well-constructed monitoring can be followed. Monitoring programmes already exist for pathogens such as Salmonella spp., Leptospira hardjo and Mycobacterium avium subsp. paratuberculosis. For others, like Campylobacter jejuni and E. coli, programmes are not, as far as we know, yet available.
Introduction

Dairy production and consumption have mainly positive effects on society and individual consumers, but can also have negative effects on human health (Hawkes and Ruel, 2006). Dairy products are major sources of high-quality protein and bioavailable nutrients (eg calcium; Todd et al., 2006). Dairy production can also contribute to local, regional and national-level economies and provide opportunities for employment and income generation (Hawkes, 2006), which are critical determinants of health (Marmot et al., 2008). However, a number of potential health risks associated with dairy production and consumption have also been identified, such as diet-related chronic diseases like milk allergy, environmental change, foodborne and occupational hazards, and zoonotic diseases (Horrigan et al., 2002; Hawkes, 2006; Kimman et al., 2013). Globally, there is strong demand for milk and dairy products (IDF, 2016; USDA, 2021). This is largely due to global population growth (IDF, 2016), although increases in per capita dairy intake have also driven global demand (OECD and FAO, 2016). As demand for food increases, agricultural sectors have sought to increase production to meet that demand, and the dairy sector is no exception. In 2020, more than 906 million tons of milk were produced by the global dairy sector (FAO, 2021), and global production is projected to increase by 23% in 2025, compared to the years 2013-2015 (OECD and FAO, 2016).

Direct or indirect contact with contaminants such as bacteria, viruses and other pathogens is a potential risk when working with animals (WOAH, 2022). Exposure to contaminants can occur by respiration or by contact with excreta such as urine, faeces, milk and abortive fluids. Individuals may also have direct contact with the animal's coat and skin. Contact with pathogenic agents can cause an infection or even disease, and the same risk of transmission of pathogens from animals to humans applies to the consumption of raw milk products (Maunsell and Donovan, 2008). Fortunately, not all infectious agents and infections lead to health problems for cows and humans, though some pathogens unmistakably carry a markedly increased zoonotic risk. The objective of this paper is to provide an overview of the micro-organisms that may affect dairy cows under Dutch circumstances, the risk that these micro-organisms present to herd mates and to the employees working with dairy cattle, and their effect on the safety of the milk. Based on this overview, the pathogens with the highest zoonotic risk are identified and listed in the database. A distinction is made between pathogens that pose a risk to employees who work with dairy cattle and those that threaten the safety of milk and milk products. Reviewing the health impacts associated with dairy production and consumption will enhance understanding of the potential consequences associated with intensification of the dairy sector. To the authors' knowledge, no other comprehensive reviews of the potential health impacts of bacterial, viral, parasitological and mycotic infections associated with dairy production and consumption have been published.
With these objectives in mind, a broad review was undertaken in an effort to provide a comprehensive overview of the linkages between the dairy sector and public health. Specifically, the review aimed to identify the potential public health risks associated with dairy production. The content of this review can be used to support improved decision making for the future development of the dairy sector from a public health perspective. Such decisions include:

• Prioritisation of potential health hazards associated with the dairy sector that require specific risk communication and management actions;
• Resource allocation for the management of specific hazards associated with dairy production and consumption; and
• Identification of knowledge gaps that require further research to improve understanding and management of the public health impacts associated with dairy production and consumption.

There are several methods that can be used to support these decision-making processes by providing systematic assessments of the public health impacts of dairy production and consumption at a wide range of spatial and temporal scales and with varying levels of detail. This inventory covered the clinical symptoms in cattle, the impact on animal welfare, the route and estimated risk of transmission to herd mates and to humans, the excretion in milk, the production of endotoxins, possible biosecurity measures, vaccinations, diagnostic tools and the prevalence in our country (the Netherlands), as an example of a developed dairy sector.

Materials and methods

The infectious contaminants that may be found on Dutch dairy farms now and in the near future were identified and listed in a database. Next, relevant background information for these micro-organisms was added, including their prevalence, the risk of excretion in milk, available diagnostic tests and preventive measures that can be taken to minimise the risk of infection. The relevant information was obtained from the literature, from specialists, and from the diagnostic results of the Royal GD laboratory over the last decade.

The pathogens with the highest zoonotic risk were identified based on the following criteria: causes of diseases of infectious origin in dairy cattle, characteristics of the agents, zoonotic aspects of the agents, route and risk of transmission to herd mates and humans, prevalence or the risk that the disease will be introduced in the Netherlands, laboratory diagnosis, and excretion in or contamination of milk.
Scope of the literature review to support the risk analysis

First, lists were composed of micro-organisms including bacteria, viruses, parasites, TSEs, mycoses and emerging diseases that may be found on Dutch dairy farms. The following information was included in the list for each micro-organism:

• Species
• General information including size, RNA or DNA, presence of an envelope (viruses), Gram status and production of endotoxins (bacteria), and route of transmission (all pathogens).

This information was obtained from the literature, experts and laboratory staff of Royal GD Animal Health. The Dutch information was combined with papers about zoonotic infectious diseases from other countries with modern dairy production systems. Attention was given to TSEs and viral, bacterial, parasitological and mycotic infections. Specific attention was paid to list A diseases (ie diseases regulated by the EU Animal Health Law: https://eur-lex.europa.eu/EN/legal-content/summary/the-eu-animal-health-law). Literature regarding production systems other than the dairy system (suckler cows, for example) was not included in this study. A comprehensive literature search was conducted in July-August 2021 to identify all relevant publications addressing infectious diseases in dairy cows, excretion of agents in milk, faeces and urine, as well as zoonotic risks. Literature search terms included specific phrases such as 'salmonella', 'trichophyton' and 'dairy cows' in the title, abstract, or as a keyword. The search terms were entered into the following three search databases:

• Web of Science (http://apps.webofknowledge.com).

To complete this systematic review in a reasonable period of time, we included literature published in the last five years as far as possible, thus publications between January 2015 and August 2021. An exception was made in the case of high-quality reviews published before 2015 or where there were no publications in the period mentioned. The database search of scientific articles resulted in papers published predominantly in Western countries. All of the information was presented in an Excel file database that distinguished between bacterial, mycotic, parasitological and viral infections as well as TSEs.

Risk analysis

The pathogens with the highest zoonotic risk were identified based on the following criteria:

• Causing diseases of infectious origin in dairy cattle
• Characteristics of agents
• Zoonotic aspects of agents
• Route and risk of transmission to herd mates and humans
• Prevalence in the Netherlands or the risk of being introduced

The prioritisation of the pathogens in this report is based on knowledge of, and discussion with, the scientific staff of the Bovine Health Department and the Laboratory of Royal GD, which manage diseases in cattle with zoonotic consequences on a daily basis. Among the 223 publications identified in the literature, 119 were not considered useful because better or more recent examples or reviews of a given pathogen were consistently identified. In total, 104 papers provided usable information, but the pathogens discussed were either not all present or not emerging in the Netherlands. Ultimately, about 60 papers were selected to support the conclusions presented in this paper.
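The inventory and prioritisation just described amount to filtering a table of pathogen records against the stated criteria. A minimal illustrative sketch of that structure in Python follows; the field names and example entries are assumptions made for illustration, not the Royal GD database schema:

from dataclasses import dataclass

@dataclass
class PathogenRecord:
    # Fields mirror the inventory described in the text (illustrative names).
    name: str
    group: str                 # 'virus', 'bacterium', 'parasite', 'mycosis', 'TSE'
    transmission_routes: list  # eg ['milk', 'urine', 'abortive material']
    excreted_in_milk: bool
    present_in_nl: bool        # prevalence in, or risk of introduction to, NL
    zoonotic: bool

# Illustrative entries only; the classifications follow the review's text.
records = [
    PathogenRecord('Salmonella typhimurium', 'bacterium',
                   ['milk', 'manure'], True, True, True),
    PathogenRecord('Bovine papillomavirus', 'virus',
                   ['direct contact'], False, True, False),
]

# Prioritisation: keep zoonotic agents present in, or at risk of entering, NL.
high_priority = [r for r in records if r.zoonotic and r.present_in_nl]
for r in high_priority:
    print(r.name, '->', ', '.join(r.transmission_routes))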
Results

An alphabetic overview of the pathogens of importance in the Netherlands, or from regions important for the Netherlands, was compiled. Viral, bacterial and parasitological infections are presented in Tables 1-3, respectively; mycoses are presented in Table 4. Transmission of pathogens from cattle to humans is especially possible through direct or indirect contact with the skin or excreta, and through faecally contaminated milk produced by clinically healthy animals. Transmission by excretion of pathogens in the milk is considered to be very limited (with the exception of Salmonellae), especially if milk from sick cows is treated with care.

Viral infections

There are said to be a total of 42 viruses, including Toro or Breda virus, that cause serious infections in cattle and are of potential zoonotic risk in the Netherlands (Hoet and Saif, 2004). Many viruses are species-specific, in which case the risk of transmission to humans is considered to be minimal. Other viruses (such as Enterovirus) have a low zoonotic potential but currently (January 2022) have only a limited presence in the Netherlands. They are also found in other European regions and in the US (Gomez and Weese, 2017) and may become a concern in our region in the near future.

Rabies and Rift Valley fever (RVF) were identified as the viral infections with the highest risk to employees working with cattle. Rabies can occur in all warm-blooded animals and is principally transmitted via direct contact with the saliva of an infected animal. Infection with rabies can be fatal without rapid intervention, which is the main reason for it being classified as a high-risk pathogen. Rabies is found in wildlife in Eastern Europe, in Africa, Asia, Indonesia, Bolivia, Mexico and Cuba (WHO, 2019). RVF, genus Phlebovirus, order Bunyavirales, is most commonly seen in domestic animals in sub-Saharan Africa and is considered a serious risk to animals by the World Organisation for Animal Health (WOAH, 2022), with high economic impact. The virus can be transmitted to humans by contact with the body fluids of infected animals or through bites from infected mosquitoes (Culicoides). Most infected humans do not show signs of clinical illness or have only mild symptoms. However, a small percentage develop severe symptoms such as eye disease, haemorrhage and encephalitis (Wright and Kortekaas, 2019). The risk of future RVF introduction in Europe is relatively high given intercontinental traffic and storms.

Cattle warts, caused by bovine papillomavirus, are highly prevalent in Dutch cattle but appear to be species-specific, and transmission to humans is unproven (Lawson et al., 2018). In contrast, cowpox (mainly observed in cats) and the related vaccinia virus may infect humans (Lapa et al., 2019). These viruses usually cause skin lesions, although the ocular form may lead to serious complications. Neither virus is present in cattle in the Netherlands, but both may become a threat in the near future through worldwide travel and trade. At present, viruses with high zoonotic risk that are excreted in the milk of dairy cows have not been identified in the Netherlands. For an overview of potential zoonotic viruses, the laboratory tests to detect them, their clinical symptoms in cattle, their presence in milk, control measures and their total occurrence and importance in our region, see Table 1.
Bacterial infections

There are a total of 37 bacteria species, and their various subspecies, causing infections in cattle, of which roughly 17 species present a potential zoonotic risk. Some species, such as Salmonella typhimurium, Bacillus cereus and Brucella abortus, have a high potential pathogenic character. Bacillus cereus can cause clinical mastitis in cows. In cases of clinical mastitis, milk delivery for consumption is forbidden, so the zoonotic risk of direct transmission by milk is considered low, provided mastitis milk is removed and hygiene measures are followed by employees. Brucella abortus can cause substantial health problems in humans, but the Netherlands has been declared by the European Union to be officially free of bovine brucellosis for over 20 years. In the Netherlands, salmonellosis, campylobacteriosis and possibly paratuberculosis were identified as bacterial infections with serious zoonotic risk. Other bacterial pathogens with non-negligible zoonotic risk are Leptospira hardjo, Escherichia coli and Listeria monocytogenes. Most of these bacterial zoonotic infections are a consequence of direct excretion in or contamination of the milk, or contact with manure (eg Salmonella spp., Mycobacterium avium subsp. paratuberculosis, E. coli [STEC O157], Campylobacter spp.; Christidis et al., 2016; Whittington et al., 2019; Ameer et al., 2021; Stevens and Kingsley, 2021). Other vectors are excreta associated with abortion (eg Brucella abortus, Listeria monocytogenes and Chlamydia abortus; Walker et al., 2015; Chlebicz and Śliżewska, 2018; Whatmore and Foster, 2021), with urine (eg Leptospira hardjo; Ellis, 2015) or with cadavers (eg Clostridium botulinum; Holzhauer et al., 2009).

As said before, mastitis pathogens themselves are normally not a problem in food-borne disease because milk from mastitic cows is not used for human consumption. In the milk of dairy cows with mastitis, endotoxins (lipopolysaccharides) may be present at the moment of bacterial death (ie after treatment with antimicrobials that kill mastitis pathogens). Experts at our company estimated that endotoxins remain in milk for roughly seven days after removal of the bacterial infection (vd Merwe, Royal GD, personal communication). These endotoxins can cause fever and local inflammatory reactions in the gastro-intestinal tract of humans if the milk of cows cured of mastitis is consumed (Wang and Quinn, 2010). However, this risk is limited because cows that contract clinical mastitis will be treated with antibiotics, and the milk of treated cows is not allowed for consumption during the withdrawal period. Special attention must be paid to mastitis caused by potentially methicillin-resistant S. aureus (MRSA; Vanderhaeghe et al., 2010). However, these MRSA are mostly linked to the intensive beef industry (van Loo et al., 2007). For an overview of potential zoonotic bacteria, the laboratory tests to detect them, their clinical symptoms in cattle, their presence in milk, control measures and their total occurrence and importance in our region, see Table 2.

Parasitological and mycotic infections

Parasitological infections can be distinguished as being caused by endo- or ectoparasites, with endoparasites affecting host tissues and organs of live cattle. Some of these parasites, such as Cryptosporidium parvum (Thomson et al., 2017), Toxocara vitulorum (Borgsteede et al., 2012), Echinococcus granulosus (Eckert and Deplazes, 2004) and Giardia duodenalis (G. duodenalis; Geurden et al., 2004; Olson et al., 2004), are of zoonotic importance. C. parvum infections in humans are frequently related to contact with surface water (eg in swimming pools). The T. vitulorum parasite is known to be excreted in milk, but is mainly found in the colostrum of suckling cattle from southern Europe. The prevalence of E. granulosus in the Netherlands is also low, and the main risk is consumption of imported raw meat of cattle from Eastern Europe (Berends et al., 2009). G. duodenalis is predominantly found in young calves (Geurden et al., 2004). Therefore, the overall zoonotic risk of endoparasites from dairy cattle in the Netherlands is estimated as low.

Ectoparasites such as lice and mites may cause problems of the coat and skin. They can be a risk for employees working with cattle, and may be principally responsible in humans for zoonotic dermatitis symptoms (red spots and itch) in the case of infection with mites (Pérez de León et al., 2020). Consuming milk from cattle infected with ectoparasites does not carry a zoonotic risk.
By far the most important mycotic infection with a serious zoonotic risk is Trichophyton verrucosum (Lund et al., 2014), which results in proliferative dermatitis with crusts. The spores of this infection are highly resistant and are mostly seen in animal crusts, but can also be present throughout the barn. Therefore, eliminating this infection from the herd is very challenging. Trichophyton verrucosum (commonly known as ringworm) can be transmitted to humans by direct contact and causes circular skin lesions (Lund et al., 2014). The agent is not excreted in milk. For an overview of potential zoonotic parasitological and mycotic infections of importance in Western Europe, the laboratory tests to detect them, their clinical symptoms in cattle, their presence in milk, control measures and their total occurrence and importance in our region, see Tables 3 (parasitological infections) and 4 (fungal infections).

Transmissible spongiform encephalopathies

The most important TSE in the last several decades has been bovine spongiform encephalopathy (BSE), which has been responsible for a considerable number of outbreaks, with most of the economic damage in the UK (Alarcon et al., 2022). The Netherlands has observed 31 clinical cases and a total of 89 confirmed cases (58 after slaughterhouse control, www.wur.nl). The last confirmed case was in 2023. Humans have been diagnosed with variant Creutzfeldt-Jakob disease, but a relationship with BSE has not been proven. Evidence does not support transmission of TSEs by milk consumption. Therefore, the zoonotic risk of BSE is estimated as very low.

Conclusions and recommendations

Dairy cattle can be a source of various types of zoonotic infection. Therefore, working with cattle carries a risk that farmers or employees become infected with a pathogen. Some infections may cause serious symptoms in humans, such as fever, diarrhoea, respiratory problems or worse. The risk of transmission of infectious agents from dairy cattle to humans is mainly through air, by direct or indirect contact with manure, urine or abortive material (where indirect contact is largely through contaminated milk), and by direct contact with the coat. Risks can be limited by taking good preventive hygiene measures. We advise that all employees working with cattle or milk be aware of the risks and take preventive measures, for example using coveralls, gloves and protective glasses and washing hands frequently with disinfectant soap after contact with cows or their milk. Additionally, sick dairy farm employees are advised to limit their contact with cattle and to indicate that they work with cattle when consultation with a physician is required. Excluding milk from infected dairy cattle also limits the risk of pathogen transmission. Other measures include the use of cattle that are free of specific pathogens such as L. hardjo, S. typhimurium and M. paratuberculosis.

Bacterial infections caused by pathogens excreted in milk (Salmonella spp. and paratuberculosis) or by faecal contamination of milk (eg Campylobacter jejuni and E. coli), and mycotic infections (eg Trichophyton verrucosum), are particular risks for farmers and employees working with cattle. They should be aware of the possible risks, avoid the consumption of raw milk, and take protective measures such as those just described. More extreme measures, like having lunch in dedicated rooms, must be considered as well. Dairy farms are advised to follow certification programmes for L. hardjo, Salmonella spp. and M. paratuberculosis.
All these measures should result in an acceptably low risk of becoming affected by a zoonotic disease.

Table 1. Potential zoonotic viruses of importance in Western Europe, DNA or RNA, the laboratory test to detect them, their clinical symptoms in cattle, their presence in milk, control measures and their importance in The Netherlands.

Table 2. Potential zoonotic bacteria of importance in Western Europe, the laboratory test to detect them, their clinical symptoms in cattle, their presence in milk, control measures and their importance in The Netherlands.

Table 3. Potential zoonotic parasitological infections, their clinical symptoms in cattle, their detection methods, their presence in milk, control measures and their importance in The Netherlands.

Table 4. Potential zoonotic fungal infections, their clinical symptoms in cattle, their route of transmission, their presence in milk, control measures and their risk of transmission.
Economic violence among women of economically backward Muslim minority community: the case of rural North India

Economic violence represents a state of control over an individual's capacity to obtain, utilize and keep up economic assets. The current study investigates the prevalence of economic violence among women of the socioeconomically backward Muslim minority community by taking a sample of 387 women from rural areas of North India, within a framework of domestic violence. It is shown that economic violence against Muslim women perpetrated by their husbands exists in India. Economic violence adversely affects Muslim women's access to health services, educational attainment, social mobility, and employment opportunities. Our findings indicate that, among the components of economic violence experienced by women, the tendency of employment sabotage is higher compared to economic control and economic exploitation. Clearly, there is a need for a special focus on improving minority community women's access to developmental opportunities.

Introduction

Economic violence against women is a crucial component of domestic violence [2,13]. This type of violence exists within intimate partner relationships. It represents a state of control, by a husband, of a woman's capacity to obtain, utilize and keep up economic assets [2,30]. Economic violence against women within the household is a rising social concern. An intimate partner can establish a state of economic violence mainly through economic exploitation, employment sabotage, and economic control [28]. Economic violence against women can range from "denying women their most basic needs such as food, clothing, shelter, and so on, to more complex needs, including their economic independence and ability to participate in household purchasing decisions" [34]. The prevalence of economic violence affects a substantial number of women [30]. Its consequences include threats to women's economic security and potential for self-sufficiency. Economic violence can lead to women being put on a strict allowance or forced to beg for money, making it a gendered problem [35]. Economically disadvantaged women in India face the problem of low economic freedom not only due to their class and gender, but also because of religion [25]. In other words, non-Muslim women enjoy a relatively high level of economic freedom in India. India is the world's second-most populous country. With a huge population of 195 million, Muslims form the largest religious minority in the country. The socioeconomic condition of Muslims in India is relatively poor. The occurrence of domestic violence is comparatively high in rural India [23]. Most Indian Muslims reside in rural areas [25]. The level of economic development achieved by rural India is relatively low, and the rural-urban divide in the country has increased during the last decade [26]. These facts suggest that the possibility of the existence of economic violence against socioeconomically backward Muslim minority community women in rural India is relatively high. Motivated by the above facts, this study examines the economic violence experienced by Muslim minority women of socioeconomically backward regions in the context of rural North India, within a framework of domestic violence perpetrated by their husbands.
This study is further warranted by the current state of the relevant literature, which indicates that existing research on domestic violence against women largely considers only its psychological, sexual, and physical manifestations [3,39,41]. In other words, the empirical literature on the existence of economic violence against women is scant. The few existing studies on economic abuse have drawn attention especially in the context of developed countries like the USA [40]. Moreover, the samples used in most of these studies supporting the prevalence of economic violence against women are restricted to intimate partner violence (IPV) survivors [30]. The findings of such studies have less relevant policy implications for developing countries. The rest of the study is presented as follows. "Review of literature" section provides a brief review of the relevant literature. "Methods" section introduces the methods of analysis employed in the study. In "Results and discussion" section, we assess the level of economic violence against Muslim women, identify factors associated with economic violence, examine the causes of economic violence, and investigate the impact of economic violence on women's access to health facilities, educational attainment, and social involvement. The final section summarizes the main findings of the study and presents policy implications.

Review of literature

An analytical and focused review of the most important studies on economic violence against women is as follows. Fawole [13] theoretically explored the most common forms of economic violence experienced by women in developing countries. It was reported that women generally face limited access to funds and credit; controlled access to health care, employment, education and agricultural resources; and exclusion from financial decision-making. Jury et al. [20] empirically investigated the experiences and effects of economic abuse of women in New Zealand by surveying 398 respondents. They concluded that the most common types of economic violence reported by women were: (a) erosion of financial decision-making power, (b) no right to input, (c) disregard of women's financial wants and needs, (d) depriving women of essentials, and (e) deceit and blame. Stylianou et al. [38] measured abusive behaviours using factor analysis on data taken from domestic violence shelters in the USA. They conceptualized economic abuse as three separate constructs: economic control, economic exploitation, and employment sabotage. They showed that economic abuse is a unique form of abuse that was moderately correlated with psychological, physical, and sexual forms of abuse. In the case of minority communities in developed countries, Davila et al. [9] used a hierarchical multiple regression model to analyse the impact of economic violence on the mental health of IPV survivors of the Latina minority in the USA. They found that restricting access to money and financial information was the most common form of economic violence experienced by Latina women. However, economic violence did not explain the variation in symptoms of depression and anxiety. On the other hand, Outlaw [27] investigated gender differences in the existence of intimate partner economic abuse using USA-based survey data obtained during 1994-1996. It was found that there existed a significant gap in the level of economic abuse experienced by men and women: women were more likely to be victims of economic abuse.
Sanders [34], employing ATLAS.ti software, examined the role of financial issues and economic factors in women's experiences of intimate partner violence (IPV) using qualitative data taken from the St. Louis-based redevelopment opportunities for women's economic action program. It was found that IPV has an economic abuse dimension that negatively affects women's economic well-being. Abusive partners interfere with women's employment and their access to financial resources, and isolate them from household financial information. In contrast, Casey et al. [7] reviewed the empirical literature on gender-transformative approaches allowing men's participation in ending violence against women. They observed that the impact of these approaches on prevention outcomes was promising. In order to supplement qualitative analysis, Hetling et al. [17] developed a scale for measuring financial strain in the lives of survivors of intimate partner violence using data collected from seven US states and Puerto Rico. They argued that financial strain is a significant component in evaluating one's economic situation and has important implications for developing policies to reduce economic violence. In addition, this study highlighted the importance of using a comprehensive approach, with wide coverage of the relevant aspects, for proper measurement of financial strain. Likewise, Borchers et al. [6] analysed women's experience of attaining and maintaining employment while facing intimate partner violence, using a sample of thirty-four respondents in west-central Ohio. They found that women who had experienced IPV could attain employment; however, maintaining employment was difficult for them. All respondents experienced the entanglement of work and IPV. The perpetrator controlled their appearance, sabotaged their work, interfered with their work, and controlled their finances. On the relationship between physical and economic violence, Moe and Bell [22] examined the effect of domestic battering on women's employability, their ability to find a job and sustain employment, and their ability to utilize their earnings to strengthen economic liberty and security, by interviewing residents of a domestic violence shelter in Arizona. They found that women's ability to work outside the home was affected by physical abuse, such as injuries to the face and assaults caused by a husband. The life partner's violence was perpetrated with the intent to sabotage women's employment and economic freedom. With respect to underdeveloped countries, Sedziafa et al. [36] used qualitative methods to explore women's experience of economic abuse by an intimate partner in the Eastern region of Ghana. They found that economic abuse was widespread in Ghana and that its form varied with the employment situation of women. For unemployed women, economic abuse was tied to their sexual unavailability to their partner. Employed women narrated experiences of financial sabotage, such as the husband's chronic economic dependency and abandonment of the family's financial obligations. Regarding developing countries, Yount et al. [41] analysed husbands' behaviours that control their wives' ability to acquire, use, and maintain economic resources in Vietnam. They found that the prevalence of economic coercion against wives was high and associated with other standard forms of violence: physical, psychological, and sexual. The determinants of economic violence are less understood than those of the standard forms of violence.
This gap in the literature provides important avenues for further research. In the context of India, Khattab et al. [21] estimated the disadvantages in participation in economic activity, employment, and occupational choice faced in the Australian labour market by Muslim women who migrated from India. They observed that, due to differences in qualifications, Muslim women were less likely to participate in the labour market and less likely to obtain managerial and professional jobs. Similarly, a comparative analysis of the socioeconomic profile and health status of Muslim women in India was conducted by Ohlan [25]. The analysis indicated that Muslims in general form India's largest deprived and disadvantaged religious minority community. Muslim women still lag behind the mainstream in the social, economic, health, and educational sectors, and they enjoy relatively less economic and social freedom. Nonetheless, Armand et al. [4] studied the differential effect of targeting cash transfers to poor families on the patterns of their food expenditure with respect to the recipient's gender, using data from the Republic of North Macedonia. They observed that targeting cash transfers towards women did not affect household food consumption patterns; however, an increase in women's income led to a uniform increase in the food budget. Ringdal and Sjursen [32] employed an experimental method to assess the impact of an increase in women's intra-household bargaining power on the amount of family spending on children's education in Tanzania. It was found that an increase in the wife's bargaining power did not affect a family's total spending on children's education. However, a relative change in spouses' bargaining power reduces gender biases in the allocation of educational spending among children. Likewise, in contrast to the conventional wisdom of conflicting preferences of husband and wife over the purchase of goods and services for the running of the household, Bjorvatn et al. [5] used experimental games to explore the intra-household cooperation of married couples in Ethiopia. They found striking similarities in the intra-household allocation preferences and decision-making norms of married couples in Ethiopia. In sum, the latest empirical studies of the impact of cash transfers to women on their household expenditure allocation preferences and family welfare challenge the view that more money in women's hands results in greater expenditure on the household and children than expenditure incurred by men.

The findings of the above-reviewed studies are important; however, they cannot be generalized because of their shortcomings in the use of restricted samples. More importantly, there exist large dissimilarities in the socioeconomic structures of developed, developing, and underdeveloped countries. The issue of economic violence against women is yet to attain enough scholarly attention in the context of India. The current study contributes to the extant scientific literature by considering women of an economically disadvantaged minority community in a large developing country.

Theoretical underpinnings for the selection of a backward area for the study

The selection of minority-concentrated backward areas of rural North India for the study is supported by social resource theory [10,14]. This theory states that people confronting a hardship of resources have less prestige and power, and in this manner have fewer means to accomplish their objectives. Such people may depend on force to accomplish their goals [16].
In consequence, when men lack cash, education, or monetary assets, they may resort to force and financial violence to control their spouses [15,16]. Likewise, social exchange theory appears particularly applicable to understanding the monetary dynamics within a family experiencing financial violence [11,12,34]. This theory is guided by the economic rationality of costs and benefits [18]. It predicts that if a woman contributes critical financial assets to the family, her family members will have more to lose in the event that they resort to violence and their partner leaves them. Then again, if a woman has scarcely any monetary assets and is financially dependent on an abusive partner, he has little to lose monetarily and can utilize financial assets and economic abuse as a way to control his partner [19].

The selection of the area of the current study is further supported by the geographical distribution of crime reported under the category of domestic violence against women. In India, the crime of economic abuse is covered under the Protection of Women from Domestic Violence Act (PWDVA), instituted in 2005. According to data available from the National Crime Records Bureau, 437 incidences of crime were reported under this Act in 2016. Of these, most of the cases (98%) were reported outside metropolitan cities. The relatively high prevalence of domestic violence in rural areas supports our choice of that area for the survey of the study.

Research design and setting

In order to achieve the objectives of the current study, a descriptive-cum-diagnostic type of research design is used. The condition of Muslims in India is worse in the northern region than in the rest of the country [33]. Muslims are in the minority in the Haryana, Rajasthan, and Punjab states of North India. Accordingly, Muslim-concentrated socioeconomically backward districts of these states were chosen for the study. The survey for the current study was conducted in the Nuh (erstwhile Mewat) district of Haryana, Nagaur of Rajasthan, and Sangrur of Punjab. The socioeconomic status of women in all three selected districts is not at par with men and the mainstream of national life. The survey for the study was conducted at the household level, with the individual woman as the unit of observation. Muslim women in the age group of 18 to 50 years were covered in the study [41].

Sample design and size

A purposive sampling procedure was used for the selection of the above-mentioned research sites. The blocks and villages for the study were selected based on a relatively high concentration of Muslim population. The random sampling technique was used for the selection of respondents within the selected villages. A list of all Muslim women of the above-stated age group residing in every selected village was prepared, and random numbers were drawn from the list of women of each village. Before conducting the interview, every woman was informed that the survey was only for research and academic purposes. None of the women declined to participate in the survey. Given the large size of the population, using a 95% level of confidence and a 5% margin of error, the representative sample used in the study consists of 387 respondents. For drawing a fair comparative picture, the whole sample was equally distributed across the selected districts of Haryana, Punjab, and Rajasthan. One block with a high concentration of Muslim women was marked in each district.
By using the same method, three villages were selected from every block. From each village, 43 randomly selected women respondents were interviewed. Accordingly, 129 respondents were interviewed from each selected district. The empirical information was gathered by trained female investigators conducting personal interviews with the respondents.

Tool for data collection and methods of analysis

The requisite primary data were collected through a sample survey using a detailed structured schedule aligned with the main objective of the research. The schedule adopted for the survey is based on the relevant literature and the opinions of experts and practitioners. A scale consisting of 29 items was adapted from Postmus et al. [29,30]. Yau et al. [40] also validated this scale by taking a sample from a household-level survey of both men and women in Hong Kong, China. Schrag and Ravi [35] employed this scale for assessing the level of economic abuse among female students of a community college in the USA. For measuring the level of economic violence, the respondents were asked to rate the frequency of experiencing economic violence perpetrated by their husbands. Responses were made on a 5-point scale ranging from 1 (never) to 5 (quite often) [1]. In this way, a higher mean score represents a higher level of economic violence. The standard quantitative measures of analysis employed in the study are tabulation, proportion, averages, and exploratory factor analysis.

Demographic analysis

The educational status of most of the Muslim women interviewed for the study was found to be poor. Their family income was low and their wealth condition was weak. Most of the respondents were either homemakers or agricultural labourers. The average age of the interviewed women was 38 years. Most of them have a bank account. The average family size was large, with seven members. The average age of the Muslim women at the time of their marriage was 18 years. About 77% of the interviewed women belong to nuclear families while the remaining 23% belong to joint families. All respondents had brick-built houses, and each house was equipped with an electricity connection. Most of the Muslim women use public transport facilities for travelling. Likewise, most of the Muslim women covered in the survey were living a married life. It was observed that the demographic variables did not vary considerably. However, the fertility of Muslim women is still much higher in comparison with women of other communities, and the use of contraceptive methods among Muslim women is below the national average [24]. At the same time, the fertility rate in the country is inversely related to women's level of schooling and their wealth index. The policy implication is clear: local self-government should facilitate Muslim women of lower socioeconomic status residing in backward rural areas in adopting family planning measures.

Table 1 shows the arithmetic mean, percentage, standard error (SE), and standard deviation (SD) of the responses to the original Scale of Economic Violence (SEV), divided into three sub-scales. The survey used a 5-point scale with responses ranging from one (never) to five (quite often). It is evident from Table 1 that the overall mean score is less than two, which corresponds to a low level of aggregate economic violence. The value of the standard deviation is also low, which implies that the estimated value of the mean properly represents the sample.
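The 387-respondent figure reported in the sampling design is consistent with the standard Cochran formula for estimating a proportion; a minimal sketch of that arithmetic (assuming p = 0.5, the most conservative choice, which the paper does not state explicitly, and rounding up to a multiple of nine so the sample splits evenly over 3 districts x 3 villages):

import math

# Cochran's sample size for a proportion in a large population.
z = 1.96   # z-score for a 95% confidence level
p = 0.5    # assumed proportion (most conservative choice)
e = 0.05   # 5% margin of error

n0 = (z ** 2) * p * (1 - p) / (e ** 2)    # ~384.16
n = math.ceil(n0)                         # 385 respondents at minimum

# Round up to a multiple of 9 for equal allocation across
# 3 districts x 3 villages (43 women per village, as in the paper).
cells = 9
n_alloc = math.ceil(n / cells) * cells    # 387 = 9 * 43
print(n0, n, n_alloc)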
In other words, a low standard deviation value indicates that most of the responses fall near the mean value. It may be added here that the field investigators discreetly observed the facial expressions and state of mind of the respondents while conducting the personal interviews. It was observed that the respondents were at ease throughout the discussion with the interviewers. Most of the respondents had a smiling face while refuting any experience of economic violence caused by their husbands. Most women, however, believed that they have less freedom than their male counterparts at every level and in every type of outside employment. Likewise, several women also talked about the fact that low social mobility is perpetuated in their religion. A few women also talked about the role, in economic violence against them, of the gender-discriminatory provisions of Muslim personal law on inheritance. A quote from a woman surveyed in this study is stated below:

I think economic violence against women is perpetuated in our religion. I really feel that if I had been born in a Hindu family, then I might enjoy much better legal rights in parental property.

A comparison of the means of the different sub-categories brought out that the tendency of employment sabotage is higher in comparison with economic control and economic exploitation; the mean score of some items in this category is above two. A debriefing of the results presented in Table 1 indicates that, in the sub-category of economic exploitation, the practices of keeping financial information away from women, on the one hand, and convincing them to lend money and not paying it back, on the other, were most commonly used. Similarly, asking a woman to quit her job was a common tactic of employment sabotage. Likewise, the involvement of women in households' important financial decision-making was low. Recently, the Government of India has launched self-employment and wage employment schemes meant for minority communities, like the Scheme for Leadership Development of Minority Women. However, there is a need to reserve some seats for minority girl/women candidates from economically backward rural areas. This finding supports the hypothesis that economic violence against Muslim women exists in India. The finding of a low level of economic exploitation is consistent with the results reported in NFHS [24] that women's control over their own earnings is highest among Muslims in comparison with non-Muslims. Our finding indicating a low level of economic violence against Muslim women is consistent with the Government of India's official data on crime under the Protection of Women from Domestic Violence Act, 2005. However, our finding differs from that reported for the USA by Postmus et al. [29], who reported somewhat higher mean scores for similar measures of economic violence. This difference may be because we surveyed Muslim women residing in general households, while their survey was of IPV survivors staying in domestic violence shelters. Another reason for the difference may be the socioeconomic and cultural dissimilarities between India and the USA.

Factor analysis for the scale of economic violence

A twenty-nine-item scale was used for measuring the level of economic violence against Muslim women. The items in this scale were adapted from the National Family Health Survey and Postmus et al. [24,29,30]. The reliability of this scale was investigated using Cronbach's alpha coefficient.
The results are given in Table 2. The estimated value of the Cronbach's alpha coefficient is 0.877, which is above the threshold level of 0.7. This value indicates that the scale consisting of 29 variables has good internal consistency. The sampling adequacy was examined using the Kaiser-Meyer-Olkin (KMO) test and the results are reported in Table 3. The estimated value of the KMO test is 0.673, which is above the threshold level of 0.5 and indicates the adequacy of the sample. In other words, a 67% variation in the variables is caused by underlying factors. These findings provide a sound base for the use of factor analysis. The estimated value of χ2 was 10,493.176, with p < 0.001. An exploratory factor analysis (EFA) was conducted. The combined eight factors accounted for 76.94% of the total variance. Three components were identified as basic themes and item names were assigned accordingly. Table 4 shows the statements covered by the identified factors. The first factor was named Economic Exploitation and contains 12 items that capture the concept of economic exploitation. The second factor, or theme, Employment Sabotage, comprises five variables that address hindrances to employment. The third factor is termed Economic Control and contains three items. It may be concluded that, among the components of economic violence, the tendency of employment sabotage is higher compared to economic control and economic exploitation.

Causes of economic violence

A ten-item scale was used for the investigation of the causes of economic violence against Muslim women. The face validity of the items included in this scale was tested by taking expert opinions from the disciplines of Economics, Sociology, Law, and Psychology. The internal consistency of the scale used for examining the causes of economic violence was investigated using Cronbach's alpha coefficient. Based on the values of this test, some variables were removed from the analysis. The excluded variables were (a) type of family, (b) number of marriages of the woman, (c) birth of a male child, and (d) presence of more than one wife. The reliability statistics were estimated. After the removal of these four variables from the scale of 10 variables, the estimated value of the Cronbach's alpha coefficient became 0.742, above the threshold level of 0.7. This value indicated that the scale consisting of six variables has good internal consistency. The KMO test of sampling adequacy was performed. The estimated value of the KMO test was 0.638, above the threshold level of 0.5, indicating the adequacy of the sample. In other words, a 64% variation in the variables was caused by underlying factors. These findings provide a base for the use of factor analysis. The estimated value of χ2 was 1040.720, with p < 0.001. An EFA was also conducted. The combined two factors accounted for 72.76% of the total variance. Two components were identified as basic themes and item names were assigned accordingly. Table 5 shows the statements covered by the identified factors. The first factor was named Economic Backwardness and contains three items that capture the concept of low income and low saving: fluctuating low personal income, low personal savings, and low household income. The second factor, or theme, Low Educational Attainment and High Debt, comprises three variables that address socioeconomic backwardness.
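The reliability check reported above follows the standard Cronbach's alpha formula. A minimal sketch computing it from an item-response matrix follows; the data are illustrative, not the survey's, and the KMO statistic and the EFA itself could be obtained from a package such as the Python factor_analyzer library (an assumption about tooling, since the paper does not name its software):

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores (1-5)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 387 respondents x 29 items on a 1-5 scale.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(387, 1))          # a common response level
noise = rng.integers(-1, 2, size=(387, 29))       # per-item variation
scores = np.clip(base + noise, 1, 5).astype(float)

print(round(cronbach_alpha(scores), 3))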
An open-ended question was also asked of the Muslim women regarding ways to improve their participation in economic decision-making at the household level. Most women said that the upgrading of skills through technical training could be an effective measure to check economic violence against Muslim women. Based on our findings, we propose that the government should establish more skill development centres in minority-concentrated backward areas.

Consequences of economic violence for Muslim women's access to health facilities

The health status of Muslim women, and their utilization of health facilities, are relatively poor [8,31]. Similarly, Muslim women suffer from the problems of a relatively low literacy rate, low earnings, less access to financial resources, high teenage pregnancy, high fertility, and a low workforce participation rate [25]. Their opinion on the consequences of a lack of financial liberty for different aspects of access to health facilities was analysed. They were asked whether they might have better access to health facilities with greater financial liberty. The responses of the Muslim women, obtained using a five-point scale ranging from one (strongly disagree) to five (strongly agree), are reported in Table 6. It is seen that the value of the standard deviation of the responses is less than one-third of the arithmetic mean, and the value of the standard error is also small. This indicates that the arithmetic mean properly represents the values of the individual responses. The overall mean score is 3.99, which corresponds to the response of agreement.

Most Muslim women also agreed that, given increased financial liberty, they might have taken more nutritious foods. The mean value for this variable is also above four, which is a response of agreement. This finding provides justification for the continuation of the Government of India's national nutrition mission aimed at improvement in nutritional outcomes for children, pregnant women, and nursing mothers. NFHS [24] reported that, for delivery purposes, most rural women use government hospitals rather than private hospitals. The Muslim women recognized this fact in the survey of the current study. About 80% of the Muslim women said that, with more financial independence, they could opt for a specialized private hospital for the safest delivery. This finding suggests that, for promoting institutional delivery among poor pregnant women, the Government of India should provide adequate financial support through the Janani Suraksha Yojana (Mother Safety Scheme) under the national health mission. The Pradhan Mantri Matru Vandana Yojana (Prime Minister Mother Respect Scheme) is a highly appreciable step in this direction. Deliveries in private hospitals should also be covered under this scheme.

Consequences of economic violence for the educational attainment of Muslim women

As pointed out earlier, the educational attainment of Muslim women is poor. Therefore, they were asked whether they might have attained higher education with greater financial liberty. The responses of the Muslim women were obtained using a 5-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). It was noted that 31% of the Muslim women either agreed (A) or strongly agreed (SA) with the statement that better educational attainment might be achieved through high financial independence. The estimated value of the mean score for this statement is 3.44. This response indicates a consensus of respondents towards this statement.
Consequence of economic violence on the educational attainment of Muslim women

As pointed out earlier, the educational attainment of Muslim women is poor. Therefore, they were asked whether they might have attained higher education with greater financial liberty. The responses were obtained using a 5-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). It was noted that 31% of Muslim women either agreed (A) or strongly agreed (SA) with the statement that better educational attainment might be achieved through greater financial independence. The estimated mean score for this statement is 3.44, indicating a moderately favourable response.

The policy implication of this finding is that government and non-government organizations (NGOs) should focus on measures to improve the level of educational attainment, such as subsidized education for Muslim women and assured easy access to educational institutions. Policy intervention is required to establish more educational institutions in minority-concentrated backward areas of the country. It may be noted here that most Muslim women suffer from economic backwardness rather than economic violence. As Muslim women reported during personal interviews, their low financial independence is largely due to economic backwardness. Economic backwardness is associated with women's low literacy rate [37].

Consequence of economic violence on the social involvement of Muslim women

The opinion of Muslim women was sought on the possible improvement in their level of social involvement with greater financial liberty. The estimated mean score for this question was 3.16, meaning that Muslim women's involvement in social activities could be increased by improving their financial liberty. This finding justifies the policy of financial intervention by the Government of India to reduce the burden of travel expenses for social purposes on Muslim women. Until 2018, the Government of India provided discounted airfares on government-owned Air India flights to Indian Muslim Hajj pilgrims. In order to improve the social mobility of Muslim women, the Haj subsidy could be restored for them. It is also worth noting that the Government of India has recently lifted the ban on Muslim women going on Haj without a "Mehram" (male companion). More importantly, to improve the involvement of Muslim women in social functions and to increase the value of their social life, the government should continue with measures such as educating and skilling them. These findings support the hypothesis that economic violence against Muslim women affects their access to health services, educational attainment, social mobility, and employment opportunities.

Conclusion

Economic violence connotes a deliberate pattern of control that interferes with an individual's ability to acquire, use, and maintain economic resources. Muslims are a distinct minority community in India. This study is a first attempt to measure and analyse the level of economic violence against women of an economically backward minority community in the Indian context, within a framework of domestic violence perpetrated by their husbands. It also explores the consequences of economic violence for Muslim women's access to education, health, social involvement, and employment opportunities. We found evidence of economic violence experienced by Muslim women. Economic violence exists mainly in the form of employment sabotage, whereas the tendency towards economic control and economic exploitation is low. Based on the finding of employment sabotage, we accept the hypothesis that economic violence against Muslim women exists in India. To prevent economic violence against Muslim women, strong messages should be sent to violators by meting out stringent punishment through the PWDVA. State and local governments should arrange a special budget for the implementation of the PWDVA. NGOs can make the necessary arrangements to increase women's awareness of the PWDVA in backward areas.
Regarding the effect of religion on the economic rights of women, it is pertinent to mention that the Hindu Succession (Amendment) Act, 2005, provides equal rights to men and women in parental property, whereas Muslim personal law on inheritance discriminates between men and women. In other words, Muslim women face a disadvantage in sharing inherited property in comparison with their male counterparts. To combat economic violence against Muslim women, there is a need for an amendment to Muslim personal law. Regarding the causes of economic violence, the findings of the factor analysis indicated that economic backwardness, and low educational attainment combined with high debt, are the major factors explaining the variation in economic violence against Muslim women. NFHS [24] data show that the economic freedom enjoyed by Indian women is positively related to their wealth and level of education. In the context of the consequences of economic violence for women's access to developmental opportunities, most Muslim women agreed that they might have attained better education with greater financial liberty. The attitude of Muslim women towards education may be favourably changed by assuring the availability of Muslim women teachers in their educational institutions. It is well documented that the social mobility of Muslim women is relatively low; the women surveyed said that their social involvement could be improved with greater financial independence. They also admitted that with greater financial liberty they might have better access to health facilities. Large economic disparities exist within minority communities; therefore, eligibility for the benefits of existing schemes meant for minority communities should be accompanied by economic criteria. In the context of remedial measures, most Muslim women considered the upgrading of their skills an effective measure to check economic violence against them. It is worth noting that the Government of India, under the national skills qualification framework initiative of the Ministry of Skill Development and Entrepreneurship, has laid special emphasis on the skilling needs of minority communities. Initiatives have been undertaken to ensure the participation of several public sector undertakings and corporates in the inclusive skill development of minority communities under corporate social responsibility. Based on our findings, it may be argued that more skill development centres should be established in minority-concentrated backward areas. Besides, NGOs should come forward to ensure the effective implementation of schemes meant for the skill development of Muslim women.
These policy recommendations could also be applied in other developing countries to combat economic violence against women of socioeconomically backward minority communities. The issue of economic violence experienced by Muslim women outside the home warrants further research.
The formation of Fe colloids and layered double hydroxides as sequestration agents in the natural remediation of mine drainage

This study examines the natural remediation of mine drainage during the rainy and dry seasons of 2016 and 2018, respectively, based on observations of physicochemical characteristics, elemental concentrations in dissolved and colloidal fractions, transmission electron microscopy, and synthetic experiments. In this circumneutral Fe-rich mine drainage, Fe2+ is oxidized to Fe3+, resulting in the formation of Fe colloids that incorporate As during their formation. Colloid formation increases turbidity and, in the rainy season, increased colloidal interaction enhances aggregation while higher flow rates lead to greater mobilization of the colloids. Zn-bearing colloids are rare in the Ainai mine drainage because the Zn concentrations are low. However, a Zn-Fe layered double hydroxide (LDH) was identified and confirmed by geochemical modelling and experiments. The Zn-Fe LDH was formed by isomorphous substitution of Zn into an Fe2+-Fe3+-CO32− LDH at pH greater than 7.5, thereby achieving efficient natural remediation of Zn and As in the drainage.

Introduction

Mine drainage is an increasing source of toxic elements in the environment (Fu and Wang, 2011), due to wastewater released from underground workings or tailings. Various treatment methods have been applied to such wastewater, all involving long-term and expensive processes, so natural remediation processes (i.e., passive treatment systems) are becoming the preferred option (Zipper and Skousen, 2014). Geochemical characteristics of wastewater such as pH, redox conditions, dissolved chemical composition, and organic-matter content (Nordstrom, 2011) vary between sites as well as between seasons. Consequently, designing efficient treatment systems requires an understanding of the processes and factors that affect natural remediation. The most common mine drainage wastewaters are acidic (Dold, 2014), and these have been studied in more detail than their circumneutral counterparts. To achieve remediation, most acidic mine drainage is mixed with other water sources to attain a circumneutral pH (Jung et al., 2012), but field studies highlighting processes that may achieve efficient remediation are scarce, especially those concerning nanoparticle behavior and seasonal variations. Iron is generally ubiquitous in mine drainage, along with other toxic elements (Pokrovsky and Schott, 2002; Schemel et al., 2000). The thermodynamic characteristics of Fe allow it to affect the mobility of toxic elements such as arsenic (As), zinc (Zn), lead (Pb), and copper (Cu), because it exists as Fe2+ in relatively anoxic environments but oxidizes to Fe3+ and forms Fe oxyhydroxides under oxidizing conditions (Whitney King, 1998). In oxic, neutral-pH environments, Fe3+ exists predominantly in the oxyhydroxide form, facilitating the formation of Fe-rich colloids (1 nm to 100 nm) (Liao et al., 2017; Gledhill and Buck, 2012). Considering the abundance and large surface area of colloids, Jung et al. (2012) reported that Fe colloids are more reactive than bulk suspended solids, and their sequestration of other toxic elements has been widely reported. Experimental studies have also highlighted this (Sharma et al., 2010; Mokhter et al., 2018).
However, due to the diverse geochemical characteristics of treatment systems, field evidence concerning the formation and behavior of Fe colloids, and their significance in removing toxic elements from mine drainage, still requires clarification. Colloids are resistant to gravitational settling, and studies of their formation and aggregation rates should account for the time they remain in the system (Pokrovsky and Schott, 2002; Wang et al., 2014). Whereas deposition of colloids is possible once the aggregates become large enough, aggregation rates may vary between systems and with the aqueous chemistry. The estimation of the aggregation rate is therefore a major aspect to consider in the design of passive treatment systems. Processes such as competitive adsorption and precipitation kinetics in mine drainage usually result in the formation of a variety of mineral phases that are able to sequester toxic elements (Plumlee et al., 1997; Nguyen et al., 2019). In addition to Fe colloids as a dominant medium for remediation, layered double hydroxides (LDHs) have also gained popularity due to their efficiency, flexibility, and ability to reduce concentrations of metals such as Zn and Cu (Xu, 2013; Okamoto et al., 2010). Previous studies have reported natural occurrences of LDHs and their synthesis, particularly in the presence of Fe (Morimoto et al., 2015; Hongo et al., 2008). However, clarity is still lacking regarding the characterization of LDHs and the factors crucial for their formation and stability, as might be provided by comparisons of field and experimental observations. Here we report a study of a circumneutral passive treatment system utilizing aeration to remove Fe, As, and Zn from underground wastewater drainage at Ainai mine, Japan, considering both dissolved and colloidal fractions. The progression of elements from dissolved to colloidal states, and their ultimate fate, were studied. Our objectives were to (1) clarify the formation, semi-quantitative aggregation, and deposition behaviors of Fe colloids at circumneutral pH; (2) examine their application to As and Zn sequestration from mine drainage over two seasons; and (3) investigate the formation of LDHs by comparing natural and synthetic samples and the factors affecting their formation and stability. The concentrations of metals in the drainage system provided insight into the behavior of the metals involved and highlighted factors that determine the fate of toxic elements in mine drainage.

Study area

The abandoned Ainai mine is in the northern part of Kosaka town, 600 m northeast of Omori mountain, in the Hokuroku district, Akita prefecture, Japan. Sulfide ores containing Zn, Pb, and Cu (e.g., sphalerite, galena, chalcopyrite) were mined (1951 to 1985) from the Kuroko-type volcanogenic massive sulfide deposit, which formed in association with submarine bimodal volcanism in the middle Miocene (Ishii, 1964; Yamada and Yoshida, 2011). Contaminated water has flowed from the mine since its closure and requires treatment before discharge. Aeration has been used to oxidize and precipitate Fe along with As and Zn. This natural remediation involves a 1000 m drain that runs from the underground tunnel through two treatment ponds to connect with a tributary of the Kosaka River at altitudes of 260-300 m (Fig. 1a). Wastewater flows from the underground mine in pipes (Fig. 1b), through a concrete drain inside a tunnel (Fig. 1c), and is released to an outside drain (Fig. 1d), which has steps to improve aeration (Fig. 1e),
then connects to a concrete drain (Fig. 1f) before being stored in a reservoir (Fig. 1g) and sedimentation ponds prior to release to the river. Sediments that accumulate in the ponds are transferred periodically to a nearby area. The Ainai mine drainage is located in a sub-frigid humid climate with four distinct seasons. Abundant snowfall and monsoons have been reported in the region (Lu et al., 2019). Apart from winter, when the drainage is covered by snow, precipitation at Ainai is highest in July and lowest in October (Fig. S1a). Snow falls from October until mid-February (Japan Meteorological Agency, 2018). Therefore, for simplicity, July and October are referred to as the rainy and dry seasons, respectively, at some points in the manuscript.

Sampling and on-site measurements

Field surveys at Ainai mine were conducted annually from 2016 to 2019; however, the data reported here are from the surveys conducted in July 2016 and October 2018. These datasets are a representative summary of our findings. Sixteen samples of water, suspended solids, and sediments were collected from the tunnel exit at the upper drain (S1; Fig. 1a), at ~100 m intervals until S6, and from the reservoir and settling pond (P1 and P2, respectively; Fig. 1a). Water was sampled through three types of filters, a 0.2 μm PTFE membrane filter (Advantec 25HP020AN), a 200 kDa ultrafilter (Advantec USY-20), and As filters (Sep-Pack cartridge: As exchange column), to provide four types of sample: a 0.2 μm membrane-filtered, non-acidified sample for anion determinations; an acidified 0.2 μm membrane-filtered sample; a 200 kDa ultra-filtered sample; and samples to be analyzed for As speciation, with the latter three acidified with 1 vol% HNO3 (ultrapure grade, Kanto Chemicals) for cation determinations. Here, we define the dissolved and colloidal fractions as follows: (1) ultrafiltered water samples contain the dissolved fraction; (2) membrane-filtered samples contain the colloidal and dissolved fractions; and (3) the difference between (2) and (1) provides the colloidal fraction. Samples were collected in 50 mL polypropylene bottles, pre-rinsed with 3 vol% HNO3 overnight, and stored at ~4 °C pending analysis. Sediments were also collected at all sampling points S1 to S6. Suspended particulates were collected at all points until P1 (Fig. 1a) by pumping 0.5 L of water through 0.2 μm mixed-cellulose-ester filters (Advantec A020A047A). On-site measurements were undertaken for Fe2+ concentrations, dissolved oxygen (DO), pH, electrical conductivity (EC), turbidity, temperature, oxidation-reduction potential (ORP), and alkalinity of the water samples. A pack test was used for Fe2+ concentrations. Eh (redox potential versus the normal hydrogen electrode) was calculated as Eh = E + 206 - 0.7 × (T - 25), where E is the measured oxidation-reduction potential (mV) and T the temperature (°C). Alkalinity was determined by HNO3 titration of water samples filtered through a 0.45 μm PTFE membrane filter. A Gran-function plot was applied to obtain HCO3− concentrations from the alkalinity (Rounds and Wilde, 2006). Distances between sampling sites and flow rates were measured on site.
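The two field-data reductions just described are simple enough to show in code. The sketch below (not the authors' code; the example numbers are illustrative, loosely based on the S1 values quoted in the Results) applies the Eh correction to an ORP reading and derives the operational colloidal fraction from the two filtration cut-offs.

```python
def eh_from_orp(orp_mv: float, temp_c: float) -> float:
    """Eh (mV, vs. NHE) from measured ORP E (mV) and temperature T (degC):
    Eh = E + 206 - 0.7 * (T - 25)."""
    return orp_mv + 206.0 - 0.7 * (temp_c - 25.0)

def colloidal_fraction(membrane_mg_l: float, ultra_mg_l: float) -> float:
    """Colloidal fraction = (<0.2 um membrane-filtered concentration,
    dissolved + colloidal) minus (<200 kDa ultrafiltered, dissolved)."""
    return max(membrane_mg_l - ultra_mg_l, 0.0)

print(eh_from_orp(127.0, 12.0))         # upper-drain-style ORP reading
print(colloidal_fraction(13.5, 12.1))   # membrane vs dissolved Fe, mg/L
```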
Analytical methods

Non-acidified water samples were diluted 10 times and analyzed by ion chromatography (IC; Metrohm IC861) using Multi-anion Standard Solution 1 (Wako Pure Chemical Corporation) for calibration. Acidified samples were diluted 40 times and analyzed for major and trace elements by inductively coupled plasma-atomic emission spectroscopy (ICP-AES; Shimadzu ICPE-9000) and ICP-mass spectrometry (ICP-MS; Thermo Scientific iCap Qc). Standards were prepared from a multi-standard solution (Wako). In, Ru, and Rh were used as internal standards for ICP-MS analysis. Oxide formation during analysis was monitored by the CeO/Ce ratio and maintained at <0.5%, and He collision mode was used to avoid molecular interference from 40Ar35Cl+ on 75As+. Sediment and suspended-particulate samples were dried at room temperature and the minerals present were determined by X-ray diffraction (XRD; Rigaku XRD Multi-Flex) using Cu Kα radiation (λ = 0.15406 nm) with an accelerating voltage of 30 kV and a beam current of 20 mA, in the 5°-70° range, scanned at 2.0° min−1. The morphology and chemical composition of the suspended particulates were analyzed by field-emission scanning electron microscopy with an energy-dispersive X-ray spectrometer (FE-SEM-EDS; JEOL JSM-6500F). Samples were prepared for transmission electron microscopy (TEM; JEOL JEM-2010) by dispersion in ethanol (with ultrasonication) and placement on a Cu grid with a support film. Minerals were identified using crystal structure libraries.

Synthesis of Zn-bearing colloids and thermodynamic calculations

The mineralogical characteristics of the solid samples were studied to constrain the sequestration mechanisms for toxic elements. Zinc concentrations in the aqueous solutions from the drain may have been too low to produce amounts of Zn-bearing colloids in the sediments observable by the above methods. Therefore, ZnSO4·7H2O was added to wastewater collected in 2 L bottles to induce the synthesis of Zn-bearing minerals. With reference to previous studies (Morimoto et al., 2015; Parida and Mohapatra, 2012) and based on the Ainai mine water chemistry, the synthesis used an unfiltered water sample from S1 (Fig. 1a) as a supply of Fe, with ZnSO4·7H2O added to increase the Zn concentration. The Zn reagent (249.5 mg) was added to 2 L of sample to provide a Zn:Fe molar ratio of 2:1. The samples were stirred and allowed to settle for 24 h at room temperature, after which precipitates were collected on 0.2 μm filters for XRD and SEM analysis. The Zn and Fe concentrations (by ICP-AES) and pH were recorded immediately after mixing and again after the 24 h.
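The reagent dose in this synthesis can be checked with simple stoichiometry. The sketch below (not the authors' code) computes the mass of ZnSO4·7H2O required in 2 L of S1 water for a 2:1 Zn:Fe molar ratio, taking the dissolved Fe concentration at S1 (12.1 mg/L, reported later in the Results) as the Fe supply; it reproduces the 249.5 mg dose within rounding.

```python
MW_FE = 55.845          # g/mol
MW_ZNSO4_7H2O = 287.56  # g/mol for ZnSO4.7H2O

fe_mg_per_l = 12.1      # dissolved Fe at S1 (from the Results section)
volume_l = 2.0
target_ratio = 2.0      # Zn:Fe (mol/mol)

fe_mol = fe_mg_per_l / 1000.0 / MW_FE * volume_l   # mol Fe in 2 L
zn_mol = target_ratio * fe_mol                     # mol Zn required
mass_mg = zn_mol * MW_ZNSO4_7H2O * 1000.0          # mg of reagent

print(f"ZnSO4.7H2O required: {mass_mg:.1f} mg")    # ~249 mg, matching the text
```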
Stability diagrams for the mineral phases expected in the drainage were constructed using the Geochemist's Workbench software (GWB, ver. 14) to evaluate their formation in the drainage at various pH values. Measured element concentrations were used as input parameters for the modelling to account for the effects of coexisting cations and anions on solubility. Solubility diagrams were constructed using thermodynamic datasets generated from the Thermoddem database, modified where necessary by additions from the literature, in particular by incorporating layered double hydroxides and other iron oxides (Bravo-Suárez et al., 2004). Interlamellar anions, combinations of divalent and trivalent cations, and related species cause wide variations in the chemical compositions of LDHs. Therefore, the solubility products of Zn-Fe LDHs were estimated using the chemical compositions and thermodynamic data of the end-member hydroxides, sulfates, and carbonates (Allada et al., 2006). Thermodynamic data for these compounds were taken from enthalpies of formation measured by acid-solution calorimetry, and solubility products were based on solubility measurements (Bravo-Suárez et al., 2004; Hase et al., 2017).

General characteristics of water samples

Results of on-site measurements of the Ainai mine drainage are reported in Supplementary Table S1. Based on the Stiff diagram (Fig. 2a), the water samples are classified as Ca-SO4-type water, indicating mixing of the mine drainage with underground water rich in Ca and HCO3− before outflow from the mine head (Akashima et al., 2011). Although unusual for mine drainage, this classification was observed from the upper to the lower drain with negligible variation, implying that external factors have no significant effect on the drainage system. The HCO3− concentration is relatively high and decreased down-drain (246.2 to 127.1 mg L−1), indicating a strong buffering capacity. The Ainai mine drainage is a neutral to alkaline (pH 6.20 to 7.91) system, and pH increases down-drain with decreasing HCO3− (Fig. 2b). This inverse relationship reflects CO2 degassing by aeration (Kirby et al., 2007) and the dissociation of HCO3− to CO2 and OH− (Langmuir, 1997). Unlike most mine drainages, which are acidic, the Ainai mine drainage is circumneutral despite an abundance of SO42− in the system, owing to the underground mixing of the drainage. The Ainai mine drainage is generally an oxidative system, with oversaturation of DO (6.54-12.55 mg L−1), possibly attributable to microbial photosynthesis (Stumm and Morgan, 2006). ORP values increase from the upper to the lower drain (127 to 209 mV), possibly due to increasing pH (Stumm and Morgan, 2006). These properties exhibit negligible variation between July and October. However, the flow rate, turbidity, and Fe2+ concentrations display notable variations between the two months (Supplementary Table S1), with average flow rates of 32.80 and 22.96 L s−1 in July and October, respectively, attributed to heavier precipitation in July (Fig. S1a). The calculated charge imbalance for all water samples was within ±15%.
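A standard quality check on such major-ion analyses, and the basis of the ±15% charge-imbalance figure quoted above, is the charge-balance error. The sketch below (not the authors' code; the concentrations are illustrative of a Ca-SO4-type water, not the Ainai measurements) computes it from mg/L concentrations.

```python
CHARGES = {"Ca": 2, "Mg": 2, "Na": 1, "K": 1, "SO4": -2, "HCO3": -1, "Cl": -1}
MOLAR_MASS = {"Ca": 40.08, "Mg": 24.31, "Na": 22.99, "K": 39.10,
              "SO4": 96.06, "HCO3": 61.02, "Cl": 35.45}

def cbe_percent(mg_per_l: dict) -> float:
    """Charge-balance error (%) from major-ion concentrations in mg/L."""
    cat = sum(c / MOLAR_MASS[s] * CHARGES[s]
              for s, c in mg_per_l.items() if CHARGES[s] > 0)
    an = sum(c / MOLAR_MASS[s] * -CHARGES[s]
             for s, c in mg_per_l.items() if CHARGES[s] < 0)
    return 100.0 * (cat - an) / (cat + an)

# Illustrative Ca-SO4-type water (not the Ainai dataset)
sample = {"Ca": 120.0, "Mg": 15.0, "Na": 20.0, "K": 3.0,
          "SO4": 280.0, "HCO3": 150.0, "Cl": 10.0}
print(f"CBE = {cbe_percent(sample):+.1f}%")   # about -2%, well within +/-15%
```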
Relationship between Fe colloids and turbidity

The turbidity is generally higher in July, when rainfall and flow rates are high, than in October (Fig. 3a), while Fe2+ concentrations are lower in July than in October (Fig. 3b) as a result of dilution by the higher precipitation. There is an inverse relationship between turbidity and Fe2+ concentration (Fig. S1b): turbidity increases and Fe2+ concentrations decrease down-drain. Turbidity is commonly higher in rainy seasons, mainly due to the resuspension of sediments (Zay Ya et al., 2020; Rezaei et al., 2013). The higher precipitation at Ainai may have further contributed to geochemical processes in the drainage, with the inverse relationship between turbidity and Fe2+ concentration indicating oxidation of Fe2+ to Fe3+ (Nairn et al., 2002) and thereby facilitating the formation of nanoparticles (Buffle and Leppard, 1995) with increasing turbidity (Tikhonova, 2016; Yao et al., 2014). This process typically occurs in circumneutral-pH systems, as in this case. In the Ainai drainage, the abundance of Fe and the trend of decreasing Fe2+ imply the formation of Fe (oxy)hydroxide colloidal nanoparticles, whose mobility and perhaps aggregation behavior might be reflected by the turbidity. The turbidity continues to increase further down-drain in July, in contrast to the decrease observed in October when the flow rate is lower, indicating more rapid formation and/or aggregation of precipitates in the rainy season (July). The increasing turbidity supports the notion that the precipitates were mobilized over greater distances during the rainy season, indicating that flow rate should be a significant consideration in the design of passive treatment systems. Turbidity increases with particle size (Tikhonova, 2016; Yao et al., 2014), implying more aggregation of Fe colloids in the rainy season, such that increased interaction among the colloids overcomes the repulsive forces between them (Petosa et al., 2010; Baalousha, 2009), leading to more aggregation. At P1 and P2 (Fig. 1a), turbidity decreases markedly because of the longer residence time and sedimentation in the ponds (Fig. S2); the larger particles in July settled more quickly, with a greater drop in turbidity than in October.

Distribution of Fe, As, and Zn in dissolved and colloidal fractions of mine drainage

Fe, As, and Zn concentrations in the dissolved and colloidal fractions of the water samples are reported in Supplementary Table S2 and plotted in Fig. 4. The dissolved Fe concentration decreases down-drain (12.1 mg L−1 at S1 to 0.015 mg L−1 at P2), most notably between S1 and S3, where it is almost completely replaced by colloidal Fe, which forms increasingly as flow proceeds downward (Fig. 4a). The dissolved Fe concentration is similar to the Fe2+ concentration (Fig. S1) obtained by the on-site pack tests, implying that the dissolved Fe fraction is predominantly Fe2+. Colloidal Fe thus forms in the upper drain, initiated by the oxidation of Fe2+ to Fe3+, allowing the formation of Fe hydroxide nanoparticles (Pokrovsky and Schott, 2002). The nanoparticles are prone to aggregation (Kellner and Köhler, 2005) with increasing particle size, and the turbidity increase (Fig. 3) is attributed to the formation of Fe colloids (Liao et al., 2017). Despite the increase in the colloidal Fe fraction (1.39 mg L−1 at S1 to 8.79 mg L−1 at S2) at the expense of dissolved Fe, the total Fe concentration decreases down-drain (Fig. 4a) due to the aggregation of colloids to a size that is removed efficiently by gravitational settling, thereby removing the particles effectively from the drainage. Arsenic, which is mainly arsenite, As(III), throughout the drainage, shows a similar trend to Fe, most noticeably in the distribution between dissolved and colloidal fractions in the drain (Fig. 4b). Colloid formation is inferred at S2 and S3 (where Fe colloids also dominate) and total As continues to decrease down-drain, reflecting the impact of Fe colloids on As mobility. Thus, Fe colloids behave as the primary colloids whereas As exists as a pseudo-colloid, so the fate of As is mainly determined by Fe colloids in the drainage (Fritzsche et al., 2011). The impact of Fe nanoparticles on As in aquatic systems has been studied previously (Zhao et al., 2011; Leupin and Hug, 2005), with several removal mechanisms proposed, including co-precipitation of Fe hydroxides with As (Crawford et al., 1993; Yokoyama et al., 1999) and adsorption of As by Fe hydroxides (Khamphila et al., 2017).
Considering the similar trends in Fe and As observed here, we suggest that As is incorporated into the Fe colloids and removed from the mine drainage by co-precipitation and aggregation of the As-bearing Fe colloids. An inverse relationship between the seasons is also observed: Fe concentrations are higher in July while As concentrations are lower. The higher Fe concentrations most likely derive from the underground source, and the lower As concentrations are associated with the increased Fe, which allows more sorption of As; conversely, in October the lower Fe leaves relatively more As in the drainage. Further downstream, however, some As remains in the drainage in the dissolved phase. Zeng (2003) reported that Si decreases the affinity of iron oxides for As adsorption, which may leave residual As in the drainage. In contrast, Zn and Si (Fig. 4c and d) display trends different from those of Fe and As: they exist mainly as dissolved fractions and are only minimally removed from the drainage. Various constraining factors may resist the colloid formation and removal of these elements and, because the metal concentration patterns alone are difficult to interpret, mineralogy and modelling were used to clarify their behavior.

Mineral compositions and aggregation behavior of Fe colloids

Particles collected as colloids on the 200 kDa filters were further characterized by TEM and EDS. They display aggregated spherical structures (Fig. 5a), typical of Fe colloids (Liao et al., 2017; Gledhill and Buck, 2012). The particles were more abundant on filters from sites S2 and S3, implying that colloid formation occurred mainly in the upper drain, soon after oxidation of Fe2+ to Fe3+. The composition of the colloids was homogeneous throughout the drainage, comprising mainly Fe, Si, S, C, and O (Fig. 5b); As concentrations were too low to be detected by EDS. The colloids were likely Si-bearing 2-line ferrihydrite (Dold and Fontboté, 2002), as implied by the XRD results for the suspended solids (shown in the following section), which has been reported to be stable, especially at pH ≥ 4, thus explaining the stability of the Fe colloids formed in the Ainai mine drainage. This also reflects the minimal removal of Si from the drainage, part of which is incorporated into the ferrihydrite colloids. The typical colloid aggregates collected on the ultrafilter are 100-200 nm in size, with distinctive spherical shapes (Fig. 5a), suggesting that colloids remain suspended at around this size and are transported further in the drainage. However, since Fe oxides have been reported to form from smaller precursor particles, the particles were examined further by TEM. Enlargement under TEM (Fig. 5c) indicates that the colloids are aggregates of finer-grained particles of 3-5 nm diameter (Fig. 5d). The Fe colloidal particles under TEM exhibit two diffraction rings at 2.66 and 1.49 Å, consistent with core-shell ferrihydrite (Weatherill, 2016). Core-shell ferrihydrite is a precursor to ferrihydrite, consisting of pre-nucleation clusters of Fe ions; such clusters have also been associated with the spherical structure of ferrihydrite colloids (Michel, 2007).
These observations show that the Fe colloids in the drainage, following oxidation, slowly grew into the spherical colloid aggregates observed on the ultrafilters. Previous studies have indicated that nanoparticle aggregation is inevitable in liquid phases and significantly alters their properties, thereby affecting the stability of colloids (Petosa et al., 2010; Baalousha, 2009). The surface charge of colloids, which is affected by solution pH, anion concentrations, and organic matter, among other factors, enhances repulsive interactions and disperses them in the liquid phase, but continued interaction overcomes these forces, allowing agglomeration. As the colloids also host As, their aggregation behavior, which gives insight into their mobility and deposition characteristics, was investigated semi-quantitatively. Core-shell ferrihydrite (~3 nm) is evident in the TEM micrographs (Fig. 5), implying that the collected colloids were aggregates formed from core-shell ferrihydrite, which then aggregated into spherical, stable ~100 nm Fe hydroxide colloids. The increase in turbidity described in Section 3.2 is thus a result of the aggregation of core-shell ferrihydrite to form ~100 nm colloids. That these colloids were collected on the ultrafilters implies that they had not settled and remained in the drainage to a certain point. Since the distances between sampling points were measured during field sampling, the relationship between colloid concentration and distance is reported in Fig. 6. The colloids formed at S1, S2, and S3 are transported down-drain, while significant deposition is observed at S4, suggesting that the colloids aggregated to a particular size before deposition. Observation of the colloids at different points along the drainage revealed variations in size and aggregation (Figs. S2 and S3), with a gradual increase in size and in colloidal aggregates from S1 to S4 measured by microscopic observation: S1, S2, and S3 show 80 to 300 nm colloid aggregates, increasing in abundance from S1 to S3, whereas S4 was mainly composed of highly aggregated colloids of 300-400 nm. Particle aggregates larger than 400 nm were not observed on the ultrafilters, implying that colloid aggregates of >400 nm were deposited at the bottom of the drainage. Deposition of the colloids in the drainage thus occurs at ~300 to ~400 nm particle size, indicating that the colloids are efficiently removed by gravitational settling once they reach a certain size. The rate of aggregation in the drainage, as indicated by the turbidity, warrants study. The colloids in this system do not disintegrate after formation and are mobilized over considerable distances before deposition in the reservoir and sedimentation ponds. Colloid stability may be associated with Si in the system (Vempati et al., 1990), which allows the formation of stable ferrihydrite. The aggregation rate may be explained by DLVO theory, which highlights the electrostatic factors affecting aggregation behavior. Given the high pH of the system, which places it near the point of zero charge, aggregation of the particles is significant, hence the colloid deposition observed at about 500 m from their formation.
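As a rough check on why aggregate size controls deposition, the sketch below (not from the paper; the aggregate density is an assumed literature-style value for ferrihydrite-rich aggregates, and porous aggregates would be lighter and slower) evaluates Stokes terminal settling velocities for 100-400 nm spheres. The velocities scale with the square of the diameter, and even the largest aggregates settle slowly, consistent with colloids being carried hundreds of metres before depositing mainly in the low-flow reservoir and ponds.

```python
G = 9.81        # gravitational acceleration, m/s2
RHO_P = 3800.0  # assumed aggregate density, kg/m3 (ferrihydrite-rich solid)
RHO_F = 1000.0  # water density, kg/m3
MU = 1.3e-3     # dynamic viscosity of water near 10 degC, Pa.s

def stokes_velocity(diameter_m: float) -> float:
    """Terminal settling velocity (m/s) in the Stokes (laminar) regime:
    v = 2 (rho_p - rho_f) g r^2 / (9 mu)."""
    r = diameter_m / 2.0
    return 2.0 * (RHO_P - RHO_F) * G * r * r / (9.0 * MU)

for d_nm in (100, 300, 400):
    v = stokes_velocity(d_nm * 1e-9)
    print(f"{d_nm} nm: {v:.2e} m/s (~{v * 86400 * 1000:.1f} mm/day)")
```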
According to DLVO theory, repulsive forces limit aggregation; this limitation is weak in the Ainai mine drainage, so aggregation proceeds far enough to remove colloids from the water. The aggregation rate also varies between July and October. In July, the turbidity increases quickly and also decreases quickly downstream, whereas in October the turbidity increases slowly and remains at lower values over longer distances (Figs. 3 and 6). This indicates that in July the colloids aggregate, and are deposited, faster than in October. Increased van der Waals interactions (Hiemenz, 1972; Hunter, 1963) may be responsible for this phenomenon: in July, total Fe is more abundant in the drainage, allowing more colloid formation; interactions among colloids are therefore also increased, resulting in faster aggregation than in October. Consequently, the larger aggregates reach an ideal settling size quickly in July, allowing quicker deposition of the As-bearing Fe colloids than in October.

Removal of Zn by colloid formation

Fe and As concentrations decrease steadily in response to colloid formation, but Zn concentrations decrease in an irregular pattern (Fig. 4c). Formation of Zn colloids is observed downstream from S2 as pH increases slightly, most likely because Zn colloid formation is highly pH-dependent (Roberts et al., 2002). A colloidal fraction is also observed at sites S3 and S4, but the total Zn concentration does not decrease significantly at these sites. Considering that the Fe concentration in the drainage is significantly higher than that of As, and the high adsorption efficiency of ferrihydrite at circumneutral pH (Hao et al., 2018), the Fe in the drainage should be sufficient to remove the Zn. It follows that other factors must inhibit Zn removal by colloids, possibly involving a different removal mechanism. FE-SEM observation coupled with EDS of natural suspended particles collected at P1 (Fig. 7a) revealed, besides minor calcite and gypsum, layered particles containing Zn, Fe, Ca, Si, C, and O (Fig. 7b). Furthermore, XRD peaks of the same natural samples at 2θ of around 12.2°, 20.1°, and 59.5° corresponded to previously reported Zn-Fe LDHs (Zaher, 2020; Moaty et al., 2016) (Fig. 7c), further supported by a supplementary FTIR characterization (Fig. S4).
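The d-spacings corresponding to the quoted XRD peaks follow directly from Bragg's law; the short sketch below (not the authors' code) converts the three 2θ positions to d-spacings for Cu Kα radiation. The ~7 Å spacing from the lowest-angle peak is in the range typical of carbonate-interlayered LDH basal reflections.

```python
import math

WAVELENGTH_A = 1.5406  # Cu K-alpha wavelength, angstroms

def d_spacing(two_theta_deg: float) -> float:
    """Bragg's law (first order): d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH_A / (2.0 * math.sin(theta))

for tt in (12.2, 20.1, 59.5):
    print(f"2theta = {tt:5.1f} deg -> d = {d_spacing(tt):.2f} A")
# 12.2 deg gives d of about 7.25 A, a plausible LDH basal spacing
```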
These observations strongly suggest that an LDH is responsible for the removal of Zn from the mine drainage. Despite this finding, such particles were rare in the drainage and relatively small, implying limited formation. Therefore, synthesized layered particles were obtained using the drainage water samples to which the Zn reagent had been added (Fig. 7d). Zn and Fe concentrations in the water sample decreased drastically after 24 h (Table S3), accompanied by significant particle formation. FE-SEM (Fig. 7d) and EDS of the synthesized particles likewise showed layered particles similar in morphology and composition to the natural sample. In addition, the abundance and size of the particles were much greater than in the natural samples, implying that the Zn concentration may have limited the formation of Zn-Fe LDH in the drainage. Following the synthesis, in addition to the more numerous and larger Zn-bearing particles collected by filtration of the sample, an increase in pH was observed (6.19 to 7.82). Our findings therefore suggest that Zn concentration and pH are critical factors in Zn sequestration from mine drainage. Layered double hydroxides, comprising trivalent and divalent cations with interlayer anions (Hase et al., 2017), remove toxic elements effectively from a variety of systems (Hase et al., 2017; Hao et al., 2018). Toxic metal ions can be removed from water by LDHs via: (i) precipitation of metal hydroxides onto their surface; (ii) adsorption through bonding with LDH surface hydroxyls; (iii) isomorphous substitution; and (iv) chelation with functional ligands in the interlayers (Xu, 2013). Therefore, we suggest the formation of an Fe2+-Fe3+-CO32− LDH, into which Zn may be isomorphously substituted, as a remediation mechanism for the Ainai mine drainage, quickly transforming to a stable phase composed of Zn, Fe, Ca, C, and O, i.e., a Zn-Fe LDH. The formation and stability of an Fe2+-Fe3+-CO32− LDH were considered at various pH values by thermodynamic modelling using the GWB software (Fig. 8). The measured elemental concentrations in water samples at site S2 were used as input parameters for the modelling to account for the effect of coexisting cations and anions on colloid solubility and separation. Bravo-Suárez et al. (2004) estimated solubility products of LDHs with different anions based on their chemical compositions and thermodynamic data for their end-members; these data were used to model the formation of LDHs in the mine drainage. A gradual decrease in Fe2+ concentration occurs from the upper to the lower drain, particularly at S3 (Fig. 3b). In the Eh-pH stability diagram (Fig. 8), the Fe2+-Fe3+-CO32− LDH is predicted to be stable under the measured drainage conditions at pH greater than 7.5, consistent with the observed Zn removal.

Conclusions

This study provides insights into the importance of nanomaterials such as Fe colloids and LDHs for the sequestration of toxic elements in circumneutral mine drainage. Our understanding of the formation of these nanomaterials highlights the geochemical properties and processes that play important roles in mine drainage and might be applicable to the treatment of drainage from other mines. A supply of Fe2+ from underground wastewater promotes the formation of spherical, homogeneous ~100 nm Fe colloids that are micro-aggregates of core-shell ferrihydrite, which co-precipitate with As, thereby facilitating As removal. Minor variations in Fe concentration significantly affect the inverse relationship between Fe and As, especially in terms of colloid size, with the small decrease in Fe concentration in October significantly increasing the As concentration. We have established that the mobility of elements depends strongly on the size of the colloids, which is significantly affected by the aggregation rate. Fe colloids are mobilized for longer in the drainage until aggregating to about 300-400 nm in size, when they are gravitationally separated. Precipitation and flow rate affect colloid interaction and thereby provide first-order controls on aggregation and deposition, so these parameters should be closely monitored in passive treatment systems. The application of LDHs as sequestration agents has been explored previously (Wang et al., 2014). Zn is reportedly a challenging element to remediate in natural systems due to its high solubility and poor adsorption onto minerals such as hydroxides and carbonates. Here, a novel approach involving isomorphous incorporation of Zn into an existing Fe2+-Fe3+-CO32− LDH to form a Zn-Fe LDH is demonstrated using geochemical modelling, synthetic samples, and observations of natural samples.
A combination of high Zn and Fe concentrations and pH > 7.5 is ideal for efficient removal of Zn in passive treatment systems, with Zn-Fe LDH nanoparticles predominating in naturally treated mine drainage. Critical geochemical factors for heavy-metal removal include wastewater chemistry and composition, pH, flow rate, and aggregation rate; understanding these factors clarifies the role of turbidity and the sequestration mechanisms. Our findings imply that quantitative prediction of the behavior of nanoparticles such as colloids and LDHs might facilitate the optimal design of highly efficient treatment systems, have general applications to mine drainage and other aquatic systems, and improve our understanding of the interaction between toxic elements and the colloids that form in these systems.

CRediT authorship contribution statement

Frances Chikanda: Conceptualization, field and laboratory investigations, formal analysis, writing - original draft. Tsubasa Otake: Conceptualization, investigations, data analysis, validation, review and editing. Aio Koide: Conceptualization, field and laboratory investigations, data analysis and discussions. Akane Ito: Conceptualization, field and laboratory investigations, data analysis and discussions. Tsutomu Sato: Conceptualization, investigations, formal analysis, review and editing, validation.

Declaration of competing interest

The authors declare that they have no known competing or conflicting interests that may have influenced the work reported.
Glycoform-independent prion conversion by highly efficient, cell-based, protein misfolding cyclic amplification

Prions are formed of misfolded assemblies (PrPSc) of the variably N-glycosylated cellular prion protein (PrPC). In infected species, prions replicate by seeding the conversion and polymerization of host PrPC. Distinct prion strains can be recognized, exhibiting defined PrPSc biochemical properties such as the glycotype, and specific biological traits. While strain information is encoded within the conformation of PrPSc assemblies, the storage of the structural information and the molecular requirements for self-perpetuation remain uncertain. Here, we investigated the specific role of the PrPC glycosylation status. First, we developed an efficient protein misfolding cyclic amplification method using cells expressing the PrPC species of interest as substrate. Applying the technique to cells expressing PrPC glycosylation mutants revealed that neither PrPC nor PrPSc glycoform stoichiometry was instrumental to PrPSc formation and strainness perpetuation. Our study supports the view that strain properties, including the PrPSc glycotype, are enciphered within the PrPSc structural backbone, not in the attached glycans.

The prion phenotype results from the conformational change of specific amyloidogenic proteins. This change is based on the self-sustained transfer of structural information from a protein conformer in the prion state to the same protein in the non-prion conformation, presumably through a seeding-polymerization process. Initially formulated to explain the pathogenesis of prion diseases in humans and animals, the prion concept has gained wider relevance in the regulation of diverse biological processes and in the progression of other neurodegenerative disorders such as Alzheimer's and Parkinson's diseases [1-3]. Mammalian prions are primarily formed of macromolecular assemblies of PrPSc, a misfolded, β-sheet-enriched form of the ubiquitously expressed, α-helix-rich, host-encoded prion glycoprotein PrPC. Within a defined host species, PrPC can be transconformed into many prion variants or strains, differing in their PrPSc conformation at the level of the tertiary and/or quaternary structure and in their biological properties [4-7]. In particular, prions maintain strain-specific stoichiometric ratios of PrPSc glycoforms on serial passaging in the same host species 8,9, leading to the view that glycans may somehow participate in the encoding of prion strain information. Consistently, transgenic modelling suggested that PrPC glycosylation status influenced the efficacy of intra- and cross-species transmission of prions [10-12] and prion strain properties 13. However, such studies remained difficult to interpret, given that the point mutations inserted to prevent N-linked glycosylation, or the altered trafficking of the mutant PrPC, rather than N-glycan removal itself, may be the primary cause of the observed alterations in prion propagation (see ref. 14 and references therein). The intrinsic convertibility of PrPC glycosylation mutants into PrPSc, and the role of the attached glycans in prion strainness, thus remain an open question. While the molecular mechanisms and the cellular factors potentially involved in PrPSc formation remain largely undefined, PrPC can be converted into PrPSc in a test tube after the addition of minute amounts of PrPSc seeds by a technique designated protein misfolding cyclic amplification (PMCA) 15.
PMCA increases the ability of PrPSc to template the conversion of PrPC through repetitive cycles of incubation and sonication. As the main source of PrPC substrate, most proprietary PMCA protocols use brain homogenate from susceptible animals or transgenic mouse models expressing the PrPC of interest. The sensitivity achieved by PMCA allows amplification of subinfectious levels of PrPSc in biological samples such as blood, urine, faeces, or cerebrospinal fluid of humans and animals infected with prions [16-20]. PMCA products or 'amplicons' are truly infectious and generally exhibit the same strain properties as the PrPSc seeds [21-24]. A limited number of experiments have been performed by replacing brain substrate with cell substrate [25-28], despite the availability of a number of cell models expressing PrPC from different species and permissive to prions (for review 29). These cell-based PMCA assays generally yielded either low PrP conversion rates or insufficient sensitivity for routine application in high-throughput protocols. While using brain material is not a limiting step for routine use of PMCA, addressing the contribution to the prion conversion process of certain PrPC polymorphisms, mutations, or post-translational modifications such as glycosylation becomes an issue with this technique when suitable transgenic mouse models are not available. In the present study, we adapted the miniaturized-bead PMCA (mb-PMCA) protocol 22 to the use, as PrPC substrate, of cell lysates from RK13 cell lines 30 expressing PrPC from different species or carrying point mutations. We report highly efficient amplification of scrapie, hamster, and human prions and, to a lesser extent, of mouse-adapted prion strains. We next addressed the question of the prion convertibility of several ovine PrP glycosylation mutants defective in glycosylation at either or both sites of PrP 14. At variance with earlier reports using cell 14,31 or transgenic mouse 11 modelling, the PrPC glycosylation mutants were converted as efficiently as wild-type PrPC into bona fide prions by PMCA with two unrelated prion strains. Our study also reveals that interactions between defined stoichiometric ratios of PrPC or PrPSc glycoforms are not key to PrPSc formation, supporting the view that the strain-specific glycotype is enciphered within the PrP backbone. In ethical and practical terms, the development of a highly sensitive cell-lysate-based PMCA will allow reducing and even bypassing the use of animal tissues.

Results

Cell-based mb-PMCA efficiently amplifies prions. Previously, we developed the so-called mb-PMCA procedure, allowing efficient amplification of prions from different species in a single 48-hour round and in a microplate format 18,22,32. The experimental conditions were primarily established with brain material from transgenic tg338 mice overexpressing ovine PrPC and the 127S scrapie strain, a prototypal 'fast' strain killing tg338 mice within 2 months 33. To progressively replace tg338 brain substrate with cell substrate, we used rabbit kidney epithelial RK13 cells constitutively expressing the VRQ allele of ovine PrPC (P2FJ6 clone) and susceptible to 127S prions 32,34. At the same protein concentration, these cells express approximately two-thirds of the PrPC level of tg338 mouse brain, and the PrPC glycoform pattern is cell-specific (Fig. 1A and refs 30,35).
The cell-based mb-PMCA (Cell-mb-PMCA) procedure was performed by seeding serial 10-fold dilutions of 127S-infected brain homogenate into P2FJ6 cell lysates and running 96 incubation/sonication cycles in a 48-h round in 96-well microplates. The amplicons were treated with proteinase K (PK) to eliminate PrPC before detection of PK-resistant PrPSc (PrPres) by western blotting. Cell-mb-PMCA efficiency in amplifying 127S prions was tightly dependent on the total protein concentration of the cell lysate. At a concentration of 2 mg/mL, there was no significant amplification of 127S seeds (not shown). At 6 mg/mL, PrPres was detected in reaction mixtures seeded with 10−4-diluted 127S brain homogenate. Further concentration of the cell lysate to 10 mg/mL led to detection of PrPres in reaction mixtures seeded with 10−5- to 10−6-diluted inoculum (Fig. 2). Supplementation of the 10 mg/mL cell lysate with brain homogenate from PrP0/0 mice (1:1 ratio) allowed detection of PrPres from 127S brain homogenate diluted up to 10−7 (Fig. 2). PMCA performed with untransfected parental RK13 cells resulted in no detectable amplification of PrPres (Fig. S1), consistent with the non-detection of endogenous rabbit PrPC in these cells 30 and the poor convertibility of rabbit PrPC by 127S-like prions 36.

Figure 1. Cell lysates (see Table 2) were probed for the presence of PrPC by western blotting (SAF34 anti-PrP monoclonal antibody). Brain lysate of ovine PrP tg338 mice (A) or cell lysate from the P2FJ6 clone (B) was used to determine the relative expression levels in the different cell lines. Migration sizes of standard molecular mass markers (kDa) are indicated on the left.

Various molecules reportedly enhance prion conversion and amplification in a strain-dependent manner [37-43]. We examined whether the addition of negatively charged molecules, such as dextran sulfate (DSS), would further improve Cell-mb-PMCA sensitivity. Addition of 1% DSS (>500 kDa) led to amplification of the 127S strain up to the 10−9 dilution, i.e., approximately 10-100-fold less than the sensitivity routinely obtained with tg338 brain as substrate 22. All the unseeded samples were negative in these experiments (Fig. 2, lanes U). The conversion yield of the PrPC present in the cell lysate, as examined after thermolysin treatment of the amplified products 22, was ≤30% (data not shown), as previously observed with tg338 mouse brain material 22. Compared to brain PrPres (Fig. 2, lanes Inoculum), the PrPres glycoprofile of the amplified product was cell-specific, consistent with the differences in PrPC glycan content between brain and non-neuronal cultured cell models 30,35. Furthermore, the Cell-mb-PMCA amplicon exhibited a higher molecular mass (~2 kDa) than the PrPres accumulating in infected P2FJ6/Rov cell cultures (Fig. 2, compare with the Cell PrPres lane). In Rov cells, biosynthesized PrPSc is naturally cleaved by cathepsin proteases to produce the so-called C2 fragment, which is more truncated than the PK-resistant core of 127S PrPSc 35. Conceivably, interactions between PrPSc and cathepsin proteases may not occur during Cell-mb-PMCA, due to disruption of endolysosomal vesicles by the detergent used in the PMCA lysis buffer.
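The endpoint titrations above can be summarized compactly as limiting dilutions. The sketch below (not the authors' code; the endpoints are read from the experiments described above, taking the best endpoint where a range is given) converts them into log10 sensitivity gains relative to the 6 mg/mL condition.

```python
import math

# Limiting dilutions: last 10-fold seed dilution still yielding PrPres
limiting = {
    "cell 6 mg/mL": 1e-4,
    "cell 10 mg/mL": 1e-6,
    "cell 10 mg/mL + PrP0/0 brain": 1e-7,
    "cell 10 mg/mL + PrP0/0 brain + 1% DSS": 1e-9,
}

baseline = limiting["cell 6 mg/mL"]
for cond, dil in limiting.items():
    gain = math.log10(baseline / dil)   # log10 sensitivity gain vs baseline
    print(f"{cond:40s} endpoint 10^{math.log10(dil):.0f} "
          f"(+{gain:.0f} log10 vs 6 mg/mL)")
```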
We next determined whether the infectivity of the amplified products generated using cell lysates would correlate with the efficacy of amplification. The amplicons obtained with the 10−7 127S seed in 10 mg/mL cell lysates supplemented with PrP0/0 brain, in the presence or absence of 1% DSS, were tenfold diluted up to the 10−7 dilution and immediately inoculated intracerebrally into reporter tg338 mice (Table 1). Mice inoculated with unseeded controls did not develop any clinical disease and were euthanized healthy at 240 days post-inoculation.

Figure 2. Endpoint titration of 127S prions by cell-lysate-based mb-PMCA supplemented with PrP-knockout brain. P2FJ6 cell lysates were prepared at two protein concentrations and used either alone or mixed (1:1) with 10% PrP0/0 mouse brain (Br) lysate, in the absence or presence of 1% dextran sulfate sodium (DSS), as indicated. The lysates were then used as PMCA substrate to amplify serial 10-fold dilutions of brain homogenate from tg338 mice infected with 127S prions. Each dilution was directly analysed by western blotting for PrPres content (Sha31 antibody) after proteinase K treatment. For comparison purposes, the first two lanes illustrate the PrPres content of non-amplified products (10−3 and 10−4 dilutions); the last two lanes show the PrPres and PrPC electrophoretic profiles of 127S-infected P2FJ6 cell lysate (Cell PrPres) and normal tg338 mouse brain lysate (Br PrPC), respectively. Lanes U correspond to unseeded lysates run on the same microplate. Note the difference in unglycosylated PrPres molecular mass between Cell-PMCA-generated products (small arrow) and cell-passaged prions (arrowhead). *Low-size PrPres fragments.

Table 1. Incubation time of tg338 mice inoculated with serial tenfold dilutions of Cell-based mb-PMCA-generated 127S prions (columns: dilution; incubation time in days ± SEM (n/n0) for the cell lysate − DSS, cell lysate + DSS, and tg338 brain substrates). Amplicons obtained from a 10−7 127S seed mixed with 10 mg/mL cell lysates supplemented with PrP0/0 brain, in the absence (− DSS) or presence (+ DSS) of 1% dextran sulfate, were tenfold diluted up to the 10−7 dilution and immediately inoculated intracerebrally into reporter tg338 mice. n/n0: number of mice with neurological disease and positive for PrPres in the brain by immunoblotting/number of inoculated tg338 mice. *Non-affected mice euthanized healthy at 240 dpi. Data in italics are from 22. ND: not done.

A 100% attack rate was observed with the cell-generated amplicons diluted up to 10−4. At the 10−5 dilution, 1/5 (− DSS) and 2/5 (+ DSS) tg338 mice were infected. At the 10−6 and 10−7 dilutions, none of the mice developed the disease and all were euthanized healthy. There was thus no significant impact of adding DSS to the PMCA reaction on the infectivity of the amplified products. Collectively, these data indicate that the cell-generated amplicons were 100-fold less infectious than the brain-generated amplicons (Table 1 and ref. 22). This value was consistent with the difference in amplification efficiency observed between cell and brain lysates. We next examined whether the Cell-mb-PMCA protocol (10 mg/mL protein concentration, addition of 1% DSS and PrP0/0 brain) would amplify prions from other species. We seeded RK13 cell lysates expressing hamster (HaRK13), human (methionine at codon 129, HuRK13), or mouse (MoRK13) PrPC (Fig. 1; ref. 44 and unpublished data) with serial dilutions of 263K, vCJD, and 139A prions, respectively, and compared the sensitivity achieved with that obtained with transgenic mouse brain as substrate. The results are summarized in Fig. 3, which is representative of more than 4 independent experiments.
PrP res from hamster 263K and human vCJD prions was amplified from 10^-7- and 10^-8-diluted input seeds by using the HaRK13 and HuRK13 cell substrates, respectively. This sensitivity was close to that obtained with transgenic mouse brain (Fig. 3 and ref. 22). In contrast, 139A prions were less efficiently amplified, as two PMCA rounds without PrP 0/0 brain supplementation were necessary for PrP res detection from the 10^-5 dilution, compared with the 10^-7 dilution amplified in one round with tga20 brain lysate (Fig. 3 and ref. 22). More sensitive cell clones expressing higher levels of mouse PrP C (Fig. 1), or MoRK13 cell-specific conditions, remain to be found to improve the amplification of 139A prions.

[Figure 3 caption: lysates from RK13 cell lines expressing hamster, human and mouse PrP C (supplemented with 1% DSS and, where indicated, PrP 0/0 brain (Br)) or brain homogenates from hamster PrP (tg7), human PrP (tg650) and mouse PrP (tga20) mice were seeded with serial dilutions of brain homogenates containing hamster 263K prions, human vCJD prions or mouse 139A prions and submitted to a single round (263K, vCJD) or 2 rounds (139A) of PMCA; unamplified inoculums (first two lanes of each panel), unseeded controls (lanes U) and the amplified samples were digested with PK before western blotting (Sha31 antibody) analysis of PrP res content.]

Collectively, our data indicate that Cell-based PMCA, like mouse brain-based mb-PMCA 22, is a versatile protocol allowing amplification of minute amounts of prions from different species, including human.

Cell lysates with high protein concentration avoid the use of PrP knock-out mouse brain material and DSS for efficient prion amplification.

The positive correlation between the protein concentration in the cell lysate used as substrate and the PMCA sensitivity of 127S detection led us to reason that further increasing the total protein concentration above 10 mg/mL in the cell lysates might allow efficient prion amplification without additives. To obtain highly concentrated cell lysates, we cultivated P2FJ6 cells in multilayer preparative flasks. The cell lysates were then used 'crude' in Cell-mb-PMCA reactions. As shown in Fig. 4A, concentrating the cell lysates from 12 mg/mL to 24 mg/mL increased the sensitivity of 127S detection by 5 log10. At that concentration, the sensitivity achieved was similar to or even higher than that obtained with tg338 brain substrate run on the same microplate (Fig. 4B). Relating the limiting dilution achieved to the total protein concentration in the cell lysate showed a strong correlation between total protein concentration and the efficacy of PMCA amplification (Fig. 4C). Of note, a 10% brain homogenate would provide a 10-12 mg/mL protein concentration. Thus, the use of cell lysate highly concentrated with regard to protein content allows amplifying minute amounts of 127S prions in a brain-free context.
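Figure 4C summarizes this relationship as a correlation between total protein concentration and the limiting dilution reached. The short sketch below shows, in plain Python, how such a log-linear summary can be computed; except for the 12 and 24 mg/mL points and the ~5-log10 gain quoted above, the values are hypothetical placeholders, not the authors' measurements.

```python
# Illustrative log-linear summary of limiting dilution versus total protein
# concentration, as in Fig. 4C. Intermediate points are hypothetical; only the
# 12 -> 24 mg/mL comparison (a ~5 log10 gain) is taken from the text.
import numpy as np

conc_mg_ml = np.array([6.0, 10.0, 12.0, 18.0, 24.0])      # substrate protein
log10_limit = np.array([-4.0, -5.5, -6.0, -8.5, -11.0])   # limiting dilution

slope, intercept = np.polyfit(conc_mg_ml, log10_limit, 1)
r = np.corrcoef(conc_mg_ml, log10_limit)[0, 1]
print(f"slope = {slope:.2f} log10 per mg/mL, r = {r:.2f}")
```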
Efficient Cell-based mb-PMCA conversion of ovine PrP glycosylation mutants by two distinct prion strains.

PrP C has two variably occupied glycosylation sites, at amino acids N184 (site 1) and N200 (site 2) (ovine PrP sequence numbering). Modelling in RK13 cells previously suggested that the unglycosylated double PrP mutant failed to be converted by 127S prions, even after being properly expressed at the cell surface via an ectopic glycosylation site in the N-terminus of PrP C 14. Prion convertibility of the monoglycosylated mutants was site-dependent, with mutants at site 2 all being convertible and mutants at site 1 being convertible only when the N184D amino acid substitution was made. These negative results opened the possibility that some mutants were intrinsically not convertible into prions, due to the point mutation or to N-glycan removal. We examined this possibility by seeding the different mutant cell lysates (Fig. 5 and Table 2) with serially diluted seeds of 127S prions and running Cell-mb-PMCA (10 mg/mL cell lysate supplemented with 10% PrP 0/0 brain homogenate, no DSS). Figure 1B illustrates the PrP C electrophoretic pattern and expression level in the different glycosylation-mutant cell lysates relative to the wild-type P2FJ6 cells. Overall, PrP C expression levels in the different PrP glycosylation mutants were low, ranging from ~3% (N184Q) to 32% (NDND double mutant). All the PrP C glycosylation mutants were converted into PrP Sc by Cell-mb-PMCA. In one round, the limiting dilution of the 127S input seed was established at 10^-5 for the N184Q and N200Q mutants and at 10^-6 for the N200D and NDND mutants. Despite the low expression levels of mutant PrP C in the cell lysates, the sensitivity achieved was thus only 100-fold (N184Q) and 10-fold (N200D, NDND) lower than that observed using wild-type cell lysates. After a second round (Fig. 5A), the limiting dilution was established between 10^-7 and 10^-9 for all but the N200Q mutant, for which it was established at 10^-6. Taken together, these results demonstrate that unglycosylated and monoglycosylated PrP C mutants are intrinsically convertible into PrP Sc by 127S prions, independently of their non-convertibility once expressed in RK13 cells. We next examined whether the absence of a PrP C glycosylation requirement for in vitro prion conversion would apply to another prion strain, designated T1 Ov, obtained after adaptation to tg338 mice of prions responsible for a rare cortical, MM2 form of sporadic Creutzfeldt-Jakob disease 45. T1 Ov prions have no strain properties in common with 127S prions but are amplified by PMCA using tg338 brain as substrate with a similar efficacy 45. In two rounds, the limiting dilution of the T1 Ov input seed was established at 10^-6 for the N184D and N200D mutants, as for wild-type PrP, and at 10^-7 for the NDND mutant (Fig. 5B). It can be noted that the proportion of low-size PrP res fragments in the lowly glycosylated amplicons markedly differed between the two strains (Fig. 5), further differentiating the two agents.

Cell-based mb-PMCA-generated lowly glycosylated prions are highly infectious.

We finally addressed whether the monoglycosylated or unglycosylated PrP Sc products generated by glycosylation-mutant Cell-mb-PMCA were infectious and retained strain-specific biochemical and neuropathological properties. Amplicons generated from reaction mixtures seeded with 10^-7 127S brain material (amplified over 2 rounds to exclude any residual infectivity of the input seed) were inoculated by the intracerebral route into reporter tg338 mice.

[Table 2. Convertibility of the PrP glycosylation mutants in cell culture and by cell PMCA. n/n 0: number of mice with neurological disease and positive for PrP res in the brain by immunoblotting / number of inoculated tg338 mice. The seeds used to infect mice were from a 127S 10^-7 seed amplified over 2 rounds, so as to avoid any residual input. *Cell conversion was assessed in ref. 14. #The PMCA product inoculated was obtained by seeding tg338 brain lysate with unglycosylated PrP Sc seed (10^-7 dilution), itself obtained by seeding the NDND double-mutant cell lysate with 10^-8-diluted 127S seed. nd: not done.]
As shown in Table 2, the amplified products generated with the glycosylation mutants induced disease in mice with an efficacy similar to that of the products generated on the wild-type PrP cell substrate. The mean incubation time to disease was shortest after inoculation of the non-glycosylated amplicons (68 days) compared with the wild-type-generated amplicons (70 days). The N184Q-derived amplicons were the least efficient, inducing disease in 77 days. Relating the incubation time values to the 127S dose-response curve 34 and quantifying the amount of PrP res in the amplicons allowed the calculation of specific infectivity values, that is, the amount of infectivity per molecule of PrP res generated by the PMCA reaction. Assuming a straight correlation between PrP res content in the PMCA amplicons and infectivity, the specific infectivity per unit of PrP res appeared 10- to 80-fold higher for the lowly glycosylated amplicons than for the wild-type amplicons (Fig. S2). Remarkably, the PrP res electrophoretic pattern in brain and spleen tissue (Fig. 6A,B), and the neuroanatomical distribution of PrP res (Figs 6C and S3) and of vacuolar degeneration (Fig. 6D) in the reporter tg338 mice inoculated with unglycosylated or monoglycosylated 127S amplicons, were reminiscent of 127S prions, passaged (Fig. 6) or not 19,33,46,47 by Cell-mb-PMCA. Prominent PrP res deposition in the lateral hypothalamic area, in the corpus callosum, in the habenula (Fig. 6C) and in the raphe nuclei of the brain stem (Fig. S3), together with marked vacuolar degeneration in the dorsal medulla, hypothalamus and white matter of the mesencephalic tegmentum (Fig. 6D), were typical of 127S prions 33. PrP res staining and vacuolation in the affected brain regions were sometimes less intense, as observed on infection with diluted 127S-infected tg338 brain homogenate 33 or on reisolation of 127S prions in tg338 mice 19,46,47. To further ascertain that the PMCA-generated lowly glycosylated amplicons were good convertors of wild-type ovine PrP VRQ, 127S and T1 Ov amplicons were submitted to mb-PMCA using tg338 mouse brain as substrate. The seeding activity of the 127S and T1 Ov amplicons was observed up to the 10^-7 and 10^-9 dilutions (Fig. 7A,B), as with 127S amplicons generated from wild-type cells (Fig. 2) or with T1 Ov prions from the brains of terminally sick tg338 mice 45, respectively. One of the 127S PMCA products generated with unglycosylated PrP res seed (10^-8) was inoculated into reporter tg338 mice to further confirm efficient conversion and maintenance of strain properties. The survival time of the mice, the PrP res electrophoretic pattern in brain and spleen tissue, and the PrP res/vacuolar deposition patterns in the brain were all consistent with the generation of (highly) infectious 127S prions (Table 2). Collectively, these data indicate that the 127S prion strain properties and T1 Ov seeding capacity were essentially conserved despite intermediate replication on lowly glycosylated PrP C species. This lends support to the view that glycans do not play a major role in prion replication dynamics and strain biological properties.
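The specific-infectivity estimate described above (infectivity per unit of PrP res, obtained by relating incubation times to a dose-response curve) can be sketched as follows. This is a schematic only: the log-linear dose-response parameters and the PrP res amounts below are hypothetical placeholders, not the published 127S curve of ref. 34.

```python
# Schematic specific-infectivity calculation: infectivity inferred from the
# incubation time via a dose-response curve, divided by the PrPres content.
# The parameters a, b and the PrPres units are hypothetical placeholders.
def titer_from_incubation(days, a=-0.05, b=12.0):
    # Hypothetical log-linear dose-response: log10(infectious units) = a*days + b.
    return a * days + b

def specific_infectivity(days, prp_res_units):
    # Infectivity per (arbitrary) unit of PrPres in the amplicon.
    return 10 ** titer_from_incubation(days) / prp_res_units

wt = specific_infectivity(70, prp_res_units=100.0)   # wild-type amplicon
mut = specific_infectivity(68, prp_res_units=8.0)    # lowly glycosylated amplicon
print(f"mutant / wild-type specific infectivity: {mut / wt:.0f}x")
```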
Discussion

Following our simplification of the PMCA method, we now report that cell lysates expressing PrP C can conveniently replace brain substrate from PrP transgenic mice to achieve efficient amplification of prions from different species. Highly concentrated cell lysate may permit amplification at 'maximal' levels without the need to supplement the reaction mixture with PrP knockout brain substrate. Applying the cell-PMCA technique to a panel of cells expressing PrP C glycosylation mutants and to two prion strains demonstrates that unglycosylated and monoglycosylated mutants are intrinsically convertible, and that the stoichiometry of PrP C and/or PrP Sc glycoforms appears to alter neither the PrP Sc formation rate in vitro nor the biological properties of the formed prion (at least for the strain tested in vivo). PrP glycosylation may thus be dispensable for perpetuating prion strain information. To approach with cell lysates the sensitivity obtained in one round of PMCA with the ad hoc transgenic mouse brain as substrate 22, it was beneficial to use cell lysate concentrated with respect to total protein and to supplement it with PrP 0/0 mouse brain lysate and 1% DSS. By using the 127S prion/ovine PrP C combination, we further showed that the PMCA amplification threshold obtained with brain material could be reached by using highly concentrated cell lysate alone, at least with 127S prions. The respective contributions of DSS, PrP 0/0 mouse brain and concentrated cell lysate to efficient prion conversion remain to be determined. Non-PrP C cellular factors such as brain lipids or polyanionic scaffold molecules like sulphated glycans and RNA, which are known to improve PMCA 37,39,40,43,48-50, may have been concentrated. The conditions used may also create a macromolecularly crowded environment 51 favouring highly efficient prion conversion. In ethical and practical terms, sensitive PMCA can thus be performed without requiring animal models. Applying the cell-PMCA technique to a panel of cells expressing PrP C glycosylation mutants demonstrated that unglycosylated and monoglycosylated PrP C were intrinsically convertible by 127S prions, despite their non-convertibility in cultured Rov cells, even after apparently proper expression at the cell surface during biosynthesis 14 or exposure to homologous prions (this study). The reasons for such discrepancies with regard to glycosylation requirements between cell-free and in-cell systems remain to be determined. Subtle alterations in the subcellular localisation/trafficking of the PrP C mutants or a different turnover could explain their non-conversion in the cell models. The folding and/or stability and/or resistance to clearance of the nascent PrP Sc assemblies in Rov cells may necessitate incorporation of a certain threshold of diglycosylated species. The molecular basis for the prion strain-specific glycopattern and its perpetuation over serial passage is poorly understood. Host PrP C glycosylation has been reported to contribute to prion replication and to prion strain phenotype (reviewed in refs 52-55). Both the infecting prions and the convertible PrP C isoforms in the recipient host or tissue determine the glycopattern of each strain. Use of biochemically deglycosylated native PrP C in PMCA reactions suggested that the stoichiometry of PrP C glycoforms regulates prion formation in a strain-specific manner 56. For example, formation of PrP Sc on seeding with mouse RML or hamster Sc237 prions necessitates the presence or absence of unglycosylated PrP C, respectively.
Conversely, the failure of PrP C glycoform-specific antibodies 57 to exert a similar selectivity towards PrP Sc glycoforms 58 led to the proposal that the proportion of each PrP C glycoform incorporated into nascent PrP Sc assemblies is controlled by the defined glycoform stoichiometry of the starting infectious seeds 52,58. Indirectly supporting this hypothesis is the observation that the PrP Sc glycoform ratio (for a given strain) is conserved whatever the PrP Sc aggregation size 32,34. What information does our PMCA modeling with cells expressing PrP glycosylation mutants bring? First, the high conversion rate of the mono- and unglycosylated PrP C mutants relative to wild-type PrP C, despite their lowered expression levels in the cell lysates, would sustain the view that highly glycosylated PrP C species interfere with prion conversion, or that the presence of N-linked glycans at the two sites in PrP C causes steric hindrance for PrP Sc formation or stabilizes the PrP C native state. The latter point would be consistent with the observation that the structural sequence important for PrP oligomerization lies between the two N-glycosylation sites 59 or just upstream 28,60. Diglycosylated PrP C species may thus have a dual role during the formation of PrP Sc assemblies. Second, 127S prion seeds, which exhibit in tg338 mouse brain a defined PrP res glycotype (45% diglycosylated, 35% monoglycosylated and 20% unglycosylated species 22), convert unglycosylated and monoglycosylated PrP C species indifferently, alone or in combination.

[Figure 6 caption: lysates from RK13 cells expressing wild-type (WT) ovine PrP C or ovine PrP C mutated on the first N-glycosylation site at residue 184 (N184D, N184Q), on the second glycosylation site at residue 200 (N200D) or at both (NDND) glycosylation sites were mixed with PrP 0/0 brain lysate (1:1 dilution), seeded with serial 10-fold dilutions of tg338 brain homogenate containing 127S prions and submitted to 2 rounds of Cell-mb-PMCA before inoculation into tg338 mice. Seeds generated from NDND cells were also submitted to another round of PMCA using tg338 mouse brain as substrate (wild-type brain PrP C); the amplicon obtained at the 10^-8 dilution was then used for inoculation (NDND seeds). (A) PrP res banding pattern and (B) ratios of high- and low-molecular-mass PrP res glycoforms in the brain (filled symbols) and spleen (open symbols) tissue of tg338 mice inoculated with Cell-mb-PMCA products. (C) Neuroanatomical distribution of PrP res in tg338 mice inoculated with the Cell-mb-PMCA products; representative histoblot (12F10 antibody) of a brain coronal section (hippocampus level); deposition in standardized antero-posterior sections can be visualized in Supplementary Figure S1. (D) Distribution of vacuolar degeneration (lesion profile) in tg338 mouse brain inoculated with the Cell-mb-PMCA products, as above; the intensity of vacuolation was scored as means ± standard errors of the means (error bars) in standard gray (G1 to G9) and white (W1 to W3) matter areas: G1, dorsal medulla; G2, cerebellar cortex; G3, superior colliculus; G4, hypothalamus; G5, medial thalamus; G6, hippocampus; G7, septum; G8, medial cerebral cortex at the level of the thalamus; G9, medial cerebral cortex at the level of the septum; W1, cerebellar white matter; W2, white matter of the mesencephalic tegmentum; W3, pyramidal tract.]
Because conversion is not monitored in real time during PMCA reactions, a glycotypic preference may exist during the initial converting events but fade within a 48-h round. It could be argued that, in the face of mono- or unglycosylated PrP C species, mono- and unglycosylated PrP Sc may have been preferentially amplified. However, when the opposite experiment was done, that is, when PMCA-generated unglycosylated or monoglycosylated PrP Sc seeds were submitted to PMCA in the presence of wild-type PrP C, the initial 127S PrP Sc glycotype was fully restored, suggesting no preferential compatibility between PrP C and PrP Sc with regard to the occupancy of the glycosylation sites. The same observations were made with a Creutzfeldt-Jakob disease-derived prion strain designated T1 Ov, indicating that the non-requirement of PrP glycosylation for prion conversion is not limited to one peculiar strain. We finally show that monoglycosylated and unglycosylated 127S amplicons share the same strain properties as normally glycosylated 127S prions in tg338 mice, including the PrP res glycotype in the brain of the mice. Collectively, we can conclude that a defined stoichiometry of PrP Sc and PrP C glycoforms is necessary neither for efficient conversion by PMCA nor to dictate strain-specific properties, at least for 127S prions. Prion strain properties, including the glycotype stoichiometry of PrP Sc, may thus be enciphered solely within the PrP Sc structural backbone or in the way PrP Sc molecules assemble.

Methods

Ethics Statement. All animal experiments were carried out in accordance with European Union directive 2010/63 and were approved by COMETHEA, the local ethics committee of the authors' institution (permit number 12/034).

Transgenic mice and prion strains. The transgenic lines (tg338, tg7, tga20 and tg650) and prions (127S, T1 Ov, 139A, 263K and vCJD) have been described previously 22,33,34,45. Pools of prion-sick mouse brains were prepared as 20% (wt/vol) homogenates in 5% glucose using a tissue homogenizer (Precellys 24 Ribolyzer, Ozyme, Bertin Technologies, France). The homogenate was diluted twofold to 10% in PMCA buffer (see below) to obtain the 10^-1 dilution of the inoculum and stored at -80 °C. The Zürich I mouse line on an Sv129 background was used as the PrP 0/0 line 61.

Cell culture. The rabbit kidney epithelial RK13 cell line was used to establish cells expressing sheep (Rov9, P2FJ6 and glycosylation mutants), hamster (HaRK13), human (HuRK13) and mouse (MoRK13) PrP. The Rov9 and P2FJ6 clones, the cells expressing glycosylation mutants and the MoRK13 cells have been described previously 30,32,34,44. The open reading frames of hamster and human PrP C were PCR-amplified from Syrian hamster and human (Met 129 allele) genomic DNA and cloned into the pBluescript plasmid before subcloning into the pTRE and pcDNA plasmids (Clontech), respectively. After sequencing, each plasmid was introduced into RK13 cells as described previously 30, and puromycin-resistant cell clones were selected for doxycycline-inducible and constitutive expression of PrP C, respectively. Cells were cultivated at 37 °C in 5% CO2 in Opti-MEM (Gibco) supplemented with 10% foetal calf serum and 0.1% penicillin and streptomycin. Cells were passaged once a week at a 1/4 dilution. For the production of large amounts of concentrated cell lysates, cells were cultured in 2 or 4 layers of multilayer cell culture flasks (Thermo Scientific Nunc).

Preparation of cell lysates for PMCA.
Cultured cells in either T175 cm2 flasks or multilayer culture flasks were rinsed three times with sterile Ca++- and Mg++-free PBS. The cells were dissociated by incubation with trypsin-free dissociation medium (Sigma) for 10 min at 37 °C. They were flushed with PBS, recovered in Falcon tubes and harvested by 5 min centrifugation at 1000 g at 4 °C. The pellet was then resuspended in a given volume of cold, 0.2 μm-filtered PMCA buffer (Tris-HCl 50 mM pH 7.4, EDTA 5 mM, NaCl 300 mM, 1% Triton X-100). The lysed cells were incubated at 4 °C for 15-30 min with gentle vortexing. The lysates were centrifuged at 2000 g for 6 min to pellet the insoluble and chromatin materials. Supernatants were collected, aliquoted and stored at -80 °C until use as substrate in Cell-mb-PMCA reactions. The protein content of the cell lysates was measured with a protein concentration determination kit (BCA kit, Pierce) using BSA as standard.

Cell-miniaturized beads-Protein Misfolding Cyclic Amplification (Cell-mb-PMCA). The standard mb-PMCA, using brain lysate as the source of PrP C substrate, was performed as described 22, using 96-well PCR microplates and one 2.384 mm Teflon bead per well. The Cell-mb-PMCA was set up with either 100% cell lysate or a mix with 10% mouse PrP 0/0 brain lysate (1:1 ratio), in the presence or absence of 1% dextran sulfate sodium (DSS > 500 kDa; Sigma Aldrich, Saint Quentin Fallavier, France), as indicated. Practically, a 4 μL aliquot of the analyte inoculum (10^-n dilution) was suspended in 36 μL of PMCA substrate (brain or cell lysate) to obtain the 10^-(n+1) dilution. A series of 10-fold dilutions was made by transferring 4 μL of the previous dilution into the next well containing 36 μL of substrate. Microplates were subjected to 96 cycles of 30 s sonication at 200-220 W power (36-40% amplitude of the Q700 sonicators; Misonix, Farmingdale, USA, or Delta Labo, Colombelles, France) followed by 29.5 min of incubation at 37 °C. When needed, a second round of PMCA was performed with a 1/10-diluted aliquot of the first round in fresh lysate. At the end of the PMCA, aliquots from each sample were analysed for PrP res content by western blotting.
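As a quick sanity check of the protocol arithmetic in the preceding paragraph, the short Python sketch below confirms that 96 cycles of 30 s sonication plus 29.5 min incubation amount to a 48-h round, and that transferring 4 μL into 36 μL of substrate gives the stated 10-fold dilution steps.

```python
# Sanity check of the Cell-mb-PMCA protocol arithmetic described above.
SONICATION_S = 30        # 30 s sonication per cycle
INCUBATION_MIN = 29.5    # 29.5 min incubation at 37 C per cycle
CYCLES = 96              # cycles per PMCA round

round_hours = CYCLES * (SONICATION_S / 60 + INCUBATION_MIN) / 60
print(f"one PMCA round lasts {round_hours:.0f} h")        # -> 48 h

# Serial dilutions: 4 uL of the previous dilution into 36 uL of substrate.
transfer_ul, substrate_ul = 4.0, 36.0
factor = (transfer_ul + substrate_ul) / transfer_ul
print(f"dilution factor per transfer: {factor:.0f}x")     # -> 10x

# Dilution reached in each well after seeding a 10^-1 inoculum.
for well in range(1, 8):
    print(f"well {well}: 10^{-1 - well} dilution")
```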
Protease digestion of PMCA products. To analyse the production of proteinase K (PK)-resistant PrP Sc species during PMCA, 10 μL of each sample was supplemented with SDS (up to 0.6% final concentration) and treated with PK (125 μg/mL final concentration) at 37 °C for 1 h. The PK digestion was stopped by adding an equal volume of 2x Laemmli denaturation sample buffer and heating at 100 °C for 5 min. The samples were then stored at -20 °C. The levels of thermolysin-resistant PrP species in the PMCA amplicons were determined as previously described 22.

SDS-PAGE and western blotting. PMCA samples were run on Criterion XT 12% Bis-Tris precast gels (Biorad, Hercules, CA, USA), electrotransferred onto nitrocellulose membranes with a semi-dry electrotransfer system (Biorad) and probed with the biotinylated Sha31 anti-PrP monoclonal antibody 62, as described previously. The PrP C content of the cell lysates was determined by western blotting with SAF34 62, an antibody directed against the octarepeat region of PrP. Quantification was performed with the GeneTools software after acquisition of the signals with a GeneGnome digital imager.

Endpoint titration of PMCA products in tg338 mice. A standard protocol based on the use of disposable equipment and the preparation of all inocula in a class II microbiological safety cabinet was followed. Serial ten-fold dilutions of PMCA products were prepared in sterile 5% glucose containing 5% bovine serum albumin. Individually identified 6- to 10-week-old tg338 recipient mice (n = 5 mice per dilution) were inoculated intracerebrally with 20 μL of each sample. The inoculated animals were observed daily for the appearance of prion disease symptoms. Animals at the terminal stage of disease were euthanized. The survival time was defined as the number of days from inoculation to euthanasia. Brains and spleens were removed for PrP res analysis by western blotting and histoblotting as previously described 22,33. For the histoblotting procedure, brains were rapidly removed from euthanized mice and frozen on dry ice. Cryosections were cut at 8-10 μm, transferred onto Superfrost slides and kept at -20 °C until use. Histoblot analyses were performed on 3 brains per dilution per amplicon, using the 12F10 anti-PrP antibody 63. To quantify vacuolar degeneration, brains were fixed in neutral-buffered 10% formalin (4% formaldehyde) before paraffin embedding. After deparaffinization, 2-μm-thick sections were stained with hematoxylin-eosin. Vacuolation profiles were established according to the standard method described by Fraser and Dickinson 64, using three brains per experiment.
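For readers unfamiliar with endpoint titration, the sketch below illustrates one standard estimator (Spearman-Karber, shown purely for illustration; the paper itself reports attack rates and incubation times rather than this calculation) for turning attack rates at serial ten-fold dilutions into a 50% endpoint. The attack-rate vector is a hypothetical example.

```python
# Illustrative Spearman-Karber estimate of the 50% infectious endpoint from
# attack rates at serial ten-fold dilutions. The attack rates below are a
# hypothetical example in the spirit of Table 1 (e.g. 5/5 ... 1/5 ... 0/5).
def spearman_karber_log_endpoint(log_dilutions, infected, inoculated):
    """log_dilutions, e.g. [-1, -2, ...], ordered from least to most dilute."""
    p = [i / n for i, n in zip(infected, inoculated)]
    assert p[0] == 1.0, "least dilute dose must give a 100% attack rate"
    d = abs(log_dilutions[1] - log_dilutions[0])  # log10 step between dilutions
    return log_dilutions[0] - d * (sum(p) - 0.5)

log_dilutions = [-1, -2, -3, -4, -5, -6]
infected = [5, 5, 5, 5, 1, 0]
endpoint = spearman_karber_log_endpoint(log_dilutions, infected, [5] * 6)
print(f"50% endpoint at the 10^{endpoint:.1f} dilution")  # -> 10^-4.7
```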
Activity of the Madden-Julian Oscillation during 2002 and 2006 - a comparative analysis

The Indian summer monsoon is characterized by very significant intra-seasonal variability. The Madden-Julian Oscillation (MJO) is one of the dominant modes of the intra-seasonal variability of Indian summer monsoon rainfall. The activity of the Madden-Julian Oscillation during the monsoon seasons of two years of contrasting intra-seasonal rainfall variability has been examined in terms of rainfall activity over India and the eastward propagation of convection in the near-equatorial region. The study shows their contrasting nature, viz., in the monsoon season of 2002 the eastward mode dominated, whereas in 2006 it remained suppressed.

Introduction

The Indian summer monsoon is characterized by very significant intra-seasonal variability. It has been well established that one of the dominant modes of the intra-seasonal variability of Indian summer monsoon rainfall is governed by the eastward-moving Madden-Julian Oscillation (MJO) [Madden and Julian (1971, 1972, 1994), Lau and Chan (1986), Lau et al. (1988), Singh et al. (1992), Jones and Weare (1996) and Saith & Slingo (2006)]. It is one of the fundamental modes of low-frequency oscillation in the tropics. The Madden-Julian Oscillation is manifested in the fields of surface winds, surface heat fluxes, sea surface temperature and ocean currents, on a time scale of about 30-60 days (Madden and Julian, 1994). The MJO is characterized by an eastward progression of large regions of both enhanced and suppressed tropical rainfall, observed mainly over the Indian Ocean and the Pacific Ocean. The anomalous rainfall is usually first observed over the western Indian Ocean and propagates over the warm ocean waters of the western and central tropical Pacific. During the winter season, the eastward propagating MJO mode is dominant. But in summer, as the eastward propagating MJO mode appears over the south-east (SE) Arabian Sea and the Bay of Bengal, a northward-moving convective organization is established with a periodicity of 30-50 days [Gadgil (1980) & Yasunari (1980)]. In recent times, 2002 and 2006 have been years exhibiting large-scale intra-seasonal variability of contrasting nature in terms of all-India area weighted rainfall. With this backdrop, an attempt has been made in this study to examine the activity of the Madden-Julian Oscillation during the monsoon seasons of these two years of contrasting intra-seasonal rainfall variability.

Data and methodology

In order to examine the intra-seasonal variability of the Indian summer monsoon, the weekly All-India area weighted rainfall data were used. Also, the vector wind anomaly values over the region 40° S to 60° N and 30° E to 60° W, as obtained from the Earth System Research Laboratory, Physical Science Division, NOAA, were analysed to examine the lower tropospheric circulation features (Fig. 4).

Discussion

In 2002, the onset of the southwest monsoon over Kerala occurred on 29th May, 3 days earlier than its normal date of 1 June. After 12th June, there was a hiatus in the advance of the monsoon for about a week. This hiatus was terminated by the formation of a low pressure system over the north Bay of Bengal on 20th June, which moved across central India. In association with this, the monsoon advanced into central India and some parts of the Gangetic plains. However, with the weakening of the low on 28th June, there was a sudden weakening of the monsoon current. This situation prevailed almost till the end of July.
Further, in association with a feeble low pressure area that formed over the northwest Bay on 17th July, the monsoon advanced up to Delhi and its neighbourhood as a weak current on 19th July. However, there was another prolonged hiatus in the subsequent advance of the monsoon, and it covered the entire country only by 15th August [Fig. 1(a)]. The seasonal rainfall over the country as a whole was 81% of its long period average, and thus it was an all-India drought monsoon year. The rainfall deficiency during July was the highest (-51%) in the 102-year period from 1901 to 2002. During the entire monsoon season of 2002, not a single monsoon depression formed. The monthly rainfall during June and August was normal and the rainfall during September was near normal [Mausam, (2003)]. The All-India area weighted rainfall during 1st June to 30th September 2006 was 99.6% of the long period average. In 2006, the onset of the monsoon over Kerala occurred on 26th May, six days prior to the normal date of 1st June. The advance of the monsoon occurred rapidly over the west coast, in association with an off-shore trough along the west coast, till 6th June. Further advance of the monsoon was characterized by two predominant epochs of hiatus, viz., 7-22 June and 1-8 July. The monsoon covered the entire country on 24th July, nine days later than the normal date [Fig. 1(b)]. The monsoon depressions of the season were observed to have a higher westerly component rather than the climatological west-northwesterly direction. As a result, central and peninsular India received well distributed above normal rainfall, causing floods over these regions. The details of these depressions are listed in Table 1. In addition to these depressions, seven low pressure areas/well marked low pressure areas formed during the SW monsoon season of 2006 [Mausam, (2007)]. Thus, the monsoons of 2002 and 2006 were clearly distinct in terms of intra-seasonal variability. The monsoon of 2002 was characterized by spells of prolonged breaks, particularly in July, leading to seasonal rainfall deficiency. On the other hand, the monsoon of 2006 was characterized by frequent active spells (Table 2). In 2002, there had been a significant eastward displacement of the west Pacific warm pool during the northern hemispheric summer months. The sea surface temperatures (SSTs) over the Indian Ocean were above normal. In contrast, during 2006, the west Pacific warm pool was confined over that region; it did not extend east of the international date line (figures not shown).

[Table 2: intra-seasonal variability of weekly All-India area weighted rainfall during 2002 and 2006; for each year, the columns give the week ending date and the All-India weekly area weighted rainfall as a percentage departure from normal.]

In response to these SST conditions, during 2002, the major convective events correspond to the pre-monsoon rainfall activity (Event I) and the active phases of the monsoon (Saith and Slingo, 2006), and these same events were also discernible in the equatorial eastward propagating mode of the MJO [Fig. 2(a) and Fig. 3(a)]. As seen from Fig. 3(a), during 2002 the eastward propagation of convection is very prominent, as discerned by events (i) and (ii). Also, another two events, event (iii) and event (iv), though not as strong as events (i) and (ii), are observed to influence the intra-seasonal variability of the monsoon during the period from July onwards. During 2006, a sequence of events is discernible in the northward propagation of convection [Fig. 2(b)].
Event I (during 15th-30th May) and Event II (with about 30 days periodicity, appearing during 15th-30th June) correspond to the onset of the monsoon over Kerala and the subsequent rapid advance of the monsoon along the west coast of India. They were followed by another event, Event III, during 15th-25th July (again with a periodicity of about a month), which was associated with the further advance of the monsoon to cover the entire country. Event IV occurred during 15th-25th August, coinciding with an active phase of the monsoon characterized by two depressions moving across India in a west-northwesterly direction. The final event occurred during 1st-25th September, which again coincided with two depressions (including one land depression) moving across India in a west-northwesterly direction. However, as seen from Fig. 3(b), there was only one distinct event of eastward propagation of convection. It is seen from Table 2 that the following were the major prolonged spells of below normal All-India rainfall activity during the years 2002 and 2006:

2002: Spell I - 3rd July to 7th August - a spell of 34 days of below normal All-India rainfall. Spell II - 18th September to 25th September - a spell of 8 days of below normal All-India rainfall.

2006: Spell I - 14th June to 28th June - a spell of 15 days of below normal All-India rainfall. Spell II - 12th July to 26th July - a spell of 15 days of below normal All-India rainfall.

It is observed from Fig. 2(a) and Fig. 3(a) that during 2002, in the spell from 30th June to 25th July, the eastward mode of propagation of convection was highly suppressed. Also, the northward propagation of convection during this spell was not significant. This spell corresponds to the prolonged spell of below normal All-India rainfall activity from 3rd July to 7th August. Also, during the period 18th September to 25th September, the mode of northward propagation of convection was insignificant; however, the eastward propagation of convection was dominant. It is also evident from Fig. 2(b) and Fig. 3(b) that during 2006, from 14th June to 28th June, there was a northward propagation of convection from 5° S to 25° N. During the same period, the eastward propagating mode of enhanced equatorial convection was also dominant. Despite this, the All-India rainfall activity was below normal for a period of 15 days. During another spell of below normal All-India rainfall activity lasting 15 days (12th July to 26th July), the northward propagation of convection from 5° S to 25° N is clearly evident [Fig. 2(b)]. However, during the same period, no eastward propagation of equatorial convection is observed [Fig. 3(b)]. The lower tropospheric wind anomalies during July 2002 (Fig. 4) clearly show a predominant anticyclonic wind flow over the Indian region, thereby suggesting anomalous sinking over these areas and hence suppressed convection. During July 2002, the rainfall over India was highly deficient, 49% lower than the respective normal. Also, an enhanced strength of the cross-equatorial flow during July 2006 is evident, in contrast to the easterly wind anomalies (weak cross-equatorial flow) during July 2002.
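The prolonged below-normal spells listed above are read off the weekly percent-departure series of Table 2. A minimal sketch of that bookkeeping is given below; the weekly values are hypothetical placeholders, not the actual Table 2 data.

```python
# Identify prolonged spells of below-normal All-India weekly rainfall from a
# series of (week_ending, % departure from normal) pairs. Values below are
# hypothetical placeholders, not the actual Table 2 data.
weekly = [("2002-06-26", 5), ("2002-07-03", -40), ("2002-07-10", -65),
          ("2002-07-17", -70), ("2002-07-24", -55), ("2002-07-31", -30),
          ("2002-08-07", -20), ("2002-08-14", 10)]

spells, current = [], []
for week, departure in weekly:
    if departure < 0:                 # below-normal week: extend current spell
        current.append(week)
    else:                             # normal/above-normal week: close spell
        if current:
            spells.append((current[0], current[-1], len(current)))
        current = []
if current:                           # close a spell running to the series end
    spells.append((current[0], current[-1], len(current)))

for start, end, weeks in spells:
    print(f"below-normal spell: {start} to {end} ({weeks} weeks)")
```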
Conclusions

(i) During 2002, the eastward mode of the Madden-Julian Oscillation and the associated eastward propagation of convection were dominant over the northward propagation of convection. This led to major breaks in the Indian summer monsoon in 2002, particularly in the month of July.

(ii) In contrast, during 2006, the northward mode of propagation of convection was more predominant, with a suppressed eastward propagating mode, leading to the absence of any major break during the Indian summer monsoon season and to frequent spells of active monsoon.
Influence of Land and Water Rights on Land Degradation in Central Asia

Land degradation is a key issue for Central Asia as an agrarian region. Land degradation in Central Asia is usually seen as a technological challenge, and corresponding solutions are associated with the improvement of land-use technology. However, the reality is more complicated and multi-faceted. Institutional aspects of land degradation in the region are more prominent and yet unnoticed. De-linked water and land rights, increased land production functions, water infrastructure degradation, a lack of water-use monitoring, and a lack of knowledge among water users constitute the major institutional aspects of land degradation in Central Asia. This paper looks at the linkages between water and land rights and the main aspects of land degradation. The research was built on a literature review, including internationally funded project reports and in-house investigations.

Introduction

Central Asia (CA) is known as an area of productive irrigated lands and pastures. However, currently the region is facing serious land degradation challenges. More than half of the 8 mln ha of irrigated land and half of the pastures in CA are degraded to various degrees [1]. As a biophysical process, land degradation has been examined quite well for Central Asia [1-4]. On the one hand, Central Asia's reduced productivity and potential economic losses due to land degradation are comparable with the effects of climate change [3]. The scope of land degradation can thus be conveyed in terms of the widely known effects of climate change: by 2050, the region may face severe water shortages due to climate risks, and land degradation may produce severe food shortages. On the other hand, experts note that changing management schemes, as well as legislation on land and water rights, also contribute to increased land degradation [5,6]. This aspect of land degradation has rarely been studied by either international or national scientists. Historically, beginning from the Tsarist period until the post-Soviet era, water and land rights in Central Asia changed several times. In every such instance, institutional reforms affected land and water use and, as a result, led to land degradation. Since ancient times, land and water rights in Central Asia have been interlinked: the landowner was entitled to pre-determined water rights. However, in the 1930s, water and land rights were disintegrated. Land became one of the elements of the system of production and almost lost its role as part of nature. Due to large-scale land degradation and the loss of productivity of land resources in traditional agricultural areas, the Soviet state launched new "virgin land" development programmes to extend agriculture into new areas. By the 1990s, the degradation of land resources had reached disastrous proportions, and the reclamation of degraded land became the focus of state agencies responsible for agricultural production. However, due to institutional disintegration and a lack of financial means, land degradation continued in CA. Moreover, the newly established states of Central Asia initiated social, economic and environmental reforms shifting away from the Soviet system. Ownership and production forms in agriculture changed: instead of a collective system, land ownership transformed into an array of different forms of individual land use [7,8].
This article looks at three historical intervals of the development of the Central Asian countries and four types of land degradation. The publication aims to analyze the relations between water and land rights during the different periods of Central Asia's development and their impact on land degradation. The research methodology applied consisted of system analysis with elements of institutional analysis of various types of land degradation. The empirical section of this paper contains data and information gathered by the authors from the literature, internationally funded project reports and their own investigations.

Concept and Methodology

Land degradation in Central Asia has serious social and economic impacts. The condition of the environment directly influences the living standard and health of the population, especially its socially vulnerable segments [9]. Land degradation makes rural populations increasingly vulnerable and drives them to further exploit land resources for short-term production and benefits. The Aral Sea catastrophe is a famous example of the influence of the Soviet management system on land degradation. Salt pollution had significant adverse effects on the agricultural sector: approximately 1.5 bln tons of salt and dust from the drained bottom of the Aral Sea was distributed over about 3.5 mln ha of land, mostly in downstream areas [8,10]. Large-scale irrigation systems built during the Soviet period led to large water losses and, consequently, to secondary land salinization. For example, by 1980, approximately 50,000 ha of land in Turkmenistan was abandoned annually due to degradation [11]. Land degradation in Kazakhstan poses a huge problem in all administrative areas; the total area of degraded land is estimated to comprise 66% of the national territory, i.e., over 48 mln ha [1,12]. Significant pasture degradation due to over-grazing is present in Kyrgyzstan (30%), Tajikistan (89% for summer pastures and 97% for winter pastures) and Turkmenistan (70%). In Uzbekistan, about half of the irrigated land was salinized by 2007 [1]. Every year, economic losses from land degradation make up 3% of GDP for Kazakhstan and Uzbekistan, 4% for Turkmenistan, 10% for Tajikistan and 11% for Kyrgyzstan [13]. After the collapse of the Soviet Union, the process of land degradation has continued due to various changes in water and land management. The current land degradation trends persist and progress under the "business as usual" model. At the same time, the scale of land degradation is multiplied by climate change impacts [14]. This study focused on water and land management systems, their mechanisms of interaction and the identification of regularities. The purpose of this study was to determine the impact of changes in land and water management on land degradation. The research looks at water and land reforms in the Central Asian countries and how they influence different types of land degradation. The main questions of this study were formulated as follows:

• What are the features of the current management systems of land and water rights in Central Asia? How do they interact?

• How did the system of land and water rights change after the CA countries gained independence? What is the impact of these changes on land degradation?

This study examines the corresponding land and water management systems and ongoing reforms in the land and water sectors, as well as retrospectively analyzing the changes in land and water rights over the course of the last 100 years in Central Asia.
The research is based on a literature review on the issues of water and land management on the one hand, and on land degradation aspects on the other; the authors tried to identify interactions between these two aspects. The authors used statistical data available for the period of study, several internationally funded project reports, publications, archive materials, and their own investigations based on the experience of project implementation in Central Asian countries. Various managerial decisions were considered, as well as their impact on the use of land and water resources. At the same time, this publication gives a general overview of the problems and relations between land and water resources and does not determine optimal solutions. Conceptual and scientific basics of the land degradation problem are key to developing solutions. This publication can be used in future research to develop a deeper understanding of the influence of different types of land and water management on different types of land degradation.

Historical Overview of Land-Water Right Systems in Central Asia

During the past 70 years, land has been the major driver behind socio-political changes in rural Central Asia. Political system changes were among the core factors affecting land ownership and led to the emergence of its new forms. Land ownership changed slightly during the Tsarist period (1861-1917) and was totally transformed during the Soviet period (1917-1991). During the post-Soviet period (1991 onwards), the independent Central Asian states continued the transformation of their land ownership and management systems [15-18]. Land rights in Central Asia underwent three major changes, starting from the Tsarist period until the post-Soviet time, and in each instance land rights affected water rights (Figure 1).
Tsarist Period

During the Tsarist period, water management was customary and land arrangements were based on historical rights. The "mirab" (water master) was a person selected by the landowners to oversee the overall distribution of water among water users. Land not only had production value but was also deemed a family asset and had economic value. However, land productivity was rather low and fallow areas were abundant. Local populations were engaged in subsistence farming and grew mostly food crops. A lot of land was not used, and Tsarist Russia tried to use as much of it as possible for cotton production. Land and water rights were closely linked, and thus water distribution strictly followed land ownership. All settlements had to follow the Sharia (Islamic) water law regulating water governance [11,18-20]. In the early 20th century, the agrarian policy implemented by Tsarist Russia led to a gradual change in the ratio of nomadic and settled populations. The traditional forms of agriculture weakened and cotton production increased quickly. The reforms by the Russian government altered overall socio-political life. The number of individual/private land users increased [11,21]. During that time, huge areas of irrigated land were allocated for cotton production [11]. The demand for and use of water increased and land ownership was changing; former nomads and newcomers from other parts of Russia were awarded new land rights. That was the time when land degradation started and productivity dropped in the newly settled areas. In an attempt to legalize and rationalize existing land and irrigation practices, water resources were de jure nationalized. The Tsarist authority did not realize how important access to water was for the agrarian society of Central Asia. At that time, locally, nothing changed in the customary water management [7].

Soviet Period

In the early Soviet period, the political and security-associated significance of irrigated land was evident in the Bolsheviks' attempt to pacify the Fergana Valley. Initially, the changes in the land rights system were insignificant. This fact was also key during the national delimitation process of 1924-1936, when the borders of the CA republics were demarcated [8]. In the late 1920s, water-land reforms destroyed the custom-based relations and individual land ownership rights.
The collectivization unfolded extremely quickly and ignored traditional lifestyles and farming practices. The rights to water and land were inseparable in Central Asia, and disregarding that link formed the basis for the new and devastating water policy implemented along with the collectivization. The new governments in Central Asia embarked on a large-scale programme of land redistribution with the intention of sweeping away the traditional patterns of land tenure. The Soviet water policy of that period was characterized by single-purpose water use and centralized decision-making and planning. The Central Asian republics were ordered to devote their available resources, including land and water, to cotton growing. At the time, the population and the area of irrigated land in certain provinces, mostly in the Fergana Valley, significantly increased. Fifty percent of the population of the Central Asian republics lived on 20% of their territory, i.e., in the Fergana Valley, the Lower Zarafshan and the Tashkent-Khojand Corridor [8]. Those were the primary and conventionally irrigated areas that demonstrated the relationship between population pressures and competition for limited access to water and fertile land. The transformation of land ownership during the Soviet period was marked by the collectivization of production and the de-privatization of land. A major breakthrough in land relations happened during the 1930s-1940s: collective farms, or Kolkhozes, became the main landowners in the former Soviet Union [11,20,22]. Water was also nationalized and became state property. In the 1930s-1960s, the agricultural policy of the Soviet Union focused on the development of "virgin lands". As a result, Central Asia became a monocrop farming system with cotton as the major crop and thus lost its food self-sufficiency. Moreover, cotton cultivation caused further soil degradation and loss of land productivity [7]. Several large-scale irrigation systems were built in CA during the Soviet era. In the early 1970s, the Kolkhozes were transformed into large Soviet farms (Sovkhozes) occupying up to 100,000 ha of land. The Sovkhozes specialized in single-type production, e.g., cattle husbandry, poultry, vineyards, rice, wheat or cotton [23]. Therefore, within the Sovkhozes, land was intensively used for one or two crops for a long time, leading to the decline of soil organic matter, salinization and erosion.

Post-Soviet Period

Since 1991, the countries of Central Asia have been developing their independent national economies based on different priorities and schemes. Nevertheless, to a significant extent they remain agrarian, with natural resources still playing an important role [24]. Social and political transformations are leading to changes in land and water resources governance and management. The CA countries have launched land reforms with the aim of dismantling the Soviet land management system. The privatization and individualization of land ownership forms the foundation of the reforms taking place in post-Soviet Central Asia. The land reforms of the 1990s were marked by a complete shift of the economic model with regard to rural development in Central Asia. The new states were driven by nation-building priorities, seeking to increase the economic value of their national resources, including land. Gradually, collective farms were disbanded and land was handed over to private users based on various country-specific legal arrangements [8].
Although individual (private) land ownership could represent the best model in terms of maintaining land resources, the institutional arrangements in place do not provide sufficient incentives for effective land use.

Main Types of Land Degradation

Currently, land degradation in Central Asia represents a severe and multi-faceted process [25]. Land tenure arrangements, including tenure security, take a special place among the institutional aspects of land degradation, as they impact farmers' land management decisions. The management model, or the actual way the land is managed (privately or communally), landholding size and fragmentation, land mortgage options, and opportunities to transfer land by sale and/or lease constitute essential elements of land tenure [5,15,16]. Over-irrigation (salinity). Salinity is the main land degradation problem in the region. As of today, 3 of the 8 mln ha of irrigated land is subject to different degrees of salinity; thus, annually, approximately 30,000 ha of irrigated land suffers degradation due to salinization [3,4]. After the collapse of the Soviet Union, land and water rights changed; as a result, the number of farms increased from hundreds to thousands at once. The former employees of the Kolkhozes and Sovkhozes became the new farmers responsible for all farming-related issues, and many of them did not have any farming education [1]. The single farm-level land-water planning units disappeared, and the lack of knowledge of irrigation norms led farmers to believe that using more water was better. This approach has resulted in chaos and uncontrolled competition for water resources [26]. Increased water competition forced the use of low-quality (saline) water, irregular irrigation or over-irrigation, and intensive groundwater extraction [27]. Thus, the main reason for large-scale salinity lies in weak institutional arrangements related to land ownership, reclamation services and state agricultural policies [28]. The lack of a farm-level water use monitoring system results in uncontrolled and, as a consequence, increased water use. Whereas poor maintenance leads to the incapacity of irrigation systems, subsidizing the sector leaves farmers with little incentive to save water [13]. Local-level institutional irregularities of water management are most critical for land degradation. Due to inefficient water resource management, the acreage at the end of irrigation networks does not receive enough water, causing even worse salinization and land degradation. For example, the area of saline lands at the tail end of irrigation canals increased by 20-25% in the Khorezm region of Uzbekistan alone [29]. Soil erosion becomes increasingly relevant every year, as farmers lack the money to maintain the corresponding irrigation systems. Inadequate management of irrigation networks leads to significant water losses, breakdowns of irrigation canals, and the washing away of fields [4]. Compared to non-degraded soils, degraded land consumes more water; the linkages between land degradation and water management are therefore obvious. As we can see, most of the time degraded lands experience water pressures, which indicates the close relationship between land degradation and water (mis-)management. Intensive cropping. Land privatization and individualization did not yield sufficient outcomes to recover the quality of land resources, and land productivity did not change drastically.
Private land investments mostly target production functions (fertilization, irrigation, harvesting, etc.). In the past two decades, the Soviet-period single-crop system was replaced by crop quotas or profit-driven monocrop cultivation. At present, land management is in the hands of landowners, and in three CA countries (Kazakhstan, Kyrgyzstan and Tajikistan) farmers are more or less free to choose their cropping and agricultural operations. However, by different means, national governments do influence farmers' cropping choices via direct state quotas, subsidies, state contracts and/or loans [17,22]. The pursuit of immediate profit and short-term benefits is leading to the fast decline of land productivity and the removal of land from agricultural use. Simultaneously, the size of land plots also significantly promotes land degradation: farmers owning small plots (approx. 1 ha) try to achieve the maximum profit from their land assets. Over-grazing and erosion of pastures. After the collapse of the Soviet Union, the effective pasture management mechanisms disappeared, and farmers suffered from a lack of economic and organizational capacities to develop distant pastures. The absence of a reliable water supply leads to increased livestock migration from pasture zones to the areas adjacent to rural communities. In order to feed themselves, cattle-breeders use pastures near their settlements as much as possible. As a result, pastures around villages are over-grazed and are subject to severe soil erosion and degradation [1,4,6,13,30,31]. As we can see from the examples above, the current land governance and management framework will require significant reforms to reduce land degradation risks, which are high both because of the overall scale of degradation and because of its impacts on the stability of the region's countries. Based on the previous discussion, the authors describe the influence of the three water and land management systems in Central Asia on different aspects of land degradation in Table 1.
Table 1. Linkages between historical periods and different types of land degradation. Source: prepared by the authors based on a literature review.
Over-irrigation (salinity). Pre-Soviet (Tsarist) period: land degradation is not an issue; strict control of water distribution by the mirab (water master). Soviet period: minor increase of land degradation; use of significant irrigated acreage against a background of practically absent water-saving; development of arid land characterized by high natural salinity, although annual land-washing efforts were taken to prevent salinity. Post-Soviet period: significant increase of land degradation; lack of water use control; lack of irrigation and reclamation knowledge in newly created private farms; "the more water, the better" trend.
Soil erosion. Pre-Soviet (Tsarist) period: land degradation is not an issue; traditional irrigation techniques; strict control by the mirab; absence of large-scale irrigation systems. Soviet period: minor increase of land degradation; significant length of irrigation systems with multiple earthen canals. Post-Soviet period: significant increase of land degradation; lack of a system for regular servicing of irrigation networks due to a lack of financial means; wear of irrigation systems (30-70%); considerable water losses during transportation.
Intensive cropping. Pre-Soviet (Tsarist) period: land degradation is not an issue; agriculture covers the food needs of only local communities; absence of considerable export of goods. Soviet period: minor increase of land degradation; planned crop distribution; aspiration to receive maximum yields; regional crop specialization against the background of observing crop rotation. Post-Soviet period: significant increase of land degradation; farmers' desire to get maximum harvests from small land plots; lack of crop rotation.
Over-grazing and erosion of pasture lands. Pre-Soviet (Tsarist) period: land degradation is not an issue; traditional cattle-breeding based on distant-pasture grazing; migration of livestock between pastures. Soviet period: land degradation is not an issue; controlled increase of the livestock population; developed system of distant-pasture livestock production with equipped and water-supplied grazing stations. Post-Soviet period: significant increase of land degradation; destruction of pasture water supply infrastructure; cattle grazing only near settlements; uncontrolled increase of the livestock population.
As we can see from Table 1, different management systems influence land degradation differently. Some issues emerged during the Soviet period and increased significantly in the post-Soviet period. In our research, we suggest that the highest impact is exerted by changing land and water rights, which we will discuss in detail in the next chapter. Water and Land Rights Land governance and management of land resources have been well studied by Abdullaev and Rahmatullaev, Spoor, Veldwisch, and Kandiyoti [19,22,32-34]. Their research focused on the evolving institutional changes affecting land and water use in rural Central Asia. The transformation of land and water rights and their implications for the state of land resources are analyzed in this paper. Despite having different forms in different countries, land ownership in all countries can be classified in terms of land transactions and use. Unclear land ownership status can influence land use sustainability. For example, a lack of responsibility of individuals within collective land use can undermine their incentives to contribute to collective action [35]. Institutional development and decision-making in the sphere of land management represent important components of building an effective management system and sustainable use of land resources. In the Soviet period, land governance in Central Asia was traditionally the prerogative of the state. The transition of national agricultures from centralized collective to private farms induced significant institutional changes. Land reforms created a certain vacuum among farmers with respect to agricultural services that were previously provided by the state. The changes in land ownership in the Soviet period led to changes in decision making on land use, preservation, and production. Land ownership was de-coupled from water rights during collectivization. That constituted a major change from the Tsarist-period model in irrigated agriculture and had an extremely significant long-term impact on irrigated land management. Preceding water-land rights systems were replaced by the new land production functions [11,22,33], which became the main cause of land degradation in the region. Land lost its value as a private asset. Land ownership by collective farms and the state did not promote private interests, so farmers lost motivation to protect land and use it most effectively. State regulation of land transactions was rather strict, not allowing any private investment, although agricultural productivity was relatively high. According to Hodgson [35], the most effective management system is one in which land and water rights are linked.
Yet, over the course of the last two decades, we have witnessed further disintegration of these linkages in CA countries. Under the former collective farms model, land and water rights were both in the hands of the state, and water planning and supply were the responsibility of a single organization: the collective farm. At present, water rights are still vested with the state, but land rights in different forms became individualized and are administered by thousands of farmers. This makes water management and planning scattered. Although Water Users Associations (WUAs) were established to replace former collective farms, they are still weak and incapable of proper water resources planning and management. Therefore, farm-level water management is not streamlined and lacks a single institutional agent, giving ground for uncertainty and competition for water resources among land users [24]. The initial phase (1990-2000) of land reforms in Central Asian countries was accompanied by economic recession, growing unemployment, and greater reliance on the domestic economy. The agricultural sector became the "shock absorber" while national populations were trying to ensure their livelihoods [32]. National governments took on a "gradualist" approach to institutional reforms, attempting to control the production of leading export crops, avoid rapid unemployment, and provide decent living conditions for rural communities [36]. After gaining independence, national water management systems faced similar challenges. Irrigation networks were constructed at a time when sovereign boundaries did not affect the decision-making of the engineers and politicians concerned. After the collapse of the Soviet Union, the new governments tried to manage their land and water considering the challenges of importing water from neighbors and sharing the existing transboundary network. Thus, during the years of independence, the linkages between water management, land rights and new agricultural policies obtained various novel forms and combinations (Table 2). Table 2. Linkages between land ownership, agricultural policies and water systems (Abdullaev, 2016). Column headings: Land Ownership; Agricultural Policies; Water Systems. The land-ownership column includes categories such as individual ownership and mid- and long-term leases (49-100 years). Different correlations of the identified types of linkages between land ownership, agricultural policies and water systems are present in all Central Asian countries. This variety led to the irrational use of water and land resources, and consequently to increasing land degradation. Several different management types are presented below. The first type of system is found mainly in Uzbekistan and partly in Kazakhstan. After the collapse of the Soviet Union, water and land management systems in Uzbekistan more or less corresponded to the previous Soviet model, i.e., hierarchic and centralized, based on a top-down approach [18]. Water remains a state-owned resource and is distributed via state-owned networks [37]. The central planning system still exists and the state controls farmers by ordering crop production quantities. At the same time, Uzbekistan has been developing its trade and has legalized three different schemes of agricultural production: state-ordered production (cotton, wheat); commercial production (rice); and household production of food crops [33,38]. The second type of system is exemplified by Kazakhstan.
After the collapse of the Soviet Union, land in Kazakhstan was divided into conditional land shares (CLSs) between the members of the former Sovkhozes and Kolkhozes based on a long-term lease (initially for 99 years; later changed to 49 years). CLSs were issued as "undefined common shares" and farmers could be unaware of the exact location and shape of the land plots to which they were entitled. Simultaneously, the water fund remains under state ownership. The government of Kazakhstan encourages the establishment of large farm enterprises and supports them. In the north of the country, large farms still exist and operate similarly to the Soviet "collective farms" model. Multiple land shareholders contributed their land shares as capital to establish such "farm enterprises" [39,40]. Re-structuring of land administration at various government levels took place, but there does not seem to be any clear process in place for the transition/transfer of obligations. The third type of system is present in Kyrgyzstan and partly in Tajikistan [41]. Land distribution in these two countries started immediately after the collapse of the Soviet system. For example, in Kyrgyzstan, the 470 Kolkhozes and Sovkhozes were split into more than 30,000 small farms [16]. Initially, the agrarian reform was largely controlled by local administrations and depended on the in situ rules designed by respective governance entities. The burden of covering the Kolkhozes' and Sovkhozes' liabilities was placed on the newly established farms, and many of them achieved profitability [15]. Thus, in Kyrgyzstan, the system of land rights sale/transfer was introduced, giving birth to the new land market. At present, wealthier households prefer renting lands from poorer ones [16]. Land and water reforms in Tajikistan are still underway. During the Soviet era, 99% of agricultural land was owned by large state and collective farms, and 1% was cultivated by households for subsistence purposes. The 1996 Land Code granted every household the permanent and heritable right to a 0.15-0.40 ha plot. Such household-garden and/or kitchen-plots were generally given to the members of state and collective farms in the Soviet era. Tajikistan's government distributed these small land plots in two phases, corresponding to presidential decrees of 1995, 1996 and 1997. Conclusions In Central Asia, land and water rights systems are closely linked. After the collapse of the Soviet Union, the newly emerged states launched their respective water and land reforms. Transformations in the sphere of land ownership and rights had and continue to have longstanding impacts on water resources management and vice-versa. Central Asian countries are making considerable efforts to control land degradation, including state control of land use and land quality, mapping of land categories, and monitoring of land degradation. Each country has a land cadaster (inventory) as a land control tool. Non-agricultural land acquisition policies are key for preventing the withdrawal of productive lands from agricultural use and recovering degraded land. During the Soviet period, land in CA was deemed a means of production. The post-Soviet policies of Central Asian states mostly focus on food/crop exports and/or food self-sufficiency. Although recent reforms in the regions' countries resulted in individual or private ownership, land issues are still acute. In addition, the changes in the water sector did not produce sustainable links with land management. 
As a result, water rights are de-linked from land rights. Therefore, water resources planning and use models do not correspond to the actual condition and productivity of land. Mono-cropping and price/market-driven land use are still the mainstream land-use policies in CA countries with the governments focusing on production volumes. The function of monitoring the land condition lies with the same ministry which is responsible for agricultural production, i.e., the Ministry of Agriculture. Thus, state agricultural agencies are focusing more on promoting state agricultural production policies than land reclamation and rehabilitation efforts. Agricultural policies are vital for designing and implementing land conservation measures. Coping with large-scale anthropogenic land degradation requires a shift in land governance and management policies and practices. Ownership schemes, land rights, linkages between water and land rights, and the introduction of incentives to preserve land resources all constitute important institutional factors. Nonetheless, Central Asian countries are still promoting policies that neglect land protection and the concept of "land as a production unit", resulting in the volume of degraded land growing every year. The consequences, particularly in the contexts of economic and social aspects of land degradation, may help to see the issue as an institutional one and understand its socio-political scope. To reverse the trend, certain elements of land management require change allowing the application of economic incentives and building knowledge and capacities of immediate land-and water-users. New institutional approaches and solutions such as Water Users Associations and joint planning of land-water resources are the overarching factors to improve the situation in the land sector. Strong and sensible efforts to re-link water management and land ownership could also foster improved land degradation control. Author Contributions: I.A. was responsible for the paper's concept and review of water and land policies. E.S. focused on the data analysis and wrote the sections on land degradation. T.R. gathered and analyzed data and linkages between land and water rights. Funding: this research received no external funding. Acknowledgments: the authors express their gratitude to the Regional Environmental Centre for Central Asia (CAREC) for the opportunity to conduct this research. Conflicts of Interest: the authors declare no conflict of interest.
2019-04-27T13:10:55.014Z
2018-09-14T00:00:00.000
{ "year": 2018, "sha1": "f4d562eea8999ce7f27d96fd9cf3f66de588f796", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4441/10/9/1242/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "6c41f99decaf4279a84f4c4f09eeae8084365e92", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Geology" ] }
259763324
pes2o/s2orc
v3-fos-license
Blending Creative Approaches to English Language Learning: Shaping Critical Thinkers. The primary objective of this study is to assess whether or not there has been an increase in the students' capacity for creative and critical thinking as a direct result of the focus that has been placed on critical thinking and communication. The following hypotheses will guide our research: (H1) that original thinking is not included in the prescribed syllabus at the graduate level; and (H2) that Paul's Elements and Standards (E&S) of critical thinking can promote creative writing skills among graduate Arab learners in the Department of English & Translation at Ar Rass, Qassim University. Both quantitative and qualitative research approaches were used by the researchers in this study with a cross-sectional design. Quantitative analysis was performed on a total of two hundred forty (240) research papers. Twelve instructors from the English and Translation Department at Qassim University's Ar Rass campus contributed the descriptive information that was used. A Paired Samples t-test was carried out for the purpose of investigating the hypotheses. The null hypothesis was validated at the p ≤ 0.05 threshold of significance. This entails that the curriculum for the Bachelor of Arts degree must include some forms of innovative problem solving. The second hypothesis was validated at both the p ≤ 0.05 and p ≤ 0.01 thresholds of significance. That is to say, Paul's E&S line of thinking can be incorporated into Research Writing in order to nurture and support students' creative thinking. I. INTRODUCTION This study examines graduating Arab students of English and Translation at Qassim University to see whether or not they are able to think creatively or in a novel manner. Researchers employ Bloom's Taxonomy (2019) as the organizing principle for their research into creative thinking in order to better equip Arab students with the ability to produce original ideas. This study investigates the question of whether or not the English curriculum at the graduate level at Qassim University poses any obstacles to original thought, and it then makes recommendations for how to more effectively incorporate original thought into English language instruction in order to foster creative writing abilities among Arab graduate students. During the winter, spring, and summer of the academic year 2020-21, a total of (140) students who were in their fourth year at the Ar Rass English and Translation Department at Qassim University made responses. The primary objective was to investigate whether or not Arab students who are studying at the graduate level exhibit signs of original thought in their writing and whether or not the curriculum that is intended for these students genuinely supports unique thought. The participants in this study were given the task of writing essays (n = 280 essays), and the goal of the research was to establish what percentage of participants were capable of coming up with original ideas and concepts. In order to assess the hypotheses, we made use of both descriptive statistics and a t-test on paired samples. Taking the findings into consideration, the research offered some suggestions for developing inventive teaching strategies in ELT (English Language Teaching) programs at the graduate level.
It was hypothesized that the English language professors working in the Department of English and Translation at Ar Rass, Qassim University could play a significant role in the development of self-reflective linguistic habits of mind in the students who were expected to obtain a BA degree in English language and translation. Students are likely to increase both their language abilities and their overall level of competency if they are able to combine their writing with innovative thoughts. Research Questions The research mainly aims to obtain answers to the following questions in order to help Arab graduate students enhance their creative writing abilities: a) Does the prescribed course of study present a sufficient challenge to the breadth of original thought at the graduate level? b) How might innovative thought be incorporated into ELT classrooms? Hypotheses The following hypotheses have been proposed by the researchers to gain a deeper understanding of these study topics: (H1) Graduate level curriculum does not contain any opportunity for creative problem solving. (H2) Paul's E&S of critical thinking can be beneficial to creative writing among graduate Arab students at Qassim University. II. LITERATURE REVIEW The great Greek philosopher Socrates is credited with popularizing the method of original thought known as Socratic questioning, which was used by many of his students and disciples to guide ancient logic and is still utilized by modern linguists today. Dewey (1933, p. 6) was the first scholar to bring the concept of original thought into the classroom with his definition of critical thinking as "active, persistent, and careful consideration of any belief or supposed form of knowledge in the light of the grounds on which it is based; and the further conclusion to which it tends". Paul et al. (1993, p. 56) also identified comparable abilities, such as to "judge the credibility of sources of information," "analyze or evaluate arguments, interpretations, beliefs, or ideas," and "create or assess solutions". Based on the three fundamental domains proposed by Benjamin Bloom in 1956 and some of his followers, ELTs (English Language Teachers) can split education into three broad groups: 1. Knowledge, which comprises the individual's mental (cognitive) capabilities. 2. Character, which is developed via inner growth and maturation. 3. The capacity to move one's own body or to motivate oneself; the psychomotor talents fall under this group. As they pursue their educational goals, students should keep these three areas in the forefront of their minds (Bloom, 1956). The purpose of this study is to provide a solution to the question, "Does the prescribed curriculum promote the goals of the learning process?" and, if the answer is no, to provide some suggestions for how Saudi students of English can be encouraged to engage in creative problem-solving. This study therefore also seeks to determine whether students at the Department of English and Translation in the College of Science and Arts at Ar Rass (Qassim University) have developed proficiency in one or more of these three areas by the time they graduate.
The findings of this study provide credence to the concept that students' capacity to think creatively when writing essays not only enriches their experience of learning a language but also affords them an opportunity to learn more about themselves. According to Day (2003, p. 26), characteristics that are beneficial to creative thought include "the use of intuition, creating unusual connections, originality, flexibility, objectivity, reason, and willingness to take chances". Openness, curiosity, and fearlessness are just a few examples of the kinds of personality attributes that foster the development of creative or original thought. These characteristic features are assigned by psychologists and educators to a mode of thought known as "divergent thinking," in which an individual's thoughts and reasoning are allowed to "roam" freely and evaluate a number of different approaches to a given situation. The ability to think in unexpected ways may be taught, just like any other skill. Lipman (2003) asserted that, as opposed to merely guiding students through the many lessons in the curriculum, the fundamental responsibility of teachers should be to cultivate students' capacity for critical thinking. According to Brown (2004), the goals of an ideal academic English program should go beyond language issues and nurture the skill of original thought rather than simply focusing on the language itself. Teachers of a language have a responsibility to advance their students higher along Bloom's Taxonomy (1956) of learner tasks in order to properly teach the language. According to Bloom's taxonomy of learning, developed in 1956, the knowledge gained during the process of learning can encompass not only the development of mental or cognitive abilities but also the recognition or recall of facts that have been stated in the past. According to Bloom's taxonomy, the six categories that make up this domain are listed below, from the least difficult to the most challenging.
a) Knowledge: being able to recall specifics such as statistics or facts.
b) Comprehension: being able to understand what is being said and being able to translate, extrapolate, interpret, and apply what is stated to find solutions to problems, describing a situation in a particular setting using one's own terminology.
c) Application: making use of a concept in a novel setting, or spontaneously using an abstraction in a setting where it was not originally presented. The student is required to apply classroom information in unexpected professional settings.
d) Analysis: deconstructing information or ideas into their component parts so that their structure can be understood; examining the information or data to determine its patterns, recognizing the difference between facts and assumptions, and acting accordingly.
e) Synthesis: putting together a whole from its component elements; constructing a final product out of its constituent parts while keeping an eye out for any innovative linkages or frameworks that might emerge.
f) Evaluation: judging the merits of something, such as an idea or a piece of information.
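The ordering of these six cognitive levels can be made concrete with a small data structure. The sketch below is purely illustrative: the level names follow the Bloom categories just described, but the example prompts, the tagging, and the cut-off used to flag "creative" tasks are hypothetical assumptions, not part of the study.
from enum import IntEnum

class BloomLevel(IntEnum):
    # Ordered from least to most cognitively demanding, per Bloom (1956).
    KNOWLEDGE = 1      # recall facts and statistics
    COMPREHENSION = 2  # translate, extrapolate, interpret
    APPLICATION = 3    # use a concept in a novel setting
    ANALYSIS = 4       # break ideas into parts, separate facts from assumptions
    SYNTHESIS = 5      # assemble parts into a new whole
    EVALUATION = 6     # judge the merits of an idea or piece of information

# Hypothetical essay prompts tagged with the highest Bloom level they demand.
prompts = {
    "Summarise the author's argument": BloomLevel.COMPREHENSION,
    "Compare two solutions to global warming and defend one": BloomLevel.EVALUATION,
}

# On this reading, a curriculum that never asks for levels at or above ANALYSIS
# offers little room for original (creative/critical) thought.
creative = {prompt: level >= BloomLevel.ANALYSIS for prompt, level in prompts.items()}
print(creative)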
In order for students to become innovative and critical users of the English language, it is proposed that teachers of English may employ a variety of different teaching strategies, some of which may involve exercises that encourage practicing some forms of original thought. This enables students to learn the language, and it can be accomplished with the support of skills in interaction, analysis, and criticism. In spite of the broad recognition that critical thinking skills are important, their use is restricted for a number of reasons, one of which is the absence of defined levels of thinking in ELT. In order to counteract this difficulty, English language instructors frequently resort to exercises that push students to interact with one another, think imaginatively, and communicate with one another. As a result, students would develop the key skills necessary for learning a language. This idea is captured in the claim that "every student who learns the logic of a discipline must build that logic in his or her own mind." There is no way to generate the logic for the learner or to simply "give," "transfer," or "inject" the logic in prepackaged form; rather, each step of the production process requires the presence of critical thought and judgment. The process of learning is not instantaneous; rather, students should make an effort to use their own thoughts to critically scrutinize and analyze the information that is presented to them, which will ultimately lead to the construction of their own personal understanding of the language (Wallace, 2005, p. 67). Students in the Department of English and Translation at Qassim University are expected to be able to think critically and creatively by analyzing, synthesizing, and evaluating the information they encounter. The communicative approach is the most effective method for stimulating the learning of English, because the language contains four core abilities (speaking, reading, listening, and writing) and teaching them is based on the application of analytical thought. According to Moore et al. (2001, p. 1), "Critical thinking presents students with the opportunity to strengthen their language abilities communicatively". This is due to the fact that, in their words, "reading is viewed as actively constructing meanings on the basis of the material," which requires the reader to investigate and evaluate the concepts included within the text. It is an excellent tool for generating ideas for any form of writing and finding connections between different ideas. According to Hare (1998, pp. 41-42), the Communicative Teaching Approach and creative thinking have the following goals: 1. Making an effort to encourage the interpretation, expression, and negotiation of meaning, which is an endeavor that requires the participation of the students. 2. Inspiring students to participate in meaningful dialogue by posing questions for clarification, expressing their own viewpoints, and expressing whether they agree or disagree with the perspectives of their peers. 3. Enabling activities in the classroom that foster the students' individual language development. 4.
Students are better able to reap the benefits of the interplay between various linguistic features when their language learning experiences are placed into bigger settings, such as units of conversation. Examining the English essays produced by the students (240 essays) to determine whether or not the students' use of critical thinking and communicative approaches has resulted in an increase in creative problem solving is the major objective of this research. In addition, the development of writing talents requires the development of two key sub-skills: the ability to organize information and the ability to formulate ideas. Finding linkages, arranging issues, and generating connections between ideas are key components of both creative thinking and good writing. Furthermore, achieving these objectives requires original thought (Epstein, 2019, p. 73). III. METHODOLOGY Two hypotheses guide this research: H1: Graduate-level coursework does not include opportunities for original thought; and H2: Paul's E&S of critical thinking can improve creative writing abilities among Arab students at Qassim University. Both assumptions were tested over the course of ten months of research conducted in the Department of English & Translation at Ar Rass, Qassim University. The population of this study consisted of senior students (n = 140). Over the course of three seasons (Fall, Spring, and Summer), the (140) students submitted a total of (280) essays that were assessed for evidence of creative writing. Teachers of English in the Department of English Language and Translation were given a Likert-scale close-ended questionnaire to assess whether or not the current curriculum of English taught at the graduate level at Qassim University promotes original thinking among Arab learners during their graduation years. The Statistical Package for the Social Sciences (SPSS) was used to measure the level of original thinking among these Arab learners. Careful analysis of the collected data revealed the Arab students' views on the limits of creative thinking and brought attention to the efficacy of the required graduate-level curriculum in English language teaching. In this cross-sectional study, the researchers employed both quantitative and qualitative techniques. Using the quantitative approach described by Paul's Elements and Standards (E&S), a random sample of (240) English essays was assessed in five areas:
1. Readability of the Text
2. Evaluation of the author's argument
3. Evaluation of the author's use of supporting evidence
4. Evaluation of the paper's overall organization (coherence and cohesion)
5. Evaluation of grammar and syntax
Originality in scholastic research writing was evaluated by pre- and post-tests. Prior to Post-tests I and II, students in the course "Research Methods" (code: ENG 446) were instructed in research writing using Paul's E&S of original thought during the Fall, Spring, and Summer of the 2020/21 academic year. Twelve instructors from the English and Translation Department of Ar Rass, Qassim University provided descriptive information. In order to put H1 to the test, we gave each teacher a 5-minute interview in which we asked whether or not the required curriculum includes elements that encourage creative thinking among Arab students.
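The five-area essay assessment described above can be made concrete with a short sketch of how such rubric scores might be recorded and aggregated on the 0-4 grade-point scale used later in the results. The rubric names follow the five assessment areas listed above, but the scores, the aggregation, and the achievement-band cut-offs are hypothetical illustrations, not the study's data or instruments.
import statistics

RUBRICS = ["clarity", "argument", "support", "organization", "grammar"]

def essay_gp(scores: dict) -> float:
    """Mean grade points (0-4 scale) across the five rubrics for one essay."""
    return statistics.mean(scores[r] for r in RUBRICS)

def band(gp: float) -> str:
    """Hypothetical cut-offs for Low-, Mid- and High-range achievers."""
    if gp >= 3.0:
        return "High-range"
    if gp >= 2.0:
        return "Mid-range"
    return "Low-range"

# Example: one pre-test essay scored on the five rubrics (made-up values).
pre = {"clarity": 2, "argument": 1, "support": 1, "organization": 2, "grammar": 2}
gp = essay_gp(pre)
print(gp, band(gp))  # 1.6 Low-range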
The first and second post-tests were designed to evaluate H2, which hypothesizes that Arab students can be taught to think creatively through the development of research writing skills. IV. RESULTS AND ANALYSIS Both quantitative and qualitative approaches were taken to complete this project's objectives. For the purpose of administering the five-point scale questionnaire, a sample group consisting of twelve educators was chosen. They were given the task of writing down their thoughts on whether or not skills in critical thinking were included in the required curriculum for the Bachelor of Arts degree. For the purpose of putting hypothesis H1 to the test, descriptive statistics and the paired-samples t-test were utilized. Using a quantitative method, we were able to measure the amount of progress made between Post-test I and Post-test II. In order to assess the degree of progress in original thought brought about by research writing, a sample of 140 subjects was selected. During the pre-test, the participants were given prompts on contemporary topics such as global warming, suicide bombing, the message of Islam, smoking, school punishment, and whether or not computers can take the role of teachers. They were required to write between 200 and 250 words on each topic. The research searched for indications that the participants had improved their composition skills, such as greater clarity of writing, level of analysis, use of supporting information, arrangement of ideas, and accuracy of grammar and syntax. Quantitative analysis of the subjects' writing abilities was performed with the help of a rubric that Paul (1997) had developed. Data Analysis Table 1 shows data gathered from the English language teachers. Cronbach's alpha shows a 0.60 reliability level for the questionnaire. The Mean Score (MS) was (41.12) with a Standard Deviation (SD) of (15.67). The t-test value (-15.67) was found to be significant at p ≤ 0.05. The result was also found significant at the p ≤ 0.01 level of significance. This outcome disproved hypothesis H1 and provided proof that the mandated curriculum is structured in such a way that it can improve learners' original thinking if and only if it is taught effectively. In order to generate a triangulation in the results and test the hypotheses discussed earlier, quantitative data was collected from a total of (140) subjects. In order to obtain the results, we carried out three separate tests: the Pre-test, Post-test I, and Post-test II. In Table 2, the data was quantified using a scale that ranged from 0 to 4 grade points for Low-range Achievers, Mid-range Achievers, and High-range Achievers, respectively. Between the Pre-test and Post-tests I & II, as well as between Post-tests I & II, we used descriptive statistics (DS) and paired-samples (PS) t-tests to examine the effects of the critical thinking instruction provided through English essay writing (EEW). Table 3 and Figure 1 show the comparison between the five rubrics over three executions: Pre-test, Post-test I and Post-test II. On the pre-test, the score on all of the available rubrics was lower than (2.00 GP), with the exception of the Clarity rubric; the Support rubric had the lowest score (1.47 GP). During Post-test I, the score for each of the five categories of measurement was above (2.00). The Support category received the lowest score (2.03 GP), while the Clarity category received the highest score (2.43 GP).
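As a rough illustration of the reliability and significance figures reported in this analysis, the sketch below computes Cronbach's alpha for a respondent-by-item matrix of Likert scores and runs a paired-samples t-test between pre-test and post-test grade points. The arrays are randomly generated placeholders; the study itself used SPSS, so this is only an illustration of the calculations, not the original analysis code.
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """items: respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical Likert responses from 12 teachers on 10 questionnaire items.
rng = np.random.default_rng(0)
likert = rng.integers(1, 6, size=(12, 10))
print(f"Cronbach's alpha: {cronbach_alpha(likert):.2f}")

# Hypothetical per-student grade points on the pre-test and Post-test I.
pre = rng.normal(1.8, 0.4, size=140)
post = pre + rng.normal(0.5, 0.3, size=140)
t, p = stats.ttest_rel(pre, post)
print(f"paired t = {t:.2f}, p = {p:.4f}")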
On Post-test II, the score for all of the rubrics was higher than (2.50 GP), with the exception of Grammar, which received a score of (2.37 GP). The Clarity category yielded the highest score (2.87 GP). Over the course of the three iterations, the participants' level of critical thinking ability in EEW showed steady progress. A comparison of the test scores of those who scored in the high range, those who scored in the mid range, and those who scored in the low range is presented in Figure 2. The cumulative score for the High-range achievers on the pre-test was (3.64 GP), the score for the Mid-range achievers was (2.50 GP), and the score for the Low-range achievers was (2.15 GP). On Post-test I, the High-range achievers received a score of (3.67 GP), whereas the Mid-range achievers received a score of (3.25 GP), and the Low-range achievers received a score of (2.90 GP). On Post-test II, those who achieved in the High range recorded a grade point total of (1.20) in their critical thinking ability across all five rubrics over the tests. The students who scored in the middle of the distribution showed a significant improvement in critical thinking on the first post-test, but on the second post-test, their performance was relatively unchanged. V. DISCUSSION The Low-range achievers experienced a notable shift in their original thinking capacity on the five rubrics throughout all of the tests, whereas the High-range achievers maintained a performance that was rather consistent despite having the highest grade point total (3.75). Mid-range achievers exhibited a significant improvement in their critical thinking during Post-test I, but during Post-test II their performance was relatively unchanged. According to the findings of the study, the use of critical thinking pedagogy had the greatest impact on Low-range achievers, followed by Mid-range and then High-range achievers. Low-range achievers had a low affective filter for the assimilation of critical thinking pedagogy. High-range achievers, on the other hand, exhibited a high affective filter, which prevented them from making a major development in their critical writing skill. It was hypothesized that the Low-range performers gained the most from the original thinking pedagogy, followed by the Mid-range achievers, and then the High-range achievers. The Low-range achievers had high motivation, high self-esteem, and a low emotional filter, all of which assisted them in improving their critical writing ability. VI. CONCLUSION The purpose of the current research was to find answers to two questions: (a) to what extent does the graduate curriculum challenge students to think critically, and (b) how can unique thinking be integrated into ELT to enhance creative writing skills among Arab graduate students? A Paired Samples t-test was carried out for the purpose of investigating the hypotheses. The null hypothesis was validated at the p ≤ 0.05 threshold of significance. What this entails is that the curriculum for the Bachelor of Arts degree must include some form of innovative problem solving. The alternative hypothesis was supported when p ≤ 0.05 and p ≤ 0.01 were used as significance criteria. That is to say, developing and supporting students' ability for creative thought can be facilitated by bringing Paul's E&S of reasoning into the teaching of English Essay Writing. This can be done in a number of different ways.
The results of the students drastically improved as a direct consequence of being instructed to think creatively for the purposes of their research writing (mean score of 41.26). There was a statistically significant difference between Post-test I's results and Post-test II's results. To put this another way, this demonstrates that improving students' critical thinking skills through the use of Paul's E&S of original thought in the context of the English Research Writing curriculum is beneficial. The subjects demonstrated a consistent improvement in their critical thinking skills between the first and second post-tests that were administered to them. Low-range performers saw improvements in critical thinking that were much lower (1.20 cumulative GP) compared to High-range and Mid-range achievers. The students who scored as Low-range Achievers demonstrated significant growth in their capacity for innovative thinking across all five rubrics, whereas the students who scored as High-range Achievers maintained a rather consistent performance across all examinations. At the end of the first post-test, the Mid-range achievers' critical thinking had greatly improved, but at the end of the second post-test, it had not changed at all. These conclusions are in line with those obtained by Ennis (1991), Fairclough (2001), Brown (2004) and Cottrell (2005).
2023-07-12T06:22:45.309Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "7b315f1a2aecc632807f9ea68e7a4d8b677deace", "oa_license": null, "oa_url": "https://jltr.academypublication.com/index.php/jltr/article/download/6427/5165", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7408392434af191659768f3fb287eca54f835ddc", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
51948721
pes2o/s2orc
v3-fos-license
Biomathematical Analysis of the Liver Fibrosis Liver fibrosis is the final common stage of most chronic liver diseases; it is caused by several factors and leads to a major worldwide health care burden. Over the decades, the understanding of liver fibrosis has grown rapidly, and several studies have reported that this process can be regressed or reversed, which gives a bright prospect for developing anti-fibrotic therapies. In this experiment, liver fibrosis was fully developed after CCl4 induction for 7 weeks in eight animals. Clinical pathology parameters and four indicators of hepatic fibrosis in monkeys showed changes similar to those in humans. All animals had liver fibrosis after 1.5 months of CCl4 induction, and liver fibrosis still existed after a 9-month recovery period; the fibrosis stages in most animals showed no obvious regression without treatment. Biomathematical analysis of liver fibrosis would aid in utilizing anti-fibrotic therapies and their derivatives for various biomedical applications. Introduction Liver fibrosis is defined as an abnormal response of the liver to persistent injury, characterized by the excessive accumulation of collagenous extracellular matrices (ECMs), and therefore involves both wound healing and fibrotic processes [1-3]. The repair process begins right after liver injury and can take either of two distinct paths: a regenerative path, in which injured cells are replaced by cells of the same type; or a path in which connective tissue replaces normal parenchymal tissue in an uncontrolled fashion, which is known as fibroplasia or fibrosis [4-8]. Persistent injury causes uncontrolled repair processes, leading the damaged tissues/organs to undergo substitution by over-abundant ECM and to suffer from extensive, pathological fibrosis [3]. The onset of liver fibrosis is usually insidious; advanced liver fibrosis results in liver failure and portal hypertension and is associated with an increased risk of liver cancer [9]. Severe end-stage liver disease (cirrhosis or hepatocellular carcinoma) is associated with morbidity and mortality, and orthotopic liver transplantation is often indicated as the only effective therapy [10]. However, liver transplantation has several disadvantages: shortages of organ donors, the commitment of recipients to lifelong toxic immunosuppression, and recrudescence of the original disease in transplant recipients; therefore, effective antifibrotic treatments are an urgent unmet medical need [11,12]. Liver fibrosis research can be assigned to two broad groups: in-vitro models, including cell culture models [13,14] and human tissue culture [15], and in-vivo experimental animal models. Cell behavior and the effects of specific mediators can be studied in in-vitro models, but these clearly cannot recapitulate the events that occur in vivo. Liver fibrosis is a developing disease with potentially dynamic processes that result from the complex interplay of resident and incoming cells in a microenvironment. Animal models have been used for several decades to study fibrogenesis and to validate anti-fibrotic effects of potential therapeutic approaches [16,17]. Animal models allow for (i) comprehensive study of questions that may not be addressable in human studies, (ii) multiple sampling at strategic times during the development vs. resolution phases, and (iii) experimental testing with restriction to a minimal number of variables [18].
Current animal models in liver fibrosis research fall into four main categories. The first category works via the cholestatic mechanism that damages the biliary epithelium and includes the surgical bile duct ligation model [19], gene knockout or transgenic models [20,21], and dietary models based on feeding 3,5-diethoxycarbonyl-1,4-dihydrocollidine (DDC) or α-naphthylisothiocyanate (ANIT) [22,23]. The second category is induced by hepatotoxins such as CCl4 [24], thioacetamide (TAA) [25], or dimethylnitrosamine (DMN) [26], which belong to the toxin-induced liver models. The third category is activated by metabolic liver injuries, including both alcohol-induced fibrosis and NASH-associated fibrosis [27-30]. The fourth category is induced by autoimmune responses via injecting heterologous serum to elicit liver fibrosis [31]. Most of these models were established in rodents. Although rodent models can mimic liver fibrosis development to some extent, several differences between mice and humans need to be taken into consideration, such as the different number and proportion of distinct immune cell populations in the liver and the different marker molecules used to identify corresponding immune cell subsets [32]; in addition, diversity in RNA expression reflects the fundamental physiological differences between mice and humans [33]. Studies revealed that the subsets of circulating classical and non-classical monocytes show very different ratios in humans (90%:10%) and mice (50%:50%) [34]. Nonhuman primates are essential and irreplaceable animal models in human disease research because of their genetic, anatomical and physiological similarity to humans. High-fat diet and/or CCl4-induced rodent liver fibrosis has been widely investigated [24,35], but few studies report monkey liver fibrosis. An alcohol-induced liver fibrosis model was developed in rhesus monkeys, which took 3 years [36]. Another study combined CCl4 subcutaneous dosing with a chronically fed high-fat diet and alcohol in drinking water for 16 weeks to establish a liver fibrosis model in cynomolgus monkeys [37]. Both studies used alcohol as a major inducer. In order to establish a non-alcoholic liver fibrosis monkey model with a single stimulus within a reasonable time frame and to selectively target the liver, we chose to deliver CCl4 through the portal vein. Animals and husbandry Cynomolgus monkeys (3-6 years, 3-7 kg) were provided by Hainan Jingang Biotech Co., Ltd. All animals were single-housed in stainless steel cages equipped with a bar-type floor and an automatic watering valve; these cages conform to standards set forth by the US Animal Welfare Act. The rooms were maintained at 40% to 70% humidity and 18 °C to 29 °C, with 10 to 20 air changes per hour and a 12-hour light/dark cycle. A regular or high-fat diet and fresh fruit were fed daily. Protocols for all the animal studies were approved by the Institutional Animal Care and Use Committee (IACUC) (WuXi AppTec Co., Ltd, Suzhou, Jiangsu province, The People's Republic of China). Experiment Animals underwent portal vein cannulation surgery. Briefly, animals were anesthetized through tracheal intubation with isoflurane during surgery; each animal lay on its back and was sterilized generally in the operation area, the portal vein was exposed, and a branch of the mesenteric vein at the far end was selected. A PE catheter was cannulated into the portal vein. After securing the catheter, the other end of the catheter was connected to a heparin cap to keep the catheter unobstructed. The heparin cap was placed subcutaneously in the muscle layer.
After a 20-28 day recovery period, the animals were ready for use. Eight convalescent portal-vein-cannulated animals were assigned to this experiment. Animals were dosed with CCl4 formulated in PEG 400 (400 mL/L) via intravenous bolus injection into the portal vein. Animals received an escalating dosage of 0.1 mL/kg once weekly, 0.1 mL/kg twice weekly and 0.15 mL/kg twice weekly (Figure 1); all animals were put into the recovery phase after the last dose. Blood samples were collected before dosing and at weeks 1, 2, 4, 6, 8, 12, 24 and 46 after the first dose; all blood samples were collected from a peripheral vessel into commercially available tubes containing potassium (K2) EDTA or plain tubes with separating gel, before CCl4 dosing on the specified day. Serum samples were stored at -60 °C or lower until analysis. Liver biopsy and ultrasound B examination were conducted in this experiment. Animals were anesthetized with ketamine hydrochloride (10 mg/kg) and placed on their backs, the area was sterilized appropriately, ultrasound B (Vet-M7, Mindray) was used to keep away from large vessels and the gall bladder, and an automatic biopsy gun (Acecut 14G x 115 mm, TSK, Japan) was then inserted to collect liver tissue. After the procedure, animals were observed daily by an experienced technician until recovery. Sample analysis Whole blood samples (anticoagulated with EDTA-K2) for hematological parameters were analyzed by an automatic analyzer (ADVIA 2120, Siemens). Serum samples for clinical chemistry parameters were analyzed by an automatic analyzer (HITACHI 7180, Hitachi High-Tech Science Systems Corporation). Serum samples for the four indicators of hepatic fibrosis (laminin (LN), hyaluronic acid (HA), collagen type IV (CIV), and N-terminal propeptide of type III collagen (PIIINP)) were determined by the radioimmunoassay (RIA) method on an ADC CLIA 400 automatic plate immunoassay analyzer (Autobio). The HA, LN, and PIIINP parameters increased from 72.8±21.6 ng/mL to 136±32.0 ng/mL, from 201±16.9 ng/mL to 299±28.8 ng/mL, and from 26.1±5.27 ng/mL to 49.5±5.94 ng/mL after CCl4 induction, respectively. HA and LN levels returned to normal after the recovery period, but the PIIINP value was still higher at week 24 than at baseline (Figure 5). The mean CIV value was 34 ng/mL in week 4; apart from that, all the other CIV values were below the limit of quantitation (15 ng/mL). Pathology examination of liver biopsy samples showed that fibrosis was found in all animals (Figure 6). Liver fibrosis persisted during the recovery period (Table 2); it did not resolve naturally without treatment. An irregular or nodular surface and blunt edges of the liver were observed under ultrasound B examination (Figure 7). Figure 7: Ultrasound liver images before induction and at 1.5 months, 3 months and 11 months after induction. 7a) Clear liver edge, smooth envelope, uniform echo from the liver parenchyma; the structure and track of the vessels are normal. 7b) Obtuse and thick liver edge, coarsened parenchymal echo, increased liver volume and dilated portal vein. 7c) Enhanced punctiform echo in the parenchyma, rough liver edge; the branch of the portal vein is indistinct and the vein wall is blurred. 7d) Strong echo structure in the parenchyma, thickened liver edge. Discussion The kinetics of fibrosis development can be roughly divided into three phases: acute injury, initiation of fiber formation and advanced fibrosis [39]. CCl4 is metabolized by hepatocytes, giving rise to toxic trichloromethyl (CCl3) radicals via CYP2E1, an enzyme expressed in perivenular hepatocytes.
It thus induces an acute centrilobular necrosis which triggers a wound healing response: 1. recruitment of phagocytic and inflammatory cells to clear necrotic zones; 2. activation of fibrogenesis and increased ECM; 3. proliferation of parenchymal and non-parenchymal cells to replace dead cells, which would restore liver integrity. When the insult is repeated, successive rounds of wound healing occur prior to resolution of the previous one, resulting in fibrosis accumulation [18]. All animals developed liver fibrosis after CCl4 administration via the portal vein. Hemolysis can be induced rapidly when CCl4 is injected quickly into the portal vein, and liver cell necrosis can reduce the liver's ability to metabolize and excrete bilirubin, leading to a buildup of unconjugated bilirubin in the blood. Liver fibrosis evaluation methods can be divided into invasive and non-invasive [40]. Non-invasive methods include serum tests, RNA expression analysis and imaging techniques. These methods may be performed repeatedly, allowing for ongoing monitoring of potential fibrosis in vivo [41]. In this study, the mean ALT increased almost 20-fold after administering CCl4. ALT is released from liver tissue into the circulation in proportion to the degree of hepatocellular damage. Its level is thought to be one of the most sensitive markers of liver injury and liver disease progression [42]. The mean AST level increased less than 3-fold after CCl4 induction. ALT is predominantly found in the liver, with clinically negligible quantities found in the kidneys, heart, and skeletal muscle. In contrast, AST is found in the liver, heart (cardiac muscle), skeletal muscle, kidneys, brain, and red blood cells. Therefore, ALT is a more specific indicator of liver damage than AST. The increases in the four liver enzymes AST, ALT, ALP and GGT and in TBIL indicate liver toxicity. ALB, TP, and the A/G ratio were decreased. ALB is produced in the liver, and an impaired liver cannot synthesize it effectively and maintain the ALB level. Globulins, in contrast, are produced in the liver or the immune system, which might be the reason why GLB did not change during CCl4 induction. An AST/ALT ratio > 1 (AAR) has been proposed as a test of cirrhosis in humans [43], while another study demonstrated that the AST/ALT ratio is confounded when used in alcoholic and many other acute and chronic fatty infiltrating liver diseases [44] and is not recommended for evaluating the stage of fibrosis. Among the monkeys diagnosed with liver fibrosis, the AST/ALT ratios were below 1.0 throughout the study. The process of liver fibrosis is characterized mainly by the activation of hepatic stellate cells (HSCs), which are able to express and deposit large quantities of extracellular matrix components [45,46]. Liver ECM components, including collagen types I, III, and IV, fibronectin, undulin, elastin, laminin, hyaluronan, and proteoglycans, are higher than normal in the advanced stage [47]. HA, LN and PIIINP were increased, consistent with previous studies [48-50]. However, the N-terminal pro-peptide of collagen type III (PIIINP) level is also elevated in chronic pancreatitis [44] and HA levels may be elevated after a meal or a glucose drink [51]; they are not specific for liver fibrosis. The ideal biomarker should: 1) be specific for the liver; 2) be readily available and standardized between all laboratories performing diagnostic biochemistry/haematology; 3) not be subject to false positive results, for example due to inflammation; and 4) identify the stage of fibrosis [52].
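The simple marker arithmetic referred to above, the fold-changes in the serum fibrosis indicators and the AST/ALT ratio (AAR), can be illustrated with a short sketch. The fibrosis-indicator values below are the group means quoted earlier in the results; the aminotransferase readings are made-up numbers chosen only to show an AAR below 1.0, so this is an illustration of the calculations rather than the study's analysis code.
# Mean serum fibrosis indicators (ng/mL) before and after CCl4 induction,
# taken from the group means quoted above.
markers = {
    "HA":     (72.8, 136.0),
    "LN":     (201.0, 299.0),
    "PIIINP": (26.1, 49.5),
}
for name, (before, after) in markers.items():
    print(f"{name}: {after / before:.2f}-fold increase")

# AST/ALT ratio (AAR); an AAR > 1 has been proposed as a marker of cirrhosis,
# although it is confounded in several liver diseases (see text).
def aar(ast_u_per_l: float, alt_u_per_l: float) -> float:
    return ast_u_per_l / alt_u_per_l

# Hypothetical readings: ALT rises far more than AST, so the AAR stays below 1.0.
print(f"AAR = {aar(ast_u_per_l=120.0, alt_u_per_l=800.0):.2f}")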
Currently, no non-invasive markers are specific and capable of providing accurate information about fibrogenesis and the extent of fibrosis in the liver. Serum models such as Fibrotest [53], Fibrometer [54], Fibrospect [55] and Hepascore [56] have been used to predict fibrogenesis, but they currently cannot replace the gold-standard method, liver biopsy [57]. Fibrosis stage is assessed by the Metavir (stage 0-4) score. We found that increased fibrillar eosinophilic material (on H&E-stained slides) and red Sirius Red staining were noted in the periportal (centroacinar) area; this change was generally limited to individual lobules, but also showed extension from one portal tract to another (bridging fibrosis); in addition, a small number of pigmented macrophages (hemosiderin) and mononuclear inflammatory cells were present. However, there are some limitations when using liver biopsy evaluation. Firstly, hepatic fibrosis may not be homogenous throughout the liver; the biopsy specimen is not large enough to contain a whole hepatic lobule, and it represents only a tiny fraction of the organ. Sampling error (25%-40%) may result in poor reproducibility [58]. Secondly, it is an invasive procedure that causes pain and major complications in 40% and 0.5% of patients, respectively [59]. Thirdly, there is well-known observer variability among pathologists in categorizing the degree of fibrosis, no matter how precisely the stages are defined [60]. The liver fibrosis scores changed slightly between different months in our experiment; this mainly depends on the liver biopsy sample size and sampling location, as some histopathologic images include whole hepatic lobules, which helps in making a judgement, and it is a real challenge to evaluate the fibrosis score in images with only a partial hepatic lobule. Increasing the number of biopsy samples may decrease erroneous judgement, but it should be noted that biopsy is an invasive procedure. Many imaging techniques have emerged for liver fibrosis detection and assessment, such as ultrasound [61], computed tomography (CT) [62] and magnetic resonance imaging (MRI) [63]. The ultrasound B images showed clear changes during the induction in our study, but ultrasound only produces specific findings, has very limited sensitivity and cannot assess the fibrosis stage, especially in early and intermediate stages. CT and MRI have the same problem [64,65]. All in all, it would be better to combine both non-invasive and invasive methods for a comprehensive assessment of the fibrosis stage. Liver fibrosis reversal is still a debated topic. Administration of a neutralizing TIMP1-specific antibody decreases the collagen content in CCl4-induced fibrosis [53], and reversibility of fibrosis was found in experimentally induced cholestasis in rats [56]. In humans, spontaneous resolution of liver fibrosis can occur after successful treatment of the underlying disease. Liver fibrosis caused by hepatitis C can be reversed after treatment [54]. It may take years for significant regression to be achieved, and the time course varies depending on the underlying cause of the liver disease and its severity. Some experimental evidence suggests cirrhosis might reach a point of no return. In the CCl4-intoxication rat model of liver fibrosis, the remodeling of advanced cirrhosis is limited and the liver remains cirrhotic even after a very protracted recovery period [55]. Our study indicates the same process: after a 9-month recovery period, liver fibrosis remained present.
On the other hand, this persistence of fibrosis implies a long-term therapeutic window when using this model.

Conclusion Liver fibrosis represents a classical outcome of many chronic liver diseases. Animal models have been used for several decades to study fibrogenesis and to evaluate the anti-fibrotic potential of therapies and strategies. Previous work demonstrated that monkeys and humans have similar liver architecture, including hepatocytes, portal regions, bile ducts, portal veins and hepatic veins [66]. Our study showed that liver fibrosis could be established by administering CCl4 alone, confirming this hypothesis. At present, many technologies can assist in diagnosing liver fibrosis, but no single indicator can diagnose the disease except the pathological result. The monkey model is a well-suited system for exploring the prevention and treatment of chronic liver diseases and for developing new diagnostic techniques and novel treatments.

Conflict of Interest We have no conflicts of interest to disclose, and the manuscript has been read and approved by all named authors.
Transformation as relational mobilisation: The networked geography of Addis Ababa's sustainable transport interventions

Literatures on sustainability transition and transformation increasingly emphasise the role of spatiality and local agency. This paper argues that relational thinking has much more to offer this debate than presently acknowledged, particularly in revealing the geographical interconnections between dispersed nodes of action and innovation. We use relationality to show the interconnections at work in exchanging and negotiating sustainability interventions between cities and across scales. Using the mass transit planning process in Addis Ababa as a point of entry, we trace how the city's transformation is negotiated at the intersection of local agency, the Ethiopian national political setting and international networks. A host of actors from different scales come together as transformation is assembled by aligning extensive local experience with elements mobilised from elsewhere. This relational mobilisation perspective arguably infuses hope into the debate, because it opens new ways of identifying seemingly insignificant actions and actors elsewhere and recognising them as potential drivers of change.

Introduction The need for rapid and deep societal transformation to respond to climate change has spurred a vibrant academic debate on conditions, contexts and pathways for transformation. In recent years, new actors have emerged as global climate governance has been rescaled and local-level actions have become more prominent (Bulkeley, 2016). Cities such as Oslo, Addis Ababa and New York are currently pursuing climate goals that are considerably more ambitious than those of their national governments or global commitments. Recent scholarship traces the emergence of new climate governance arrangements that build on voluntary climate action through loosely co-ordinated public, private and civic initiatives (Biermann et al., 2017;Castán Broto and Bulkeley, 2013;Marvin et al., 2018). These efforts have exposed a rich undergrowth of local agency that was previously concealed in national and multilateral accounts of climate governance. However, this literature has less to say about the relational dynamics of how transformations are mobilised across space. The key questions are essentially spatial: Where does innovation take place? How is change mobilised to other places or scales, and by whom? How do particular interventions interact with local contexts, and how are they materialised through longer-term change? In the literature, there tends to be a divide between gradual transitions driven by innovation (Geels, 2011;Köhler et al., 2019), and more pluralistic and unruly transformations (O'Brien, 2012;Olsson et al., 2014;Pelling et al., 2014;Scoones et al., 2015). These two perspectives have their own distinct conceptual histories and are only partly in conversation with each other. However, both perspectives have in common that they often build on what Scoones et al. (2020) refer to as 'systemic approaches' -typically multi-level transitions theory (MLP) and socio-ecological systems perspectives -that consider systems as relatively bounded, territorially stable and nested in a scalar hierarchy. Accordingly, the understanding of local transition and transformation initiatives is centred around a vocabulary emphasising local innovation and experimentation.
However, when it comes to examining the actual work of transformation, systems approaches, by focusing on the system as a whole, 'have tended to diminish the role of individual agency, downplay the complexity of politics, power and asymmetries in human-environment dynamics' (Scoones et al., 2020: 67). In this paper, we draw on the case of sustainable mobility in Addis Ababa to show how a closer engagement with relational thinking can help unpack the spatial dynamics of transformation. When actors in Addis Ababa engage with sustainable development, transformative interventions -here understood as directed actions to achieve urban change -rarely emerge from niches or within bounded systems. Instead, they are mobilised by innovations, technologies and interventions that are exchanged and translated between cities, facilitated by formal networks such as C40 Cities and the professional and personal networks of policy makers, planners, consultants and activists on a trans-urban scale. These urban transformation efforts are also shaped by different contexts on the ground, uneven power relations and the fact that some people and places are more connected than others (Bouzarovski and Haarstad, 2018;Grandin et al., 2018). Therefore, we need a theory of transformation that is more attuned to the relational, networked and scalar nature of contemporary processes of social change. We aim to advance an understanding of transformation that reveals how transformations are mobilised as relational rather than as bounded endeavours. Our approach highlights the interconnections between geographically dispersed nodes of innovation and shows how local sustainability interventions are interconnected with 'multiple elsewheres' that shape and condition the opportunities for local change. This argument builds on the ongoing discussion on spatiality and geography in the sustainability transitions literature (Affolderbach and Schulz, 2016;Bridge et al., 2013;Sengers and Raven, 2015;Temenos et al., 2017). Indeed, Loorbach et al. (2020) argue that transformative innovations are locally rooted as well as globally connected. Explicating the nature of the relations and networks that foster these global connections would however benefit from closer engagement with key ideas from human geography (see Binz et al., 2020). Therefore, rather than spatialising systemic approaches to transition and transformation, we argue that we should start from the idea of relational spatiality (Massey, 2005) to foster a distinctly geographical approach. In discussing transformation as relational mobilisation, this paper draws on work that conceptualises the flow of ideas, people and matter across space and how those flows interact with local contexts -particularly the policy mobilities literature (e.g. McCann and Ward, 2011;Peck and Theodore, 2015;Robinson, 2013;Wood, 2014). But it also builds on work on mobility (Cresswell, 2010) and assemblage (Anderson et al., 2012). We advance that literature by mobilising its insights to help explain relational sustainability transformation. This means that we are less interested in how policy mobilities serve as conduits of neoliberalisation and depoliticisation (e.g. McCann, 2017;Peck and Theodore, 2015), and instead highlight the constructive, strategic and contextual agency involved in mobilising, translating and negotiating ideas and resources from elsewhere. We conceptualise transformation as a process whereby local innovation and intervention is interconnected with multiple places and scales. 
Understanding transformation as relational mobilisation means bringing insights from relational thinking into discussions on transitions and transformation to a much greater degree than is currently done. 'Relational mobilisation' hence emphasises both the relational and mobile constitution of social phenomena (Massey, 2007;Söderström et al., 2013) and the local work involved in mobilising and aligning local and non-local resources and actors (cf. Cox, 1998). Our approach thus contributes to the thinking on how transformations are negotiated in local contexts that are interconnected with multiple geographically dispersed nodes of innovation through mobile practices. The paper proceeds as follows. We start by assessing how spatiality and agency are understood in common approaches to transition and transformation. This is followed by a discussion of relational and mobile conceptualisations of change. After outlining our methodological approach, we apply these insights to examine the development of mass transit and climate planning in Addis Ababa. We interpret this work as a relational mobilisation which involves municipal agencies as well as international networks such as C40 Cities. We highlight how the three dimensions of transformation as relational mobilisation, namely interconnected settings, mobile relations and contextualised agency, come into play as urban transformation pathways are negotiated. Discussion and conclusions follow.

Spatialising transformation It is widely recognised that the climate challenge requires drastic action (IPCC, 2018;McKinsey and C40 Cities, 2017;United Nations, 2015). Social science responses to this challenge have been predominantly framed within the sustainability transitions tradition and various approaches to sustainability transformations. Transitions research has often examined technical transitions in electricity, transport and urban sectors, and the literature highlights interactions between niche innovations and larger (often national) institutional structures that are slower to change (Geels, 2011;Grandin and Sareen, 2020;Köhler et al., 2019). In contrast, transformations are understood as a 'fundamental change to the functioning of systems' that may open 'new areas of policy response' (Pelling et al., 2014), and the transformations literature emphasises both the role of local agency and the unruly and political character of sustainability transformations from a range of theoretical points of departure (O'Brien, 2012;Scoones et al., 2015;Westley et al., 2011). While grounded in distinct traditions, the transitions and transformation debates have in common that their analysis is often framed from within a systems approach that draws on systems thinking to identify interconnections between social, economic and technical systems (Scoones et al., 2020); this approach underscores that it is the systems and not the individuals that are unsustainable (Shove et al., 2015;Urry, 2004a). For instance, in the transitions literature, the 'unit of analysis is [. . .] primarily situated at the 'meso'-level of socio-technical systems' (Köhler et al., 2019: 2). This approach highlights how societies and technological systems co-evolve, and how technological and institutional path dependencies lead to inertia that makes change difficult (Geels, 2004).
From a sustainability transformations perspective, the socio-ecological systems approach draws on resilience theory to emphasise how social systems are interconnected with the ecological and planetary systems on which they are dependent. Scholarship in this tradition aims to assess the integrated effects of different policies and transformations and 'is crucial to prevent undesirable and unintended outcomes of initiatives to move toward sustainability' (Olsson et al., 2014: 5). Systems approaches to sustainability can, as Olsson et al. (2014) cogently argue, give important guidance when policies are designed and their effects are assessed. However, when it comes to examining the actual work of transformation, the systems approaches prevalent in both the sustainability transitions and the transformations literature would benefit from further engagement with the spatial dynamics of social change. There are several areas where spatial and relational perspectives are starting to nuance and advance this theoretical landscape. We will highlight here three such areas.

First, systems approaches have tended to consider systems in transformation as geographically bounded, demarcated by political boundaries or the properties inherent in the system itself. The multi-level perspective has traditionally examined transitions that are nationally bounded, but with an empirical focus on the local level of protected niches where the innovation that instigates larger transition is understood to take place (Geels, 2011). As an indication of this, the literature has a profusion of concepts around local innovation, experimentation, urban living labs and incubators -protected spaces for innovation (Marvin et al., 2018). However, this bounded spatiality has been challenged by a growing geographical literature on sustainability transitions, which emphasises spatial diversity, geographical unevenness as well as the translocal nature of transitions (e.g. Coenen et al., 2012). This is mirrored in the urban governance literature, which understands cities to be produced by the circulation of policy ideas, finance and people and emphasises relations and mobility (Castán Broto, 2017;Massey, 2007;McCann and Ward, 2011;Söderström et al., 2013). Here, the role of collaboration, learning and exchange between cities -hence the importance of connections between different systems -is underscored. To a degree, these perspectives are brought into the transitions literature. For instance, Sengers and Raven (2015) conceptualise a 'spatialised' niche model which highlights the role of translocal connections between multiple co-existing niches (see also Affolderbach and Schulz, 2016). This mirrors similar endeavours by Loorbach et al. (2020) to explicate the translocal character of transformative innovation. These efforts contribute to a more porous and spatially nuanced understanding of how transitions and transformations unfold.

A second area where spatial thinking has advanced transformations work is in understandings of systems. Often, the systems studied are understood to be relatively coherent, complete and territorially stable entities. Both the MLP and socio-ecological systems approaches allow for the evolution, disintegration and reintegration of systems over time.
However, this is understood to take place predominantly within the system; in other words, it occurs in the ways in which national institutional structures evolve (see Grandin and Sareen, 2020), or how relations between different components in a socio-ecological system are continuously made and remade as the system reorganises itself and occasionally shifts to a new regime (Holling, 2001;Olsson et al., 2014). The analytical emphasis is placed on systemic capacities in order to uncover both systemic barriers to change -for instance 'traps', or feedback loops that maintain undesirable trajectories -and tipping points that may unlock rapid transformation (e.g. Westley et al., 2011). In contrast, geographers have pointed out that this interest in aggregate and systemic outcomes creates blindspots (Cote and Nightingale, 2012). The climate governance literature, drawing significantly on spatial thinking, paints a landscape that is fragmented, inherently contradictory and only loosely co-ordinated (Biermann et al., 2017;Castán Broto and Bulkeley, 2013;Marvin et al., 2018). For instance, conceptualising cities as 'systems' may obscure the fact that neither urban governance arrangements nor infrastructure have ever been complete or coherent (Simone and Pieterse, 2017). This unevenness, as political ecologists are quick to point out, means that transitions and transformations will always be political (Meadowcroft, 2011), contested (Castán Broto, 2015) and driven by trade-offs and compromise (Fenton, 2016).

The third area where spatial thinking has advanced transformations work is in highlighting scale and scaling. In systems approaches, scale is generally understood in terms of a nested hierarchy (Gibson et al., 2000), where 'lower' scales of smaller geographical reach are contained within 'higher' scales of larger spatial extent; transformations are regarded as dependent on interaction between these scales. For instance, resilience thinking assumes that systems operate in a 'panarchy', where smaller and faster systems are contained within larger and slower systems (Holling, 2001). Similarly, in MLP, scales are largely metaphorical and geographically non-specific, but nevertheless conceptualised as levels of phenomena that are relatively hierarchical. As in resilience approaches, change in 'higher scales' -regimes and landscapes -is assumed to be more structurally constrained than in the smaller niches (Affolderbach and Schulz, 2016). In contrast, geography's relational approaches to scale posit that scales are socially produced and mutually constituted -the 'local' and the 'global' are not distinct levels but 'deeply interconnected as part of a continuum of social existence and praxis' (Herod, 2011: xv). This perspective unveils how global systemic effects are actively produced by local-level practices and decisions, and emphasises local agency and responsibility with regard to problems on other scales (Massey, 2007). Similarly, work on social movements has underlined how even place-based movements are dependent on cross-scalar relationships for various types of resources, inspiration and support (Haarstad and Fløysand, 2007) -what Cox (1998) termed 'spaces of engagement'. We argue that this relational perspective on scale allows a better understanding of how localised transformation processes are interconnected with larger processes, governance structures and networks (Bouzarovski and Haarstad, 2018).
In short, spatial thinking has both challenged and advanced mainstream work on transitions and transformation in several ways. We build on this work, but at the same time, our approach is different. Rather than spatialising MLP or resilience approaches, we take relational spatiality as the point of departure in order to foster a distinctly spatial approach to transformation. In the following section, we outline the key conceptual underpinnings of what we term relational mobilisation.

The relationality of urban transformation Relational thinking helps conceptualise the interconnected geographies through which transformation, quite literally, takes place. The 'relational turn' in human geography understands places to be constituted by more or less distended social, political and material relationships, as opposed to characterising them according to some 'essential' properties (Anderson et al., 2012;Haarstad and Wanvik, 2017). Massey (2005, 2007), one of the main advocates of relational thinking in geography, thought of space as continuously produced through relations and highlighted difference, multiplicity and agency. Massey's contributions have had a major impact on geographical theory, but relatively less impact on debates in sustainability transitions and transformations, where geographers have often relied on frameworks imported from adjacent fields (Bridge et al., 2013;Hansen and Coenen, 2015). The relational turn bears a family resemblance to wider trends in social theory. First, it shares clear affinities with assemblage thinking, which understands phenomena to be loosely connected and temporary gatherings of human and non-human component parts, brought together across different places and scales of governance (Anderson et al., 2012;Haarstad and Wanvik, 2017;McFarlane, 2011). Second, relationality is a key component in thinking around decentring and decolonialising common Eurocentric interpretations of the geographies of transition and transformation (Bridge, 2018;Nagendra et al., 2018;Simone and Pieterse, 2017), emphasising spatial interdependence and multiple nodes of innovation. Third, relational thinking also underpins the 'mobilities turn', highlighting how society is constituted by different forms of (inherently uneven) mobility of people, ideas, practices and technologies (Cresswell, 2010;Sheller and Urry, 2006;Söderström et al., 2013). The policy mobilities literature has brought these insights into the discussion of policy making and implementation, emphasising the actors, artefacts and pathways involved in the mobilisation and translation of particular policies from one setting to another (McCann and Ward, 2011;Peck and Theodore, 2015). It has also stressed the local agency involved, as local administrations assemble policies from local parts as well as inspiration and resources from other places (Bulkeley, 2016;Robinson, 2011). Common to these currents of scholarship is the insight that places, people and institutions are intricately shaped and constituted by relationships with 'multiple elsewheres'. One such mobile policy, increasingly scrutinised by policy mobilities scholars, is bus rapid transit (BRT), a bus-based mass transit system with dedicated bus lanes, pre-boarding fare collection and advanced fleet management. Initiated in Curitiba, Brazil in the 1970s, BRT has been celebrated as a policy innovation from the Global South that has received international acclaim (Wood, 2015a).
It has its own standards and manuals (ITDP, 2017) and is promoted internationally as a potent climate solution (McKinsey and C40 Cities, 2017). The critical research literature has unpacked how the 'process of exchange between cities is asymmetrical, uneven and incredibly partisan' and shaped by local political priorities (Wood, 2015a: 1071). For instance, study tours are both an opportunity for 'experiential learning' and a way to develop local political coalitions (Montero, 2016). The implementation of a BRT system involves the bundling of a number of different, sometimes conflicting, policies into a 'policy package' (Filipe and Macário, 2013). Wood (2015b) has shown that BRT adoption is highly dependent on local context and has in many places been subject to slow political deliberation rather than 'fast policy' transfer. Policy learning has furthermore concentrated on a small subset of hallmark cities with large-scale systems, while learning opportunities from other places are deliberately disregarded (Wood, 2015a; see also Schwanen, 2018); this selective learning has been reinforced by international networks (Wood, 2015a). However, while BRT systems are generally pursued as large-scale projects that benefit large private companies at the expense of informal actors, they may also challenge neoliberalisation by placing mobility in the public sphere and increasing opportunities for collective action (Paget-Seekins, 2015).

Transformation as relational mobilisation We draw on the three currents of scholarship discussed above -namely assemblage thinking, decentring social theory and the 'mobilities turn' -to conceptualise the interconnected geographies through which transformation is mobilised: what we refer to here as transformation as relational mobilisation. In other words, we are bringing insights from the relationality and mobility debates to bear on the transitions and transformations debates. This way we can account for both the way in which resources, policies and technologies are assembled between cities and the contextual processes of local negotiation and material change. In doing so, we are further conceptualising the role of strategic local agency in mobilising ideas from elsewhere. We will highlight three dimensions of transformation as relational mobilisation, namely (a) interconnected settings, (b) mobile relations and (c) contextualised agency.

Interconnected settings: Learning and exchange do not happen in a sequential chain of innovation and implementation from one city to another, but in multiple interconnected nodes of concomitant innovation. The interconnections between cities create spheres of innovation that are implicated in both trans-local and local (urban) spaces at the same time. New connections between places are generated through exercises like benchmarking and the identification and the promotion of best practices (Larner and Le Heron, 2002), thereby producing 'global spaces of emulation and competition' (McCann, 2008: 6). A bicycle planner in London and a bicycle planner in Malmö are engaged in the same interconnected sphere through networks, discourses and mobile policies concerned with project generation, funding opportunities and best practices on bicycle planning. Housing planners in the same cities may be equally well connected, but through very different networks and discourses.
Consequently, when urban plans are developed, policy ideas circulate, leading to 'remarkably similar analyses, conclusions, and policy ambitions' across cities (Robinson, 2011: 15). This creates a complex spatial constitution where urban transformation is partially connected to many different (and potentially competing) trans-urban networks at the same time (cf. Massey, 2005). Hence, the continuous engagement with kindred initiatives elsewhere is an integral part of the local work of transformation. In turn, we need to examine the complex interconnected settings through which urban transformations are mobilised.

Mobile relations: Connections between transformation initiatives in different settings are created and maintained by different forms of mobility and travel. This creates what Urry (2004b: 28) describes as an '"imagined presence" through travelling objects, moving people, and moving images that carry connections across, and into, multiple other social spaces'. These mobilities, argues McCann (2008: 6), 'facilitate the production of a particular form of relational knowledge in and through which policy actors understand themselves and their cities' policies to be tied up in wider circuits of knowledge'. A planner from Stockholm may meet a city official from Portland face-to-face in a study trip, and they may subsequently share ideas in webinars or chat groups. Such connections are often facilitated by intermediaries (international city networks, consultants, donors, and public sector institutions such as the European Union) that are often involved in several similar initiatives at once and maintain connections between different nodes of innovation. As policy mobility research has made clear, such agency is not neutral (Bulkeley, 2006). By framing best practices, transfer agents themselves shape the policies and technologies that are mobilised (McCann, 2008;Peck and Theodore, 2015;Prince, 2016). A critical insight here is that the process or act of becoming mobile is political. Mobilities are grounded in particular material contexts, full of friction and inherently uneven: some people and things are highly mobile while others stay inert (Cresswell, 2010;Nikolaeva et al., 2019). Both physical and virtual mobility are constrained differently by borders, immigration regulations, the price of airplane tickets and access to material infrastructure such as a reliable internet connection. This affects the type of ideas (and whose interpretation of them) that are able to travel to different settings to take part in urban transformation initiatives. Viewing relations through the lens of mobility, then, underscores the variegated meanings and practices involved in the uneven social production of relational space (Cresswell, 2010;Robinson, 2011). In turn, we need to assess how cross-spatial relationality and mobility are created, structured and distributed.

Contextualised agency: While relational and mobile, urban transformations are also stubbornly local affairs. They depend on local agency, political deliberation and negotiating particular material configurations (Peck and Theodore, 2015). Local actors may draw on resources and ideas from elsewhere (cf. Cox, 1998) in their work of 'assembling' transformations (Bulkeley, 2016). However, the local contexts are not simply surfaces on which mobile policy processes play out -they should also be recognised as arenas for proactive and strategic agency. Mobile ideas interplay with deeper institutional and personal policy histories (Borén and Young, 2012).
Actors at the local level are often active in pulling these ideas together, combining them and reconfiguring them in creative and strategic ways (Haarstad and Wathne, 2019;Robinson, 2013;Wood, 2014) and may draw on experiences from other cities as argumentative resources to support particular policy pathways (Kennedy, 2016). At the same time, implementing these ideas in the built urban environment is not without dissonance -the local material context and political resistance may create considerable barriers to the enactment of particular sustainability policies (Castán Broto, 2015). In turn, we need to investigate how local contexts reconfigure urban transformation pathways. In our framework, these three dimensions of transformation -interconnected settings, mobile relations and contextualised agency -constitute relational mobilisation. After a brief outline of our methodological approach, we will use the lens of relational mobilisation to discuss the ongoing efforts in Addis Ababa to develop sustainable transport and create a strategic climate action plan (CAP).

Methodology: Tracing the genesis of Addis Ababa's transformation The empirical basis for this paper is fieldwork conducted under the auspices of a larger research project that examines the role of collaboration between cities in climate and energy transformation. Our methodological approach seeks to examine transformations through the 'circulations and connections which shape cities' and 'engage with urban outcomes through tracing their genesis by means of specific connections, influences, actions, compositions, alliances [and] experiences' (Robinson, 2016: 15). This is similar to Peck and Theodore's (2012: 24) notion of a 'distended case study', although we empirically centre our investigation in one particular city -Addis Ababa. The case study draws on in-depth interviews, analysis of policy documents and ethnographic work at multiple locations. A total of 29 semi-structured interviews were conducted in person or through Skype with practitioners involved in mobility and climate policies in Addis Ababa. Within Addis Ababa, this included officials at different municipal authorities as well as representatives from funding agencies, NGOs and consultancies. Among these, the C40 Climate Leadership Group was identified as particularly relevant due to their close engagement with both climate and transport projects in Addis Ababa. Interviews with representatives from the C40 Cities network headquarters were therefore conducted to learn about how the network sees its role in supporting collaboration between cities. Informants were identified through strategic sampling, which was later expanded through snowball sampling. The interviews covered themes such as the development and implementation of climate and mobility policies, how these policies interplay with the local institutional and material context, and the role of collaboration with other cities and organisations. Interviews were supplemented with participation at seminars, conferences and webinars related to urban transportation and climate policies. Finally, prolonged engagement with the material systems on the ground in Addis Ababa provided a nuanced understanding of the material, social, political and cultural contexts of transformation. Interview transcripts and field notes were analysed thematically, identifying themes concerning policy development, the role of international and local collaboration, and the role of the local context.
The networked geography of Addis Ababa's transformation Addis Ababa, the capital of Ethiopia and the seat of the African Union, is undergoing rapid change brought about by population growth, new housing projects and urban renewal programmes (Angelil and Hebel, 2016). The population, estimated at 3.6 million in 2013, is expected to double to 9.8 million in 2037 (World Bank and GFDRR, 2015). To meet the changing transportation demand, Addis Ababa pursues a transit-oriented development strategy and a number of high-profile public transport initiatives (AACPPO, 2017). These projects combine social, environmental and climate goals linked to the development of a CAP. These initiatives have distinctly local dynamics: they are shaped by particular regulatory structures and material conditions specific to Addis Ababa. Their primary aim is not to replace cars (private ownership of cars is still low) but to ensure efficient connectivity in the city, reduce commuting times and accommodate rising transport demand (AACPPO, 2016). However, wider relationships are also in play. As for many cities (see Nikolaeva et al., 2019), different forms of scarcity underpin Addis Ababa's mobility strategies, including that of mobility services, road space, emissions space and hard currency. A keystone project is the development of a BRT system, an initiative that has brought together a number of local, national and international actors over the years. Addis Ababa's urban initiatives are also shaped by national priorities and are embedded in the international agendas related to sustainable development, resilience and climate change, supported by active participation in the climate-oriented C40 Cities network as well as the resilience-focused 100 Resilient Cities network. Both networks have advisers in Addis Ababa who consult on different parts of the planning process. Through such networks, study tours and policy advice from friendship cities, experiences from elsewhere are continuously channelled into the projects. At the same time, the projects draw on the municipality's historical expertise in constructing and operating bus-based public transport. Hence, Addis Ababa's sustainable mobility interventions bring together actors at multiple locations and scales. The BRT project is placed under the Addis Ababa Road and Transport Bureau, and involves the Transport Authority (which manages public transport), the City Roads Authority (which constructs and maintains roads), the Transport Management Authority (which allocates road space), the municipal express bus operator Sheger (which will eventually operate the BRT system) as well as the Addis Ababa City Planning Project Office. The work is led by a BRT project management unit placed at the Transport Programs Management Office (TPMO, see below), which also coordinates with consultants and funders. International organisations such as the World Resources Institute (WRI), the Institute for Transportation and Development Policy (ITDP), the Lyon Town Planning Agency (LTPA) and the C40 Cities network have provided direct input to various stages of the project. The project moreover depends on funding from the French development agency AFD. Examining Addis Ababa's ongoing processes of change through the lens of relational mobilisation thus involves empirically accounting for both local dynamics and the way in which change is mobilised in networks.
In the following sections, we will discuss Addis Ababa's urban transformation in light of the relational, networked and scalar nature of contemporary processes of social change.

Interconnected settings We do not know exactly when the idea of constructing a BRT system in Addis Ababa first arose, but its origins date at least from the early 2000s. City officials may have brought the idea with them from one of their study tours, or it may have travelled with one of the parachuted experts and consultants who visit the city from time to time. As one official noted, 'there are a lot of experts coming and going as advisers to the city, so maybe . . . they brought the idea of BRT' (July 2018, personal communication). Creating high-capacity mass transit corridors along an east-west axis was in any case one of the priorities in the implementation of Addis Ababa's revised 2002-2010 master plan (Egis Rail and LTPA, 2010). From the outset, the project has been built on international exchange. The first BRT feasibility study was conducted in 2010 by consultants from the French LTPA, Addis Ababa officials and the engineering firm Egis Rail (Egis Rail and LTPA, 2010). They identified and prioritised seven BRT corridors in the city. This was the culmination of a longer partnership in urban development between Lyon and Addis Ababa, which became friendship cities in 1999. This was followed by intensive exchange, supported by the French development agency AFD, with a particular focus on the development of high-capacity bus corridors. The metropolitan area of Lyon, home to 1.7 million inhabitants, had involved the LTPA since 2005 to assess the potential of a transport-oriented urban development strategy channelling urban growth to public transport hubs (Berger, 2010). In 2008, Addis Ababa city officials visited Lyon to discuss the implementation of mass transit projects, focusing on BRT and light rail (Egis Rail and LTPA, 2010). Addis Ababa proceeded to organise and secure funding for the project, which brought new non-local actors onboard. The BRT project was placed at the TPMO, a special office formed to initiate, support and co-ordinate transport-related projects across authorities in Addis Ababa. They continued to work on the 'B2' corridor, a 16-km stretch connecting Wingate in the north to the new housing areas in the south. In subsequent years, a number of designs and revisions for this BRT corridor were commissioned. In 2015, AFD committed to an 85 million Euro soft loan to fund the project, which in turn led to further revisions. The new funders both called for revisions in the BRT corridor design and funded the engagement of external experts to review the technical designs provided by French consultancy firm Safege SAS and Ethiopian consultants Hammda Engineering (Endeshaw, 2016). When engaging in the project, AFD sees itself not only as a funder, but also as a mediator that can draw on experiences from similar projects in other cities: We have similar experiences around the world. The fact that we have this transport team . . . based in Paris AFD headquarters, that's a very good asset for us. Because it is really . . .

International exchange was also facilitated by participation in international networks. By 2013, Addis Ababa's mass transit agenda had become increasingly connected to the international climate agenda. With enthusiastic support from then-Mayor (and former Minister of Transport) Deriba Kuma, Addis Ababa was among the first African cities to join the C40 network.
Central to C40's official narrative is the role of continuous exchange and mutual learning between cities in the pursuit of 'large-scale, replicable projects' to curb climate emissions (Chikoko, 2013). This may enable a more rapid transformation; for instance, several cities committing to the same goals may create market signals that can accelerate innovation and support later transformation efforts (C40 officials, June 2018, personal communication). C40's Deputy Executive Director Kevin Austin highlighted that this may also decrease risk, reduce costs and spur action: And also, it can help reduce the transaction cost. It is very, very costly to be the first but if you've got support and help or you've got groups of people working together you can sort of all be the first, or be the second. And it allows action to happen more quickly because you've got more resources, more thought, and you've also de-risked it. (Kevin Austin, May 2018, seminar at European Commission) Through the C40 network, Addis Ababa officials connected with climate initiatives in other cities around the world. They were particularly involved in activities relating to solid waste management and transport (Ramboll, 2016), and Addis Ababa hosted a workshop for C40's transport-oriented development network in 2015. At the same time, Addis Ababa's sustainable urban development efforts gained increasing international recognition. Addis Ababa was shortlisted for the 2016 Guangzhou International Award for Urban Innovation for work on sustainable transport and won the C40 Award of the same year for its newly opened light-rail transit system. Consequently, the design of the BRT corridor could draw on experiences from other cities. As part of the C40 Award, Addis Ababa received a resident C40 adviser who worked alongside municipal authorities on the BRT project for two years. The resident adviser supported stakeholder engagement workshops, assisted in modelling the climate change mitigation potential of BRT corridors and contributed to a branding and communications strategy. C40, together with WRI, also supported a study tour to India, where city officials visited BRT systems in four Indian cities. The importance of learning from other cities' experiences with different aspects of the BRT system -from corridor design to integrated fare and ticketing systems -is emphasised by city officials as they could 'take ideas from working systems and [try] to incorporate them in our design' (Addis Ababa officials, December 2018 and September 2019, personal communication). An official involved in Addis Ababa's BRT project also reflected on the value of learning both from failures and success stories through study trips: [We] have seen failed BRTs and successful ones. So, you also understand the reason why it failed. We don't want to make the same mistakes . . . Because basically some of the issues that are not addressed there [in Dar es Salaam] are costing them. So now we are trying to address it here from the beginning, before we start the operation. (July 2018, personal communication) In later years, international collaboration in Addis Ababa has also focused on climate planning at a strategic level. Since 2018, a new C40 adviser has been stationed at the Addis Ababa Environment Protection Authority to facilitate the development of a strategic CAP. The CAP follows a standardised framework established by C40 Cities (2018) and is aligned with similar processes in other African cities. 
The C40 adviser is frequently in contact with his counterparts in other African cities, and knowledge and experience from different cities is shared at workshops and through digital tools (August 2018, personal communication). Addis Ababa's urban transformation is hence implicated in a broader geography of urban change through participation in networks, friendship city agreements and exchanges facilitated by funding agencies. Addis Ababa authorities emphasise the importance of this continuous learning and exchange across cities, what we referred to as interconnected settings, for achieving local goals. At the same time, this gives external actors the power to influence projects in significant ways. We now proceed to examine how the relations between these different settings are produced and maintained.

Mobile relations Urban planning in Addis Ababa has built on international collaboration for a long time, and these relations are maintained by people who travel, communicate and exchange experiences. The importance of bringing people together to build personal relationships is emphasised repeatedly in the official C40 narrative. C40 Deputy Executive Director Kevin Austin noted that when city officials meet and create friendships to the extent that they 'send birthday cards', they are more likely to help each other: And the critical thing here is trusting relationships, that the little groups that we have of maybe 20 or 30 cities, they get to know each other. They go on workshops once a year where they meet in person. . . . [They get] to know people to the extent that when they get back home they send birthday cards . . . And as they become friends, they are much more willing to help, because they really want their friend to deliver what is needed for their city. (Kevin Austin, May 2018, seminar at European Commission) Creating spaces where trusting relationships can be formed is regarded as essential for enabling mutual learning. A C40 official described the workshops in the C40 network as 'closed door safe places' where city representatives can step back, reflect and share not only success stories but also difficulties and failures. In these settings, trust is regarded as important for sharing proposals that are not ready to be shared in public (C40 network manager, June 2018, personal communication). The importance of meeting face-to-face for collaboration and exchange is recognised by Addis Ababa officials. An assessment of C40's impact in Addis Ababa by the consultancy firm Ramboll (2016: 9) concluded that 'workshops clearly offer the most useful interaction method, permitting participants to understand and discuss solutions and foster good quality knowledge sharing', noting that this type of face-to-face interaction is more difficult to achieve in other forms of (virtual) communication. As an official involved in Addis Ababa's BRT project observed, travelling to visit particular cities in person to experience transformation initiatives on the ground can be significant in mobilising political support for an initiative: It doesn't simply come, you know, the support. Because they believe in it, they believe in the system. They have seen some systems working in other countries, and [were impressed by] how they did it. (Addis Ababa official, August 2018, personal communication) However, maintaining relations between physical meetings remains an issue. A C40 official noted that '[t]he workshops are a great place to take a step back.
But it is easy to get lost when you get back to your day-to-day job' (June 2018, personal communication; paraphrased from detailed notes). Similarly, the Ramboll (2016: 11) assessment noted that 'while C40 provided useful learning experiences, the capacity to implement the solutions in Addis and the necessary knowledge was sometimes lacking'. The C40 network uses webinars and one-to-one calls between cities to maintain relations (C40 official, June 2018, personal communication). While these encounters do not have the quality of face-to-face meetings, they can still be significant. For instance, after a group of C40 advisers met at an intensive training event, they maintained contact through virtual means. According to the Addis Ababa adviser: Now we can talk personally. We use Viber and WhatsApp and interact through those apps. Whenever I have questions, I can send for someone to brief me on those issues. It is a good opportunity to get knowledge from different cities. It helps me to think in a bigger way and makes my job here easier. (August 2018, personal communication; paraphrased from detailed interview notes) However, the ability to connect with other places is affected by material conditions on the ground. For Addis Ababa officials, participation in webinars was often constrained by poor internet connection speeds, time differences or workload. They therefore often found themselves reading summaries of discussions rather than directly participating in webinars; the C40 city adviser became an important node through whom information and experience was relayed. An official working on public transport in Addis Ababa noted that: [The C40 adviser] pointed me to some webinars that I'm participating in. I don't participate directly due to connection issues and the time difference: if it is scheduled according to Latin American time it is not possible to participate from here. But I get the summaries of the discussions. (August 2018, personal communication; paraphrased from detailed interview notes) Consequently, in important ways, the geography of Addis Ababa's transformation is produced by mobility. Officials may travel abroad to forge trusting relationships, or quickly exchange information in webinars or chat groups. The quality of these relationships is influenced by the different forms of mobility involved in sustaining them. We have referred to these as mobile relations. However, these relations are also grounded in particular local material settings that shape people's access to mobility, producing spatial unevenness. Next, we investigate how actors navigate these local contexts when mobilising transformative policies.

Contextualised agency While officials in the Addis Ababa transport administration mobilise insights from other cities, they also build on local experience. The BRT project depends on funding and expertise from elsewhere, and officials involved in the project also report a lack of previous experience in building BRT systems as one of their main challenges (December 2018 and September 2019, personal communication). At the same time, they highlight that bus-based public transit is not at all new to Addis Ababa; in fact, the municipality has operated the Anbessa public buses since 1945. In 2015, a new municipal company, Sheger city buses, was founded to provide an express bus service and to eventually operate the BRT.
An official involved in the project notes that the ability to build on local experience was an important factor in the decision to go for a BRT system: BRT is basically bus-based operation, which resembles the operation of LRT [light rail transit] . . . We are quite familiar with bus-based transportation. That gives us an advantage in terms of operation and maintenance. We have more than 70 years of experience, even as an . . . operator. So that contributed a lot to going for BRT instead of other modes. (August 2018, personal communication) The actual implementation of a BRT in Addis Ababa is consequently a stubbornly contextual affair. Officials observe that Addis Ababa's history of spontaneous (and not planned) growth has led to a poor and often narrow road network which is easily congested (Traffic Management Agency, September 2019, personal communication). The design and construction of a BRT lane hence inevitably runs into right-of-way issues, and certain road segments will need to be widened, which leads to resettlement issues, delays and resistance. As one official noted: 'In Cairo for instance, they have a lot of road space . . ., in our case it is a challenge just to find road space' (Transport Bureau, September 2019, personal communication, paraphrased from detailed notes). As the BRT system moves towards operation, officials also anticipate that poor availability of hard currency may lead to delays in procuring spare parts and high downtime of the rolling stock (Sheger buses, September 2019, personal communication). The local institutional and organisational context is also emphasised. An official at the French development agency AFD noted that while the management of BRTs is similar across cities, the actual implementation of the project is a unique process (December 2018, personal communication). The project management unit at the TPMO coordinates with consultants, funding agencies and a range of municipal authorities involved in disparate parts of the project. Experts from different parts of the municipality also provide regular feedback on the BRT designs provided by consultants. Here, international best practice manuals are deployed to make the B2 BRT corridor 'a clear example for other corridors to come' by 'incorporat[ing] contemporary thinking in terms of complete street design [and] designing streets that are safe for pedestrians' (Addis Ababa official, December 2018, personal communication). Experience from elsewhere is again mobilised: this time through support from ITDP, the international NGO specialised in BRT systems that has consulted with numerous cities in Africa. Its input was regarded as 'really great help in providing these high-level concepts' to the project team (Addis Ababa official, December 2018, personal communication). At the same time, however, institutional fragmentation, the limited knowledge of BRT systems within the Addis Ababa transport administration and the consequent reliance on external partners to review design proposals are underscored as obstacles which lead to project delays (Addis Ababa official, September 2019, personal communication). Officials also have to keep up with rapid urban development. The designs and plans for the Addis Ababa BRT project have been revised several times to accommodate rising public transport demand, which in turn has led to delays in the project.
An Addis Ababa Transport Bureau official noted that it is important that the elements of the BRT system, such as stations and pedestrian crossings, 'are precisely the right size', which is difficult to achieve when the city is changing rapidly (September 2019, personal communication, paraphrased from detailed notes). Another official noted that it is 'challenging for the public transport sector to provide for this continually growing public transport demand' (December 2018, personal communication). The BRT is therefore understood to be a medium-term solution that can (at least partially) meet current transport demand until more high-density transport options are economically available. The 'local' scale in the development of mass transit in Addis Ababa is closely related to national priorities. Ethiopia's national Growth and Transformation Plan has the goal of making Ethiopia a middle-income, climate-neutral and resilient economy by 2025. A key component of this plan is a modernisation strategy based on investments in large-scale infrastructure such as hydropower dams and railroads. By the time of the 2010 BRT feasibility study (Egis Rail and LTPA, 2010), the newly founded Ethiopian Railways Corporation had proposed the creation of an LRT system. Funded, constructed and initially operated by various Chinese enterprises, the system, consisting of two lines (34 km in total), was hailed as sub-Saharan Africa's first LRT system when it opened in 2015. While Addis Ababa's public transport system is managed by the Addis Ababa Transport Authority, the LRT is administered directly by the national Ethiopian Railways Corporation. Hence, actors and priorities on several scales directly shape Addis Ababa's urban development; co-ordinating these actors can sometimes be a challenge. However, insights from the LRT project are continuously mobilised into the planning and design of the BRT corridors. A common critique of the LRT project is that it was poorly integrated into the city (Addis Ababa official, August 2018, personal communication). In the design of the BRT corridors, care is taken to avoid the same mistakes: for instance, by ensuring a safe crossing environment for pedestrians. Thus, the implementation of new mass transit systems is situated in particular material and organisational settings that are continuously changing. The agencies that shape this system are always contextual, but not only 'local'. A multitude of actors working at different scales come together as Addis Ababa's BRT system is assembled by aligning extensive local experience with elements mobilised from elsewhere.

Discussion and conclusions This paper contributes to the ongoing efforts to conceptualise and analyse the work involved in engendering deliberate sustainability transformations (cf. O'Brien, 2012). Our distinct contribution in this debate has been to develop a perspective on transformation as relational mobilisation. It is motivated by our efforts to account for the interconnected and cross-scalar character of processes and agencies that we encounter through our own research on urban change. It draws on the literature on mobility and relational space to examine how transformations are negotiated both within and across multiple, geographically dispersed settings that are interconnected through mobile practices.
Bringing this into the understanding of transformation highlights the interconnectedness of events in various places and across different scales, and the vibrant contextuality of sites of innovation that actively shape transformation outcomes. As such, it may serve to nuance predominant perspectives on transition and transformation -typically building on multi-level transitions theory and resilience approaches -which, we have argued, tend to understand systems or niches as relatively bounded or isolated. Transformation as relational mobilisation alludes to the work involved in mobilising: in bringing together and aligning disparate resources and actors. It also takes seriously the idea that mobility is not only about movement, but concerns practices and meanings as well (Cresswell, 2010). Accordingly, we need to account for the qualitative dimensions of being mobile across space, the ways in which people and ideas are changed by the very act of travel, so that (as T. S. Eliot observes), 'You are not the same people who left that station/ Or who will arrive at any terminus'. These ideas have been highlighted to some extent by the recent interest in policy mobility, which underscores how policies mutate as they are picked up and mobilised from one place to another. But they are not properly brought to bear on the sustainability transitions and transformations literature. Framing transformation as relational mobilisation draws on these insights, but puts them to use to examine the political potential and spatial dynamics of deliberate sustainability transformations: how mobile ideas are translated, negotiated and mobilised to achieve local objectives. This means that while the policy mobilities literature has had a predominant interest in unpacking and critiquing the role that mobile policies play in various forms of neoliberalisation, we are instead emphasising the constructive and strategic agency that goes into mobilising sustainability transformations. Our empirical analysis of the ongoing urban sustainability transformation in Addis Ababa has illustrated the need to think relationally about transformation: that we are hardly dealing with bounded, coherent or hierarchical entities. Using its BRT planning process as a point of entry, we have shown how it has been negotiated at the intersection of international networks with on-site embeddedness, local agency and the Ethiopian political setting. We underlined three relational spatial processes to describe what is occurring. First, we highlighted the dimension of interconnected settings. Concurrent iteration and learning between multiple dispersed settings are central to Addis Ababa's ongoing climate and mass transit initiatives. This suggests that rather than a simple adoption of ideas from elsewhere, change is mobilised between multiple interconnected nodes of concomitant innovation. Our findings reinforce the observation made elsewhere (Wood, 2015a) that while BRT is celebrated as an example of South-South policy transfer, the actual mobilisation of a BRT policy bundle is a more spatially complex affair that involves a plethora of actors, many of whom are based in the Global North. Second, we pointed to the importance of accounting for mobile relations -the variety of connections between places and actors that produce an uneven relational space. By emphasising how these relations are constituted by mobility, we uncover material conditions that may enable or constrain participation in relational endeavours.
The relational geography of Addis Ababa's urban transformation is not only about different 'transfer agents' and ideas arriving in new settings, but also about the human relationships that hold networks together. Finally, we showed how contextualised agency plays a distinct role in urban transformation. Addis Ababa's climate and mass transit projects are shaped by and assembled from distinctly local histories of public transport as well as national development agendas and ideas from elsewhere. Ultimately, then, transformations are stubbornly local affairs. Therefore, as policy mobilities scholars have been quick to observe, localities are not simply surfaces on which mobile policy processes play out -they are also arenas for proactive and strategic agency. Thus, thinking of transformation as relational mobilisation is essentially about making use of a rich intellectual tradition to make sense of and achieve sustainable transformation. With all the talk of local action, living labs, incubators and niches -both in policy-making and in academic arenas -relational thinking can show how these are interconnected and mutually constituted. For sustainability transformations, the relationality of transformation processes also adds an element of hope. It opens new ways of seeing seemingly insignificant actions and actors elsewhere and recognising them as potential drivers of change. Moreover, it shows that transformative change does not necessarily depend on overcoming bounded systemic structures -they can also work through the mobilisation of partial and incomplete changes across 'multiple elsewheres'.
The Transcriptional Regulator MucR, but Not Its Controlled Acid-Activated Chaperone HdeA, Is Essential for Virulence and Modulates Surface Architecture and Properties in Brucella ovis PA

Brucella ovis is a non-zoonotic bacterium causing contagious epididymitis and other genital lesions in rams and responsible for significant economic losses in sheep-breeding areas. It is a naturally rough (without O-chains in the lipopolysaccharide) Brucella species whose virulence mechanisms have been less explored than those of zoonotic smooth brucellae (bearing O-chains that mask other outer membrane molecules). Considering the rough nature of Brucella ovis, the influence of surface components other than O-chains on its biological properties may be greater than in smooth Brucella species. Here we describe the construction and characterization of the mucR deletion mutant of virulent B. ovis PA, which is defective in a transcriptional regulator affecting surface properties and virulence in smooth brucellae. This mutant showed increased amounts of three proteins identified as HdeA (acid-activated chaperone), Omp25d (outer membrane protein undetectable in the parental strain), and BOV_A0299 (hypothetical protein of unknown function). This observation correlated with the enhanced transcription of the corresponding genes and constitutes the first report on this type of proteome alteration in Brucella ΔmucR mutants. The upstream regions of the three genes contained AT-rich domains with T-A steps described as binding sites for MucR in the Brucella abortus 2308 babR promoter (gene also upregulated in B. ovis ΔmucR), which suggests that hdeA, omp25d, and BOV_A0299 expression could be repressed by MucR through a direct binding to their promoter regions. Relative quantification of transcripts of several other genes selected according to the transcriptome of smooth brucellae ΔmucR mutants revealed not only similarities but also relevant differences among strains, such as those detected in flagellar and virB genes. Periplasmic HdeA has been related to the resistance of B. abortus to acidic pH, conditions encountered by Brucella inside phagocytes, but the deletion of hdeA in B. ovis PA and the ΔmucR mutant did not modify any of the evaluated properties of these strains. The B. ovis PA ΔmucR and ΔmucRΔhdeA mutants had defective in vitro growth and altered surface properties and architecture, exemplified by detectable amounts of Omp25d. Moreover, they showed virulence attenuation but established persistent splenic infection in mice, which encourages their evaluation as specific attenuated vaccines against B. ovis.
INTRODUCTION

The genus Brucella (https://lpsn.dsmz.de/genus/brucella) includes facultative intracellular Gram-negative bacterial pathogens that differ, among other phenotypic and genotypic characteristics, in pathogenicity and host preference (1,2). Depending on the Brucella species, a variety of terrestrial and marine mammal species can be infected, but amphibians, reptiles, and fish have also been reported as hosts for the atypical Brucella strains described in recent years (1,2). Ovine brucellosis is mainly caused by Brucella melitensis and Brucella ovis, species that share high levels of homology at the DNA level (3) but exhibit relevant differences regarding pathogenicity. Thus, B. melitensis induces abortion in the natural host, is the most relevant zoonotic Brucella species (4), and is defined as smooth because it bears O-polysaccharide chains in the lipopolysaccharide (LPS), which are required for full virulence (5,6). On the contrary, B. ovis is rough (lacks O-chains in the LPS), has never been reported as a human pathogen, and rarely induces abortion in sheep. However, B. ovis causes contagious epididymitis and other genital lesions in rams that lead to important economic losses worldwide (7,8), and no specific vaccine against it is available. B. melitensis Rev1 is an attenuated smooth vaccine, used in vaccination campaigns against ovine brucellosis, that protects not only against homologous B. melitensis infections but also against heterologous infection by B. ovis (8,9). However, since it induces antibodies against O-chains, which are targets for the serological diagnosis of B. melitensis infection, this vaccine is banned in regions where zoonotic B. melitensis is considered eradicated. Accordingly, ovine brucellosis caused by rough B. ovis is increasing in these regions (10), which highlights the need to develop a specific vaccine against B. ovis infection that does not interfere with the diagnosis of brucellosis caused by smooth Brucella species.
Both the development of specific attenuated vaccines and the understanding of the mechanisms underlying the differences of pathogenicity and host preference observed among the species of the genus Brucella require a better knowledge of the bacterial components needed for infection in each species. Most works analyzing Brucella virulence have been performed with smooth zoonotic species (mainly with B. melitensis and Brucella abortus), although studies with B. ovis are increasing in number in the recent years. These studies have evidenced not only similarities but also relevant differences between B. ovis and smooth brucellae. Thus, some virulence factors described in smooth brucellae, such as the type IV secretion system encoded by the virB operon, the quorum-sensing transcriptional regulator VjbR, ß-1,2 glucans, and core LPS glycosyltransferases or pyruvate phosphate dikinase, are also required for virulence in B. ovis (11)(12)(13). However, some other proteins that are essential in smooth B. melitensis or B. abortus for full virulence, such as Omp10, Omp19, BepC, SP41, BacA, or the flagellar apparatus, are not required in B. ovis (11,14,15), while an ATP-binding cassette transporter, absent in the main zoonotic smooth brucellae, is a virulence factor in B. ovis (16). The surface of bacterial pathogens is a key structure connecting the cell with the surrounding environment, including host cells and defense mechanisms. The LPS O-chains are essential for virulence in smooth brucellae (5,6), and it is known that they mask other surface components (5,17). Considering the rough nature of B. ovis, molecules exposed on the bacterial surface and/or regulatory mechanisms affecting its structure could be relevant actors in host-pathogen interactions, targets for the development of attenuated vaccines and related to the differences in pathogenicity and host preference that exist in the genus Brucella. In fact, in addition to the presence or absence of O-chains in the LPS, the Brucella species differ in outer membrane (OM) composition (5,6,18) and OM-related properties (5,6,19). Among the virulence factors identified in smooth brucellae, MucR is a transcriptional regulator affecting the surface properties and virulence in B. melitensis and B. abortus (20)(21)(22), but that has not been studied in rough B. ovis. In this work, we have constructed a mucR mutant in virulent B. ovis PA and evaluated its growth characteristics, properties related to the bacterial surface, expression of relevant genes, and virulence in cellular and animal models. Additionally, considering the high levels of HdeA-a protein that has been related to acid stress resistance in B. abortus (23)-detected in the mucR mutant (see below), we have also constructed and characterized the hdeA mutant in the genetic background of parental B. ovis PA and in the isogenic mucR mutant. The results are discussed in comparison with those obtained with mucR mutants of other brucellae. Cloning Vectors, Bacterial Strains, and Culture Conditions Plasmid pGEM-T Easy (Promega, Madison, WI, USA) was used to clone PCR products, and pCVD-KanD (11)-which does not replicate in Brucella spp. and confers sucrose sensitivity and kanamycin resistance-was used to construct, as described below, the recombinant plasmids containing the inactivated genes used for mutagenesis. Plasmid pBBR1MCS-2, which replicates in Brucella spp. and confers kanamycin resistance (24), was used for genetic complementation of the mucR mutant with wild-type mucR. B. 
ovis PA was used as a parental strain for the construction of the mutant strains described in Table 1. B. ovis PAderived strains were cultured on tryptic soy agar (TSA) or broth (TSB) (Pronadisa-Laboratorios Conda, Torrejón de Ardoz, Spain) supplemented with 0.3% yeast extract (YE, (Pronadisa-Laboratorios Conda, Torrejón de Ardoz, Spain) and 5% horse serum (HS) (Gibco-Life Technologies, Grand Island, NY, USA) (TSA-YE-HS or TSB-YE-HS). When required for the construction and maintenance of the genetically engineered strains, kanamycin (50 µg/ml) or sucrose (5%) was added to the medium. B. ovis mutants defective in proteins of the Omp25/Omp31 family ( Table 1) were previously obtained (25,26) and used as controls in immunoblot assays. B. ovis strains were cultured at 37 • C under a 5% CO 2 atmosphere. Escherichia coli JM109 was used for the replication of pGEM-T Easy, pBBR1MCS-2, and their derived recombinant plasmids. E. coli CC118 was used for the replication of pCVD-KanD and its derived recombinant plasmids. E. coli strains were cultured at 37 • C in Luria-Bertani medium that was supplemented with 50 µg/ml ampicillin or kanamycin when required. The microbiological procedures were reviewed and approved by the Biosecurity Committee of the University of Salamanca, Spain. The primers (IDT, Leuven, Belgium) used in this work for the construction and characterization of mutant strains are listed in Table 2 and were designed according to the published genome sequence of B. ovis 63/290 (ATCC 25840), with GenBank accession numbers NC_009505 and NC_009504 for chromosomes I and II, respectively. GenBank accession numbers were also used to retrieve the respective DNA sequences from B. melitensis 16M (AE008917 and AE008918), B. abortus 2308 (AM040264 and AM040265), and Brucella canis RM6/66 (NZ_CP007758 and NZ_CP007759). Orthologs of the analyzed genes were identified at the Kyoto Encyclopedia of Genes and Genomes (https://www.kegg.jp), and multiple nucleotide sequence alignments were performed with Clustal Omega at the European Bioinformatics Institute (https:// www.ebi.ac.uk/Tools/msa/clustalo/). For non-polar deletion of mucR in B. ovis PA, the wildtype chromosomal gene was replaced by the inactivated gene, following a procedure previously described (11,15). Briefly, two PCR reactions were performed to amplify the 5 ′ and 3 ′ ends of mucR, together with about 700 bp located upstream or downstream the gene, respectively. Amplification of the 5 ′ end was performed with B. ovis PA DNA and primers mucRMUT-F and mucROVL-R, while the 3 ′ end was amplified with primers mucROVL-F and mucRMUT-R ( Table 2). The two PCR products were fused by an overlapping PCR with primers mucRMUT-F and mucRMUT-R through the overlapping section of primers mucROVL-F and mucROVL-R ( Table 2). The amplicon was cloned in plasmid pGEM-T Easy and transformed in E. coli JM109. The correct nucleotide sequence of the insert cloned in the resulting pNVmucR01 recombinant plasmid-that contains the mucR gene of B. ovis PA almost completely deleted and adjacent DNA to both sides of the gene-was verified by automated Sanger sequencing with primers Universal-F and Universal-R ( Table 2) at the DNA sequencing facility of the Universidad de Salamanca, Spain. The insert of pNVmucR01 was extracted by digestion with SphI and SacI restriction enzymes and cloned into plasmid pCVDKan-D digested with the same enzymes. 
The resulting plasmid pNVmucR02, containing the defective mucR gene together with the sacB gene that confers sensitivity to sucrose and a kanamycin resistance cassette, was replicated in E. coli CC118 and subsequently introduced in parental B. ovis PA by electroporation. The bacteria were cultured in TSA-YE-HS plates supplemented with Kan to select an intermediate strain with the entire plasmid integrated in the chromosome after a single homologous recombination event occurred through one of the mucR ends. The intermediate strain, which is resistant to Kan and contains a copy of wild-type mucR and one copy of mucR, was cultured in the presence of sucrose to select for the second homologous recombination event that leads either to a revertant strain recovering the parental genotype or to the desired mucR mutant and the loss of the plasmid. Differentiation between both B. ovis PA strains was performed by PCR with specific primers annealing inside and/or hdeAMUT-R ACGAGCGCCCAGAAGGTA hdeA (BOV_A0312) Primers for RT-qPCR or verification of recombinant plasmids and mutants The primers were purchased from IDT, Leuven, Belgium. Lowercase sequences in mucROVL-F and hdeAOVL-F correspond to regions overlapping with mucROVL-R and hdeAOVL-R, respectively. b The target gene is the B. ovis gene to be deleted or PCR-amplified. The primers were designed according to the published genome sequence of B. ovis 63/290 (ATCC 25840) (accession numbers NC_009505 and NC_009504 for chromosome I and II, respectively). The primers targeting 16S were those previously described (27). Primers Universal-F and Universal-R were used for sequencing the DNA insert of the pGEM-T Easy recombinant plasmids. outside the deleted fragment (pairs mucRMUT-F + mucRMUT-R and mucRMUT-F + mucR-R4) ( Table 2). Two intermediate strains selected from two independent electroporation events were selected to obtain two independent mutant strains for confirmative studies. The mucR mutant was complemented in trans with plasmid pNVmucRcom01, which is pBBR1MCS-2 bearing wild type mucR of B. ovis PA amplified with primers mucR-com1 and mucR-com2 ( Table 2). The hdeA mutant of B. ovis PA was obtained with the same procedure but using the specific primers listed in Table 2. The pNVhdeA02 plasmid was electroporated in parental B. ovis PA to obtain the hdeA mutant and in the mucR mutant to obtain the mucR hdeA double mutant. Growth, Autoagglutination, and Susceptibility Assays The growth characteristics of mutant strains were evaluated in comparison with those of the parental strain B. ovis PA. Bacterial suspensions of optical density values at 600 nm (OD 600 ) of 0.2 were prepared in phosphate-buffered saline (PBS). Serial dilutions were plated in triplicate on TSA-YE-HS plates that were then incubated for 10 days at 37 • C in a 5% CO 2 atmosphere. The colony size was periodically checked, and the number of colonyforming units (CFU)/ml corresponding to the OD 600 score of 0.2 was determined and used for further experiments. Growth curves were determined in TSB-YE-HS inoculated with 2.5 × 10 8 CFU/ml of each B. ovis strain and incubated under agitation at 120 rpm at 37 • C and 5% CO 2 . The OD 600 scores were periodically determined, and the number of CFU/ml was evaluated at several time points by platting serial dilutions on TSA-YE-HS plates. 
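The CFU/ml values mentioned above follow from standard serial-dilution arithmetic. The sketch below only illustrates that calculation with invented plate counts and an assumed plated volume; it is not data or code from this study.

```python
# Hypothetical example of computing CFU/ml from serial-dilution plate counts.
def cfu_per_ml(colony_counts, dilution_factor, plated_volume_ml):
    """Mean colony count across replicate plates, scaled by dilution and plated volume."""
    mean_count = sum(colony_counts) / len(colony_counts)
    return mean_count * dilution_factor / plated_volume_ml

# e.g., triplicate plates of a 10^-6 dilution with 0.1 ml plated per plate (invented numbers)
print(cfu_per_ml([125, 131, 118], dilution_factor=1e6, plated_volume_ml=0.1))  # ~1.2e9 CFU/ml
```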
To evaluate susceptibility to hypersaline medium, TSA-YE-HS containing 1 M NaCl was inoculated with 2.5 × 10 8 CFU/ml of each bacterial strain and incubated under agitation at 120 rpm at 37 • C and 5% CO 2 for 24 h when the number of CFU/ml was determined. Susceptibility to acid pH was evaluated similarly in TSB-YE-HS adjusted to pH 4.4 and inoculated with 5 × 10 8 CFU/ml of each bacterial strain. The results were presented as means ± SD of three assays. Autoagglutination ability was evaluated as previously described (11,26) by determining the evolution of OD 600 values-in static incubation at room temperature-of bacterial suspensions in TSA-YE-HS of initial OD 600 scores of 0.8. Susceptibility test to polymyxin B (10 mg/ml), sodium deoxycholate (10 mg/ml), sodium dodecyl sulfate (SDS, 10 mg/ml), Tween 20 , and hydrogen peroxide (H 2 O 2 , 7.5%) (Sigma Aldrich, St. Louis, MO, USA) was performed in a disc assay. The bacterial suspensions (100 µl containing 10 8 CFU) were spread on TSA-YE-HS plates. Then, a paper disc (diameter of 9 mm) was deposited on the center of the plate and soaked with 20 µl of each agent. The diameter of the growth inhibition zone was measured after 7 days of incubation at 37 • C and 5% CO 2 (four diameter scores were considered to establish the mean value for each plate). The results were presented as means ± SD of three assays. Protein and Immunological Techniques SDS-polyacrylamide gel electrophoresis (SDS-PAGE) was performed, as previously described (18), with whole-cell bacterial lysates and using pre-stained protein marker VI (Applichem-Panreac, Barcelona, Spain) as protein standard. The samples were resolved in a Protean II xi cell (Bio-Rad, Hercules, CA, United States) using 14% acrylamide/bisacrylamide gels. After the electrophoresis step, the protein bands were stained with Coomassie blue or transferred to nitrocellulose with a semidry electroblotter (Amersham, GE Healthcare, Little Chalfont, United Kingdom) for the analysis of reactivity with antibodies against surface antigens. The proteomic analysis of protein bands excised from SDS-PAGE gels was performed by MALDI-TOF or LC-MS/MS, after trypsin in-gel digestion, in the proteomics facility of Centro de Investigación del Cáncer, Salamanca, Spain, following its standardized procedures. Immunoblot was performed, as described before (18,28), with rabbit antibodies raised previously against Omp31b, Omp25c, or Omp25d recombinant outer membrane proteins of Brucella spp. (18), a goat anti-rabbit IgG-peroxidase conjugate as secondary antibody, and 4-chloro-1-naphthol (Sigma Aldrich, St. Louis, MO, USA) as substrate for peroxidase. Before immunoblotting, the proteins transferred to the nitrocellulose membrane were stained with Ponceau S (Sigma Aldrich, St. Louis, MO, USA) to check the protein load (that was similar in all tested strains). Quantitative Reverse-Transcription PCR For relative quantification of transcripts by real-time reverse transcription-PCR (RT-qPCR), bacterial strains were incubated in TSB-YE-HS for 15 h (exponential growth phase), and RNA was extracted using the E.Z.N.A. R Bacterial RNA Kit (Omega Bio-tek Inc., Norcross, GA, USA). Contaminant DNA was removed by DNaseI, RNase-free treatment (Thermo Fisher Scientific, Vilnius, Lithuania), and cDNA synthetized from 1 µg of RNA with the NZY first-strand cDNA synthesis kit (NZYTech Lda., Lisboa, Portugal). A reaction without retrotranscriptase was also settled to be used in RT-PCR reactions as control of DNA absence. 
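To make the relative quantification step concrete before describing the amplification reactions, the following sketch spells out the arithmetic of the 2^-ΔΔCt calculation (reported below as log2 RQ) with 16S as the reference gene. It is an illustrative example with invented Ct values, not output from the StepOne software or from this study.

```python
# Illustrative 2^-ΔΔCt calculation (log2 RQ), normalizing a target gene to 16S
# and comparing a mutant strain to the parental reference strain.
def log2_rq(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """log2 relative quantity of a target gene in a test strain vs. the reference strain."""
    delta_ct_test = ct_target_test - ct_ref_test     # ΔCt in the test (mutant) strain
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl     # ΔCt in the reference (parental) strain
    delta_delta_ct = delta_ct_test - delta_ct_ctrl   # ΔΔCt
    return -delta_delta_ct                           # log2(2^-ΔΔCt) = -ΔΔCt

# Hypothetical Ct values for a target gene in a mutant vs. the parental strain:
print(log2_rq(22.0, 14.0, 26.0, 14.0))  # 4.0, i.e., ~16-fold higher transcript level
```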
Real-time reactions were performed, in a StepOnePlus™ apparatus (Applied Biosystems, Foster City, USA), on cDNA obtained from B. ovis PA, the ΔmucR mutant, and the ΔmucR mutant complemented with wild-type mucR. The primer pairs listed in Table 2 and NZYSupreme qPCR Green Master Mix (2x) ROX plus (NZYTech Lda., Lisboa, Portugal) were used in the amplification reactions. Two independent biological samples with three technical replicates for each strain and primer pair were analyzed. Calculation of relative expression was performed with the StepOne™ software v2.3 (2^-ΔΔCt method). B. ovis PA and 16S were used as reference strain and gene, respectively, and the results were expressed as mean ± SD of the log2 of the relative quantity (log2 RQ).

Virulence Assays

J774A.1 murine macrophages were used to evaluate the intracellular behavior of B. ovis strains in phagocytic cells as previously described (14). Briefly, 2 × 10^4 macrophages per well were incubated for 24 h in 96-well plates at 37 °C and 5% CO2. The macrophages were then infected with each B. ovis strain at a multiplicity of infection of 1:200. After an incubation period of 2 h, extracellular bacteria were killed with gentamycin, macrophages were lysed, and the number of intracellular bacteria was determined in three wells per strain by plating serial dilutions on TSA-YE-HS (t0). Intracellular bacteria were also determined 20 and 44 h later (t20 and t44) in three wells per strain, where infected macrophages had been maintained in the presence of gentamycin. The results were expressed as means ± SD (n = 3) of the log10 CFU/well values at each time point. Virulence in mice was evaluated in 6-week-old female BALB/c mice (Charles River Laboratories, Chatillon-sur-Chalaronne, France). Mice, received one week before the start of the experiment, were inoculated intraperitoneally with 10^6 or 10^8 CFU of each bacterial strain in 0.2 ml PBS. At several post-infection (p.i.) time points, CFU were determined in the spleen, in five mice per group, as described before (29). The results were expressed as means ± SD (n = 5) of the log10 CFU/spleen values.

Statistical Analysis

Statistical analysis was performed by one-way ANOVA and Fisher's LSD using the GraphPad Prism 7 Software (GraphPad Software Inc., San Diego, CA, United States). A 99% confidence interval was considered for statistically significant differences (P < 0.01).

RESULTS

HdeA, Omp25d, and BOV_A0299 Are Over-translated in B. ovis PA ΔmucR

A first proteomic analysis by SDS-PAGE was performed with parental B. ovis PA, the ΔmucR mutant, and the ΔmucR mutant complemented in trans with wild-type mucR. When compared to the parental strain, three proteins of the ΔmucR mutant showed increased levels that recovered those of the parental strain after complementation with wild-type mucR (Figure 1). The results were also reproduced in a second ΔmucR mutant (ΔmucR M2) (Figure 1), which was obtained from another independent mutagenesis procedure and was included in some analyses to obtain additional confirmation of results. The three protein bands were excised from the gel and identified by the proteomics facility of Centro de Investigación del Cáncer, Salamanca, Spain. A protein of low molecular mass showed a remarkable intensity in the ΔmucR mutant (Figure 1) and was identified as the product of the gene BOV_A0312, which codes for a protein annotated as acid-activated periplasmic chaperone HdeA in the genome sequence of B. ovis 63/290 (GenBank accession number NC_009504) and that contributes to acid resistance in B. abortus 2308 (23).
Another protein band matched Omp25d (BOV_0115) (Figure 1), a member of the Brucella Omp25/Omp31 family of OMPs that has never been detected in parental B. ovis PA and was only detected in a Δomp25d mutant complemented in trans that overexpresses omp25d (18). The third overproduced protein, located between Omp25d and HdeA in the SDS-PAGE gel, was identified as the product of gene BOV_A0299, which corresponds to a hypothetical protein lacking homology with proteins of known function. Considering that brucellae are intracellular bacteria facing acidic conditions in the phagocyte (30) and that HdeA, described as a chaperone activated under acidic conditions, is highly represented in the ΔmucR mutant (Figure 1) and has never been studied in B. ovis, the hdeA deletion mutant was constructed in B. ovis PA and in the ΔmucR mutant. Both the single ΔhdeA and double ΔmucRΔhdeA mutants were included in further experiments. As expected, overproduction of HdeA was not detected when hdeA was deleted from the ΔmucR mutant, but the resulting double ΔmucRΔhdeA mutant maintained the increased levels of Omp25d and BOV_A0299 (Figure 1). No apparent differences in the SDS-PAGE protein profile were observed between the parental strain and the single ΔhdeA mutant, although it must be considered that a discrete HdeA band is not detected in the parental strain (Figure 1). The colony size of mutant strains and parental B. ovis PA in TSA-YE-HS solid medium was monitored daily and recorded after 6 days (Figure 2A). Colonies of the ΔmucR mutant were significantly smaller than those of B. ovis PA, and complementation of the mutant with wild-type mucR restored the parental phenotype. The ΔmucRΔhdeA double mutant had slightly bigger colonies than the ΔmucR mutant, but not as big as those of the parental strain (Figure 2A). On the contrary, the single deletion of hdeA in B. ovis PA did not have an apparent effect on colony size (Figure 2A). The growth deficiencies of the ΔmucR mutants were also evident in TSB-YE-HS liquid medium since the ΔmucR and ΔmucRΔhdeA mutants had reduced replication rates, and their OD600 and log10 CFU values in the stationary phase were significantly lower than those obtained with the parental strain and the ΔhdeA mutant (Figures 2B,C).

B. ovis PA ΔmucR and ΔmucRΔhdeA, but Not ΔhdeA, Have Altered Outer Membrane-Related Properties

A disc assay was used to determine the susceptibility of the mutant strains derived from B. ovis PA to several compounds related to outer membrane properties and/or survival in the host. (Table 3 legend: discs were soaked with 20 µl of each compound, and the diameter of the growth inhibition area was measured after a period of incubation of 7 days; the results are mean ± SD of three assays; statistically significant differences (P < 0.01), when compared to the parental strain, are marked with an asterisk.) No statistically significant differences were found between parental B. ovis PA and the mutant strains regarding susceptibility to H2O2 or the high-affinity iron chelator 2,2′-dipyridyl (Table 3). On the contrary, the ΔmucR single mutant and the ΔmucRΔhdeA double mutant were more susceptible (P < 0.001) to the cationic peptide polymyxin B and the detergents sodium deoxycholate, sodium dodecyl sulfate, Tween 20, and CHAPS (Table 3). Complementation in trans of the ΔmucR mutant with wild-type mucR restored the parental phenotype (Table 3).
Additionally, the mucR single and double mutants showed autoagglutination ability indicative of surface alterations (Figure 3), but no significant differences were found among strains regarding survival to exposure for 24 h to hypersaline or acid pH conditions (Supplementary Table S1). Immunodetection of Omp25d and Major OMPs of the Omp25/Omp31 Family Confirmation of Omp25d overproduction in the mucR and mucR hdeA mutants was obtained by immunoblot with rabbit sera raised previously against recombinant Omp25d (18). Both mutant strains developed a protein band of identical molecular mass to that detected in a B. ovis PA omp25d mutant complemented in trans with omp25d and that overproduces Omp25d (18) (Figure 4A). As expected according to previous reports, Omp25d was not detected by immunoblot in parental B. ovis PA (18), and this phenotype was recovered in the mucR mutant complemented with mucR ( Figure 4A). Omp25d was also undetectable in the hdeA mutant (Figure 4A), which is in accordance with the SDS-PAGE protein profile (Figure 1). Considering the overproduction of Omp25d detected in the mucR and mucR hdeA mutants and that members of the Brucella spp. Omp25/Omp31 seem to be tightly balanced (18), these mutants were evaluated in immunoblot (Figures 4B,C) with rabbit sera that allow the immunological detection of Omp31, Omp25, and Omp25c major OMPs (18). Except for the omp31 mutant of B. ovis PA used as control in this assay (25), the characteristic multiple band pattern of Omp31 was detected in all mutant strains tested, although Omp31 appeared to be more abundant in the parental strain and the complemented mucR mutant (Figure 4B). Although the multiple band profile of Omp31 makes it difficult to estimate its quantity by SDS-PAGE, a lower amount of Omp31 in the mucR and mucR hdeA mutants would be in accordance with the transcriptomic results described below. Omp25c and Omp25 were also detected in the mutant strains ( Figure 4C) by reactivity with an anti-Omp25c sera that cross-react with Omp25, thus allowing the detection of both proteins (18). Differentiation between Omp25 and Omp25c bands was performed according to the reactivity profile of the omp25 and omp25c control strains previously obtained. Although deletion of mucR in B. ovis PA did not lead to important alterations in the levels of Omp25 and Omp25c as visualized by immunoblot (Figure 4C), subtle differences seem to exist among strains (single and double mucR mutants seem to have a more intense Omp25 band and less intense Omp25c bands) that would also correlate with the RT-qPCR results described below. ovis PA (Figure 1), suggesting that the corresponding genes are upregulated in this mutant. However, the analysis of previous works performed with smooth B. melitensis 16M and B. abortus 2308 showed that orthologs of BOV_A0299 were not listed among the genes regulated by MucR and that hdeA was not cited as over-transcribed in the mucR mutant of B. abortus 2308 ( Table 4). Considering these differences among species, a transcriptomic analysis of the B. ovis PA mucR mutant and the complemented strain was performed in comparison with the parental strain. A panel of 19 genes was selected according to the protein profile detected in SDS-PAGE and immunoblot in the B. ovis PA mutants (Figures 1, 4) and to the transcriptome of mucR mutants analyzed in smooth Brucella species (20,31). Data of the mucR mutant of B. 
canis RM6/66 described during the preparation of this manuscript (32) were also included in the interspecies comparative analysis summarized in Table 4. In accordance with the proteomic results (Figure 1), transcripts for genes hdeA, omp25d, and BOV_A0299 were overrepresented in the mucR mutant of B. ovis PA when compared to the parental strain and the complemented mutant ( Figure 5); the results were also reproduced in B. canis RM6/66 ( Table 4). Considering the results described in B. abortus 2308 (20,33), genes BOV_0982 and BOV_0183, the latter coding for the LuxR family quorum sensing transcriptional regulator blxR (34) also named babR (35), were also included in the transcriptomic analysis. In B. abortus 2308, overexpression of the corresponding orthologs BAB1_0190 and BAB1_1035 in the mucR mutant has been associated to the demonstrated direct binding of MucR to their promoter region (20,33). Both genes were also upregulated in the mucR mutant of B. ovis PA ( Figure 5) and in B. canis RM6/66 (Table 4). However, the B. melitensis 16M ortholog of BOV_0982 (BMEI0948) does not appear among the upregulated genes in the mucR mutant (31) ( Table 4). Four other genes (orthologs of genes BOV_1296, BOV_1963, BOV_1935, and BOV_1925) upregulated in either B. melitensis or B. abortus mucR mutants (but not simultaneously in both strains) were also found upregulated in the mucR mutant of B. ovis ( Figure 5) and B. canis (32) ( Table 4) Table 4). More interspecies differences in the transcriptome of Brucella mucR mutants were observed regarding vjbR, which codes for a quorum sensing-related regulator acting as activator of flagellar and virB genes in B. melitensis (36) and that is required for virulence in Brucella spp. (11,36,37). Expression patterns dependent on Brucella species were also detected for genes encoding surface antigens of the Omp25/Omp31 family, for queC, and for virulence genes reported as downregulated in mucR mutants of smooth B. melitensis 16M or B. abortus 2308 (20,31), such as the virB operon encoding the type IV secretion system or iron homeostasis genes bfr and ftrA ( Figure 5, Table 4). Omp31 (B), Omp25, and Omp25c (C) were detected by immunoblot after SDS-PAGE resolution of whole-cell lysates. Previously characterized rabbit sera raised against recombinant proteins of the Omp25/Omp31 family (18) were used as primary antibodies. The anti-Omp25c serum allows the simultaneous detection of Omp25 and Omp25c because of cross-reactivity due to the amino acid sequence similarity between both proteins (18). Mutant B. ovis PA strains defective in Omp31, Omp25, or Omp25c obtained before (25,26) were used as negative control strains for each protein. Since, as previously described, Omp25d is not detected in parental B. ovis PA (18), an omp25d mutant complemented in trans with wild-type omp25d and that overproduces Omp25d (18,26) was used as the positive control strain for this protein. The omp25d mutant was not included since B. ovis PA behaves as the negative control for Omp25d. B. ovis PA mucR and mucR hdeA, but Not hdeA, Have Attenuated Virulence in Macrophages and Mice Murine J774A.1 macrophages were used to determine the ability of the B. ovis PA mutants obtained in this work to internalize, survive, and replicate within phagocytic cells. While the hdeA mutant behaved as the parental strain, the mucR and mucR hdeA mutants internalized similarly to the parental strain but showed increased killing at t20. 
Both mutants were able to replicate thereafter, but intracellular counts at t44 were in the order of 1 log unit lower than those obtained with the parental strain (Figure 6A). For a first evaluation in the mouse model, the mice were inoculated with 10^6 CFU of the parental strain or the isogenic mutants obtained in this work. Bacterial counts in the spleen were determined at weeks 3 and 7 p.i., time points that correspond to the peak of infection in mice for B. ovis PA (acute phase of infection) and to the plateau of the chronic phase (25). The ΔhdeA mutant did not show statistically significant differences with the parental strain even at week 11 p.i., an additional point of analysis included for this mutant (Figure 6B). On the contrary, spleens of mice inoculated with the ΔmucR and ΔmucRΔhdeA mutants were free of infection, except for one mouse at each sampling point of the group inoculated with B. ovis PA ΔmucRΔhdeA (Figure 6B). A second experiment was performed in mice inoculated with 10^8 CFU, a dose usually employed for protection experiments with B. ovis attenuated vaccines (13,25,29), in which bacterial splenic colonization was monitored from week 1 to 11 p.i. In these conditions, the ΔmucR mutants produced a persistent infection but with bacterial splenic colonization levels that were significantly lower than those observed with the parental strain (Figure 6C).

DISCUSSION

In the last decade, the regulatory network of the MucR transcriptional regulator has been depicted in B. melitensis 16M and B. abortus 2308 (20,21,31) and very recently also in B. canis RM6/66 (32). Several upregulated or downregulated genes at the transcription level were found in the corresponding ΔmucR mutants, but apart from some flagellar proteins in B. melitensis 16M (21), no reports exist regarding whether the modification of the transcription rate of affected genes effectively leads to increased or reduced levels of the encoded proteins. In this work, we demonstrate that deletion of mucR in B. ovis PA leads to increased levels of transcripts for hdeA, omp25d, and BOV_A0299 (Figure 5), resulting in increased translation of their respective encoded proteins, which were easily detected after SDS-PAGE followed by Coomassie blue staining (Figure 1). HdeA has been described in enteric bacteria as a periplasmic chaperone with activity at low pH and required for acid resistance (38,39), and its ortholog in B. abortus 2308 was found to be involved in survival to in vitro acid stress exposure but dispensable for virulence in macrophages and mice (23). Considering its high overproduction in the ΔmucR mutant of B. ovis PA and its described role in resistance to acid pH, a condition encountered by Brucella inside phagocytes, we considered it interesting to analyze the relevance of HdeA for the in vitro and in vivo behavior of B. ovis PA and the isogenic ΔmucR mutant. However, no differences between the parental strain and the B. ovis PA ΔhdeA mutant were found in any of the tests performed, including survival at acid pH in the conditions assayed (Supplementary Table S1) and virulence (Figure 6). Accordingly, HdeA is not essential, at least in B. abortus 2308 and B. ovis PA, to survive under the acidic conditions that Brucella encounters inside phagocytes or to follow a normal infectious process in mice inoculated intraperitoneally. (Table 4, footnote a (20,21,31,32): virB2 is not listed as differentially expressed in the B. melitensis 16M ΔmucR mutant, but several other genes of the virB operon are downregulated in this strain (31).)
Nevertheless, HdeA of pathogenic enteric bacteria has exclusive activity at stomach pH ranges (pH values below 3)-conditions that have not been analyzed with Brucella hdeA mutants-and is essential for bacterial resistance to this acidic environment (38,39). Accordingly, although the intestinal mucosa is not considered a relevant port of entry for Brucella (40), HdeA could have a role if some degree of invasion by this route occurs. Moreover, contribution of HdeA to Brucella virulence in the natural host cannot be discarded. Although other characteristics not evaluated in this work could be affected, overproduction of HdeA in the B. ovis PA mucR mutant (Figure 1) was not responsible for any of its observed defective characteristics (i.e., growth, OM-related properties, and virulence) nor provided beneficial characteristics to the mucR mutant since the behavior of the mucR hdeA double mutant was undistinguishable from that of the mucR single mutant in all tests performed. Omp25d is a member of the Omp25/Omp31 family, which is constituted by seven homologous outer membrane proteins with a different occurrence and distribution pattern depending on the Brucella species (18). Detection of Omp25d in wild-type brucellae has not been reported, and this protein could only be detected in a B. ovis omp25d mutant complemented in trans with omp25d and that overexpresses omp25d (18). The protein has been linked to the virulence of B. ovis PA (26), and its overproduction could be somehow involved in the attenuation of the mucR mutant (Figure 6). In this respect, proteins of the Omp25/Omp31 family seem to be finely tuned (18), as it is also suggested by the lower transcription levels of omp31 and omp25c-which encode two major OMPs of the family (18)-that were observed in the Omp25d-overproducing mucR mutant ( Figure 5) and that also seem to correlate with lower protein levels (Figure 4). According to these results, which include the lack of detection of Omp25d in B. ovis PA under standard culture conditions (Figures 1, 4A), MucR could modulate the levels of surface proteins of the Omp25/Omp31 family in response to environmental stimuli within the host to build an optimal surface architecture for the establishment of infection. Regarding BOV_A0299, the third overproduced protein detected in the mucR mutant of B. ovis PA (Figure 1), no homology with proteins of known function has been evidenced, but the protein is highly conserved in the genus Brucella, at least in the classical species (data not shown). According to the attenuation of the mucR mutant and the degree of conservation of BOV_A0299 in the genus Brucella, this protein emerges as a new candidate to be further studied regarding its role in the biology of the bacterium. The DNA region located upstream of babR (or blxR), a gene that is upregulated in the four available Brucella mucR mutants ( Table 4), has been previously analyzed in B. abortus 2308 while searching for the target site for MucR binding (33). Multiple binding sites of MucR to the promoter region of babR were identified, and it was determined that AT-rich regions containing T-A steps were involved in the MucR-promoter interaction resulting in transcriptional repression (33). Considering these observations, we have analyzed the DNA region located upstream of the genes coding for the three proteins overproduced in the B. ovis PA mucR mutant (HdeA, Omp25d, and BOV_A0299). 
The three genes presented AT-rich regions containing T-A steps (Supplementary Figure S1) that could be targets for MucR binding, which suggests that MucR also acts as a direct repressor of these genes. Despite the homology at the DNA level shared by the classical Brucella species (3), the mucR available mutants show not only common transcriptomic characteristics but also differences in the expression of several genes ( Table 4). Among the first genes that were shown to be regulated by MucR, several flagellar genes of B. melitensis 16M locus I (fliF, fliC, ftcR, and flgE) and that are required for virulence (41) can be mentioned (21). It was proposed that the downregulation of flagellar genes mediated by MucR was due to its repressor activity upstream of ftcR, which encodes a master regulator of flagellar expression (21). On the contrary, in B. ovis PA no MucR-mediated positive or negative regulation of fliC and fliF flagellar genes has been evidenced ( Figure 5, Table 4). Nevertheless, this observation is not in disagreement with the proposed role of MucR as repressor of ftcR since the predicted binding site of FtcR upstream of fliF (42) is missing in B. ovis due to two independent deletion events accounting for 283 bp (11). Even if there was a MucR-mediated regulation of flagellar gene expression in B. ovis PA, deletion of mucR would not have a relevant impact in flagellum-dependent bacterial properties since it has been demonstrated that the entire three main flagellar loci are dispensable for B. ovis PA virulence, and their deletion does not modify any of the evaluated bacterial characteristics (15). Upregulation of flagellar genes detected in B. canis RM6/66 mucR (32) was also in accordance with the results described for the B. melitensis 16M mutant ( Table 4), but the expression of flagellar genes in B. abortus 2308 does not seem to be dependent on MucR (20) ( Table 4). These differences cannot be explained according to distinctive traits in DNA regions located upstream of fliC, fliF, or ftcR since they are almost identical between B. melitensis 16M and B. abortus 2308 (data not shown). However, in addition to MucR and FtcR, other actors affecting flagellar gene expression have been identified in B. melitensis 16M (e.g., VjbR, BlxR, RpoE1, RpoH2, BdpA, and YbeY), which suggest a complex regulation network for flagellar gene expression (21). Therefore, the differences in flagellar gene expression detected among Brucella species in mucR mutants could be related to different effects on the expression of the other regulators of the network, although the influence of the experimental conditions used in each work cannot be discarded. Another recognized virulence factor in Brucella, including B. ovis PA, is the type IV secretion system whose components are encoded by the virB operon (11,30). Although binding of MucR upstream of virB1 has been evidenced in B. abortus 2308 (33), this interaction had a weaker affinity than that observed with the babR promoter and was considered to have little impact on the expression of virB genes in this strain (33). These observations corroborate previous results where no virB genes were listed among the differentially expressed genes in the mucR mutant of B. abortus 2308 (20) and are in accordance with results obtained with the B. canis RM6/66 mutant. On the contrary, several virB genes were downregulated in the mucR mutant of B. melitensis 16M (31), and reduced virB2 transcription was detected in the B. ovis PA mucR mutant ( Figure 5, Table 4). 
Considering the low affinity of MucR by the virB promoter (33) and that DNA regions located upstream of virB1 and virB2 are almost identical in the four species (data not shown), differences among species regarding the role of MucR in virB expression are probably not related to a direct binding of MucR to the promoter region but to an indirect effect on other known or unknown regulators of this operon that is controlled by a complex regulatory network of repressors and activators responding to different environmental stimuli (30). BabR and VjbR, listed in Table 4, are two quorum-sensing-related transcriptional regulators known to affect virB expression, but several other proteins have been involved in its regulation (30). In this respect, the reported differences among Brucella strains regarding the production of VirB proteins under several culture conditions provide evidence of a differential regulation of virB expression within the genus Brucella (43). The differences among Brucella spp. mucR mutants were not exclusive to the transcriptomic pattern but also extended to phenotypic characteristics. A surprising observation was that, while the deletion of mucR caused in vitro growth defects in B. melitensis 16M (21), B. abortus 2308 (20), and B. ovis PA (Figure 2)-although they were less evident in B. melitensis 16M (21, 31)-the growth characteristics of the mucR mutant of B. canis RM6/66 were identical to those of the parental strain (32). Also remarkable is the fact that, contrary to what was described in smooth Brucella mucR mutants, B. ovis PA mucR or mucR hdeA did not show higher susceptibility than the parental strain to oxidative, acid, or hypersaline stresses in the conditions assayed ( Table 3, Supplementary Table S1). Additionally, B. ovis PA mucR did not show the increased susceptibility to iron restriction ( Table 3) reported for mutants of the three other Brucella species (20,31,32) despite the downregulation observed by the RT-qPCR of genes ftrA and bfr that are related to iron homeostasis ( Figure 5). Susceptibility to detergents and the cationic peptide polymyxin B, which is related to properties of the bacterial cell envelope (5,19), has been evaluated in the Brucella spp. mucR mutants, except that of B. canis. While susceptibility to detergents is shared by the Brucella spp. mucR mutants (20, 21) ( Table 3), deletion of mucR increases the susceptibility of B. melitensis 16M and B. ovis PA (21) ( Table 3), but not that of B. abortus 2308 (20) to polymyxin B. Considering the influence of MucR in the modulation of the bacterial surface architecture (clearly exemplified in this work by the increased levels of Omp25d in the B. ovis PA mucR mutant), the differences observed among the Brucella spp. mucR mutants might be related, at least in part, to the distinctive OM-related properties reported for each Brucella species (5,19) and contribute to the differences of pathogenicity, host preference and tissue tropism that exist within the genus Brucella. Some of these distinctive properties seem to be associated to the smooth or rough phenotype, but some others seem to be dependent on differences in other cell envelope components (19). In this respect, a distinctive pattern for each Brucella species regarding members of the Omp25/Omp31 family, at least Omp25d being controlled by MucR (Figure 1), has been described (5,18). Despite the differences detailed above, all Brucella spp. 
ΔmucR mutants shared the virulence attenuation in cellular models and mice as a common trait (20-22, 32) (Figure 6). Although the B. ovis PA ΔmucR and ΔmucRΔhdeA mutants have important in vitro growth defects (Figure 2), which would be a drawback for their industrial production, the persistent infection that they establish in mice (Figure 6) is an interesting trait that makes them potential candidates to be evaluated as specific attenuated vaccines against ovine brucellosis caused by B. ovis. The good protective activity against B. melitensis infection described for B. melitensis 16M ΔmucR (44) also encourages further evaluation of the B. ovis PA mutant as a vaccine.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The animal study was reviewed and approved by the Bioethics Committee of the University of Salamanca and the competent authority of Junta de Castilla y León, Spain.

AUTHOR CONTRIBUTIONS

RS-M and NV conceived the study. BT-C and NV wrote the manuscript, and all authors participated in the experimental work, the discussion of the results, and the revision of the manuscript. All authors read and approved the final version of the manuscript.
c-Cbl mediates the degradation of tumorigenic nuclear β-catenin contributing to the heterogeneity in Wnt activity in colorectal tumors

Despite the loss of Adenomatous Polyposis Coli (APC) in a majority of colorectal cancers (CRC), not all CRCs bear hallmarks of Wnt activation, such as nuclear β-catenin. This underscores the presence of other Wnt regulators that are important to define, given the pathogenic and prognostic roles of nuclear β-catenin in human CRC. Herein, we investigated the effect of Casitas B-lineage lymphoma (c-Cbl) on nuclear β-catenin, which is an oncoprotein upregulated in CRC due to loss-of-function APC or gain-of-function CTNNB1 mutations. Despite mechanistic rationale and recent discoveries of c-Cbl's mutations in solid tumors, little is known about its functional importance in CRC. Our study in a cohort of human CRC patients demonstrated an inverse correlation between nuclear β-catenin and c-Cbl. Further investigation showed that the loss of c-Cbl activity significantly enhanced nuclear β-catenin and CRC tumor growth in cell culture and a mouse xenograft model. c-Cbl interacted with and downregulated β-catenin in a manner that was independent of CTNNB1 or APC mutation status. This study demonstrates a previously unrecognized function of c-Cbl as a negative regulator of CRC.

Color-based image segmentation was performed with k-means clustering, an approach that partitions data into clusters represented by centroids [47][48][49]. This algorithm treats each object as having a location in space and uses a heuristic to find centroid seeds for clustering. It finds partitions such that objects within each cluster are as close to each other as possible, and as far from objects in other clusters as possible. The approach requires that one specify the number of clusters to be partitioned and a distance metric to quantify how close two objects are to each other. For our case, we used a squared-Euclidean distance metric, defined as d(x, c) = ||x - c||^2, where x is the pixel of interest, c is the centroid, and d is the computed distance between the pixel and the centroid. Note that each centroid is the mean of the points in that cluster. For each pixel in the image, the clustering approach returns an index corresponding to a cluster. Using this index, one can separate objects by color. In order to ensure robustness, the clustering procedure using new initial cluster centroid positions was repeated ten times, and the solution with the lowest cluster sums of point-to-centroid distances was selected. Note that since the color information exists in the a*b* space, the objects derived using the clustering approach are pixels with a* and b* values. Based on general observation of all the original images, the expert identified three basic subregions within each image: nuclei and their neighborhood, the luminal area, and the interstitial space. This identification served as the basis for us to sub-divide each image into three clusters as the first step of color-based segmentation. For the case of c-Cbl, the k-means algorithm (k=3) was used on each transformed color image in a*b* space and segmented into three clusters. From the three segmented clusters, the expert then identified the cluster that encapsulated the cytoplasmic area within the entire image. A size-based filtering operation was then performed on the identified cluster to eliminate all connected components smaller than a threshold pixel area. Several threshold values for the pixel area were explored. After careful consideration, all connected components with fewer than 5000 pixels were removed from the identified cytoplasmic cluster.
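As a compact illustration of the segmentation procedure just described, the sketch below clusters pixels in a*b* space with k-means (k = 3, ten restarts) and removes connected components below the 5000-pixel threshold. It is a minimal example assuming scikit-image and scikit-learn are available; the function names are hypothetical and it is not the pipeline used in the study.

```python
# Minimal, illustrative sketch of color-based segmentation in a*b* space.
import numpy as np
from skimage import color, morphology
from sklearn.cluster import KMeans

def segment_ab_space(rgb_image, n_clusters=3, min_area=5000, n_restarts=10):
    """Cluster pixels by their a* and b* values and size-filter each cluster mask."""
    lab = color.rgb2lab(rgb_image)               # convert RGB to CIE L*a*b*
    ab = lab[:, :, 1:3].reshape(-1, 2)           # keep only the a* and b* channels
    # k-means uses the squared-Euclidean distance d(x, c) = ||x - c||^2;
    # n_init restarts keep the solution with the lowest sum of point-to-centroid distances.
    km = KMeans(n_clusters=n_clusters, n_init=n_restarts, random_state=0).fit(ab)
    labels = km.labels_.reshape(rgb_image.shape[:2])
    masks = []
    for k in range(n_clusters):
        mask = labels == k
        # drop connected components smaller than the pixel-area threshold
        masks.append(morphology.remove_small_objects(mask, min_size=min_area))
    return masks

def area_fraction(mask):
    """Fraction of non-zero pixels, as used below for the valid-area estimates."""
    return mask.sum() / mask.size
```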
The resulting image was verified by the expert as the one comprising a filtered sub-region of c-Cbl content as exemplified by the colored staining within the cytosolic regions of each cell within the tissue. For the case of β-catenin, the same clustering algorithm was used to first divide the transformed color image in a*b* space into three clusters. As the goal here was to estimate the amount of nuclear β-catenin, separating the nuclei that contained β-catenin from the other nuclei within the interstitial and other tissue regions were needed. Therefore, each of the three clusters derived from the color image in a*b* space was divided into two sub-groups using the same k-means (k=2) clustering approach. The final outcome of this 2-tier clustering approach was 6 non-overlapping images. The expert then identified the images with nuclear β-catenin. For many cases, one out of six images was identified to contain exclusive nuclear β-catenin but for some cases, there were at least 2 images that were identified to contain nuclear β-Catenin. Valid area estimation of sub-regions per image: For both c-Cbl and β-catenin, estimation of the size(s) of the resulting sub-region(s) was performed by computing the fraction of the non-zero pixels within the entire image(s). This resulted in a measure of cytoplasmic c-Cbl and nuclear β-catenin contents, respectively. When more than one image was identified (for nuclear β-catenin), a sum of the computed fractions of the images was considered as the valid area. Germany) were grown with doxycycline 100 mg/ml for 5 days to induce the expression of dnTCF4, as described previously [30]. Chemicals. Sorafenib, Foretinib and Gefitinb all dissolved in DMSO were obtained from LC laboratories. Emetine was obtained from Sigma and MG132 from Calbiochem, EMD Millipore. Immunoblotting and Immunoprecipitation. Cells were lysed in 50 mM Tris-HCl, pH 7.6, 150 mM NaCl, 30 mM EDTA, 0.5% Triton X-100 with complete protease inhibitor (Roche Applied Science). Immunoblotting and immunoprecipitation were performed, as described previously (5) and all the antibodies were from Cell Signaling (MA, USA), unless specified otherwise. Immunofluorescence. Cells were grown in chamber slides (Lab-Tek) and fixed and processed as described previously. Alexa 488 goat anti-rabbit and Alexa 647 goat anti-mouse (Molecular Probes, Life Technologies) were used as secondary antibodies. ImageJ was used to generate the profile and the scatter plots, as described previously [16,29]. Cellular Fractionation. Subcellular fractionation was performed using Dounce homogenization, as described previously [29]. Spheroid formation assay. 10,000 CRC cell were seeded in low adhesion plates (Corning ®) for 48 hours in complete growth medium and the spheroid colonies were counted in a blinded manner. TCF/β-Catenin-responsive Luciferase Reporter Assay. The cells seeded in 6-well plates stably expressing c-Cbl or c-Cbl-70Z were cotransduced with lentiviral particles of pBARLS or pfuBARLS. After 48 h of transfection, luciferase assays were performed using the Dual-Luciferase Kit (Promega) and normalized using protein content determined by the Bradford assay (Bio-Rad). Generation of Viral Particles. Retroviral constructs with c-Cbl overexpressing or c-Cbl or β-TrCP shRNA constructs, as described previously [16,17], were cotransfected into HEK293T cells along with packaging, envelope, and reverse transcriptase vectors using Lipofectamine 2000 (Life Technologies) per the manufacturer's instructions. 
Medium containing active viral particles collected after 48 h was centrifuged and stored at −80 °C. Lentiviral particles of TOP- and FOP-Flash were generated similarly by cotransfecting the lentiviral constructs with packaging, envelope, and reverse transcriptase vectors using Lipofectamine 2000 per the manufacturer's instructions. For viral transduction, the cells were seeded at 50-60% confluence. The cells were treated overnight with the medium containing active viral particles along with hexadimethrine bromide (Sigma), a cationic polymer, to increase the efficiency of infection. Puromycin (Sigma) selection was initiated after 24 h. The cells were harvested after four days to examine the effect on protein levels. [3H]-Thymidine incorporation assay. 5,000 CRC cells seeded in a 96-well plate were serum starved overnight. The cells were stimulated using DMEM medium containing 5% serum for 24 hours, after which they were subjected to 1 μCi of [3H]-thymidine overnight. The lysed cells were counted for radioactivity using the LabLogic 300SL Liquid Scintillation Counter.
Figure 1C. For example, average normalized nuclear β-catenin less than 0.38 was considered low. #= Average normalized c-Cbl cut-off is based on mean c-Cbl (0.74) shown by the vertical line in Figure 1C. For example, normalized average c-Cbl less than 0.74 was considered low.
for 16 hours were lysed and immunoprecipitated using β-catenin antibodies and then probed with ubiquitin antibody. Five percent of lysates were probed as inputs. A representative of two independent experiments is shown.
Supplementary Figure 3. c-Cbl regulates CRC proliferation and spheroid formation. A. c-Cbl-70Z, an E3 ligase-deficient and dominant-negative form of c-Cbl, increases proliferation of CRC cells harboring wild-type β-catenin. RKO cells stably expressing control or c-Cbl-70Z constructs were serum starved for 24 hours and stimulated with 5% FBS. A [3H]-thymidine incorporation assay was performed after 24 h. An average of 6 samples done in duplicates is shown. A Student's t-test was performed. Error bars = SEM.
Supplementary Figure 4. Direct and RTK-mediated regulation of Wnt/β-catenin by c-Cbl. IC50 of Gefitinib was determined as above and was found to be 6 μM for HCT116 cells.
2018-04-03T02:01:01.352Z
2016-09-20T00:00:00.000
{ "year": 2016, "sha1": "61ada7153448f5b80aaaf53e55b34dcfe6e95d8e", "oa_license": "CCBY", "oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=12107&path[]=38312", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5db3b75068928d69fa3b54bbae3f7e7d896c418a", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232092594
pes2o/s2orc
v3-fos-license
SWP: Microsecond Network SLOs Without Priorities The increasing use of cloud computing for latency-sensitive applications has sparked renewed interest in providing tight bounds on network tail latency. Achieving this in practice at reasonable network utilization has proved elusive, due to a combination of highly bursty application demand, faster link speeds, and heavy-tailed message sizes. While priority scheduling can be used to reduce tail latency for some traffic, this comes at a cost of much worse delay behavior for all other traffic on the network. Most operators choose to run their networks at very low average utilization, despite the added cost, and yet still suffer poor tail behavior. This paper takes a different approach. We build a system, swp, to help operators (and network designers) to understand and control tail latency without relying on priority scheduling. As network workload changes, swp is designed to give real-time advice on the network switch configurations needed to maintain tail latency objectives for each traffic class. The core of swp is an efficient model for simulating the combined effect of traffic characteristics, end-to-end congestion control, and switch scheduling on service-level objectives (SLOs), along with an optimizer that adjusts switch-level scheduling weights assigned to each class. Using simulation across a diverse set of workloads with different SLOs, we show that to meet the same SLOs as swp provides, FIFO would require 65% greater link capacity, and 79% more for scenarios with tight SLOs on bursty traffic classes. Introduction The performance of many cloud applications is becoming increasingly dominated by network tail latency [9,16,19]. Even when average case message performance is acceptable, end-to-end application performance is often limited by worst case message behavior. On today's data center networks, the latency of a typical remote procedure call (RPC) or remote memory (RDMA) operation can vary by several orders of magnitude from average case behavior, even when networks are kept at low to moderate utilization. Many application programmers have learned that they simply should not expect network communication to be fast, except on average [31]. Consistently low network tail latency has proved elusive for a number of reasons. Data center network switches use FIFO queues where interactions between traffic types can dominate tail behavior. Data center network traffic is highly bursty, meaning that average and tail load can diverge radically, even within a single traffic class. While link speeds are rapidly scaling up, with 100 Gpbs and even 400 Gbps links becoming standard [13,30], application demand is scaling up even more rapidly [26]. Faster links means that a substantial fraction of data center network traffic completes within a few round trips, reducing the portion of traffic subject to congestion control [18] and further complicating efforts to keep queues small. While priority scheduling and admission control can achieve guaranteed low latency [16,19], that only works for a small slice of well-behaved network traffic and it comes at the cost of much worse variability for the remaining traffic. Our interest is in providing tight bounds for all traffic classes-in particular, we show that priority scheduling is needlessly aggressive in many situations. 
Likewise, endpoint congestion control attempts to keep queues bounded and small to achieve better response times for very short transfers, but this often comes at a cost of worse latency variation for medium-sized transfers and a loss of throughput for long flows [6,20,23]. Our interest is in supporting tight latency bounds across traffic classes and mixtures of short and medium-length flows. We assume that the network operator splits traffic into a small number of classes, where each traffic class has a service level objective (SLO) [22]. We assume these SLOs are probabilistic rather than absolute, matching the probabilistic SLOs for network and service availability that operators already provide. At datacenter scale with hundreds of thousands of servers, deterministic SLOs are impossible except for a small fraction of datacenter traffic [16,19]. The SLOs we consider in this paper provide tight bounds, such that 99% of messages (or flowlets [27]), regardless of size, are delivered within a small integer factor of the time they would take on an unloaded network. We note that our tools are generalizable to weaker or stricter SLOs, and to different SLOs for different transfer sizes. Further, we assume the network operator continuously gathers the distribution of message lengths and interarrival times (burstiness) of messages within each traffic class. This can be accomplished through traffic sampling at the RPC, socket, virtual machine, or RDMA level, with periodic updates of traffic estimates. We assume no change to the standard socket level API-the network interface or switch does not have access to message/flowlet length (except in retrospect). Thus, we do not consider switch scheduling mechanisms that use flow length to favor short messages, such as shortest remaining flow first [23], or to implement deadline first scheduling [5,28]. We also assume widely used endpoint congestion control mechanisms, specifically DCTCP [6] and HPCC [20]. At the switch level, we assume only standard configurability of modern datacenter switches such as Broadcom's Tomahawk 4 [13] or Intel's Tofino 2-the operator can assign strict priorities or scheduling weights to each of a small number of traffic classes. A class with a normalized weight of 0.4 and with a queue of packets, for example, will have its packets scheduled at least 40% of the time. Within each class, we assume FIFO scheduling. We also explore the potential benefit of switch programmability by considering the use of a calendar queue [25] to implement fair queueing within each traffic class. We make the simplifying assumption of considering a single bottleneck at a time, leaving multiple bottlenecks for future work. With these knobs, we build swp (SLOs without priorities) to efficiently determine if target network SLOs can be met given estimated load, burstiness, and flow length distribution for each traffic class. If SLOs can be met, swp provides switch configuration weights for each traffic class. swp can also be used prospectively, to evaluate the feasibility of target SLOs given potential future traffic changes, e.g., due to the rollout of a new application, an anticipated spike in traffic, or a prospective change in endpoint congestion control policy. swp provides two benefits. First, instead of giving priority to whichever traffic class has the tightest deadlines, we allow the scheduling weight for each class to be the minimum necessary to meet its SLO given its burstiness, message size distribution, and utilization. 
The more bursty a traffic class, and the tighter its SLOs, the greater headroom is needed above and beyond its average utilization, in order to meet its SLOs. In many circumstances, a set of scheduling weights can meet the SLOs of each class, where strict priorities, or endpoint congestion control alone, would not be able to. If all classes can meet their deadlines, excess capacity is distributed to (approximately) minimize the chance of an SLO violation. Second, if the bursts of traffic in one class are uncorrelated with the traffic in other classes, we can overcommit weights relative to what each class would need if it was running in isolation. Since each class does not require its entire headroom all the time, a link multiplexed between multiple traffic classes can statistically support a higher load than would be possible otherwise. One can think of this as the equivalent of the use of slack in deadline scheduling, but computed on traffic aggregates. This extra capacity is not completely free: if a particular traffic class exceeds its expected utilization or burstiness, it can cause missed deadlines for other traffic classes. A key barrier to swp is the efficient computation of tail latency behavior given a particular switch configuration and message arrival pattern. The state of the art would be to simulate (or directly observe) the queueing behavior in detail for a sufficiently long sample to gain statistical reliability, repeated for each possible switch configuration. The inner loop of that calculation is gated by the operational behavior of the congestion control mechanism-the queue length at each instant in time, what packets are marked (in DCTCP [6]) or congestion information returned (in HPCC [20]), when that information would reach the endpoint, how the endpoint would react, etc. Instead, for our setting, we only need sufficient accuracy to predict tail latency SLOs. We create a high-level abstract model of each endpoint congestion control algorithm, where the control loop operates at a time lag but with perfect information about the remote queue. This simplified model speeds up execution time by 50-80× relative to ns3. We calibrate the models (one each for HPCC and DCTCP) using ns3 simulations, and we show that the resulting models are accurate enough to provide a basis for computing switch configurations to meet probabilistic tail latency SLOs. We evaluate the robustness of swp in simulation. We sample randomly among plausible scenarios of three and five traffic classes, with varying utilization, burstiness, traffic size distribution, and SLO tightness. We use swp to determine the optimal configuration to meet the SLO for each scenario with the least aggregate bandwidth. We then repeat the same scenario assuming a single FIFO queue, multiple FIFO queues (one per traffic class) with weights assigned by swp, and an idealized hierarchical fair queuing scheduler with per traffic class weights assigned by swp. Averaged across all five-class scenarios, FIFO requires 65% more link capacity to accomplish the same SLOs as swp. This benefit increases to 79% for more challenging scenarios where at least one traffic class has a tighter SLO and is relatively bursty. Using swp with fair queueing gains another factor of two in link capacity on average across all scenarios, while still meeting SLOs. Background and Motivation In this section, we consider the limitations in the use of priorities and traffic shaping to achieving tight service level objectives for multiple classes of traffic. 
We begin by defining terms. Our goal is to provide tight, probabilistic bounds on tail latency for latency-sensitive traffic in data center networks. To distinguish connections that may be reused, we consider each message separately, e.g., each remote procedure call (RPC), remote memory operation (RDMA), or independent data transfer, where latency is measured as the time to complete the transfer. We define message latency as the time to complete a message transfer, including transmission, propagation, and queueing delay, from when the first packet is available to be sent until the last packet arrives at the destination. In particular, we include in the latency any queueing at the end host queue needed for traffic shaping or congestion control. We define message slowdown as the message latency divided by the minimum latency on an unloaded network. For example, in a network with a round trip propagation delay of 10 µs and 100 Gbps links, the minimum latency for a 125KB transfer would be 20 µs. We can also define tail slowdown behavior separately for different message sizes to prevent swp from optimizing for small messages at the expense of medium-sized or long messages. The specification of the tail probability bound on message slowdown is configurable in swp, but our aim is to provide bounds that are tight enough for application developers to largely ignore tail effects. Thus, we focus in this paper on tail message slowdowns, across different message sizes, of a small integer multiple of the best case behavior. It is impossible to provide bounds on the slowdown for any shared resource without some characterization or bound on the arrival process of requests. Mogul and Wilkes call this Customer Behavior Expectations (CBE) [22]. We assume only a probabilistic characterization, provided by an ongoing measurement of application network usage. Some prior attempts at providing network quality of service, such as IntServ [34], assume users provide hard limits on their traffic demands which can be guaranteed (or denied) at runtime depending on current traffic conditions. For many cloud applications, however, network traffic demand is a dynamic property at varying time scales, resistant to deterministic limits. At any point a flash crowd may appear, and the system should be configured to handle these within its promised probabilistic performance envelope. We assume traffic is inherently bursty, with traffic measurement conducted on a long enough interval to allow us to construct a model of the traffic behavior. For each traffic class, we assume a sampled measurement process of the distribution of message sizes and message interarrival distribution to characterize traffic from that class. Following the terminology in Mogul and Wilkes [22], a Service Level Objective (SLO) lets a provider describe in precise terms the quality of service it aims to give its users. By writing an SLO, an operator codifies the properties that can be relied on, guarding each party against potentially mismatched expectations. A building block for SLOs is the service level indicator (SLI), specifying some metric of interest, such as the tail latency for small requests or average throughput for larger requests. For a particular class of traffic, the SLO specifies a bound for the relevant SLI and can combine bounds on different SLIs in a conjunction. For example, we can specify that all memcache traffic, regardless of message size, has a tail slowdown of no more than three, at least 99% of the time.
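The latency and slowdown definitions translate directly into a calculation. The sketch below reproduces the 125 KB example (10 µs round trip, 100 Gbps link) and shows how a 99th-percentile slowdown SLI could be computed from measured message latencies; the function names and sampling format are illustrative and are not part of swp.

import numpy as np

def min_latency_s(size_bytes, link_bps, rtt_s):
    # unloaded latency: propagation (approximated here by the round trip) plus transmission
    return rtt_s + size_bytes * 8 / link_bps

def slowdown(measured_latency_s, size_bytes, link_bps=100e9, rtt_s=10e-6):
    return measured_latency_s / min_latency_s(size_bytes, link_bps, rtt_s)

# Worked example from the text: a 125 KB transfer on an unloaded network
print(min_latency_s(125_000, 100e9, 10e-6))    # -> 2e-05 seconds, i.e. 20 microseconds

def p99_slowdown(samples):
    """99th-percentile slowdown SLI over (size_bytes, latency_s) samples."""
    return np.percentile([slowdown(lat, size) for size, lat in samples], 99)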
swp provides a small specification language for the network operator to specify its SLIs and SLOs. swp determines whether the SLO can be met given competing classes of traffic and network capacity. Limits of Priority Scheduling To motivate the approach taken by swp, we develop a simple experiment to characterize the limitations of priority scheduling for providing feasible SLOs for non-priority traffic. Our evaluation of swp explores the parameter space more fully. We start with a large number of servers connected by 100 Gbps links and a round-trip delay of 10 µs. We focus on a single bottleneck, located near the destination, with traffic split between foreground (higher priority) and background (lower priority) traffic. Message sizes for both foreground and background traffic are drawn from the Homa W3 distribution [23], taken as a sample of all messages in a Google data center. We assume the two traffic classes are independent of each other, but within each traffic class, flows have log-normal interarrival time distributions with a shape parameter of two. Further, we assume HPCC congestion control with an initial window size of the bandwidth-delay product [20]. For this experiment, we focus on 99% tail message slowdown for messages less than the bandwidth-delay product. In this workload, these account for the large majority of requests but less than half of the total bytes transferred. To test how well each configuration insulates foreground traffic from background traffic, we fix the foreground utilization at 10% and vary the background utilization from 20% to 80%. We choose a target slowdown of 2.5× for the SLO, that is, a tail latency for 125 KB messages of 50 µs. Fig. 1 compares the 99% tail message slowdowns for four configurations. First, we pair endpoint congestion control with a FIFO queue at the switch, shared between both foreground and background traffic (Fig. 1a). As expected, using a shared queue means that background traffic can interfere with foreground tail latency. At low load, both foreground and background traffic can achieve the target SLO. As background load increases, endpoint congestion control limits the effect of the background traffic on the foreground traffic, but at high enough loads, the unconstrained portion of the traffic (within the initial window) can impact foreground tail delay. Note that the tail latency for foreground and background traffic differ. This is because the inter-arrival distribution is heavy-tailed and hence bursty within its own traffic class, but uncorrelated across traffic classes. Conditional on a small background message arriving at the switch, it is more likely that other background flows will already be queued at the switch; this is the definition of heavy-tailed behavior. Foreground traffic is more likely to encounter other foreground flows; however, the foreground load is lower so that occurs less often. Next, we add strict priority scheduling at the switch, where foreground traffic always takes precedence over background traffic (Fig. 1b). This achieves the SLO for the foreground traffic regardless of the background traffic intensity, but only at the cost of much higher small message response time for background traffic. Note that the y-axis is rescaled to show the effect. Because the foreground traffic takes priority, if the background traffic arrives during a burst of foreground traffic, it will experience head-of-line blocking-no progress until the burst is cleared. 
This has a measurable effect on background tail latency, even though the foreground traffic uses only 10% of the link in aggregate. Although not shown because of the y-axis rescaling, the foreground traffic achieves its SLO with considerable room to spare; in this scenario, prioritization is overly conservative, needlessly harming background tail latency.
Figure 1. Tail (99%) message slowdown for the Homa W3 workload [23]. Foreground traffic is set to 10% of the link bandwidth, with variable background traffic. The dotted horizontal line shows a 2.5× SLO, and the y-axis scale varies between graphs. Shaded regions represent the 95% confidence interval. We compare endpoint congestion control (CC) and a FIFO queue at the switch; CC and strict priority queueing (PQ); CC and per-traffic class (TC) scheduling weights at the switch, tuned to meet the foreground SLO with 80% background traffic; and scheduling weights with per-traffic class fair queueing among queued flows. While priority scheduling can provide latency guarantees for foreground traffic, it leaves the tail latency of background traffic higher than necessary.
Figure 2. When a workload is bursty, leaky bucket parameters must be tuned to preserve low tail delays at the host. However, this will in turn increase the tail of downstream queue lengths (as seen by an arriving packet).
The third graph in the figure (Fig. 1c) considers the impact of traffic class weights on SLOs. In this case, the switch has separate FIFO queues for each traffic class, and schedules among each queue according to its weight if both queues are occupied. When only one queue is occupied, that queue is scheduled. To our knowledge, there are no well-established guidelines for how to set traffic weights. Instead, data center operators act by trial and error, adjusting weights to meet a specific SLO in a specific situation. We model this by setting the foreground scheduling weight so that it meets its SLO tail latency target even for the highest level of background traffic (80% load), with a small margin of error. A single weight is able to insulate the foreground tail latency across the entire range of background traffic intensity, unlike CC+FIFO. Likewise, although the background traffic is unable to meet the target SLO, it experiences much better tail latency than with priority scheduling. We could do even better for reducing background tail latency if we were to know in advance the average traffic intensity of the background traffic. At low background utilization, the traffic weights chosen above are needlessly strict. With less competing background traffic, we can afford to give the foreground traffic less weight (less headroom above its traffic demand) and still meet its SLOs. This is because most of the time the foreground traffic will arrive at the switch to find the background queue empty, improving its overall performance. This insight, that we should set weights given knowledge of the foreground and background traffic characteristics, lies at the core of the design of swp. Finally, we consider the case of a programmable switch capable of implementing fair queueing [14] through the use of calendar queues [25]. We assume a hierarchical setting, where weights are used to choose among traffic classes when both have traffic present. Within each traffic class, separate calendar queues are used to implement fair queueing among flows with traffic queued within that class.
Fair queueing allows better isolation between competing flows within the same traffic class, by eliminating head-of-line blocking. With a diversity of message sizes, with FIFO queueing a short message can be delayed behind packets of a longer flow. A fair queued system allows the short message to be scheduled earlier. Recall that we give the foreground traffic just enough weight to meet the SLO in the presence of background traffic at 80% load, with the remaining capacity given to the background traffic. With fair queueing, the foreground traffic requires so little weight to meet the SLO that the background traffic even outperforms it (Fig. 1d). In this case the SLO could be further tightened without incurring additional SLO violations. In §3.3, we describe a more sophisticated and more general optimizer that can find weights to meet distinct SLOs for multiple traffic classes. Deterministic Latency Bounds Prior work in bounding network tail latency has used endpoint traffic shaping and worst case analysis to derive deterministic guarantees. These guarantees are, in a sense, stronger than what is provided by swp, in that they provide bounds on worst case behavior for queueing even at the tail, as long as packets are not corrupted in flight. On the other hand, particularly for large scale systems with bursty traffic, the bounds are substantially looser than what swp can provide. For example, QJump [16] and Silo [19] apply a send-side leaky bucket filter to constrain the worst case load in the network. Even if all nodes send a burst at exactly the same time, the leaky bucket will constrain the worst case queueing at the bottleneck to, roughly, the bucket size times the number of senders, provided that the switch is configured to give priority to these latency-sensitive packets. This follows from a classic result due to Parekh and Gallager: if 1) admission to a network is governed by leaky bucket, 2) the network uses fair queueing, and 3) the arrival rates are constrained to ensure stability, then the delay in the network can be bounded [24]. There are two important limitations that led us to use a probabilistic, rather than a deterministic, model. First, most datacenter network traffic is highly bursty [10]. When source bursts can exceed the bucket size, a leaky bucket mechanism will impose an additional queueing delay at the source, to prevent bursts from one node from compromising the SLO's provided to another. Second, the latency bound scales in proportion to the product of the bucket size and the number of senders. As the number of possible senders increases, the allowable burst must decrease proportionately to keep the SLO constant. For example, for a small data center with 1000 servers connected by 100Gbps links and a bucket size of 10KB buffer per server, worst case latency is nearly a millisecond. This places the network designer in a bind. Tighten the bucket size, and more delay is experienced at the source; loosen the bucket size, and the worst case network queueing goes up. We illustrate this with a simple experiment (Fig. 2). First, we consider the send side queue. In Fig. 2a, we assume exponentially distributed message sizes, with an arrival rate 5% smaller than the leaky bucket rate . We set the bucket size to yield reasonable send side tail latency with Poisson arrivals, and then consider what happens when we shift to moderately bursty arrivals (log-normal with a shape parameter of 1.5). 
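The send-side shaping experiment described here is easy to reproduce in a few lines. The sketch below models a token-bucket (leaky bucket) shaper in front of a FIFO of messages and records how long each message waits at the source; the parameter names are illustrative and this is not the simulator used in the paper.

import math, random

def shaper_delays(n_msgs, rate_Bps, bucket_bytes, mean_size=10_000, sigma=None, seed=0):
    """Send-side queueing delay behind a token-bucket shaper (illustrative)."""
    rng = random.Random(seed)
    mean_gap = mean_size / (0.95 * rate_Bps)            # offered load 5% below the shaper rate
    arrive = prev_depart = bucket_time = 0.0
    tokens = bucket_bytes
    delays = []
    for _ in range(n_msgs):
        gap = (rng.expovariate(1 / mean_gap) if sigma is None else
               rng.lognormvariate(math.log(mean_gap) - sigma ** 2 / 2, sigma))
        arrive += gap
        size = rng.expovariate(1 / mean_size)            # exponential message sizes
        start = max(arrive, prev_depart)                 # FIFO behind earlier messages
        tokens = min(bucket_bytes, tokens + (start - bucket_time) * rate_Bps)
        bucket_time = start
        if tokens >= size:
            depart = start
            tokens -= size
        else:
            depart = start + (size - tokens) / rate_Bps  # wait for tokens to accumulate
            tokens, bucket_time = 0.0, depart
        delays.append(depart - arrive)
        prev_depart = depart
    return delays

# e.g. compare sigma=None (Poisson) against sigma=1.5 at a 100 Gbps (12.5 GB/s)
# shaper with a 10 KB bucket and inspect the tail of the returned delays.

With Poisson arrivals the tail of the source delay stays small for a bucket sized as in the text, while a log-normal shape parameter of 1.5 inflates it markedly, which is the effect the text describes for Fig. 2a.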
Tail delays with the more realistic log-normal distribution are an order of magnitude higher than would be predicted under the analytically tractable model. To compensate for burstier traffic, we can increase the bucket size, as shown in Fig. 2b-source queueing delay decreases. Unfortunately, this only trades queueing at the host for queueing further downstream. Fig. 2c shows the distributions of downstream queue lengths at different bucket sizes. For each distribution, the median, the quartiles, and a rotated kernel density estimation is drawn. We see that as the bucket size increases, so too does the tail of downstream queueing. The swp Methodology Our goal with swp is to build a tool to aid network operators with configuring their network switches to achieve tail latency SLOs for multiple classes of traffic. Specifically, swp 1. allows users to specify a network configuration, a set of traffic classes, and their SLOs, 2. automatically finds switch weights for meeting those SLOs (if possible), and 3. provides descriptive answers to what-if style questions about SLO behavior. We begin with a brief overview of the swp workflow ( §3.1). Then, we show how to parameterize the network model and specify a traffic class's SLOs ( §3.2). Lastly, we discuss how simulation outputs are used as input to a downstream optimizer, and we describe the one we use for automatically finding switch weights ( §3.3). The network simulator is described in detail in §4. For each traffic class, the user provides information about its flow sizes and interarrival times. These can be in the form of a trace, a measured cumulative probability distribution, or a generator function. Once the workloads have been characterized, the next steps are to specify each traffic class's SLIs and SLOs ( §2) and describe the network used by the simulator in §4. We use Dhall [3] as the configuration language because it allows for typed and modular configurations, and it can be converted to widely used formats like JSON and YAML. Specifications can be fed to either the simulator itself through the standard front end or to the weight optimizer. If the simulator is invoked directly through the front end, it will simply run the classes together according to the specification and return whether the SLOs are met or violated, and why. Alternatively, the optimizer will run the simulator many times in search of a suitable weight allocation. Fig. 4 shows an example specification for a network configuration, traffic classes, SLIs, and SLOs. First, we provide the link capacity and the round-trip time (lines 3-4), followed by the queueing discipline used at the bottleneck (line 6). The congestion control protocol is imported from dctcp.dhall (reproduced in lines [12][13][14][15][16][17][18]. The meanings of the congestion control parameters are described in more detail in the next section. Likewise, the traffic classes Foo and Bar are imported from foo.dhall and bar.dhall, respectively. Specification We define a traffic class Foo on line 21. Foo's message size distribution is given by the cumulative distribution in websearch.txt, as described in Homa [23]. For interarrival times, we approximate them using a log-normal distribution with mean equal to 11.3 and shape parameter equal to 2.0. Together, the mean interarrival time and message size specify the average bandwidth required for this traffic class; the interarrival and message distribution, along with the congestion control protocol, control the burstiness of traffic at the bottleneck link. 
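As a small illustration of how the mean interarrival time and mean message size pin down a class's average bandwidth, the helper below computes offered load as a fraction of link capacity; the numbers in the example are hypothetical and are not taken from the Foo class.

def offered_load_fraction(mean_msg_bytes, mean_interarrival_s, link_bps):
    """Average bandwidth a class demands, as a fraction of link capacity."""
    return (mean_msg_bytes * 8 / mean_interarrival_s) / link_bps

# hypothetical example: 100 KB average messages arriving every 80 us on average
# consume (100e3 * 8 / 80e-6) / 100e9 = 10% of a 100 Gbps link
print(offered_load_fraction(100e3, 80e-6, 100e9))   # -> 0.1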
To evaluate this traffic class's performance, we select the 99th percentile slowdown as our SLI. We can also filter over flow size ranges in the SLI definition, but this is omitted from this example for brevity. The last step for Foo is to form an SLO by writing a logical predicate over the SLI (line 33). Here, we say that the P99 slowdown should be less than 3×. We could also combine different SLIs with different thresholds, e.g., so that small messages can have a tighter bound than longer transfers. The example includes a second traffic class Bar, defined in much the same way, but with independent choices for traffic size distribution, interarrival burstiness, SLIs, and SLOs. Once the specification is written, the user can pass it to swp to ask it to make SLO predictions under simulation. The front end will: 1. Read and validate the specification. 2. Use the specification to build and run a simulation, which will terminate with a set of statistics. 3. Digest the statistics to populate the SLIs. 4. Predict whether or not each SLO will be met. A key contribution is to make simulations fast enough that a large space of configurations and SLOs can be quickly explored. Upon completion, swp will produce a set of predictions. For each traffic class, it predicts a value for each SLI as well as a final prediction for the SLO. A slightly modified version of the specification can also be passed to the weight optimizer, which we describe next. A Weight Optimizer The final component of swp is an optimizer that hooks into the simulator's Rust API and searches for switch weights that can meet the SLOs of a set of traffic classes. To start, each traffic class defines a loss function that takes the simulation output and quantifies the distance between the observed SLI value and its target SLO threshold. For the purpose of tail latency SLOs, we assume the threshold is an upper bound, so if the value exceeds the threshold, then the class requires more capacity, but if the SLI is under the threshold, then the class potentially has extra slack that can be given to other classes. To compute loss, we simply take the observed SLI value minus its SLO threshold. Here, a negative loss indicates the SLO is met, and a positive loss indicates the SLO is missed. To find a starting point for the search process, we define a baseline weight allocation for each class: the minimum normalized switch weight a class requires to meet its SLO assuming worst case behavior by all other competing traffic classes, that is, that all other traffic classes always have a packet queued at the bottleneck switch. We use a binary search for this; each class's baselines are found in parallel. Once the baselines are found, we normalize them such that they sum to one, and then we enter the optimization loop. At each step of the loop, we first compute the losses by running the classes together in simulation and then sorting their losses in ascending order. Then in an inner loop we iterate through the losses from both ends simultaneously, with the minimum loss class paired with the maximum loss class, and so on. If class i is paired with class j, and if i met its SLO while j did not, then we simply transfer weight from i to j in proportion to j's loss. The optimization succeeds when all losses are negative, and it fails when either 1) no loss is negative or 2) the optimization loop times out.
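The loop just described can be rendered compactly. The sketch below is a Python paraphrase of the search (the paper's own pseudocode is Alg. 1, and the real optimizer drives the simulator's Rust API); the class attributes, the simulate callback, and the step size are invented placeholders rather than swp's interface.

# Sketch of the weight-search loop described above (cf. Alg. 1); not swp's actual API.
def optimize_weights(classes, simulate, max_iters=100, step=0.05):
    # start from normalized per-class baselines (found separately by binary search)
    weights = {c.name: c.baseline for c in classes}
    total = sum(weights.values())
    weights = {k: v / total for k, v in weights.items()}

    for _ in range(max_iters):
        slis = simulate(weights)                       # run all classes together
        losses = {c.name: slis[c.name] - c.slo_threshold for c in classes}
        if all(l < 0 for l in losses.values()):
            return weights                             # every SLO met
        if min(losses.values()) > 0:
            return None                                # no class has slack to give

        ordered = sorted(losses, key=losses.get)       # ascending loss
        for donor, receiver in zip(ordered, reversed(ordered)):
            if losses[donor] >= 0 or losses[receiver] <= 0:
                break                                  # no more useful pairs
            delta = min(step * losses[receiver], weights[donor])
            weights[donor] -= delta                    # take from a class with slack
            weights[receiver] += delta                 # give to a class missing its SLO
    return None                                        # timed out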
Pseudocode for the optimization loop is shown in Alg. 1. A Network Model Because it runs as the inner loop of swp, our network model is designed to be as simple as possible while still making accurate predictions about the aggregate tail SLO behavior seen in a network. Like any model, it will necessarily diverge from reality. Our goal, however, is not to represent the network stack with full fidelity but to isolate and represent the aspects are most significant for whether tail SLOs are met by a particular configuration. The need for something that is simple and fast is motivated in part by our experience with ns-3 [2]-a thorough and widely-used network simulator-whose attention to detail lends it high fidelity, but whose explicit modeling of protocol mechanisms prevents rapid answers to what-if questions. ns-3 simulates every packet arrival and departure at every link, queue, and host, along with host timeouts and the full host network protocol stack, with multiple events to send a packet through transport, network, and link layers. Particularly for establishing confidence intervals in tail behavior with bursty workloads, this level of detail can make simulations take hours rather than minutes. Using ns-3 would also place a limit on how quickly swp could react to measured changes in workload. We believe that a model of the network can be useful even if it ignores most of the above details, especially if it is only used to reason about aggregate statistical behavior. The central challenge, however, is defining one that adequately balances simplicity against fidelity. One of our contributions is a functional, rather than an operational, model of the network to try to better strike this balance. Overview Fig. 5 depicts the network model. All links have capacity , and there is a single bottleneck with an infinite queue, following some queueing discipline. Flows are generated at the edge with flow sizes and interarrival times drawn from distributions and . Upon arrival, each flow begins sending data at some rate governed by a model of congestion control (described in the next section). The delay between sources at the edge and the bottleneck is (giving a round-trip time of 2 ). Restricting the model to include only a single bottleneck queue in this manner is a simplification, but meeting SLOs in even this simple case is nontrivial. Moreover, where congestion is caused by incast or fan-in, there is only one bottleneck, and Google has previously reported that last hop congestion is the most common in datacenters [26]. In Section §5.1, we compare predictions made by simulating against this model to those made by ns-3 on a single bottleneck link. Congestion Control While usually complex, congestion control algorithms often have a functional description that can be succinctly summarized. For example, a particular algorithm will react to congestion signals, choose a balance between utilization and queueing, and converge to a target bandwidth allocation after some number of steps. Our goal is to design a model that captures these characteristics. Throughout, we use the notation + to mean max(0, ). First, for any given flow, its ideal sending rate at time is denoted ( ), and it is given by where ( ) is an update rule which will be described shortly. Recall that is the one-way delay between the sources and the bottleneck. Upon arrival, a new flow begins sending immediately at some initial rate init , and it will continue sending at this rate until it receives congestion control feedback at time 2 . 
Many modern datacenter protocols, such as DCQCN [35] and HPCC [20], set init to the line rate . For protocols with a smaller init value, we assume the initial window is paced [4] rather than ack clocked. We model the feedback delay explicitly because it has important implications on the number of uncontrolled bytes in the network. As link speeds increase and flow size distributions skew smaller, a greater fraction of traffic is transmitted in an uncontrolled manner [15,18]. Update rule. The update rule ( ) defines the core of the control loop, and it applies to a flow as soon as the flow first receives feedback. At this time, we say the flow is controlled. Likewise if a flow has begun sending but has not yet received feedback, we say it is uncontrolled. Given this terminology, a high-level and idealized description of the update rule is rate = total capacity−uncontrolled rate−queue drain # of controlled flows , where "uncontrolled rate" refers to the total sending rate across all uncontrolled flows. This rule follows our intuition that the rate at which a controlled flow should send at any given time depends on how much residual capacity there is at the bottleneck-and that, in turn, depends on 1) how much uncontrolled traffic there is and 2) how much queueing has accumulated at the switch. For simplicity we do not model differences in fairness, so the residual capacity is simply divided evenly among all controlled flows. We build on this idealized version by introducing delays in signal propagation as well as parameters for approximating different congestion control algorithms. We begin by defining the uncontrolled rate more precisely. Let be an arbitrary flow, be its initial size, and a, be its arrival time. The uncontrolled rate contributed by this flow at time , called u, ( ), is given by In other words, each flow will contribute up to init · 2 uncontrolled bytes upon arrival, paced out over 2 time. The total uncontrolled rate at time is then just the sum over the individual flows, Next, the number of controlled flows at time is simply the number of flows that have already sent init ·2 bytes and are still sending at time : ( ) = { | − a > 2 and has unsent bytes} . (5) And finally, let ( ) be the number of bytes queued at the bottleneck at time . We are now ready to state the update rule ( ). Recalling that is the total link capacity, the update rule for each controlled flow is where , , and are parameters to the congestion control model. Different settings will approximate different congestion control algorithms. The parameter is the target utilization, and is always zero or one, controlling whether or not the model reacts to the uncontrolled rate. These parameters are meant to model an algorithm like HPCC [20], which 1) tries to keep near-zero queues by intentionally underutilizing links, and 2) can detect congestion without waiting for queueing to occur. On the other hand, algorithms like DCTCP [6], DCQCN [35], and TIMELY [21] can only detect congestion after a queue builds up. In this case, would be set to zero, and the threshold at which the model reacts to queueing is controlled by . We also note the time delays on u , , and . These reflect our intuition about which signals are collected at the sources ( −2 ), and which signals are collected at the switch ( − ). For example, a new flow that arrives cannot have its uncontrolled rate affect the rates assigned to other flows until 2 after its arrival. 
However, since queueing is measured at the bottleneck, the buildup of a queue can be indicated to sources after a one-way delay. Convergence speed. Recall that the quantity defined in (2) is the ideal sending rate of a flow at a given time, as determined by the parameters and the model. The actual rate assigned to flows will converge to this ideal rate on a certain timescale dictated by the convergence speed of the congestion control algorithm. The convergence timescale is controlled by a smoothing parameter. The resulting differential equation is the continuous-time equivalent of applying an exponentially weighted moving average (first-order low pass filter) to the ideal rate to derive the actual rate; a smaller smoothing parameter will result in slower convergence. A full listing of the model's congestion control parameters is given in Table 1. Multiple traffic classes. The above model assumes there is only one class of traffic, but algorithms like DCTCP can be adapted to the case where there are multiple traffic classes, and each class is associated with a fixed scheduling weight at the switch. Here, we describe how to generalize the model to cover this scenario. To state our goal concretely, suppose there are several classes, and suppose one of them, with a given weight, is continuously backlogged on some time interval. Then in the congestion control model, we want the capacity that is available to that class over the interval to be at least its weighted share of the link. Moreover, if a class is not using the link, its capacity should be divided among all active classes in proportion to their weights. First, we restrict the congestion signals (the uncontrolled rate, the number of controlled flows, and the queue length) to only include information about a particular class; for example, the class-specific uncontrolled rate would be the total rate of uncontrolled traffic due to that class. Now we define the model's notion of an active class. We say a class is active at a given time when at least one of two conditions holds. For convenience, we define a predicate that is true whenever this criterion is met for a class. We will also define a function which maps a set of class indices to the sum of the classes' weights. With these definitions, we can write the bottleneck capacity available to a class at any time as the link capacity multiplied by the class's weight and divided by the summed weights of the currently active classes. Unlike in the original definition in (6), we now have a link capacity that is time varying. Substituting this, as well as the class-specific signals from above, we can write the update rule for each class. Aside from the update rule, everything in the multi-class case works in the same way as before.
Table 3. swp's simplified model allows a simulator to simulate 50,000 flows over 50× faster than ns-3 in a comparable scenario. Here, ns-3 is running DCTCP and swp is running SWP-D. Running times are shown in minutes.
Queueing Discipline We consider several standard queueing disciplines: first-in first-out (FIFO), priority queueing (PQ), round robin (RR), deficit round robin (DRR), as well as common hierarchical and weighted variants. Some of these policies had long been considered impractical to implement on switches, but recent work has shown how the mechanisms on newer programmable switches can be used to approximate policies like weighted fair queueing [25]. We include a variety of policies in our model to observe their impact on SLOs. Evaluating the Network Model Before using swp to predict SLOs, we first evaluate our network model by comparing its predictions to the output of ns-3 simulating a single bottleneck link.
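Because the original symbols in this passage did not survive extraction, the following sketch gives one plausible, explicitly reconstructed reading of the single-class update rule and the smoothing step. The parameter names (eta, beta, q_thresh, tau) are invented for illustration and this is not swp's code.

# One plausible reading of the rate-update and smoothing rules described above.
def ideal_rate(C, n_controlled, uncontrolled_rate, queue_bytes, delta,
               eta=0.95, beta=1.0, q_thresh=0.0):
    """Per-flow target rate for controlled flows (all names are assumptions).

    C                 link capacity in bits/s
    n_controlled      number of controlled flows
    uncontrolled_rate aggregate rate of flows still in their first RTT, observed 2*delta ago
    queue_bytes       bottleneck queue occupancy observed delta ago
    eta               target utilization (HPCC-like: below 1; DCTCP-like: 1)
    beta              1 to react to uncontrolled traffic (HPCC-like), else 0
    q_thresh          queue level in bytes below which queueing is ignored
    """
    if n_controlled == 0:
        return 0.0
    drain = max(0.0, queue_bytes - q_thresh) * 8 / delta   # rate needed to drain excess queue
    return max(0.0, (eta * C - beta * uncontrolled_rate - drain) / n_controlled)

def smoothed(prev_rate, target_rate, dt, tau):
    """First-order low-pass (EWMA) convergence of the assigned rate toward the ideal rate."""
    alpha = dt / (tau + dt)
    return prev_rate + alpha * (target_rate - prev_rate)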
For a higher degree of confidence, we try different flow size distributions at different load levels using both DCTCP and HPCC, and we analyze tail latencies across fine-grained flow size bins. The goal of the model is not to report accurate predictions for per-flow statistics, but rather to capture information about aggregate statistics like the averages and tails of flow completion times (FCTs). This is what allows us to distill complex behavior into a simple model and achieve quicker answers to the questions we are trying to ask. Specifically, we evaluate the model along three axes: 1. How well does the model predict the tail FCT slowdown of short flows? 2. How well does the model predict the average FCT slowdown of long flows? 3. How much faster is the model when compared to ns-3? For short flows, we believe the model should accurately predict tail slowdowns, since those will depend largely on the aggregate behavior of the congestion control model defined in §4.2. However, since we explicitly do not model differences in fairness among long flows, we do not always expect accurate tail predictions for them. For these flows, we instead evaluate the model's ability to predict accurate average slowdowns. This reflects common practice, since short messages often require low, predictable latency while long ones should achieve acceptable throughput. In these experiments, we use a 100 Gbps bottleneck link and a 10 µs round trip time (RTT). We configure ns-3 with multiple sources sending to the same destination through a bottleneck, similarly to the model shown in Fig. 5. We use two publicly available flow size distributions: one is a web search application [6], and the other is an aggregated workload from a Google datacenter [23]. The workloads are run with a bursty log-normal interarrival time distribution (shape parameter = 2) at two different load levels, 30% and 60%. We also run them with both DCTCP and HPCC. Fig. 6 shows the tail FCT slowdown predictions produced by the model across these configurations and across equallysized flow size bins. For short flows, the model is able to accurately predict the slowdowns at the 99th percentile, and at 60% utilization it can even reasonably predict the tail slowdown of long flows. At low utilization, the tail predictions for long flows are less accurate because 1) the model does not model short-term unfairness for long flows and 2) at low utilization events have higher variance, and this increases the tail variance of long flows. The average FCT slowdown for long flows, however, remains accurate across load levels. This is shown in Fig. 7. Lastly, the model arrives at these predictions up to 80× faster than does ns-3. Table 3 shows the running time of ns-3 running DCTCP compared against swp's model, with measurements taken on an Intel Xeon E5-2680 CPU. The swp simulator simulates 50,000 flows up to 81× faster than does ns-3. Running times for HPCC and its corresponding swp model are similar. Evaluating Network Configurations We next use the optimizer described in §3.3 to evaluate swp's ability to identify switch configurations that meet target SLOs. Whether a particular configuration can be satisfied by swp, a FIFO queue, or both is a coarse-grained metric. Instead, we generate a randomly generated set of scenarios, and consider the minimum total bandwidth at the bottleneck that is sufficient to meet the combined SLO for that configuration. 
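The per-configuration search for the smallest sufficient link capacity can be done with a plain binary search over capacity, as sketched below; meets_slos is a placeholder for one end-to-end simulation run at a candidate capacity, and the bounds and tolerance are illustrative.

# Find the smallest link capacity at which a given scheduling policy meets every SLO.
def min_capacity(meets_slos, lo_gbps=10.0, hi_gbps=1000.0, tol_gbps=1.0):
    if not meets_slos(hi_gbps):
        return None                       # infeasible even at the upper bound
    while hi_gbps - lo_gbps > tol_gbps:
        mid = (lo_gbps + hi_gbps) / 2
        if meets_slos(mid):
            hi_gbps = mid                 # feasible: try a smaller link
        else:
            lo_gbps = mid                 # infeasible: need more capacity
    return hi_gbps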
Lower required bandwidth for the same scenario is better: it implies that the same SLOs can be met with higher link utilization on a fixed bandwidth link. We do not consider priority scheduling in this experiment, as that would require extremely high link capacities to meet the target SLOs, requiring very long runtimes to reach convergence at the tail.
Figure 8. swp finds weights that can meet a set of SLOs using significantly less bandwidth than would be required with pure FIFO queueing. The advantage of weights over FIFO is yet more pronounced with tighter SLOs and higher application burstiness.
Fig. 8 and Fig. 9. The Google flow size distribution is the same as the one in §5.1, and the Facebook and Alibaba distributions are taken from the public repository of HPCC [1]. Burstiness is the shape parameter for the log-normal interarrival time distribution.
Figure 10. For each one, we track the minimum SLO threshold, the maximum burstiness, and the maximum mean rate. Here we see that the tighter the SLO threshold, the higher the inflation on average.
We compare shared FIFO, per-class scheduling weights chosen by swp with FIFO within each class, and the same with fair queueing within each class. We consider scenarios with three simultaneous traffic classes, and to cover a wide range of cases, we sample traffic class characteristics randomly. For each class, we uniformly select at random a flow size distribution from three measured datacenter workloads, a burstiness (log-normal shape parameter) between 1 and 2, a mean sending rate between 5 and 10 Gbps, and an SLO threshold for 99% tail latency slowdown between 3× and 8×. We divide transfers into those that are smaller than the bandwidth-delay product for this network (125KB) and those that are larger. The tail latency slowdown is enforced against both sets independently, but with twice the slowdown threshold for larger flows. This is to allow solutions that favor small flows, without unduly starving medium-sized flows. Table 4 summarizes the sample space from which the random values are drawn. We use a 10 µs round trip time, and where congestion control applies, we use the model of DCTCP. For each of 150 randomly chosen scenarios, we use swp to search for the minimum link capacity required to simultaneously meet all three SLOs for both small and large flows, separately for per-class FIFO and an idealized version of per-class fair queueing. We also use binary search to identify the minimum link capacity needed to meet the SLOs with a shared FIFO queue at the bottleneck. Fig. 8a shows the cumulative distribution function for the minimum link capacity under different strategies and traffic scenarios. Averaged across all configurations, using FIFO alone requires on average 44% more link bandwidth than if swp is used to optimize per-class FIFO weights, and an average of 247% more bandwidth than with swp with optimized hierarchical weighted fair queueing. If we restrict the configurations to those with at least one bursty class (shape parameter > 1.7) with a tight SLO (threshold < 4), the average gap between swp and FIFO widens to 62% (Fig. 8b). Fig. 8c shows the results for configurations with SLO < 4, but no restriction on burstiness, while Fig. 8d shows the results with all SLOs ≥ 4 and burstiness ≤ 1.7. The greatest advantage for swp comes on more challenging scenarios. We can also ask: how important is the swp optimizer described in Algorithm 1 to the benefits of our approach?
To study this, we consider a static version of swp, where we compute the bandwidth needed for each traffic class to meet its SLOs independently, assuming no knowledge of the behavior of the other traffic classes. We then run this on the same scenarios as we considered above. The result is plotted on the same graph in Fig. 8a, labeled as static. The benefits of swp relative to FIFO roughly disappear-that is, the advantage of per-class SLOs comes from being able to take advantage of the cumulative slack across traffic classes. Next, we consider configurations with five random traffic classes instead of three, where we keep aggregate load the same by adjusting per-class mean rates to be from 3 to 6 Gbps. The result is shown in Fig. 9. With more traffic classes, there is greater diversity of requirements, presenting more opportunities for optimization. In this setting, using FIFO queueing alone now requires 65% more link bandwidth than using optimized FIFO weights (Fig. 9a), and in the challenging scenario of a tight SLO on a bursty class, that number increases to 79% (Fig. 9b). Statically provisioning bandwidth for the worst case is also now more costly than pure FIFO queueing. Finally, the inflation factor of a configuration is the ratio between the bandwidth required to meet SLOs with FIFO, divided by (respectively) that required by swp weights with per-class FIFO and swp with per-class fair queueing. Fig. 10 provides a scatter plot of inflation factors for all 150 five-class configurations, with each configuration indexed by the value of the tightest SLO, the greatest burstiness, and the greatest mean rate across the five traffic classes. In general, configurations with tighter SLOs show greater benefit for carefully optimized per-class traffic weights. Related Work Achieving quality of service in packet switched networks is a long-standing and rich research area. Quality of service is easiest to achieve when we can assume that switches or routers can keep per-flow state, such as Autonet-2 [8], Intserv [12], and RSVP [34]. Parekh and Gallager showed that packet latency bounds could be provided given fair queueing in the network [14,24]. Others have shown it is possible to emulate fair queueing without per-flow state at every router [29]. However, because of the difficulty of implementing these approaches at high speeds, most work in the Internet gravitated to the use of priorities [11] to deliver quality of service. The underlying assumption was that only a small amount of near constant-bit rate traffic would need prioritization. By contrast, our work focuses on providing quality of service for datacenter traffic, where most or almost all network requests have timeliness constraints and the traffic demand is inherently bursty. Per-packet latency guarantees. The closest to our work are QJump [16] and Silo [19] as they both target providing quality of service guarantees in the datacenter. Both use a leaky bucket at sources to ensure senders do not exceed some average rate and burstiness allowance. Then, QJump uses switch priorities to bound the network packet delays for latency sensitive applications. SILO on the other hand uses network calculus to develop a VM placement algorithm that can ensure network queueing delays do not exceed some bound. Like earlier work, these approaches target individual packet delays, rather than application-level metrics, and traffic shaping can impose significant delays at the source particularly for bursty traffic. 
Moreover, for large scale datacenter networks, tight worst-case bounds can only be provided for a small fraction of the traffic. Our goal with swp is to provide quality of service for arbitrary size messages for most or all of the network traffic, using probabilistic rather than deterministic guarantees. Scheduling flows for low latency. A number of systems, such as D 3 [33], D 2 TCP [32], and PDQ [17] use deadlines to schedule flows for low latency. pFabric [7] uses shortest remaining time first to reduce average latency for short flows. A significant limitation with these approaches is that they require applications to provide the size of each flow, before it starts, to be able to assign it a deadline or scheduling priority. Many applications in wide use lack this information, and in fact the UNIX socket API does not allow applications to specify it even if known. In addition, these schemes require switch hardware modifications. swp helps operators achieve SLOs without new switch hardware or changing applications, provided aggregate information is available about the distribution of flow sizes and burstiness. Homa [23] is a transport protocol that uses switch priorities and receiver-driven use of priority queues to achieve low tail latency for short messages with existing hardware. Like these other approaches, however, Homa assumes the size of the flow is available. It also lacks a mechanism for predicting what tail latency guarantees it can provide, nor does it provide an algorithm for balancing SLOs across traffic classes or flow sizes. Conclusion In this paper, we have built and evaluated swp, a new tool for quickly identifying network switch configurations to achieve tight tail latency bounds for bursty datacenter traffic patterns. Given measured data about the distribution of message sizes and message interarrival times, swp finds traffic-class based switch scheduling weights to accomplish class-specific operator-defined service level objectives (SLOs). A key innovation is a 50-80× faster network simulation engine that elides detail but produces accurate estimates of tail behavior through a bottleneck link for two popular data center congestion control protocols, DCTCP and HPCC, for a variety of switch scheduling algorithms. We use swp on randomly chosen scenarios to show that swp can identify switch configurations that meet target SLOs at much lower bandwidth than FIFO. This work does not raise any ethical issues.
Development and validation of an ama instrument for assessing disease activity on the basis of constitutional features in Amavata (Rheumatoid Arthritis)

Background Rheumatoid Arthritis (RA), having a striking clinical resemblance to amavata in traditional Indian medicine (Ayurveda), presents an opportunity to look at the disease from two different healthcare perspectives. This differential information may potentially supplement one system with the knowledge of the other for optimal application. This study is the first of its kind in which Ayurvedic concepts of amavata have been adopted to enhance the knowledge about RA, where optimal care is still beyond common reach.

Objective The study was conducted to develop and validate a novel ama score, based on constitutional features of ama as described in the Ayurvedic literature, as a disease activity indicator in RA.

Material and methods The study was conducted in two parts, comprising development and textual validation of the ama assessment instrument (AAI) followed by its clinical testing. The AAI comprises ten items, each provided with a range of scores so that the assessment stays close to the patient's own observations. The score obtained through the AAI was tested clinically and statistically for validity and reliability on 79 randomly selected RA/amavata patients, and its correlation with the DAS-28 score and ESR was examined.

Results The ama assessment instrument showed only a slight correlation with the acute-phase reactant ESR (r between ESR and AMA at baseline 0.287, and at the 1st, 2nd, and 3rd follow-ups 0.276, 0.276, and 0.160, respectively) and with DAS-28 (r between DAS and AMA at baseline 0.231, and at the 1st, 2nd, and 3rd follow-ups 0.218, 0.201, and 0.247, respectively). It nevertheless emerged as an independent disease status marker, since it marked the changes in the study population over time more precisely than DAS-28 or ESR. When the ama values at different follow-ups were compared, a significant difference was observed, consistent with a disease activity marker capturing the constitutional and GI-related domains of the patients. When decreasing ama scores were compared with the overall improvements reported by the patients, a similar trend was observed, showing that a change in ama score reflects a change in disease status and in the impact of the disease on the patient.

Conclusion This study provided a quantitative measure for the abstract concept of ama which could be used to mark disease activity in amavata or RA. The change in ama-based scores can be used to assess disease status and intervention-related benefits. The observations prompt consideration of including the AAI in RA composite scores to make them more dynamic in terms of disease activity identification.

Introduction

In the management of rheumatoid arthritis (RA), assessment of disease activity plays a crucial role. The level of disease activity primarily works as a point of reference for prospectively predicting the course of the illness. The disease activity score (DAS), on its own or in combination with acute-phase reactants like ESR and CRP, has also been critically utilized to judge the outcome of any intervention in RA [1]. Knowing the disease activity status in rheumatoid arthritis has therefore become the first mandate in the current practice of clinical rheumatology.
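As context for the DAS-28 values referred to throughout this study, the widely published DAS28-ESR formula combines the 28-joint tender and swollen counts, ESR, and the patient's global health assessment. The sketch below evaluates that standard formula; the patient values are purely illustrative and the activity cut-off is the conventionally quoted one.

```python
import math

def das28_esr(tender_28, swollen_28, esr_mm_hr, global_health_vas):
    """DAS28-ESR composite score from 28-joint tender/swollen counts,
    ESR (mm/h) and the patient's global health on a 0-100 mm VAS."""
    return (0.56 * math.sqrt(tender_28)
            + 0.28 * math.sqrt(swollen_28)
            + 0.70 * math.log(esr_mm_hr)
            + 0.014 * global_health_vas)

# Illustrative patient: 8 tender joints, 5 swollen, ESR 40 mm/h, VAS 60.
score = das28_esr(8, 5, 40, 60)
print(f"DAS28-ESR = {score:.2f}")  # values above 5.1 are conventionally 'high disease activity'
```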
Although important, the DAS score is not always reflective of the clinical condition or the concerns of RA patients, and similarly is not always a reliable reflector of the outcomes observed after a therapeutic intervention in RA [2]. The reason for this mismatch between observed and actual disease status is that DAS relies largely on observations made by the physician in domains such as joint pain, swelling and tenderness. Tenderness, pain or swelling reported by the patient but not elicitable by the physician obviously leads to a lower disease activity score. To overcome this limitation of the DAS score, the use of composite scores with multiple variables has been proposed. Two commonly utilized indices of this kind in RA are the clinical disease activity index (CDAI) and the simplified disease activity index (SDAI) [3].

Rheumatoid arthritis is a systemic disorder, with many extra-articular and constitutional features that contribute to the status of the disease. Unfortunately, in the conventional disease activity assessment done with the tools currently available, such features remain unnoticed irrespective of their importance from the patient's perspective. To address this gap, suggestions have recently been made to include fatigue [4] and sleep [5] as disease activity indicators and outcome assessors in RA. Weight loss during the early and active phase of RA is also a key observation at high disease activity and is indicative of poor prognosis [6,7]. Recently, normalization of appetite and the resulting weight gain have been proposed as an important outcome measure in selected RA patients [8]. However, these new entrants to disease monitoring are yet to find a place in the routine assessment of RA.

By contrast, the traditional healthcare system, particularly Ayurveda, has a vivid clinical description of amavata, a disease entity resembling inflammatory arthritis including RA and spondyloarthritis (SpA), and it gives great attention to systemic features in addition to joint-related features. It has been observed that patients diagnosed with RA or SpA on the basis of ACR or EULAR criteria, if simultaneously diagnosed with amavata on Ayurvedic fundamentals and treated on such principles, respond well to Ayurvedic interventions [9]. More importantly, in such cases treated with Ayurveda, the systemic features respond well and early. RA patients on Ayurvedic interventions report less fatigue, stiffness, lethargy and lack of interest, along with improved appetite and weight [10]. Although it is yet to be understood to what extent these systemic features correlate with the primary joint pathology in RA, their correction alongside the reduction in joint symptoms after Ayurvedic intervention hints at a possible role in altering the primary pathology and thereby in determining disease activity status.

From the Ayurvedic perspective, ama is an important pathogenic product involved in amavata. It is conceptualized as a product of incomplete digestion and metabolism resulting from impaired metabolic fire (agni). Ama produced at the GI, tissue or cellular level is proposed to have obliterative properties owing to its stickiness and macromolecular nature. Because of this nature, ama can produce features resulting from obstruction of the body's conduits.
Ama can be involved in a disease either as a primary factor, as in amavata, or secondarily, as a consequence of impaired digestive or metabolic agni [11]. Ama produces pathognomonic GI-related or systemic features depending on the site of its primary production and settlement, and its presumed level and associated features often correlate with disease activity. All treatments involving ama are directed towards dissociating existing ama (ama pachana) and stopping further production and accumulation of ama (agni deepana). For this reason, a treatment focusing on ama leads to the gradual disappearance of ama-related features [12].

To ease the understanding of ama-related pathogenesis in a clinical setting, various features related to the association of ama with the dosha are described in the Ayurvedic texts. Sama (features with ama) and nirama (features without ama) examination is an important part of clinical examination in Ayurveda, helping to determine the relative presence of ama in the body and thus the appropriate therapeutic plan. The association of ama with the various dosha and mala can be identified by the clinical features representing that association; once the association is lost, the clinical features also disappear. It is for this reason that ama-related features have a high indicator value for ama-related disease activity. They can also help assess therapeutic responses, by showing whether the features of ama are reducing in intensity after appropriate therapeutic interventions in an ama-related pathology.

Amavata is the classical prototype of ama-related pathogenesis, in which ama is involved in the disease process from the beginning. Treatments focusing on ama dissociation and prevention of its further formation are the first line of management of amavata, alongside many other interventions aimed at managing other symptoms. Assessing the relative presence or absence of ama in a patient with amavata therefore has substantial predictive and prognostic value for knowing the disease activity. It also has high value as a patient-related outcome measure (PROM) in response to therapeutic intervention. In rheumatoid arthritis (RA), which represents a subpopulation under the umbrella term of amavata covering most varieties of inflammatory arthritis, PROMs have attracted renewed interest as dependable measures for assessing the outcome of interventions. Constitutional features such as fatigue, stiffness, sleep and appetite have recently been proposed as important indicators of changes in the pathogenesis of RA in response to therapeutic intervention. Ama assessment from an Ayurvedic perspective therefore presents an opportunity for a composite measurement of constitutional features that are highly salient from the patient's perspective and truly reflective of an ama-related joint pathogenesis. It is presumed that such an assessment may find high applicability not only in Ayurvedic clinical practice related to joint diseases but also in modern rheumatology practice, by providing a composite tool to measure many constitutional features in one go. Despite this clinical importance, no attempts have so far been made to make ama assessment among amavata patients suitable for predictive and prognostic use.
Development and validation of an ama assessment instrument (AAI) for making a quantitative assessment of ama for use in Ayurvedic rheumatology practice is therefore highly desirable. Subsequent to its development, this AAI, when tested for clinical reliability and validity against existing disease activity indicators of RA, was found to have the potential to emerge as a high-utility index for assessing disease activity in RA. This study was done to develop the AAI, to validate its observations as a disease activity indicator, and to check its reliability as a disease activity indicator in RA.

Study setting

This study was conducted at the PG Department of Kaya Chikitsa, State Ayurvedic College and Hospital, Lucknow (PP, SR) in collaboration with the Department of Clinical Immunology and Rheumatology, SGPGI, Lucknow (AL) and the Department of Statistics, Lucknow University, Lucknow (GGA). The development and initial validation of the AAI were done in this setting. The face and content validity was assessed partly by inviting domain experts from outside the primary research institution. The clinical testing of the instrument was done at the Ayurveda–Arthritis Treatment and Advanced Research Center (A-ATARC), State Ayurvedic College and Hospital, Lucknow.

Time frame of the study

AAI development began in December 2020 and was completed in June 2021. After the instrument's initial validation, the clinical testing of the AAI was done from July 2021 to April 2022. The data were subsequently analyzed statistically in June 2022.

Ethical clearance

The study had ethical clearance issued by the Institutional Ethics Committee vide letter no. SAC/IEC/2020/dated 23.10.2020.

Conduction of the study

The study was conducted in two steps. The first was the development of the ama assessment instrument (AAI) for quantitative measurement of ama (index test) based on the available classical literature. This step involved multiple smaller steps, from screening the available literature to finalizing the tool components after pilot testing. The second step was the validation of the developed AAI on various parameters and the correlation of the AAI observations with standard biomarkers or disease activity scores (reference tests) in RA, both before and after a given intervention over a certain time period. The process utilized in this study for the development and validation of a new tool had been tested and proven reliable in earlier studies relevant to Ayurveda [13,14].

Development of the ama assessment instrument (AAI)

The development of the AAI followed these steps of instrument construction:
1. Domain specification: specifying "what" is to be measured in the evaluation.
2. Scaling: converting qualitative characteristics into quantitative terms, by identifying the two extremes of responses to a given question and dividing the range of responses into ten clearly definable categories.
3. Item generation: framing the specific question pertaining to the specific domain area.

Literature survey

To begin the development of the AAI, nine classical Ayurvedic texts (Table 1) were thoroughly screened for descriptions of ama-related features. After reviewing all texts, 51 signs/symptoms were identified as relevant to ama-related pathology. These symptoms were subsequently listed to identify those commonly agreed upon by all the texts consulted.
In this process, 27 features were found to be described in almost every text, either in similar language or in partially rephrased language with similar meaning. From this shortlist, symptoms pertaining to the joints were eliminated, to keep the focus on the constitutional features of ama. This exercise eliminated six joint-related features and finally identified 21 features related to the systemic presentation of ama (Table 2).

Content validity of the selected items

After this preliminary selection of items reflective of the clinical presentation of ama-related systemic pathology, the selected items were further reviewed against a relevance scale of 1–5 (minimum to maximum relevance) by a national cohort of 12 clinical experts of Ayurveda, selected on the basis of their experience and expertise in rheumatology. Each expert was provided with a detailed item sheet consisting of all 21 items and was asked to mark the relevance of each item to the disease activity status of amavata on a scale from 1 to 5, representing minimum and maximum relevance respectively. The relevance ratings were collected by approaching the experts in person or through email. Responses from ten experts were obtained within the stipulated time, whereas two experts could not comply with the specified time. Based on the summed responses of all ten experts, items with an average score of four or higher were selected to frame the final index tool. This process finally identified ten items agreed upon by all experts as having high relevance as systemic clinical features of ama pathology (Tables 2 and 3).

Formatting the questions for practical application of the AAI and determining the scores for individual observations

Once clearly identified for their relevance and selected through the expert consensus process, the selected items were expanded into questions to make them comprehensible to RA patients during ama examination in individual cases (Table 4). Each selected item, after formatting of the appropriate question, was allocated a score range from 1 to 10 in order to capture the respondent's opinion more closely; 1 was taken as the minimum intensity of a specific feature and 10 as its highest intensity. On this scoring pattern, 100 is the highest and 10 the lowest possible ama score in any given case (a small scoring sketch is given below).

During the development of the AAI, the prototype questionnaire, including all items and their quantitative scales, was taken up for content and construct validity. For content validity, each item in the questionnaire was reviewed by a team of in-house Ayurveda experts in ama assessment through clinical examination (meeting a minimum standard decided before the start of the study) and was evaluated for whether it had the potential to predict the ama status in amavata patients. Construct validity was assessed by exploring each item's construct and checking whether the construct of the item and the scale assigned to measure it quantitatively could answer the question it was aiming at.
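To make the two quantitative steps described above concrete, the following is a minimal sketch of how they could be computed: expert-relevance filtering (mean rating of at least 4 on the 1–5 scale) and the patient-level total ama score (ten items, each scored 1–10, giving a 10–100 range). The item names and ratings are placeholders, not the actual AAI wording or study data.

```python
def select_items(expert_ratings, cutoff=4.0):
    """Keep items whose mean expert relevance rating (1-5 scale) is >= cutoff."""
    return [item for item, ratings in expert_ratings.items()
            if sum(ratings) / len(ratings) >= cutoff]

def ama_score(item_scores):
    """Total AAI score: ten items, each rated 1-10, so the total lies in 10-100."""
    assert len(item_scores) == 10 and all(1 <= s <= 10 for s in item_scores)
    return sum(item_scores)

# Placeholder items and ratings, for illustration only.
ratings = {"fatigue": [5, 4, 4, 5, 4, 5, 4, 4, 5, 4],
           "joint_pain": [3, 3, 4, 2, 3, 3, 4, 3, 3, 3]}
print(select_items(ratings))                          # -> ['fatigue']
print(ama_score([6, 5, 7, 4, 6, 5, 6, 7, 5, 6]))      # -> 57, comparable to the baseline mean of 57.15
```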
Pilot testing of the prototype questionnaire

The pilot testing of the prototype AAI was done on 10 RA patients fulfilling the specific inclusion and exclusion criteria used for selecting cases for the clinical validation of the instrument. The participants for the pilot testing were selected from the A-ATARC outpatient clinic on routine OP days. These patients were given the AAI to obtain their responses and to see whether there were any interpretational problems. The pilot testing was done by the lead investigator (PP) of this study. After this exercise, one item and the question framed to evaluate it (Q2) were found ambiguous. This question was therefore elaborated and expanded in the final questionnaire to give a clear meaning and response selection. The questionnaire approved after pilot testing was subsequently taken up for the further clinical validation study of the instrument.

Clinical validation study against existing benchmarks of disease activity parameters in RA

The ama assessment instrument was subsequently validated against standard disease activity indicators in RA (amavata). These standard disease activity scores and markers comprised ESR and the DAS-28 score. RA (amavata) patients were selected for the validity testing of the instrument according to predefined inclusion criteria.

Participant sample for the clinical validation study

The reliability and validity of the instrument were tested on 79 participants fulfilling the inclusion and exclusion criteria. Construct and content validity were tested with the help of approximately ten domain experts fulfilling predetermined inclusion criteria.

Results

79 patients duly diagnosed with RA and amavata, fulfilling the inclusion and exclusion criteria and attending the A-ATARC OPD, were enrolled in the study (Fig. 1). All study participants were examined at baseline for the DAS-28 score and ESR (reference tests) and the AMA score (index test). The participants were further examined at follow-ups every month and finally on completion of the study after three months. The ama instrument was tested statistically for reliability using Cronbach's alpha and the Spearman–Brown coefficient (split-half method) (Table 5), and for validity using the Pearson correlation and two-tailed significance (Table 6). Considering Cronbach's alpha, reliability was highest at baseline (0.724) and lowest at follow-up 3 (0.689). Considering the split-half method, reliability was highest at follow-up 3 (0.803) and lowest at follow-up 1 (0.750). For the scores computed, the reliabilities obtained are acceptable. The two-tailed significance values obtained were less than 0.05, so the items can be considered valid.
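As a companion to the reliability figures just quoted, the sketch below shows one conventional way such statistics are computed from an item-by-patient score matrix: Cronbach's alpha from item and total-score variances, and a Spearman–Brown corrected split-half coefficient. The score matrix here is randomly generated purely to exercise the functions (random, uncorrelated items give near-zero reliability); it is not the study data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a patients x items matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def split_half_spearman_brown(scores):
    """Correlate odd-item and even-item half scores, then apply the
    Spearman-Brown prophecy formula to estimate full-test reliability."""
    scores = np.asarray(scores, dtype=float)
    half1 = scores[:, 0::2].sum(axis=1)
    half2 = scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(half1, half2)[0, 1]
    return 2 * r / (1 + r)

rng = np.random.default_rng(0)
fake = rng.integers(1, 11, size=(79, 10))   # fake 79 x 10 matrix of 1-10 item scores
print(round(cronbach_alpha(fake), 3), round(split_half_spearman_brown(fake), 3))
```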
Correlation between the ama score and disease activity markers in RA

Pearson correlations were obtained at the different time points (baseline, the monthly follow-ups and the final follow-up) between the ama score, the DAS score and ESR. At each observation, DAS and ESR were moderately correlated on the basis of the r value, whereas AMA was only slightly correlated with both DAS and ESR at all four time points (Table 7). At baseline, the r value between DAS and ESR was 0.572 (moderate correlation), between DAS and AMA 0.231 (slight correlation), and between ESR and AMA 0.287 (slight correlation). At the first follow-up, the r value between DAS and ESR was 0.485 (moderate), between DAS and AMA 0.218 (slight), and between ESR and AMA 0.276 (slight). At the second follow-up, the r value between DAS and ESR was 0.400 (moderate), between DAS and AMA 0.201 (slight), and between ESR and AMA 0.276 (slight). At the final follow-up, the r value between DAS and ESR was 0.439 (moderate), between DAS and AMA 0.247 (slight), and between ESR and AMA 0.160 (slight). Table 6 presents the validity testing for the ama score.

Analyzing the ama score as an independent variable indicating disease activity

In the study population, the mean AMA score was 57.15 (SD 8.476) at baseline, and it fell to 49.39 (SD 7.155), 42.03 (SD 6.577) and 34.91 (SD 5.359) at the 1st, 2nd and final follow-ups respectively (Table 8). The p-value obtained for AMA was <0.05, significant at the 5% level, so the AMA levels differ across time points. For the test of within-subjects effects, the p-value obtained was <0.05 for all levels, so we reject the null hypothesis and conclude that there is a significant difference between the different levels of AMA. Table 9 (supplementary file) gives pairwise comparisons of the AMA levels; p-values <0.05 show significant differences between the mean AMA values for the compared pairs.

When similar descriptive statistics were applied to the DAS scores and ESR at the various time points and pairwise comparisons were made, a similar trend of decreasing values was observed, with similar significance in the pairwise comparisons for both variables (Tables 10–13 in the supplementary file and Fig. 2). The p-value for DAS was <0.05, significant at the 5% level, so the DAS levels differ, and the within-subjects tests were likewise significant (<0.05) at all levels, indicating a significant difference between the different levels of DAS. The p-value for ESR was also <0.05, and the within-subjects tests were significant (<0.05) at all levels, indicating a significant difference between the different levels of ESR. In the pairwise comparisons of ESR, p-values <0.05 indicate a significant difference between mean ESR values for the compared pairs, whereas p-values >0.05 indicate no significant difference.
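The within-subject comparisons reported above can be reproduced in outline with standard tools; the sketch below runs pairwise paired t-tests (with a Bonferroni correction) across the four visits. The patient-level arrays are simulated around the reported means and SDs, purely to show the mechanics, and are not the actual study data.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(1)
# Simulated AMA scores for 79 patients at the four visits, centred on the
# reported means (57.15, 49.39, 42.03, 34.91) and SDs; illustration only.
means, sds = [57.15, 49.39, 42.03, 34.91], [8.476, 7.155, 6.577, 5.359]
visits = [rng.normal(m, s, size=79) for m, s in zip(means, sds)]

pairs = list(combinations(range(4), 2))
for i, j in pairs:
    t, p = stats.ttest_rel(visits[i], visits[j])
    # Bonferroni adjustment: multiply each p-value by the number of comparisons.
    print(f"visit {i} vs {j}: t = {t:.2f}, adjusted p = {min(p * len(pairs), 1.0):.4f}")
```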
Discussion

Clinical rheumatology faces a dearth of parameters that can precisely reflect the clinical activity of the disease. This is particularly important in conditions like rheumatoid arthritis, where high disease activity predicts a poor prognosis and warrants urgent action to arrest disease progression and joint destruction. The disease activity parameters currently used for clinical staging of RA are either joint-based, defining joint counts in terms of swelling, tenderness and stiffness, or based on levels of inflammatory markers such as ESR and CRP reflecting the underlying inflammatory process. Scores like DAS, which rely heavily on joint status, fail to appreciate systemic features that may matter substantially to the patient for the level of discomfort they cause. Similarly, inflammatory biomarkers are not specific to RA and are elevated in many conditions other than joint diseases. The level of an inflammatory biomarker can also be misleading, since it is a cumulative measure over a summated time period; changes in such markers therefore do not precisely reflect changes in the clinical status of RA.

Besides being unable to reflect the disease condition precisely across its various domains, currently used RA disease activity indicators also have poor translational capacity as dependable outcome-reporting measures. Changes in joint counts for swelling, pain and tenderness, or even changes in inflammatory biomarkers such as ESR or CRP, do not necessarily reflect the changes the patient perceives in response to a therapeutic intervention. In RA, what is meaningful to the patient may differ greatly from what is meaningful to the physician. This difficulty in proposing the most appropriate measures of disease status and intervention-related outcome has long been faced in clinical rheumatology practice. To overcome it, composite scoring patterns were proposed, developed and used in clinical trials related to RA [16]. Composite outcome measures have become very popular in assessing RMDs because of their claim to capture all relevant dimensions of the disease in one convenient measure; they are proposed to reflect the multidimensionality and heterogeneity of disease pathogenesis, manifestations and outcome, usually combining several disease manifestations and outcome dimensions into one index. It has, however, been noticed that multidimensional composites are not free of errors when used for treat-to-target and window-of-opportunity strategies of modern rheumatology, and their behavior in clinical trials differs from their behavior in clinical practice. Moreover, many aspects of disease impact are not yet covered by any component of the commonly used composite scores. Patients with rheumatoid arthritis differ from controls in emotion-related personality traits, leading to increased susceptibility to chronic stress and hypothalamic–pituitary–adrenal axis dysregulation; such dysregulation has a substantial impact on the patient's well-being and on the net outcome of a given intervention [17]. Evidence suggests RA is a highly heterogeneous disease with many subtypes characterized by personality, psychiatric and immunological differences. Such complexities associated with RA again warrant a more comprehensive scoring method inclusive of every dimension reflecting disease activity and outcome status.
The currently used measures of rheumatoid arthritis disease activity include the Patient (PtGA) and Provider (PrGA) Global Assessments of Disease Activity, the Disease Activity Score (DAS) and the Disease Activity Score with 28-Joint Counts (DAS28), the Simplified Disease Activity Index (SDAI), the Clinical Disease Activity Index (CDAI), the Patient Activity Score (PAS) and Patient Activity Score-II (PASII), the Routine Assessment of Patient Index Data (RAPID), the Rheumatoid Arthritis Disease Activity Index (RADAI) and Rheumatoid Arthritis Disease Activity Index-5 (RADAI-5), and the Chronic Arthritis Systemic index. We see that despite such a plethora of single and composite indices, the assessment of disease activity in RA and of its outcomes is still far from perfect [18].

RA patients are found to have intriguing constitutional features related to general well-being, sleep, energy status, appetite, and GI functioning [19]. It has been observed that such features find little place in the currently used composite scores meant for evaluating RA disease activity. Clinical observation of RA patients has revealed that, while receiving Ayurvedic treatment, these are the features addressed first, within 1–3 months, before actual improvements in joint-related features are observed. It is also observed that during high disease activity most RA patients report weight loss [20], attributable to loss of appetite, and that after Ayurvedic treatment improvements in appetite and weight are reported [8]. Such clinical observations made in Ayurvedic rheumatology clinics, set against the near absence of any such parameter in modern rheumatology practice, warrant serious thought about including these constitutional features in the composite scores meant for a comprehensive clinical evaluation of RA.

Finding a parallel to RA in Ayurveda was the first problem to be addressed before any such measure could be developed on Ayurvedic fundamentals explaining the pathogenesis of RA-like diseases. A dual diagnosis approach was adopted to overcome this difficulty: the study population was diagnosed simultaneously as having RA and amavata by the standard diagnostic criteria of both systems. Once this parallelism was established, it was easy to extrapolate observations made in one context to the other. Subsequently, the extra-articular and constitutional features of RA, which are otherwise the hallmarks of ama, were explored. A thorough literature search helped identify the most appropriate features reflective of ama, and their reliability and validity were checked against established parameters. After initial textual validation of the instrument, which was meant to check the presence and level of ama-related features in rheumatoid arthritis, its testing as an index test on the sample population was quite rewarding. Although the newly developed ama assessment instrument showed only a slight correlation with the reference tests, the acute-phase reactant ESR and the disease activity score based on 28 joint counts (DAS-28), it stood apart as an independent disease status marker, since it was able to mark the changes in the population over time more precisely than DAS-28 or ESR.
When the ama values at each follow-up were compared with the preceding values, a significant difference was observed, showing the instrument to be a consistent and reliable disease activity marker capturing the constitutional and GI-related domains of the patients. When the decreasing ama scores were compared with the overall improvements reported by the patients, a similar trend was observed, showing that a change in ama score reflects a change in disease status and in the impact of the disease on the patient.

The study nevertheless has its limitations. Question framing always has scope for further refinement, and that is true of this study as well. The ama score, the index test used here, also needs further testing for its sensitivity, specificity and predictive values in reference to RA. Ama, as the progenitor of various ama-induced pathologies, also prompts evaluation of the test in ama-related diseases other than amavata.

Conclusion

This study provided a quantifiable measure for the abstract concept of ama and used it as a reliable measure of disease activity in amavata. This is a great help in determining the course of Ayurvedic therapy on the basis of ama scores reflecting disease activity status. The ama-based scoring also helps in quantifying intervention-related benefits in terms of the significance of changes from the baseline ama score. This study points to future work focusing on the development of the ama score as a patient-related outcome measure (PROM) in Ayurveda.
Elasticity of connected semiflexible quadrilaterals

Using the positional–orientational propagator of a semiflexible filament in the weakly bending regime, we analytically calculate the probability densities associated with the fluctuating tip and the corners of a grafted system of connected quadrilaterals. We obtain closed analytic expressions for the probability densities within the framework of the worm-like chain model, valid in the weakly bending regime. The probability densities give the physical quantities related to the elasticity of the system, such as the force–extension relation in the fixed-extension ensemble, the Poisson's ratio, and the average force exerted on a confining stiff planar wall by the fluctuating tip of the system. Our analysis reveals that the force–extension relations depend on the contour length of the system (material content), the bending stiffness (chemical nature), the geometrical angle and the number of quadrilaterals, while the Poisson's ratio depends only on the geometrical angle and the number of quadrilaterals, and is thus a purely geometric property of the system.

Introduction

Semiflexible filaments such as the cellular cytoskeletal elements (e.g., actin, microtubules, and intermediate filaments) and genomic DNA play critical roles in many biological functions and have been subjects of intense research during recent decades. These biofilaments take various topologies and are often confined within cellular boundaries that exert or transduce mechanical forces. 35 Moreover, the study of the mechanical properties of such semiflexible filaments is important in designing new bio- and nano-materials. 36

An important emerging field of research is structural DNA nanotechnology. Progress has been made in designing programmable nanoscale architectures through the self-assembly of synthetic oligonucleotides via base pairing and other forms of intermolecular connectivity. Various topologies and spatial configurations have been designed and synthesized for applications ranging from electronics and photonics to biology and nanomedicine. 37–44 One way to make such DNA nanostructures is via folding and assembly of DNA molecules into various topologies, the so-called DNA origami; defined structures can be made of DNA filaments by arranging nucleotides with subnanometer precision. 45–55 Structural DNA nanotechnology provides a plethora of techniques to design and build two-dimensional and three-dimensional nano-objects, which can be static or dynamic. 56–69

The worm-like chain model is commonly used to describe the elasticity of semiflexible filaments, and numerous studies have investigated the elasticity of semiflexible filaments within this framework. 30,70–76 Next to these theoretical developments, much progress has been made in the experimental characterization of semiflexible filaments (polymers) in recent decades. Powerful techniques like optical tweezers, atomic force microscopy and other force methods allow us to measure the elasticity of such filaments at the single-molecule (filament) level.
77–85 In these experiments, the force–extension relation is typically measured by stretching or compressing the filament (polymer) with an external force applied to the filament's ends (or ''tips''). The force–extension relation of a single filament in the longitudinal direction gives the stretching or compression behavior of the filament, while the force–extension relation in the lateral direction gives its bending behavior. The force–extension relation can be obtained in two different ensembles, namely the fixed-extension ensemble and the fixed-force ensemble. 86 In the fixed-extension ensemble, the displacement of the tips is fixed and the external force fluctuates due to the thermal fluctuations of the polymer. In the fixed-force ensemble, the external force exerted on the tips of the polymer is fixed, while the displacement of the tips undergoes thermal fluctuations.

Here, we study the elasticity of structures made of longitudinally connected quadrilaterals. The work is motivated by recent developments in structural DNA nanotechnology. The quadrilaterals consist of semiflexible filaments, which can be made of DNA fragments or other semiflexible polymers. In Section 2.1, we describe the positional–orientational propagator of a semiflexible filament in the weakly bending regime based on existing theories. In Section 2.2, we consider a grafted quadrilateral: in the presence of thermal fluctuations, using the positional–orientational propagator, we obtain the probability density associated with the fluctuations and calculate the force–extension relation of the tip of the quadrilateral in the fixed-extension ensemble in two different directions, x and y. We also calculate the Poisson's ratio of the structure. Furthermore, we confine the structure with a stiff planar wall and calculate the average of the fluctuating force exerted on the wall; this force is caused by the reduction in the number of configurations available to the system in the presence of the wall. In Section 2.3, we repeat the calculations for two longitudinally connected quadrilaterals. In Section 2.4, we generalize the calculations to a system with an arbitrary number of quadrilaterals. In Section 2.5, we compare the elasticity of structures with different numbers of quadrilaterals made of identical amounts of a given polymeric material. We end the article with a conclusion.

The positional–orientational propagator of a semiflexible filament

The physical properties of a semiflexible filament of contour length L are given by the Hamiltonian

H = (k/2) ∫₀^L ds (∂t(s)/∂s)²,

where k is the bending stiffness of the filament and t(s) is the tangent vector of the filament at arc length s. Here, we study a filament confined to a two-dimensional space. In the weakly bending regime, the probability density of finding the end point of such a filament at position (x, y) with orientation θ, given the grafted tip at position (x₀, y₀) with orientation ω, is given by the Gaussian propagator G_L(x, y, θ | x₀, y₀, ω), 6,87,88 where N_L is the normalization factor, δ(·) is the Dirac delta function, k_B is the Boltzmann constant and T is the temperature.
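Before moving to the quadrilateral, a quick numerical check of the weakly bending picture may help. With the two-dimensional convention that the tangent angle diffuses with variance s/l_p along the arc, the transverse tip displacement of a grafted filament has variance close to L³/(3 l_p). The sketch below samples discretized filaments and compares the sampled variance with that expression; the parameter values are arbitrary choices used only for the illustration.

```python
import numpy as np

def sample_tip_y(L, lp, n_seg=200, n_samples=20000, rng=None):
    """Transverse tip displacement of a grafted 2D worm-like chain in the
    weakly bending regime: the tangent angle theta(s) performs a random walk
    with variance ds/lp, and y(L) is the integral of sin(theta) along the arc."""
    rng = rng or np.random.default_rng(0)
    ds = L / n_seg
    dtheta = rng.normal(0.0, np.sqrt(ds / lp), size=(n_samples, n_seg))
    theta = np.cumsum(dtheta, axis=1)          # tangent angle along the filament
    return np.sum(np.sin(theta), axis=1) * ds  # transverse tip coordinate

L, lp = 1.0, 20.0                              # weakly bending: lp >> L
y = sample_tip_y(L, lp)
print(f"sampled   <y^2>        = {y.var():.5f}")
print(f"predicted L^3/(3 l_p)  = {L**3 / (3 * lp):.5f}")
```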
The grafted quadrilateral

Here, we study the elasticity of a grafted semiflexible quadrilateral. The quadrilateral is constructed by joining four semiflexible filaments of length L, and it has four corners labeled 0, 11, 12 and 21 according to Fig. 2. The conjunction angles are fixed, since the length scales of the conjunction points are much smaller than the persistence length of the filaments. We put the origin of the coordinates at the grafting point of the system.

The probability density of finding the end corner of the quadrilateral labeled 12 at position (x₁₂, y₁₂) and orientation θ₁₂ is given by an integral over the propagators of the four filaments (eqn (2)), where ω > 0 is the grafting angle indicated in Fig. 2. Calculating the integrals in eqn (2) gives an analytic Gaussian expression (eqn (3)) whose exponent contains

C = −3 l_p (x − 2L cos ω)² / (L³ sin² ω) − 3 l_p y² / (4 L³ cos² ω).

Integrating the probability density of eqn (3) over the y component and the angle θ gives the probability density of finding the x component of the position of corner 12 at the value x; similarly, we obtain the probability density for the y coordinate of corner 12.

The probability density of finding the x component of the position of corner 12 at x₁₂ together with the y component of the position of corner 11 at y₁₁ is

P(x₁₂, y₁₁) = ∫ G_L(x₁₁, y₁₁, θ₁₁ | 0, 0, ω) × G_L(x₁₂, y₁₂, θ₁₂ | x₁₁, y₁₁, θ₁₁ − 2ω) × G_L(x₂₁, y₂₁, θ₂₁ | 0, 0, −ω) × G_L(x₁₂, y₁₂, θ₁₂ + 2ω | x₂₁, y₂₁, θ₂₁ + 2ω) dx₁₁ dθ₁₁ dx₂₁ dy₂₁ dθ₂₁ dy₁₂ dθ₁₂.   (9)

Calculating the integrals yields an analytic expression (eqn (10)) whose exponent (eqn (11)) contains terms proportional to l_p (x − 2L cos ω)²/(L³ sin² ω), l_p (x − 2L cos ω)(y₁₁ − L sin ω)/(L³ sin ω cos ω) and l_p (y₁₁ − L sin ω)²/(L³ cos² ω).

The probability densities in eqn (10), (7) and (8) give the associated force–extension relations. Using the method of ref. 88, we obtain the analytic expression for the x component of the force–extension relation associated with corner 12 (eqn (12); see Appendix); it is linear in the displacement x − H₁ with force constant 6 k_B T l_p / (L³ sin²(ω)), where H₁ = 2L cos(ω) is the length of the system in the longitudinal direction at zero temperature. Similarly, the y component of the force–extension relation associated with corner 12 is obtained (see Appendix), which also gives the relation between y₁₁ and x (see Appendix); here y₁₁ − L sin(ω) is the displacement of corner 11 in the y coordinate due to the displacement x − 2L cos(ω) of corner 12 in the x coordinate. The Poisson's ratio of the grafted semiflexible quadrilateral is ν₁ = cot²(ω)/2 (eqn (15); see Appendix), where w = 2L sin(ω) is the width of the system at zero temperature.

Next, we confine the system with a stiff wall in the x coordinate. The confining wall reduces the number of configurations of the system and therefore experiences a fluctuating force due to the confinement. The average of this fluctuating force on the confining stiff wall is obtained in closed form (see Appendix).

Two longitudinally connected quadrilaterals grafted on a substrate

We connect another quadrilateral to the grafted quadrilateral according to Fig. 2.
The probability density of finding the end point of the structure at position (x₁₄, y₁₄) with orientation θ₁₄ is given by an analogous integral over the propagators of the eight filaments, from which we obtain the probability density of finding the x component of the end point at the value x and, similarly, the probability density for its y coordinate. Using these probability densities, we calculate the force–extension relation in the x coordinate (see Appendix), where the length of the system in the longitudinal direction at zero temperature is H₂ = 4L cos(ω); the force–extension relation in the y coordinate is obtained in the same way (see Appendix). The probability density of finding the x component of the position of corner 14 at x₁₄ together with the y component of the position of corner 13 at y₁₃ gives, similarly to eqn (14), the relation between the lateral and the longitudinal displacement (see Appendix), where y₁₃ − L sin(ω) is the displacement of corner 13 in the y coordinate due to the displacement x − 4L cos(ω) of corner 14 in the x coordinate. The Poisson's ratio then follows. As in the previous section, we confine the system with a stiff wall in the x coordinate; the expression for the average of the fluctuating force on the confining stiff wall due to the confinement is given in the Appendix.

2.4 N longitudinally connected quadrilaterals grafted on a substrate

Here, we study the elasticity of N longitudinally connected quadrilaterals. The symmetry of the system implies that it is equivalent to N springs in series, where each spring has a Poisson's ratio ν₁ = cot²(ω)/2 equal to that of a grafted quadrilateral (see eqn (15)), and the force constant of each spring in the x coordinate equals that of a grafted quadrilateral in the x coordinate, k = 6 k_B T l_p / (L³ sin²(ω)) (see eqn (12)). The force constant of N springs in series is k/N (Fig. 3). Therefore, the force–extension relation of a system of N longitudinally connected quadrilaterals in the x coordinate is linear in x − H_N with force constant k/N, where the length of the system in the x direction is H_N = 2NL cos(ω). The probability density of the x component of the position of the tip of the polymeric system follows accordingly. The Poisson's ratio of the system of N springs relates Δx, the displacement of the tip of the system in the longitudinal direction, to Δy, the contraction or expansion of the system in the transverse direction. Again, we confine the system with a stiff wall in the x coordinate and calculate the average of the fluctuating force on the wall; the average force exerted on the confining stiff wall by the fluctuating tip of the system is given in the Appendix.

2.5 Comparison of longitudinally connected quadrilaterals with different numbers of quadrilaterals made of the same polymeric material

Here, we vary the number of quadrilaterals in the system of longitudinally connected quadrilaterals while keeping the total contour length of the system (its polymeric material) the same. The force–extension relation of a system of N longitudinally connected quadrilaterals in the x coordinate is given by eqn (30), where L_PM = 4NL is the total contour length of the system. The force constant associated with this force–extension relation follows by substituting L = L_PM/(4N) into k/N; a numerical sketch is given after this paragraph.
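As a check on the scaling argument above, the following sketch evaluates the per-quadrilateral force constant k = 6 k_B T l_p / (L³ sin²ω) stated in the text, combines N of them in series, and substitutes L = L_PM/(4N) to show the N² dependence at fixed polymeric material. The numerical parameter values are arbitrary illustrations, not values taken from the paper's figures.

```python
import math

kB_T = 4.11e-21       # J, thermal energy at room temperature (illustrative)
l_p = 50e-9           # m, persistence length (illustrative, DNA-like)
L_PM = 400e-9         # m, total contour length of the structure (held fixed)
omega = math.radians(30)

def force_constant(N):
    """Effective spring constant of N quadrilaterals in series at fixed total
    contour length L_PM: per-quadrilateral k = 6 kB T l_p / (L^3 sin^2 w),
    with filament length L = L_PM / (4N), divided by N for the series."""
    L = L_PM / (4 * N)
    k_single = 6 * kB_T * l_p / (L**3 * math.sin(omega)**2)
    return k_single / N

for N in (1, 2, 4):
    print(f"N = {N}: K = {force_constant(N):.3e} N/m "
          f"(ratio to N=1: {force_constant(N)/force_constant(1):.1f})")
# The ratios come out as N^2 (1, 4, 16), while the Poisson's ratio of a single
# quadrilateral, cot^2(omega)/2, contains no material parameters at all.
```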
The dimensionless form of the force–extension relation and the corresponding dimensionless force constant K̃ follow directly. If we keep the polymeric material of the system (L_PM) fixed, the force constant depends on the square of the number of quadrilaterals. The higher the force constant, the stiffer the system; a system with a higher number of quadrilaterals made of the same amount of polymeric material is therefore stiffer, as can be seen from eqn (36). Interestingly, the Poisson's ratio is independent of the amount of polymeric material used in the system (the total contour length) and depends only on the angle ω (see eqn (32)). The average fluctuating force exerted on the confining stiff wall by the fluctuating tip of the system with fixed total polymeric length L_PM is given by eqn (33), together with its dimensionless form f̃.

Fig. 4 shows the average dimensionless force exerted on the wall by the system of longitudinally connected quadrilaterals; the polymeric material of the system is fixed, and the number of quadrilaterals is varied between curves. We look at the force–extension curves in two different regimes. In the first regime, d < H_N, the system is compressed by the wall, so a mixture of entropic and enthalpic forces governs the elasticity of the system; in this regime, the force constant of the system increases as the number of quadrilaterals increases. In the second regime, d > H_N, the system is not compressed, so entropic forces play the major role; the system has to be bent by thermal fluctuations to hit the wall, and the dimensionless force exerted on the wall therefore decreases as the number of quadrilaterals increases. In Fig. 5, we show the force–extension relation of the wall while varying the persistence length between curves: the force constant of the system in the regime d < H_N increases as the persistence length increases, as expected, since the system should be stiffer for higher values of the persistence length. In Fig. 6, we show the force–extension relation of the wall for different values of the angle ω; the stiffness of the system increases as ω decreases (Fig. 6).

Discussion and conclusion

Here, we employed the worm-like chain model to describe the elastic behavior of a system made of longitudinally connected quadrilaterals. To obtain analytic expressions for the probability densities associated with the thermal fluctuations of the tip and the corners of the system, we used the positional–orientational propagator of a semiflexible filament in the weakly bending regime, introduced in refs. 6, 87 and 88. We used the probability densities to calculate the force–extension relation of the tip of the system for an arbitrary number of quadrilaterals in the fixed-extension ensemble, and we offered a closed analytic expression valid in the linear regime (since we used the weakly bending approximation). The force constant associated with the force–extension relation is proportional to the bending stiffness k and to the square of the number of quadrilaterals, N², and it is inversely proportional to the cube of the total contour length (the amount of polymeric material), L_PM³, and to sin²(ω). We also obtained an analytic expression for the Poisson's ratio of the system, which depends only on the angle ω.
The Poisson's ratio does not depend on the amount of polymeric material or on the bending stiffness; it is therefore a geometric quantity of our system. In the last step, we confined the system with a stiff planar wall and calculated the average of the force exerted on the wall. The analytic expression for the average force depends on the persistence length l_p, the total contour length L_PM, the thermal energy k_B T, the number of quadrilaterals N, the angle ω, and the distance d of the wall from the grafting point of the system. All of the relations obtained in this article are closed analytic expressions, so it is easy to track the behavior of the system theoretically across the parameter space of the linear force regime.

The theoretical model presented here can be applied to experimental and molecular modeling data available from research into DNA minicircles. Michele Caraglio et al. studied DNA minicircles confined in a two-dimensional space with a coarse-grained model called oxDNA. 89 oxDNA has two versions: the first version uses symmetric grooves and is called oxDNA1, while the second is implemented with asymmetric grooves. They reported a rounded square shape for oxDNA1 and a circular shape for oxDNA2 when the DNA minicircle is confined in a two-dimensional space. 89 Our calculations can be used to describe the two-dimensional bending elasticity of the square shape of the DNA that appears in the oxDNA1 model. The bending modes of DNA minicircles were studied by Davide Demurtas et al., 90 a molecular dynamics study of minicircles was done by Marco Pasi et al., 91 and a theoretical study of the effect of bending anisotropy in minicircles is available in ref. 92. The circular shape of DNA minicircles can be approximated by a square, and our model can then be applied to describe their bending elasticity. A square-shaped structure made of DNA is reported in ref. 93. Sungwook Woo et al. experimentally studied the self-assembly of two-dimensional DNA origami. 94 They report a two-dimensional square shape of DNA that can be arranged in one-dimensional and two-dimensional lattices; the elasticity of the one-dimensional lattice made of DNA squares can be studied with our model (see Fig. 5 in ref. 94). The model presented in our article can easily be modified to describe the bending elasticity of semiflexible polymeric materials of more complex topologies, which we suggest for future work.

Conflicts of interest

There are no conflicts to declare.

A.1. The force–extension relation in the fixed-extension ensemble

Here, we consider a system made of a semiflexible polymer described by a Hamiltonian H. The probability density function of the x coordinate of the fluctuating tip of the polymeric system can be viewed as a partition function, 88

P_x(x) ∝ Z(x) = ∫ D[r(s)] exp(−H/k_B T) δ(r_x(L) − x),

where r(s) = (r_x(s), r_y(s), r_z(s)) is the position vector at arc length s and r_x(L) is the x coordinate of the tip of the system. The partition function is in the fixed-position ensemble because of the term δ(r_x(L) − x), which fixes the x component of the position of the tip at the value x. The partition function gives the free energy associated with the tip, F(x) = −k_B T ln Z(x), and the free energy defines the average of the x component of the force exerted on the tip in the fixed-extension ensemble,

⟨f_x(x)⟩ = dF(x)/dx = −k_B T (d/dx) ln P_x(x).   (43)

This is the force–extension relation in the fixed-extension ensemble.
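Relation (43) lends itself to a simple numerical check: given any probability density for the tip position, the fixed-extension force follows from the derivative of its logarithm. The sketch below does this for a Gaussian density whose variance σ² = L³ sin²(ω)/(6 l_p) is chosen so that it reproduces the linear force constant 6 k_B T l_p/(L³ sin²ω) quoted above for a single quadrilateral; treat it as an illustration of the recipe rather than a substitute for the paper's full expressions, and the parameter values as arbitrary.

```python
import numpy as np

kB_T, l_p, L, omega = 1.0, 25.0, 1.0, np.radians(30)    # illustrative units
sigma2 = L**3 * np.sin(omega)**2 / (6 * l_p)             # assumed Gaussian tip variance
x0 = 2 * L * np.cos(omega)                                # rest length H_1

x = np.linspace(x0 - 0.05, x0 + 0.05, 401)
logP = -(x - x0)**2 / (2 * sigma2)                        # log-density up to a constant

# Eqn (43): <f_x> = -kB T d/dx ln P_x(x), evaluated by finite differences.
force = -kB_T * np.gradient(logP, x)

slope = np.polyfit(x - x0, force, 1)[0]
print(f"numerical force constant          : {slope:.3f}")
print(f"analytic 6 kB T l_p/(L^3 sin^2 w) : {6*kB_T*l_p/(L**3*np.sin(omega)**2):.3f}")
```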
The experimental setup for measuring this relation should be built so that the position of the tip in the x coordinate is fixed while the other components of the position are free; in such a setup the force exerted in the x coordinate fluctuates thermally. By similar reasoning we obtain a force–extension relation in the y coordinate (eqn (44)), where P_y(y) is the probability density function of the y coordinate of the fluctuating tip of the polymeric system. In the next subsections, we use eqn (43) and (44) to calculate the force–extension relations associated with the connected quadrilaterals.

A.2. The Poisson's ratio

Here, we calculate the Poisson's ratio of a system. To do so, we consider the case of one grafted quadrilateral (see Fig. 2) and calculate the y component of the displacement of the lateral corner labeled 11 due to the displacement of the tip labeled 12 in the x coordinate. The probability density of the x component of the position of the tip of the quadrilateral (labeled 12) is denoted P^X_{ω,1}(x₁₂); the force–extension relation associated with this probability density is eqn (45), where ⟨f_{x₁₂}(x₁₂)⟩ is the average fluctuating force in the x coordinate that keeps the x component of the extension of tip 12 at the fixed value x₁₂ while the other points of the quadrilateral are free. The probability density of finding the x component of the position of corner 12 at x₁₂ and the y component of the position of corner 11 at y₁₁ is denoted P^PR_{ω,1}(x₁₂, y₁₁); the force–extension relation associated with this probability density is eqn (46), where ⟨f^X_{12}(x₁₂, y₁₁)⟩ is the corresponding average fluctuating force in the x coordinate. Equating the forces in eqn (45) and (46), we find the relation between the lateral contraction or expansion of the system and the longitudinal displacement of its end point,

(d/dx₁₂) ln P^X_{ω,1}(x₁₂) = (d/dx₁₂) ln P^PR_{ω,1}(x₁₂, y₁₁).

We then define the Poisson's ratio of the quadrilateral accordingly.

A.3. The force exerted on a stiff confining wall by a fluctuating tip

Here, we take the grafted quadrilateral as an example of the polymeric system and confine it with a stiff, impenetrable wall in the x coordinate. The distance of the wall from the grafting point of the quadrilateral is fixed at d, and the fluctuating tip of the quadrilateral exerts a fluctuating force on the wall; we are interested in the average of this force (see Fig. 1, 2 and 7). To do so, we use the method introduced in refs. 6 and 95. The force originates from the reduction in the number of configurations of the system due to the presence of the wall. The number of configurations of the system is 6,95

Z(d) = ∫_{x₁₂ < d} P^X_{ω,1}(x₁₂) dx₁₂,

where P^X_{ω,1}(x₁₂) is the probability density of finding the x component of the position of the tip of the quadrilateral at the value x₁₂. The derivative of the logarithm of this number of configurations with respect to d gives the average of the fluctuating force. In this method, we ignore the steric effect of the wall on the other corners of the system.
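The confinement free-energy argument of A.3 can also be evaluated numerically: integrate the tip density up to the wall position d, take the logarithm, and differentiate with respect to d. The sketch below does this for the same illustrative Gaussian tip density used earlier, with k_B T as the prefactor; the parameter values are arbitrary, and the Gaussian is only a stand-in for the paper's full expression for P^X_{ω,1}.

```python
import numpy as np

kB_T, l_p, L, omega = 1.0, 25.0, 1.0, np.radians(30)
x0 = 2 * L * np.cos(omega)                        # rest position of the tip
sigma = np.sqrt(L**3 * np.sin(omega)**2 / (6 * l_p))

def log_Z(d, n=4000):
    """log of the configuration count Z(d): integral of the tip density over x < d."""
    x = np.linspace(x0 - 8 * sigma, d, n)
    P = np.exp(-(x - x0)**2 / (2 * sigma**2))     # unnormalized Gaussian stand-in
    return np.log(np.trapz(P, x))

eps = 1e-4
for d in np.linspace(x0 - 2 * sigma, x0 + 2 * sigma, 9):
    # Entropic push on the wall from the change in available configurations.
    f = kB_T * (log_Z(d + eps) - log_Z(d - eps)) / (2 * eps)
    print(f"d - x0 = {d - x0:+.4f}: average force on wall = {f:.2f}")
```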
A.4. Three longitudinally connected quadrilaterals grafted on a substrate

The probability density of finding the end point of the grafted three longitudinally connected quadrilaterals at position (x₁₆, y₁₆) is given by an analogous integral over the propagators of the constituent filaments. From it we obtain the probability density of finding the x coordinate of the end point of the structure at the value x and, similarly, the probability density of finding its y coordinate at the value y. The relation between the lateral and the longitudinal displacement takes the form

(y₁₅ − L sin(ω)) / (x − 6L cos(ω)) = −(1/6) cot(ω),   (58)

where y₁₅ − L sin(ω) is the displacement of the corner labeled 15 in the y coordinate due to the displacement x − 6L cos(ω) of the corner labeled 16 in the x coordinate. The Poisson's ratio then follows. As in the previous section, if we confine the system with a stiff wall in the x coordinate, the wall experiences a fluctuating force, and its average is obtained in the same way.
2020-11-06T14:07:23.004Z
2020-11-05T00:00:00.000
{ "year": 2020, "sha1": "9414f4d124dbad41892e24e4a68bf414c7689946", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2021/sm/d0sm01719a", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "700bc538af1f1a83e2cfc1a484b6603a06e0cc28", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
64623692
pes2o/s2orc
v3-fos-license
Brief Review on SQL and NoSQL

High-speed processing is the need of industries, and so industry after industry shifts to digitization. The digital economy gives a new look to industries through massive changes. The aim of this paper is to give an overview of CAP in GDB and the ACID properties in RDBMS, and then of why both developers and enterprises typically aim at providing simple, yet very efficient, solutions for specific problems related to speed, data flexibility and unprecedented levels of scale. The next focus will be on the problems that arise while migrating from SQL to NoSQL.

Keywords: data migration, NoSQL, RDBMS, Big Data

In the digitized era, more and more people are doing more and more online, whether at home, at work, or on the go: everything from payments to online shopping and so on. This online trend brings technical challenges such as scaling and maintenance of the databases, questions that relational databases faced for digitized data. With this problem of big data, a few companies introduced alternative databases for market applications, concerned that "one size fits all" stores solve the storage problem only partially. According to Jon Travis, an engineer at SpringSource, people are looking at other technologies because the volume of data is getting so huge day by day [1]. Hence researchers were in search of a magical database that would be able to solve the big data problem and also provide solutions for the main Big Data requirements: huge volumes of data and high velocity of requests. In order to overcome the problem of scalability, a variety of non-relational structural solutions have been introduced, such as Dynamo by Amazon and BigTable by Google.
In 2006 and 2007, Google and Amazon published research papers explaining the need for a new kind of database with agility and high scalability, and for tools or services able to support an ever-increasing number of users and data [3]. Big Data is formed of the 3Vs, namely Variety, Volume and Velocity. In [2] the researcher defines the term Variety: variety refers to the various formats of the data being generated and stored, volume is the amount of data, and velocity is the rate at which this data is produced [2]. Hence a strong foundation for any real-time big data architecture was needed, and a non-relational set of architectures termed NoSQL (Not only SQL) was introduced. The term "NoSQL" was first applied by Carlo Strozzi to a database that did not use SQL as its structured query language, though it was still based on a relational model; it provided a fast, portable relational database management system with huge scalability [8]. Nowadays the term "NoSQL" is commonly read as "Not only SQL" and is considered the backbone of non-relational storage systems. Most NoSQL systems are developed specifically to address scalability and availability.

In the era of mainframes and business applications, many developers used a mature query technology, namely SQL. From the beginning, RDBMSs were very popular. In fact, in 1979 the first commercial implementation was released by a small and unknown software company named "Relational Software"; this company has now become popular as Oracle. Johannes Zollmann, in his chapter, described the ACID properties of the RDBMS and notes that ACID properties give strong guarantees on consistency [9]. Though ACID makes relational databases strong, one drawback of this type is that such databases work with a single server, which means the only way of enhancing capacity is upgrading the server. Another researcher, Antro Salminen, notes in his seminar that the only way to scale up an RDBMS is by adding hardware processing power [6]; an RDBMS also has a physical storage limit [3]. Hence a database was required that supports current-generation web, mobile, and other applications operating at any scale. As need is the mother of invention, when everyone was facing the problem of big data, NoSQL was introduced by researchers. NoSQL is divided into four different types: Key-Value, Document, Column and Graph. Each type has its own advantages, and the selection of a type depends on the type of application and its requirements. Some of the advantages of NoSQL that make it powerful are horizontal scaling, denormalization and replication. NoSQL saves the time spent on relational databases' join queries by providing simple graph traversal operations, and frequently required information can be stored together in one place [19]; a minimal sketch contrasting the two storage styles appears at the end of this section.

Migration of data between SQL and NoSQL depends on the style of NoSQL database. Relational database expert Chris Bird observed syntactical differences between NoSQL and SQL, and also found that migration is hard, requiring some mental gymnastics from new users of NoSQL. Hence the migration process depends on which NoSQL technology is selected for use. Some researchers have tried to bridge the gap between RDB and GDB. Graph databases are able to represent any kind of information as a graph and naturally accommodate changes in data [15]. Cypher and Gremlin are examples of graph data query languages. Some NoSQL databases do not support range queries or joins, so users are restricted in what they can view and how quickly they can view it [10].
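To make the join and denormalization trade-off concrete, the sketch below contrasts a normalized relational layout (two tables joined at query time) with a denormalized, document-style layout (one record per entity). It is an illustration only: the table names, fields and the in-memory dictionary standing in for a document store are hypothetical and are not taken from any system discussed in this paper.

```python
import sqlite3

# Relational (normalized) style: data split across tables, joined at read time.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, item TEXT,
                         FOREIGN KEY (customer_id) REFERENCES customers(id));
    INSERT INTO customers VALUES (1, 'Asha');
    INSERT INTO orders VALUES (10, 1, 'keyboard'), (11, 1, 'mouse');
""")
rows = db.execute("""
    SELECT c.name, o.item
    FROM customers c JOIN orders o ON o.customer_id = c.id
    WHERE c.id = ?
""", (1,)).fetchall()
print("relational join:", rows)

# Document (denormalized) style: frequently read data kept together in one record,
# so no join is needed at read time (a plain dict stands in for a document store).
documents = {
    1: {"name": "Asha", "orders": [{"id": 10, "item": "keyboard"},
                                   {"id": 11, "item": "mouse"}]},
}
print("document lookup:", documents[1])
```

The normalized layout avoids duplication and keeps updates in one place, while the denormalized record trades some redundancy for a single read without a join, which is the scalability argument made above.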
An Oracle white paper on successful data migration (October 2011) notes that the choice of a new system is an exciting, strategic business activity that usually entails working with new technologies, suppliers and opportunities. It also observes that, without sufficient knowledge of both source and target, data migration can cause major, minor or hidden legacy problems, which amounts to playing with risk [17]. It is usually assumed that the target of a migration resembles the existing system and supports the same structure; however, the NoSQL query structure is harder to gain user acceptance for. Researcher Patrícia Cavoto observed in her research work that the complex structure of a relational DB can make data analysis in an RDB complex, and that the relational models are not as flexible as the graph model for data analysis [11]. Some researchers found that a relational database might require very sophisticated and expensive operations and needs complex join operations to fetch the required results, which leads to performance and efficiency deterioration [15,16]. Concerning the rethinking and remodeling of data migration, which is an essential part of almost every organization, while migrating data between SQL and NoSQL developers need to think about how to represent the existing model in the new database [18]; the mechanisms for storage and retrieval of data are designed in different ways in the two kinds of databases.

Figure 1: SQL vs NoSQL databases [21]

Power of SQL:
ACID compliance: the ACID properties are the greatest strength of SQL; they reduce anomalies and protect the integrity of the database.
Security: in the handling of data in DBMSs, the security of data is important. NoSQL databases are weaker in this respect, since permissions and access control have to be provided around these systems, meaning they lack built-in security compared with SQL.

Limitations of SQL:
Achieving scalability and elasticity is a huge challenge for relational databases. Researchers also found that a small change to one table can cause changes across the system [20]. It is observed that the response time of an SQL query changes with many factors, one of them being volume and scalability; performance depends on the data volume [19]. Another limitation of SQL is its fixed set of columns, which is not suitable for big data, and hence there is a demand for non-relational databases.

Challenges in Migration:
Leonardo Rocha et al. point out challenges in migrating between the two models.

Conclusion:
From the survey we can conclude that: a) existing software, not only large-scale but also medium-scale applications, relies on relational databases; and b) due to the variation in syntactical and storage structure between SQL and NoSQL, migration is a problem. The amount of data stored in SQL has an impact on its performance, i.e., the data stored and the performance are inversely proportional to each other, and the effect of this is that queries become slower. Whether this ratio can be improved by giving SQL the horizontal-scaling power of NoSQL is the key question when discussing dataset scalability.
2019-02-17T14:18:42.887Z
2017-12-12T00:00:00.000
{ "year": 2017, "sha1": "941884d4c56b59db8ebcc6a87b9e042d83f4184f", "oa_license": "CCBY", "oa_url": "https://www.ijtsrd.com/papers/ijtsrd7105.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "a99ae4ee238f3fd145c1ff20a7b4bb370ebfedeb", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
251459569
pes2o/s2orc
v3-fos-license
Narrative review of the prevalence and distribution of acute pain in children in the self‐care setting Abstract Acute pain among children is common, yet it may be underestimated and undertreated if the pain is not recognized. Assessing and managing pediatric pain can be complicated, and as such, measuring the prevalence of acute pain in children can be challenging. We sought to provide a consolidated review of the available data on the prevalence of commonly occurring acute pain in children in the self‐care setting. An extensive literature search was performed to determine the prevalence of acute pain at multiple bodily locations in children aged between 3 months and 18 years. We considered the influence of age, sex, and sociodemographic factors on prevalence estimates. We also sought to identify some of the challenges involved in assessing and managing pediatric pain, thus shedding light on areas where there may be clinical and medical unmet needs. In general, a high prevalence of acute pain in children was detected, particularly headache, menstruation‐related pain, and dental and back pain. Older age, female sex, and lower socioeconomic status were associated with increased pain prevalence. Risk factors were identified for all pain types and included psychological issues, stress, and unhealthy lifestyle habits. Owing to the heterogeneity in study populations, the prevalence estimates varied widely; there was also heterogeneity in the pain assessment tools utilized. The paucity of information regarding pain prevalence appears to be out of proportion with the burden of acute pain in children. This could indicate that clinicians may not be equipped with an optimal pain management strategy to guide their practice, especially regarding the use of developmentally appropriate pain assessment tools, without which prevalence data may not be captured. If acute pain is not accurately identified, it cannot be optimally treated. Further investigation is required to determine how the information from prevalence studies translates to the real‐world setting. | INTRODUC TI ON Acute pain is described as lasting for 3 months or less, 1 and is usually caused by illness, injury, or medical procedure. 2 Acute pain is commonly experienced by children, although it may be underestimated and undertreated because of the inability of young children to understand and effectively communicate their symptoms, and the perception by adults and healthcare professionals that the pain is not serious enough to warrant intervention. [3][4][5] Pain is a subjective phenomenon and unique to an individual. Assessing pediatric acute pain can be problematic, and there are numerous pain assessment tools whose use depends on the child's age, cognitive and communication skills, and pain location. There is currently no evidence to suggest that one tool is superior to the others. 6 All relevant information pertaining to the child's situation must be considered, 1 to allow the pain to be recognized and treated appropriately. The effect of untreated pain on children's daily activities and quality of life can be significant, 3 and some may experience a continuation of their pain and its sequelae from childhood into adulthood. 
Measuring the prevalence of acute pain in a pediatric population can be challenging, and various factors relating to trial methodology can influence the quality and quantity of data, such as small sample sizes, recruitment of pediatric patients across a wide age range, and ethical challenges that researchers need to consider when carrying out pediatric clinical trials. 7,8 Studies may also involve subsets that are not applicable to the general population. 3 The primary aim of this review was to provide a consolidated review of the available data on the prevalence of commonly occurring acute pain, stratified by pain location, in children aged between 3 months and 18 years being treated in the self-care environment. We also sought to determine the effects of age, sex, and sociodemographic factors on pain prevalence, and elucidate the key challenges involved in assessing and managing acute pain in pediatric populations. | Selection criteria We performed a literature search and review and have provided a narrative account of the prevalence of acute pain in children in the self-care setting. After removing the duplicate records, we screened the titles and abstracts of the retrieved references against the predefined inclusion and exclusion criteria. The inclusion criteria were: epidemiological studies of any design and duration, including experimental studies (randomized controlled trials, field trials, and community trials), observational studies (descriptive and analytical studies, except for case reports), and meta-analyses and systematic reviews that specifically reported data on self-limiting, self-treated acute pain (i.e., not from a chronic pain condition); full publications in English from the last 10 years (August 2010 to August 2020) including children 3 months to <18 years of age being treated in a self-care environment for acute pain of 1-week duration or less; treatments that were over-the-counter (OTC) pain medications or no treatment in a self-care environment; and outcomes of prevalence, type, and severity of acute pain in a self-care environment in children. The definition of self-care applied was that of the World Health Organization. | Prevalence of headache The analysis of headache did not take into account chronic progressive and chronic nonprogressive headaches or any mixed type of headache pattern. In this analysis, some questionnaires used to assess headache were based on diagnostic criteria for headaches, such as the International Classification of Headache Disorders (ICHD)/International Headache Society criteria 14-17; the Health Behavior of School-aged Children (HBSC) symptom checklist 18-23; or other validated or previously used tools. 24-30 Prevalence estimates have previously indicated that headache is one of the most commonly reported types of pain in children. 31 In 15 studies included in this analysis, estimates of headache prevalence ranged from 17.4% to 67.8% 3,14,19,21-30,32,33 (Table 1). In five studies where an age breakdown was available, there were clear increases in head pain prevalence with increasing age, 3,18,23,24,32 although in the report by Keeratisiroj et al., 28 this age-related increase occurred in girls but not boys from the ages of 9-14 years (48.8%) to 15-19 years (60.3%). Only one study, by Du et al., 3 evaluated headache in children aged under 5 years (Table 1). Headache was reported more frequently in girls than in boys in 13 studies where a split according to sex was available. 3,18,20,22-30,33
Indeed, in female-only studies, the prevalence rates were higher than in the mixed-sex studies, ranging from 67.2% to 87.7%. 16,17 There was no difference in headache prevalence between different socioeconomic groups. 3,24 The prevalence of migraine ranged from 8.8% to 23.5% in four studies (episodic and probable migraine figures combined for Arruda et al.); 14,16,17,27 migraine occurred more frequently during menstruation, 16,17 which is in keeping with the knowledge that in early childhood, migraine is more commonly seen in boys than girls, but from puberty onwards the prevalence in females rises rapidly. 11,12,34
FIGURE 1 Dot plot showing prevalence estimates from individual studies for each pain type/location. *Female population only.
Two studies found that psychological symptoms, such as depression and anxiety, were significantly associated with headache. 18,24 A positive family history of headache was reported by 53.2% of children in one study by ALBashtawy et al. 27 With advancing age, children are increasingly exposed to risk factors and triggers, such as social pressures and educational demands, and certain unfavorable lifestyle habits, such as smoking and drinking alcohol, all of which may influence the frequency of headaches and compromise their quality of life. 19,35 The findings of our analysis suggest that headaches are more prevalent in adolescence than in early childhood and involve an interplay between biological, psychological, and socioenvironmental factors. A biopsychosocial approach to pain management would thus be of greater benefit than pharmacological treatment alone 36,37 and would allow the therapeutic strategy to be tailored to the factors that are unique to the individual. 36 Behavioral treatment strategies can also help to ensure compliance with a pharmacological treatment, as well as supporting an improvement in longer-term outcomes. 37 In most cases, headaches do not indicate a serious underlying condition and can be managed at home with support from the parent/carer, and with advice from a pharmacist if needed. 38 Reports of analgesic use in the studies in this analysis were limited. Du et al. 3 noted that 10% of children used medications or went to see a doctor, although this was not specific to headache. ALBashtawy et al. 27 reported that 26% of children sought help to relieve pain, and of those, 43.4% were advised to take analgesics. In the study by Lima et al., 17 a need for pain medication was reported by 70.3% of adolescents (female-only study), although only 26.2% sought medical attention, indicating that most subjects self-medicated. Adebayo et al. 16 noted that females with menstruation-related headaches were significantly more likely to consult a doctor (53.8% vs 30.9%; p = 0.03) and were more likely to self-medicate (76.9% vs 59.1%; not significant) compared with females with non-menstruation-related headaches. They also found that the most used medication was paracetamol (67.5% for all primary headaches). In a large international comparative study by Gobina et al., 22 almost half of 15-year-old adolescents used medicine for headache. It is important to ensure that a child with a headache receives appropriate treatment, including the right medication at the correct dose, in a timely manner. However, overuse of medications is itself a contributing factor to headaches in children.
39 It is evident that some of the risk factors for headache are preventable, and by adopting nonpharmacological as well as pharmacological therapies for selected patients and certain headache types, the frequency of headaches could be reduced. Although there was no evidence that participants were advised to make lifestyle changes in this analysis, any pragmatic strategy to prevent acute headache and avoid any possible progression to a more chronic state has the potential to improve the overall health of children and adolescents. In this analysis, most headaches were self-treated and advice from a primary care physician was rarely sought. Despite the availability of effective treatment, clear guidance on acute headache diagnosis and management is currently lacking. | Prevalence of abdominal pain The cause of abdominal pain in children may originate from several organs or systems, including the stomach, intestines, appendix, liver, and gall bladder. Acute abdominal pain in children is also confounded by the range of underlying conditions that may be triggering the pain. These may be categorized as nonsurgical conditions, such as gastroenteritis, or surgical conditions, such as appendicitis. 40 Acute abdominal pain can also occur with sickle cell disease, urinary tract infection, and short-term constipation. 40 In this analysis, some assessments of abdominal pain were based on the HBSC symptom checklist, 18,20-23 ICHD-2 criteria, 41 or other validated or previously used tools. 24,25,30,42 Our analysis indicates that the prevalence of acute abdominal pain ranged from 12.0 to 49.8% in the general pediatric population in 10 studies 3,[21][22][23][24][25]30,32,33,42 (Table 2). Two studies reported prevalence values of 1.5% and 57.0%, 41,43 although in both studies patients attended pediatric clinics or tertiary care centers. As these patients may not be reflective of the general population, these values have not been included within the overall estimate of prevalence of acute abdominal pain. Results from four studies where an age breakdown was available were mixed ( Table 2) reported that the prevalence of abdominal pain did not increase across ages 3-6, 7-10, and 11-13 years, but was higher in children aged 14-15 years. Romero-Acosta et al. 24 found that prevalence was lower in those aged 11-12 years than in those aged 8-10 years, but higher in children aged 13-14 and 15-16 years. Gustafsson et al. 18 demonstrated similar rates in children aged 10, 12, and 15 years. The inconclusive effect of age on abdominal pain prevalence is perhaps not surprising given the different age brackets studied, as well as the differences in methodology used and populations studied, which do not allow an accurate comparison. Abdominal pain was reported more frequently in girls than in boys in seven studies where a split according to sex was available. 3,18,[22][23][24][25]33 In the studies reviewed here, one study showed an inverse association between lower abdominal pain and socioeconomic status, 3 while another study indicated that there was no impact of social status on the prevalence of abdominal pain. 24 Two studies found that psychological symptoms, such as depression and anxiety, were significantly associated with abdominal pain. 18 Acute abdominal pain is common in children. Pain in the abdomen could signify a more serious condition 40 and thus should be closely monitored. 
In this analysis, however, there was no consistent approach for the assessment of acute abdominal pain, as a variety of tools were used, and the underlying causes were not investigated. | Prevalence of menstruation-related pain Pain related to menstruation, or dysmenorrhea, is a significant and widespread problem in adolescent females. While primary dysmenorrhea has no underlying pathology, secondary dysmenorrhea involves painful menstruation with underlying pathology. 44 The most common cause of secondary dysmenorrhea is endometriosis, a chronic condition that is frequently underdiagnosed 45 and can be considerably detrimental to quality of life. 44 In the studies included in our analysis, self-administered questionnaires were primarily used to elicit information from students. Most of the questionnaires were not based on specific diagnostic criteria or previously validated instruments, although two studies mentioned the use of pretested or validated tools. 46,47 Pain scales (faces or numeric rating) were used in around half of the studies. 47-53 Interviews were conducted in three studies. 47 The percentage of subjects who took medication to treat their menstrual pain ranged widely from 10.2% to 63.8%. 47-49,51,53-55 In some studies, alternative means of pain relief were described (Table 3). 47-50,53 Rani et al. 49 noted that girls in rural areas (village, native place, or county away from a city or urban or developed area) were more likely to choose natural methods to relieve their pain, whereas girls in urban areas tended to use medications; this may highlight a lack of access to medication for dysmenorrhea or a lack of education regarding the availability and safe use of these medications in rural areas. Menstruation-related pain is highly prevalent among adolescent females. Overall, these findings highlight numerous disadvantages that girls and young women with dysmenorrhea face from a physical and social perspective. | Prevalence of dental pain Dental pain is a common experience for children and can have a significant impact at a functional, social, and psychological level. The prevalence of dental pain in the general pediatric population ranged from 7% to 61%. Ulceration, bleeding gums, missing teeth, filled teeth, and erupting molars have also been associated with dental pain. 68,73-75,88 An unhealthy diet, including the consumption of sugary or soft drinks, fried foods, sweets, and alcohol, was also indicated as a risk factor for dental pain. 60 Du et al. (2011) 3 noted that the association between social status and tooth pain could arise from insufficient hygiene within families of low socioeconomic status. Indeed, a link between dental hygiene and dental pain was identified in several studies. Brushing frequency, the age at which the child started brushing their teeth, age of first dental visit, whether the child had visited a dentist in the past year, and the oral health of the mother were all predictors of dental pain. 64,70,81,94 Dental anxiety is common in children and can lead to the avoidance of dental treatment and care, resulting in poor oral health and increasing the possibility of experiencing pain. 72 | Prevalence of musculoskeletal pain in the back, neck, shoulders, and spine Pain related to the back, neck, shoulders, and spine is a significant problem in children, especially in young adolescents of secondary school age.
In terms of pain assessment, many studies in this analysis asked children if they had experienced pain in regions clearly indicated on a body map or mannequin, 28,[113][114][115][116][117][118][119][120][121][122] or instructed them to indicate where the pain was located using pain drawings. 123 Some questionnaires were based on diagnostic criteria such as the HBSC symptom checklist, 21,22 or other previously validated tools and questionnaires. 28,115,122,[124][125][126][127] In this analysis, the prevalence of back pain was 10. and spinal pain in our analysis, including school bag use (carrying time, load, way of carrying), 28,115,128,131,133,138,143,146,152,153 posture, 129,137,146 prolonged time spent in a sitting or sedentary positi on, 117,[130][131][132]138,139,153 school furniture, 152 and activities that require bending. 113,129,130,147 Five studies indicated that computer and TV use were associated with increased risk of pain. 28,114,117,153,154 The impact of exercise on back, neck, shoulder, and spine pain is uncertain-while a number of investigations indicated that lack of physical activity was a risk factor for pain, 30,114,136 being physically active in sports activities was also associated with pain in this analysis. 114 These findings therefore suggest that genetic, environmental, psychological, social, and physical factors may be associated with musculoskeletal pain. 116,126,134,140,143,146 Estimates for the prevalence of patients who sought medical care for their musculoskeletal pain ranged between 1.6% and 34%. [115][116][117]123,125,[130][131][132]134,135,137,139,146,148 [130][131][132]146 Given that musculoskeletal pain in the upper body can persist into adulthood, 33,114,116 early treatment of pain is important to relieve discomfort for the child and minimize the consequences of pain during later life. 33,116,140 It is clear that some musculoskeletal pain is preventable and that appropriate education on physical and behavioral risk factors is necessary to reduce back, neck, shoulder, and spinal pain. 126,131,141,152 However, musculoskeletal pain is a multifactorial condition that may benefit from a biopsychosocial approach to pain management. 36,116,118,125,126,129,143,146 | Prevalence of musculoskeletal pain in the limbs Limb pain is one of the most common types of pain experienced by children. 155 It can be caused by specific and diagnosed conditions, including cerebral palsy, muscular dystrophy, arthritis, rheumatism, and diagnosed postural defects, 144,155,156 but also by nonspecific causes such as playing sports/exercise. 28,155 Growing pains are a common cause for any type of limb pain in children. 155 Although musculoskeletal limb pain may result from various causes including chronic conditions, this analysis only included estimates of prevalence from studies wherein the musculoskeletal pain was acute in nature. Some studies in this analysis adopted previously validated questionnaires to assess musculoskeletal limb pain, such as the Standardized Nordic Questionnaire or Standardized Nordic Questionnaire for Osteomuscular Symptoms. 28,141,157 The overall prevalence of musculoskeletal limb pain ranged from 2.1% to 56.6% in three studies, 155 Four studies reported that pain prevalence was similar across sexes with respect to lower and upper body pain 3,29,157,158 ; however, three investigations indicated that the prevalence of limb pain was slightly higher in boys than girls, 155,159 particularly in the knees, ankles, and feet. 
Therefore, it is difficult to conclude whether males are more likely than females to experience musculoskeletal pain in the limbs. Musculoskeletal pain in the limbs can interfere with daily activities such as studying, sleeping, and playing sports, and can lead to functional impairment. 144,145,157 Two studies identified being overweight as a risk factor for musculoskeletal pain in the upper limbs, knee joints, and feet; 144,145 however, firm conclusions regarding the impact of body weight could not be made due to conflicting evidence presented by Saes et al., 157 who did not find an association between the presence of knee pain and increased body weight. Lower limb pain has traditionally been associated with traumatic pain rather than stress-associated pain, although this was explored further by Østerås et al. (2015). Musculoskeletal pain in the limbs is highly prevalent in children and adolescents, and is associated with biological, physical, psychological, and socioeconomic factors. 28,144,145,155,156,158 However, literature on musculoskeletal pain in children is scarce, highlighting the need for further studies on limb pain and its associated factors to facilitate preventative measures, early diagnosis, and effective interventions for treatment. | DISCUSSION This review aimed to provide a consolidated summary of the available data on the prevalence of commonly occurring acute pediatric pain in the self-care environment, stratified by pain location. We also sought to determine the influence of age, sex, and sociodemographic factors on pain prevalence, and to highlight some of the key challenges involved in assessing and managing acute pediatric pain. The results of this review indicate that there is a high prevalence of acute pain in children, particularly headache, dental and back pain (Figure 1). Menstruation-related pain is also a very common problem in females following menarche. Owing to the heterogeneity in study populations, the prevalence estimates varied widely. Evidence suggests that some of the acute and self-limited types of pain experienced by children today may be associated with factors such as diet, alcohol, 4 sedentarism, 162 obesity, 163 and screen time. 164 Socioeconomic factors such as maternal education level may also influence the occurrence of pain, with higher pain frequency reported in more disadvantaged children. 165 Although the primary aim of this analysis was to report the prevalence of acute pain in children, this review also sought to elucidate some of the key challenges regarding the assessment of acute pain in children based on the information reported in the prevalence studies; however, the search strategy was not structured for this purpose. In addition, none of the prevalence studies we reviewed included children with intellectual or developmental disabilities. Further research is required to understand prevalence rates in an all-inclusive general population. Most types of pain in children can be described using the same classification system as adults; those that are based on pain duration (i.e., acute versus chronic pain) and underlying pathophysiology (i.e., nociceptive versus neuropathic pain) are used most often. 172 This raises the question of whether a pediatric classification system would be more appropriate for acute pain, to reflect its multimodal nature and with appropriate terminology and clarity around potential causes, risk factors, and mechanisms.
Acute and chronic pain in children should be reviewed from a developmental perspective, 1 and as such, separate classifications per age groups may even be worth considering, for example, 0-5, 6-11, and 12-18 years. There are new techniques in development that promise to quantify the pain experience in the very young in whom communication of their pain experience is not possible or limited, for example, functional magnetic resonance imaging. 173 Given that pain is a subjective experience, it will be important that consideration is given to accessibility, timing of treatment, and overall pain perception when looking to validate these tools, in terms of potential clinical utility as tools to determine a more accurate assessment of pediatric pain. Although this review looked at acute pain prevalence in children by analyzing a large number of studies, there were several limitations that we encountered. The scope of this review was broad, and the studies included were heterogeneous in terms of pain location, age ranges, socioeconomic groups, geography, and methodology. Comparability between studies was complicated by the reporting of different recall periods for when the pain occurred (e.g., point prevalence, weekly or monthly). Pain prevalence studies involving small subsets may not be truly reflective of the general population, hence why the prevalence estimates reported here have such wide ranges. The lack of standardization and robustness of the questionnaires utilized also limited the ability to draw firm conclusions regarding acute pain prevalence. In studies with a pediatric population covering a wide age range, the use of both self-reported and parent/carer-assessed pain may have introduced inconsistency and bias; evidence suggests that although parent reports have value, parents may be largely unaware of their child's pain and report their experience inaccurately. 174,175 There may also be an element of reporting bias if children choose not to communicate their pain because they do not want to bother their parents/carers or are scared at the prospect of visiting a healthcare professional. 168 Therefore, in some studies in this analysis, owing to the subjective evaluation of pain in very young children by carers, parents, or HCPs, the prevalence estimation of pain in infants may not be completely accurate, but this is unlikely to affect the overall reliability of the results of this review based on the low number of studies identified. Additionally, the effect of specific painkillers or interventions could not be determined owing to a lack of studies that reported how the pain was managed or treated. Finally, very few systematic reviews or meta-analyses were used in this analysis; therefore, data cannot be interpreted qualitatively; further investment in such articles may be warranted. The prevalence of acute pain in children and adolescents is high; certain pain types become more prevalent with increasing age, and females tend to experience pain more regularly than males. This The paucity of reported information in this area appears to be out of proportion with the prevalence and burden of acute pain in children, and this evidence gap could indicate that clinicians responsible for treating children with acute pain, or other healthcare professionals recommending treatment options, are not yet equipped with an optimal pain management strategy to guide their practice. 
Untreated pain in childhood can have significant consequences in adulthood; thus, it is essential to introduce preventative measures to reduce the risk of long-term complications. A more proactive attitude to investigation of child-reported pain is warranted; timely access to appropriate interventions may aid a faster recovery and reduce the risk of longer-term complications. Personalizing the management strategy by way of a biopsychosocial approach will ensure that the child is treated holistically according to their unique pain status. There are challenges associated with obtaining pediatric prevalence data, and applicability of findings from small studies to general populations can be problematic. Further comprehensive research is needed to understand whether the challenges faced at the preliminary level of prevalence studies are the same as those in interventional studies or observational studies, and how this translates to the real-world setting.
AUTHOR CONTRIBUTIONS All authors were involved in the conception of the review, and critically reviewed and commented on the manuscript at all stages of development. All authors approved the final manuscript.
ACKNOWLEDGMENTS The authors would like to thank Fiona Murray-Zmijewski for reviewing and contributing to the continuous improvement of this article and Frederic Esclassan for co-involvement in the creation of the search string for this manuscript. Medical writing support was provided by Caroline Sills and Leah Bundy at Elements Communications Ltd., Westerham, UK. Medical writing support was funded by Reckitt.
2022-08-10T15:21:09.332Z
2022-08-08T00:00:00.000
{ "year": 2022, "sha1": "fd69d3a9b0b73282c438221c97626f10b64b85aa", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Wiley", "pdf_hash": "484ae944bd7a7b91d666086994f9ba799242f8e0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
24453023
pes2o/s2orc
v3-fos-license
HAND HYGIENE IN HOSPITAL ENVIRONMENTS : USE OF CONFORMITY An exploratory descriptive study with a quantitative approach whose objective was to use indicators to evaluate the frequency and infrastructure for hand hygiene, as well as the nursing team’s knowledge about the subject. Systematized observation was carried out at hospital in the state of São Paulo, Brazil of the routine activities of 33 participating professionals (nurses and technicians) as well as the application of an individual questionnaire about the subject.1206 opportunities for hand hygiene were identified, though it was effected in only 481 (39.9%) of them. Alcohol solution was not used at any opportunity. The infrastructure indicator for hand hygiene was close to the ideal value (83.3%). The professionals reported a high frequency of hand hygiene, demonstrating knowledge in relation to its importance, yet contradicting the findings of the observation. It was concluded that, despite the adequate infrastructure, hand hygiene was below that expected, requiring actions and strategies to overcomes these barrier and increase the use of alcohol solution. Descriptors: Hand hygiene. Hospital infection. Nursing team. INTRODUCTION The nursing team is exposed to different occupational risks, with biological risk being the most frequent (1) .Like professionals, patients are also exposed to these risks during the assistance, with the resulting infections a serious problem for public health (2) .Health care-associated infections (HCAI) may increases resistance to antibiotics, prolong hospitalization, increases costs for the health system, patients and family members, and even cause death (3) . The National Sanitary Surveillance Agency (ANVISA) and World Health Organization (WHO) have joined forces for the implementation of a World Alliance for Patient Safety.This alliance, created in 2004, established six international safety targets, including the reduction of HCAI.In order to reach this target, for the 2005-2006 biennial, the First Global Patient Safety Challenge was launched, entitled "Clean care is safer care", aimed at the prevention and reduction of the incidence and seriousness of HCAI (4)(5) .This proposal also presents an impact on current clinical practice in various services.In this context, hand hygiene (HH) is indicated as a strategy that should be promoted and incentivized in health services, as it is a simple and effective measure (4,5) . The hands are bodily structures used often in direct contact with the patient, and are the main means of transmitting microorganism.Therefore, not adhering to hand hygiene compromises the quality and safety of the healthcare offered (6) .For there to be a break in this transmission chain it is necessary to adopt basic hygiene standards in the hospital environment, with HH having the greatest impact (7) .Thus, HH is recommended at different times: before and after contact with the patient, before carrying out aseptic procedures, after exposure to bodily fluids, and after contact with areas near to the patient (5) . 
A study conducted in the southern region of Santa Catarina measured the quality of HH in nursing professionals at Basic Health Units and demonstrated that the percentage compliance with HH was 31.7% through clinical procedures, indicating low compliance with HH in these services (8) .A systematic review of the literature indicates an HH frequency lower than 50%, even though the impact of this on the reduction of infection is understood (7) .Other studies also have also demonstrated this low compliance (6,9) . Since 2008, in order to improve adherence to HH, the WHO has been stimulating the implementation of a multimodal or multifaceted strategy composed of: adaptation of the structure of the institution by providing washbasins, soap, paper towel and alcohol solution, training and regular education for teams, periodic evaluation of HH with feedback for professionals, use of notices acting as reminders for professionals and information for patients and visitors, and the creation of a climate of institutional safety in which the subjects of all sectors work to promote HH (5) . Although there are efforts to increase the compliance of professionals with HH, it can be noted that this practice has still not been completely incorporated into work routines, a fact which leads to the transmission of microorganism and exposes nursing professionals to biological risk Thus, this study aimed to evaluated HH using indicators to evaluate the infrastructure and process, and verify the knowledge of the nursing team in relation to HH. METHODS This is an exploratory descriptive study with a quantitative approach conducted at a Teaching Hospital with 32 beds, in the municipality of São Carlos, São Paulo, Brazil, at the adult and pediatric and emergency clinical hospitalization units. 33 nursing professionals and technicians working at the institutions on the day, evening and night shifts took part in the study after receiving guidance and agreeing to participate and signing the Declaration of Free and Clarified Consent. Initially the participants were informed that the study relation to preventive HCAI measures, not specifying that it related to HH alone, so that there was no change in behavior as a result of the research.At the end of the collection of data, the participants were informed of the specific objective of the study and had the option of removing their consent. The collection of data in the period from September to December 2011 took place at two moments.First, direction and systematized observation of the nursing practice was carried out, aimed at identifying HH opportunities and effective realization of this practice, as well as the adequacy of the physical structure offered by the institution.After the observation, an instrument was applied to identify the knowledge of the professionals about the subject. The first moment was made up of 144 hours of observation of professional practice, so that each work shift in each sector received 12 hours of observation over three different, random days.The professionals were monitored by the observer -a nursing graduate, author of the study and capacitated for the task through a review of the literature on the subject and the method to be applied -during the execution of the procedures, so that there was no interference in them.A checklist was used from the Hospital Infection Control Practices Quality Evaluation Manual (10) .In a four hour period, a pilot study was carried out for the observer to adapt to the environment studied and the instrument used. 
In relation to HH opportunities, it was considered that the professional had two opportunities for each procedure carried out on the patient, one before and one afterwards, aimed at identifying at which opportunity HH was effected. In relation to the infrastructure of the institution, the conditions of the washbasins were evaluated, as well as the presence of liquid soap dispensers, whether the dispensers worked appropriately, the availability of paper towels and the absence of other irregularities (cloth towels, dirty dispensers, lack of water, broken faucets, visible dirt on the washbasins, etc.). The washbasin was considered adequate when it complied with all of the items above (10). The availability of alcohol solution at the institution was not evaluated by systematized observation; therefore, the structure of the institution was only evaluated in relation to hand washing and not hand disinfection. The procedures were organized in Microsoft Excel® spreadsheets and grouped into: risk of exposure to bodily fluids, contact with the patient, invasive procedures, contact with inanimate objects and surfaces, and other procedures, based on the Health Service Hand Hygiene Manual proposed by ANVISA (11). The data were analyzed using descriptive statistics (average, relative and absolute frequency). After the observation, the Hand Hygiene Compliance Evaluation Indicator and the Hand Washing Infrastructure Evaluation Indicator were calculated using the formulas in the said Manual (10). The first is calculated by dividing the number of HH opportunities used, i.e. those where the professional washed their hands, by the total number of opportunities identified, multiplied by 100. The second is calculated by dividing the number of adequate evaluations by the total number of evaluations and multiplying by 100 (10). In the second moment, an individual, closed questionnaire was applied to the nursing professionals participating in the observation to identify their knowledge of the subject. This was composed of six multiple choice questions, responded to during the work process immediately after delivery, and covered the supply of inputs required for HH, the situations in which it should be carried out, the products to be used and the factors that impede this practice. RESULTS Of the 603 procedures observed, 35 included HH only before the procedure, a rate of 5.8%. In 238 procedures, HH was carried out only after the procedure, a rate of 39.5%. In 226 procedures there was no HH at any time, totaling 452 missed opportunities. Table 1 presents the relative frequencies of HH compliance before and after, only before or only after the procedures, as well as the HH Compliance Indicators for each group of procedures. In relation to the HH structure, 10 washbasins were evaluated over three random days, totaling 30 observations, of which 83.3% were in conformity with the items predetermined by the indicator used (10). Inadequacies were found in five observations, in which the paper towel dispenser was empty. In relation to the products used for HH, all of the professionals reported that they always used liquid soap; 11 (33.3%) always used 70% alcohol, 19 (57.6%) used it sometimes and three (9.1%) rarely used the product. Figure 1 shows the products chosen by the team for HH in the predetermined situations, based on ANVISA recommendations as to the product to be used in each situation (11). In all of the HH observed, soap and water were used, without the use of alcohol solutions or other substances.
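The two indicators described in the Methods can be reproduced from the counts reported above; the totals are consistent with the 481 of 1206 opportunities (39.9%) given in the abstract. A minimal sketch of the arithmetic, using the formulas of the reference manual (10) and the counts as reported in this study:

```python
# Hand hygiene conformity indicators, recomputed from the counts reported in the study.
procedures = 603
before_only = 35        # HH performed only before the procedure
after_only = 238        # HH performed only after the procedure
neither = 226           # no HH at any time
both = procedures - before_only - after_only - neither    # HH both before and after

opportunities = 2 * procedures                            # two opportunities per procedure
opportunities_used = before_only + after_only + 2 * both

compliance_indicator = opportunities_used / opportunities * 100

washbasin_observations = 30                               # 10 washbasins on 3 random days
inadequate_observations = 5                               # empty paper towel dispenser
infrastructure_indicator = (
    (washbasin_observations - inadequate_observations) / washbasin_observations * 100
)

print(f"opportunities used: {opportunities_used} of {opportunities}")
print(f"HH compliance indicator: {compliance_indicator:.1f}%")        # ~39.9%
print(f"infrastructure indicator: {infrastructure_indicator:.1f}%")   # ~83.3%
```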
In relation to the predetermined situations, the participants reported the frequency with which they carry out HH (Figure 2). In relation to aspects that impede the practice of HH, 60.6% of the professionals indicated that hastiness is a contributing factor to noncompliance, followed by lack of time (30.3%), forgetting (21.1%), distance from the washbasin (18.2%), lack of example from other professionals (15.2%), dryness of the skin (15.2%), lack of personnel (12.1%), lack of knowledge of the need for HH (12.1%), poor distribution of dispensers (12.1%) and allergy to the product available (9.1%). DISCUSSION Adequate HH by professionals working in health services is considered the main measure in the prevention and control of HCAI, as well as being a cheap and simple method, and should occur before and after the health care provided, regardless of the use of gloves (1). This study corroborated the literature (9,13) by revealing that this practice has still not been completely incorporated into work routines: as shown in Table 1, the HH index obtained was well below the 100% foreseen by the reference manual (10). Similar studies demonstrate HH rates below 50% (6,7) and identify a discrepancy between the knowledge of the professionals and the practice observed (7). In this study, in relation to the risk of exposure to bodily fluids there was a compliance rate of over 50% (65.6%); however, there was low utilization of the total HH opportunities (39.9%), similar to that found in other studies (6). We can highlight that the frequency of HH alone is not sufficient to reduce the dissemination of pathogens; the HH technique also needs to be carried out adequately. However, the execution of the technique was not foreseen in the indicators used and was not covered in this study. As indicated by the literature (14), a discrepancy was observed between the compliance with HH observed and that reported by the nursing team at the hospital studied, whose observed compliance index was lower than that reported by the professionals. It was noted that HH occurs with greater frequency after the realization of procedures (39.5%), data corroborated by the literature (6,9). This fact could indicate that the concern of professionals with their own protection prevails over the safety of the patient (6). This situation is concerning, as noncompliance with HH before the procedure, especially invasive ones, may be an important source of contamination for the patient. On the other hand, contact by unwashed hands with inanimate objects and surfaces near the patient may stimulate the colonization of these locations, transforming them into reservoirs of microorganisms (11), situations found in 45.8% of the observations in the study. In these situations in which HH is not conducted, the safety of the patient is compromised, as the probability of cross infection occurring is high, given that the hands of the professional act as disseminators of microorganisms, including multiresistant microorganisms, which are the target of intense concern at hospitals. Such microorganisms present resistance to two or more classes of antimicrobials, which makes the treatment of infection difficult, leads to patient suffering and generates a burden for the health system (12). Given that microorganisms are disseminated by direct contact between people or through contaminated surfaces and equipment, it can be seen that not only HH is important but also the cleaning and disinfection of inanimate objects and surfaces near the patient (12).
It is worth reiterating that the visual inspection of the objects and surfaces is not a reliable method of evaluating cleaning.One study found that 80% of the materials were approved by this method (15) .However, after an analysis, 81% and 26% of these were rejected for containing adenosine triphosphate -which is derived from organic material and microorganisms -and Staphylococcus aureus bacteria, respectively, even after cleaning being carried out by the hospital sanitation team.It has therefore been demonstrated that surfaces and objects can act as reservoirs of pathogens, contributing to their dissemination, even when apparently clean. In relation to the products used for HH the preference for the use of soap and water is evident to the detriment of alcohol solution.One study (16) demonstrated the effectiveness of alcohol based products on hands dirtied by blood and contaminated with Serratia marcescens when verifying that the three products tested (62% alcohol gel, 70% alcohol gel and 70% liquid alcohol with 2% glycerin) produced a bacterial reduction of around 99.9%, more effective than degerming solutions.However, hand washing is still recommended as the first option in situations in which the hands are visibly dirty (12) , with alcohol being recommended in other situations (16) . According to ANVISA Collegiate Directorship Resolution (RDC) nº 42, dated from September 2010 (17) the alcohol preparation for HH in the form of gel, foam and other products should contain a minimum final concentration of 70% with proven antimicrobial activity, while alcohol preparations for HH in the form of liquid should contain alcohol with a final concentration between 60% and 80%.Therefore, it can be inferred that 70% alcohol in any formulation may be used for HH, given that this contains the concentration recommended for its effectiveness. It is understood that the physical structure of the health service for HH is just as important as the material resources available.In 2002, ANVISA published RDC nº 50 which governs the standards and physical projects for healthcare establishments, defining the mandatory provision of washbasins for exclusive HH use by the healthcare team, which should include one in every nursing room (when inside this) or one for every four rooms, when outside of such (12) . With the ideal value of 100%, the institution analyzed presented 83.3% conformity for the HH infrastructure, which indicates that unsatisfactory conditions such as visible dirtiness of the washbasin or dispenser, cloth towels, broken faucets or lack of water were not identified.However, the lack of paper two for some periods owing to delayed replacement may reduce compliance with and the effectiveness of HH, given that drying the hands is one of the stages in the technique. There are various factors that interfere in decisions relating to compliance with the HH practice or not: forgetting, lack of knowledge as to its importance, distance from the washbasin, irritation of the skin and lack of materials (18) .In this study hastiness (27%) and lack of time (14%) were identified as important difficulties in complying with HH. There is an electronic guide available for implementing the multimodal strategy from the WHO to improve HH (19) , which identified HH strategies such as: access to alcohol preparations and other inputs for this purpose and the provision of adequate and effective training. 
The need to evaluate strategies that incentivize HH through a situational diagnosis of the institution must be reiterated, helping to change the behavior of health professionals and guaranteeing the quality of the care delivered (6).

In the environment studied, it was identified that education of the nursing team about the use of alcohol solution could constitute an important strategy for HH compliance, considering factors such as haste and lack of time, given that HH with alcohol instead of soap and water halves the time spent on the practice (12) and that the product is made available by the institution, according to 79% of the participants. In addition to optimizing the team's time, alcohol solution has the advantage of being able to be transported to the patient's bed and other locations far from washbasins, which are important characteristics for increasing compliance with HH (12).

CONCLUSIONS

The use of conformity indicators made it possible to quantify the HH compliance rate of the nursing professionals at the hospital studied. Despite the adequate infrastructure offered, this rate is far below that expected.

Although the professionals were aware of the moments at which HH should occur and of its importance, a matching frequency of practice was not identified. Furthermore, HH was most frequent after procedures, indicating greater concern with the professional's own safety than with that of the patient.

Hastiness and lack of time were indicated by the professionals as important difficulties for complying with HH. Therefore, implementing strategies to increase the use of alcohol solution and carrying out educational actions about this product are recommended for improving the factors limiting HH.

The data obtained in this study represent the reality of a single, small-sized institution, which could be considered a limitation of the study. Furthermore, studies evaluating such issues using the same conformity indicators as those analyzed here were not found, which prevents a more precise comparison of the findings with other services.

Studies that use indicators beyond those that evaluate the correct realization of HH, and interventions in relation to this practice, should be stimulated, aimed at improving compliance with HH by health professionals, the safety of the patient, and the reduction and control of HCAI.

Figure 1 - Distribution of the responses of nursing professionals in relation to the product type (liquid soap or 70% alcohol) chosen for HH in predetermined situations. São Carlos, SP, Brazil, 2011.

Table 1 - Distribution of the hand hygiene compliance rate of nursing team professionals at a teaching hospital, and hand hygiene compliance rate per procedure group. São Carlos, SP, Brazil, 2011. Source: research data.

Table 2 - Evaluation by professionals as to the availability of HH inputs. São Carlos, SP, Brazil, 2011.
Nuclear Factor Kappa-B Signaling Is Integral to Ocular Neovascularization in Ischemia-Independent Microenvironment

Retinal ischemia promotes the upregulation of VEGF expression and accounts for most pathological features of retinal neovascularization (NV). Paradoxically, VEGF remains the pivotal stimulator of ocular NV, despite the absence of ischemia. Therefore, the central question arises as to how the various molecular mechanisms interplay in ischemia-independent NV. It's been suggested that NFκB plays a crucial role in the pathogenesis of diabetic vasculopathies. Here, we dissected the molecular mechanism of ocular NV in the rho/VEGF transgenic mouse model, which develops subretinal NV in ischemia-independent microenvironment. Furthermore, we examined whether intravitreal administration of YC-1, a HIF-1 inhibitor, can modulate the activation of NFκB and its downstream angiogenic signaling in the mouse retina. We demonstrated that YC-1 inhibited retinal NFκB/p65 DNA binding activity and downregulated NFκB/p65, FAK, α5β1, EPO, ET-1, and MMP-9 expression at the message and the protein levels. In addition, YC-1 significantly inhibited subretinal NV by reducing the number of neovascular lesions, the area of each lesion and the total area of NV per retina. We further investigated the influence of VEGF signaling pathway on HIF-1α transcriptional activity to substantiate that this mouse model develops subretinal NV in an ischemia-independent microenvironment. Our data demonstrated that VEGF overexpression didn't have any impact on HIF-1α transcriptional activity, whereas treatment with YC-1 significantly inhibited endogenous HIF-1 activity. Our study suggests that retinal NFκB transcriptional activity is pivotal to ischemia-independent mechanisms, which lead to the local activation of angiogenic cascades. Our data also indicate that the nexus between VEGF and NFκB is implicated in triggering the angiogenic cascade that promotes retinal NV. Hence, targeting the VEGF/NFκB axis may act in a negative feedback loop to suppress ocular NV. This study suggests that inhibition of NFκB activation may be a means of turning off a “master switch” responsible for initiating and perpetuating these ocular pathologies.

Introduction

Retinal neovascularization (NV) is the major cause of severe vision loss and irreversible blindness, affecting people of all ages [1]. Retinal NV is characterized by the abnormal formation of new vessels in the retina and in the vitreous [2]. In addition, angiogenic factors, such as vascular endothelial growth factor (VEGF), play a prominent role in promoting retinal NV [3]. VEGF is an ischemia-induced molecule [4,2], which acts as a major angiogenic stimulator in the signaling cascade of ischemia-induced retinal NV [5,6]. Most VEGF-based animal models have focused primarily on retinal NV, which occurs in the ischemic phase of various ocular pathologies.

Nuclear factor kappa-B (NFκB) is a heterodimeric complex of the Rel family of proteins that is physically confined to the cytoplasm in unstimulated cells through binding to inhibitor of κB (IκB) proteins [7]. In most cells the predominant form of active NFκB consists of a p65/p50 heterodimer, although other homo/heterodimers also form [7]. Upon exposure of cells to growth factors such as epidermal growth factor (EGF), cytokines, interleukin-1 (IL-1), and tumor necrosis factor α (TNF-α), IκB is proteolytically cleaved to release its p50/p65 subunits, which undergo nuclear translocation.
NFκB binds to a specific DNA response element (5′-GGGPuNNNPyPyCC-3′) in the promoter regions of target genes and activates their transcription [8]. It has been indicated that VEGF activates the NFκB signaling pathway, which ultimately triggers the elevation of various pro-angiogenic mediators that contribute to the development and progression of retinal microvasculopathies. We have previously demonstrated the molecular link between VEGF and NFκB under a hypoxia-independent microenvironment in vitro [9]. YC-1 [3-(5′-hydroxymethyl-2′-furyl)-1-benzylindazole] is a small-molecule inhibitor of HIF-1, which activates soluble guanylyl cyclase (sGC) independently of nitric oxide (NO) in vivo [10]. Previously, it has been indicated that YC-1 attenuates NFκB signaling and the angiogenesis signaling cascade in different cell types [11,9]. Here we extend our in vitro findings to an in vivo model and investigate the mechanism of ocular NV under an ischemia-independent microenvironment. We have further examined the influence of YC-1 on the inhibition of subretinal NV in the Rhodopsin/VEGF (rho/VEGF) transgenic mouse model, which represents an ischemia-independent model of ocular NV.

Reagents

YC-1 was purchased from A.G. Scientific (San Diego, CA) and dissolved in sterile DMSO. SN50, a cell-permeable synthetic peptide known to inhibit the nuclear translocation of NFκB, and its negative control mutant peptide SN50M were obtained from Calbiochem (San Diego, CA). A mouse monoclonal antibody that recognizes the active subunit (12H11) of RELA (Nuclear Factor Kappa B (NFκB)/p65) (MAB3026) was obtained from Millipore (Billerica, MA, USA). Monoclonal rat anti-mouse Stromal Cell-Derived Factor-1 (SDF-1)/CXCL12 antibody (Clone 247506) was obtained from R&D Systems (Minneapolis, MN). Goat polyclonal anti-mouse C-X-C Chemokine Receptor Type 4 (CXCR4) antibody was obtained from LifeSpan BioSciences (Seattle, WA). Monoclonal anti-mouse Focal Adhesion Kinase (FAK) antibody was purchased from Thermo Scientific Pierce Antibodies (Rockford, IL). Anti-mouse integrin alpha-5 beta-1 (α5β1) monoclonal antibody was obtained from Millipore (Billerica, MA, USA). Rabbit polyclonal anti-mouse Erythropoietin (EPO) antibody was purchased from Bioss Inc. (Woburn, MA). Rabbit anti-Endothelin-1 (ET-1) polyclonal antibody was purchased from Bioss Inc. (Woburn, MA). Rabbit polyclonal anti-mouse Matrix Metalloproteinase-9 (MMP-9) antibody was obtained from LifeSpan BioSciences (Seattle, WA). Rabbit anti-mouse β-actin polyclonal antibody was purchased from Abcam (Cambridge, MA). Rat anti-mouse IgG was purchased from eBioscience (San Diego, CA) and was used as an isotype control antibody for immunohistochemistry studies.

Retinal Fluorescein Angiography with High-Molecular-Weight Fluorescein-Dextran

After the onset of VEGF transgene expression in rho/VEGF mice during the postnatal period, subgroups of mice were sacrificed and the baseline subretinal NV was measured by image analysis at P21 and P24. The remainder of the rho/VEGF mouse colony was divided into three groups: the first group (n = 15) was left untreated, whereas the second group (n = 15) received a sextuple regimen of intravitreous injections of DMSO (0.2%) to both eyes at P6, P9, P12, P15, P18, and P21. Mice in the third group (n = 15) were injected intravitreally with YC-1 (100 µM) to both eyes at P6, P9, P12, P15, P18, and P21 (sextuple injections).
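As a quick orientation to the dosing design just described, the snippet below simply encodes the injection schedule and treatment groups as data; the group labels and the per-mouse injection count (derived from "both eyes at six time points") are my own bookkeeping, not terminology from the paper.

```python
# Bookkeeping sketch of the intravitreal dosing design described above.
# Group labels are descriptive only; doses follow the text (DMSO 0.2%, YC-1 100 uM).
injection_days = [6, 9, 12, 15, 18, 21]   # postnatal days of the sextuple regimen

groups = {
    "untreated rho/VEGF": {"n": 15, "agent": None, "dose": None},
    "DMSO-treated rho/VEGF": {"n": 15, "agent": "DMSO", "dose": "0.2%"},
    "YC-1-treated rho/VEGF": {"n": 15, "agent": "YC-1", "dose": "100 uM"},
}

for name, g in groups.items():
    # Treated mice receive one injection per eye at each time point (both eyes).
    per_mouse = 0 if g["agent"] is None else 2 * len(injection_days)
    print(f"{name}: n = {g['n']}, injections per mouse = {per_mouse}")
```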
At 24 days post therapy, mice were anesthetized and perfused through the left ventricle with 2 ml of 25 mg/ml fluorescein-labeled dextran (2 × 10^6 average molecular weight; Sigma, St. Louis, MO) in PBS, which was allowed to circulate for 2 min before the animals were euthanized and the eyes were enucleated and fixed for 24 h at 4°C in 4% paraformaldehyde/PBS. A dissecting microscope was used to remove the cornea and the lens, and the entire retina was carefully dissected from the eyecup, radially cut from the edge of the retina to the equator in all four quadrants, and flat-mounted in Aquamount Mounting Medium (Polysciences, Warrington, PA); coverslips were carefully placed over the retina, and the edges of the coverslips were sealed. The retinas were examined by fluorescence microscopy with an Axiovert 135 (Carl Zeiss Micro-Imaging, Inc., Thornwood, NY) at low (50×) and higher (400×) magnification using an Axiocam digital camera (Carl Zeiss Micro-Imaging, Inc., Thornwood, NY), which provides a narrow depth of field so that, when focusing on the outer edge of the retina, it enables subretinal focus on neovascular buds on the outer surface of the retina while the retinal vessels are out of focus in the background, which allows easy delineation of the subretinal NV. The outer edge of the retina, which corresponds to the subretinal space in vivo, is easily identified, and therefore from slide to slide the focal plane was standardized.

Quantitation of NV on Flat Mounts

In the rho/VEGF transgenic mouse model, subretinal NV was measured on retinal flat mounts. Retinas were mounted with the photoreceptor side up and examined at low (50×) and higher (400×) magnification. Three investigators blinded to treatment group utilized Metamorph digital image analysis software (version 7.1, Universal Imaging, Downingtown, PA) to delineate each of the lesions and quantify the subretinal neovascular growth area per retina by: 1) calculating the number of buds of NV in each retina, 2) the area of each neovascular lesion per retina, and 3) the total area of NV on the outer surface of the retina per eye. Measurements were repeated three times for each retina and the mean was used as one experimental value; there was insignificant variability among triplicate measurements.

Measurement of Activation of Retinal NFκB

Preparation of Nuclear Extracts. Eyes were enucleated from groups I, II, III, and IV. Nuclear extraction of retinal protein was performed as described previously. The total number of animals was 60 (n = 60), and the number of animals in each experimental group was 15 (n = 15). Briefly, retinas were removed, snap frozen and stored at −70°C. Pooled retinas were homogenized with a mechanical homogenizer in five pellet volumes of Buffer A (20 mM Tris (pH 7.6), 10 mM KCl, 0.2 mM EDTA, 20% (w/v) glycerol, 1.5 mM MgCl2, 2 mM dithiothreitol (DTT), 1 mM Na3VO4 and protease inhibitors; Complete; Roche Diagnostics, Mannheim, Germany). Nuclei were pelleted (2500 g, 10 minutes) and resuspended in two pellet volumes of Buffer B (identical to Buffer A except that KCl was increased to 0.42 M). Nuclei and debris were removed by centrifugation (15,000 g, 20 minutes), and the supernatant was dialyzed against one change of buffer Z (20 mM Tris-HCl (pH 7.8), 0.1 M KCl, 0.2 mM EDTA, and 20% glycerol) for at least 3 hours at 4°C in dialysis cassettes (Dialyze Z; Pierce, Inc.). Protein concentration was measured with the bicinchoninic acid assay.

Evaluation of NFκB/p65 Transcription Factor Activity (ELISA).
Activation of the transcription factor NFκB was measured using a DNA-binding assay (TransAM NFκB/p65 Transcription Factor ELISA Assay Kit, Active Motif, Carlsbad, CA) according to the manufacturer's instructions. The total number of animals was 60 (n = 60), and the number of animals in each experimental group was 15 (n = 15). Briefly, retinal extracts were collected from all mouse groups: non-treated C57BL/6 mice; non-treated rho/VEGF mice; DMSO-treated rho/VEGF mice; YC-1-treated rho/VEGF mice; SN50-treated rho/VEGF mice; and SN50M-treated rho/VEGF mice (n = 15), which received a sextuple regimen of intravitreous injections of the mutant control SN50M (20 µM) at P6, P9, P12, P15, P18, and P21. Animals were sacrificed at P21 and P24. At P24, the extent of subretinal NV was measured. Samples containing 2 µg of the retinal nuclear extracts were incubated with an oligonucleotide containing the NFκB consensus sequence (5′-GGGACTTTCC-3′) bound to a 96-well plate. After extensive washes, the NFκB complexes bound to the oligonucleotide were incubated with an antibody directed against the NFκB/p65 subunit at a dilution of 1:1000. After washing, the plates were subsequently incubated with a secondary antibody conjugated to horseradish peroxidase (1:1000), and the peroxidase reaction was quantified at 450 nm with a reference wavelength of 655 nm. Results are expressed in absorbance units corrected for interference at the reference wavelength.

Quantification of HIF-1α Transcriptional Activity

The DNA binding activity of HIF-1α was evaluated using the HIF-1α transcription factor assay kit (Cayman Chemical, Ann Arbor, MI, USA) according to the manufacturer's instructions. The total number of animals was 60 (n = 60), and the number of animals in each experimental group was 15 (n = 15). Nuclear extracts were collected from all retinas, prepared and incubated in 96-well plates coated with immobilized double-stranded oligonucleotides containing the HIF-1α response element (5′-ACGTG-3′). The HIF-1α transcription factor complex was detected by the addition of a specific primary antibody directed against HIF-1α, visualized by an anti-IgG horseradish peroxidase (HRP) conjugate and quantified by measuring the absorbance at 450 nm. The DNA binding activity of HIF-1α was expressed relative to the value of the control. The experiments were repeated 3 times and similar results were obtained.

Quantitative RT-PCR

Rho/VEGF mice and littermate controls were euthanized at P24, and retinas were isolated, snap frozen, pulverized, and placed in lysis buffer. The total number of animals was 60 (n = 60), and the number of animals in each experimental group was 15 (n = 15). RNA isolation was performed on all isolated retinas using an RNeasy kit (QIAGEN, Valencia, CA). To remove any contaminating genomic DNA, RNA samples were treated with DNase I (Invitrogen, Carlsbad, CA) at room temperature for 15 min, and then cDNA was synthesized with reverse transcriptase (SuperScript III; Invitrogen) and 5 µM random hexamers. The mRNA levels for all genes (NFκB/p65, SDF-1, CXCR4, FAK, α5, β1, EPO, ET-1, and MMP-9) were quantified by real-time RT-PCR using the SYBR Green reaction mixture (QIAGEN) with 0.5 µM primers. 28S rRNA was used as a standard for normalization. Gene-specific primers were designed to encompass the genes of interest. Threshold cycle (Ct) values for the different samples were used to calculate fold changes in gene expression with the 2^(−ΔΔCt) formula, relative to the β-actin endogenous control gene.
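To make the relative-quantification step concrete, here is a minimal worked example of the 2^(−ΔΔCt) formula referred to above. The Ct values are hypothetical placeholders; in the study the reference gene is the endogenous control and the calibrator is the corresponding control retina sample.

```python
# Worked sketch of the 2^(-delta delta Ct) relative quantification formula.
# All Ct values below are hypothetical and only illustrate the arithmetic.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    delta_ct_sample = ct_target_sample - ct_ref_sample      # normalize to reference gene
    delta_ct_control = ct_target_control - ct_ref_control   # same normalization in the calibrator
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values for one target gene (e.g., NFkB/p65) in a treated vs. control retina:
fc = fold_change(ct_target_sample=24.1, ct_ref_sample=16.0,
                 ct_target_control=26.3, ct_ref_control=16.1)
print(f"fold change relative to control = {fc:.2f}")   # about 4.3-fold up-regulation in this example
```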
Immunohistochemistry

The total number of animals used in each experiment was 60 (n = 60), and the number of animals in each experimental group was 15 (n = 15). The total number of tissue sections used in each experiment was 8 tissue sections per animal (n = 8). Mouse retinas were dissected and prepared for immunohistochemical analysis, fixed in 4% paraformaldehyde in 0.1 M PBS for 15 min at room temperature, embedded in paraffin, and sectioned (5 µm). Tissue sections were deparaffinized, hydrated, and later exposed to heat-induced antigen retrieval using a microwave oven (three 5-min cycles in citrate buffer, pH 6.0); endogenous peroxidase was abolished with methanol and hydrogen peroxide, and nonspecific background staining was blocked by incubating the tissue sections for 5 min in the appropriate serum block. Subsequently, all slides were washed three times in PBS and incubated for 1 hr with primary anti-(NFκB/p65, SDF-1, CXCR4, FAK, α5β1, EPO, ET-1, MMP-9, and β-actin) antibodies. Negative control experiments consisted of omission of the primary antibody and use of the appropriate isotype control antibody as a replacement. The sections were washed with TBST and incubated with EnVision Polymer HRP secondary antibody (DAKO, Carpinteria, CA) for 30 min. All slides were stained with DAB solution and counterstained with hematoxylin. Slides were coverslipped (Permount; Fisher Scientific, Fairlawn, NJ) and examined by light microscopy. Sections were visualized under a microscope (Zeiss Axiovert 135, Thornwood, NY), and images were acquired with a digital camera (Carl Zeiss Micro-Imaging, Inc., Thornwood, NY). All retinas were examined with low (50×) and higher (400×) magnification objectives. The staining intensity in our series ranged from a weak blush to moderate or strong. The staining intensity was further categorized as focal (<10%), patchy (10%-50%), and diffuse/multifocal (>50%). For meaningful semiquantitative analysis, focal and/or weak staining was considered equivocal staining, and patchy or diffuse/multifocal staining was subcategorized as either moderate or strong staining.

Immunohistochemical Image Analysis

Immunostaining was captured using an AxioCam digital microscope camera (Carl Zeiss Micro-Imaging, Inc., Thornwood, NY). All immunohistochemical analyses were measured by Metamorph digital image software (Molecular Devices, Sunnyvale, CA). Metamorph image analysis was conducted by setting the filter with an excitation wavelength of 488 nm. Metamorph image analysis software (version 7.1, Universal Imaging, Downingtown, PA) was used for image processing and quantitative analysis of positive immunostaining. Metamorph tools were used to set the threshold and regions of interest (ROIs). All images were captured at identical time and exposure settings, and they were all processed to the same scale. Images were first segmented on the basis of pixel intensity, which was done on a plane-by-plane basis for an image stack. Briefly, each retinal section was scanned into Metamorph and five (5) fields per slide were chosen from each section for analysis. One hundred and fifty (150) cells from each field were selected. The saved file was used to calibrate each image for specific pixel size. With the help of a free drawing tool, positively stained areas were chosen and measured as total pixel area. A threshold encompassing an intensity range of 100-250 gray-scale values was applied to the ROIs in the least brightly stained condition first. The data were also read and investigated with a Matlab v6.5 script, which counted the total number of pixels that were above the threshold value. This number was divided by the total number of pixels in each image to yield percent fluorescent pixels. To correct for background fluorescence, the threshold was adjusted for each experimental series, with concomitantly processed negative controls as the guide for setting background fluorescence. The background fluorescence intensities per pixel were subtracted from the experimental data by using a one-step erosion procedure, and then all remaining objects were counted. The same threshold was subsequently applied to all images. Protein staining was considered to be positive only when it exceeded the established threshold. The percentage of positive protein staining above threshold in the total area selected was then calculated. The total staining fluorescence intensity per cell was calculated, and the average fluorescence intensity per pixel was determined by dividing the total intensity by the area of the cell measured in pixels. This was followed by measuring the average fluorescence intensity in each field. Data from multiple fields, as indicated, over several experiments were used to obtain the final results. The number of immunopositively stained cells per image was then expressed per µm², and the average number per section was determined among five separate fields.
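The core of the quantification just described is a simple thresholding computation: count pixels above an intensity cutoff, express them as a percentage of all pixels, and correct against a negative control. The sketch below reproduces only that arithmetic on synthetic arrays; it is not the Metamorph/MATLAB pipeline used in the study.

```python
# Minimal NumPy sketch of threshold-based quantification of positive staining.
# Synthetic data only; illustrates "pixels above threshold / total pixels" with
# a background correction from a negative-control field.
import numpy as np

rng = np.random.default_rng(0)
stained_field = rng.integers(0, 256, size=(512, 512))       # stand-in for one imaged field
negative_control = rng.integers(0, 120, size=(512, 512))    # stand-in for the no-primary control

THRESHOLD = 100   # lower bound of the 100-250 gray-scale window mentioned above

def percent_positive(image, threshold):
    """Percentage of pixels at or above the intensity threshold."""
    return 100.0 * np.count_nonzero(image >= threshold) / image.size

signal = percent_positive(stained_field, THRESHOLD)
background = percent_positive(negative_control, THRESHOLD)
corrected = max(signal - background, 0.0)
print(f"positive staining: {corrected:.1f}% of pixels above threshold after background correction")
```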
Statistical Analysis

All values obtained were expressed as mean value ± SEM. Statistical analysis was performed using one-way analysis of variance (ANOVA) and the Tukey-Kramer post hoc test for multiple comparisons. Statistical significance was defined as *P<0.05, **P<0.01, ***P<0.001.
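As an illustration of this analysis strategy, the sketch below runs a one-way ANOVA followed by a Tukey-type post hoc comparison on invented group values; statsmodels' Tukey HSD is used here as a stand-in for the Tukey-Kramer procedure, and none of the numbers are the study's data.

```python
# Hypothetical example of one-way ANOVA plus a Tukey-type post hoc test across
# treatment groups. Values are invented for illustration only.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

dmso = np.array([108, 112, 109, 111, 110])   # e.g., hypothetical NV bud counts per retina
sn50 = np.array([45, 40, 52, 48, 44])
yc1 = np.array([30, 28, 35, 33, 29])

f_stat, p_value = f_oneway(dmso, sn50, yc1)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.2g}")

values = np.concatenate([dmso, sn50, yc1])
labels = ["DMSO"] * len(dmso) + ["SN50"] * len(sn50) + ["YC-1"] * len(yc1)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))   # pairwise group comparisons
```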
Suppression of VEGF-Induced Ocular NV by YC-1

Ocular NV was induced in rho/VEGF animals through the presence of the rhodopsin promoter, which drives the expression of VEGF in the photoreceptor region. Depending on the designated animal group, mice received a sextuple regimen of intravitreous injections of YC-1, DMSO, SN50, or SN50M, or they were left untreated (Fig. 1). The number of neovascular buds and the total area of NV on the outer surface of the retina were quantified by Metamorph digital imaging analysis, with three investigators masked with respect to treatment group. Retinas from group I mice (n = 15; P21 and P24) exhibited normal and healthy vascularization, without the presence of any lesions of ocular NV (Fig. 2 and Fig. 3A, B and C). Since there were no significant differences in subretinal NV between P21 and P24, we selected P24 as the baseline for all ensuing measurements. Retinas from DMSO-treated mice (n = 15; P24) had numerous buds of NV (110±2), which was comparable to retinas from the transgenic rho/VEGF mouse group (Fig. 2 and Fig. 3A). In addition, retinas from DMSO-treated mice had a total NV area per retina of 13.3±0.04 (mm² × 10⁻³) (Fig. 3B) and exhibited an extensive area of NV per retina (7.81±1.94) (Fig. 3C). The efficacy of YC-1 and SN50 in the transgenic rho/VEGF mouse was assessed by evaluating the suppression of the development of neovascular foci following a sextuple intravitreal injection regimen to the eyes of 6-, 9-, 12-, 15-, 18-, and 21-day-old homozygous rho/VEGF mice (Fig. 2 and Fig. 3A, 3B, and 3C). Furthermore, under low- or high-magnification views, retinas from the YC-1- and SN50-treated groups (n = 15; P24) showed significantly fewer neovascular buds compared to retinas from eyes injected with DMSO and SN50M, respectively (Fig. 2 and 3A). YC-1-treated retinas had NV measurements of 2.52±0.13 (Fig. 3B) and 1.20±0.40 (Fig. 3C), respectively, which were significantly (***P<0.001) less than the measurements in DMSO-treated mice. Moreover, SN50-treated retinas had neovascular measurements of 3.29±0.87 (Fig. 3B) and 1.82±0.80 (Fig. 3C), respectively, which were significantly (***P<0.001) less than the measurements revealed in the SN50M-treated mice. Interestingly, when compared to SN50-treated retinas, retinas that were injected with YC-1 exhibited a more compelling formation of new and healthy vessels, i.e., physiological revascularization, which occupied the entire retina (Fig. 2).

YC-1 Inhibits VEGF-Induced NFκB/p65 Activation

In order to verify that the increase in NFκB/p65 binding activity was mediated by the influence of VEGF over-expression in the rho/VEGF mouse retinas, we measured NFκB/p65 activity by ELISA in the retinal extracts of all animal groups (I, II, III, IV, V, VI) (Fig. 4A). The over-expression of VEGF in the non-treated rho/VEGF retinas caused a significant (***P<0.001) upregulation (99.02%±1.3) in NFκB/p65 binding activity, as compared to retinas isolated from the nontransgenic C57BL/6 control group. Intravitreal administration of SN50 or YC-1 resulted in a significant (***P<0.001) inhibition of NFκB binding activity, as compared to their respective controls, the SN50M- and DMSO-treated retinas. The extent of NFκB/p65 inhibition with SN50 and YC-1 was 82.52%±0.6 and 78.21%±0.9, respectively, as compared to their respective controls, the SN50M- and DMSO-treated retinas.

VEGF Has No Impact on HIF-1α Transcriptional Activity in the rho/VEGF Mouse Model

In order to verify that the rho/VEGF mouse model develops subretinal NV in an ischemia-independent microenvironment, we investigated the influence of the VEGF signaling pathway upon HIF-1α transcriptional activity in this mouse model. Our results demonstrated that VEGF overexpression in the rho/VEGF mouse did not induce HIF-1α transcriptional activity (Fig. 4B), whereas this endogenous activity was significantly inhibited by the use of YC-1. Since this is an ischemia-independent mouse model, it was not surprising to find that the level of HIF-1α transcriptional activity was comparable to the level of HIF-1α transcriptional activity in the C57 negative control mouse.

YC-1 Downregulates the Angiogenic Gene Expression Profile in the rho/VEGF Mice

To elucidate the molecular mechanisms involved in the regulation of VEGF-induced subretinal NV in the rho/VEGF transgenic mice, the retinal levels of NFκB/p65, SDF-1, CXCR4, FAK, α5, β1, EPO, ET-1, and MMP-9 mRNA were evaluated on P24 by quantitative real-time RT-PCR, with data normalized to β-actin and using the appropriate primer sets (Fig. 5J). Data analysis of the mRNA levels exhibited systematic variation in the gene expression patterns among the various groups of retinal samples.
There was a significant upregulation of NFκB/p65, FAK, α5, β1, EPO, ET-1, and MMP-9 mRNA levels in group II animals, as compared to retinas from group I (Fig. 5A-I). At P21 and P24, and despite the sustained expression of VEGF, there were no detectable differences in the levels of CXCR4 and SDF-1 mRNA expression amongst the retinas of all groups (Fig. 5B and 5C). As predicted, NFκB/p65 upregulation was significantly attenuated by the NFκB inhibitor SN50 as compared to the retinas that were treated with its negative control mutant peptide SN50M (Fig. 5A). In addition, sextuple intravitreal injections with SN50 caused a significant downregulation in the message levels of FAK, α5, β1, EPO, ET-1, and MMP-9 as compared with the negative control mutant peptide SN50M (Fig. 5A-I). Furthermore, our data revealed that at P24 a sextuple intravitreal injection regimen with YC-1 resulted in significant attenuation of the message levels of NFκB/p65, FAK, α5, β1, EPO, ET-1, and MMP-9 as compared with DMSO-treated retinas (Fig. 5A-I).

YC-1 Inhibits the Angiogenic Protein Expression in the rho/VEGF Mice

Non-treated retinas of the C57BL/6 mice revealed a cytoplasmic staining pattern for NFκB/p65 in the inner limiting membrane (ILM) and the cells of the nerve fiber layer (NFL), ganglion cell layer (GCL) and outer plexiform layer (OPL), with moderate staining in the cells of the inner plexiform layer (IPL). In contrast, there was very weak immunoreactivity in the cells of the inner nuclear layer (INL). Furthermore, SDF-1 staining was weak ("focal"), sporadic, and occurred primarily in the outer plexiform layer (OPL), whereas CXCR4 staining was faint and was mainly exhibited in the GCL and INL of the retina. In addition, there was weak FAK expression in the INL. The expression of α5β1 was primarily localized in the retinal pigment epithelium (RPE). Moreover, EPO exhibited moderate immunoreactivity, which was primarily localized in the NFL, GCL, OPL and IPL. The staining signals of ET-1 were primarily detected in the IPL, GCL and NFL.
The retinas also exhibited a very low level of MMP-9 immunoexpression that was detectable in the NFL, GCL, IPL and OPL (Fig. 6, Fig. 7, Fig. 8, Fig. 9, Fig. 10, Fig. 11 and Fig. 12). Retinas of the rho/VEGF mice exhibited a significant upregulation of nuclear NFκB/p65 immunoreactivity in the nuclei of the INL, NFL, and the GCL, especially in retinal ganglion cells (RGCs), displaced amacrine cells, and amacrine cells, whereas there was weak immunoreactivity in the OPL and IPL. Furthermore, SDF-1 staining was weak, sporadic, and occurred primarily in the inner border and the OPL of the retina, and its expression in the retinas of rho/VEGF mice was no different from what we observed in the nontransgenic control mice. In addition, CXCR4 immunoreactivity was primarily displayed in the inner retina, specifically the GCL and INL. Moreover, there was a significant increase in the level of FAK expression in the INL. The expression of α5β1 was highly augmented in the RPE. In addition, EPO immunoreactivity was upregulated within the NFL, GCL, OPL, and IPL, i.e., the neurosensory retina. Furthermore, there was an upregulation of ET-1 immunoexpression within the NFL and GCL, as well as strong staining signals localized in the innermost region of the IPL. There was a significant upregulation of MMP-9 immunoreactivity in the NFL, GCL, and OPL (Fig. 6, Fig. 7, Fig. 8, Fig. 9, Fig. 10, Fig. 11 and Fig. 12). DMSO-treated retinas displayed immunoreactivities comparable with those of non-treated rho/VEGF retinas. There was a significant upregulation in the staining intensity of nuclear NFκB/p65 in the nuclei of the INL, GCL and NFL, as well as a significant augmentation in the levels of FAK, α5β1, EPO, ET-1, and MMP-9 immunoexpression compared to those in the YC-1-treated retinas (Fig. 6, Fig. 7, Fig. 8, Fig. 9, Fig. 10, Fig. 11 and Fig. 12). SN50M-treated retinas exhibited NFκB/p65 immunoreactivity that resembled that of the retinas of the non-treated rho/VEGF mice. The staining intensity of nuclear NFκB/p65 in the nuclei of the INL, GCL and NFL was significantly over-expressed as compared to that in the SN50-treated retinas (Fig. 6, Fig. 10 and Fig. 12).

Discussion

Retinal NV is a prevalent cause of blindness and is the focus of intensive efforts to find selective molecular treatments. Retinal ischemia is the central pathologic feature of retinal NV, and one of its major consequences is the upregulation of VEGF [16]. Furthermore, retinal NV is suppressed by agents that neutralize VEGF [17] or block VEGF receptors [18,19]. In this study we have utilized the rho/VEGF mouse model of ocular NV. This transgenic mouse model develops increased expression of VEGF in the retina, in the absence of hypoxia, starting at P7 [12,20]. The measurement of VEGF expression in this mouse model is very well documented: at P16, the level of VEGF mRNA is roughly fivefold higher than that in P16 wild-type mice [20]. Despite the absence of ischemia in the rho/VEGF transgenic mouse model, VEGF remains the pivotal stimulator of ocular NV. The question therefore arises as to the molecular mechanism of ischemia-independent NV. It is noteworthy that many pro-angiogenic factors are mediated by NFκB activation, suggesting its crucial contribution to the pathogenesis of intraocular NV. Whereas the activation of NFκB in response to VEGF has been reported in several studies [9,21,22,23], other reports have demonstrated the inhibition of NFκB in response to VEGF [24].
The pathological setting of the rho/VEGF mouse model may suggest that increased VEGF expression in the retina is the primary cause of a pathological microenvironment that is not accompanied by retinal ischemia, which in turn causes the upregulation of various proangiogenic mediators and ultimately promotes subretinal NV. Blunting VEGF signaling with a small molecule like YC-1 would also blunt the increase in the downstream pro-angiogenic factors, suggesting that it is the high levels of VEGF that promote the exacerbation of subretinal NV. YC-1 is a small molecule that inhibits HIF-1 in vivo and in vitro [25,26,27]. Previously, we have demonstrated that YC-1 exhibited pleiotropic effects, which impaired ischemia-induced expression of HIF-1 and its downstream angiogenic molecules, such as VEGF, EPO, and ET-1, leading to the inhibition of retinal NV in the oxygen-induced retinopathy (OIR) mouse model [27]. Furthermore, previous data demonstrated that a nonischemic microenvironment may also induce retinal neovascularization [28,29]. We have selected YC-1 as a pharmacological inhibitor in this study for various reasons: i) YC-1 is a small molecule, which activates soluble guanylyl cyclase (sGC) independently of nitric oxide (NO) in vivo [10].
Hence, based on our current investigation, it is tempting to speculate that downregulation of NFκB expression and its functional activity by YC-1 is mediated via an sGC-dependent mechanism involving the suppression of the transcriptional activity of NFκB; ii) YC-1 has pleiotropic effects that influence various downstream signaling pathways; iii) our previous in vitro studies have demonstrated that VEGF treatment of human retinal microvascular endothelial cells promoted NFκB/p65 activation via: 1) upregulating the phosphorylation status of IκBα and increasing its intrinsic hydrolysis activity; 2) promoting the nuclear accumulation of p65; and 3) increasing NFκB activity, whereas YC-1 treatment induced the downregulation of NFκB/p65 activation by preventing IκBα degradation and hence inhibiting the nuclear translocation of the NFκB/p65 subunit [9]; iv) YC-1 blunts the increase in the downstream pro-angiogenic factors, which promote the exacerbation of subretinal NV; v) earlier studies have indicated that high concentrations of YC-1 inhibited NFκB/p65 activation and induced apoptosis in human prostate cancer cells [11]. Furthermore, YC-1 inhibited cytokine release and NFκB/p65 activation in endotoxemic mouse models [30]. In addition, other studies have demonstrated that the signaling pathways of NFκB/p65 activated by LPS were also inhibited by YC-1. In toto, this report suggests that inhibition of NFκB expression and activity by YC-1 may provide therapeutic benefits in retinal diseases associated with enhanced VEGF and NFκB, such as ischemia-independent retinal microvasculopathies [15,27,31,32,33,34,35,36]. During this investigation, we specifically selected P6 as the initiation point for YC-1 injection since previous studies have indicated that the onset of VEGF expression in the photoreceptors of rho/VEGF transgenic mice is at approximately P6. At P10 the mice develop sprouts of NV from the deep capillary bed of the retina that grow through the photoreceptor layer and form an extensive network of new vessels in the subretinal space. This is followed by an increase to a steady-state level by about P14, which is sustained for at least several months throughout adulthood [12,20]. We have utilized intravitreal administration of YC-1 as the method of drug delivery, because this route delivers the drug in close proximity to the localization of the pathology, while the vitreous serves as a drug reservoir, which keeps the drug longer at the site. In addition, the sextuple injection regimen implemented in this study was used to maximize the chances of success for proof of concept, but it is not ideal for clinical application, even though the same modality (multiple intravitreal injections) has been previously reported in various investigations [37,38,39,40,41]. The selection criterion for the sextuple injection regimen was based on the standards that we have established throughout our previous studies, which indicated that the IC50 of YC-1 at 48 hours was 55.30±0.1 µM [26]. Since we did not find any differences between the control groups highlighted above, it can be safely concluded that the effect of injection or DMSO is nullified. Throughout this report we demonstrated that NFκB/p65 expression was elevated in the retinas of rho/VEGF transgenic mice. Previously, we have reported that in cultured human retinal microvascular endothelial cells (hRMVECs), the induction of NFκB/p65 by VEGF is blunted by YC-1 in a hypoxia-independent manner [9].
The utilization of the specific NFκB inhibitor during this investigation substantiated that YC-1 shares one common target (NFκB) with SN50. To some degree, our data have shown that the effects of SN50 duplicated the effects of YC-1. However, as compared to SN50-treated retinas, retinas that were injected with YC-1 exhibited a significant formation of new and healthy vessels, i.e., physiological revascularization, which occupied the entire retina. As predicted, NFκB/p65 upregulation was significantly attenuated by the NFκB inhibitor SN50 as compared to the retinas that were treated with its negative control mutant peptide SN50M (Fig. 5A). Taken together, our data indicate that SN50 duplicated many of the same activities as YC-1. Immunohistochemistry data demonstrate the absence of nuclear NFκB/p65 signal in the normal retinas of C57BL/6 mice. In contrast, rho/VEGF retinas of age-matched mice exhibited a significant upregulation of nuclear NFκB/p65 in the NFL, GCL and the INL, especially in RGCs, amacrine cells and displaced amacrine cells. This may suggest that increased expression of VEGF in the retinal photoreceptors enhances NFκB activity, leading to the upregulation of various pro-angiogenic factors including FAK, α5β1, EPO, ET-1, and MMP-9, which ultimately promotes subretinal NV (Fig. 13). Our findings exhibit the changes that may have occurred throughout the ischemia-independent mechanism and that have emerged throughout numerous anatomical layers of the retina, regardless of the degree of vascularity. Our data demonstrate that VEGF-stimulated effects are mediated via the activation of the NFκB signaling pathway, and YC-1 significantly inhibits such activity. We decided to measure SDF-1 and CXCR4 expression because NFκB is an essential and ubiquitous transcription factor for the expression of many angiogenesis-related genes, including SDF-1 and CXCR4. Several studies have reported the intimate relationship between the stimulation of SDF-1/CXCR4 and the activation of NFκB signaling. Furthermore, it has been indicated that stimulation of human hematopoietic cells by SDF-1 activates NFκB in a PI-3K-AKT-dependent manner [42]. Other studies have indicated that SDF-1α/CXCR4 activates NFκB and promotes oral squamous cell carcinoma invasion [43]. Moreover, throughout this investigation we measured FAK expression for the following reasons: 1) it has been revealed that FAK activates NFκB via the ERK1/2 and p38MAPK pathways; 2) several studies have indicated the importance of FAK in influencing distinct steps of the angiogenic response [44] and suggested that FAK overexpression induces enhanced pathological retinal angiogenesis [45]; 3) recent studies have defined a new mechanism, which demonstrated that VEGF-induced migration of endothelial cells is dependent on FAK. It is noteworthy that there were several reasons that prompted us to investigate α5 and β1 expression in this study, and these reasons are: 1) it has been revealed that engagement of the α5β1 integrin promotes an NFκB-dependent program of gene expression that coordinately regulates angiogenesis and inflammation;
2) prior evidence has suggested that the α5β1 integrin activates the NF-κB pathway in fibroblasts and endothelial cells [46], which implies that α5β1-mediated NF-κB signaling is important for angiogenesis. It is noteworthy that in rat RGCs, stimulation of the β1 integrin receptor with laminin or agonist antibodies enhanced RGC survival in correlation with activation of the β1 integrins' major downstream regulator, FAK. Furthermore, β1 integrin binding and FAK activation were required for the retinal ganglion cell (RGC) survival response to laminin. Thus, disruption of homeostatic RGC-laminin interaction and signaling leads to cell death after retinal ischemia. These data demonstrate that β1 integrin-focal adhesion kinase (FAK) signaling modulates retinal ganglion cell (RGC) survival [47]. Our current study demonstrates that the YC-1 treatment regimen had significant anti-angiogenic effects that were mediated via suppression of the VEGF/NFκB/p65 axis. Hence, targeting the nexus between VEGF and NFκB/p65 may act in a negative feedback loop to suppress subretinal NV (Fig. 13). This investigation presents a new role for NFκB in ischemia-independent ocular pathologies, which may range from an innocent bystander to a major culprit. In addition, our study demonstrated that VEGF overexpression in the rho/VEGF mouse did not induce HIF-1α transcriptional activity, while the endogenous HIF-1 activity was significantly abolished by the use of YC-1.
Moreover, the data presented in this study demonstrate that YC-1 may have the potential to be a novel and potent drug to reduce subretinal NV in ocular vasculopathies. Additional studies are needed to elucidate the mechanism(s) by which YC-1 works in subretinal NV and to find ways to exploit its anti-angiogenic activity in the development of appropriate treatments. Further studies may elucidate whether YC-1 can be a therapeutic option for patients with ischemia-independent ocular microvasculopathies. These findings, combined with previous studies superimposing the role of VEGF in retinal injury with or without the presence of ischemia [14,48], should help elucidate the role played by dys/regulation of angiogenic pathways in response to retinal injury. However, we must acknowledge that hyperglycemia, hypertension and dyslipidemia may also play crucial roles in instigating retinal vasculopathies, such as diabetic retinopathy. Other players, including reactive oxygen species (ROS), dysregulation of nitric oxide synthase (NOS), formation of advanced glycation end-products (AGEs), signal transducers and activators of transcription proteins, and activator protein 1 (AP1) [49,50], should be considered as additional factors that promote retinal vasculopathies. Finally, whether YC-1 invokes direct effect(s) on autocrine VEGF production/exocytosis and/or autocrine VEGF/VEGFR signaling [51] in the rho/VEGF mouse model remains to be addressed (Fig. 13).

Figure 2. Retinal whole-mounts in mice perfused with fluorescein-labeled dextran. The retinas were examined by fluorescence microscopy, and representative retinal angiographs were obtained to illustrate the control group (group I) and all other groups (groups II, III, IV, V, and VI) at different magnifications; the upper panel is at low magnification (50×), while the lower panel is at high magnification (400×). Fluorescence microscopy was conducted in a fashion that provides a narrow depth of field so that, when focusing on the outer edge of the retina, it enables subretinal focus on neovascular buds on the outer surface of the retina while the retinal vessels are out of focus in the background, which allows easy delineation of the subretinal NV. The outer edge of the retina, which corresponds to the subretinal space in vivo, is easily identified, and therefore from slide to slide the focal plane was standardized. Panels represent a retina from a C57BL/6 (P24) mouse, which exhibits a homogeneous, normal, delicate vessel pattern throughout the retina and no vascular lesions. Different subgroups of rho/VEGF transgenic mice were given sextuple intravitreal injections of DMSO (0.2%), SN50M (20 µM), SN50 (20 µM), or YC-1 (100 µM) at P6, P9, P12, P15, P18, and P21, or were left untreated. Compared with eyes injected with DMSO and SN50M, there appeared to be fewer neovascular buds on the outer surface of the retina in the eyes that were injected with YC-1 and SN50, respectively (arrows). There was a significant reduction in the number of subretinal neovascular buds in the retinas of the YC-1- and SN50-treated groups as compared to the DMSO- and SN50M-injected retinas. Likewise, the non-treated rho/VEGF mouse group revealed the presence of multiple large areas of numerous vascular foci (arrows). Image analysis confirmed no difference between vehicle-treated retinas and retinas from mice that were left untreated.

Figure 3 (A-C). Quantification of Subretinal NV in Various Control and Experimental Groups. Metamorph image-analysis software was used to compute the number and area of neovascular lesions and the total area of NV on the outer surface of each retina. The figure displays: A) the number of NV lesions per retina; B) the total neovascular area per retina; and C) the average neovascular lesion per retina. Mice that were treated with YC-1 and SN50 had 1) significantly fewer neovascular lesions, 2) a significantly smaller NV area per retina, and 3) a smaller area of NV lesion per retina than did mice that were treated with DMSO and SN50M, respectively. Image analysis confirmed that there was no difference between DMSO- and SN50M-treated mice and mice that were left untreated. doi:10.1371/journal.pone.0101602.g003

Figure 5 (A-J). NFκB/p65, α5, β1, ET-1, MMP-9, FAK and EPO were quantified by real-time RT-PCR. Selected experiments, which measured the mRNA levels of NFκB/p65, indicated that YC-1 and SN50 downregulated the mRNA levels of NFκB/p65 as compared to DMSO- or SN50M-treated retinas, respectively. For the other genes listed above, the mRNA levels were upregulated in the DMSO-treated retinas and in the rho/VEGF group that was left untreated. In contrast, YC-1-treated retinas exhibited a significant downregulation of mRNA expression as compared to retinas that were treated with DMSO. Despite the sustained expression of VEGF, there were no detectable differences in the levels of CXCR4 and SDF-1 mRNA expression in the animals of all groups. ANOVA was used for statistical analyses. Mean ± SEM of mRNA levels normalized to β-actin were calculated [***P<0.001 and **P<0.01, as compared to respective controls]. Data are representative of 3 independent experiments. (J) Sequences of the primer sets used for the quantitative real-time PCR analysis. doi:10.1371/journal.pone.0101602.g005

Figures 6, 8 and 9. The Expression of NFκB and Downstream Angiogenic Proteins in the rho/VEGF Mouse Model. Immunohistochemical analysis of NFκB/p65, FAK, α5β1, EPO, ET-1, and MMP-9 indicated that the expression levels of these proteins were significantly elevated in the rho/VEGF retinas that were left untreated. YC-1-treated retinas exhibited a significant decrease in protein expression levels as compared with DMSO-treated retinas. Despite the sustained expression of VEGF, there were no detectable differences in the levels of CXCR4 and SDF-1 protein expression amongst the animals of all groups. Retinas were examined with a 100× objective. Scale bar, 100 µm. doi:10.1371/journal.pone.0101602.g006; doi:10.1371/journal.pone.0101602.g008; doi:10.1371/journal.pone.0101602.g009

Figure 12. Immunohistochemical Profile of Retinal Layers among the Various Mouse Groups. The retinal layers stained vividly; however, the grain intensity varied significantly from one layer to another. The intensity of immunoreactivity was graded as follows: strong (+++), moderate (++), weak (+), negative (−) (A). Retinal tissue specimens of YC-1-treated groups were compared to normoxic non-treated rho/VEGF retinas, DMSO-treated rho/VEGF mice and YC-1-treated rho/VEGF mice. doi:10.1371/journal.pone.0101602.g012
Evolution of microbes and viruses: a paradigm shift in evolutionary biology?

When Charles Darwin formulated the central principles of evolutionary biology in the Origin of Species in 1859 and the architects of the Modern Synthesis integrated these principles with population genetics almost a century later, the principal if not the sole objects of evolutionary biology were multicellular eukaryotes, primarily animals and plants. Before the advent of efficient gene sequencing, all attempts to extend evolutionary studies to bacteria were futile. Sequencing of the rRNA genes in thousands of microbes allowed the construction of the three-domain “ribosomal Tree of Life” that was widely thought to have resolved the evolutionary relationships between the cellular life forms. However, subsequent massive sequencing of numerous, complete microbial genomes revealed novel evolutionary phenomena, the most fundamental of these being: (1) pervasive horizontal gene transfer (HGT), in large part mediated by viruses and plasmids, that shapes the genomes of archaea and bacteria and calls for a radical revision (if not abandonment) of the Tree of Life concept, (2) Lamarckian-type inheritance that appears to be critical for antivirus defense and other forms of adaptation in prokaryotes, and (3) evolution of evolvability, i.e., dedicated mechanisms for evolution such as vehicles for HGT and stress-induced mutagenesis systems. In the non-cellular part of the microbial world, phylogenomics and metagenomics of viruses and related selfish genetic elements revealed enormous genetic and molecular diversity and extremely high abundance of viruses that come across as the dominant biological entities on earth. Furthermore, the perennial arms race between viruses and their hosts is one of the defining factors of evolution. Thus, microbial phylogenomics adds new dimensions to the fundamental picture of evolution even as the principle of descent with modification discovered by Darwin and the laws of population genetics remain at the core of evolutionary biology.

INTRODUCTION

Charles Darwin's On the Origin of Species that appeared in London in 1859 (Darwin, 1859) was the first plausible, detailed account of biological evolution, after the simultaneous and independent brief outlines by Darwin and Alfred Russell Wallace that were published the previous year (Darwin, 1858; Wallace, 1858). Darwin did not discover evolution and did not even offer the first coherent description of evolution: exactly 50 years before the appearance of the Origin, the French botanist and zoologist Jean-Baptiste Lamarck published his magnum opus Philosophie Zoologique (Lamarck, 1809), in which he outlined his vision of the history of life in considerable detail. However, the cornerstone of Lamarck's worldview was the purported intrinsic drive of evolving organisms toward "perfection," a patently non-scientific, irrational idea. Moreover, Lamarck's view of the role of evolution in the history of life was severely limited: he did not postulate deep common ancestry of life forms but rather believed in multiple acts of creation, perhaps a separate act for each species. Prescient ideas on evolutionary changes of organisms had actually been developed centuries before Lamarck and Darwin, most notably by the great Roman thinker Titus Lucretius Carus (2011).
However, the fact remains that it was Darwin's first evolutionary synthesis that launched the field of evolutionary biology in a sense close to the modern one and has remained central to biological thinking over the last 150 years, inasmuch as "nothing in biology makes sense except in the light of evolution" (Dobzhansky, 1973). Darwin's concept lacked the essential foundation in genetics for the obvious reason that mechanisms of heredity were unknown in his day. Hence Darwin's deep concern over the so-called Jenkin nightmare, the objection to Darwin's concept according to which beneficial changes would be "diluted" after several generations in the progeny of organisms in which they occurred. The genetic basis of evolution was established after the rediscovery of Mendel's laws, with the development of population genetics in the first third of the twentieth century, primarily through the pioneering work of Fisher, Wright, and Haldane (Fisher, 1930; Haldane, 1932). The new, advanced understanding of evolution, informed by theoretical and experimental work in genetics, was consolidated in the Modern Synthesis of evolutionary biology, usually associated with the names of Dobzhansky, Julian Huxley, Mayr, and Simpson (Dobzhansky, 1937; Simpson, 1944). Apparently, the Modern Synthesis reached its mature form during the 1959 centennial celebration for the Origin in Chicago (Tax and Callender, 1960; Browne, 2008). Now, 50 years after the consolidation of the Modern Synthesis, evolutionary biology undoubtedly faces a new major challenge and, at the same time, the prospect of a new conceptual breakthrough (Rose and Oakley, 2007). If the Modern Synthesis can be succinctly described as Darwinism in the Light of Genetics (often referred to as neodarwinism), then the new stage is Evolutionary Biology in the Light of Genomics and Microbiology. The combination of genomics and microbiology is indeed critical in the advent of this new age of evolutionary biology (Koonin and Wolf, 2008; Koonin, 2009a; Woese and Goldenfeld, 2009). Lamarck and Darwin (let alone Lucretius) were plainly unaware of the existence of genomes and microbes. The architects of the Modern Synthesis certainly knew about genomes and microbes "in principle" but, in the former case, did not know enough to incorporate information on genomes beyond the (important but limited) level of formal genetics, and, in the latter case, did not realize the importance of microbes for understanding evolution at all. In this article, we attempt to outline the key changes to the basic tenets of evolutionary biology brought about primarily by comparative and functional microbial genomics and argue that, in many respects, the genomic stage could be a more radical departure from the Modern Synthesis than the latter was from classic Darwinian concepts.

FROM THE TREE OF LIFE TO THE WEB OF GENE TREES

The famous sole illustration of the Origin of Species shows a Tree of Life (or, more precisely, a series of trees presumably depicting the evolution of different divisions of organisms). Obviously, Darwin was not the first to use a tree to depict history. Before him, trees had been employed for many centuries to capture human genealogy, e.g., that of the Old Testament patriarchs as well as later monarchs. Darwin, however, was the first to make the crucial conceptual step by boldly proposing that the entire history of life could (at least in principle) be accurately represented by a tree growing from a single root.
Darwin's tree was purely schematic, without any attempt to assign real life forms to the branches, but in just a few years Ernst Haeckel populated the tree with a huge variety of organisms, almost exclusively animals (Haeckel, 1997). Haeckel inferred the relationships between organisms reflected in the topology of his tree primarily from the data of comparative anatomy, a field that was already advanced in his day. Over the next century, there was considerable progress in this field leading to improved resolution of the tree, but qualitatively the situation did not change. Phylogeny largely served as a tool for systematics, and the architects of the Modern Synthesis were much more interested in mechanisms of microevolution and speciation than in the course of macroevolution that is supposedly reflected in the Tree of Life. Although by mid-twentieth century microbiologists had realized full well that microbes possess genomes and can mutate, and accordingly should evolve, in principle, similarly to animals and plants, all attempts to infer microbial evolution from morphological and physiological characters had been unqualified failures (Stanier and Van Niel, 1962). The fortunes of phylogeny and microbial evolution changed abruptly in the late 1970s when Carl Woese and colleagues realized that the nucleotide sequence of a universally conserved molecule, 16S rRNA, could be used to infer a universal phylogenetic tree (rather incredibly, from today's vantage point, Woese's original seminal work employed oligonucleotide maps of 16S rRNA rather than sequences; however, the actual sequences became readily available shortly thereafter, and the main conclusions of the early studies stood) (Woese, 1987). Comparison of 16S rRNA sequences swiftly led to the discovery of a distinct domain of life, the Archaea, and its distinct phylogenetic affinity with the eukaryotes (Woese and Fox, 1977; Woese et al., 1990; Woese, 2004). Over the following few years, major phyla of Bacteria, Archaea and unicellular eukaryotes were established (Woese, 1987), and the famous tripartite tree (Figure 1) emerged as the paradigm of the history of cellular life on earth, which it more or less remains to this day (Woese et al., 1990; Pace, 1997, 2006, 2009). This was a veritable triumph of molecular phylogenetics and a dramatic departure from Haeckel's Tree of Life. In Haeckel's tree, Protista (unicellular eukaryotes) and Monera (bacteria) occupied unspecified positions near the root. For all practical purposes, these measly, tiny creatures were not considered important in the big picture of evolution. The tripartite tree of Woese and colleagues was a complete change of perspective. Now, two of the three domains of life were represented by prokaryotes (former Monera), and within the eukaryote domain, the majority of the phyla were represented by unicellular organisms (former Protista). The life forms formerly considered "important," i.e., the complex multicellular organisms (animals and plants), represent only two among the numerous branches of eukaryotes. There is no denying the fact that the true biodiversity on this planet is the diversity of unicellular microbes. In the 1980s, when the paradigmatic status of the three-domain Tree of Life was established, there was little concern over the fact that technically this tree represented the history of only one gene, even if a universally present and highly conserved one. The 16S rRNA was unanimously considered a suitable reference gene to represent the evolution of the respective organisms.
Other universal genes, such as those encoding ribosomal proteins or RNA polymerase subunits, were thought to be important only to the extent that their inclusion could improve the resolution of phylogenetic trees. Even long before the advent of the genomic era, microbiologists realized that bacteria had the capacity to exchange genetic information via horizontal gene transfer (HGT), in some cases producing outcomes of major importance, such as antibiotic resistance (Syvanen and Kado, 2002). Multiple molecular mechanisms of HGT have been described, including plasmid exchange, transduction (HGT mediated by bacteriophages), and transformation (Bushman, 2001) [indeed, the phenomenon of transformation was employed by Avery and colleagues to demonstrate the genetic function of DNA in 1944 (Avery et al., 1944a)]. However, despite these discoveries, HGT was generally viewed as a minor phenomenon that was important only under special circumstances and, in any case, did not in any manner jeopardize the Tree of Life that could be reconstructed by phylogenetic analysis of rRNA and other conserved genes. This comfortable belief was abruptly shattered when the early findings of comparative genomics of bacteria and archaea in the late 1990s indicated that, at least in some prokaryotic genomes, a substantial fraction of genes were acquired via demonstrable HGT, sometimes across long evolutionary distances. The pathogenicity islands and similar symbiosis islands that comprise over 30% of the genome in many pathogenic and symbiotic bacteria and obviously travel between bacteria via HGT are the prime case in point (Hacker and Kaper, 2000; Perna et al., 2001). Perhaps more strikingly, comparative analysis of the genomes of hyperthermophilic bacteria and archaea has suggested that in shared habitats even HGT between the two domains of prokaryotes, Archaea and Bacteria, can be extensive, with up to 20% of the genes of bacterial hyperthermophiles showing archaeal affinity (Aravind et al., 1998; Nelson et al., 1999; Koonin et al., 2001). Subsequent phylogenomic studies (that is, analysis of phylogenies of multiple genes from numerous genomes) have led to a shocking realization: in prokaryotes at least, there seem not to exist two genes with the exact same evolutionary history (Gogarten and Townsend, 2005; Gribaldo and Brochier, 2009; Zhaxybayeva, 2009; Boto, 2010; Andam and Gogarten, 2011; Zhaxybayeva and Doolittle, 2011). Apparently, this is so because all genes have experienced HGT at some stage(s) of their evolution. Although some genes, in particular those that encode components of the translation system, show substantial congruency (but not actual identity) between each other and with the standard rRNA tree, the number of such congruent trees is small. In a memorable phrase of Bill Martin and Tal Dagan, the ribosomal tree of life is at best "a tree of one percent" (of all genes in microbial genomes) (Dagan and Martin, 2006). Thus, "evolution of prokaryotes and the Tree of Life are two different things" (Bapteste et al., 2009; Martin, 2011). Then, the question arises: is there any substantial tree component in evolution at all and, accordingly, does it make any sense to speak of HGT? Indeed, horizontal transfer can be defined as such only against some standard of vertical evolution (Bapteste et al., 2005; Doolittle and Bapteste, 2007; Bapteste and Boucher, 2009).
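As an illustration of the "phylogenetic affinity" logic behind estimates such as the 20% archaeal affinity cited above, the toy sketch below flags putative cross-domain transfers by comparing each gene to panels of homologs from the two domains, using a crude k-mer Jaccard similarity as a stand-in for the BLAST searches and gene-tree analyses used in the real studies. All sequences and panels are invented for illustration.

```python
# Toy "phylogenetic affinity" screen for HGT candidates: each gene is
# scored against homolog panels from its own domain and the other domain;
# a higher score against the foreign panel flags a putative transfer.

def kmers(seq, k=4):
    """Set of overlapping k-mers of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b):
    ka, kb = kmers(a), kmers(b)
    return len(ka & kb) / len(ka | kb)

def best_hit(seq, panel):
    """Best similarity of a gene against a panel of homologs."""
    return max(jaccard(seq, p) for p in panel)

# Invented homolog panels (a real analysis would use curated databases)
bacterial_panel = ["ATGGCTAAAGGTCTGACCGAAGAA", "ATGGCAAAAGGCCTGACTGAAGAG"]
archaeal_panel  = ["ATGTCCGAGATACCTTACGAGGTT", "ATGTCAGAGATCCCGTATGAAGTC"]

genome = {  # genes of a toy hyperthermophilic bacterium
    "geneA": "ATGGCTAAAGGTCTGACCGAAGAG",   # looks bacterial (vertical)
    "geneB": "ATGTCCGAGATCCCTTACGAGGTC",   # looks archaeal -> HGT candidate
}

for name, seq in genome.items():
    b = best_hit(seq, bacterial_panel)
    a = best_hit(seq, archaeal_panel)
    tag = "putative archaeal HGT" if a > b else "vertical (bacterial)"
    print(f"{name}: bacterial={b:.2f} archaeal={a:.2f} -> {tag}")
```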
As Martin and Dagan wryly noted, if a model (in this case, the Tree of Life model) adequately describes 1% of the data, it might be advisable to abandon it and search for a better one (Dagan and Martin, 2006). Such an alternative indeed has been proposed in the form of a dynamic network of microbial evolution in which the nodes are bacterial and archaeal genomes, and the edges are the fluxes of genetic information between the genomes (Kunin et al., 2005; Dagan and Martin, 2009; Dagan, 2011; Kloesges et al., 2011). In the extreme, such a network has no vertical, tree-like component, whereas the weights of the edges differ depending on the intensity of the gene exchange (Figure 2). Moreover, it has been persuasively argued that "tree thinking in biology" might be a sheer myth, however deeply entrenched in the textbooks and the minds of biologists (Bapteste et al., 2005; Doolittle and Bapteste, 2007; Bapteste and Boucher, 2009). Indeed, there is potential for tree-like patterns to emerge from relationships that have nothing to do with common descent, as exemplified by Doolittle and Bapteste by the distribution of human names across the departments of France (Doolittle and Bapteste, 2007). One could argue, however, that the tree pattern is not at all illusory but, on the contrary, is intrinsic and central to the entire process of biological evolution. The relevance and generality of this pattern plainly follows from the fundamental character of the replication process that underlies the evolution of life (Koonin and Wolf, 2009b). Successive generations of replicating genomes (and accordingly, dividing cells) follow an inherently binary branching pattern that, over generations, naturally yields a tree. The tree pattern is predicated on a low rate of intragenic recombination, which is indeed the case for all evolutionary distances large enough to prevent homologous recombination. Accordingly, the evolutionary history of individual genes can be adequately represented by trees (the practical problems of accurate phylogeny reconstruction notwithstanding). A natural, key question to ask then is: are the topologies of the trees for individual genes substantially congruent? In other words, is it possible to identify a statistically significant central trend in the vast "forest" of gene trees? Statistical analysis of thousands of phylogenetic trees for diverse genes of prokaryotes (in fact, all genes with a sufficient degree of conservation to obtain a reliable tree topology) has shown that a highly significant central trend is indeed detectable in the phylogenetic forest (Puigbo et al., 2009; Koonin et al., 2011). Moreover, the consensus topology of the supertree of the (nearly) universal genes (the notorious 1%) turned out to be the best approximation of that central trend.

FIGURE 2 | A network representation of the evolutionary process. The network still includes some tree components, such that the three domains of cellular life remain distinct, but there is also an extensive horizontal component of genetic information flow that in particular dominates the earliest stages of evolution (Koonin and Wolf, 2008).
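To make the notion of a "central trend in the phylogenetic forest" concrete, here is a minimal, self-contained sketch (not the actual method of the cited studies, which used far more sophisticated statistics): gene trees are compared by a rooted Robinson-Foulds-style distance, and the tree with the smallest total distance to all others (the medoid) stands in for the statistical trend. Tree topologies and taxon names are toy examples.

```python
# Finding the "central trend" (medoid) in a toy forest of gene trees.
# Trees are rooted topologies given as nested tuples of leaf names; the
# distance is the symmetric difference of their clade sets.

def collect_clades(tree, acc):
    """Recursively gather clades (frozensets of leaves) into acc."""
    if isinstance(tree, str):            # a leaf
        return frozenset([tree])
    leaves = frozenset()
    for child in tree:
        leaves |= collect_clades(child, acc)
    acc.add(leaves)
    return leaves

def clade_set(tree):
    acc = set()
    collect_clades(tree, acc)
    return acc

def rf_distance(t1, t2):
    """Rooted Robinson-Foulds-style distance between two topologies."""
    return len(clade_set(t1) ^ clade_set(t2))

# Toy forest: most gene trees agree on ((A,B),(C,D)); one is discordant,
# e.g., because of HGT of that gene.
forest = [
    (("A", "B"), ("C", "D")),
    (("A", "B"), ("C", "D")),
    ((("A", "B"), "C"), "D"),
    (("A", "C"), ("B", "D")),            # discordant topology
]

totals = [sum(rf_distance(t, u) for u in forest) for t in forest]
medoid = forest[totals.index(min(totals))]
print("central-trend (medoid) topology:", medoid)
```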
Thus, although any phylogenetic tree of a central, conserved component of the cellular information-processing machinery (such as rRNA or the set of universal ribosomal proteins) represents only a minority of the phylogenetic signal across the phylogenetic forest (see details below) and so by no account can be considered an all-encompassing "Tree of Life," neither is such a phylogeny an arbitrary and irrelevant "tree of 1%." On the contrary, these trees represent a central evolutionary trend and reflect a "statistical tree of life" (O'Malley and Koonin, 2011).

THE DYNAMIC GENE UNIVERSE

For decades microbiologists knew that bacteria sometimes exchange genes (Low and Porter, 1978; Arber, 1979; Campbell, 1981; Syvanen, 1985, 1994). Moreover, the phenomena of transformation, the acquisition of new traits via import of DNA from the environment and integration of the imported molecules into the bacterial genome, and transduction, the transfer of genetic markers by bacteriophages, have been studied in considerable detail. In fact, transformation was the basis of the seminal 1944 experiments of Avery and colleagues which demonstrated that the genetic material of bacteria consisted of DNA (Avery et al., 1944b). In addition, microbiologists realized that such HGT could exert well-defined, major biological effects such as conferring pathogenicity (as in Avery's experiments) or antibiotic resistance on the recipients of horizontally transferred genes. However, all this knowledge notwithstanding, in the pre-genomic era HGT was considered a highly specialized genetic pathway rather than the mainstream of microbial evolution. Comparative genomics brought the shocking realization that bacterial and archaeal genomes were literally shaped by HGT. This was clearly demonstrated by early analyses of the genomes of bacterial hyperthermophiles that were shown to contain about 20% of genes of obvious archaeal origin (Aravind et al., 1998; Nelson et al., 1999; Koonin et al., 2001); conversely, genomes of mesophilic Archaea, such as Methanosarcina, encompass roughly the same proportion of genes clearly derived from bacteria (Deppenmeier et al., 2002; Galagan et al., 2002). These are striking examples of extensive gene exchange between the most distant prokaryotes that is stimulated by cohabitation. Not unexpectedly, the extent of gene exchange is far greater between more closely related organisms, even if often more difficult to detect (Abby et al., 2012). Nevertheless, phylogenomic analysis of a variety of bacteria and archaea clearly reveals their mosaic origins: different genes affiliate with homologs from different organisms (Sicheritz-Ponten and Andersson, 2001; Koonin, 2003; Esser et al., 2007; Koonin and Wolf, 2008; Kloesges et al., 2011). These findings have been encapsulated in the concept of the Rhizome of Life, under which the history of any given genome can be represented as a rhizome, with diverse sources and evolutionary histories for different genes (Raoult, 2010; Merhej et al., 2011). Recent, detailed studies indicate that at least in tight microbial communities, such as the human gut microbiota, gene exchange is constant and rampant (Smillie et al., 2011). In the face of the increasingly apparent genomic promiscuity, one cannot help asking whether "horizontal gene transfer" is a viable concept at all: indeed, for any extended span of evolution, HGT will be identifiable if and only if there is some objectively definable "vertical" standard to compare against.
Otherwise, all genetic exchanges would be equal, and the only adequate depiction of evolution would be an undirected network graph. Thus, the validity of the tree representation of evolution and the very existence of HGT are inextricably linked. The results of exhaustive comparison of the individual gene trees in the "phylogenetic forest" discussed in the preceding section reveal the existence of substantial coherence of phylogenetic tree topologies, especially among highly conserved, (nearly) ubiquitous genes that encode components of the translation system (Puigbo et al., 2009). There are many exceptions to this generalization, including extensive HGT of genes coding for aminoacyl-tRNA synthetases (Wolf et al., 1999; Woese et al., 2000) and even multiple cases of HGT of genes encoding ribosomal proteins (Brochier et al., 2000; Makarova et al., 2001; Yutin et al., 2012). Nevertheless, these genes appear to comprise a single, co-evolving ensemble, in at least general agreement with the so-called complexity hypothesis (Jain et al., 1999; Wellner et al., 2007; Abby et al., 2012). Under the complexity hypothesis, HGT of genes encoding subunits of macromolecular complexes is largely suppressed because of the deleterious effect caused by disruption of interactions refined over long periods of co-evolution. Indeed, a recent analysis has shown that it is the involvement in complex formation that shows a strong negative correlation with the rate of HGT, rather than any specific biological function (Cohen et al., 2011). Thus, genes encoding many translation system components probably co-evolve and accordingly are rarely horizontally transferred because they are preferentially involved in large complexes (above all, the ribosome itself), rather than owing to their special biological importance or any other peculiarities of their biological function. Other genes show a much weaker but also significant phylogenetic coherence with the nearly universal genes for translation system components, perhaps also reflecting involvement in complex formation. The same series of phylogenomic studies that demonstrated the validity of the statistical tree of life quantified the contributions of tree-like (vertical) and web-like (horizontal) gene transmission to the relationships between bacterial and archaeal genomes (Puigbo et al., 2010). The results came out remarkably different for the ∼100 nearly universal trees and the rest of the trees in the phylogenetic forest. The evolution of the nearly universal trees is dominated by the tree-like trend, which contributes approximately 2/3 of the evolutionary information, whereas in the rest of the forest the ratio is the opposite, with about 2/3 of the signal coming from horizontal gene exchange (Figure 3). The extensive HGT that permeates the prokaryote world is the source of gene gain by bacterial and archaeal genomes. Perhaps the best characterized case of massive gene gain is the emergence of pathogenic bacterial strains that often evolve by acquiring the so-called pathogenicity islands that sometimes comprise over 30% of the pathogen's genome, as first revealed by the comparison of the genomes of laboratory and wild strains of E. coli (Perna et al., 2001; Zhang et al., 2007; Eppinger et al., 2011). The opposite trend, gene loss, is at least as prominent as gene gain via HGT (Snel et al., 2002; Mirkin et al., 2003).
FIGURE 3 | Tree-like (vertical) and web-like (horizontal) contributions in the evolution of nearly universal genes and the entire phylogenetic forest. The two heat maps schematically depict comparison of bacterial and archaeal genomes as described previously (Puigbo et al., 2010). [Recovered panel labels: nearly universal trees vs. other trees; tree-like vs. net-like components; values 0.66 and 0.39.]

A prime example is the evolution of intracellular parasites and symbionts, for example, Buchnera, a close relative of E. coli that lost about 90% of the ancestral genes (Perez-Brocal et al., 2006); several other intracellular bacterial parasites and symbionts show even more drastic genome reduction (Klasson and Andersson, 2004; Perez-Brocal et al., 2006; McCutcheon and Moran, 2012). The balance between gene gain and gene loss translates into a distinct shape of the distribution of gene occurrence in prokaryote pangenomes at all levels, from closely related bacteria (e.g., those of Enterobacteria) to the entirety of sequenced bacterial and archaeal genomes (Koonin and Wolf, 2008; O'Malley and Koonin, 2011). This universal distribution has an asymmetric U-shape and can be approximated by three exponential functions (Figure 4). The first of these corresponds to a small, highly conserved core (the nearly universal genes discussed above); the second exponent describes the much larger "shell" of genes with limited conservation; and the third one delineates the vast "cloud" of rare, poorly conserved genes (a toy numerical fit of this three-exponential form is sketched below, after this section's opening paragraph). Thus, the gene universe is dominated by rare, sparsely distributed genes, most of which are not covered by the limited available sampling of genomes and still remain to be discovered, although in each particular genome the moderately conserved "shell" genes comprise the majority (Figure 5). The dynamic, fluid character of the prokaryote genomes yields a distinct, fractal-like structure of the gene universe (O'Malley and Koonin, 2011).

ARE THERE SPECIES IN PROKARYOTES?

The title of Darwin's seminal book "The Origin of Species" is deeply steeped in traditions of eighteenth and nineteenth century biology that tended to view animal and plant species as key units of biological organization. Darwin himself actually saw species more as an arbitrary category in the continuum of varying life forms than a fundamental unit of life. In the twentieth century the species concept received its biological interpretation, primarily in the work of Ernst Mayr, who famously defined a species as a system of panmictic populations that are genetically isolated from other such systems (Mayr, 1944). This concept indeed captures a key feature of the biology of organisms with regular, obligatory sexual reproduction such as, above all, animals and, to a lesser extent, plants. Most prokaryotes do not engage in regular sex but instead exchange genes via HGT with the diverse other microbes that they happen to cohabit with. In general, in the prokaryote world there are indeed no discrete, genetically isolated systems of panmictic populations but rather complex webs of gene exchange (Dagan et al., 2008; Koonin and Wolf, 2008). Thus, the very notion of species as a distinct biological category does not apply, even though traditionally bacteria and archaea are still denoted by Linnaean species names (e.g., Escherichia coli or Haloferax volcanii) (Konstantinidis et al., 2006; Cohan and Perry, 2007; Doolittle and Zhaxybayeva, 2009; Fraser et al., 2009).
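Returning briefly to the gene-commonality distribution described above, here is a minimal numerical sketch of the three-exponential approximation, assuming synthetic data and a simplified functional form in which the "core" term rises toward the full genome count while the "shell" and "cloud" terms decay. The counts and parameters are invented for illustration and the exact parameterization used in the cited analyses may differ; the discussion of prokaryotic species continues after the sketch.

```python
# Fitting a toy U-shaped gene-commonality curve (cloud + shell + core)
# with three exponential terms, as described for prokaryote pangenomes.
import numpy as np
from scipy.optimize import curve_fit

N_GENOMES = 100
k = np.arange(1, N_GENOMES + 1, dtype=float)   # gene present in k genomes

def u_shape(k, a1, t1, a2, t2, a3, t3):
    """Cloud (fast decay) + shell (slow decay) + core (rise toward k = N)."""
    return (a1 * np.exp(-k / t1)
            + a2 * np.exp(-k / t2)
            + a3 * np.exp(-(N_GENOMES - k) / t3))

# Synthetic counts standing in for a real pangenome commonality histogram
rng = np.random.default_rng(0)
counts = rng.poisson(u_shape(k, 5000, 2.0, 600, 15.0, 120, 4.0)).astype(float)

popt, _ = curve_fit(u_shape, k, counts,
                    p0=(4000, 1.0, 500, 10.0, 100, 3.0), maxfev=20000)
for name, a, t in zip(("cloud", "shell", "core"), popt[0::2], popt[1::2]):
    print(f"{name:5s}: amplitude = {a:7.0f}, scale = {t:5.1f}")
```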
However, the modes of evolution substantially differ across the diversity of prokaryotes, spanning the entire continuum from fully sexual to fully clonal populations (Smith et al., 1993; Doolittle and Zhaxybayeva, 2009). Some bacteria, especially parasites such as Neisseria gonorrhoeae, have been shown to form largely isolated communities that engage in regular conjugation, the bacterial equivalent of sex, resulting in extensive homologous recombination. For these distinct organisms, but not for the majority of bacteria and archaea, Mayr's biological definition of species might be a relevant concept. The irrelevance of the (traditional) species concept for most prokaryotes by no means implies non-existence of structure in the genome space. Indeed, bacteria and archaea that share common origin in phylogenetic trees of marker genes, such as rRNA, typically also possess similar gene content. The "genome-trees" constructed on the basis of the (dis)similarity of gene content are generally congruent with phylogenetic trees of highly conserved marker genes, although interesting deviations that reflect similarities in life style and/or extensive gene exchange have been detected as well (Snel et al., 1999, 2005; Wolf et al., 2002). Thus, although the bacterial and archaeal "species" are not species in the regular sense, they are "galaxies" in the gene universe that form distinct, hierarchical clusters. Interestingly, it has been shown that, among the processes that lead to the divergence of gene content between evolving lineages of prokaryotes, gene loss appears to occur stochastically and generally follows the divergence of marker genes, whereas gene gain (primarily via HGT) is more episodic (Snel et al., 2002; Novichkov et al., 2004).

DOES EVOLUTION ADVANCE COMPLEXITY?

The idea of a general evolutionary trend toward increasing complexity is extremely popular among both the lay public and scientists, and certainly was shared by Darwin, who wrote, in a famous passage: "as natural selection works solely by and for the good of each being, all corporeal and mental endowments will tend to progress toward perfection" (Darwin, 1859). This view does not imply any mysterious striving for perfection as imagined by some pre-Darwinian biologists including Lamarck (1809), or teleology of any kind. Nevertheless, Darwin's position does suggest a trend of evolution from simple to complex forms, which is indeed a highly intuitive notion that has some obvious support in well known facts of the history of life on earth. For example, the most organizationally complex organisms with the largest genomes, animals and plants, appear only at relatively late stages of evolution. Even more generally, at the earliest stages in the evolution of life, the origin of complex structures, such as the cell itself, "from so simple a beginning" (Darwin, 1859) appears inevitable. Thus, notwithstanding the numerous cases of reductive evolution, in particular among parasites and symbionts, the belief in a general complexification trend in the evolution of life appears to be common. However, is complexification the prevailing modality of evolution? Phylogenomic reconstruction, at least for bacteria and archaea, suggests otherwise. It is not surprising that differential gene loss dominates the evolution of commensal bacteria, such as Lactobacilli, from a complex free-living ancestor (Makarova et al., 2006).
However, a qualitatively similar pattern was detected in evolutionary reconstructions for all bacteria and archaea (Snel et al., 2002; Mirkin et al., 2003; Makarova et al., 2007). Strikingly, more recent reconstructions that were performed using larger genome sets and more sophisticated computational methods confidently indicate that the genome of the last common ancestor of all extant archaea apparently was at least as large and complex as that of typical modern organisms in this domain of cellular life (Csuros and Miklos, 2009). Fully compatible reconstruction results have been reported for the expanded set of cyanobacterial genomes (Larsson et al., 2011). Thus, counter-intuitively, at least in prokaryotes, genome shrinkage, which is sometimes called streamlining (Lynch, 2006) and is attributed to increasing selective pressure in successful, large populations (Lynch, 2006; Koonin, 2009b), appears to be no less, and probably more, common than genome growth and complexification.

THE WRIGHTEAN-DARWINIAN-LAMARCKIAN CONTINUUM OF EVOLUTIONARY PROCESSES

The Modern Synthesis of evolutionary biology emphasizes the randomness of mutations that provide the starting material for selection, which engenders survival of the fittest under the given conditions and hence constitutes the adaptive, deterministic component of evolution. The insistence on such strict separation between the stochastic and deterministic aspects of evolution departs from Darwin's view, which included Lamarckian inheritance (adaptive mutations directly caused by environmental cues) as an important, even if ancillary, mechanism of evolution (Darwin, 1872). Recently, several genetic phenomena with a distinct Lamarckian flavor have been discovered (Koonin and Wolf, 2009a; O'Malley and Koonin, 2011). Probably the most striking case is the system of adaptive antivirus immunity, known as CRISPR-Cas (Clustered Regularly Interspaced Palindromic Repeats and CRISPR-associated proteins), that is present in most archaea and many bacteria (van der Oost et al., 2009; Marraffini and Sontheimer, 2010). The CRISPR-Cas system integrates fragments of virus or plasmid DNA into a distinct, repetitive locus in the archaeal or bacterial genome. The transcript of this unique spacer functions as a guide RNA that is incorporated into a specific complex of Cas proteins possessing DNase activity and directs this complex to the cognate alien DNA (or RNA) molecules, which are cleaved and accordingly inactivated. The CRISPR-Cas system is amazingly efficient, with a failure rate of only about 10^-5 (Deveau et al., 2008). This mechanism qualifies CRISPR-Cas as an adaptive immunity system, i.e., an immunity system that adapts to a specific infectious agent, a novelty in prokaryotes (Bikard and Marraffini, 2012). Furthermore, the Lamarckian principle of inheritance and evolution is apparent in the mechanism of CRISPR-Cas function. Indeed, this system directly responds to an environmental cue (in this case, foreign DNA) by introducing a genetic change into the genome that is immediately adaptive with respect to that particular cue. The discovery of the CRISPR-Cas immune system that functions on the Lamarckian principle drew attention to other phenomena that also seem to contain a Lamarckian component (Koonin and Wolf, 2009a; O'Malley and Koonin, 2011). Some of the common, central evolutionary processes, such as HGT and stress-induced mutagenesis, show a "quasi-Lamarckian" character.
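The Lamarckian loop of CRISPR-Cas just described (environmental cue, genomic change, inherited cue-specific adaptation) can be caricatured in a few lines of code; the discussion of the quasi-Lamarckian processes continues after the sketch. Everything here is a toy: the sequences are random, spacer acquisition is reduced to copying a substring, and interference is reduced to a substring match.

```python
# A toy model of the Lamarckian logic of CRISPR-Cas: exposure to a phage
# writes a spacer (a fragment of the phage genome) into the host's CRISPR
# array, and the inherited spacer then confers sequence-specific immunity.
import random

random.seed(42)
SPACER_LEN = 8

def random_genome(n):
    return "".join(random.choice("ACGT") for _ in range(n))

class Cell:
    def __init__(self, spacers=()):
        self.spacers = list(spacers)              # the CRISPR array

    def immune_to(self, phage_genome):
        # interference: a Cas-guide complex matches the invader's sequence
        return any(sp in phage_genome for sp in self.spacers)

    def acquire_spacer(self, phage_genome):
        # adaptation: write a fragment of the invader into the own genome
        start = random.randrange(len(phage_genome) - SPACER_LEN)
        self.spacers.append(phage_genome[start:start + SPACER_LEN])

phage = random_genome(200)
cell = Cell()
print("naive cell immune:  ", cell.immune_to(phage))     # False
cell.acquire_spacer(phage)                                # survives exposure
daughter = Cell(cell.spacers)                             # array is heritable
print("daughter cell immune:", daughter.immune_to(phage)) # True
```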
Indeed, even if HGT cannot be viewed as being directly caused by a specific environmental factor, it certainly is the case that the repertoire of the acquired genes depends on the environment. Genes common in a given environment will be acquired often and are likely to possess adaptive value. Stress-induced mutagenesis is triggered directly by environmental stress factors, e.g., desiccation or radiation, and produces variation that is required to develop a resistant phenotype (Rosenberg and Hastings, 2003; Ponder et al., 2005; Galhardo et al., 2007; Galhardo and Rosenberg, 2009). The mutations are not specific to the biologically relevant loci, but the activity of the molecular machineries of stress-induced mutagenesis [the best characterized of which is the SOS repair-mutagenesis system in bacteria (Sutton et al., 2000)] generates clusters of mutations, thus locally amplifying variability and so increasing the chance of adaptation once a single mutation appears in a relevant gene (Galhardo et al., 2007). More generally, recent empirical and theoretical studies of diverse processes of stochastic and deterministic change in genomes make it clear that evolution is not limited to the basic Darwinian scheme of random variation that is subject to selection. Evolution can be more adequately depicted as a continuum of processes: from completely random ones, under the Wrightean modality defined by random variation and random fixation of changes via genetic drift; to the Darwinian modality with random changes fixed by the deterministic process of selection; to the Lamarckian mode in which both variation and fixation are deterministic (Figure 6).

EVOLUTION OF EVOLVABILITY: DEDICATED MECHANISMS FOR EVOLUTION

All organisms possess a certain degree of evolvability, i.e., the ability to evolve. At the most basic level, evolvability stems from the theoretical impossibility of error-free replication. Genomic variation in evolving organisms is created by a combination of intrinsic replication errors, recombination and mutations induced by external agents (mutagens). An intriguing, fundamental question in evolutionary biology is whether or not evolvability itself can evolve under selection, or, put another way, whether there are dedicated mechanisms of evolution (Kirschner and Gerhart, 1998; Poole et al., 2003; Pigliucci, 2008; Brookfield, 2009). The prevailing wisdom among biologists seems to be that evolvability is not selectable but is simply maintained at a sufficient level by inevitable errors at all levels of biological information processing. Under this view, selection is always directed at minimization of the error rate, but the ability to attain perfection is limited by genetic drift, resulting in sufficient evolvability (Lynch, 2011). Evolutionary biologists are usually suspicious of the evolution of evolvability, generally under the old adage that "evolution has no foresight." Nevertheless, evidence in support of "evolvability of evolvability" is mounting. The very existence of complex molecular systems for stress-induced mutagenesis (error-prone repair), the activity of which is exquisitely regulated in response to stress, implies that mechanisms enhancing variation when variation is needed for survival have evolved (Galhardo et al., 2007). Another remarkable mechanism that appears to have specifically evolved to generate variation involves the Diversity-Generating Retroelements (DGRs) (Medhekar and Miller, 2007).
Strikingly, the DGRs are found both in bacteriophages, where they generate diversity in cell attachment surface proteins via reverse transcription-mediated mutagenesis, resulting in host tropism switching (Doulatov et al., 2004; Guo et al., 2008), and in bacteria themselves, where they produce receptor variation leading to bacteriophage resistance (Bikard and Marraffini, 2012). The analogy between the activity of DGRs and hypermutagenesis in animal immune systems is obvious, except that the variation generated by the DGRs is inherited. Many bacteria and some archaea possess the natural transformation ability (that was used in the Avery experiment), which requires specialized, complex pumps (recently denoted transformosomes) that internalize DNA from the environment (Claverys et al., 2009; Johnsborg and Havarstein, 2009; Kruger and Stingl, 2011). The transformation machinery potentially could be viewed as a device that evolved under selective pressure to enhance HGT (Johnsborg and Havarstein, 2009). However, one could argue that the enhancement of HGT is only a side effect of the evolution of the transformation system, its actual raison d'être being the utilization of DNA as a rich source of replication substrates (or simply food). This argument can hardly hold with regard to the type 4 secretion systems (T4SS) that specialize in secretion of DNA from bacterial cells (Hamilton et al., 2005; Hamilton and Dillard, 2006). The recently discovered Gene Transfer Agents (GTAs) are even more striking devices for DNA donation (Paul, 2008; McDaniel et al., 2010; Lang et al., 2012). The GTAs are a distinct type of defective bacteriophages that package in the capsid not the phage genome (which remains integrated in the host chromosome) but rather apparently random pieces of the host chromosome. The GTAs have been discovered in diverse bacteria and archaea and have been shown to infect and transfer their genetic content to a broad range of cohabitating prokaryotes (McDaniel et al., 2010). It does not seem conceivable that GTAs are anything but dedicated HGT vehicles. An additional notable aspect of T4SS and GTAs is that these devices mediate donation rather than consumption of DNA, i.e., apparently can directly benefit other microbes (recipients) rather than the donor. This seemingly altruistic behavior can be explained in terms of group selection, whereby the object of selection is an ensemble of organisms that jointly benefit from adaptive mutations rather than a single organism. Group selection is a controversial subject in evolutionary biology (Maynard Smith, 1998; Borrello, 2005; Leigh, 2010), but the existence of dedicated devices for DNA donation appears to be a strong argument in its favor. The discovery of T4SS and GTAs may be the most clear-cut pieces of evidence supporting evolution of evolvability, just as the CRISPR-Cas system is the showcase for Lamarckian evolution. However, the case for the evolution of mechanisms for evolution seems to be much more general (O'Malley and Koonin, 2011). Population genetic theory holds that, under a broad range of conditions, a clonal population is generally doomed to collapse through the action of Muller's ratchet, the irreversible accumulation of deleterious mutations leading to gradual decline in fitness (Leigh, 2010; Bachtrog and Gordo, 2004). The effect of Muller's ratchet has been directly demonstrated in controlled evolutionary experiments on RNA viruses (Chao, 1990; Duarte et al., 1992) and on bacteria (Andersson and Hughes, 1996).
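The clicking of Muller's ratchet is also easy to reproduce in a toy Wright-Fisher simulation of an asexual population: with no recombination, the class of individuals carrying the fewest deleterious mutations is repeatedly lost by drift and can never be reconstituted. All parameters below are arbitrary illustrations, not values from the cited experiments.

```python
# A minimal Wright-Fisher sketch of Muller's ratchet in a clonal population.
import math
import random

N, U, S, GENERATIONS = 200, 0.5, 0.02, 300   # pop size, mutation rate, cost
random.seed(1)

def poisson(lam):
    """Knuth's Poisson sampler (avoids external dependencies)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

pop = [0] * N                                 # mutation count per individual
for g in range(GENERATIONS + 1):
    if g % 100 == 0:
        print(f"gen {g:3d}: least-loaded class = {min(pop)}, "
              f"mean load = {sum(pop) / N:.2f}")
    weights = [(1 - S) ** m for m in pop]     # multiplicative fitness
    parents = random.choices(pop, weights=weights, k=N)
    pop = [m + poisson(U) for m in parents]   # clonal: mutations only add up
```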
The principal way to escape Muller's ratchet is to enhance recombination via sex (in the form of meiotic crossing over in eukaryotes and in the form of conjugation in prokaryotes) or HGT. Just as sex is generally viewed as a mechanism that evolved to counteract the ratchet, HGT may be best understood as a more general variation-generating process that is supported by various evolved mechanisms. At the risk of being provocative, sex indeed can be legitimately regarded as a specialized form of HGT. Clearly, evolution maintains HGT within the optimal range rather than at the maximum possible level, because the latter would eliminate genome stability and wreak havoc on selected high-fitness ensembles of genes (O'Malley and Koonin, 2011). Mechanisms that counter HGT also have evolved: these are the same that provide resistance against virus infection, including CRISPR-Cas and restriction-modification (Marraffini and Sontheimer, 2008; Gardner and Olson, 2012). At a different level, an apparent mechanism of evolution involves unusual, stable phenotype modifications that are widespread in bacteria and lead to the coexistence of two distinct phenotypes in a clonal population, the so-called bistability regimes (Dubnau and Losick, 2006; Veening et al., 2008a; Piggot, 2010). For instance, under limited nutrient supply, Bacillus subtilis will form two subpopulations of which only the smaller one has the capacity to sporulate and thus yields the only survivors when the conditions become incompatible with cell growth and division (Veening et al., 2008a,b; Lopez et al., 2009). The coexistence is epigenetically inherited across many bacterial generations; hence this phenomenon has become known as bistability. In theoretical and experimental models bistability is rationalized as "bet hedging": for organisms that live in frequently and unpredictably changing environments, it is beneficial to maintain a small subpopulation of likely survivors even when their fitness is comparatively low under normal conditions (Veening et al., 2008a; de Jong et al., 2011; Libby and Rainey, 2011; Rainey et al., 2011). The cost of maintaining this subpopulation is more than compensated by the benefit of survival under adverse conditions. Thus, the evolution of the regulatory circuitry that supports bistability appears to be not just a case of evolution of an evolutionary mechanism but, more specifically, evolution of a kin selection mechanism, or evolution of altruism in bacteria. The evolution of kin selection demonstrated by bet hedging is paralleled by the mechanism of altruistic suicide that virus-infected bacteria and archaea commit using the toxin-antitoxin or abortive infection defense systems (Van Melderen and Saavedra De Bast, 2009; Hayes and Van Melderen, 2011). In this case, by killing themselves early, before the virus has a chance to replicate, the microbes save their kin from infection. The reality of kin selection, just as that of group selection, is often hotly debated by evolutionary biologists (Nowak et al., 2010; Bourke, 2011; Ferriere and Michod, 2011; Strassmann et al., 2011), but the bistability/bet-hedging phenomena and altruistic suicide in bacteria and archaea seem to plainly demonstrate not only the existence but also the evolvability of this form of selection. In parallel with experimental studies, several theoretical models have been developed that characterize evolvability as a selectable trait in fluctuating environments (Earl and Deem, 2004; Jones et al., 2007; Draghi and Wagner, 2008).
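The bet-hedging argument can be made quantitative with a toy calculation: long-term growth in a fluctuating environment is governed by the average log (geometric-mean) growth rate, so a clonal lineage that maintains a small fraction of slow-growing, stress-resistant cells can outperform a pure fast-grower even though it pays a cost in good times. All growth factors and probabilities below are invented for illustration.

```python
# Toy bet-hedging: mean log growth rate of a lineage that keeps a fixed
# fraction of "survivor" cells (e.g., spore-formers) in a fluctuating
# environment. Pure fast-growers go extinct at the first stress event.
import math
import random

random.seed(7)
GOOD_W   = {"grower": 2.0, "survivor": 1.1}   # per-generation growth factors
STRESS_W = {"grower": 0.0, "survivor": 1.0}   # growers die under stress

def long_term_growth(frac_survivor, p_stress=0.05, gens=10000):
    log_growth = 0.0
    for _ in range(gens):
        w = STRESS_W if random.random() < p_stress else GOOD_W
        factor = ((1 - frac_survivor) * w["grower"]
                  + frac_survivor * w["survivor"])
        if factor == 0:
            return float("-inf")               # lineage extinct
        log_growth += math.log(factor)
    return log_growth / gens                   # mean log growth rate

for f in (0.0, 0.05, 0.5):
    print(f"survivor fraction {f:4.2f}: "
          f"mean log growth = {long_term_growth(f):+.3f}")
```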
Thus, on the whole, and general theoretical doubts notwithstanding, evolution of evolvability appears to be an intrinsic and fundamental, if still poorly understood, aspect of the evolutionary process.

THE VAST, ANCIENT WORLD OF VIRUSES

Viruses are not part of the Modern Synthesis or, more generally, of the traditional narrative of evolutionary biology. Until very recently, viruses have been viewed primarily as pathogens of animals, plants, and bacteria. Several lines of recent discovery have radically changed this view and promoted viruses to a central position on the stage of evolution. This change in the evolutionary status of viruses and related selfish genetic elements has been discussed in detail elsewhere (Claverie, 2006; Koonin et al., 2006, 2011; Raoult and Forterre, 2008). Here we quickly recapitulate several key points, with a focus on the importance of viruses for evolutionary biology in general. Metagenomic and ecological genomics studies have shown that, astonishingly, viruses are the most common biological entities on earth (Edwards and Rohwer, 2005; Suttle, 2005, 2007). Viruses and/or virus-like mobile elements are present in all cellular life forms. Strikingly, in mammals sequences derived from mobile elements and endogenous viruses account for at least 50% of the genome, whereas in plants this fraction can reach 90% (Feschotte et al., 2002; Kazazian, 2004; Devos et al., 2005; Hedges and Batzer, 2005). Even the genomes of some unicellular eukaryotes, such as Trichomonas vaginalis, consist mostly of inactivated transposons (Carlton et al., 2007; Pritham et al., 2007). Recruitment of mobile element sequences for transcription regulation and other cellular functions, such as microRNA formation, is a common phenomenon the full extent of which is not yet fully appreciated (Jordan et al., 2003; Piriyapongsa et al., 2007; Lisch and Bennetzen, 2011). Although genomes of prokaryotes are not so overwhelmed by mobile elements, due to the intense purifying selection, nearly all of them encompass multiple prophages and mobile elements. Notably, deletion of all prophages leads to a substantial drop of fitness in E. coli (Wang et al., 2010). In at least some common environments, such as ocean water and soil, the number of virus particles exceeds the number of cells by factors of 10-100 (Edwards and Rohwer, 2005; Suttle, 2007; Srinivasiah et al., 2008; Breitbart, 2012). Similarly, the genetic diversity of viruses, measured as the number of distinct genes, substantially exceeds the genetic diversity of cellular life forms. Furthermore, viruses, in particular bacteriophages, are major biogeochemical agents. Periodical killing of microbes, in particular cyanobacteria, has been identified as a major contributor to sediment formation and to the nutrient cycles in the biosphere (Suttle, 2007; Rohwer and Thurber, 2009). The same process obviously is a key determinant of the population dynamics of the hosts that shapes the selection-drift balance throughout the course of evolution (Weinbauer and Rassoulzadegan, 2004). The very fact that viruses greatly outnumber bacteria in the environment implies that antivirus defense systems are central to the evolution of bacteria and archaea.
This is indeed the case, as made evident by the remarkable proliferation of diverse antivirus systems, including CRISPR-Cas discussed above as well as multiple restriction-modification, abortive infection, toxin-antitoxin and other, still poorly characterized defense systems that in different combinations and with different abundances are present in most prokaryotes (Juhas et al., 2009; Labrie et al., 2010; Martinez-Borra et al., 2012). Taken together, these findings and theoretical considerations strongly support the view that the virus-host arms race is one of the principal processes in all evolution (Forterre and Prangishvili, 2009; Stern and Sorek, 2011). With regard to the classification of life forms, the only defensible position appears to be that viruses (and related mobile elements) and cells are the two principal categories of biological organization (Figure 7) (Raoult and Forterre, 2008; Koonin, 2010; O'Malley and Koonin, 2011); this view is independent of the semantic issue of viruses being "alive" or not (Moreira and Lopez-Garcia, 2009; Raoult, 2009). These two categories of biological entities can be characterized as informational (genetic) parasites, i.e., viruses and other selfish elements, and genetically self-sustained organisms, i.e., cellular life forms. Mathematical modeling indicates that genetic parasites inevitably emerge in any replicator system (Szathmary and Maynard Smith, 1997; Takeuchi and Hogeweg, 2012). This conclusion is certainly intuitively plausible: one expects that cheaters will appear in any system with limited resources; in particular, in any system of replicators, such parasites will attempt to utilize the replication machinery without contributing to its production (Koonin and Martin, 2005). Also, the notion that virus-like selfish elements have been an intrinsic part of life since its inception [which can be reasonably considered to coincide with the origin of replication (O'Malley and Koonin, 2011)] is compatible with the ubiquity of these elements in nature. In mathematical modeling, the outcome of the virus-host interaction depends on the specific parameters of the adopted model. In homogeneous models, virus-like parasites tend to cause collapse of the entire system, but in models with compartmentalization, which are most relevant for the actual evolution of life, stable host-parasite coexistence is possible (Takeuchi and Hogeweg, 2009). Moreover, the destructive effect of genetic parasites on the host is mitigated when a dedicated genetic information storage medium evolves, which could be one of the driving forces behind the evolution of DNA in the primordial RNA world (Takeuchi et al., 2011). Further support for the classification of viruses as one of the two "empires" of life comes from the diversity of the replication-expression cycles found among viruses and related elements. Indeed, while cellular life forms all use a uniform replication-expression strategy based on double-stranded (ds)DNA replication, transcription of genes into mRNA or non-coding RNA, and translation of mRNA into protein, viral genomes can be represented by all known forms of nucleic acids, and alternative replication processes such as RNA replication and reverse transcription are widely used (Figure 7).
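The diversity of replication-expression strategies just mentioned is conveniently summarized by a Baltimore-style look-up: cellular life uses only the dsDNA route, whereas viruses collectively realize every known genome chemistry. The route descriptions below are deliberately simplified one-liners, not complete replication cycles.

```python
# Baltimore-style summary of viral replication-expression routes.
ROUTES = {
    "dsDNA":    "DNA replication -> transcription -> mRNA (as in cells)",
    "ssDNA":    "conversion to dsDNA -> transcription -> mRNA",
    "dsRNA":    "RNA-dependent RNA synthesis -> mRNA",
    "(+)ssRNA": "genome read directly as mRNA; copied via RNA replication",
    "(-)ssRNA": "RNA-dependent transcription to mRNA",
    "ssRNA-RT": "reverse transcription -> DNA -> transcription (retroviruses)",
    "dsDNA-RT": "transcription -> RNA intermediate -> reverse transcription",
}

for genome_type, route in ROUTES.items():
    print(f"{genome_type:>9}: {route}")
```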
Finally, although viral genomes are generally small compared to the genomes of cellular life forms (viruses being the ultimate genetic parasites), the range of genomic complexity is remarkable: from only about 300 nucleotides and no genes in the simplest virus-like parasites, the viroids, to over a megabase and more than 1000 genes (genomes that are more complex than those of many bacterial parasites and symbionts) in the giant mimiviruses (Raoult et al., 2004; Colson et al., 2012). Overall, the conclusion is inescapable that the entire history of life is a story of perennial interplay between genetic parasites and their hosts that is a major driver of evolution for both biological empires.

EVOLUTION OF MICROBES AND VIRUSES: A NEW EVOLUTIONARY PARADIGM?

Prokaryotes (bacteria and archaea) and viruses entered the realm of evolution with the advent of genomics. Has the comparative study of these relatively simple (compared to eukaryotes) organisms radically changed the core tenets of evolutionary biology that were first envisaged by Darwin and were augmented with the genetic foundation in the Modern Synthesis? In terms of Kuhn's concept of the development of science (Kuhn, 1962), did the study of microbial evolution engender a paradigm shift? It is not easy to answer this question definitively, possibly because the paradigm shift model does not adequately describe the evolution of biology (regardless of whether or not it fits the evolution of physics). Probably a more appropriate epistemological framework is that of integration, i.e., a relatively smooth incorporation of the classic concepts into new, more general and versatile theoretical constructs. This model of the evolution of science was recognized by Kuhn himself in his later work (Kuhn, 2002). The phylogenomic study of microbes and viruses uncovered new biological realms which Darwin and even the authors of the Modern Synthesis could not possibly fathom. The modes of evolution of these relatively simple organisms that, as we now realize, have dominated the biosphere since its beginning about 4 billion years ago to this day (and into any conceivable future) are different from the evolutionary regimes of animals and plants, the traditional objects of (evolutionary) biology. The study of microbial evolution has shattered the classic idea of a single, all-encompassing tree of life by demonstrating that the evolutionary histories of individual genes are generally different. Remarkably, however, these developments have not rendered trees irrelevant as a key metaphor of evolution (O'Malley and Koonin, 2011). Rather, they have shown that the bona fide unit of tree-like evolution is an individual gene, not a genome, and a "tree of life" can only be conceived as a statistical trend in the "forest" of gene trees (Koonin and Wolf, 2009b). Tree-like evolution is a fundamental implication of the binary replication of the genetic material, so it served Darwin well to use a tree as the single illustration of his book. Without, obviously, knowing anything of DNA replication, Darwin grasped the central principle of the evolution of life, descent with modification, and the tree pattern followed naturally.
Microbiology yielded the first clear-cut case of Lamarckian evolution, the CRISPR-Cas system, and subsequent re-examination of other evolutionary phenomena (in both prokaryotes and eukaryotes) has strongly suggested that the (quasi)Lamarckian modality is common and important in all evolving organisms, completing the range of evolutionary phenomena from purely stochastic (drift, Wrightean evolution) to deterministic (Lamarckian evolution). Again, these findings did not so much overturn as expand the vision of Darwin, who seriously considered Lamarckian mechanisms as being ancillary to natural selection (only the Modern Synthesis banished Lamarck). Crucially, the study of microbial evolution presented apparently undeniable cases of evolution of evolvability, such as the GTAs and the DGRs. Moreover, the discovery of bet-hedging strategies and altruistic suicide in bacteria shows that kin selection (a subject of considerable controversy in evolutionary biology) is evolvable as well. Again, as in the case of Lamarckian mechanisms, these discoveries force one to re-examine many more phenomena and realize that evolution is not limited to fixation of random variation and survival of the fittest but rather is an active process with multiple feedback loops, and that dedicated mechanisms of evolution exist and themselves evolve. This is a major generalization that substantially adds to the overall structure of evolutionary biology, but one has to realize that the principle of descent with modification remains at the core of all these complex evolutionary phenomena. We now realize that the evolution of life is to a large extent shaped by the interaction (arms race but also cooperation) between genetic parasites (viruses and other selfish elements) and their cellular hosts. Viruses and related elements, with their distinctive life strategy, informational parasitism, actually dominate the biosphere both physically and genetically, and represent one of the two principal forms of life that are as intrinsic to the history of the biosphere as cells are. This new dimension of evolution simply could not be perceived by Darwin or even the creators of the Modern Synthesis due to the lack of relevant data. Thus, we are inclined to view the change in evolutionary biology brought about by phylogenomics of microbes and viruses as a case of integration rather than an abrupt departure from the paradigm of the Modern Synthesis (Figure 8). Darwin realized the importance of descent with modification and the tree pattern of evolution it implies, whereas Fisher, Wright, and Haldane derived the laws of population genetics that still constitute the core of our understanding of evolution. However, recent advances, in particular those of microbial phylogenomics, added multiple, new and interconnected layers of complexity (Figure 8), such that the conceptual core is but a small part of the current big picture of evolutionary biology.

AUTHOR CONTRIBUTIONS

Eugene V. Koonin and Yuri I. Wolf wrote the manuscript.

ACKNOWLEDGMENTS

The authors' research is supported by intramural funds of the US Department of Health and Human Services (to National Library of Medicine, NIH).
Emerging roles of circular RNAs in tuberculosis

Tuberculosis (TB) remains a major global health issue, resulting in around 1.5 million deaths each year. Better diagnostic and therapeutic tools are urgently needed. Circular RNAs (circRNAs) are a new class of noncoding RNAs with a covalently closed structure that exhibit a tissue-, cell-, and developmental stage-specific expression pattern. Recently, circRNAs have come to be viewed as regulatory molecules implicated in the onset and progression of a series of human diseases, including tuberculosis. In tuberculosis, circRNAs have been shown to regulate host anti-TB immune responses, such as decreasing monocyte apoptosis, enhancing autophagy and promoting macrophage polarization. Importantly, circRNAs are physically stable and abundant in several types of body fluids; they are therefore considered promising minimally invasive biomarkers. In this review, we focus on the recent advances in the immune regulatory roles of circRNAs, as well as their potential diagnostic value in TB.

Introduction

Tuberculosis, caused by Mycobacterium tuberculosis (Mtb), is the second leading infectious cause of death globally after COVID-19; it usually attacks the lungs but can affect almost any part of the body. According to the WHO global TB report, approximately 10 million people fell ill with TB and 1.5 million people died from TB globally in 2020 (1). The COVID-19 pandemic has had a large impact on global TB control because of reduced access to care, leading to an approximately 5% increase in TB deaths compared to 2018 (2). Although modern antibiotics and the Bacillus Calmette-Guérin (BCG) vaccine have dramatically helped human beings fight TB, the disease has still not been eradicated. The main reasons are that Mtb can rapidly develop drug resistance under the pressure of antibiotics and that the BCG vaccine does not work well in adults (3, 4). Another reason is that Mtb is armed with a set of intricate immune escape mechanisms which enable the bacterium to avoid host immune killing and to survive in the host for a long time (5). Circular RNAs (circRNAs) are a family of recently rediscovered RNA molecules. Originally thought of as mere byproducts of aberrant splicing (6), circRNAs are now appreciated to have important biological roles, including regulating gene expression, modulating protein function, encoding proteins and so forth (7). While the roles of immune protein factors during Mtb infection have been extensively studied, the functions of circRNAs in TB remain relatively unclear. In this review, we provide an overview of the current understanding of circRNAs, with a particular focus on their functions in immune regulation. We introduce recent investigations that reveal new roles of host circRNAs in anti-TB immunity, and discuss the potential of circRNAs as novel TB diagnostic biomarkers.

The discovery, biogenesis and function of circRNAs

The discovery of circRNAs

The first circRNA was discovered in 1976 in viroids (8). A few years later, circRNAs were observed by electron microscopy in the cytoplasm of eukaryotic cells (9). Nevertheless, at that time they were mainly considered "junk RNAs" generated from abnormal splicing events (6). Not until 1993 was a testis-specific 1.3 kb circRNA from the sex-determining region Y (Sry) gene in mice molecularly identified. This circRNA was considered to have a potential function in mouse testis (10).
Further, next-generation sequencing (NGS) with specific protocols for library preparation ushered in the genome-wide profiling of circRNAs. It is now accepted that circRNAs are the predominant transcript isoforms from thousands of human genes, rather than simply accidental byproducts of splicing (11), and their expression is conserved in eukaryotes (12).

The biogenesis of circRNAs

Based on their composition, circRNAs are currently divided into four categories: circular intronic RNAs (ciRNAs), exon-intron circRNAs (EIciRNAs), exonic circRNAs (ecircRNAs) and tRNA intronic circular RNAs (tricRNAs) (Figure 1). The intronic lariat generated from canonical splicing is usually attacked by the debranching enzyme DBR1 and by exonucleases. Thus the cellular lariat is only an intermediate molecule that is usually rapidly degraded, but some lariats appear to be capable of evading this debranching process to form stable ciRNAs (13). EcircRNAs and EIciRNAs come from the back-splicing process wherein a downstream 5′ splice donor site is ligated to an upstream 3′ splice acceptor site (14), unlike canonical splicing, which joins an upstream 5′ splice donor site with a downstream 3′ splice acceptor site. The circularization of pre-mRNA during back-splicing is mediated by base pairing between reverse complementary sequences of flanking introns or by the dimerization of RNA binding proteins (RBPs) (15, 16). During pre-mRNA transcription, exon-skipping events may occur and form an excised lariat. Subsequently, the lariat undergoes internal back-splicing, leading to the formation of ecircRNAs or, if the flanking intronic sequences are retained in some circumstances, EIciRNAs (17, 18). EcircRNAs account for over 80% of the identified circRNAs and generally localize in the cytoplasm (9, 15, 19). In contrast, ciRNAs and EIciRNAs are usually localized in the nucleus (13, 18), suggesting that they may regulate gene transcription. TricRNAs are formed from tRNA introns through cleavage of the introns by the tRNA splicing endonuclease (TSEN) complex and circularization of the cleaved introns by an as yet unidentified ligase (20).

The biological function of circRNAs

Although circRNAs were discovered decades ago, biological functions have only been studied for a small fraction of the circRNAs identified to date. Most of them are thought to act as miRNA sponges (19, 21-23) (Figure 1). Those circRNAs contain miRNA binding sites, which allow them to sponge miRNAs and restrain the miRNAs' function, thus indirectly regulating the translation of the miRNAs' target mRNAs. CiRS-7, which is highly expressed in the brain, contains more than 70 conserved binding sites for miR-7 (21). It has been reported to regulate the expression of miR-7 target genes and to be involved in neuronal function. In addition, some circRNAs have also been demonstrated to act as RBP sponges or protein scaffolds (24-27). CircMbl contains multiple conserved muscleblind (MBL) binding sites and thus can be specifically bound by MBL (28). The binding of MBL to circMbl decreases the level of available cellular MBL protein. The binding of MBL to the flanking introns of circMbl also facilitates the looping of mbl pre-mRNA and promotes circMbl biogenesis, which competes with the linear splicing of the pre-mRNA, thus decreasing the production of mbl mRNA (28). By binding to both 3-phosphoinositide-dependent protein kinase 1 (PDK1) and its substrate AKT1, circ-Amotl1 can act as a protein scaffold to facilitate the PDK1-dependent phosphorylation of AKT1 (25).
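A minimal sketch of how candidate sponge sites, such as the >70 miR-7 sites on CiRS-7 mentioned above, are screened computationally: scan the circRNA sequence for the DNA reverse complement of the miRNA seed (nucleotides 2-8). The miR-7-5p sequence below is the real mature miRNA to the best of our knowledge; the circRNA sequence is a short invented toy, not the real CiRS-7.

```python
# Counting miRNA seed-match sites in a (toy) circRNA sequence.

MIR7_MATURE = "UGGAAGACUAGUGAUUUUGUUGU"   # mature hsa-miR-7-5p

def seed_match_site(mirna):
    """DNA reverse complement of the 7-mer seed (positions 2-8)."""
    seed = mirna[1:8].replace("U", "T")
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seed))

def count_sites(circ_seq, site):
    """Count (possibly overlapping) occurrences of the site."""
    n, i = 0, circ_seq.find(site)
    while i != -1:
        n, i = n + 1, circ_seq.find(site, i + 1)
    return n

site = seed_match_site(MIR7_MATURE)             # "GTCTTCC" for miR-7-5p
toy_circ = "AAGTCTTCCAGGGTCTTCCTTGTCTTCCAA"     # invented, three sites
print(f"seed-match site: {site}, occurrences: {count_sites(toy_circ, site)}")
```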
Furthermore, circRNAs can be translated into peptides or proteins in a cap-independent manner through internal ribosome entry sites (IRESs) (29-32). For example, circ-FBXW7 contains an ORF that is translated into a novel 185-amino-acid protein, FBXW7-185aa, which has been reported to reduce the half-life of c-Myc and to inhibit the proliferation of cancer cells by interacting with the deubiquitinating enzyme USP28 (33). In addition to IRESs, m6A modification has also been reported to drive the translation of circRNAs (31).

CircRNAs in immune regulation

As newly identified macromolecules, circRNAs have attracted great attention for their roles in immune regulation, and several lines of evidence support their direct involvement. Y. Grace Chen et al. first showed that transfection of exogenous circRNA leads to potent induction of innate immunity genes through the nucleic acid sensor RIG-I (34). Another study found that circRNAs can competitively bind and inhibit protein kinase R (PKR), a double-stranded RNA-activated enzyme in the antiviral signaling pathway, and thereby broadly regulate cellular immune signaling (35). Most cellular circRNAs contain one or more intramolecular imperfect RNA duplexes of 16 to 26 bp, resembling double-stranded RNA structures, through which they interact with PKR and suppress its activity. When cells are infected with viruses or stimulated with poly(I:C), RNase L degrades cellular circRNAs to release PKR, thus activating downstream antiviral immune mechanisms (35).

CircRNAs may also play roles in the activation and function of immune cells. In response to different environmental stimuli, macrophages can be polarized into pro-inflammatory M1-type or anti-inflammatory M2-type macrophages. One study used circRNA microarrays to investigate circRNA expression profiles in M1- and M2-type macrophages and identified 189 differentially expressed circRNAs, indicating that circRNAs may play important roles in establishing or maintaining macrophage polarization (36). Indeed, circRasGEF1B, an LPS-inducible cytoplasmic circRNA, has been reported to regulate the expression of intercellular adhesion molecule 1 (ICAM-1) in macrophages during LPS stimulation (37). ICAM-1 is known to recruit leukocytes to inflamed sites and to mediate cell-to-cell interactions during antigen presentation (38,39).

Figure 1. The biogenesis and functions of circRNAs. Exonic circRNAs (ecircRNAs) are generated by non-canonical back-splicing, which is favored by base pairing between reverse complementary sequences (such as Alu repeats) and by the dimerization of RNA-binding proteins (RBPs). EcircRNAs can also be produced from splicing intermediates, called lariat precursors, that are created by exon-skipping events. Circular intronic RNAs (ciRNAs) are generated from intronic lariats that escape the debranching step of canonical linear splicing. tRNA intronic circular RNAs (tricRNAs) are formed during pre-tRNA splicing. CiRNAs and EIciRNAs are located in the nucleus; ciRNAs can interact with the RNA Pol II complex and play a role in regulating parental gene transcription, whereas EIciRNAs can interact with U1 small nuclear ribonucleoproteins and then increase transcription of their host genes by binding RNA Pol II. EcircRNAs are translocated to the cytoplasm after formation, where they can act as microRNA sponges, RBP sponges, protein scaffolds, or templates for protein translation.
In addition to macrophages, circRNA expression profiles have been examined in neutrophils from healthy subjects and patients with asymptomatic Moyamoya disease (40); in that study, 123 circRNAs were differentially expressed between the two groups (40). Another comprehensive circRNA profiling study found that circRNA100783 is involved in chronic CD28-associated CD8+ T cell aging (41). It was also found that down-regulation of hsa_circ_0012919 increased the expression of DNMT1 and decreased the expression of CD70 and CD11a in CD4+ T cells of patients with systemic lupus erythematosus (42).

The role of circRNAs in tuberculosis

Although the precise roles of host circRNAs in TB remain to be fully defined, current evidence suggests that circRNAs act as important regulators of host anti-TB immune processes. Autophagy is an important cellular defense mechanism against intracellular pathogens, including Mtb (43), and several circRNAs have been reported to modulate autophagy in host cells. CircTRAPPC6B, one of the circRNAs downregulated in peripheral blood mononuclear cells (PBMCs) from active TB patients, induces autophagy in Mtb-infected macrophages by relieving the repression of miR-874-3p on ATG16L1 expression (44). CircAGFG1, a circRNA upregulated in macrophages from TB patients, has been reported to enhance autophagy while reducing apoptosis of Mtb-infected macrophages via the miRNA-1257/Notch axis (45). In contrast, hsa_circ_0045474, which is downregulated in monocytes from patients with TB, negatively regulates autophagy in macrophages, likely by repressing the expression of miR-582-5p (46). Hsa_circRNA_103571 is also down-regulated in active TB patients; bioinformatic analysis of its potential target miRNAs and the corresponding target genes of those miRNAs revealed a strong relationship with the biological process of autophagy (47). CircRNA_101128 is highly expressed in PBMCs from active TB patients and is negatively correlated with the level of its potential target miRNA let-7a, a known autophagy regulator, suggesting that it may also play a role in regulating autophagy levels (48). Furthermore, some circRNAs, such as circ-Dnmt1, circCDYL, circMUC16, circPAN3, circ-0009910, circ0085131, circRACGAP1, circMOT1, circ-0023404, circEIF6, and circ-0035483, are known to activate autophagy in cancer cells (49), but their roles in autophagy during Mtb infection remain unknown.

It is known that macrophage polarization drives granuloma outcome during Mtb infection and that Mtb has the potential to modulate macrophage polarization (50). An upregulated circRNA in TB patients, hsa_circ_0003528, was found to promote M1-to-M2 macrophage polarization by sponging miR-324-5p, miR-224-5p, and miR-488-5p and thereby upregulating CTLA4 (51). Additionally, hsa-circRNA-100237 has been implicated in TB pathogenesis through the regulation of macrophage activities (47); the proposed mechanism is that hsa-circRNA-100237, which is down-regulated in active TB patients, could act as a miR-33 sponge, thereby promoting lipid storage by reducing mitochondrial fatty acid oxidation (47,52). Circ_0001490 expression is down-regulated in the serum of TB patients and in Mtb-infected THP-1 macrophages (53). Recently, circ_0001490 was reported to repress Mtb survival and to promote the viability and inflammatory responses of THP-1 macrophages by regulating the miR-579-3p/FSTL1 axis (53).
CircPWWP2A, which is downregulated in Mtb-infected macrophages, has been reported to protect human macrophages from Mtb-induced cytotoxicity by sponging miR-567 and thereby relieving the suppression exerted by miR-567 on two pro-survival proteins, SIRT1 and PDK1 (54). Furthermore, circRNA_051239 is significantly upregulated in drug-resistant TB patients (55). CircRNA_051239 contains three binding sites for miR-320a, which is significantly down-regulated in drug-resistant patients (56); circRNA_051239 may therefore modulate the development of drug resistance by targeting miR-320a.

The roles of circRNAs in nontuberculous mycobacterial infection have also begun to be investigated very recently. One study used RNA sequencing to compare circRNA expression profiles between Mycobacterium avium subsp. paratuberculosis (MAP)-infected bovine monocyte-macrophages and uninfected cells (57). The authors identified 39 differentially expressed circRNAs between MAP-infected and uninfected macrophages, including 12 upregulated and 27 downregulated circRNAs in MAP-infected cells. Bioinformatic analysis suggested that these circRNAs might participate in Th1/Th2/Th17 cell differentiation, necroptosis, and JAK-STAT/chemokine signaling pathways. Another study screened circRNA expression in osteocyte-like cells treated with N-glycosylated muramyl dipeptide (N.g MDP) from Mycobacterium leprae (M. leprae) to explore the mechanisms underlying bone remodeling during M. leprae infection (58). In this study, 724 differentially expressed circRNAs and 724 differentially expressed messenger RNAs were identified, and 58 circRNA-miRNA-mRNA interaction pairs were obtained. Subsequent analysis showed that these 58 genes were uniquely associated with "Circadian Rhythm", including Clock, which is known to regulate bone formation, suggesting that circRNAs may contribute to bone remodeling during M. leprae infection by regulating Clock gene expression.

Anti-tuberculosis drug-induced liver injury (ADLI) often leads to treatment interruptions. To explore ADLI-specific circRNAs, Biao Li et al. assessed circRNA expression profiles in serum from TB patients with or without ADLI and in hepatocytes treated or untreated with anti-tuberculosis drugs (59). In this screen, 113 co-differentially expressed circRNAs were identified. One of the upregulated circRNAs, circMARS, was found to participate in the compensatory repair mechanism of ADLI through the circMARS-miR-6808-5p/-6874-3p/-3157-5p-KMT2C-EGFR functional axis. Another study published this year found that hsa_circ_0093884, a circRNA down-regulated in ADLI patients, can upregulate the expression of the anti-inflammatory protein SIRT1 by binding ribosomal protein S3 and thereby regulate hepatocyte inflammation in ADLI (60).

CircRNAs as diagnostic biomarkers for tuberculosis

Some circRNAs are expressed in a disease-specific manner. Combined with other features of circRNAs, including their conservation, stability, and high abundance in body fluids, this makes circRNAs promising biomarkers for various diseases, including TB (61) (Table 1). Using whole-transcriptome sequencing, Zhang et al. identified 170 dysregulated circRNAs in whole blood samples from pulmonary TB patients compared with samples from healthy individuals (69). Their findings suggested that circRNA-linked competing endogenous RNA (ceRNA)-mediated gene regulation is critical for pulmonary TB pathogenesis.
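The studies summarized below all quantify diagnostic performance with receiver operating characteristic (ROC) analysis and report the area under the curve (AUC). As a rough illustration of that workflow, the sketch below computes an AUC and a Youden-optimal cut-off for a single candidate circRNA; the expression values, labels, and cut-off rule are hypothetical and are not taken from any of the cited studies.

```python
# Minimal sketch: evaluating one candidate circRNA biomarker by ROC analysis.
# Expression values and sample labels are hypothetical, for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# qRT-PCR expression of one circRNA (relative to a reference gene) in
# TB patients (label 1) and healthy controls (label 0).
expression = np.array([2.1, 1.8, 2.6, 3.0, 1.9, 0.9, 1.1, 0.7, 1.3, 0.8])
labels     = np.array([1,   1,   1,   1,   1,   0,   0,   0,   0,   0])

auc = roc_auc_score(labels, expression)           # area under the ROC curve
fpr, tpr, thresholds = roc_curve(labels, expression)

# Youden's J selects the cut-off maximizing sensitivity + specificity - 1.
best_cutoff = thresholds[np.argmax(tpr - fpr)]
print(f"AUC = {auc:.3f}, optimal cut-off = {best_cutoff:.2f}")
```

For a circRNA that is down-regulated in patients, the same computation applies after inverting the score (lower expression counts as evidence for disease).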
Using microarray and quantitative real-time PCR analyses, Huang et al. revealed that a number of circRNAs are differentially expressed in PBMCs from active TB patients. Among them, hsa_circ_001937 exhibited substantial diagnostic value for TB, with an area under the curve (AUC) of 0.873 in receiver operating characteristic (ROC) curve analysis (63). It was specifically upregulated in patients with TB compared with patients with other lung diseases and was also correlated with TB severity (63). Yi et al. found that the level of hsa_circRNA_103571 was significantly decreased in plasma samples from active TB patients and could serve as a potential biomarker for active TB diagnosis, with an AUC of 0.838 (47). Zhuang et al. reported that hsa_circ_0009128 and hsa_circ_0005836 were significantly down-regulated in PBMCs of active pulmonary TB (APTB) patients compared with healthy controls, and that hsa_circ_0005836 might act as a novel potential biomarker for APTB (64). Recently, the same group reported that hsa_circ_0001380 is also significantly downregulated in PBMCs from TB patients compared with healthy individuals; ROC curve analysis yielded an AUC of 0.9502 for distinguishing APTB using hsa_circ_0001380, indicating the high diagnostic value of this circRNA in APTB (65). Another study identified three differentially expressed circRNAs by analyzing public GEO datasets and found that hsa_circ_0028883 showed potential diagnostic value in active TB, with an AUC of 0.773 (66). Additionally, Fu et al. characterized the expression profiles of circRNAs in PBMCs of active TB patients and discovered 171 circRNAs that were dysregulated in TB samples; among them, circRNA_059914, circRNA_101128, and circRNA_103017 were significantly upregulated and showed potential diagnostic power, with AUC values of 0.87, 0.821, and 0.817, respectively (48).

In addition to individual circRNAs, combinations of circRNAs have shown better predictive power for TB diagnosis. One study reported that the expression of two circRNAs (hsa_circ_0001204 and hsa_circ_0001747) was significantly decreased in plasma samples from active TB patients compared with healthy controls; ROC curve analysis yielded an AUC of 0.928 for distinguishing TB patients when hsa_circ_0001747 and hsa_circ_0001204 were used in combination, indicating that this circRNA combination could act as a novel biomarker for active TB diagnosis (67). Qian et al. also identified differentially expressed circRNAs in PBMCs from pulmonary TB patients. Among them, seven circRNAs, including hsa_circ_0000414, hsa_circ_0002908, hsa_circ_0000681, hsa_circ_0002362, hsa_circ_0002113, hsa_circ_0008797, and hsa_circ_0063179, were chosen on the basis of pathway analysis to develop a circRNA-based molecular signature. In the validation groups, this 7-circRNA-based TB index was significantly higher in TB patients than in healthy controls, with an AUC of 0.946 (68). Furthermore, Liu et al. identified three circRNAs (circRNA_029965, circRNA_051239, and circRNA_404022) that were significantly increased in the serum of active TB patients. The AUC for the combination of these three circRNAs in ROC curve analysis was as high as 0.992, suggesting that they could serve as ideal potential biomarkers for TB diagnosis (55).

Discussion

Current advanced RNA-sequencing technologies and data analysis algorithms have greatly accelerated the discovery of host circRNAs, and at least some of these circRNAs are now considered to have important functions in the process of Mtb infection.
However, current studies on the roles of circRNAs in TB have focused mainly on autophagy and macrophage polarization, and the exact mechanisms of circRNA action remain largely unknown. The roles of circRNAs in other immune processes during Mtb infection, and their underlying mechanisms, need to be studied comprehensively. New technologies, analyses, and strategies are also still needed to identify the key functional circRNAs in TB.

Owing to their high abundance and stability in body fluids, circRNAs are regarded as promising diagnostic biomarkers for human diseases, including TB. Indeed, as listed in Table 1, dozens of circRNAs or circRNA combinations have been reported as biomarkers for TB diagnosis with varied predictive power. However, several problems and challenges still need to be addressed. First, the sample sizes of currently published studies are relatively small; multi-center, large-cohort studies are needed to comprehensively evaluate the diagnostic value of these circRNAs, and the sensitivity and reliability of circRNAs as diagnostic biomarkers require further validation. Second, because the absolute expression level of circRNAs in samples may vary from person to person, it may be difficult to establish a normal baseline for distinguishing patients from healthy people; a standardized protocol for circRNA detection in clinical samples is therefore required. Finally, detection of circRNAs in clinical samples is currently more expensive and time-consuming than protein detection, which may limit their application as biomarkers; this is particularly important to consider given that over 95% of TB patients live in developing countries.

In addition to serving as diagnostic biomarkers, circRNAs could act as potential therapeutic targets or therapeutic strategies for tuberculosis. Disease-promoting circRNAs could be knocked down by RNAi or CRISPR/Cas9-based gene editing, while therapeutic circRNAs could be designed and synthesized artificially according to clinical need. Liu et al. artificially synthesized a circRNA for the first time and showed that it could inhibit the proliferation of gastric cancer cells in vitro by sponging miR-21 (70). Furthermore, a recent elegant study found that a circRNA vaccine against SARS-CoV-2 enabled higher and more durable antigen production than a modified mRNA vaccine and induced higher neutralizing antibody titers (71). This work suggests a novel avenue for the potential application of circRNAs in developing a TB vaccine in the future.

Author contributions

QW, DY, YZ, and DW drafted the manuscript. QW and WL supervised and edited the manuscript. All authors contributed to the article and approved the submitted version.

Funding

This work was supported by a startup fund from West China Hospital to Qinglan Wang (137210102).
Hippocampal and cortical mechanisms at retrieval explain variability in episodic remembering in older adults Age-related episodic memory decline is characterized by striking heterogeneity across individuals. Hippocampal pattern completion is a fundamental process supporting episodic memory. Yet, the degree to which this mechanism is impaired with age, and contributes to variability in episodic memory, remains unclear. We combine univariate and multivariate analyses of fMRI data from a large cohort of cognitively normal older adults (N=100) to measure hippocampal activity and cortical reinstatement during retrieval of trial-unique associations. Trial-wise analyses revealed that (a) hippocampal activity scaled with reinstatement strength, (b) cortical reinstatement partially mediated the relationship between hippocampal activity and associative retrieval, (c) older age weakened cortical reinstatement and its relationship to memory behaviour. Moreover, individual differences in the strength of hippocampal activity and cortical reinstatement explained unique variance in performance across multiple assays of episodic memory. These results indicate that fMRI indices of hippocampal pattern completion explain within- and across-individual memory variability in older adults. Introduction Episodic memory -in particular the ability to form and retrieve associations between multiple event elements that comprise past experiences -declines with age (Spencer and Raz, 1995;Rö nnlund et al., 2005;Old and Naveh-Benjamin, 2008). Retrieval of an episodic memory relies critically on hippocampal-dependent pattern completion, which entails reactivation of a stored memory trace by the hippocampus in response to a partial cue, leading to replay of cortical activity patterns that were present at the time of memory encoding (Marr, 1971;McClelland et al., 1995;Tanaka et al., 2014;Staresina et al., 2019). Given observed links between in vivo measures of pattern completion and episodic remembering (Nakazawa et al., 2002;Gelbard-Sagiv et al., 2008;Gordon et al., 2014), and evidence of altered hippocampal function with age (Lister and Barnes, 2009;Leal and Yassa, 2013), changes in hippocampal pattern completion may play an important role in explaining age-related impairments in episodic memory. While a leading hypothesis, the degree to which the integrity of pattern completion can explain (a) trial-to-trial differences in episodic remembering within older adults and (b) differences in memory performance between older individuals remain underspecified. Functional MRI (fMRI) studies in younger adults suggest that hippocampal pattern completion is associated with at least two key neural markers: (a) an increase in hippocampal univariate activity (Eldridge et al., 2000;Dobbins et al., 2003;Yonelinas et al., 2005) and (b) cortical reinstatement of content-specific activity patterns present during encoding (Nyberg et al., 2000;Wheeler et al., 2000;Kahn et al., 2004). Multivariate pattern analyses -machine learning classification (Norman et al., 2006) and pattern similarity (Kriegeskorte et al., 2008) -reveal evidence for cortical reinstatement of categorical event features (Polyn et al., 2005;Johnson and Rugg, 2007;Gordon et al., 2014) and event-specific details (Staresina et al., 2012;Ritchey et al., 2013;Kuhl and Chun, 2014) during successful recollection. 
Moreover, hippocampal and cortical metrics of pattern completion covary, such that trial-wise fluctuations in hippocampal univariate retrieval activity are related to the strength of cortical reinstatement (Staresina et al., 2012;Ritchey et al., 2013;Gordon et al., 2014), and both hippocampal activity and reinstatement strength are related to associative retrieval performance Gagnon et al., 2019). These findings support models (Marr, 1971;McClelland et al., 1995;Tanaka et al., 2014) positing that cortical reinstatement depends, in part, on hippocampal processes, and contributes to remembering. Initial data bearing on age-related changes in hippocampal pattern completion are mixed. Studies comparing hippocampal activity during episodic retrieval in older and younger adults have revealed age-related reductions in activity (Cabeza et al., 2004;Dennis et al., 2008) and age-invariant effects Trelle et al., 2019). Similarly, while some have identified reduced category-level (McDonough et al., 2014;Abdulrahman et al., 2017) and event-level (St-Laurent et al., 2014;Folville et al., 2020) cortical reinstatement in older relative to younger adults, others observed age-invariant category-level reinstatement or that age-related differences in reinstatement strength are eliminated after accounting for the strength of category representations during encoding (Johnson et al., 2015). Although extant studies have yielded important initial insights, the absence of trial-wise analyses relating hippocampal activity to cortical reinstatement, or relating each of these neural measures to memory behaviour, prevents clear conclusions regarding the degree to which hippocampal pattern completion processes are impacted with age. Aging may affect one or both of these neural measures, and/or may disrupt the predicted relationships between these neural variables and behaviour (e.g., Gordon et al., 2014). The first aim of the present study is to quantify trial-wise fluctuations in hippocampal activity and cortical reinstatement in older adults, and examine how these measures relate to one another, as well as how these measures relate to episodic remembering of trial-unique associative content. Critically, in addition to varying within individuals, the degree to which pattern completion processes are disrupted among older adults may vary across individuals. Indeed, age-related memory decline is characterized by striking heterogeneity, with some individuals performing as well as younger adults and others demonstrating marked impairment Henson et al., 2016;see Nyberg et al., 2012 for review). Identifying the neural factors driving this variability is a clear emerging aim of cognitive aging research Cabeza et al., 2018). However, due to modest sample sizes, extant studies typically lack sufficient power to examine individual differences in retrieval mechanisms among older adults (Dennis et al., 2008;McDonough et al., 2014;St-Laurent et al., 2014;Johnson et al., 2015;Wang et al., 2016;Abdulrahman et al., 2017;Trelle et al., 2019;Folville et al., 2020). Moreover, while recent work examining variability in hippocampal function has demonstrated relationships between hippocampal retrieval activity and associative memory performance in older adults Carr et al., 2017), the direction of this relationship differed across studies; to date, the relationship between individual differences in cortical reinstatement and memory performance remains unexplored. 
As such, the second aim of the present study is to examine whether hippocampal and cortical indices of pattern completion vary with age, and to assess the degree to which these measures explain individual differences in episodic memory performance -both as a function of age and independent of age. To address these two aims, a large sample (N = 100) of cognitively normal older participants (60-82 years) from the Stanford Aging and Memory Study (SAMS ; Table 1; Materials and methods) performed an associative memory task ( Figure 1) concurrent with high-resolution fMRI. Participants intentionally studied trial-unique word-picture pairs (concrete nouns paired with famous faces and famous places), and then had their memory for the word-picture associations probed. During retrieval scans, participants viewed a studied or novel word on each trial and indicated whether they (a) recollected the associate paired with the word, responding 'face' or 'place' accordingly (providing an index of associative memory), (b) recognized the word as 'old' but were unable to recall the associate (providing an index of item memory -putatively reflecting familiarity, non-criterial recollection, or a mix of the two), or (c) thought the word was 'new'. Following scanning, participants were shown the studied words again and asked to recall the specific associate paired with each word, this time explicitly providing details of the specific image (providing an index of exemplar-specific recall). To measure pattern completion during retrieval, we used univariate and multivariate analyses focused on a priori regions of interest (ROIs; Figure 2). To measure hippocampal function, our primary analyses examined univariate activity in the whole hippocampus bilaterally. In addition, we measured activity in three subfields within the body of the hippocampus -dentate gyrus/CA3 (DG/ CA3), CA1, and subiculum (SUB) -given prior work suggesting that aging may differentially affect individual hippocampal subfields (Yassa et al., 2011;Carr et al., 2017;Reagh et al., 2018) and models predicting differential subfield involvement in pattern completion, including a key role for subfield CA3 (Nakazawa et al., 2002;Grande et al., 2019). To measure cortical reinstatement, we focused on two cortical regions -ventral temporal cortex (VTC) and angular gyrus (ANG) -which we predicted would support content-rich representations during memory retrieval based on prior evidence in healthy younger adults. In particular, while VTC has traditionally been associated with content coding during memory encoding and retrieval (Nyberg et al., 2000;Wheeler et al., 2000;Polyn et al., 2005;Johnson and Rugg, 2007;Staresina et al., 2012;Ritchey et al., 2013;Kuhl and Chun, 2014;Gordon et al., 2014;Gagnon et al., 2019), more recent studies have also demonstrated evidence for cortical reinstatement of both category and stimulus/event-specific features in ANG during episodic retrieval, and suggest that these representations may be differentially related to memory-guided behaviour (Kuhl et al., 2013;Kuhl and Chun, 2014;Favila et al., 2018;Lee et al., 2019). Category-level reinstatement (i.e., face/place) was quantified via pattern classification and event-specific reinstatement (e.g., Queen Elizabeth, Golden Gate Bridge) was quantified using encoding-retrieval pattern similarity. The online version of this article includes the following source data for Table 1: Source data 1. Demographic information and behavioural data presented in Table 1. 
Behavioural results

We assessed performance on the associative cued recall task using three measures: 1) old/new d': discrimination between studied and novel words during the in-scan memory test, irrespective of memory for the associate; 2) associative d': correctly remembering the category of associated images encoded with studied words, relative to falsely indicating an associative category for novel words; and 3) post-scan exemplar-specific associative recall: the proportion of specific exemplars associated with studied words that were correctly recalled. Performance on all three measures declined with age (old/new d': b = -0.35, p < 0.001; associative d': b = -0.30, p < 0.005, Figure 3a; post-scan exemplar-specific recall: b = -0.34, p < 0.001, Figure 3b), but did not vary by sex (bs = -0.10, -0.33, -0.23; ps >= 0.10) or years of education (bs = -0.03, -0.02, -0.07; ps > 0.47). Associative d' was higher for word-face pairs than word-place pairs (t(99) = 5.37, p < 10^-7). Critically, despite this decline in performance with age, we also observed considerable variability in performance across individuals on each measure (Figure 3 and Table 1).

Individual-differences and trial-wise analyses revealed that post-scan associative recall tracked in-scanner associative memory. First, individuals who demonstrated higher associative memory during scanning showed superior recall of the specific exemplars on the post-scan test (controlling for age; b = 0.62, p < 10^-12; Figure 3c). Second, trial-wise analysis revealed that making an in-scan associative hit was a significant predictor of successful post-scan exemplar recall (chi2(1) = 159.68, p < 10^-36). These findings suggest that post-scan exemplar-specific retrieval, while quantitatively lower due to the longer retention interval, change of context, and interference effects, is a good approximation of recall of the specific exemplar during scanning (relative to simply recalling more general category information).

fMRI encoding classifier accuracy

Following prior work (e.g., Kuhl et al., 2013; Kuhl and Chun, 2014; Favila et al., 2018; Lee et al., 2019), cortical reinstatement analyses focused on two a priori ROIs: VTC and ANG. To confirm that activity patterns during word-face and word-place encoding trials were discriminable for each participant in each ROI, we trained and tested a classifier on the encoding data using leave-one-run-out n-fold cross-validation. On average, encoding classifier accuracy was well above chance (50%) using patterns in VTC (M = 98.4%, p < 0.001) and ANG (90.0%, p < 0.001), with classifier accuracy significantly greater in VTC than ANG (t(99) = 12.86, p < 10^-16). Classification was above chance in all 100 participants (minimum accuracy of 82.5% (p < 0.001) in VTC and 68.0% (p < 0.005) in ANG) and did not vary significantly as a function of age (VTC: b = -0.13, p = 0.133; ANG: b = -0.06, p = 0.544).
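As a concrete illustration of the leave-one-run-out scheme described above, the following sketch trains a simple linear classifier on simulated encoding patterns and scores it with run-wise cross-validation. The data, classifier choice, and regularization settings are placeholders; the published pipeline may differ in its preprocessing and classifier details.

```python
# Minimal sketch of leave-one-run-out classification of encoding trials
# (face vs. place). Data are simulated; in the study, X would be
# trial-wise voxel patterns from an ROI such as VTC or ANG.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_runs = 120, 500, 6
X = rng.normal(size=(n_trials, n_voxels))                 # voxel patterns (hypothetical)
y = rng.integers(0, 2, size=n_trials)                     # 0 = face, 1 = place
runs = np.repeat(np.arange(n_runs), n_trials // n_runs)   # run label for each trial

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
acc = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"leave-one-run-out accuracy: {acc.mean():.3f}")    # ~0.5 for random data
```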
Because encoding classifier strength (quantified using the log odds of the classifier's probability estimate) was related to estimates of category-level reinstatement strength during memory retrieval (trial-wise: VTC: chi2(1) = 13.96, p < 0.001; ANG: chi2(1) = 30.16, p < 10^-8; individual differences: VTC: b = 0.45, p < 10^-5; ANG: b = 0.62, p < 10^-11; see Figure 5-figure supplement 3), we controlled for encoding classifier strength in all subsequent models in which category-level reinstatement strength was related to behavioural variables (memory accuracy, RT), as well as in models in which reinstatement strength was the dependent variable (see Materials and methods - Statistical Analysis and Supplementary file 1 for details).

Memory behaviour scales with trial-wise category-level reinstatement

We quantified reinstatement of relevant face or scene features (i.e., category-level reinstatement) in VTC and ANG using subject-specific classifiers trained on all encoding-phase runs for an individual (the training set was balanced for category) and tested for cortical reinstatement in the independent retrieval-phase data; significance was assessed using permutation testing (see Materials and methods - MVPA for further details). Classifier accuracy (Figure 4a) was above chance (50%) during associative hits in VTC (M = 68.3%, p < 0.005) and ANG (M = 72.3%, p < 0.001), but did not exceed chance when associative retrieval failed, including on associative miss trials (VTC: 49.8%, p = 0.57; ANG: 50.4%, p = 0.49), item hit trials (VTC: 53.5%, p = 0.29; ANG: 53.3%, p = 0.31), and item miss trials (VTC: 47.1%, p = 0.68; ANG: 51.6%, p = 0.41; see Materials and methods for trial type definitions).

Figure 4. Cortical and hippocampal metrics of pattern completion during retrieval. (a) Classifier accuracy is above chance in VTC and ANG during successful, but not unsuccessful, associative retrieval. (b) Trial-wise category-level reinstatement strength (logits) in VTC and ANG is related to an increased probability of an associative hit and (c) faster decision RT on associative hit trials. (d) Event-level reinstatement (within-event ERS > within-category ERS) is observed during associative hits in VTC and ANG. (e) Trial-wise event-level reinstatement (within-event ERS) significantly varies with the probability of an associative hit and (f) exemplar-specific hit. (g) Hippocampal activity shows a graded response across retrieval conditions. (h) Trial-wise hippocampal activity is related to an increased probability of an associative hit and (i) greater category-level reinstatement strength (logits) in VTC and ANG. For visualization, data for each participant are binned into quintiles based on category-level reinstatement strength (b, c), event-level reinstatement strength (e, f), and hippocampal activity (h, i). Statistics were conducted on trial-wise data, z-scored within participant. Error bars represent standard error of the mean. VTC = ventral temporal cortex; ANG = angular gyrus; RT = reaction time; ERS = encoding-retrieval similarity.
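Reinstatement strength on each retrieval trial was quantified above as the log odds of the classifier's probability estimate for the studied category. A minimal sketch of that computation, using simulated encoding and retrieval patterns in place of the real ROI data, might look as follows.

```python
# Minimal sketch: train on encoding patterns, then convert retrieval-phase
# classifier probabilities into trial-wise reinstatement strength (log odds
# of the studied category). All arrays are simulated stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_enc, y_enc = rng.normal(size=(120, 500)), rng.integers(0, 2, 120)  # encoding patterns/labels
X_ret, y_cat = rng.normal(size=(80, 500)),  rng.integers(0, 2, 80)   # retrieval patterns, studied category

clf = LogisticRegression(max_iter=1000).fit(X_enc, y_enc)

proba = clf.predict_proba(X_ret)                       # columns follow clf.classes_
idx = np.searchsorted(clf.classes_, y_cat)             # column of the studied category
p_true = np.clip(proba[np.arange(len(y_cat)), idx], 1e-6, 1 - 1e-6)
logits = np.log(p_true / (1 - p_true))                 # trial-wise reinstatement strength
logits_z = (logits - logits.mean()) / logits.std()     # z-scored within participant
```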
Classifier accuracy during associative hits was greater in ANG than in VTC (t(99) = 3.96, p < 0.001). In VTC, classifier accuracy during associative hits was stronger on place trials (M = 71.5%) than on face trials (M = 65.1%; t(99) = 5.25, p < 10^-7), whereas in ANG the strength of reinstatement did not significantly vary by stimulus category (place: M = 73.3%; face: M = 71.3%; t(99) = 1.69, p = 0.094). To control for possible effects of stimulus category on the results, category was included as a regressor in all linear and logistic mixed-effects models, and interactions between category and the primary variables of interest were examined and reported in Supplementary file 1. Analyses of the time course of cortical reinstatement during associative hits revealed significant category-level reinstatement effects emerging ~4-6 s post-stimulus onset (Figure 4-figure supplement 1). Analogous category-level reinstatement effects were observed using a pattern similarity approach (i.e., encoding-retrieval similarity (ERS); see Figure 4-figure supplement 2).

Evidence for reinstatement during successful, but not unsuccessful, associative retrieval is consistent with theories positing that reinstatement of event features (here, face or scene features) supports accurate memory-based decisions (here, associate category judgments). More directly supporting this hypothesis, generalized logistic and linear mixed-effects models (see Supplementary file 1 for the full list of model parameters) revealed that greater trial-wise category-level cortical reinstatement in VTC and ANG, quantified using the log odds of the classifier's probability estimate, was related to (a) an increased probability of an associative hit (VTC: chi2(1) = 102.18, p < 10^-24; ANG: chi2(1) = 133.25, p < 10^-31; Figure 4b), (b) an increased probability of post-scan exemplar-specific recall (VTC: chi2(1) = 62.85, p < 10^-15; ANG: chi2(1) = 89.02, p < 10^-21), and (c) faster decision RT on associative hit trials (VTC: chi2(1) = 30.08, p < 10^-8; ANG: chi2(1) = 21.73, p < 10^-6; Figure 4c). We also found that age moderated the relationship between category-level reinstatement strength in VTC and behaviour, such that older individuals exhibited a weaker relationship between reinstatement strength in VTC and (a) associative retrieval success (chi2(1) = 7.12, p < 0.01) and (b) retrieval decision RT on associative hit trials (chi2(1) = 3.91, p < 0.05). This interaction was marginally significant in ANG with respect to associative retrieval (chi2(1) = 3.57, p = 0.059), but not decision RT (chi2(1) = 0.16, p = 0.685). Together, these data provide novel evidence that the strength of category-level reinstatement in VTC and ANG is linked to memory behaviour in cognitively normal older adults (see Figure 4-figure supplement 2 for analogous ERS findings), and also suggest that older age negatively impacts the translation of cortical evidence into memory behaviour.

Memory behaviour scales with trial-wise event-level reinstatement

We next used encoding-retrieval similarity (ERS) to quantify trial-unique, event-specific reinstatement of encoding patterns, comparing the similarity of an event's encoding and retrieval patterns (within-event ERS) with the similarity of encoding patterns from other events of the same category (within-category ERS). Evidence for event-level reinstatement was present in both VTC (t(99) = 2.26, p < 0.05) and ANG (t(99) = 3.54, p < 0.001) during associative hits (Figure 4d).
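A minimal sketch of the within-event versus within-category ERS contrast is shown below, using simulated patterns. The correlation-based similarity metric and the simple event-wise loop are assumptions for illustration; the published analysis may differ in how patterns are estimated and which trials enter each baseline.

```python
# Minimal sketch of encoding-retrieval similarity (ERS). For each retrieval
# trial, within-event ERS is the correlation between that trial's retrieval
# pattern and the encoding pattern of the same word-picture pair; within-
# category ERS is the mean correlation with encoding patterns of *other*
# pairs from the same category. Patterns here are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n_events, n_voxels = 60, 500
enc = rng.normal(size=(n_events, n_voxels))            # encoding pattern per pair
ret = enc + rng.normal(scale=2.0, size=enc.shape)      # retrieval patterns (noisy reinstatement)
category = rng.integers(0, 2, n_events)                # 0 = face, 1 = place

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

within_event, within_category = [], []
for i in range(n_events):
    within_event.append(corr(ret[i], enc[i]))
    same_cat = [j for j in range(n_events) if j != i and category[j] == category[i]]
    within_category.append(np.mean([corr(ret[i], enc[j]) for j in same_cat]))

# Event-level reinstatement index: within-event ERS minus within-category ERS
ers_event = np.array(within_event) - np.array(within_category)
print(f"mean event-level reinstatement: {ers_event.mean():.3f}")
```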
Moreover, the strength of trial-wise event-level reinstatement, controlling for within-category ERS (see Supplementary file 1 for the full list of model parameters), was related to (a) an increased probability of an associative hit (VTC: chi2(1) = 1.78, p = 0.183; ANG: chi2(1) = 7.50, p = 0.006; Figure 4e) and (b) an increased probability of post-scan exemplar-specific recall (VTC: chi2(1) = 5.35, p < 0.05; ANG: chi2(1) = 7.27, p = 0.006; Figure 4f), but not to decision RT on associative hit trials (VTC: p = 0.845; ANG: p = 0.231). These relationships were not significantly moderated by age (all p > 0.254). These results demonstrate a relationship between trial-unique, event-specific cortical reinstatement and associative retrieval in older adults.

Cortical reinstatement partially mediates the effect of hippocampal activity on retrieval

Having established a relationship between associative retrieval success and (a) hippocampal activity, (b) cortical reinstatement strength in VTC, and (c) cortical reinstatement strength in ANG, we next sought to determine whether each of these putative indices of pattern completion explains common or unique variance in associative retrieval success. Using nested comparisons of logistic mixed-effects models, we found that, compared with a model containing image category and hippocampal activity, the addition of VTC category-level reinstatement strength significantly improved model fit (chi2(1) = 103.48, p < 10^-24). Addition of ANG category-level reinstatement to this model further improved model fit (chi2(1) = 115.42, p < 10^-27), and all three variables remained significant predictors in the full model (hippocampus: b = 0.31, z = 8.36, p < 10^-16; VTC: b = 0.32, z = 9.36, p < 10^-16; ANG: b = 0.52, z = 14.36, p < 10^-16). These results indicate that reinstatement strength and hippocampal activity, though related indices of pattern completion, nevertheless explain unique variance in the probability of a successful associative retrieval decision. Moreover, they indicate that measures of category-level reinstatement strength in different cortical regions are not redundant, and perhaps carry complementary information relevant for memory behaviour.

Given our prediction that the present measures of cortical reinstatement are, at least in part, a read-out of hippocampal pattern completion processes, we next sought to test more directly the hypothesis that cortical reinstatement mediates the relationship between hippocampal activity and associative retrieval success. We conducted a mediation analysis separately for each cortical ROI, in which the coefficient of the indirect path was computed as the product of the direct effects, a x b, and the significance of the indirect effect was calculated using bootstrap resampling (see Materials and methods - Statistics for details). Consistent with predictions, the results revealed that the relationship between hippocampal activity and the probability of an associative hit was partially mediated by category-level cortical reinstatement in VTC (indirect effect: b = 0.026, 95% CI = 0.016, 0.036) and ANG (indirect effect: b = 0.019, 95% CI = 0.006, 0.032). These findings demonstrate that the effect of retrieval-phase hippocampal activity on associative retrieval success can be explained in part through its effects on cortical reinstatement.
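To make the a x b logic concrete, the sketch below estimates the indirect effect and a bootstrap confidence interval on simulated single-level data (hippocampal activity X, reinstatement strength M, binary associative hit Y). This simplified version ignores the multilevel, within-participant structure of the real data and is intended only to illustrate the bootstrap procedure described above.

```python
# Minimal sketch of an a*b mediation estimate with a percentile bootstrap CI:
# X (hippocampal activity) -> M (reinstatement logits) -> Y (associative hit).
# All data are simulated; the real analysis uses trial-wise, multilevel models.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
X = rng.normal(size=n)                                  # hippocampal activity (z-scored)
M = 0.4 * X + rng.normal(size=n)                        # cortical reinstatement strength
p_hit = 1 / (1 + np.exp(-(0.3 * M + 0.2 * X)))          # true model for the binary outcome
Y = (rng.random(n) < p_hit).astype(int)                 # associative hit (1) or miss (0)

def indirect(idx):
    a = sm.OLS(M[idx], sm.add_constant(X[idx])).fit().params[1]                      # X -> M
    exog = sm.add_constant(np.column_stack([M[idx], X[idx]]))
    b = sm.Logit(Y[idx], exog).fit(disp=0).params[1]                                 # M -> Y, controlling X
    return a * b

point = indirect(np.arange(n))
boot = [indirect(rng.integers(0, n, n)) for _ in range(1000)]   # more resamples in practice
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```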
Figure 5. (a-c) Effects of age on hippocampal activity (associative hit minus correct rejection) and category-level reinstatement strength (mean logits) in VTC and ANG during associative hits. (d-f) Independent of age, individual differences in hippocampal activity and category-level reinstatement strength in VTC and ANG during associative hits explain significant variance in exemplar-specific recall. (g-i) Independent of age, individual differences in hippocampal activity and VTC category-level reinstatement strength also explain significant variance in standardized delayed recall performance; the relation with ANG category-level reinstatement did not reach significance. Scatterplots reflect raw values for each measure.

Effects of age on hippocampal and cortical indices of pattern completion

Our second key aim was to understand how hippocampal pattern completion processes vary across individuals, turning first to the effects of age. For all individual-differences analyses of pattern completion, we computed mean category-level and event-level reinstatement strength in VTC and ANG during associative hits and mean hippocampal activity during associative hits (corrected by mean activity during correct rejections) for each participant. Each measure was adjusted for head motion, and reinstatement strength was further adjusted for encoding strength, before being entered into regression models. Regression analyses revealed that (a) hippocampal activity did not significantly vary with age (b = -0.10, p = 0.35; Figure 5a), whereas there was (b) an age-related decline in category-level reinstatement strength during associative hits (i.e., mean logits; VTC: b = -0.34, p < 0.001; ANG: b = -0.16, p < 0.05; Figure 5b-c) and (c) an age-related decline in event-level reinstatement (i.e., ERS) during associative hits in VTC (b = -0.26, p < 0.01).

Neural indices of pattern completion explain individual differences in episodic memory

We next asked whether the strength of neural measures of pattern completion during associative retrieval explains variance in memory performance, independent of age. Separate regression models, controlling for age, revealed that individual differences in exemplar-specific recall were related to hippocampal activity (b = 0.47, p < 10^-7; Figure 5d) and to category-level reinstatement strength during associative hits (VTC: b = 0.45, p < 10^-6; ANG: b = 0.41, p < 0.001). In contrast, individual differences in event-level reinstatement did not explain significant variance in exemplar-specific recall (all ps > 0.33). Thus, individual differences in the integrity of hippocampal retrieval mechanisms and category-level cortical reinstatement contribute to variability in pattern-completion-dependent (i.e., associative) memory in older adults. To determine whether these observed effects were moderated by age, we repeated the analyses including an age x predictor interaction in each model. These models provided no significant evidence for an age-related moderation of the effect of hippocampal activity (b = -0.15, p = 0.088) or category-level reinstatement strength (VTC: p = 0.977; ANG: p = 0.565) on exemplar-specific recall. While these results suggest that the strength of the relationships between (a) hippocampal activity and (b) category-level reinstatement strength and individual differences in associative memory is age-invariant, we interpret this result with caution given the restricted age range (60-82 years) of the current sample. To determine whether these neural variables explain unique variance in memory performance, we used hierarchical regression (see Table 2 for model parameters).
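The hierarchical (nested) regression reported next asks whether each neural measure explains variance in recall beyond age. A minimal sketch of that model-comparison logic, using simulated stand-ins for the variables in Table 2, is given below.

```python
# Minimal sketch of nested (hierarchical) regression: does adding a neural
# predictor explain variance in recall beyond age? Variables are simulated
# stand-ins for the measures reported in Table 2.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 100
df = pd.DataFrame({
    "age": rng.uniform(60, 82, n),
    "hippocampus": rng.normal(size=n),            # hippocampal retrieval activity
    "vtc_reinstatement": rng.normal(size=n),      # mean category-level logits (VTC)
})
df["recall"] = (-0.02 * df.age + 0.4 * df.hippocampus
                + 0.4 * df.vtc_reinstatement + rng.normal(size=n))

m1 = smf.ols("recall ~ age", df).fit()
m2 = smf.ols("recall ~ age + hippocampus", df).fit()
m3 = smf.ols("recall ~ age + hippocampus + vtc_reinstatement", df).fit()

f2, p2, _ = m2.compare_f_test(m1)   # step 2: hippocampal activity beyond age
f3, p3, _ = m3.compare_f_test(m2)   # step 3: VTC reinstatement beyond both
print("adjusted R^2:", m1.rsquared_adj, m2.rsquared_adj, m3.rsquared_adj)
print(f"step 2: F = {f2:.2f}, p = {p2:.4f}; step 3: F = {f3:.2f}, p = {p3:.4f}")
```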
Compared with a model including age alone (adjusted R^2 = 0.126), adding hippocampal activity explained additional variance in exemplar-specific recall (model comparison: F(1,96) = 29.54, p < 10^-7, adjusted R^2 = 0.325). Moreover, adding a single category-level reinstatement metric explained further variance in performance (model comparison: VTC: F(1,95) = 22.75, p < 10^-6, adjusted R^2 = 0.438; ANG: F(1,95) = 8.25, p < 0.01, adjusted R^2 = 0.365). However, when VTC and ANG were both included in the same model, category-level reinstatement strength in ANG was no longer a significant predictor (p = 0.412). Analogous findings were observed with associative d' as the dependent variable (see Supplementary file 1 for model parameters). Thus, in older adults, individual differences in hippocampal activity and cortical reinstatement strength provide complementary information, over and above age, in explaining individual differences in associative memory, whereas indices of category-level reinstatement strength explain shared variance.

Independent measures of memory scale with individual differences in pattern completion

Finally, we examined whether our task-based fMRI measures of pattern completion (hippocampal activity and cortical reinstatement) explain individual differences in an independent measure of episodic memory, using a delayed recall composite score collected in a separate neuropsychological testing session (see Materials and methods). Controlling for age and sex, hippocampal activity (b = 0.19, p < 0.01; Figure 5g) and VTC category-level reinstatement strength (b = 0.21, p < 0.01; Figure 5h) were significant predictors of delayed recall score; the relationship with ANG category-level reinstatement strength did not reach significance (b = 0.14, p = 0.11; Figure 5i; see Figure 5-figure supplement 1 for partial plots). Further, as for exemplar-specific recall, we found that hippocampal activity and VTC category-level reinstatement strength explained unique variance in delayed recall performance (hippocampus: b = 0.16, p < 0.05; VTC: b = 0.20, p < 0.05; adjusted R^2 = 0.231). Given the observed relationships between this standardized neuropsychological measure and the present indices of pattern completion, we asked whether delayed recall score alone could account for the observed relationship between the neural measures and exemplar-specific recall. When delayed recall score was added to the full model (see Table 2, Step 5), this measure explained additional variance in exemplar-specific recall (model comparison: F(1,94) = 7.45, p < 0.01, adjusted R^2 = 0.485), but hippocampal activity and VTC category-level reinstatement strength remained significant predictors (hippocampus: b = 0.335, p < 10^-5; VTC reinstatement: b = 0.377, p < 10^-5). Together, these results support the hypothesis that individual differences in the integrity of pattern completion processes, indexed by univariate and pattern-based task-related fMRI metrics, explain variance in memory performance across established hippocampal-dependent assays of episodic memory, and do so in a manner that is not captured by simple standardized neuropsychological tests.

Table 2 note: a = adjusted by motion; b = adjusted by encoding strength (mean logits across leave-one-run-out n-fold cross-validation); Reinstatement = category-level reinstatement (mean logits across associative hits); SE = standard error; VTC = ventral temporal cortex; ANG = angular gyrus; ~p < 0.1, *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 10^-5.
Discussion Using univariate and multivariate fMRI, the current investigation characterizes the integrity of hippocampal pattern completion during associative retrieval in a large cohort of putatively healthy older adults. We provide novel evidence for unique contributions of hippocampal and cortical indices of pattern completion to a) trial-by-trial differences in episodic remembering in older adults, as well as b) age-related and age-independent individual differences in episodic memory performance. Taken together, these results provide novel insights into the neural mechanisms supporting episodic memory, as well as those driving variability in remembering across older adults. The present analyses of trial-level brain-behaviour relationships significantly build on work in younger adults Gagnon et al., 2019), demonstrating that trial-wise relationships (a) between hippocampal activity and cortical reinstatement and (b) between each of these neural measures and memory behaviour are present later in the lifespan. While directionality is difficult to establish with fMRI, these results are consistent with models of episodic retrieval wherein hippocampal pattern completion, triggered by partial cues, drives reinstatement of event representations in the cortex, which supports episodic remembering and memory-guided decision making (Marr, 1971;McClelland et al., 1995). Further bolstering this interpretation, we demonstrate that (a) category-level cortical reinstatement partially mediates the relationship between hippocampal activity and associative retrieval success, and (b) the relationship between hippocampal activity and associative retrieval success was qualitatively strongest in DG/CA3, consistent with a key role of CA3 in initiating pattern-completion dependent retrieval (Marr, 1971;McClelland et al., 1995;Nakazawa et al., 2002;Tanaka et al., 2014;Staresina et al., 2019). Moreover, the present results provide novel evidence for stability in the trial-wise relationship between hippocampal activity and (a) cortical reinstatement and (b) associative retrieval success, as neither relationship varied as a function of age. Consistent with the observed trial-level relationship between hippocampal activity and associative retrieval success, we also demonstrate a positive relationship between the magnitude of hippocampal activity during associative hits and associative memory performance. Our findings complement and build on prior work (de Chastelaine et al., 2016), as we demonstrate that this effect was observed across hippocampal subfields, including DG/CA3, and did not vary significantly as a function of age. These results are compatible with proposals that the relationship between hippocampal 'recollection success' effects and memory performance remains stable across the lifespan , as well as more broadly with proposals that preservation of hippocampal function is important for the maintenance of episodic memory in older adults over time (Persson et al., 2012;Pudas et al., 2013). We note, however, that a negative relationship between hippocampal retrieval activity and memory performance has also been observed in older adults (e. g., Carr et al., 2017;Reagh et al., 2018). Differences across studies may be related to (a) the paradigms and/or contrasts employed (e.g., associative recollection vs. lure discrimination), (b) image resolution (e.g., individual subfields vs. 
the whole hippocampus), or (c) the make-up of the study population (e.g., cognitively normal or cognitively impaired; Dickerson and Sperling, 2008). Additional well-powered studies of hippocampal retrieval dynamics in older adults are needed to assess the degree to which these variables alter the relationship between hippocampal activity and memory behaviour. The present results also provide novel insights into the basis of mnemonic decisions in older adults. Specifically, we demonstrate that trial-wise indices of reinstatement strength -indexed using classifier-derived evidence and encoding-retrieval pattern similarity -were tightly linked to memory behaviour, including response accuracy and speed. This finding suggests that retrieval was not 'all or none', but likely graded (Mickes et al., 2009;Kuhl et al., 2011;Harlow and Yonelinas, 2016). Indeed, while participants were instructed during scanning to recollect the specific associate, correct category judgments (agnostic to correct exemplar-specific recall) could nonetheless be supported by retrieval of generic category information (e.g., a place), prototypical details (e.g., a bridge), specific exemplar details (e.g., the Golden Gate Bridge), or even retrieval of erroneous, but category consistent details (e.g., Niagara Falls). The category-level reinstatement effects observed here likely reflect some combination of these retrieval outcomes, as suggested by the strong correlation between post-scan exemplar-specific recall and within-scan associative d', along with the observation that the proportion of specific exemplars recalled post-scan was generally lower than correct categorical judgements during scanning (though the former undoubtedly declined due to the longer retention interval and interference effects). Beyond the strength of reinstatement, the present results cannot adjudicate the nature of the details recalled. For example, both category-and exemplar-specific associative hits could be supported by retrieval of semantic details (e.g., the Golden Gate Bridge), perceptual details (e.g., the bridge was red), or some combination (e.g., vividly recalling the image of the Golden Gate Bridge). One possibility, though speculative, is that VTC and ANG support representations of distinct types of event features (e.g., perceptual features in VTC and semantic and/or multimodal features in ANG). This possibility is in line with existing evidence (e.g., Bonnici et al., 2016;Favila et al., 2018) and also with the present observation that reinstatement strength in VTC and ANG made complementary contributions to retrieval success. Regardless of the precise nature of the details recalled, we demonstrate that, as in younger adults (Kuhl et al., 2011;Kuhl et al., 2013;Gordon et al., 2014), recovery of stronger mnemonic evidence was associated with greater accuracy and faster responses, and this was true for representations supported by VTC and ANG alike. This relationship may reflect reduced demands on post-retrieval monitoring and selection processes and/or greater confidence in the face of stronger mnemonic evidence. Interestingly, the strength of the trial-level relationship between VTC reinstatement strength and behaviour weakened with increased age. This could be related to age-related changes in decision criteria, retrieval monitoring ability, response strategies, or some combination of these factors. 
Future work is needed to explore the specific neurocognitive basis of this intriguing effect, which likely involves interactions between the medial temporal lobe and frontoparietal regions (Waskom et al., 2014;Gagnon et al., 2019). Although we observed robust group-level cortical reinstatement effects during associative hits, category-level reinstatement strength declined with age, and individual differences in category-level reinstatement strength explained significant variance in episodic memory. Importantly, the effect of age on reinstatement strength, and the relationship between reinstatement strength and memory performance, was observed after accounting for variance in encoding classifier performance, a putative assay of cortical differentiation (i.e., the ability to establish distinct neural patterns associated with different visual stimulus categories) during memory encoding. Prior work has demonstrated reductions in cortical differentiation in older relative to young adults (Voss et al., 2008;Carp et al., 2011;Park et al., 2012;Koen et al., 2019;Trelle et al., 2019), and evidence from both older and young adults suggests that cortical differentiation at encoding can impact reinstatement strength Johnson et al., 2015) and memory performance (Koen et al., 2019). Indeed, we found that encoding classifier strength was a strong predictor of category-level reinstatement strength in the present sample. Critically, by controlling for encoding strength in the current analyses, the present results indicate that the observed variance in reinstatement strength, and its relation to memory performance, does not simply reflect downstream effects of cortical differentiation. Instead, variance in reinstatement strength likely also provides information about the precision with which event representations are retrieved in older adults. These data therefore provide neuroimaging evidence in support of existing proposals that age-related episodic memory decline is driven, in part, by a loss of specificity or precision in mnemonic representations, a possibility that has been well-supported by behavioural evidence (Luo and Craik, 2009;Trelle et al., 2017;Korkki et al., 2020). Interestingly, while cortical reinstatement is a putative read-out of pattern completion, and therefore relies critically on the hippocampus -a possibility supported by the present data -the hippocampal and cortical measures of pattern completion defined here explained unique variance in memory performance, both at the trial level and across individuals. Indeed, these measures together explained nearly three times as much variance in exemplar-specific associative recall as age alone. One possibility is that hippocampal activity and cortical reinstatement strength index distinct aspects of recollection: retrieval success vs. retrieval precision, respectively (e.g., Harlow and Yonelinas, 2016;Richter et al., 2016). That is, whereas increases in hippocampal activity may signal recollection of some event details, this signal alone may not indicate the fidelity or precision with which the event is recollected. Conversely, reinstatement strength may provide more information about the contents of recollection, including the specificity or precision of mnemonic representations (e.g., recall of generic as opposed to exemplar-specific details), and perhaps even the nature of the details recollected (i.e., perceptual vs semantic). 
An alternative, but not mutually exclusive possibility, is that representations reinstated in cortex may be differentially affected by top-down goal states, post-retrieval monitoring, selection and/or decision processes (Kuhl et al., 2013;Favila et al., 2018), which may contribute unique variance in memory performance beyond that explained by hippocampal-initiated event replay. Future work is needed to examine whether the unique variance explained by cortical reinstatement relates to frontoparietal control and decision processes in older adults. Indeed, it is important to note that variability in episodic remembering, and indeed variability in the strength of the present pattern completion metrics, is likely influenced by a number of variables, only some of which are measured here. For example, aging may affect other processes at retrieval, including elaboration of retrieval cues (Morcom and Rugg, 2004) and post-retrieval monitoring and selection (McDonough et al., 2013;Trelle et al., 2019), as well as factors at encoding, including the differentiation of stimulus representations (Voss et al., 2008;Carp et al., 2011;Park et al., 2012;Koen et al., 2019;Trelle et al., 2019), goal-directed or sustained attention (Hultsch et al., 2002;Geerligs et al., 2014), and elaborative or 'strategic' encoding processes (Luo et al., 2007;Trelle et al., 2015). These variables could vary both within individuals (i.e., across trials), as well as between individuals (e.g., trait level differences). The manner in which these variables impact pattern completion processes at retrieval, or make independent contributions to episodic remembering in older adults, is an important direction for future work. Nevertheless, the present results provide compelling initial evidence that (a) hippocampal and cortical indices of pattern completion play a central role in determining whether individual events will be remembered or forgotten, (b) that predicted relationships between hippocampal activity, reinstatement strength, and associative memory retrieval can be observed even late in the lifespan, and (c) and that these neural metrics explain unique variance in memory performance across individuals. Hippocampal and cortical indices of pattern completion not only explained variance in our primary associative memory measures, but also in delayed recall performance on standardized neuropsychological tests -among the most widely used assays of episodic memory in the study of aging and disease. The relationship between these measures, collected during separate testing sessions, suggests that the neural indices derived from task-based fMRI are tapping into stable individual differences, and may represent a sensitive biomarker of hippocampal and cortical function. Critically, we also demonstrate that these neural and neuropsychological test measures explained unique variance in associative memory, together accounting for 50% of the variance in exemplar-specific recall across individuals. This not only indicates that the present neural indices provide information that cannot be garnered from paper and pencil tests alone, but also suggests that we can combine these neural metrics with existing measurement tools to build more accurate models to explain individual differences in memory performance in older adults. 
An important direction for future work is to assess whether combining task-related neural measures, such as those identified here, with other known biomarkers of brain health and disease risk (e.g., in vivo measures of amyloid and tau accumulation, hippocampal volume, white matter integrity; Hedden et al., 2016;Jack et al., 2018) can further increase sensitivity for explaining individual differences in memory performance, as well as predicting future disease risk and memory decline prior to the emergence of clinical impairment. Taken together, the present results significantly advance our understanding of fundamental retrieval processes supporting episodic memory in cognitively normal older adults. By exploring how neural indices of pattern completion vary -both across trials and across individuals -these findings demonstrate that hippocampal activity and cortical reinstatement during memory retrieval provide a partial account for why and when older adults remember, and they predict which older adults will perform better than others across multiple widely adopted assays of episodic memory. They also suggest that some neural indices of pattern completion may be affected by age to a greater degree than others, though we note that both the presence and absence of age effects must be interpreted with caution due to the cross-sectional nature of the study design, and should be confirmed in the context of longitudinal studies. Moreover, we note that because the current sample is cognitively healthy, future work is needed to determine if similar patterns of results are observed across qualitatively different cohorts of older adults, particularly those in which subjective or mild cognitive decline is already apparent. Nevertheless, our findings underscore the striking heterogeneity in brain and behaviour, even among cognitively normal older adults, and lend support to the hypothesis that this high within-group variance likely contributes to the wealth of mixed findings in the literature, particularly for traditional group-level comparisons in the context of small-to-moderate sample sizes. Collectively, our findings illustrate how an individual differences approach can advance understanding of the neurocognitive mechanisms underlying variability in episodic memory in older adults. Materials and methods Participants One hundred and five cognitively healthy older adults (aged 60-82 years; 65 female) participated as part of the Stanford Aging and Memory Study. Eligibility included: normal or corrected-to-normal vision and hearing; right-handed; native English speaking; no history of neurological or psychiatric disease; a Clinical Dementia Rating score of zero (CDR; Morris, 1993) and performance within the normal range on a standardized neuropsychological assessment (see Neuropsychological Testing). Data collection spanned multiple visits: Neuropsychological assessment was completed on the first visit and the fMRI session occurred on the second visit, with the exception of nine participants who completed the fMRI session on the same day as the neuropsychological testing session. Visits took place~6.18 weeks apart on average (range = 1-96 days). Participants were compensated $50 for the clinical assessment and $80 for the fMRI session. All participants provided informed consent in accordance with a protocol approved by the Stanford Institutional Review Board. 
Data from five participants were excluded from all analyses due to excess head motion during scanning (see fMRI preprocessing), yielding a final sample of 100 older adults (60-82 years; 61 female; see Table 1 for demographics).
Neuropsychological testing
Participants completed a neuropsychological test battery consisting of standardized tests assessing a range of cognitive functions, including episodic memory, executive function, visuospatial processing, language, and attention. Scores were first reviewed by a team of neurologists and neuropsychologists to evaluate cognition and reach a consensus assessment that each participant was cognitively healthy, defined as performance on each task within 1.5 standard deviations of demographically adjusted means. Subsequently, a composite delayed recall score was computed for each participant by (a) z-scoring the delayed recall subtest scores from the Logical Memory (LM) subtest of the Wechsler Memory Scale, 3rd edition (WMS-III; Wechsler, 1997), the Hopkins Verbal Learning Test-Revised (HVLT-R; Brandt, 1991), and the Brief Visuospatial Memory Test-Revised (BVMT-R; Benedict, 1997), and (b) then averaging. This composite score declined with age (b = −0.21, p < 0.005), was lower in males than females (b = −0.35, p < 0.05), but did not vary with years of education (b = 0.07, p > 0.31).
Materials
Stimuli comprised words paired with colour photos of faces and scenes obtained from online sources. For each participant, 120 words (out of 150 words total) were randomly selected and paired with the pictures (60 word-face; 60 word-place) during a study phase, and these 120 words plus the remaining 30 words (foils) appeared as cues during the retrieval phase. Words were concrete nouns (e.g., 'banana', 'violin') between 4 and 8 letters in length. Faces corresponded to famous people (e.g., 'Meryl Streep', 'Ronald Reagan') and included male and female actors, musicians, politicians, and scientists. Places corresponded to well-known locations (e.g., 'Golden Gate Bridge', 'Niagara Falls') and included manmade structures and natural landscapes from a combination of domestic and international locations.
Behavioural procedure
Prior to scanning, participants completed a practice session that comprised an abbreviated version of the task (12 word-picture pairs not included in the scan session). This ensured that participants understood the task instructions and were comfortable with the button responses. Participants had the option to repeat the practice round multiple times if needed to grasp the instructions. Next, concurrent with fMRI, participants performed an associative memory task consisting of five rounds of alternating encoding and retrieval blocks (Figure 1). In each encoding block, participants viewed 24 word-picture pairs (12 word-face and 12 word-place) and were asked to intentionally form an association between each word and picture pair. To ensure attention to the pairs, participants were instructed to indicate via button press whether they were able to successfully form an association between the items in the pair. Following each encoding block, participants performed a retrieval task that probed item recognition and associative recollection. In each block, 24 target words were interspersed with 6 novel (foil) words; participants made a 4-way memory decision for each word.
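To make the composite delayed recall score concrete, the following is a minimal Python sketch of the z-score-then-average procedure described above. The column names and values are hypothetical placeholders for the three delayed recall subtests, not the names used in the study's data files.

```python
import pandas as pd

# Hypothetical per-participant delayed recall scores; column names stand in
# for the WMS-III Logical Memory, HVLT-R, and BVMT-R delayed recall subtests.
scores = pd.DataFrame({
    "lm_delay":   [24, 18, 30],
    "hvlt_delay": [9, 7, 11],
    "bvmt_delay": [8, 5, 10],
})

# (a) z-score each subtest across participants, (b) average into a composite.
z = (scores - scores.mean()) / scores.std(ddof=1)
composite_delayed_recall = z.mean(axis=1)  # one composite score per participant
```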
Specifically, if they recognized the word and recollected the associated image, they responded either 'Face' or 'Place' to indicate the category of the remembered image; if they recognized the word but could not recollect sufficient details to categorize the associated image, they responded 'Old'; if they did not recognize the word as studied, they responded 'New'. Responses were made via right-handed button presses, with four different finger assignments to the response options counterbalanced across participants. Using MATLAB Psychophysics Toolbox (Brainard, 1997), visual stimuli were projected onto a screen and viewed through a mirror; responses were collected through a magnet-compatible button box. During both encoding and retrieval blocks, stimuli were presented for 4 s, followed by an 8 s inter-trial fixation. During retrieval blocks, the probe word changed from black to green text when there was 1 s remaining, indicating that the end of the trial was approaching and signaling participants to respond (if they had not done so already). After the MR scan session, a final overt cued-recall test was conducted outside the scanner to evaluate the degree to which participants were able to recollect the specific face or place associated with each target word. On this post-test, participants were presented with studied words, in random order, and asked to provide the name of the associate or, if not possible, a description of the associate in as much detail as they could remember. The post-test was self-paced, with responses typed out on a keyboard; participants were instructed to provide no response if no details of the associate could be remembered.
Memory response classification
The fMRI retrieval trials were classified into seven conditions: associative hits (studied words for which the participant indicated the correct associate category), associative misses (studied words for which the participant indicated the incorrect associate category), item hits (studied words correctly identified as 'old'), item misses (studied words incorrectly identified as 'new'), item false alarms (foils incorrectly called 'old'), associative false alarms (foils incorrectly indicated as associated with a 'face' or a 'place'), and correct rejections (CR; foils correctly identified as 'new'). Because the number of false alarms was low (M = 5.1, SD = 4.7), these trials were not submitted to fMRI analysis (see Supplementary file 1 for a summary of trial counts and retrieval reaction time by memory outcome). In-scanner associative memory performance was estimated using a discrimination index, associative d'. Hit rate was defined as the rate of correct category responses to studied words (associative hits) and the false alarm rate was defined as the rate of incorrect associative responses to novel words (associative false alarms). Thus, associative d' = Z('Correct Associate Category' | Old) − Z('Associate Category' | New). We additionally calculated an old/new discrimination index to assess basic understanding of and ability to perform the task. Here, hit rate was defined as the rate of correct old responses to studied words, irrespective of associative memory (associative hits, associative misses, item hits), and the false alarm rate was defined as the rate of incorrect old responses to novel words (item false alarms, associative false alarms). Thus, old/new d' = Z('Old' + 'Face' + 'Place' | Old) − Z('Old' + 'Face' + 'Place' | New). The post-test data were analysed using a semi-automated method.
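As a sketch of the discrimination indices above, the snippet below computes d' from hit and false-alarm counts. The small correction for rates of exactly 0 or 1 is an assumption added for illustration (the text does not state which correction, if any, was used), and the example counts are invented.

```python
from scipy.stats import norm

def dprime(n_hits, n_old, n_fas, n_new):
    """d' = Z(hit rate) - Z(false-alarm rate).

    Adding 0.5 to counts and 1 to totals avoids rates of exactly 0 or 1;
    this correction is illustrative only.
    """
    hit_rate = (n_hits + 0.5) / (n_old + 1)
    fa_rate = (n_fas + 0.5) / (n_new + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Associative d': correct category responses to the 120 studied words vs.
# face/place responses incorrectly given to the 30 foils (counts are made up).
assoc_dprime = dprime(n_hits=80, n_old=120, n_fas=3, n_new=30)
```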
Participants' typed responses were first processed with in-house R code to identify exact matches to the name of the studied image. Responses that did not include exact matches were flagged, and subsequently assessed by a human rater, who determined the correspondence between the description provided by the participant and the correct associate. We computed the proportion of studied words for which the associate was correctly recalled (Exemplar Correct/All Old). One participant did not complete the post-test, leaving 99 participants in all analyses of the post-test data.
MRI data acquisition
Data were acquired on a 3T GE Discovery MR750 MRI scanner (GE Healthcare) using a 32-channel radiofrequency receive-only head coil (Nova Medical). Functional data were acquired using a multiband EPI sequence (acceleration factor = 3) consisting of 63 oblique axial slices parallel to the long axis of the hippocampus (TR = 2 s, TE = 30 ms, FoV = 215 mm × 215 mm, flip angle = 74°, voxel size = 1.8 × 1.8 × 2 mm). To correct for B0 field distortions, we collected two B0 field maps before every functional run, one in each phase encoding direction. Two structural scans were acquired: a whole-brain high-resolution T1-weighted anatomical volume (TR = 7.26 ms, FoV = 230 mm × 230 mm, voxel size = 0.9 × 0.9 × 0.9 mm, slices = 186), and a T2-weighted high-resolution anatomical volume perpendicular to the long axis of the hippocampus (TR = 4.2 s, TE = 65 ms, FOV = 220 mm, voxel size = 0.43 × 0.43 × 2 mm; slices = 29). The latter was used for manual segmentation of hippocampal subfields and surrounding cortical regions (Olsen et al., 2009).
fMRI preprocessing
Data were processed using a workflow of FSL (Smith et al., 2004) and Freesurfer (Dale et al., 1999) tools implemented in Nipype (Gorgolewski et al., 2011). Each timeseries was first realigned to its middle volume using normalized correlation optimization and cubic spline interpolation. To correct for differences in slice acquisition times, data were temporally resampled to the TR midpoint using sinc interpolation. Finally, the timeseries data were high-pass filtered with a Gaussian running-line filter using a cutoff of 128 s. The hemodynamic response for each trial was estimated by first removing the effects of motion, trial artifacts, and session from the timeseries using a general linear model. The residualized timeseries was then reduced to a single volume for each trial by averaging across TRs 3-5 (representing 4-10 s post-stimulus onset), corresponding to the peak of the hemodynamic response function. To preserve the high resolution of the acquired data, the data were left unsmoothed. Images with motion or intensity artifacts were automatically identified as those TRs in which total displacement relative to the previous frame exceeded 0.5 mm or in which the average intensity across the whole brain deviated from the run mean by greater than five standard deviations. Runs in which the number of artifacts identified exceeded 25% of timepoints, as well as runs in which framewise displacement exceeded 2 mm, were excluded. These criteria led to exclusion of data from five participants who exhibited excess head motion across runs, as well as exclusion of one study and test run from an additional participant. Across all included runs from 100 participants, an average of 2.4 (SD = 3.7) encoding phase volumes (1.7% of volumes) and 2.6 (SD = 4.2) retrieval phase volumes (1.5% of volumes) were identified as containing an artifact.
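The artifact-flagging rule and trial-wise averaging described above can be expressed compactly; the sketch below, with illustrative function and variable names, applies the thresholds stated in the text to per-TR summary measures and then averages TRs 3-5 after each trial onset.

```python
import numpy as np

def flag_artifact_trs(framewise_displacement, global_intensity,
                      fd_thresh=0.5, z_thresh=5.0):
    """Flag TRs whose displacement relative to the previous frame exceeds
    0.5 mm or whose whole-brain mean intensity deviates from the run mean
    by more than five standard deviations (thresholds from the text)."""
    z = (global_intensity - global_intensity.mean()) / global_intensity.std()
    return (framewise_displacement > fd_thresh) | (np.abs(z) > z_thresh)

def trial_responses(residual_timeseries, onset_trs, first_tr=3, last_tr=5):
    """Reduce a residualised run (TRs x voxels) to one pattern per trial by
    averaging TRs 3-5 after each onset, the approximate peak of the
    hemodynamic response for a 2 s TR."""
    return np.stack([residual_timeseries[t + first_tr: t + last_tr + 1].mean(axis=0)
                     for t in onset_trs])
```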
Trials containing fMRI artifacts were excluded from all analyses. To control for potential residual effects of head motion on our primary variables of interest, we adjusted each variable of interest by mean framewise displacement using linear regression (see Supplementary file 1 for a summary of motion effects). Using Freesurfer, we segmented the T1-weighted anatomical volume at the gray-white matter boundary and constructed tessellated meshes representing the cortical surface (Dale et al., 1999). Functional data from each run were registered to the anatomical volume with a six degrees-of-freedom rigid alignment optimizing a boundary-based cost function (Greve and Fischl, 2009). Finally, runs 2-4 were resampled into the space of run 1 using cubic spline interpolation to bring the data into a common alignment. All analyses were thus performed in participant native space, avoiding normalization to a group template. Regions of interest Our analyses focus specifically on hippocampal pattern completion processes -via hippocampal univariate activity and multivariate cortical reinstatement metrics -in the aging brain. Thus, analyses were conducted in three a priori regions of interest (ROIs), selected based on existing theoretical and empirical work to optimize the measurement of this process. Analyses of task-evoked univariate activity were focused on the hippocampus, whereas multivoxel pattern analyses were conducted in ventral temporal cortex (VTC) and angular gyrus (ANG), two cortical areas that have been reliably linked to cortical reinstatement in healthy younger adults (Kuhl et al., 2013;Gordon et al., 2014;Kuhl and Chun, 2014;Favila et al., 2018;Lee et al., 2019). All ROIs were bilateral and defined in participants' native space (Figure 2). The hippocampal mask was defined manually using each participant's high-resolution T2weighted structural image using established procedures (Olsen et al., 2009), and comprised the whole hippocampus (see Figure 4-figure supplements 5-6 for analysis of hippocampal subfields). The VTC mask was composed of three anatomical regions: parahippocampal cortex, fusiform gyrus, and inferior temporal cortex. The fusiform gyrus and inferior temporal cortex masks were generated from each participant's Freesurfer autosegmentation volume using bilateral inferior temporal cortex and fusiform gyrus labels. These were combined with a manually defined bilateral parahippocampal cortex ROI, defined using established procedures (Olsen et al., 2009), to form the VTC mask. The ANG ROI was defined by the intersection of the Freesurfer inferior parietal lobe label and the Default Network of the Yeo 7 network atlas (Yeo et al., 2011), defined on the Freesurfer average (fsaverage) cortical surface mesh. This intersection was used to confine the ROI to the inferior parietal nodes of the Default Mode Network, which predominantly encompasses ANG (Favila et al., 2018). To generate ROIs in participants' native space from the fsaverage space label, we used the approach detailed in Waskom and colleagues (Voss et al., 2008), which uses the spherical registration parameters to reverse-normalize the labels, and then converts the vertex coordinates of labels on the native surface into the space of each participant's first run using the inverse of the functional to anatomical registration. Participant-specific ROIs were then defined as all voxels intersecting the midpoint between the gray-white and gray-pial boundaries. 
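For readers who want to approximate the VTC ROI construction, the sketch below combines FreeSurfer cortical labels into a binary mask. This is not the paper's exact procedure: the authors traced parahippocampal cortex manually, whereas here all three regions are taken from aparc+aseg, and the label IDs (standard FreeSurferColorLUT values) and file path are assumptions that should be verified locally.

```python
import numpy as np
import nibabel as nib

# Label IDs follow the standard FreeSurferColorLUT and should be checked
# against FreeSurferColorLUT.txt before use.
VTC_LABELS = [1007, 2007,   # ctx-lh/rh-fusiform
              1009, 2009,   # ctx-lh/rh-inferiortemporal
              1016, 2016]   # ctx-lh/rh-parahippocampal

aseg = nib.load("aparc_aseg_in_functional_space.nii.gz")  # hypothetical path
labels = aseg.get_fdata()
vtc_mask = np.isin(labels, VTC_LABELS)

nib.save(nib.Nifti1Image(vtc_mask.astype(np.uint8), aseg.affine),
         "vtc_mask.nii.gz")
```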
Multivoxel pattern classification Our primary measure of category-level cortical reinstatement during memory retrieval was derived from multivoxel classification analysis. Classification was implemented using Scikit-learn (Pedregosa et al., 2011), nilearn (Abraham et al., 2014), nibabel (Brett et al., 2016), and in house Python scripts, and performed using L2-penalized logistic regression models as instantiated in the LIBLINEAR classification library (regularization parameter C = 1). These models were fit to preprocessed BOLD data from VTC and ANG that were reduced to a single volume for each trial by averaging across TRs 3-5. Prior to classification, the sample by voxel matrices for each region were scaled across samples within each run, such that each voxel had zero mean and unit variance. A feature selection step was also conducted, in which a subject-specific univariate contrast was used to identify the top 250 voxels that were most sensitive to each category (face, place) during encoding, yielding a set of 500 voxels over which classification analyses were performed. Prior to each of 10 iterations of classifier training, the data were subsampled to ensure an equal number of face and scene trials following exclusion of trials with artifacts. To first validate that classification of stimulus category (face/place) during encoding was above chance for each ROI, we used a leave-one-run-out-n-fold cross-validation procedure on the encoding data. This yielded a value of probabilistic classifier output for each trial, representing the degree to which the encoding pattern for a trial resembled the pattern associated with a face or place trial. This output was converted to binary classification accuracy indicating whether or not a given test trial was correctly classified according to the category of the studied picture. Here we report the average classifier accuracy across folds for each participant in each ROI. To measure category-level cortical reinstatement during memory retrieval, we trained a new classifier on all encoding phase data, and then tested on all retrieval phase data. For each retrieval trial, the value of probabilistic classifier output represented a continuous measure of the probability (range 0-1) that the classifier assigned to the relevant category for each trial (0 = certain place classification, 1 = certain face classification). For assessment of classifier performance across conditions (associative hits, associative misses, item only hits, and item misses) and ROI (VTC, ANG), we converted this continuous measure of classifier evidence to binary classification accuracy, indicating whether or not a given retrieval trial was correctly classified according to the category of the studied picture. The significance of classifier performance for each condition and ROI was assessed using permutation testing. We generated a null distribution for each participant by shuffling the trial labels over 1000 iterations for each of the 10 subsampling iterations, calculating mean classifier accuracy for each iteration. We then calculated the mean number of times the permuted classifier accuracy met or exceeded observed classifier accuracy to derive a p value indicating the probability that the observed classifier accuracy could arise by chance. 
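A minimal scikit-learn sketch of the classification pipeline described above is given below: an L2-penalised logistic regression with C = 1, leave-one-run-out validation on encoding data, and probabilistic classifier output for retrieval trials. The synthetic arrays stand in for trials-by-voxels matrices that have already been run-wise scaled and reduced by feature selection; all names are illustrative, and the final two lines show one way to convert probabilities into signed log-odds reinstatement strength.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Toy stand-ins for the real inputs (120 encoding and 150 retrieval trials,
# 500 category-selective voxels, 5 runs).
n_enc, n_ret, n_vox = 120, 150, 500
X_encoding = rng.standard_normal((n_enc, n_vox))
y_encoding = rng.choice(["face", "place"], n_enc)
run_labels = np.repeat(np.arange(5), n_enc // 5)
X_retrieval = rng.standard_normal((n_ret, n_vox))
y_retrieval_true = rng.choice(["face", "place"], n_ret)

# L2-penalised logistic regression with regularization parameter C = 1.
clf = LogisticRegression(penalty="l2", C=1.0, solver="liblinear")

# 1) Validate encoding-phase face/place classification, leave-one-run-out.
cv_acc = cross_val_score(clf, X_encoding, y_encoding,
                         groups=run_labels, cv=LeaveOneGroupOut()).mean()

# 2) Reinstatement at retrieval: train on all encoding trials, then obtain
#    probabilistic classifier output for every retrieval trial.
clf.fit(X_encoding, y_encoding)
p_face = clf.predict_proba(X_retrieval)[:, list(clf.classes_).index("face")]

# Signed reinstatement strength: log odds toward the correct associate category.
p_correct = np.where(y_retrieval_true == "face", p_face, 1.0 - p_face)
reinstatement_strength = np.log(p_correct / (1.0 - p_correct))
```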
For trial-wise analyses relating cortical reinstatement strength to memory behaviour (e.g., associative retrieval accuracy and reaction time) and other neural variables (e.g., hippocampal BOLD), a continuous measure of reinstatement strength was derived by calculating the logits (log odds) of the probabilistic classifier output on each trial. Reinstatement strength was signed in the direction of the correct associate for a given trial, such that, regardless of whether the trial was a face or place trial, the evidence was positive when the classifier guessed correctly, and negative when the classifier guessed incorrectly. The magnitude of reinstatement strength was thus neutral with respect to which associate category (face or place) was retrieved. For individual-differences analyses relating cortical reinstatement strength to age and memory behaviour (e.g., associative d', exemplar-specific recall), we computed the mean category-level reinstatement strength (i.e., logits) across associative hit trials for each participant.
Pattern similarity analysis
To complement the classification analyses, we used pattern similarity analyses to measure event-level cortical reinstatement. This approach involved computing the similarity (Pearson correlation) between trial-wise activity patterns extracted from ROIs during encoding and retrieval (i.e., encoding-retrieval similarity; ERS). This analysis approach affords the opportunity to examine reinstatement not only at the categorical level (i.e., within-category ERS minus between-category ERS) but also at the trial-unique item level (i.e., within-event ERS minus within-category ERS). For this analysis, we again used the voxelwise activity patterns for each ROI (this time with no feature selection step), computing the correlation between encoding and retrieval patterns separately for successful (i.e., associative hits) and unsuccessful (i.e., associative misses, item only hits, item misses) retrieval trials, such that the events being compared (within-event, within-category, between-category) were matched on associative retrieval success. Within-category ERS was computed after values on the diagonal of the correlation matrices (i.e., within-event correlations) were removed, ensuring that event-level ERS does not contribute to the within-category ERS estimate. All correlations were Fisher transformed before computing the mean correlation between different events of interest.
Statistical analysis
All statistical analyses were implemented in the R environment (version 3.4.4). Trial-wise analyses were conducted using mixed effects models (linear and logistic) using the lme4 statistical package (Bates et al., 2015). Each model contained fixed effects of interest, a random intercept modeling the mean subject-specific outcome value, and a random slope term modeling the subject-specific effect of the independent variable of interest (e.g., hippocampal activity, reinstatement strength).
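The ERS computation described above can be sketched as follows. For simplicity this version pools all trials rather than splitting by retrieval success as the paper does, and the input names are placeholders.

```python
import numpy as np

def ers_components(enc_patterns, ret_patterns, categories):
    """Encoding-retrieval similarity (ERS) for one ROI.

    enc_patterns / ret_patterns: trials x voxels arrays for the same events
    in matching order; categories: per-trial 'face'/'place' labels.
    Returns Fisher-z within-event, within-category (diagonal removed),
    and between-category ERS.
    """
    categories = np.asarray(categories)
    # Row-wise z-scoring turns the dot product into a Pearson correlation.
    ez = (enc_patterns - enc_patterns.mean(1, keepdims=True)) / enc_patterns.std(1, keepdims=True)
    rz = (ret_patterns - ret_patterns.mean(1, keepdims=True)) / ret_patterns.std(1, keepdims=True)
    r = ez @ rz.T / enc_patterns.shape[1]
    z = np.arctanh(np.clip(r, -0.999999, 0.999999))  # Fisher transform

    same_cat = categories[:, None] == categories[None, :]
    diag = np.eye(len(categories), dtype=bool)
    return (z[diag].mean(),               # within-event ERS
            z[same_cat & ~diag].mean(),   # within-category ERS
            z[~same_cat].mean())          # between-category ERS
```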
Models also contained nuisance regressors (see Supplementary file 1 for a full list of regressors in each model), including stimulus category, age, ROI encoding classifier strength (when reinstatement strength, i.e., logits, was the independent or dependent variable), ROI univariate activity in category-selective voxels (when reinstatement strength was the independent variable, controlling for activity in voxels identified during feature selection, over which classification was performed), overall ROI univariate activity (when ERS was the independent variable, controlling for activity in the whole ROI, as no feature selection step was conducted for pattern similarity analyses), and category-level ERS (when event-level ERS was the independent or dependent variable, to mitigate the possibility that effects of event-level ERS can be attributed to category-level reinstatement). Models were conducted over all test trials in which a studied item was presented, except where indicated that only associative hit trials were included (see Supplementary file 1 for a summary of results when item miss trials are excluded from analyses). Random slopes were uncorrelated with random intercepts to facilitate model convergence. The significance of effects within mixed-model regressions was obtained using log-likelihood ratio tests, resulting in χ² values and corresponding p-values. A Wald z-statistic was additionally computed for model parameters to determine simultaneous significance of coefficients within a given model. All continuous variables were z-scored within participant across all trials prior to analysis. For trial-wise mediation analyses, the coefficient of the indirect path was computed as the product of the direct effects, a × b. The significance of the indirect effect was calculated with bootstrap resampling with 5000 iterations of data sampled with replacement, and was considered significant if zero did not fall within the 95% confidence interval of the bootstrapped estimate of the indirect effect; 95% confidence intervals are reported. Individual-differences analyses were conducted using multiple linear regression. In all regression models, each neural variable was computed by taking the mean value over associative hit trials. For hippocampal BOLD, mean activity during associative hits was corrected by subtracting mean activity during correct rejections. Before entry into regression models, each neural variable was further adjusted by head motion (mean framewise displacement) and, in the case of reinstatement strength, ROI-specific encoding classifier strength (mean logits) to account for individual differences in category differentiation during encoding. Age-independent models adjusted memory scores by age. Main text figures depict raw values for interpretability (see Figure 5-figure supplement 1 for partial plots). Hierarchical regression was used to assess the relative contributions of each independent variable to memory performance. F ratio statistics were used to determine the change in explained variance (R²) at each step compared to the previous step. The explanatory power of each regression model was evaluated descriptively using the explained variance (adjusted R²). All continuous variables were z-scored across participants prior to analysis, producing standardized coefficients. All analyses used a two-tailed alpha level of 0.05 for defining statistical significance.
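The bootstrap test of the indirect effect can be illustrated with a simplified single-level sketch. Note that the paper's trial-wise mediation used mixed effects models; the ordinary-least-squares version below ignores the multilevel structure and is intended only to show the a × b product and the percentile bootstrap with 5000 resamples.

```python
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """a*b: a from regressing m on x; b from regressing y on m, controlling for x."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

def bootstrap_indirect_ci(x, m, y, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI for the indirect effect; the effect is deemed
    significant if zero falls outside the interval."""
    x, m, y = map(np.asarray, (x, m, y))
    n = len(x)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        boot[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])
```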
Ethics
Human subjects: All participants provided informed consent in accordance with a protocol approved by the Stanford Institutional Review Board (IRB #30218).
Supplementary files
Supplementary file 1c. Summary of model parameters for mixed effects models.
Supplementary file 1d. Summary of linear and logistic mixed effects model results when item miss trials are excluded.
Supplementary file 1e. Summary of linear and logistic mixed effects models examining effects of stimulus category (face, place) on relationships between neural variables and behavioural variables.
Supplementary file 1f. Summary of linear mixed effects models examining effects of stimulus category (face, place) on relationships between hippocampal activity and cortical reinstatement.
Supplementary file 1g. Analysis of head motion and its effects on key dependent variables of interest.
Supplementary file 1h. Summary of hierarchical regression analysis predicting associative d'.
Supplementary file 1i. Summary of regression analyses examining the relationship between hippocampal subfield activity during associative retrieval (associative hit - CR) and associative memory.
Data availability
Source data files have been provided for Table 1 and Figures 3-5. Data and code for reproducing all analyses, results, and figures in the paper are available at https://github.com/alitrelle/sams_hpc_fmri (copy archived at https://github.com/elifesciences-publications/sams_hpc_fmri).
A Meta-Analysis of the Cognitive, Affective, and Interpersonal Outcomes of Flipped Classrooms in Higher Education This paper aims to quantify the effects of flipped classrooms in higher education by reviewing 43 empirical studies of students’ cognitive, affective, and interpersonal outcomes. The innovative pedagogy of a flipped classroom in higher education fosters a sustainable, interactive, and student-centered learning environment (as opposed to the traditional lecture style, in which there is little room for interaction). This study’s results show the positive effects of flipped classrooms and highlight the improvement in students’ educational outcomes between 2012 and 2017. Overall, effect sizes were medium—effect size (ES) = 0.35, 95% confidence interval (CI) = 0.24 to 0.47—across three outcome domains using a random effects model. In the outcomes, affective (ES = 0.59), interpersonal (ES = 0.53), and cognitive (ES = 0.24) domains were of a higher order than the effect sizes. However, the results indicated that flipped classrooms benefitted students studying chemistry, engineering, mathematics, and physics less than they did students studying other subjects. Introduction The flipped classroom is an innovative instructional model that is gaining popularity in higher education because it provides active and student-centered learning and enhances students' educational outcomes [1]. Rahman, Mohamed, Aris, and Zaid [2] state that flipped classrooms were initially introduced in college-level technology classes. In the flipped classroom, students study instructional materials before class, typically online lectures, and apply what they learned in in-class activities [3]. Unlike teacher-centered teaching (e.g., the traditional college lecture style), flipped classrooms provide students with engaging, interactive learning experiences in which they can develop complex reasoning, written communication, and critical thinking skills [4]. The needs of students and society often evolve faster than traditional teaching methods. Thus, there is an urgent need to reconstruct college education [5]. An increasing number of stakeholders, including students and instructors, see the traditional, teacher-centered lecture style as obsolete. Consequently, universities are responding by developing, systematizing, and delivering courses and programs in new and innovative ways, which they hope will engage students as well as meet their educational needs and demands. However, transitioning from traditional lecture-based learning to a new classroom model requires a paradigm shift from teacher-centered to student-centered learning [6]. Although some scholars debate about whether the dichotomy of lectures versus active learning is meaningful in today's higher education classrooms [7,8], this paper assumes that flipped classrooms represent a different instructional model that can complement, rather than replace, traditional approaches to education. Flipped classrooms have been shown to improve student motivation [24], student satisfaction [21,25], and confidence [21]. However, some studies have shown that flipped classrooms had a negative impact on students' satisfaction and attitudes [16,26]. Interpersonal outcomes refer to learning that aims to improve student action and performance, including interaction and engagement (e.g., active learning). 
Flipped classrooms have been found to improve student-teacher interaction, student engagement, student-to-student interaction, individual education, active learning, and debate competence [6,21,27]. Negative Outcomes of Flipped Classrooms Not all studies on flipped classrooms report positive results. Some report mixed or negative results. Ryan and Reid [28] demonstrated that low-achieving students in flipped classrooms performed better on exams. However, Jensen, Kummer, and Godoy [16] indicated that flipped classrooms did not improve student performance outcomes regardless of whether students were high achievers or low achievers. Missildine, Fountain, Summers, et al. [26] showed that introducing flipped classrooms improved learning gains but did not improve students' satisfaction. Lucke [29] indicated that students enjoyed their flipped classes but showed no improvement in cognition and understanding. Vliet, Winnips, and Brouwer [18] pointed out that positive learning gains from flipped classroom environments were only temporary. Few meta-analyses exist on the effects of flipped classrooms. Further, there is little empirical evidence regarding flipped classrooms' utility in improving student performance in higher education [30]. This study is the first to examine the effects of flipped classrooms in higher education using a meta-analysis. Research Problem This study conducts a meta-analysis to explore the effects of flipped classrooms on cognitive, affective, and interpersonal educational outcomes. The meta-analysis synthesizes the effects of flipped classrooms in higher education and attempts to answer the following research questions: (a) what is the overall effect of the flipped classroom approach in the context of higher education? (b) What outcome variables have the most influence on measurable flipped classroom effect size? And (c) are any effects of the flipped classroom approach moderated by studies' characteristics or variables (e.g., department, subject area, and publication year)? Method Meta-analysis involves formulating a problem, collecting data, coding data, analysis, and interpretation [31]. This study's meta-analysis followed the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analysis) guidelines [32]. Literature Search This paper examines journal articles and dissertations about flipped classrooms in the context of higher education that were published between 2012 and 2017. The authors searched five electronic databases for empirical articles: The Education Resources Information Center (ERIC), PROQUEST, Web of Science, PsychInfo, and Google Scholar. To capture a range of potential eligible studies, we employed the following search keywords in titles and abstracts: "flipped classroom," "flipped class," "flipped learning," "inverted class," "inverted classroom," "smart learning," and "blended learning." The authors found forty-three meaningful studies that met the study's inclusion and exclusion criteria ( Figure 1). 
Inclusion and Exclusion Criteria
Studies with the following features met this study's inclusion and exclusion criteria: they must be quantitative studies on student learning or reasoning processes in flipped classrooms; they must provide sufficient information to calculate effect sizes; they must define the flipped classroom approach as including the use of video or audio materials before class and featuring in-class activities; they must compare flipped classrooms' effects with those of traditional classrooms; they must feature students in higher education settings; they must have been published between January 2012 and June 2017; and they must be an empirical, peer-reviewed journal article or dissertation.
Coding Studies
The data were extracted from studies that met the inclusion criteria (Table 1). The studies' characteristics were coded as possible moderating variables to investigate the variance of flipped classrooms' effects. Two researchers independently coded each study. We developed a coding manual to maintain reliability of the coding procedures, which included study characteristics, effect size calculation, and report characteristics. Discrepancies between the two coders were resolved prior to data analysis without exception; an independent third expert adjudicated when the two coders could not reach agreement.
Computation of Effect Sizes
The effect sizes in this meta-analysis cover three different data formats: treatment vs. control group design, pre-post design, and standardized mean change difference (pre-post measure with both treatment and control groups), where the pooled estimate of the standard deviation was used to account for different sample sizes between flipped and non-flipped classroom groups. All effect sizes were calculated using the Comprehensive Meta-Analysis (CMA) program to estimate a mean effect size [67]. Effect sizes were reported as positive when flipped classroom students performed better than students in the control groups. The effect size was evaluated as follows: 0.20 = small effect, 0.50 = medium effect, and 0.80 = large effect [68].
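Although the study computed effect sizes with the CMA program, the treatment-vs-control format can be illustrated with a short sketch: a standardized mean difference based on the pooled standard deviation, with its large-sample variance. The example numbers are invented for illustration.

```python
import numpy as np

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation for two independent groups."""
    return np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def standardized_mean_difference(m_flip, sd_flip, n_flip, m_ctrl, sd_ctrl, n_ctrl):
    """(M_flipped - M_control) / SD_pooled and its sampling variance.
    Positive values mean the flipped classroom group scored higher."""
    d = (m_flip - m_ctrl) / pooled_sd(sd_flip, n_flip, sd_ctrl, n_ctrl)
    var_d = (n_flip + n_ctrl) / (n_flip * n_ctrl) + d**2 / (2 * (n_flip + n_ctrl))
    return d, var_d

# Example: flipped class M = 78, SD = 10, n = 40; control M = 74, SD = 11, n = 38.
d, var_d = standardized_mean_difference(78, 10, 40, 74, 11, 38)
```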
Combining Effect Sizes
We employed a two-step process to synthesize the effects of flipped classroom outcomes. First, we calculated the effect size and variance of each outcome in each primary study. Second, we calculated the weighted mean effect size (ES) using inverse variance weights. To select its analysis model, the study conducted a homogeneity test using two measures of variability: Q and I². The Q test examined whether the variability in an average weighted ES exceeds sampling error alone [69]. I² is an alternative measure of homogeneity that is less sensitive to sample size than Q; it indicates the proportion of the observed variance that reflects differences in true effect sizes [67]. To evaluate I² statistics, this study followed Higgins and Green's [70] guidelines: 0% to 40% might not be important; 30% to 60% may represent moderate heterogeneity; 50% to 90% may represent substantial heterogeneity; and 75% to 100% may represent considerable heterogeneity. The null hypothesis of the homogeneity test was that all outcomes came from the same population. If homogeneous, this study would use a fixed effects model, which assumes a common effect size and considers only sampling variance. If heterogeneous, this study would use a random effects model, which assumes no common effect size and considers both sampling variance and true differences between studies [71]. Based on the homogeneity test and an investigation of the flipped classroom primary studies, this study used random effects models to synthesize the main effects and the sub-group analyses.
Publication Bias
Publication bias happens when the results of published studies are different from the results of unpublished studies because studies with positive results, large effects, and large sample sizes are overrepresented in the literature [67,72]. To examine publication bias, this study adopted a funnel plot, exploring whether the distribution is symmetrical around the weighted mean effect size [73]. Funnel plots are scatter plots of effect sizes from studies in the meta-analysis, where the horizontal axis represents effect sizes and the vertical axis represents standard errors [72]. An asymmetrical pattern in the funnel plot indicates possible publication bias.
Analyzing Variances in Effect Sizes Across Studies
Finally, this study examined the variances in the effect sizes using sub-group analysis and meta-regression [74]. Meta-analysts should test whether the effect sizes are homogeneous in order to calculate the overall effect size in a meta-analysis. This study used the homogeneity test results to select an analysis model and decide whether the reviewers would perform a sub-group analysis. Q-statistics were used to assess the heterogeneous structure of the average effect sizes. When the Q statistic is significant (p < 0.05), it suggests that the studies in the meta-analysis have heterogeneous effects. A random effects model was adopted to calculate the overall effect size in this study. The homogeneity statistic is calculated as Q = Σ w_i (g_i − ḡ)², where g_i is the effect size of study i, ḡ = Σ w_i g_i / Σ w_i is the weighted mean effect size, and w_i = 1/v(g_i) is the inverse variance weight. The Q statistic is used to determine whether the primary results are homogeneous for subgroup analysis. The magnitude of effect sizes was interpreted as 0.2 = small, 0.5 = medium, and 0.8 = large, according to Cohen's rule of thumb [68].
Dependence
This meta-analysis included a total of 43 studies and 218 effect sizes.
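A minimal sketch of the pooling and heterogeneity statistics described above is given below, using the common DerSimonian-Laird estimator of the between-study variance for the random effects model; the paper itself used the CMA program, so this is illustrative rather than a reproduction of its exact computation.

```python
import numpy as np

def random_effects_summary(g, v):
    """Inverse-variance pooling with DerSimonian-Laird between-study variance.

    g: per-study effect sizes; v: their sampling variances.
    Returns the pooled effect, its 95% CI, Q, and I^2 (%).
    """
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1.0 / v                                    # fixed-effect weights
    g_bar = np.sum(w * g) / np.sum(w)
    Q = np.sum(w * (g - g_bar) ** 2)               # Cochran's Q
    df = len(g) - 1
    I2 = 0.0 if Q <= df else (Q - df) / Q * 100    # % variance beyond sampling error
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)                  # between-study variance
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    es = np.sum(w_star * g) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return es, (es - 1.96 * se, es + 1.96 * se), Q, I2
```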
When a primary study reports more than one effect size, reviewers must address the assumption of independence, because multiple effect sizes within a study are dependent. Selecting only one effect size per study maintains independence but causes information loss, whereas keeping multiple effect sizes within each study violates the independence assumption. To balance these issues, this study adopted the "shifting unit of analysis" method [75], which proposes a compromise between information loss and violation of the independence assumption. To calculate the overall effect size, the study was used as the unit of analysis, preserving the independence assumption; to perform sub-group analyses, the effect size of each sub-group within a study was used as the unit of analysis.
Results
As mentioned earlier, the 43 studies included in the meta-analysis synthesized a total of 218 effect sizes: an average of 5.1 effect sizes per study. As multiple effect sizes existed within studies, the reviewers considered the dependence of effect sizes in each study. Figure 2 shows the study characteristics for all 43 studies, including effect size (i.e., standardized difference in means), standard error, variance, confidence interval, Z-value, and p-value in a forest plot. Black squares on the forest plot's horizontal lines show the effect size of an individual study, and the horizontal lines indicate the confidence interval for each estimate. The small diamond shape at the bottom represents the overall effect size of all studies. According to the forest plot, the smallest effect size value is −0.933, and the highest effect size value is 1.666. Thirty-nine studies had positive effect sizes, while four had negative effect sizes. Consequently, the implementation of flipped classrooms had a positive effect in 39 of the 43 studies.
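The "shifting unit of analysis" can be illustrated with a small pandas sketch: the same long table of effect sizes is collapsed to one value per study for the overall analysis, and to one value per study within each outcome domain for the sub-group analysis. The column names and values here are hypothetical, and simple averaging of variances is a simplification of how a meta-analysis program would combine them.

```python
import pandas as pd

# One row per effect size: study_id, outcome domain, effect size, variance.
effects = pd.DataFrame({
    "study_id": [1, 1, 1, 2, 2, 3],
    "domain":   ["cognitive", "affective", "affective",
                 "cognitive", "interpersonal", "cognitive"],
    "es":       [0.20, 0.55, 0.65, 0.10, 0.50, 0.30],
    "var":      [0.02, 0.03, 0.03, 0.02, 0.04, 0.05],
})

# Overall analysis: one (averaged) effect size per study.
overall_units = effects.groupby("study_id")[["es", "var"]].mean().reset_index()

# Sub-group analysis: one effect size per study within each outcome domain.
subgroup_units = (effects.groupby(["study_id", "domain"])[["es", "var"]]
                  .mean().reset_index())
```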
Thus, the studies in the analysis did not share a common effect size, which means the null hypothesis of the homogeneity test can be rejected. We used the random effects model to estimate the overall effect size and compare sub-group differences using the study characteristics (e.g., outcome variables, report characteristics variables, and study characteristics variables). The results of the homogeneity test show that the effect sizes are heterogeneous (Table 2). The results of the random effects model analysis are displayed in Table 3. The overall effect size of flipped classrooms was 0.35, indicating that flipped classrooms had a medium effect in terms of Cohen's rule of thumb [68]. The effect size showed an overall significant difference in outcomes between flipped classrooms and traditional lecture-based classrooms in higher education (ES = 0.35, 95% CI = 0.24 to 0.47).
Outcomes of Flipped Classroom (Research Question 2)
This meta-analysis used a random effects model to investigate the differences between sub-groups, as the results from each sub-group were heterogeneous. The categorical variables are as follows: outcome domains (cognitive, affective, and interpersonal), department, subject, data format, and publication status. We conducted a meta-regression analysis using publication year as a covariate. In the random effects categorical analysis by outcome, shown in Table 4, the results of implementing flipped classrooms varied. In the outcomes, the respective effect sizes of the affective (ES = 0.59), interpersonal (ES = 0.53), and cognitive (ES = 0.24) domains were in descending order. In the context of higher education, flipped classrooms appear to have more significant effects on students' affective and interpersonal outcomes than on their cognitive outcomes. Regarding affective outcomes, students' immersion (ES = 1.
Effects of Characteristics (Research Question 3)
Tables 5 and 6 list the effect sizes measured by this study, separated by department and subject area. This study investigated a variety of subject areas to determine whether the flipped classroom approach is more beneficial in some contexts or subjects than it is in others. In the primary studies reviewed in this research, the data are generally represented in three different formats: pre-post design, treatment vs. control group design, and pre-post with treatment vs. control group (standardized mean change difference). The effect sizes for each type are as follows: treatment vs. control, ES = 0.25 (95% CI = 0.21 to 0.28), pre-post design, ES = 0.38 (95% CI = 0.35 to 0.42), and standardized mean change difference, ES = 0.47 (95% CI = 0.41 to 0.53). The difference was not small, and study design may factor into this difference in effect sizes. Regarding publication type, the effect size of dissertations (ES = 0.61, 95% CI = 0.54 to 0.68) was larger than the effect size of journal articles (ES = 0.29, 95% CI = 0.26 to 0.31), but the difference was not significant (Table 7). Regarding year of publication, this study conducted a meta-regression analysis, regressing the effect sizes of flipped classrooms on year of publication as a moderator. The slope of the meta-regression by publication year is negative overall, but it is statistically significant (Table 8) and has a significant moderating effect on the relationship between flipped classrooms and a study's year of publication.
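The meta-regression on publication year amounts to a weighted least-squares fit of effect size on one moderator. The sketch below is a simplified illustration under that assumption (the study used CMA); centring the year only aids interpretability and does not change the slope.

```python
import numpy as np

def meta_regression_slope(es, var, year, tau2=0.0):
    """Weighted least squares meta-regression of effect size on publication
    year, with inverse-variance weights 1/(var + tau2). Returns the slope and
    its standard error; tau2 comes from the random effects model (0 gives the
    fixed-effect version)."""
    w = 1.0 / (np.asarray(var, float) + tau2)
    X = np.column_stack([np.ones(len(es)), np.asarray(year, float) - np.mean(year)])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ np.asarray(es, float))
    cov = np.linalg.inv(X.T @ W @ X)
    return beta[1], np.sqrt(cov[1, 1])
```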
Publication Bias
The funnel plot (Figure 3) shows the symmetry of the effect size distribution around the mean effect size, which indicates whether publication bias exists in the overall effect size; the plot provides no evidence for publication bias. This meta-analysis shows no missing studies and required no imputation of effect sizes to correct for publication bias.
Discussion
This study conducted a meta-analysis of the effects of flipped classrooms on students' cognitive, affective, and interpersonal outcomes in higher education. It extends the discussions and findings from recent meta-analyses that found that flipped classrooms had a significant effect on students' cognitive outcomes in higher education: for example, by improving their test scores, grades, knowledge, skills, and self-directed learning (e.g., [9,76,77]). This study expands the evidence for flipped classroom effectiveness in improving college students' academic outcomes as compared to traditional, lecture-based classrooms. The first research question was regarding the overall effect of flipped classrooms on students' cognitive, affective, and interpersonal outcomes. The study found that flipped classrooms had a medium effect on academic outcomes; the average scores of students in flipped classrooms were 0.35 standard deviations above the average scores of students in traditional, lecture-based classrooms. It also confirmed the results of previous, related studies (e.g., ES = 0.36 [3]; ES = 0.35 [9]; ES = 0.53 [77]; ES = 0.21 [78]). In short, its findings demonstrate that flipped classrooms can improve college students' academic outcomes in various ways, could provide an effective way to inculcate essential 21st-century skills in students [79], and may assist students with special educational needs in performing better than they would in traditional, lecture-based classrooms. The second research question was regarding the outcomes influenced by the introduction of the flipped classroom method. The overall effect sizes of the affective outcomes (ES = 0.59, SE = 0.03, 95% CI = 0.53 to 0.65), interpersonal outcomes (ES = 0.53, SE = 0.31, CI = 0.47 to 0.59), and cognitive outcomes (ES = 0.24, SE = 0.24, 95% CI = 0.19 to 0.36) were in descending order. This study's results suggest that flipped classrooms improve college students' cognitive, affective, and interpersonal outcomes and that flipped classrooms have more significant effects on affective and interpersonal outcomes than on cognitive outcomes. This result can be explained by the features of the flipped classroom that encourage active engagement and learner-centered interactions. Furthermore, this study's findings indicate that flipped classrooms indirectly affect cognitive outcomes because affective outcomes have a strong influence on cognitive outcomes [23], in part by improving students' motivation and willingness to learn [80]. However, affective outcomes (e.g., attitudes and satisfaction) in the flipped classroom are not necessarily positive in higher education. This study's results regarding the high effect sizes of interpersonal outcomes in flipped classrooms are consistent with the results of Shi, Ma, Macleod, et al. [77]. Further, the results can be explained by the instructors' tendency to design active in-class activities in flipped classrooms to increase student participation and interaction [61] through discussion, small group activities, feedback, group discussion, collaborative group work, and group projects [81]. These active, in-class activities enhance students' interpersonal skills and encourage them to become active and self-directed learners who are deeply involved in the learning process [82,83]. This study's third research question addressed the effects of study characteristics on how the effect sizes of flipped classrooms were measured. To answer this question, the study performed subgroup analyses using subject area, department, publication year, and study design as moderators.
This study's third research question addressed the effects of study characteristics on the measured effect sizes of flipped classrooms. To answer this question, the study performed subgroup analyses using subject area, department, publication year, and study design as moderators. These moderators accounted for only a small amount of the relatively large heterogeneity between studies. The results indicated that flipped classrooms can be applied in a variety of subject areas and still effectively improve educational outcomes, as discussed in Rahman, Mohamed, Aris, and Zaid [2]. Although instructors' individual approaches can influence the success of flipped classrooms, this study found that English, Engineering, Math, Physics, and Chemistry classrooms showed small effect sizes. These results are in line with other meta-analyses of flipped classrooms (e.g., [3,78]). Regarding publication bias and publication type, this study found that the primary literature on flipped classrooms did not indicate publication bias, even though dissertations (ES = 0.61) had a greater effect size than journal articles (ES = 0.29); the funnel plot examined for this purpose likewise provided no evidence of bias. Thus, publication type can be treated as a moderator in future flipped classroom research.

Limitations and Future Research Directions

This meta-analysis has several limitations. First, it gains ecological validity by including only quantitative field studies (experimental or quasi-experimental research), which examine whether study results generalize to real-life settings; however, some internal validity is sacrificed relative to more controlled designs such as randomized controlled trials [84]. Second, this meta-analysis includes only quantitative findings, despite the fact that many flipped classroom studies employ qualitative research methods [31,85]. Because this study excluded qualitative studies from its analysis, its results should be interpreted with caution; qualitative findings help researchers arrive at deeper understandings [86] and generate new knowledge [87]. Some studies show that flipped classrooms have been particularly effective among low-achieving learners [28], because low achievers require more interaction and motivation to attain good learning outcomes. We recommend and encourage researchers to implement flipped classrooms with various student bodies in a variety of academic settings to better define the degree to which these results are transferrable [16]. The flipped classroom is not a panacea, and its effectiveness depends in large part on whether students actually use the available pre-class time effectively [30]. We therefore propose repeated use of flipped classrooms and related, modified strategies on a trial-and-error basis. Ratta [6] insisted that flipped classroom instruction is congruent with today's digitally savvy college students; moreover, it is also important to understand the various influences of today's student culture, study styles, study habits, and use of devices. Further study may be warranted to allow more detailed conclusions about student performance to be drawn [88].

Conclusions

This study synthesized the results of 43 studies regarding the effects of flipped classrooms on students' cognitive, affective, and interpersonal outcomes in higher education. It examined the overall effect sizes of flipped classrooms compared to traditional, lecture-based classrooms and found that flipped classrooms had a medium effect on various student learning outcomes. In particular, the study identified that the flipped classroom shows a more significant effect on affective and interpersonal outcomes than on cognitive outcomes.
This result can be explained by the features of the flipped classroom that encourage active engagement and learner-centered interactions. Instructors and other educational leaders in higher education institutions can pursue instructional redesigns and educational supports to implement flipped classrooms as an effective pedagogical practice. Additionally, the mixed results of adopting flipped classroom instruction across departments and subjects show that the various instructional forms and strategies employed are factors that determine its effectiveness for educational outcomes. Thus, future research should explore the relationship between the various forms of flipped classrooms and educational outcomes in order to inform pedagogical decisions for instructional development.
Seismic and Energy Integrated Retrofitting of Existing Buildings with an Innovative ICF-Based System: Design Principles and Case Studies

This work proposes an innovative integrated retrofitting system aiming to improve both the seismic and energy performance of existing reinforced concrete and masonry buildings. The system is based on engineered insulating concrete form panels, installed on the outside of existing buildings as a shell exoskeleton. A key advantage of the proposed system is that it addresses the contemporary improvement of the seismic and energy performance of existing buildings in a single installation stage, operating exclusively from outside of the building. The insulating formworks are prefabricated ad hoc in a factory on the basis of the specific geometry of the existing building, so as to maximize the ratio between the overall retrofitting benefits and costs and, at the same time, to simplify the installation procedures. The objectives of the presented research are, on the one hand, to highlight the major structural issues that the system aims to address and, on the other hand, to illustrate the main characteristics and combined benefits of the proposed retrofitting system. From a structural point of view, the proposed system is conceived to behave as a non-dissipative structure with regard to seismic actions, and the lateral strength and stiffness of the structural elements are designed accordingly. An analytical design approach is proposed and validated using the available data from an experimental test performed on a full-scale simple building. Moreover, numerical modeling strategies for the proposed system are illustrated for two complex case study buildings. The results of the analyses show a considerable increase in the lateral stiffness of the retrofitted buildings that, considering the non-dissipative behavior of the elements, leads to a relevant reduction of the seismic deformation demand on the existing structural elements.

Motivation and Aims

Many regions in the world with a high level of anthropization are also characterized by a high seismic hazard. Focusing on the European region, seismicity is not evenly distributed among nations, and the seismic design codes of each country evolved differently [1]. Moreover, most of the building stock in Europe was built before 1980, when many of the adopted design codes still contained inadequate or limited provisions for the seismic design of structures. As a consequence, there is today a considerable number of existing buildings characterized by a high seismic vulnerability. In many cases, these buildings also have a low energy efficiency, which produces high management costs and high greenhouse gas emissions into the atmosphere. In order to address such a combination of issues, several possible solutions have been proposed, which include, for example, the demolition-and-reconstruction or the renovation of buildings. The choice between these different solutions is influenced by many factors, involving technical, economic, and social aspects [2,3]. In this context, life cycle assessment (LCA) can support the comparison between such alternatives. The present work contributes by providing and discussing strategies for the structural design and numerical modeling of the retrofit system, with reference to case study examples.

Energy Consumption of New and Existing Buildings

In the last decades, attention has been focused on the problem of global warming and the adverse effects on the planet induced by the exploitation of natural resources.
Buildings have a paramount role in global energy consumption. According to Berardi et al. [14], in 2010, buildings accounted for 32% of total global energy use, divided into 24% for residential buildings and 8% for commercial ones. In both typologies, space heating is the primary source of energy demand (over 30% of the global consumption). It is worth noting that about 66% of the existing building stock in Europe was built before the 1970s, when the first energy codes for buildings were introduced [15]. In particular, the buildings built after the Second World War are in general characterized by extremely poor energy performance. Today, the EU, US, and Russia show nearly constant rates of urbanization and new construction due to a stabilized socio-economic situation. The estimated ratio of building energy consumption for these countries in 2040 stands between 0.7 and 1.5 with respect to the 1970 level [14]. Zhang et al. [16] reported that China experienced a very rapid expansion of urbanization and of the construction of residential buildings in recent years. From 2000 to 2016, the completed floor area of urban residential buildings increased five times, and the corresponding energy consumption grew four times. The energy consumption in the considered time range experienced a quick rise at first and then a stabilization, due to economic adjustment and the introduction of modern, engineered structural types. According to Berardi et al. [14], developing countries are still subject to a continuous increase in population and urbanization. As an example, India is expected to increase the population living in cities by 20% in the next 25 years. As a consequence, there will be a significant increase in energy consumption related to the building sector. The employment of high-performance envelopes and heating, ventilation, and air-conditioning (HVAC) systems can greatly improve the energy performance of buildings [17]. However, the construction process itself is now recognized as a highly energy-consuming process, especially with regard to the manufacturing of construction materials. Therefore, a two-fold path needs to be pursued to obtain significant results in diminishing the global energy demand of buildings: (i) introducing prescriptions for high-performance envelopes in the national building codes of developing countries, accepting the increase in the number of buildings while limiting their energy demand; and (ii) promoting and encouraging practices of retrofitting and of improving the envelope insulation capacity of existing buildings in developed countries, in order to diminish the energy consumption due to construction activities and heating/cooling processes.

Seismic Vulnerability of Existing Masonry Buildings

Masonry is one of the most common structural typologies for buildings and has been adopted since ancient times. Most of the residential buildings in the Mediterranean area have one or two stories and are built with brick masonry. The structural behavior and seismic deformation capacity of masonry constructions are primarily related to the structural geometry and the state of preservation. Masonry buildings should assume a box-type structural system, which, in order to be effective, needs strong connections between intersecting walls and rigid horizontal diaphragms. In fact, as fully recognized in the literature, masonry walls are effective in resisting in-plane actions while being more vulnerable to perpendicular loads.
The box behavior compensates for the walls' lack of resistance in the out-of-plane direction by taking advantage of the in-plane strength of the perpendicular walls. During the last decades, surveys of earthquake effects on masonry buildings have allowed the identification of their typical failure mechanisms when subjected to seismic actions. Local failure modes, such as the simple/complex overturning of external walls and vertical/horizontal out-of-plane bending, are one of the primary sources of vulnerability [18][19][20]. The main issues that cause the activation of the cited kinematic mechanisms are the lack of connection of the wall panels with the floors and orthogonal walls, a relevant out-of-plumb of the walls, and internal masonry discontinuities [21,22]. Regarding the in-plane behavior, the most commonly observed damages are due to shear failure, which is reflected in evident diagonal cracks in masonry piers and spandrels [23][24][25]. Brick masonry can also experience sliding failure along the horizontal mortar beds due to shear action [23]. The in-plane failure under seismic actions generally does not lead to the collapse of the structure, but it can trigger the out-of-plane kinematic mechanisms mentioned above [20]. Many repair techniques have been proposed and employed to reduce the seismic vulnerability of masonry buildings [26]. The most common techniques comprise steel tie rods and masonry buttresses [27] (to improve the connections between the walls and the out-of-plane equilibrium, respectively), local dismantling and rebuilding [27] (to restore the wall continuity along crack lines), the substitution of wooden floors with RC ones (to increase the story stiffness), and thin RC layer jacketing [28] (to increase the in-plane and out-of-plane strength). Surveys taken after recent earthquakes reported that, in some cases, interventions believed to be retrofits actually increased the vulnerability [29], even though they had been realized according to the design codes in force at the time. As an example, the introduction of tie-beams at intermediate stories within the thickness of the masonry often induces an uneven load redistribution on the masonry piers and produces damaging effects on the perimeter walls. Another common retrofit mistake is the replacement of the existing timber floors with RC beams supporting hollow clay tile floors without contextually increasing the strength of the masonry walls [30]. The recent earthquakes in Italy showed that the extensive replacement of roofs with heavier and stiffer structures caused the cracking of the supporting walls and often also the complete collapse of the structure [29]. This behavior was due to the increased seismic force induced by the increased mass on top of the building, which acted on unreinforced walls, together with a contemporary reduction of the energy dissipation capacity [30,31]. Additionally, strengthening techniques such as jacketing showed their ineffectiveness due to faulty connections and incompatibility of materials. More recent composite-material strengthening techniques, such as fiber-reinforced polymers (FRP) or glass fiber-reinforced polymers (GRP), provide high strength to the masonry panels while avoiding the negative effect of increasing the mass, thanks to their small thickness and low weight.

Seismic Vulnerability of Existing Reinforced Concrete Buildings

Reinforced concrete structures have spread widely since the 1950s.
Most of the existing RC structures have been designed only for vertical loads or according to now-outdated seismic regulations, since they were built before the adoption of modern seismic codes. Therefore, in most cases, there is a lack of the construction details that are needed to guarantee an adequate seismic capacity. Post-earthquake surveys and analyses of damaged or failed RC structures have allowed the identification of the major aspects that affect seismic vulnerability. The most common ones are the quality of workmanship, a low story stiffness relative to the other stories (soft story), the location of the stairs and their connection to the structure, the structural typologies of floors and roofs, and the steel reinforcement detailing [15]. In the following, the aforementioned aspects are briefly discussed. The International Building Code [32] defines a soft story as a story whose lateral stiffness is reduced by 30% or more relative to the story immediately above. In residential buildings, the typical soft story is the ground floor hosting garages or store windows. The lack of infills causes increased flexibility and reduced strength, which result in extreme horizontal deflections of the story. The presence of a soft story may induce second-order effects in the columns and localizations of plastic deformations that often lead to the complete collapse of RC frame buildings [33]. Irregularities in the plan geometry (e.g., due to C- or U-shaped plans), possibly augmented by an eccentric position of the staircases and an irregular mass distribution, can lead to unwanted torsional effects [23]. Torsional vibration modes cause higher stresses in the perimeter structural elements, which could collapse if not properly designed. Such irregular plan shapes or modifications of the mass distribution can also result from interventions of architectural renovation or changes in the intended use, which hence need to be correctly designed in order to avoid and/or limit the torsional effects described above. As is widely known, the detailing of the transverse reinforcement in RC elements is fundamental to prevent collapse under seismic loads. Large stirrup spacing and poor-quality concrete lead to column failures with buckling of the longitudinal reinforcement bars and crushing of the core concrete. It is common to find open stirrups with inadequate anchorage or geometry in the older RC structures. In order to be effective in the critical zones, stirrups must have hooks with a proper length that guarantees the closure of the stirrup itself and, as a consequence, the confinement of the concrete. Properly designed stirrups act as a constraint preventing the buckling of the longitudinal rebars [34]. The inadequate design of beam-column joints, the absence of confining hoop reinforcement, and the wrong position of bar splices in columns are common causes of beam-column joint failure. An inadequate configuration of the steel reinforcement in the stiffer elements, e.g., stair walls or lift shafts, can also cause the failure of these structural elements, where the seismic stresses concentrate. Other common failures observed after earthquakes are the damage to and collapse of the exterior infill walls [35]. These non-structural elements are usually not adequately connected to the structure and are subjected to out-of-plane excitation. After the out-of-plane bending strength has been reached, the infill collapses, falling out of the RC frame [36].
The in-plane loaded infill panels can also experience failure due to high inter-story drifts, which cause the concentration of high compressive stresses at the infill corners. Another common failure mechanism of the infills is caused by the shear stresses induced by the horizontal seismic loads, similar to what happens to masonry piers and spandrels, as discussed in the previous section. Regarding techniques for the structural retrofit of RC buildings, a comprehensive review of traditional and state-of-the-art methods can be found in Tsionis et al. [37] and Bournas [15]. Among the most employed conventional techniques, steel or RC jacketing allows the improvement of the strength and ductility of the members. However, in cases where an increased stiffness is needed, the adoption of additional shear walls may be necessary. In the last decades, enormous efforts have been put into the study of strengthening techniques based on fiber-reinforced polymers (FRP), which are currently widespread. Some of the main drawbacks of FRP strengthening techniques include poor performance at high temperature, adhesion problems at low temperature or on wet surfaces, health issues for manual workers, and high costs, as reported by Bournas [15]. The same author pointed out that textile-reinforced mortar (TRM) strengthening techniques are now becoming increasingly important in the practice of RC member retrofitting, since they overcome many of the issues encountered with FRP or steel/RC jacketing techniques.

Integrated Retrofitting Solutions for Existing Buildings

Integrated retrofitting technologies aim to simultaneously solve several of the critical issues related to energy consumption and seismic vulnerability highlighted in the previous sections. Although the adoption of integrated renovation strategies has only recently become widespread, several solutions have already been proposed. One of the first applications of integrated retrofit was reported by Takeuchi et al. [38], who presented a case study in which energy dissipation façades were applied to a school building, aiming to improve both energy efficiency and seismic performance. The structural strengthening is obtained by seismic dissipation braces installed on the outside of the building. The outer aluminum louvers forming sunshades are fixed to the dissipation braces, forming an integrated façade with both structural and energy retrofit functions. An extensive analysis considering winter and summer scenarios proved the effectiveness of the new façade, coupled with the existing glass closure, in improving the energy demand for cooling and heating. Cyclic loading tests on reduced-scale specimens and time-history numerical analyses showed improvements in the seismic behavior of the structure in terms of increased strength and reduced story drift. Another solution was presented by Feroldi et al. [39], who proposed an engineered double-skin façade for an integrated renovation of buildings from the energy, architectural, and structural points of view. Particular attention was paid to the environmental impact and cost requirements. Moreover, the concepts of "exoskeleton" and of a holistic renovation approach were introduced. Labò et al. [40,41] analyzed various external retrofitting solutions and proposed a new holistic approach to the structural design procedure. The proposed approach aimed to solve the architectural, energy, and structural deficiencies of buildings whilst targeting resilience, safety, and sustainability.
Different external structural strengthening configurations were analyzed, from the "wall system", with the strength lumped in a few elements, to the optimized "grid-shell system", such as diagrids consisting of truss elements that can be adapted to any 3D shape. In the wall-type exoskeleton, the structural function is fulfilled by the walls, while the energy and architectural improvements pertain to the envelope. The diagrid-type exoskeleton, instead, condenses both the structural and the energy retrofitting in the same structure. Two exoskeleton solutions applied to a reference RC building were analyzed. The steel diagrids are conceived as totally demountable, allowing the easy disassembly and possible reuse or recycling of the structural components. This aspect increases the sustainability of the intervention in terms of life cycle assessment, providing easier management of the construction at its end of life. Seismic performance and design procedures for exoskeleton structures have been studied by Reggio et al. [42], Labò et al. [43], and Passoni et al. [44]. The work by Manfredi and Masi [45] explored two integrated retrofitting solutions for an RC building designed only for vertical loads: the replacement of the infills and the so-called "double skin" intervention technique. The former consists of replacing the hollow bricks with new elements that have better thermal and mechanical properties. The latter consists of adding, on the outside of the building, new infilled RC frames structurally connected to the existing ones. The infill replacement was sufficient for structural rehabilitation in mid-to-low seismic hazard areas, while the double-skin intervention was necessary in high seismic hazard areas. Bournas [46] illustrated a new retrofitting method that employs TRM jacketing integrated with insulating panels to improve both the energy and seismic performance of buildings while keeping labor costs low. The application to a case study building was analyzed, and the evaluation of the expected annual loss related to both seismic and energy costs showed that the payback time of the retrofitting intervention can be significantly reduced by adopting the proposed combined retrofitting approach. The reported examples showed how the combined need for seismic and energy retrofitting can be effectively addressed. However, to determine the most suitable retrofitting solution, a careful evaluation should be performed for each specific case, considering the building location, its existing structural type, and its preservation conditions.

Description of the Proposed Innovative Retrofitting System

The retrofitting technology presented in this work follows the aforementioned principle of the double skin [45] and involves the installation of an additional structural layer on the outer surface of the building, i.e., an engineered exoskeleton [47]. The structural layer is composed of a thin reinforced concrete membrane cast on-site within a permanent insulating concrete formwork (ICF) made of two layers of insulating material. In this ICF-integrated retrofitting technology, the capacity to resist seismic loads is provided by the thin RC layer, while the improvement in energy performance is provided by the contribution of the insulating material layers to the building insulation. The system constitutes an external envelope for the existing building, and its structure can be idealized as composed of vertical walls connected by horizontal spandrels.
The seismic actions are transferred from the floors to the external ICF structural layers and then to the foundations. The proposed system has several advantages. Regarding the improvement of the structural behavior, it is worth mentioning that the external position of the thin RC walls and their application to the entire perimeter of the building allow obtaining a structural system with high translational and torsional stiffness. Due to the high in-plane stiffness of the RC layers, the structural displacement demands induced by seismic actions are reduced, and damages to drift-sensitive non-structural elements and vulnerable systems are limited for seismic events of moderate intensity. The proposed ICF technology is also efficient in retaining infill walls subjected to out-of-plane seismic actions. The exoskeleton must be connected to the diaphragms of the existing building. In order for the system to be effective in absorbing the horizontal seismic loads, the floors and roof should be provided with sufficient in-plane stiffness. The application of the exoskeleton to the complete perimeter of the building allows avoiding stress localizations on the diaphragms and reducing their stiffness demand. Concerning the installation aspects, it is to be underlined that the system is conceived to be applied to both RC frame and masonry buildings, operating exclusively from the outside. Moreover, compared to other retrofit solutions that require several different interventions to solve the energy and seismic deficiencies of buildings, the proposed system is designed to improve both aspects in a single intervention and using a single technology, namely ICF panels. In the following sub-sections, a detailed description of the elements that compose the system is provided, together with some insights into the installation procedure.

Insulated Concrete Formwork

The insulating material of the formworks has a low transmittance value, contributing to improving the building energy performance through a better thermal insulation of the envelope. The energy consumption linked to the heating and cooling systems is thereby reduced. Insulated formworks can be made of different materials to obtain the desired characteristics of thermal-acoustic insulation and reaction to fire. The thicknesses of the inner and outer insulating layers can vary with respect to the climate zone and the thermal properties of the existing structures. The specific insulated formwork considered in this work is composed of a three-dimensional wire mesh that defines the thicknesses of the insulating and structural layers, as shown in Figure 1. The 3D wire mesh can be produced to allow various thicknesses of the insulating and structural layers. The insulating materials can be chosen depending on the thermal conductivity, thermal shift, acoustic properties, and reaction to fire to be guaranteed. The formwork is produced off-site, and its structure guarantees a uniform thickness of the reinforced concrete layer and the correct arrangement of the rebars. The ICF is assembled by inserting the insulating material slices into the 3D wire mesh, as illustrated in Figure 1, and the final product is a formwork with a void between the two insulating layers that allows the pouring of the concrete. At an initial development stage, the ICF structure consisted of the outer insulating layer and the concrete layer in direct contact with the existing structure, without the inner insulating layer.
The 3D wire mesh still guaranteed the positioning of the steel rebars and the thickness of the structural layer, but the hydrostatic pressure of the fresh concrete against the existing wall (Figure 2a) made the construction operations challenging. The introduction of the inner insulating layer, bonded to the outer layer by the wire mesh, allowed the hydrostatic pressure of the fresh concrete to be balanced on the two inner surfaces of the formwork, as shown in Figure 2b. Moreover, the presence of a modular inner layer allows the creation of horizontal and vertical ribs with increased thickness.

Structural Layer

The ICF structural layer is a thin RC membrane, which is subjected almost exclusively to in-plane loads during seismic actions, considering its negligible out-of-plane stiffness. The static compression loads acting on the piers of the ICF structural layer are due to its self-weight only. The retrofit system resists only the horizontal seismic actions, being mainly subjected to in-plane bending and shear actions and possibly to axial forces when working as a coupled-wall system. The reinforcement of the concrete layers consists of steel bars arranged in the longitudinal and transverse directions with a defined spacing. The reinforcement can be arranged in a single layer or in two layers, depending on the concrete layer thickness. In the case of a small thickness, a single layer of rebars can be placed in the middle plane of the concrete layer along both the vertical and horizontal directions. In this case, it is evident that the transverse reinforcement cannot ensure concrete confinement and that the code design details for dissipative zones in seismic-resisting structures are not feasible. At each floor level, the thickness of the concrete layer is increased to create a perimetral horizontal rib, in direct contact with the masonry or the existing curb, as shown in Figure 2b, in which the connectors can be effectively anchored. The ribs also allow realizing a new reinforced concrete curb in buildings that do not have one. The localized increase in the thickness of the ICF structural layer can be obtained easily by removing a horizontal band of the inner insulating layer. In the same way, vertical ribs can be created, improving the out-of-plane stiffness and guaranteeing an effective anchoring to the existing walls or columns. Figure 3 shows the ICF technology with horizontal ribs applied to different structural types.

Connection System

The retrofitting system is connected to the existing structure at each floor level, using steel connectors embedded in the horizontal ribs. The number and diameter of the connectors are designed for the expected horizontal force transferred from the floors to the exoskeleton. For buildings with an RC frame structure, the loads are transferred through shear fasteners installed on the edge beams of each floor. The fasteners can be concrete self-tapping screws (Figure 4a) or bent steel rebars fixed to the existing edge beams with injections of chemical mortar (Figure 4b). The same approach is adopted in masonry buildings in which concrete curbs are present at the floor levels. In the case, typical of old masonry buildings with timber floors, in which concrete beams or curbs to connect the ICF structural layer are missing, the connection can be made with steel bars passing through the masonry and connected to perimetral steel profiles anchored to the existing floors. At the ground level, the ICF concrete layer is to be fixed to a foundation to transfer the seismic load to the ground.
If the existing foundation system can resist the post-retrofit design seismic load, the concrete layer can be directly connected using dowels with an adequate lap length. In cases where the existing foundation structure is not sufficient, it may be necessary to build a new foundation curb adjacent and anchored to the existing foundation, possibly together with ground anchors or micro-piles to prevent uplift phenomena.

Installation Phases

The ICF retrofitting technology is conceived to be applied to a wide range of building typologies, thanks to the possibility of producing and assembling formwork panels of any geometry directly in the factory. The installation can be performed quickly and easily, given the light weight of the ICF panels, which facilitates their handling and installation. The main phases of the integrated retrofitting intervention with the proposed ICF technology are illustrated in Figure 5. After the preliminary work, such as the enlargement of the foundation, if necessary, and the installation of the connection system to the foundation, the first phase consists of placing the connectors (mechanical or chemical anchors) on the perimetral edge beams, with the diameter and spacing imposed by the structural design. In this phase, it is not mandatory to remove the existing plaster, which speeds up the construction times, but the connectors must be fixed to a strong structural element. Then, the ICF panels are placed against the existing walls along the perimeter of the building, starting from the ground floor, as shown in Figure 6a. Once positioned, the panels can be fixed with special anchors to the existing building or propped up to prevent the formwork from moving during the concrete casting phase. After positioning the formwork, the designed steel reinforcement is placed inside the insulated formwork. The concrete is then cast within the ICF panel, forming the structural layer and the ribs.

Analytical Structural Design for a Simple Case Study Building

The seismic response of the thin RC layers composing the structural elements of the ICF retrofit system can be analyzed by using membrane or shell models, or by means of equivalent frame models, representing the vertical piers and horizontal spandrels as beam elements connected at nodes. This second option has the main benefit of directly providing the internal forces on the members, which can easily be compared with the design strengths obtained from sectional analysis. The design of the system requires checking the resistance of both the RC layer and the connections between the existing structure and the exoskeleton. Since, as mentioned before, the ICF system behaves as a non-dissipative structure, the elastic response spectra should theoretically be assumed. However, Eurocode 8 [48] allows a maximum seismic force reduction factor (i.e., behavior factor) of 1.5 for non-dissipative structures. A quasi-elastic behavior of the sections must be fulfilled, i.e., the limit strains of the materials are up to yielding (for the reinforcing steel) or up to the achievement of the peak strength (for the concrete). The prescriptions and detailing rules for dissipative structures are not mandatory. In order to illustrate a procedure for the design of the proposed system, an example is provided in the following sections for a simple case study building. The described procedure, although illustrated for an elementary case study, can be generalized and applied to the design of the RC layers and connections of the proposed retrofitting system in general cases.
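To make the equivalent-frame idealization concrete, the sketch below estimates the elastic lateral stiffness of one retrofitted wall modeled as two piers fixed at both ends (coupled by a rigid spandrel, so each pier deforms in double curvature), including the shear deformability that matters for thin, squat RC layers. All dimensions and material values are illustrative assumptions, not the case study values.

```python
def pier_lateral_stiffness(E, G, t, l, h, chi=1.2):
    """Elastic lateral stiffness [N/m] of a rectangular RC pier of
    thickness t and in-plane length l, fixed at both ends (double
    curvature), including flexural and shear flexibility."""
    I = t * l**3 / 12.0           # in-plane second moment of area [m^4]
    A = t * l                     # cross-sectional area [m^2]
    flex = h**3 / (12.0 * E * I)  # flexural flexibility, fixed-fixed [m/N]
    shear = chi * h / (G * A)     # shear flexibility (chi = 1.2, rectangle)
    return 1.0 / (flex + shear)

# Illustrative values: concrete modulus ~31 GPa, 60 mm layer,
# 1.5 m long piers, 2.4 m clear pier height
E = 31_000e6       # Pa
G = E / 2.4        # Pa, assuming a Poisson ratio of about 0.2
k_pier = pier_lateral_stiffness(E, G, t=0.06, l=1.5, h=2.4)
k_wall = 2 * k_pier  # two piers coupled by the rigid spandrel
print(f"wall lateral stiffness ~ {k_wall / 1e6:.0f} kN/mm")
```

Even with these rough numbers, the sketch shows why the thin RC layers dominate the lateral stiffness of the retrofitted building: for squat piers, the shear term is of the same order as the flexural term and cannot be neglected.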
Description of the Case Study Building

A one-story full-scale masonry building with a timber floor is considered as a case study for describing the design procedure of the proposed ICF retrofit system. The building, illustrated in Figures 8 and 9, was part of an experimental campaign conducted to assess the feasibility and applicability of the proposed retrofit technology [49,50]. Despite its simplicity, the study of this building appears to be useful, since it allows the evaluation of the structural performance of the analyzed system with a fully analytical approach and then the comparison of the theoretical expectations with the experimental results. In the direction parallel to the longer plan side, the walls were strengthened with the ICF retrofit technology. The minimum possible thickness of the concrete layers was adopted, together with the minimum amount and optimum location of the steel reinforcement. The final wall section, starting from the inside toward the outside of the building, was composed of 250 mm brick masonry, a 40 mm EPS insulating layer, a 60 mm RC structural layer, and a 100 mm EPS insulating layer. The total thickness of the retrofitted walls was 450 mm. The inner insulating layer was present over the whole wall height, except for 300 mm before reaching the ground and floor levels, in order to create two horizontal ribs with a cross-section of 100 × 300 mm. Details of the reinforcement are illustrated in Figure 9b. The ribs were reinforced with four Ø6 mm longitudinal rebars and with Ø8 mm stirrups with a spacing of 300 mm. The concrete layer was reinforced with Ø6 mm vertical and horizontal rebars, with a spacing of 300 × 300 mm, while Ø10 mm rebars were placed around the door opening, at the top and bottom of the RC spandrel, and at the ends of the walls. The connection of the strengthening concrete layer to the foundation slab was made using Ø16 mm dowel rebars with a spacing of 150 mm and a height above ground of 900 mm. The concrete class used for the structural layer was C25/30, and the rebars were of class B450C, according to the Italian standard NTC18 [51]. The nearly rigid timber floor is connected to the ICF concrete layers through hook-shaped Ø14 mm B450C steel rebars, as shown in Figure 9c. The hooks are embedded in the horizontal top ribs of the ICF, while, on the other end, they are welded to steel plates parallel to the longer plan side, which are fixed to the timber floor; the steel plates were connected directly to the timber beams with 12 × 160 mm self-tapping, partially threaded screws through the two layers of timber boards. Eight Ø14 mm rebars were placed on each side of the building parallel to the longer plan side, with a 45° inclination with respect to the load direction, as shown in Figure 9d.
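On the energy side, the transmittance of the retrofitted wall section described above can be estimated by summing the thermal resistances of the layers. The conductivities and surface resistances in the sketch below are typical handbook values assumed for illustration; they are not measured properties of the test building.

```python
# Layer thicknesses [m] and assumed thermal conductivities [W/(m K)],
# from inside to outside: brick masonry, inner EPS, RC layer, outer EPS.
layers = [
    (0.250, 0.72),   # brick masonry (assumed conductivity)
    (0.040, 0.036),  # EPS inner layer (assumed)
    (0.060, 2.00),   # reinforced concrete layer (assumed)
    (0.100, 0.036),  # EPS outer layer (assumed)
]
R_si, R_se = 0.13, 0.04  # internal/external surface resistances (ISO 6946)

R_total = R_si + sum(d / lam for d, lam in layers) + R_se
U = 1.0 / R_total
print(f"U ~ {U:.2f} W/(m2 K)")
# ~0.22 for the retrofitted wall, against roughly 1.9 for the bare
# 250 mm masonry wall computed with the same assumptions.
```

This order-of-magnitude drop in transmittance is what motivates combining the structural layer with the two EPS layers in a single intervention.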
Summary of the Observed Experimental Behavior

The case study building was subjected to an experimental loading test, also described in the work by Pertile et al. [49]. A cyclic loading protocol was adopted, as shown in Figure 10a, with increasing values of the horizontal displacement δ imposed at the top of the RC walls. The load was applied along the longer side of the building, as shown in Figure 9a. The vertical RC walls underwent flexural and shear deformations, together with a rotation of the base section due to the partial slippage of the foundation dowels. Diagonal cracks were observed between the vertical walls and the horizontal spandrel, in both the masonry and the RC layers. No sliding was observed between the timber floor and the RC ribs during the experimental test, proving the connection system to be effective. The force vs. displacement history from the test is reported in Figure 10b, for the horizontal displacement δ measured at the top of the RC walls. The global horizontal strength of the specimen suddenly dropped after reaching the value F_u,exp = 485 kN at a displacement δ_u,exp equal to 26 mm. A wide horizontal crack opened in one of the RC piers, just above the end of the foundation dowel rebars, where the ultimate bending strength was first reached.

Evaluation of the Seismic Design Action

The design seismic force was determined according to the Italian standard NTC18 [51]. The design working life V_N was assumed equal to 50 years, as for ordinary building structures, while the importance factor γ_1 was taken equal to 1.0. Therefore, the reference return period V_R of the seismic action was equal to 50 years. The building is assumed to be built on flat ground composed of deposits of loose-to-medium cohesionless soil or predominantly soft-to-firm cohesive soil (i.e., soil type D according to NTC18 [51]). According to these characteristics, a 5% damped elastic response spectrum was generated, whose main parameters are reported in Table 1. The design inelastic response spectrum was obtained by dividing the elastic spectrum ordinates by a factor equal to 1.5, according to Eurocode 8 [48] for non-dissipative structures. Assuming the building fundamental period T_1 to belong to the spectrum plateau range (i.e., T_B < T_1 < T_C), the design seismic acceleration S_d(T_1) corresponds to the plateau ordinate of the design spectrum. Considering only the weight of the upper half of the vertical elements, the seismic weight of the retrofitted building is equal to W = 114.6 kN, of which only about 11% is imputable to the added ICF system. The corresponding design value of the seismic base shear is F_Ed ≈ 30 kN, which is about 1/16 of the experimentally measured strength. It is worth pointing out that, in this elementary case study building, even though a minimum amount of reinforcement and a minimum concrete thickness were adopted for the retrofitting system, the observed experimental strength is significantly higher than the design seismic base shear. This fact highlights how, even in real buildings, the presented retrofitting system can provide a significant contribution to the horizontal load-carrying capacity, with a very slight increment of the seismic mass.
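A compact numeric restatement of this evaluation is sketched below. Since Table 1 is not reproduced here, the plateau ordinate S_d(T_1) is back-calculated from the values reported in the text (W = 114.6 kN, F_Ed ≈ 30 kN) rather than taken from the spectrum parameters, so treat it as an assumed value.

```python
def base_shear(Sd_over_g: float, W_kN: float, lam: float = 1.0) -> float:
    """Equivalent static base shear F_Ed = Sd(T1) * W * lambda / g.
    Sd is passed as a fraction of g, so the division by g cancels;
    lambda = 1.0 is appropriate for a one-story building."""
    return Sd_over_g * W_kN * lam

# Design plateau ordinate consistent with the reported values
# (elastic spectrum divided by the behavior factor q = 1.5):
Sd_over_g = 0.26   # assumed, back-calculated from F_Ed / W
W = 114.6          # seismic weight [kN]

F_Ed = base_shear(Sd_over_g, W)
print(f"F_Ed ~ {F_Ed:.0f} kN")  # ~30 kN, about 1/16 of the 485 kN measured
```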
Design of the RC Layers

In this section, the approaches and formulations for the calculation of the bending and shear strengths of the RC members of the retrofitting system are presented, under the quasi-elastic behavior assumption. The bending resistance M_R can be calculated through a classical sectional analysis, with the assumption of the preservation of plane sections and limiting the ultimate material strains to the elastic range. The calculation needs to take into account the actual axial load N_E. The calculation of the shear resistance of RC members is based on the Eurocode 2 [52] truss model, which considers two resisting mechanisms, related to the failure of the concrete struts and of the steel reinforcement ties, respectively, as shown in Figure 11. The shear resistance V_R is the smallest value between the shear-tensile strength V_R,t, Equation (3), and the shear-compressive strength V_R,c, Equation (4):

V_R,c = α_cw · b_w · z · ν_1 · f_c · (cot θ + cot α) / (1 + cot² θ)    (4)

The sliding shear resistance V_R,s is calculated according to Eurocode 2 [52] using Equations (5)-(8), where V_d is the resistance of the vertical dowel rebars, V_i is the shear resistance of the inclined bars (null in the specific case), and V_f is the friction resistance (the meaning of the uncited symbols can be found in Eurocode 2 [52]). The potential sliding plane considered in the calculations is located at the critical section of the piers. In the analyzed case study, the RC walls can be schematized with an equivalent frame model, as represented in Figure 12b, where each wall consists of two vertical piers and a horizontal spandrel. The spandrel is considered to behave as a rigid connection between the piers. According to the experimental observations, the critical section of the piers (i.e., the section that first reaches the ultimate condition according to one of the possible above-described failure mechanisms) is set just above the end of the dowel reinforcement. The distribution of the internal forces induced by the horizontal load applied at the rigid floor level can then be derived in accordance with the scheme shown in Figure 12b. It allows deriving, in a first approximation, the global horizontal force associated with the failure of the RC piers, neglecting the contribution provided by the masonry elements. In particular, in the case of bending failure of the RC piers, the global horizontal strength may be expressed as a function of the resistant moment M_R of the critical section, Equation (9). The height h* = 1.52 m is the height of the RC piers measured from the lower end of the pier-spandrel node down to the actual critical section of each pier. In the case of shear failure or sliding shear failure of the piers, the associated global horizontal strength can be obtained according to Equations (10) and (11), respectively. Finally, the ultimate horizontal force F_R that can be resisted by the building is given by Equation (12). In the calculation of the ultimate strengths of the critical section of the RC piers, it is necessary to take into account the contribution of the axial force N_E, which is due in part to the gravitational loads (N_gravity) and in part to the coupling between the piers guaranteed by the rigid spandrel (N_coupling). According to the scheme described in Figure 12, the latter contribution can be determined as follows:

N_coupling = ± k · (F_E / 2) · h* / L_f    (13)

where L_f = 2.1 m is the distance between the axes of the piers and k = 0.5 under the assumption of a rigid spandrel. Equation (13) shows how the axial load due to the coupling effect actually depends on the global horizontal load F_E acting at the floor level. Therefore, the calculation of the strength for the resisting mechanisms that rely on axial load contributions would theoretically need to be performed iteratively.
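The snippet below sketches this iteration as a simple fixed-point loop for the flexure-governed mechanism. The global coefficient c linking M_R to the horizontal force and the linearized moment-axial interaction law are purely hypothetical placeholders (the actual Equation (9) is not reproduced in this text); a real design would use Equation (9) and a sectional analysis of the pier.

```python
def M_R(N_kN: float) -> float:
    """Illustrative linearized bending strength [kNm] of the pier
    critical section as a function of the axial load N (compression
    positive); hypothetical parameters, not the case study section."""
    M0, alpha = 120.0, 0.35
    return M0 + alpha * N_kN

h_star, L_f, k = 1.52, 2.10, 0.5  # geometry and coupling factor from the text
c = 4.0                           # hypothetical stand-in for Eq. (9)

F = 100.0  # initial guess for the global horizontal force [kN]
for _ in range(50):
    N_coupling = k * (F / 2.0) * h_star / L_f  # Eq. (13)
    # The pier in tension (-N_coupling) has the lower bending strength
    # and therefore governs the flexural mechanism.
    F_new = c * M_R(-N_coupling) / h_star
    if abs(F_new - F) < 1e-6:
        break
    F = F_new
print(f"flexure-governed strength ~ {F:.1f} kN")
```

Because the coupling force grows linearly with F while it reduces M_R, the iteration is a contraction and converges in a few steps; in practice, one or two cycles of hand calculation are usually sufficient.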
Design of the Connection System

The RC layers of the retrofit system are connected to the existing building at the floor level by means of steel rebars, as indicated in Figure 9. The load transfer capacity of the connection depends on the bond between the steel rebars and the surrounding concrete. For a single rebar, the ultimate bond strength F_b is calculated using Equations (14)-(17), where Φ is the bar diameter and l_b is the anchorage length. The ultimate bond stress f_b is calculated according to Eurocode 2 [52] using Equation (15), where η_1 is a coefficient related to the quality of the bond condition, η_2 is a coefficient related to the bar diameter, and f_ct is the value of the concrete tensile strength. The yielding strength of the steel bar, F_s, is calculated with Equation (16). The ultimate resistance of each connector, F_a, is then evaluated as the minimum value between F_b and F_s, Equation (17). For the present case study, the global resistance of the connection system along the loading direction, F_connection, can be computed considering all the resisting elements inclined at 45°, according to Equation (18), where n_a is the total number of connectors:

F_connection = n_a · F_a / √2    (18)
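Equations (14)-(18) lend themselves to a compact implementation. In the sketch below, the bond model follows the Eurocode 2 form f_b = 2.25·η_1·η_2·f_ct; the Ø14 mm bars, the eight connectors per side, and the design steel strength are taken from the case study, while the anchorage length and the concrete tensile strength are assumed values.

```python
import math

def connector_resistance(phi, l_b, f_ct, f_y, eta1=1.0, eta2=1.0):
    """Resistance [N] of one embedded connector rebar: minimum of the
    bond pull-out strength F_b (Eqs. (14)-(15)) and the bar yielding
    strength F_s (Eq. (16))."""
    f_b = 2.25 * eta1 * eta2 * f_ct        # ultimate bond stress, Eq. (15)
    F_b = math.pi * phi * l_b * f_b        # bond strength, Eq. (14)
    F_s = (math.pi * phi**2 / 4.0) * f_y   # bar yielding strength, Eq. (16)
    return min(F_b, F_s)                   # Eq. (17)

# Case study: Ø14 mm B450C hooks, f_yd = 391.3 MPa; l_b and f_ct assumed.
F_a = connector_resistance(phi=0.014, l_b=0.50, f_ct=1.2e6, f_y=391.3e6)
n_a = 8                                    # connectors per side, at 45 degrees
F_connection = n_a * F_a / math.sqrt(2.0)  # Eq. (18)
print(f"F_connection ~ {F_connection / 1e3:.0f} kN")
```

With these assumptions, the bond and yielding strengths of a single connector are nearly identical, which is the balanced condition a designer would typically aim for when choosing the anchorage length.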
Discussion

The presented design approaches can be employed to provide an estimation of the ultimate strength of the retrofitting system applied to the present case study building. The calculation has been performed adopting some simplifications; namely, the contributions of the masonry structure in terms of stiffness and ultimate strength have been neglected, allowing simple analytical formulations to be used for the evaluation of the global building strength. Moreover, by neglecting the masonry strength contribution, it was possible to highlight the resistance of the retrofitting system alone. The calculations have been performed adopting an elastic-perfectly plastic law for the steel and a parabola-rectangle law for the concrete. In the first instance, the design values of the material strengths are employed in the calculations; according to the Italian standard NTC18 [51], the steel and concrete strengths are equal to f_yd = 391.3 MPa and f_cd = 14.17 MPa, respectively. Values of the steel elastic modulus E_s = 206,000 MPa and of the concrete strain at maximum strength ε_c2 = 2‰ have been adopted. Considering a non-dissipative material behavior, the steel ultimate tensile strain ε_su has been taken equal to the steel yielding strain, while the ultimate concrete compressive strain ε_cu has been taken equal to ε_c2. The strength calculations for the RC pier failure mechanisms have been performed taking into account the axial load contributions due to the coupling effect provided by the spandrel, considering as vertical dead loads only those due to the self-weight of the ICF panels. The negligible strength contribution of the steel wires of the formwork was disregarded. In Table 2, a brief summary of the obtained minimum strengths is given. The minimum design global strength is associated with the bending failure of the RC piers with a tensile axial load due to the coupling. The design ultimate load of the system corresponds to 27% of the actual ultimate load experimentally obtained (485 kN). It is also worth noting that the obtained design strengths are all largely greater than the design seismic action (30 kN) evaluated in Section 5.3, even though a minimum reinforcement and a minimum concrete layer thickness were adopted in the presented retrofitting application.

In order to provide a comparison with the experimentally measured global horizontal strength of the building, its analytical counterpart was calculated using the scheme described in Section 5.5, considering the mean experimental values of the material strengths and material constitutive laws that include plastic deformations (i.e., considering a dissipative material behavior). The experimental mean values of the steel yielding strength, f_ym = 555 MPa, and of the concrete compressive strength, f_cm = 35.2 MPa, were retrieved during the experimental campaign on the case study building. The ultimate strains of steel and concrete are assumed equal to ε_su = 75‰ and ε_cu = 3.5‰, respectively. The obtained values are reported in Table 3, together with the consequent analytical global mean strengths of the entire building. The results based on the mean properties suggest that the failure of the system was actually governed by a shear-flexural mechanism in the RC piers. The theoretical mean flexural and shear capacities are quite well aligned with the experimental result, the ratios with respect to the experimental peak force being 90% and 96%, respectively. The calculated sliding strength appears slightly underestimated, since it corresponds to 84% of the recorded load. The installed connections were overabundant with respect to the demand, their strength capacities being nearly double the flexural and shear strengths. It can be concluded that the proposed procedure provides safe-side results and can be successfully employed for the design of retrofitting interventions with the ICF system. In the presented analytical calculation, the contribution of the masonry was disregarded because it is negligible with respect to that provided by the RC walls. However, more sophisticated numerical modeling taking into account the existing structure could be used, which can provide a better estimation of the ultimate strength of the retrofitted system and also evaluate the residual strength and deformation demands on the existing structure. Examples of this detailed modeling approach are provided in the next section.

Numerical Modeling Strategy and Application to Complex Buildings

In this section, insights on strategies for the modeling and seismic analysis of buildings retrofitted with the proposed ICF-based system are presented. In particular, two case study applications are described, considering a city hall and an educational facility, which can both be considered strategic structures. In Mediterranean countries, several of these buildings have been built as masonry or concrete frame structures, and many of them are characterized by a poor seismic capacity in their actual state, so that retrofit interventions are strongly needed. Due to the specific use of strategic structures, it is almost unfeasible to temporarily vacate the buildings to implement the structural retrofitting. For these constructions, suitable retrofit strategies are needed that involve minimal interference with the activities inside. The ICF retrofit system presented in this work appears to be an optimal solution for this type of intervention, also because it allows the improvement of the passive energy performance of the opaque envelope at the same time.

Case Study: City Hall Building

This case study building, located in Seravezza (Italy), is a city hall composed of several portions, which were built in different periods. In Figure 13, the different parts that compose the building are identified. The structure was first built before the 1930s with a masonry structure. The portion highlighted in red in Figure 13, Part A, is the only part of the original building that remains today. In fact, in the 1960s, the building was partially demolished and then enlarged. In particular, Part B consists of a three-story RC frame building with a flat roof.
Part C also has an RC frame structure and three stories, but with a timber hip roof. Part D rises where the original building was partially demolished and has an RC frame structure that is connected to the masonry walls. Different floor types are present, namely precast reinforced concrete and hollow tile mixed floors and reinforced concrete slab floors. Due to their typologies, all the floors can be assumed to be rigid diaphragms in the seismic analysis. Figure 14 shows a three-dimensional model of the building, developed using a commercial building information modeling (BIM) software. The model not only allows an understanding of the actual elevation of the different portions but also allows the integration of the structural/architectural design with the mechanical system renovation design. The elevation structures appear to be in a poor state of conservation, mainly due to the lack of maintenance and the aggressive environment. Specifically, Part D is oriented to the north and presents exposed steel rebars in the RC structures, with evidence of corrosion phenomena. From the vulnerability assessment analysis, it emerged that the masonry walls of the oldest part did not have sufficient in-plane strength to resist the horizontal seismic load. Moreover, the RC frame structures of the newest parts were designed before the introduction of modern seismic design codes; thus, the proper details to guarantee an adequate seismic performance are lacking both in the members and in the beam-column joints. Another critical aspect is the inadequate connection details between the different parts of the building, which do not guarantee a global box-like behavior and can induce damage due to pounding. The building conditions require a strengthening intervention on a large number of structural elements. The complex plan distribution and the different construction types connected together make the seismic retrofitting design challenging. The reinforcement of single columns and beams with traditional techniques, such as steel or FRP jacketing, was impossible to apply due to the interference with the internal wall distribution and to the high costs related to the demolition and reconstruction of non-structural elements. In this context, the adopted retrofit concept was to integrate the existing structural elements (which support the vertical loads) with a new system (which resists the horizontal loads). The new seismic-resistant structure was conceived to coat the elevations and connect all the parts together. By using the ICF retrofit technology, the thermal insulation of the external walls was also accomplished. The designed energy retrofitting also involves the installation of photovoltaic panels and the replacement of the boilers with heat pumps. Finite element models of the building in both the present and the retrofitted conditions were developed using the commercial software Midas Gen [53]. First, the existing RC frame was modeled with beam elements, while the existing masonry walls were modeled with plate elements, as shown in Figure 15a. Then, the ICF concrete layer was modeled with plate elements having the thickness of the concrete layer, Figure 15b,c. To consider the effective contribution of the existing structure, a halved elastic modulus was used for the existing concrete and masonry elements, to account for cracking effects. The ICF concrete layer was modeled using an elastic constitutive law for concrete, with the full elastic modulus.
Rigid diaphragm restraints were imposed at each floor by connecting the floor center of mass to the perimeter nodes of the same floor using rigid links, as shown in Figure 15d. Due to the irregular plan and the different ages of the building parts, the floors of Part B were considered independent from the others but connected to them with truss elements. The separated floors allowed capturing the independent modes of vibration of Part B and the forces on the truss elements. The masses, calculated for each story, were modeled as concentrated in the center of mass at the corresponding level. A Winkler foundation was considered at the base of the structure. The connections of the rigid stories to the new ICF structural layer were modeled with a truss element for each side, connecting the corner node of the existing diaphragm with the opposite corner node of the RC membrane, as shown in Figure 16a. Analogous elements were created to simulate the connections between the ICF membrane and the existing columns, see Figure 16b. The trusses were provided with an axial stiffness equivalent to the shear stiffness of the steel connectors placed along each alignment. Modal analyses were performed before and after the retrofitting (i.e., both without and with the added exoskeleton). As expected, the retrofitted structure was largely more rigid than the original one, as demonstrated by the vibration periods reported in Table 4. In the retrofitted configuration, the existing and added structures showed common modal shapes and periods, demonstrating that the elongation of the truss elements (i.e., the shear deformation of the steel connectors) was negligible with respect to the lateral drift at each story and that the steel connectors can be assumed to be infinitely rigid (with respect to the lateral stiffness of the walls).
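The axial stiffness to assign to these fictitious trusses can be obtained by equating EA/L to the aggregate shear stiffness of the connectors along the alignment; a minimal sketch with assumed connector data is given below.

```python
def equivalent_truss_area(k_conn, n_conn, L, E):
    """Cross-sectional area [m^2] of a fictitious truss of length L and
    modulus E whose axial stiffness EA/L matches the aggregate shear
    stiffness of n_conn connectors of stiffness k_conn [N/m] each."""
    k_total = n_conn * k_conn
    return k_total * L / E

# Assumed values: 20 connectors at 40 kN/mm each along the alignment,
# a 4 m truss, and the steel modulus (the modulus choice is arbitrary,
# since only the EA/L ratio matters for the fictitious element).
A = equivalent_truss_area(k_conn=40e6, n_conn=20, L=4.0, E=210e9)
print(f"equivalent truss area ~ {A * 1e4:.0f} cm^2")
```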
However, the increased stiffness of the retrofitted structure also led to a strong reduction of the structural displacements and inter-story drifts: at the life safety seismic ultimate limit state (SLV) [51], the top displacement decreased from 17.15 cm (in the original state) to 2.62 cm (in the retrofitted configuration), i.e., an 85% reduction. This corresponded to a proportional reduction of the overall seismic demand on the existing structures, which were thus effectively protected by the proposed intervention.

Figure 17. Design response spectrum and fundamental vibration periods for the City Hall building. The design spectrum was calculated according to the Italian standard NTC18 [51] for the life safety seismic ultimate limit state (SLV), using a return period of 75 years.

Case Study: Educational Facility

The analyzed case study is an educational facility located in Basiliano (Italy) with an RC frame structure. The building is represented in Figure 18, and it is composed of two portions, named "block 1" and "block 2", which were built in 1971 and 1979, respectively. Block 1 consists of the classrooms and teacher offices, which are connected to the gym by a service area with locker rooms. Block 2 consists of additional classrooms, the auditorium, and the music laboratory. The two blocks are composed of four and three sub-blocks, respectively, separated by construction joints. Block 1 was designed according to pre-1970s practice, when capacity design and seismic detailing had not yet been introduced. The structural system consists of RC frames aligned only in the longitudinal direction; in the transversal direction, there are no structures that can resist horizontal loads. Sub-block A has one story with a total height of 7.3 m; sub-block B has one story with a total height of 3.8 m; sub-blocks C1 and C2 have two stories above ground and one story partially underground, with a total height of 9.5 m above ground. Block 2 was built after the first Italian seismic regulation [54] ("Legge n. 64 del 2 Febbraio 1974"), which introduced the first specifications on seismic detailing. Being more recent, it features an improved structural system consisting of RC frames with better material characteristics and a higher geometric reinforcement ratio. Block 2 has one story with a mean height of 4.5 m. The frames of sub-block D have double beams, because flat roofs at different heights are connected to the same frame. The original project was studied, and on-site tests were performed. The on-site structural diagnostic campaign confirmed the structural details in the original drawings. Some non-destructive and partially destructive tests were performed to assess the mechanical parameters of the materials employed in the construction. Hammer tests and crushing tests of hardened concrete core samples returned the compressive strength of the concrete; tests with a cover meter confirmed the presence of the rebars indicated in the drawings, and tensile tests on bars extracted from columns gave the tensile strength of the reinforcing steel. Table 5 lists the values assumed for the existing material properties. The typical section of the columns is 30 × 35 cm at the basement and 30 × 30 cm at the upper levels. All the columns are continuous from the foundation to the roof. The section of the beams is 50 × 23 cm for the internal frames and 40 × 23 cm for the external frames. The geometry and reinforcement details of columns and beams are shown in Figure 19.
The RC and hollow tile mixed floors have a 200 + 30 mm thickness, with a total thickness equal to the height of the beams.

Figure 19. Cross-sections of existing members for sub-blocks C1 and C2 (adapted and modified from Pertile et al. [49]).

The vulnerability assessment of the existing structures showed poor seismic performance, highlighting the need for a strengthening intervention. The retrofitting of Block 2 was performed with traditional steel brace technology to reduce the displacement demand on the existing frame structures. Details of this intervention are not discussed in this work. In Block 1, for sub-blocks A, C1, and C2, the ICF retrofitting technology presented in this work was applied. For the sake of brevity, in the following, only the design for sub-blocks C1 and C2 is reported. The construction joint between sub-blocks C1 and C2 has been seamed by the new exoskeleton applied to the outside of the building and by cross-stitching with tie bars epoxied into holes at 45° at floor level. In the retrofitted configuration, sub-blocks C1 and C2 have been considered as a unique structure, with plan dimensions of 49 × 13 m. The inter-story height of the underground level is 3.3 m, and for the ground and first levels it is 3.8 m. As mentioned before, the existing building structural system consists of reinforced concrete frames parallel to the longitudinal direction, with 3.75 m spans. The new earthquake-resistant exoskeleton built outside the building is continuous from the foundation level to the roof and is connected to the edge beams of each story and to the external columns. The RC layer of the ICF system is 150 mm thick, and rebars are placed in two layers. In addition to the application of the retrofit system on the outer facades, the elongated plan shape of the building required the addition of two internal RC walls, highlighted with green outlines in Figure 20b, to resist the seismic load in the weakest direction (i.e., the transverse direction). For the new structural walls, concrete of class C25/30 and steel of class B450C were used, according to the Italian standard NTC18 [51]. The mechanical properties of the materials are reported in Table 6. A finite element model of the structure was built using the commercial code Midas Gen [53]. The aim of the numerical analysis was first to assess the vulnerability of the structure in its current state and then to design the seismic retrofitting of the building. Models of the existing and retrofitted structure are illustrated in Figure 20. The existing RC frames were modeled with beam elements having the geometry and materials specified in the previous section. The ICF concrete layer was modeled with plate elements with the actual thickness and the aforementioned material properties. The infill and partition walls were not modeled but were taken into account only in terms of mass. The new earthquake-resistant structure and the RC frame structure were linked with vertical and horizontal truss elements, as already illustrated for the previous case study. The truss elements were used to design the dimensions and the spacing of the connections needed to transfer loads from the existing structure to the retrofitting membrane. Gravity loads are defined according to the floor use, following the Italian provisions on loading conditions reported in the standard NTC18 [51]. The finite element model was conceived to assign the vertical static loads to the existing structure, while the ICF concrete layers were vertically loaded only with their self-weight.
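A quick back-of-the-envelope sketch of the self-weight of the new 150 mm ICF concrete layer mentioned above. The unit weight of reinforced concrete is a standard code value; the facade area is an illustrative figure built from the plan dimensions in the text, not a design quantity from the paper:

```python
# Self-weight of the new 150 mm RC layer of the ICF system, which in the model
# carries only its own weight vertically. The wall area is illustrative.
gamma_rc = 25.0          # kN/m^3, standard unit weight of reinforced concrete
t = 0.150                # m, thickness of the ICF concrete layer

w_area = gamma_rc * t                  # kN/m^2 of facade -> 3.75 kN/m^2
wall_area = 2 * (49.0 + 13.0) * 9.5    # m^2, perimeter x height (illustrative)
print(f"Layer self-weight: {w_area:.2f} kN/m^2, total ~ {w_area*wall_area:.0f} kN")
```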
A modal response spectrum analysis was performed for the existing structure, using the acceleration response spectrum shown in Figure 21. The low thickness of the concrete layer of the existing floors does not fulfill the requirements for the rigid diaphragm assumption according to the Italian standard NTC18 [51]: the flexible floor causes an amplification of stresses in the peripheral columns. The gap of the construction joint between sub-blocks C1 and C2 proved insufficient to prevent pounding effects.

Figure 21. Design response spectrum and fundamental vibration periods for the educational facility case-study building. The design spectrum was calculated according to the Italian standard NTC18 [51] for the life safety seismic ultimate limit state (SLV), using a return period of 75 years.

The same analysis was performed on the retrofitted structure, which is considerably stiffer than the bare RC frames. As a consequence, the main vibration periods of the building decrease, as shown in Table 7, and the seismic action increases, as shown in Figure 21. After the retrofit, the stresses in the elements of the existing RC frames are drastically reduced, since the majority of the seismic load is carried by the new concrete membrane due to its prevailing lateral stiffness. The increment of the seismic load is compensated by the higher strength of the new structure, designed according to the current design specifications of the Italian standard NTC18 [51]. A positive effect of the global stiffening of the structure is the decrease in total displacements and inter-story drifts, with a consequent decrease in the deformation demand on the members of the existing structure.

Conclusions

The European building stock is characterized by elevated seismic vulnerability and energy consumption. Most buildings present serious structural deficiencies against seismic action and a high energy demand for heating and cooling caused by the poor insulation of the building envelope, usually coupled with elevated greenhouse gas emissions. There is a strong need for integrated retrofitting solutions that can improve both the structural and the energy performance of existing buildings. In this work, an innovative integrated retrofitting technology based on insulated concrete formwork panels has been presented. The proposed technology is conceived to provide structural strengthening and thermal insulation, together with an architectural refurbishment. The technology has been described in detail, analyzing the single components and the installation phases. Since the retrofitting intervention is conceived to be applied outside the buildings, it avoids interruptions of the activities inside. Moreover, the formworks are prefabricated ad hoc to facilitate and speed up the installation phases on site. Insights into the design of the structural components have been provided, and a calculation example has been reported for a simple case-study masonry building, for which the experimental results of a cyclic loading test were available. The cyclic behavior and failure mode of the building have been discussed. Moreover, procedures for analytical strength calculations have been reported, based on the approaches provided by the Italian standard NTC18 [51] and Eurocode 2 [52]. Analytical evaluations of the horizontal global strength were performed using both design values and mean experimental values of the material strengths, comparing the results of the latter case with those obtained experimentally.
The comparison demonstrated that the adopted calculation approach is conservative and can be safely employed in design practice. Moreover, possible procedures for the structural modeling of buildings retrofitted with the proposed system have been illustrated. Two case studies were reported to describe how to model and analyze the retrofit interventions performed on complex RC frame and masonry buildings, also characterized by strong plan irregularity. Numerical analyses showed that retrofitted buildings present an increased lateral stiffness, which translates into lower vibration periods, often falling on the plateau of the seismic response spectra. However, this increase in seismic demand is accompanied by a much more significant increment of the lateral load-carrying capacity. The increased stiffness of the retrofitted buildings allows a marked reduction in the deformation demand on the existing structural members at the considered seismic ultimate limit state and a limitation of drift-induced damage to non-structural elements for low-to-medium intensity earthquakes.

Conflicts of Interest: The authors declare no conflict of interest.
Unsupervised Deep Image Stitching: Reconstructing Stitched Features to Images

Traditional feature-based image stitching technologies rely heavily on feature detection quality, often failing to stitch images with few features or low resolution. Learning-based image stitching solutions are rarely studied due to the lack of labeled data, making supervised methods unreliable. To address the above limitations, we propose an unsupervised deep image stitching framework consisting of two stages: unsupervised coarse image alignment and unsupervised image reconstruction. In the first stage, we design an ablation-based loss to constrain an unsupervised homography network, which is more suitable for large-baseline scenes. Moreover, a transformer layer is introduced to warp the input images in the stitching-domain space. In the second stage, motivated by the insight that pixel-level misalignments can be eliminated to a certain extent at the feature level, we design an unsupervised image reconstruction network to eliminate the artifacts from features to pixels. Specifically, the reconstruction network can be implemented by a low-resolution deformation branch and a high-resolution refined branch, learning the deformation rules of image stitching and enhancing the resolution simultaneously. To establish an evaluation benchmark and train the learning framework, a comprehensive real-world image dataset for unsupervised deep image stitching is presented and released. Extensive experiments demonstrate the superiority of our method over other state-of-the-art solutions. Even compared with supervised solutions, our image stitching quality is still preferred by users.

Fig. 1. The pipeline of the proposed unsupervised deep image stitching. In the coarse alignment stage, the inputs are warped using a single homography. In the reconstruction stage, the warped images are used to reconstruct the stitched image from feature to pixel.

Conventional image stitching solutions are feature-based methods, where feature detection is the first step and can profoundly affect stitching performance. A parametric image alignment model is then established using the matched features, by which the target image is warped to align with the reference image. Finally, the stitched image is obtained by assigning pixel values to each pixel in the overlapping areas between the warped images. Among these steps, establishing a parametric image alignment model is crucial in feature-based methods. In fact, the homography transformation is the most used image alignment model; it contains translation, rotation, scaling, and vanishing-point transformation, correctly accounting for the transformation from one 2D plane to another [10]. However, each image may contain multiple different depth levels in actual scenes, which contradicts the planar scene assumption of the homography. There are often ghosting effects in the stitched results, since a single homography cannot account for the alignment at all the different depth levels. Conventional feature-based solutions alleviate the artifacts in two mainstream ways. The first way is to eliminate the artifacts by aligning the target image with the reference image as much as possible [11]-[20]. These methods partition an image into different areas and compute a homography matrix for each area. By exerting spatially-varying warps on these areas, the overlapping areas are well aligned, and the artifacts are significantly reduced.
The second way is to hide the artifacts by searching for an optimal seam to stitch the warped images [21]-[26].

Fig. 2. Motivation: pixel-level misalignments can be visually weakened at the feature level. Col 1: the results of stitching the warped images from the unsupervised coarse alignment stage. Col 2: the results of stitching the warped features extracted by 'conv1_2' of VGG19 [27]. Cols 3-4: reconstructing from feature to pixel by the unsupervised reconstruction network.

By optimizing a seam-related cost, the overlapping region can be divided into two complementary regions along the seam. Then, a stitched image is formed from the two regions. Feature-based solutions can significantly reduce the artifacts in most scenes. Still, they rely heavily on feature detection, so the stitching performance can drop sharply or even fail in scenes with few features or at low resolution. Due to the remarkable feature extraction capability of Convolutional Neural Networks (CNNs), learning-based approaches have recently achieved state-of-the-art performance in various fields such as depth estimation [28], optical flow estimation [29], [30], and distortion rectification [31]. A growing number of researchers have tried to apply CNNs to image stitching. In [32], [33], CNNs are only used to extract feature points, while in [4], [7], [34], CNNs are proposed to stitch images from fixed viewing positions. Regrettably, these methods are either not complete learning-based frameworks [32], [33], or can only be used to stitch images from fixed views instead of arbitrary views [4], [7], [34]. View-free deep image stitching methods [35], [36] were then proposed to overcome both problems simultaneously. In these view-free solutions, deep image stitching is completed by a deep homography module, a spatial transformer module, and a deep image refinement module. However, all these solutions are supervised methods, and until now there has been no real dataset for deep image stitching, because stitched labels are unavailable in actual scenes. Therefore, these networks can only be trained on a 'no-parallax' synthetic dataset, resulting in unsatisfying performance in real scenes. To overcome the limitations of feature-based solutions and supervised deep solutions, we propose an unsupervised deep image stitching framework that comprises an unsupervised coarse image alignment stage and an unsupervised image reconstruction stage. The pipeline is shown in Fig. 1. In the first stage, we coarsely align the input images using a single homography. Different from the existing unsupervised deep homography solutions [37], [38], which require extra image content around the input images as supervision, we design an ablation-based loss to optimize our unsupervised deep homography network, which is more suitable for large-baseline scenes, where large-baseline is a relative concept to the small-baseline setting in [38]. Besides, a stitching-domain transformer layer is proposed to warp the input images in the stitching-domain with less occupied space than the existing deep stitching works [35], [36]. In the second stage, we present a strategy to reconstruct the stitched image from feature to pixel, eliminating the artifacts by unsupervised image reconstruction. In particular, we design a low-resolution deformation branch and a high-resolution refined branch in the reconstruction network to learn the deformation rules of image stitching and enhance the resolution, respectively.
This reconstruction strategy is motivated by an observation: misalignments are less noticeable at the feature level than at the pixel level (Fig. 2 left). Compared with pixels, feature maps are more blurred, which indicates that pixel-level misalignments can be eliminated to a certain extent at the feature level. Therefore, we believe it is easier to eliminate artifacts at the feature level than at the pixel level. To implement this, we first reconstruct the features of the stitched image to be as close to the two warped images as possible (Col 3 in Fig. 2). The stitched image can then be reconstructed at the pixel level (Col 4 in Fig. 2) based on the reconstructed features. The existing dataset in learning-based solutions [35], [36] is a 'no-parallax' synthetic dataset that cannot represent practical application scenes. And the datasets in feature-based solutions are too few to support deep learning training. To equip our framework with generalization ability in real scenarios, we also propose a large real-world image stitching dataset containing varying overlap rates, varying degrees of parallax, and variable scenes such as indoor, outdoor, night, dark, snow, and zooming. Here, we define the overlap rate as the percentage of the overlapping area in the total area of the image. In the experiments, we evaluate our performance in homography estimation and image stitching. Experimental results demonstrate the superiority of our method over other state-of-the-art solutions in real scenes. The contributions of this paper are summarized as follows:

• We present an unsupervised deep image stitching framework consisting of an unsupervised coarse image alignment stage and an unsupervised image reconstruction stage.

• We propose the first large real dataset for unsupervised deep image stitching (to the best of our knowledge), which we hope can work as a benchmark dataset and promote other related research work.

• Our algorithm outperforms the state-of-the-art, including homography estimation solutions and image stitching solutions, in real scenes. Even compared with supervised solutions, our image stitching quality is still preferred by users.

II. RELATED WORK

In this section, we review the existing works on image stitching and deep homography estimation.

A. Feature-Based Image Stitching

According to the different strategies used to eliminate artifacts, feature-based image stitching algorithms can be divided into the following two categories:

Adaptive Warping Methods. Considering that a single transformation model is not enough to accurately align images with parallax, the idea of combining multiple parametric alignment models to align the images as much as possible was introduced. In [11], dual-homography warping (DHW) is presented to align the foreground and the background, respectively. This method works well in scenes composed of two dominant planes but shows poor performance in more complex scenes. Lin et al. [12] apply multiple smoothly varying affine (SVA) transformations in different regions, enhancing local deformation and alignment performance. Zaragoza et al. [13] propose the as-projective-as-possible (APAP) approach, where an image is partitioned into dense grids and each grid is allocated a corresponding homography by weighting the features. In fact, APAP can still exhibit parallax artifacts in the vicinity of object boundaries, since dramatic depth changes might occur in these areas.
To get rid of this problem, warping residual vectors are proposed in [19] to distinguish matching features from different depth planes, contributing to more naturally stitched images.

Seam-Driven Methods. Seam-driven image stitching methods are also influential, acquiring natural stitched images by hiding the artifacts. Inspired by the idea of interactive digital photomontage [39], Gao et al. [24] propose to choose the best homography with the lowest seam-related cost from candidate homography matrices. The artifacts are then hidden through seam cutting. Referring to the optimization strategy of content-preserving warps (CPW) [40], Zhang and Liu [22] propose a seam-based local alignment approach that maintains the global image structure using an optimal homography. This work was also extended to stereoscopic image stitching [41]. Using iterative warp and seam estimation, Lin et al. [23] find the optimal local area to stitch images, which can protect curve and line structures during image stitching. These feature-based algorithms contribute to perceptually natural stitched results. However, they rely heavily on the quality of feature detection, often failing in scenes with few features or at low resolution.

B. Learning-Based Image Stitching

Getting a real dataset for stitching is difficult. In addition, deep stitching is quite challenging for scenes with a low overlap rate and large parallax. Subject to these two problems, learning-based image stitching is still in development.

View-Fixed Methods. View-fixed image stitching methods are task-driven, designed for specific application scenarios such as autonomous driving [6], [7] and surveillance videos [4]. In these works, end-to-end networks are proposed to stitch images from fixed views, but they cannot be extended to stitch images from arbitrary views.

View-Free Methods. To stitch images from arbitrary views using CNNs, some researchers propose to adopt CNNs in the feature detection stage [32], [33]. However, these methods cannot strictly be regarded as complete learning-based frameworks. The first complete learning-based framework to stitch images from arbitrary views was proposed in [35]. The images are stitched in three stages: homography estimation, spatial transformation, and content refinement. Nevertheless, this work cannot handle input images of arbitrary resolution due to the fully connected layers in the network, and the stitching quality in real applications is unsatisfying. Following this deep stitching pipeline, an edge-preserved deep image stitching solution was proposed in [36], removing the limitation on input resolution and significantly improving the stitching performance in real scenes.

C. Deep Homography Schemes

The first deep homography method was put forward in [42], where a VGG-style [27] network was used to predict the eight offsets of the four vertices of an image, thus uniquely determining a corresponding homography. Nguyen et al. [37] proposed the first unsupervised deep homography approach, with the same architecture as [42] and an effective unsupervised loss. Introducing spatial attention to the deep homography network, Zhang et al. [38] propose a content-aware unsupervised network, achieving SOTA performance in small-baseline deep homography. In [43], multi-scale features are extracted to predict the homography from coarse to fine using image pyramids.
Besides that, a deep homography network is usually adopted as part of view-free image stitching frameworks [35], [36]. Different from [37], [38], [42], [43], deep homography in image stitching is more challenging, since the baseline between the input images is usually 2X-3X larger.

III. UNSUPERVISED COARSE IMAGE ALIGNMENT

Given two high-resolution input images, we first estimate the homography using a deep homography network in an unsupervised manner. Then, the input images are warped to coarsely align with each other in the proposed stitching-domain transformer layer.

A. Unsupervised Homography

The existing unsupervised deep homography methods [37], [38] take image patches as the input, as shown by the white squares in Fig. 3 (a). The objective function of these methods can be expressed as Eq. (1):

\min \| \mathcal{P}(\mathcal{H}(I_B)) - \mathcal{P}(I_A) \|_1, (1)

where I_A, I_B represent the full reference image and the full target image, respectively. P(·) is the operation of extracting an image patch from a full image, and H(·) warps one image to align with the other using the estimated homography. From Eq. (1), we can see that to make the warped target patch close to the reference patch, the extra content around the target patch is used to pad the invalid pixels in the warped target patch. We call this a padding-based constraint strategy. This strategy works well in small-baseline [38] or middle-baseline [37] homography estimation, but it fails in the large-baseline case. In particular, when the baseline is too large (as illustrated in Fig. 3 (a)), there might be no overlapping area between the input patches, which makes the homography estimated from these patches meaningless. To solve this problem, we design an ablation-based strategy to constrain large-baseline unsupervised homography estimation. Specifically, we take the full images as the input, ensuring that all the overlapping areas are included in our inputs. When we enforce the warped target image to be close to the reference image, we no longer pad the invalid pixels in the warped image. Instead, we ablate the content in the reference image where the invalid pixels of the warped target image are located, as shown in Fig. 3 (b). Our objective function for unsupervised homography is formulated as Eq. (2):

\min \| \mathcal{H}(I_B) - I_A \odot \mathcal{H}(E) \|_1, (2)

where ⊙ is pixel-wise multiplication and E is an all-one matrix of the same size as I_A. As for the architecture of our unsupervised homography network, we adopt the multi-scale deep model proposed in [36], which connects a feature pyramid and feature correlation in a unified framework so that it can predict the homography from coarse to fine and handle relatively large-baseline scenes.

B. Stitching-Domain Transformer Layer

The spatial transformer layer was first proposed in [44], where images can be spatially transformed using the homography model with gradient backpropagation guaranteed. In image stitching, input images of the same resolution can yield stitched images of different resolutions according to the varying overlap rates, which brings a considerable challenge to deep image stitching. The existing deep image stitching methods solve this problem by extending the spatial transformer layer [35], [36]. Specifically, these solutions define a maximum resolution for the stitched image so that all the input content can be included in the output, and the network outputs images of the same resolution every time. However, most of the space, occupied by the black pixels outside the white box in Fig. 4 (a), is wasted.
To deal with this spatial waste, we propose a stitching-domain transformer layer. We define the stitching-domain as the smallest bounding rectangle of the stitched image, which saves the most space while ensuring the integrity of the image content. Our warped results are illustrated in Fig. 4 (b), and the stitching-domain transformer layer can be implemented as follows. First, we calculate the coordinates of the 4 vertices of the warped target image by Eq. (3):

(x'^B_k, y'^B_k) = (x^B_k + \Delta x_k, \; y^B_k + \Delta y_k), \quad k = 1, ..., 4, (3)

where (x'^B_k, y'^B_k) and (x^B_k, y^B_k) are the k-th vertex coordinates of the warped target image and the target image, respectively, and (Δx_k, Δy_k) denote the offsets of the k-th vertex that are estimated by the aforementioned homography network. Then, the size of the warped image (H* × W*) can be obtained by Eq. (4):

W^* = \max_k \{x^A_k, x'^B_k\} - \min_k \{x^A_k, x'^B_k\}, \quad H^* = \max_k \{y^A_k, y'^B_k\} - \min_k \{y^A_k, y'^B_k\}, (4)

where (x^A_k, y^A_k) are the vertex coordinates of the reference image, which have the same values as (x^B_k, y^B_k). Finally, we assign the specific values to the pixels of the warped images (I_AW, I_BW) from the input images (I_A, I_B), which can be represented as Eq. (5):

I_{AW} = \mathcal{W}(I_A, I), \quad I_{BW} = \mathcal{W}(I_B, H), (5)

where I and H are the identity matrix and the estimated homography matrix, respectively, and W(·) denotes the operation of warping an image using a 3 × 3 transformation matrix with the stitching-domain set to H* × W*. In this way, we transform the input images in the stitching-domain space, effectively reducing the space occupied by feature maps in the subsequent reconstruction network. Compared with the transformer layer used in [35], [36], the proposed layer helps to stitch larger-resolution images when GPU memory is limited.

IV. UNSUPERVISED IMAGE RECONSTRUCTION

Considering the limitation that a single homography can only represent a spatial transformation at a single depth [10], the input images cannot be completely aligned on the real-world dataset in the first stage. To break the bottleneck of a single homography, we propose to reconstruct the stitched image from feature to pixel. The overview of the proposed unsupervised deep image stitching framework is illustrated in Fig. 5. The reconstruction network is implemented by two branches: a low-resolution deformation branch (Fig. 5 top) and a high-resolution refined branch (Fig. 5 bottom), learning the deformation rules of image stitching and enhancing the resolution, respectively.

A. Low-Resolution Deformation Branch

Reconstructing the images only in the high-resolution branch is not appropriate, because the receptive field decreases relatively as the resolution increases. To ensure that the receptive field of the network can completely perceive the misaligned regions (especially in the case of high resolution and large parallax), we design a low-resolution branch to learn the deformation rules of image stitching first. As shown in Fig. 5 (top), the warped images are first down-sampled to a low resolution, set to 256×256 in our implementation. Then an encoder-decoder network consisting of 3 pooling layers and 3 deconvolutional layers is used to reconstruct the stitched image. The filter numbers of the convolutional layers are set to 64, 64, 128, 128, 256, 256, 512, 512, 256, 256, 128, 128, 64, 64, and 3, respectively. Furthermore, skip connections are adopted to connect the low-level and high-level features of the same resolution [45]. In this process, the deformation rules of image stitching are learned with content masks and seam masks (Fig. 6).
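As an aside, here is a minimal NumPy/OpenCV sketch of two of the computations above: the ablation-based loss of Eq. (2) and the stitching-domain size of Eqs. (3)-(4). Function names, shapes, and the use of OpenCV for warping are our own illustrative choices, not the paper's TensorFlow implementation:

```python
import cv2
import numpy as np

def ablation_loss(ref, target, H):
    """Eq. (2): L1 distance between the warped target and the reference
    ablated by the warped all-one mask H(E). Images are float32 in [0, 1]."""
    h, w = ref.shape[:2]
    warped_target = cv2.warpPerspective(target, H, (w, h))
    mask = cv2.warpPerspective(np.ones_like(target), H, (w, h))  # H(E)
    return float(np.mean(np.abs(warped_target - ref * mask)))

def stitching_domain(h, w, H):
    """Eqs. (3)-(4): smallest bounding rectangle (H*, W*) enclosing the
    reference image and the homography-warped target image."""
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float64)
    proj = np.hstack([corners, np.ones((4, 1))]) @ H.T
    warped = proj[:, :2] / proj[:, 2:]                 # warped target vertices
    xs = np.concatenate([corners[:, 0], warped[:, 0]])
    ys = np.concatenate([corners[:, 1], warped[:, 1]])
    return int(np.ceil(ys.max() - ys.min())), int(np.ceil(xs.max() - xs.min()))
```

Note that the sketch obtains the warped vertices by projecting the corners through H directly, which is equivalent to adding the four predicted offsets of Eq. (3) to the original vertices.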
The content masks are adopted to constrain the features of the reconstructed image to be close to the warped images, while the seam masks are designed to constrain the edges of the overlapping areas to be natural and continuous. In particular, we obtain the content masks (M_AC, M_BC) using Eq. (5) by replacing I_A, I_B with an all-one matrix E_{H×W}, and the seam masks can be calculated by Eq. (6) and Eq. (7), where (i, j) denotes the coordinate location, * represents the convolution operation, and C clips all the elements to between 0 and 1. Then we design the content loss and seam loss at low resolution as Eq. (8) and Eq. (9), where S_LR is the low-resolution stitched image, and L_1 and L_P denote the L1 loss and the perceptual loss [46], respectively. To make the features of the reconstructed image as close to those of the warped images as possible, we calculate the perceptual loss on layer 'conv5_3' of VGG-19 [27], which is deep enough to shrink the feature difference between the warped images. Next, the total loss function of the low-resolution unsupervised deformation can be formulated as Eq. (10), where λ_s and λ_c weight the contributions of the content constraint and the seam constraint.

Fig. 8. Visualization of the learning process of the low-resolution deformation branch. The stitched images are reconstructed from overlapping regions to non-overlapping regions.

B. High-Resolution Refined Branch

After the initial deformation in the low-resolution branch, we develop a high-resolution refined branch to enhance the resolution and refine the stitched image. The high resolution refers to the resolution of the output of the first stage; in our dataset, this resolution is larger than 512×512. To illustrate the effect of the high-resolution branch, we exhibit the outputs of the two branches in Fig. 7. This branch is composed entirely of convolutional layers, as shown in Fig. 5 (bottom), which means it can deal with pictures of arbitrary resolution. To be specific, it consists of three separate convolutional layers and eight resblocks [47], where the filter number of each layer is set to 64, except the last layer, which is set to 3. To prevent low-level information from being gradually forgotten as the convolutional network gets deeper, the features of the first layer are added to those of the penultimate layer. Moreover, each resblock is composed of convolution, ReLU, convolution, sum, and ReLU. We up-sample S_LR to the resolution of the warped images and concatenate them together as the input of this branch. The output is the high-resolution stitched image S_HR. We formulate the loss function of the high-resolution refined branch L_HR following Eq. (10) as Eq. (11), where L^h_Content and L^h_Seam are the content loss and seam loss at high resolution, which can be calculated using Eq. (8) and Eq. (9) by replacing S_LR and the low-resolution masks with S_HR and the high-resolution masks. When calculating L_P at high resolution, we adopt layer 'conv3_3' of VGG-19, since this layer is shallower than layer 'conv5_3' (used in L_P at low resolution) and the output using this layer is clearer.

C. Objective Function

The high-resolution branch is designed to refine the stitched image, but it tends to cause artifacts in the stitched image, since the increase in resolution relatively reduces the receptive field of the network (more details can be found in Section V-D).
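A compact Keras-style sketch of the low-resolution deformation branch described above. The filter schedule, the counts of pooling/deconvolution layers, and the skip connections follow the text; kernel sizes, padding, and activations are assumptions on our part, not details confirmed by the paper:

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv(x, f):
    # 3x3 kernels and ReLU are assumptions; the text only fixes filter counts.
    return layers.Conv2D(f, 3, padding="same", activation="relu")(x)

def low_res_branch(inputs):
    e1 = conv(conv(inputs, 64), 64)                       # 64, 64
    e2 = conv(conv(layers.MaxPool2D()(e1), 128), 128)     # pool -> 128, 128
    e3 = conv(conv(layers.MaxPool2D()(e2), 256), 256)     # pool -> 256, 256
    b = conv(conv(layers.MaxPool2D()(e3), 512), 512)      # pool -> 512, 512
    d3 = layers.Conv2DTranspose(256, 3, 2, padding="same")(b)     # deconv 256
    d3 = conv(layers.Concatenate()([d3, e3]), 256)        # skip + conv 256
    d2 = layers.Conv2DTranspose(128, 3, 2, padding="same")(d3)    # deconv 128
    d2 = conv(layers.Concatenate()([d2, e2]), 128)        # skip + conv 128
    d1 = layers.Conv2DTranspose(64, 3, 2, padding="same")(d2)     # deconv 64
    d1 = conv(layers.Concatenate()([d1, e1]), 64)         # skip + conv 64
    return layers.Conv2D(3, 3, padding="same")(d1)        # final 3-channel map

inputs = tf.keras.Input(shape=(256, 256, 6))  # two warped RGB images stacked
model = tf.keras.Model(inputs, low_res_branch(inputs))
```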
To enable our network to enhance resolution and eliminate parallax artifacts simultaneously, a content consistency loss is proposed as Eq. (12), where S^{256×256}_{HR} is obtained by resizing S_HR to 256×256, i.e., the output resolution of the low-resolution branch. Taking all the constraints into consideration, we formulate the objective function of the image reconstruction stage as Eq. (13), where ω_LR, ω_HR and ω_CS represent the weights of each part.

D. Reconstruction from Feature to Pixel

To exhibit the learning process from feature to pixel, we visualize the feature maps of the low-resolution deformation branch in Fig. 8. At the very beginning of the encoder stage, the network only focuses on the overlapping areas, and the features of the non-overlapping areas are all suppressed. Next, as the resolution decreases, deeper semantic features are extracted and reconstructed. In the decoder stage, the network begins to pay attention to the non-overlapping areas besides the overlapping areas. As the resolution is restored, clearer feature maps are reconstructed. Finally, the stitched image is reconstructed at the pixel level.

V. EXPERIMENTS

In this section, extensive experiments are conducted to validate the effectiveness of the proposed method.

A. Dataset and Implementation Details

Dataset. To train our network, we also propose an unsupervised deep image stitching dataset obtained from various moving videos. Some of these videos are from [38] and the others were captured by ourselves. By extracting frames from these videos at different time intervals, we obtain samples with different overlap rates (Fig. 9 (b)). Moreover, these videos were not captured by a camera rotating around its optical center, and the shot scenes are far from planar, which means this dataset contains different degrees of parallax (Fig. 9 (c)). Besides, this real-world dataset includes variable scenes such as indoor, outdoor, night, dark, snow, and zooming (Fig. 9 (a)). To quantitatively describe the distribution of the different overlap rates and varying degrees of parallax in our dataset, we divide the overlap rates into 3 levels, defining a high overlap rate as greater than 90%, a middle overlap rate as ranging from 60% to 90%, and a low overlap rate as lower than 60%. This classification criterion is formulated according to [37], [38], [42], where [38] is the representative work at high overlap rates, since the average overlap rate of its proposed dataset is greater than 90%, and [37], [42] are the representative works at middle overlap rates, since the average overlap rate of the Warped MS-COCO (disturbance < 32) dataset [42] is about 75%. Besides, to describe parallax accurately, we align the target image with the reference image using a global homography and then calculate the maximum misalignment error of corresponding feature points in the coarsely aligned images to show the magnitude of parallax. In this way, we divide the parallax into 2 levels: small parallax with an error smaller than 30 pixels and large parallax with an error greater than 30 pixels. Fig. 9 (c) illustrates the different parallax levels intuitively. In particular, we obtain 10,440 cases for training and 1,106 for testing. In our dataset, the ratios of overlap rates from high to low are about 16%, 66%, and 18%, while the ratios of parallax from small to large are about 91% and 9%.
Although our dataset contains no ground truth, we include our testing results in this dataset, which we hope can serve as a benchmark for other researchers to follow and compare against.

Details. We train our unsupervised image stitching framework in three steps. First, we train our deep homography network on the synthetic dataset (Stitched MS-COCO [35]) for 150 epochs. Second, we finetune the homography network on the proposed real dataset for 50 epochs. Third, we train the deep image reconstruction network on the proposed real dataset for 20 epochs. The whole training process is unsupervised, which means our framework only takes the reference/target images as input and requires no labels. The optimizer is Adam [48] with an exponentially decaying learning rate with an initial value of 10^-4. We set λ_s and λ_c to 2 and 10^-6, and ω_LR, ω_HR and ω_CS are set to 100, 1 and 1, respectively. At test time, it takes about 0.4 s to stitch 2 input images with a resolution of 512×512. All the components of this framework are implemented in TensorFlow. Both training and testing are conducted on a single NVIDIA RTX 2080 Ti GPU.

B. Comparison of Homography Estimation

To objectively evaluate the performance of the proposed ablation-based unsupervised deep homography, we compare our solution with I_{3×3}, SIFT [49]+RANSAC [50], DHN [42], UDHN [37], CA-UDHN [38], and LB-DHN [36] on the synthetic dataset and the real dataset, respectively. I_{3×3} refers to a 3 × 3 identity matrix used as a 'no-warping' homography for reference, and SIFT+RANSAC is chosen as the representative of traditional homography solutions because it outperforms most traditional solutions, as shown in [37], [38]. DHN, UDHN, CA-UDHN, and LB-DHN are deep learning solutions, of which UDHN and CA-UDHN are unsupervised solutions that both adopt the padding-based strategy to train their networks.

Synthetic dataset. The first comparative experiment is conducted on Warped MS-COCO, which is the best-known synthetic dataset for deep homography estimation. All the learning methods are trained on Warped MS-COCO. The results are shown in Table I(a), where 'Ours v1' is our model trained on this dataset in an unsupervised manner. From Table I(a), we can observe: (1) Ours v1 outperforms the existing unsupervised deep homography methods (UDHN, CA-UDHN), of which CA-UDHN is the SOTA solution in small-baseline deep homography. However, the performance of CA-UDHN on this dataset degenerates to be close to that of I_{3×3} due to its limited receptive field. (2) After applying our ablation-based unsupervised loss to LB-DHN, the 4pt-Homography RMSE increases, which means this loss is not suitable for this 'no-parallax' synthetic dataset.

Real Dataset. Then, we carry out a comparison on the proposed real dataset, which contains varying degrees of parallax. Since this dataset lacks ground truth, we adopt the PSNR and SSIM of the overlapping regions to evaluate the performance, which can be calculated as Eq. (14), where PSNR(·) and SSIM(·) denote the operations of computing PSNR and SSIM between two images, respectively. We test DHN and UDHN using the public pretrained models. LB-DHN and Ours v1 are trained on Stitched MS-COCO [35], which is similar to Warped MS-COCO but with a lower overlap rate. Ours v2 is the model obtained by finetuning Ours v1 for about 50 epochs on the proposed real dataset.
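A small sketch of the overlap-region evaluation implied by Eq. (14), using scikit-image metrics (version 0.19+ for the channel_axis argument). Restricting the comparison to the common valid region via the warped masks is our reading of the setup, not a detail confirmed by the paper:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def overlap_metrics(warped_a, warped_b, mask_a, mask_b):
    """PSNR/SSIM between two warped images, restricted to their overlap.
    warped_*: float images in [0, 1]; mask_*: binary valid-pixel masks."""
    overlap = (mask_a * mask_b).astype(bool)  # common valid region
    if warped_a.ndim == 3:
        a, b = warped_a * overlap[..., None], warped_b * overlap[..., None]
        channel_axis = -1
    else:
        a, b = warped_a * overlap, warped_b * overlap
        channel_axis = None
    psnr = peak_signal_noise_ratio(a, b, data_range=1.0)
    ssim = structural_similarity(a, b, channel_axis=channel_axis, data_range=1.0)
    return psnr, ssim
```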
By analyzing the results shown in Table I(b) and I(c), we can conclude: (1) The proposed unsupervised solution (Ours v2) outperforms all the other methods, including the supervised ones, on the real dataset. (2) Although Ours v1 and LB-DHN are both trained on the synthetic dataset, Ours v1 achieves better performance on the real dataset, which indicates that the proposed unsupervised loss can equip the network with better generalization ability.

C. Comparison of Image Stitching

To verify our method's superiority in image stitching, we compare it with feature-based solutions and with recent learning-based solutions (even though comparing our unsupervised algorithm with the supervised ones is not entirely fair).

1) Compared with Feature-Based Solutions

In this section, we choose global Homography [10], APAP [13], and robust ELA [18] as the representatives of feature-based solutions to compare with our algorithm. Of these methods, we implement Homography with a global projective transformation, and we obtain the stitched results of APAP and robust ELA (adaptive warping methods) by running their open-source codes on our testing instances. After alignment, image fusion is adopted to produce the stitched image and reduce artifacts. Specifically, we fuse the warped images with a pixel-weighted principle, assigning a relatively large weight to the pixel with a high intensity value.

Study on Robustness. The performance of feature-based solutions is easily affected by the quantity and distribution of the feature points, resulting in weak robustness in varying scenes. By contrast, the proposed method overcomes this problem. To validate this view, we test the feature-based methods and ours on our test set (1,106 samples). To simulate the change in feature quantity, we resize the test set to different resolutions, e.g., 512 × 512, 256 × 256, and 128 × 128. As the resolution decreases, the number of features decreases exponentially. The results are shown in Table II, where 'error' indicates the number of program crashes and 'failure' refers to the number of unsuccessful stitchings. Specifically, we define significant distortion (Fig. 10 top) and intolerable artifacts (Fig. 10 bottom) as 'failure'. All the stitched results of these […] (2) As the resolution decreases, the success rates of the learning-based methods decrease while ours remains robust. Besides, to perceive the robustness more intuitively, Fig. 11 demonstrates two challenging examples in indoor and dark scenes. Since the dark sample is too dark to see clearly, we apply image enhancement to better exhibit these results (Row 3 in Fig. 11). These examples are challenging for feature-based solutions because the features in these scenes are hard to detect. In contrast, our solution stitches them successfully thanks to the remarkable feature extraction capabilities of CNNs.

Study on Visual Quality. The proposed deep image stitching framework should be regarded as a whole, which takes two images from arbitrary views as inputs and outputs the stitched result. Therefore, the traditional indicator that calculates the similarity of the overlapping regions is not suitable for our method. To compare with other methods quantitatively, we design user studies on visual quality. Specifically, we compare our method with Homography, APAP, and robust ELA one by one. Each time, four images are shown on one screen: the inputs, our stitched result, and the result from Homography/APAP/robust ELA.
The results of ours and the other method are shown in random order each time. The user may zoom in on the images and is required to answer which result is preferred. In the case of "no preference", the user needs to answer whether the two results are "both good" or "both bad". The studies are carried out on our testing set, which means every user has to compare each method with ours on 1,106 images. In this study, we invite 20 participants, including 10 researchers/students with computer vision backgrounds and 10 volunteers outside this community. The results are shown in Fig. 12. Neglecting the ratios of "both good" and "both bad", we find that the proportion preferring ours is significantly higher than that preferring the other methods, which means our results have higher visual quality in the users' evaluation. To further demonstrate our performance, we also display the stitched results on the proposed real dataset (rows 1-8 in Fig. 13) and on classic image stitching instances outside of our dataset (rows 9-10 in Fig. 13). All the cases have varying degrees of parallax. Besides the promising visual quality, this verifies the generalization ability of our model.

2) Compared with Learning-Based Solutions

The existing learning-based image stitching methods (VFIS-Net [35] and EPISNet [36]) are supervised learning methods, which require extra labels to train the network. Even though it is unfair to compare our unsupervised solution with the supervised ones, our method still exhibits superiority over them in robustness, continuity, illumination, and visual quality.

Fig. 13. Visual comparison of the image stitching quality. Rows 1-8: instances with varying degrees of parallax from the proposed dataset. Rows 9-10: "yard" [24] and "temple" [11] (classic image stitching instances outside of our dataset).

Study on Robustness. VFISNet is the first deep image stitching work that can stitch images from arbitrary views in a complete deep learning framework. However, it has a non-negligible shortcoming: it can only stitch images of 128 × 128. Therefore, only the result at the resolution of 128 × 128 is given when measuring its robustness. The detailed results in Table II show that our robustness is better than that of the supervised methods. This can be accounted for by the following two reasons: (1) Our unsupervised deep homography model outperforms the other methods in robustness, which significantly reduces failure cases caused by inaccurate homography estimation. (2) Our unsupervised deep image reconstruction model can effectively reduce artifacts by reconstructing the stitched image from feature to pixel, which reduces failure cases caused by intolerable artifacts.

Study on Continuity. The supervised deep image stitching methods [35], [36] sacrifice the continuity of the edges (the edges between the reference image and the non-overlapping areas of the target image) to minimize artifacts. Although an edge-preserved network is proposed in EPISNet to mitigate this problem, it still occurs in a few testing cases. The discontinuity is demonstrated in the left picture of Fig. 14 (a), where discontinuous areas are framed and enlarged. This problem is solved in our unsupervised approach, as shown in the right picture of Fig. 14 (a). This is credited to our constraint on the seam masks, which enforces the edges of the overlapping areas to stay close to one of the warped images.

Study on Illumination. Another advantage of our method is that it can smooth the illumination difference between the two images.
The comparison with EPISNet is illustrated in Fig. 14 (b). The supervised methods fail to smooth the illumination difference because they are trained on a synthetic dataset with no illumination difference between the input images (the supervised methods cannot be trained on a real dataset due to the lack of stitched labels). On the contrary, our method is trained on real scenes, so it can effectively learn how to smooth the illumination difference caused by different shooting positions.

Study on Visual Quality. Similar to the user study with the feature-based methods, we adopt the same strategy to compare visual quality; the results are shown in Fig. 15. Since Bicubic interpolation inevitably introduces blur when zooming in on images, the probability of preferring our method is even greater than that of preferring VFIS-Net+Bicubic. Even compared with EPISNet, our method is still preferred regarding the visual quality of the stitched images. Besides that, Fig. 13 exhibits visual comparisons with these supervised methods, where the green rectangles indicate severely blurred regions and the red rectangles point to discontinuous edges. To perceive our visual quality more intuitively, more results are illustrated in Fig. 16, where the inputs and the outputs are demonstrated together (Fig. 16 (b): results on our proposed dataset; from left to right: "stairs", "snow", "grass", "lake", and "campus").

D. Ablation Studies

In this section, ablation studies are performed on both the network architecture and the loss functions. In the architecture, we validate the effectiveness of the low-resolution branch (LR branch) and the high-resolution branch (HR branch); in the loss, we test the function of the content loss, the seam loss, and the content consistency loss (CS loss). The properties of all the studied frameworks are shown in Table III. From the results illustrated in Fig. 17, we can observe: (1) The most straightforward combination of the LR branch and the content loss can realize image stitching. However, two issues remain unresolved: seam distortions (row 1, col 4 in Fig. 17) and limited resolution. In our analysis, the seam distortion is a side effect of the proposed content loss. (2) Comparing v2 with v1, the HR branch can effectively enhance the resolution of the stitched image. As the cost, a few artifacts (row 2, col 2 in Fig. 17) are introduced, since the receptive field of the HR branch convolution kernels is too small for higher-resolution images. (3) Compared with v2, v3 removes the seam distortions (row 3, col 4 in Fig. 17) using the proposed seam loss. By imposing a pixel-level similarity constraint on the edge of the overlapping area, the seam distortions are suppressed successfully. However, there are still artifacts (row 3, col 2 in Fig. 17) in the stitched image. (4) Compared with v3, ours removes the artifacts (row 4, col 2 in Fig. 17) using the proposed CS loss. The CS loss serves as an enhancer of the receptive field, promoting the receptive field of the HR branch to that of the LR branch.

VI. LIMITATION AND FUTURE WORK

The proposed solution eliminates parallax artifacts by reconstructing the stitched images from feature to pixel. It is still essentially a stitching method based on a single homography. As the parallax increases, the alignment performance of the first stage decreases, while the burden on the reconstruction network becomes heavier. When the parallax is too large, the reconstruction network may treat the misalignments as new objects to reconstruct. An example is shown in Fig. 18.
In the future, we hope to solve this problem in two directions: 1) improving the alignment performance of the alignment network to decrease the burden on the reconstruction network; 2) increasing the receptive field of the reconstruction network to deal with the remaining large misalignments.

VII. CONCLUSION

This paper proposes an unsupervised deep image stitching framework, comprising unsupervised coarse image alignment and unsupervised image reconstruction. In the alignment stage, an ablation-based loss function is proposed to constrain unsupervised deep homography estimation in large-baseline scenes, and a stitching-domain transformer layer is designed to warp the input images in the stitching-domain space. In the reconstruction stage, an unsupervised deep image reconstruction network is proposed to reconstruct the stitched images from feature to pixel, eliminating the artifacts in an unsupervised reconstruction manner. Besides, a real dataset for unsupervised deep image stitching is presented, which we hope can work as a benchmark dataset for other methods. Experimental results demonstrate the superiority of our method over other state-of-the-art solutions. Even when compared with the supervised deep image stitching solutions, the results of our unsupervised approach are still preferred by users in terms of visual quality. However, the reconstruction ability is not unlimited, which indicates that our solution may fail in scenes with extremely large parallax. Considering that our first stage is essentially an alignment model based on a single homography, the ability to handle large parallax could be improved by extending the linear deep homography network to a non-linear homography model. Moreover, the reconstruction performance could be further improved by increasing the receptive field of the reconstruction network, which is also a direction for future work.
Ultra-Processed Food Products and Obesity in Brazilian Households (2008–2009)

Background Production and consumption of industrially processed food and drink products have risen in parallel with the global increase in overweight and obesity and related chronic non-communicable diseases. The objective of this study was to analyze the relationship between household availability of processed and ultra-processed products and the prevalence of excess weight (overweight plus obesity) and obesity in Brazil. Methods The study was based on data from the 2008–2009 Household Budget Survey involving a probabilistic sample of 55,970 Brazilian households. The units of study were household aggregates (strata), geographically and socioeconomically homogeneous. Multiple linear regression models were used to assess the relationship between the availability of processed and ultra-processed products and the average Body Mass Index (BMI) and the percentage of individuals with excess weight and obesity in the strata, controlling for potential confounders (socio-demographic characteristics, percentage of expenditure on eating out of home, and dietary energy other than that provided by processed and ultra-processed products). Predictive values for the prevalence of excess weight and obesity were estimated according to quartiles of the household availability of dietary energy from processed and ultra-processed products. Results The mean contribution of processed and ultra-processed products to total dietary energy availability ranged from 15.4% (lower quartile) to 39.4% (upper quartile). Adjusted linear regression coefficients indicated that household availability of ultra-processed products was positively associated with both the average BMI and the prevalence of excess weight and obesity, whereas processed products were not associated with these outcomes. In addition, people in the upper quartile of household consumption of ultra-processed products, compared with those in the lower quartile, were 37% more likely to be obese. Conclusion Greater household availability of ultra-processed food products in Brazil is positively and independently associated with a higher prevalence of excess weight and obesity in all age groups in this cross-sectional study.

Introduction

The prevalence of obesity has reached alarming levels in almost all countries of the world [1], [2]. In Brazil, increasing rates of obesity have been documented by repeated national surveys conducted since the 1970s, with evidence of acceleration in the 2000s in all age groups above 5 years old [3]. A nationwide surveillance system based on telephone interviews, implemented in all state capitals of the country since 2006, indicates an annual increase of around one percent in the prevalence of obesity and of excess weight (overweight plus obesity) among adults [4]. It is now commonly stated that the pandemic of obesity is driven by radical changes in the global food system, and in particular, since the 1980s, by the increased production, availability, affordability and marketing of processed food and drink products [2], [5], [6]. International authorities now increasingly recognize that high levels of consumption of various specific types of processed food or drink products are associated with weight gain and associated chronic non-communicable diseases [7], [8]. Food processing as such has, however, been largely ignored in dietary recommendations, dietary assessments, and epidemiological studies.
One sufficient reason for this has been the nonexistence of clear definitions and classifications of processed foods [9], [10]. In recent years a classification of foodstuffs based on the extent, nature and purpose of food processing has been developed, and results based on the classification have been published. This divides foodstuffs into three groups: foods that are either fresh or minimally processed; processed culinary ingredients; and ready-to-consume food products, either processed or ultra-processed. Processed products are whole foods preserved with salt, sugar or oil, or by other methods such as smoking or curing. Ultra-processed products are essentially industrial formulations mostly or entirely made from industrial ingredients, typically containing little or no whole foods [11], [12].

Studies in different countries show that ready-to-consume food products (processed or ultra-processed), taken together as a group, when compared with foods combined with processed culinary ingredients as made into dishes and meals, are on average more energy-dense, higher in total fat, saturated fats, sugars and salt, and lower in protein and dietary fiber [13], [14]. Ultra-processed products in particular typically have properties that are conducive to overconsumption: they are often hyper-palatable and sold in large portion sizes; are durable and easy to transport, and therefore liable to be consumed as snacks at any time and in almost any place; and are often marketed intensively and persuasively [15], [16], [17]. There is therefore reason to believe that high consumption of ready-to-consume food products in general is a cause of weight gain, obesity and associated disorders and diseases [11], [16]. The only study so far conducted on the subject has reported an association between high consumption of these products and the occurrence of metabolic syndrome in adolescents from a medium-sized Brazilian city [18]. The association with other health outcomes at this time remains unknown.

Tracking and understanding the association between ready-to-consume food products and obesity, and the implications, is crucial. In Canada, between 1938 and 2011, the share of these products as a percentage of dietary energy rose from 28.7% to 61.7% [19]. In Brazil the contribution of dietary energy from these products has also risen: in metropolitan areas from 20.3% in 1987-8 to 32.1% in 2008-9, and nationally from 23.0% in 2002-3 to 27.8% in 2008-9. They continue to displace foods and processed culinary ingredients used together to make freshly prepared meals [20]. In the same time period, the prevalence of obesity has also increased in Brazil [3]. The objective of this study has been to analyze the relationship between household availability of processed and ultra-processed products, separately and together, and the prevalence of excess weight (overweight plus obesity) and obesity in Brazil.

Data Source and Sample

All the data come from the 2008-2009 Household Budget Survey (HBS), conducted by the Brazilian Institute of Geography and Statistics on a probabilistic sample of 55,970 households [21]. The 2008-2009 HBS employed a complex clustered sampling procedure, first selecting census tracts and then selecting households within those tracts.
The selection of census tracts was preceded by an examination of the tracts of the Master Sample of Household Surveys, or Common Sample (containing the pool of the 12,800 tracts of the country), to obtain strata of households with high geographic and socioeconomic homogeneity. The geographic locations of the tracts (region, state, capital city or other, urban or rural) and the years of schooling of the heads of households in each sector were considered, and 550 strata of households that were geographically and socioeconomically homogeneous were selected. The number of tracts randomly selected from each stratum was proportional to the total number of households in the stratum. Next, households were selected in each tract by random sampling without replacement. Interviews were distributed uniformly in each selected stratum during the four quarters of the study, to reproduce seasonal variations in purchases of food and other products [21].

Data Collection

The main information taken from the 2008-2009 HBS included the household purchase of foods and drinks for home consumption and the weight and height of all household members. The purchase records of all foods and drinks for home consumption (approximately 850,000) were recorded in a specially designed booklet by the household members (or by the interviewer, when necessary) over a period of seven consecutive days [22]. Due to the relatively short reference period employed for recording food expenditure in each household, it was decided to use the 550 sample strata as the study unit, for which the pattern of annual food purchases could be more accurately calculated. The mean number of households studied within each stratum was 101.8, ranging from eight to 796 households.

Weight and height of all people residing in the households (n = 190,159) were measured by trained researchers using standard techniques, and recorded in specific questionnaires along with the characteristics of households and their members. Weight was measured using portable electronic scales with a maximum capacity of 150 kilograms (kg) and graduations of 100 grams (g). The value obtained was recorded in kilograms. Height was expressed in centimeters (cm), using recumbent length as the measure in children aged between zero and 23 months and stature in individuals aged 24 months or older. To measure length, infant anthropometers were used with a capacity of up to 105 cm and a scale in millimeters, whereas stature was measured using portable stadiometers with a 200 cm retractable tape measure, accurate to the nearest 0.1 cm. Upon completion of data collection, imputation procedures were applied to deal with nonresponses or erroneous responses associated with values rejected at the critical review stage [3].

Classification of Purchased Food Items

All food items purchased by households, after the exclusion of non-edible parts [23], were converted into energy using the Brazilian Food Composition Table (TACO) [24] or, as necessary, the US official nutrient database for standard reference [25]. The quantity of each purchased food in each household stratum was expressed in daily kilocalories (kcal) per capita. Subsequently, the food items were classified into three groups, according to the nature, extent and purpose of the industrial processing used in their manufacture [11], [12]. The first group is of foods, either fresh or minimally processed.
Examples are grains (also known as cereals), and roots and tubers; legumes (pulses); fruits and vegetables; nuts and seeds; meat, fish, poultry and eggs; and milk and natural yogurt. The second group is of processed culinary ingredients used with foods in the preparation of dishes. These are substances extracted from whole foods. Examples are flours and starches; oils and sugars; and salt (extracted from nature). The third group, the main subject of this study, is of ready-to-consume products. These are either processed or ultra-processed. Processed products are made from foods with the addition of substances such as salt, sugar or oil, and the use of processes such as smoking or curing. Examples include canned or bottled vegetables and legumes preserved in brine; fruits preserved in syrup; tinned fish preserved in oil or salted and smoked; salted and smoked meats; and cheese. Ultra-processed products are formulated predominantly or entirely from industrial ingredients, and typically contain little or no whole food. They often contain preservatives and cosmetic and other additives, and may also contain synthetic vitamins and minerals. Examples include: cake mixes; 'energy' bars; 'instant' packaged soups and noodles; many types of sweetened breads and buns, cakes, biscuits, pastries and desserts; chips (crisps) and very many other types of sweet, fatty or salty snack products; sugared milk and fruit drinks, soft drinks and 'energy' drinks; pre-prepared meat, fish, vegetable or cheese dishes, pizza and pasta dishes, burgers, French fries (chips), and poultry and fish 'nuggets' or 'sticks' ('fingers'); bread and other cereal products; hot dogs and other products made with scraps or remnants of meat; preserves (jams), sauces, and meat, yeast and other extracts; ice-cream, chocolates, cookies (biscuits), candies (confectionery); margarines; canned or dehydrated soups; and infant formulas, follow-on milks and baby products.

Indicators of Obesity

We calculated values of body mass index (BMI) for adults and the elderly, and BMI-for-age for children and adolescents, based on the weight and height measurements taken. These values were expressed as Z-scores and used for the classification of nutritional status, following the recommendations proposed by the World Health Organization for each age group [26], [27], [28]. Three different indicators of obesity were studied: the mean BMI (in Z-score); the prevalence of excess weight, defined as the percentage of people with BMI above 25 kg/m² for adults, or above +2 Z-scores for children under 5 years and above +1 Z-score for children and adolescents (5 to 19 years); and the prevalence of obesity, defined as the percentage of people with BMI above 30 kg/m² for adults, or above +3 Z-scores for children under 5 years and above +2 Z-scores for children and adolescents (5 to 19 years), as sketched in the code below. All indicators were calculated for each stratum (our study unit), including the prevalence of excess weight and obesity, and these outcomes were used in the linear regression models.

Data Analysis

Initially, the amounts of processed and ultra-processed products were estimated. The mean values of excess weight and obesity prevalence and of BMI were calculated according to quartiles of the dietary energy (expressed as calories) of the processed and ultra-processed products as a proportion of the total purchased.
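As a minimal sketch of how the cutoffs above translate into code, assuming the WHO BMI-for-age Z-scores have already been computed against the growth reference for each child's age and sex (the function names and the age-19 adult boundary are illustrative choices, not taken from the paper):

```python
def excess_weight(age_years: float, bmi: float, bmi_for_age_z: float) -> bool:
    """Excess weight (overweight plus obesity) under the cutoffs above."""
    if age_years >= 19:
        return bmi > 25.0            # adults: absolute BMI cutoff (kg/m^2)
    if age_years < 5:
        return bmi_for_age_z > 2.0   # under-fives: WHO BMI-for-age Z-score
    return bmi_for_age_z > 1.0       # 5-19 years

def obesity(age_years: float, bmi: float, bmi_for_age_z: float) -> bool:
    """Obesity under the cutoffs above."""
    if age_years >= 19:
        return bmi > 30.0
    if age_years < 5:
        return bmi_for_age_z > 3.0
    return bmi_for_age_z > 2.0

print(excess_weight(30, 27.2, 0.0))  # True: adult with BMI above 25
print(obesity(8, 17.0, 2.4))         # True: 8-year-old above +2 Z-scores
```

The stratum-level outcomes used in the regressions are then simply the share of residents for whom these predicates are true.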
Multiple linear regression models were used to assess the association between the availability of processed and ultra-processed products (expressed in quartiles of calories), first considered separately, and each one of the indicators of obesity (outcomes). We included in the models socio-demographic variables frequently associated with food consumption and nutritional status, such as region, setting, income, gender and age, these last expressed as the proportions of women, elderly and children in the stratum. Furthermore, we included other confounding variables, such as the percentage of expenditure on eating out of home and complementary dietary energy (derived from foods and processed culinary ingredients). These were the variables available in the database used. Based on these models, the expected values (values predicted by the model) were calculated for the prevalence of excess weight and obesity and for average BMI, according to the quartiles of dietary energy from processed and ultra-processed products, adjusted to the mean values of the confounding variables included in the models. All analyses were carried out using the statistics package Stata/SE version 12.1 (Stata Corp., College Station, USA), taking into account the effects of the complex sampling of the 2008-2009 HBS and enabling extrapolation of the results to the entire Brazilian population.

Ethical Aspects

The present study used secondary data (2008-2009) collected by the IBGE and available for public online consultation. The information contained in the database is confidential, since specific data about each household, such as the identification of household members, address and telephone, are excluded.

Results

The average daily dietary energy household availability was 1581 kcal/person. Of this, processed products contributed 37 kcal (2.4%) and ultra-processed products contributed 386 kcal (25.5%). Table 1 shows that as the contribution of processed and ultra-processed products, as a group, to dietary energy increased from the lower to the upper quartile (from 15.4% to 39.4%), the prevalence of excess weight and obesity also increased (from 34.1% to 43.9%, and from 9.8% to 13.1%, respectively). We first assessed the association between each of the three outcomes and the dietary energy of processed products and of ultra-processed products separately. The results showed that ultra-processed products were associated with the average BMI and with the prevalence of both excess weight and obesity in the adjusted models, whereas processed products were not. Considering this, results in Table 2 and Table 3 are presented for ultra-processed products only.

Table 2 shows the results of the linear regression models for the association between dietary energy from ultra-processed products and excess weight and obesity. Both the crude and the socio-demographic-adjusted regression coefficients show a positive and statistically significant association. The variables most responsible for the changes in the estimates from the crude to the adjusted models are income and setting (urban or rural). Additional adjustment for dietary energy other than from ultra-processed products made no significant difference. The residuals analysis of the linear regression models indicated a reasonable fit (data not shown). Table 3 shows the predicted adjusted values of average BMI and of the prevalence of excess weight and obesity according to the quartiles of household availability of ultra-processed products.
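A hedged sketch of the modeling-and-prediction workflow just described, written in Python/statsmodels rather than the Stata the authors actually used; the data frame, every column name, and the synthetic values are hypothetical stand-ins for one row per stratum, and the survey-design weighting and the setting/region covariates are omitted for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 550  # one row per household stratum, as in the survey design

# Synthetic stand-in data; all column names here are hypothetical.
df = pd.DataFrame({
    "obesity_pct": rng.normal(11, 2, n),
    "upf_quartile": rng.integers(1, 5, n),   # quartile of ultra-processed kcal
    "log_income": rng.normal(7, 0.5, n),
    "pct_women": rng.normal(51, 3, n),
    "pct_elderly": rng.normal(10, 4, n),
    "pct_children": rng.normal(25, 6, n),
    "pct_eat_out": rng.normal(18, 5, n),
    "other_kcal": rng.normal(1150, 150, n),  # complementary dietary energy
})

model = smf.ols(
    "obesity_pct ~ C(upf_quartile) + log_income + pct_women + pct_elderly"
    " + pct_children + pct_eat_out + other_kcal",
    data=df,
).fit()

# Adjusted prevalence per quartile: vary the quartile while holding every
# confounder at its sample mean, as in the paper's predictive values.
grid = pd.DataFrame({"upf_quartile": [1, 2, 3, 4]})
for col in ["log_income", "pct_women", "pct_elderly", "pct_children",
            "pct_eat_out", "other_kcal"]:
    grid[col] = df[col].mean()
print(model.predict(grid))  # predicted prevalence by quartile
```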
People living in household strata belonging to the upper quartile (average 564 kcal) of consumption of ultra-processed products, compared with people in the lower quartile (average 220 kcal), were 37.4% more likely to be obese (a prevalence of 13.6% versus 9.9%; (13.6 − 9.9)/9.9 ≈ 0.374).

Discussion

We believe that this study is the first to examine the relationship between consumption of ultra-processed products and obesity. Using a nationally representative sample of the Brazilian population of all age groups, a positive and independent association has been found between the household availability of ultra-processed products and obesity. These results are also relevant globally: ultra-processed products dominate the food supplies of many high-income countries, and production and consumption of these products are now rapidly increasing in middle-income countries and settings [29].

In this study we used HBS data on food purchases. We believe that our data are a reasonable estimate of intake, because previous studies indicate considerable agreement between data from HBS and individual food consumption surveys [30], [31]. Foods and products bought and consumed out of home were not included in the survey. To account for this potential bias, the percentage of food expenditure allocated to food consumed out of home was considered. This variable, adjusted for income, was taken as a proxy for dietary energy consumed out of home, which in Brazil at the time was estimated at 18% of dietary energy [32]. Our study also does not take into account household food wastage. However, ultra-processed products are usually durable and have long shelf lives, and therefore generate little or no waste; so our data most probably underestimate the availability of ultra-processed products in Brazil.

Physical activity and smoking are not usually assessed in household budget surveys, and so could not be included as potential confounders for the association between consumption of ultra-processed products and obesity. However, previous studies in Brazil have found that physical activity patterns are strongly dependent on variables which were effectively controlled in our analyses, including gender, age, family income, urban or rural setting, and the country's five regions [33], [34]. Also, the nationwide surveillance system for chronic diseases has shown that education (a proxy for income) and gender are related to smoking, and both these variables were included in the analyses [4]. In any case, as is usual in observational studies, residual confounding cannot be ruled out.

Due to the inclusion of all age groups in the analyses and the lower predictive value of BMI for the assessment of obesity in the elderly, we conducted a sensitivity analysis excluding strata with more than 20% of individuals aged 65 years or older. A similar analysis was conducted considering only individuals older than 20 years. No changes in the magnitude or statistical significance of the coefficients were found.

Table 1. Indicators of obesity among all age groups according to the share of processed and ultra-processed food products in total household food availability (Brazil, 2008-2009). Classification follows the recommendations of the World Health Organization for each age group [26], [27], [28]. *p<0.05 for linear regression across quartiles of dietary energy contribution of processed and ultra-processed products.

Table 2. Results from multiple linear regression models for the association between household availability of ultra-processed food products (kcal/person/day) and obesity indicators (Brazil, 2008-2009). Classification follows the recommendations of the World Health Organization for each age group [26], [27], [28].

Finally, an additional sensitivity analysis
was done with the exclusion of strata with fewer than 30 households (2.55%), but this did not change the results or conclusions of the study; for this reason, we used the original number of strata (n = 550) in the analyses. Furthermore, residual confounding due to imperfect measurement of income is also possible, since income was reported by the families. We believe this problem has been attenuated because the income data include all sources of income from all household members and were collected by trained interviewers with standardized and carefully detailed questionnaires.

Our findings are consistent with the few studies that have examined the impact of food processing, or of products that can be classified as ultra-processed, on obesity. In Guatemala, a study conducted on a representative sample of households investigated the association between the prevalence of overweight/obesity and household food expenditure on processed food products. Using a somewhat different classification from ours, this study reported that a 10% increase in the proportion of "partially processed" and "highly processed" foods in total food expenditure was associated with an increase in mean BMI of around 4% [35]. Most items included in the "partially processed" and "highly processed" groups belong to our group of ready-to-consume products. In the US, data from three cohorts (followed for 12 to 20 years) showed an association between weight gain and increased consumption of various ultra-processed products, including French fries (also known as chips), potato chips (crisps), sweetened drinks, and processed meats, whereas several fresh and minimally processed foods were considered protective against weight gain [36]. Other prospective studies have confirmed an association between specific ready-to-consume products and weight gain, as well as other negative health outcomes. A 15-year prospective study examining the consumption of fast-food snacks by North American young adults showed that changes in the frequency of weekly consumption of these products were directly associated with changes in body weight [37]. Another study, conducted in five European countries over 5.5 years, found that a daily rise of 100 kcal in the consumption of ultra-processed products such as white bread, processed meats, and soft drinks was positively associated with an increase in abdominal adiposity [38]. Finally, regular consumption of sweetened soft drinks is now generally agreed to increase the incidence of overweight and obesity, and is also associated with an increased incidence of disorders and diseases such as type-2 diabetes, cardiovascular diseases, hypertension, inflammation, atherogenic dyslipidemia, hyperuricaemia, gout, gall stones and renal diseases [39], [40], [41].

We suggest that the association with obesity found in our study is a result of many characteristics of ultra-processed products. These include their nutritional profile: as a group they are in general more energy-dense, and more fatty and more sugary, than the combination of foods and culinary ingredients made into freshly prepared meals [13], [14].
As a group they also stimulate overconsumption (through their hyper-palatability, large portion sizes, convenience, and aggressive and persuasive marketing strategies) and shape the way of eating (they can be consumed at any time and in almost any place) [15], [16], [17]. The absence of an association between processed food products and obesity is probably due to these characteristics, which are unique to ultra-processed products, rather than to their nutritional profile.

If the findings of this study are supported by findings from other countries, they have important implications for public health policy. In recent decades, very large food and drink corporations, including transnational ones, most of whose products are ultra-processed, have rapidly become much more prominent [10], [42]. This study in Brazil shows that increased consumption of ultra-processed food products is correlated with increased prevalence of excess weight and obesity. This, we believe, is because of the nature of these products and their intrinsic characteristics. We suggest that the prevalence of excess weight and obesity can be controlled only if the production and consumption particularly of ultra-processed products are controlled and reduced.

Table 3. Predictive values for obesity indicators according to the household availability of ultra-processed food products (kcal/person/day) (Brazil, 2008-2009). Columns: obesity indicator; availability of ultra-processed products (mean values according to quartiles); mean BMI (Z-score)(2); prevalence of excess weight (%)(1,2); prevalence of obesity (%)(1). (1) Classification follows the recommendations of the World Health Organization for each age group [26], [27], [28]. (2) Adjusted indicators correspond to the predicted values yielded by Model 3 (adjusted for log of income, proportion of women in the stratum, proportion of elderly in the stratum, proportion of children in the stratum, setting, region, percentage of expenditure on eating out of home, and complementary calories, including calories from processed food products), set at the mean values of the confounding variables.
CURTOBACTERIUM FLACCUMFACIENS PV. FLACCUMFACIENS DETECTION IN BEAN SEEDS USING A SEMI-SELECTIVE CULTURE MEDIUM

The bacterial wilt caused by Curtobacterium flaccumfaciens pv. flaccumfaciens is currently considered one of the most important bacterial bean diseases in Brazil. One of the most effective control methods against this disease is the use of healthy seeds. However, no methods are known that could be routinely used to detect this bacterium in bean seeds under Brazilian conditions. The aim of this work was to evaluate qualitative and quantitative detection methods for Curtobacterium flaccumfaciens pv. flaccumfaciens in naturally-infected bean seeds, and the detection of this pathogen in thirty bean seed samples, by sowing onto a semi-selective culture medium the leachate obtained from soaked bean seeds. Both the qualitative and the quantitative methods were effective for detecting the presence of the bacterium in the seed samples analysed. The qualitative method proved more practical for routine use; of the thirty bean seed samples analyzed by this method, fifty percent were infected with Curtobacterium flaccumfaciens pv. flaccumfaciens.

INTRODUCTION

Bean bacterial wilt, caused by Curtobacterium flaccumfaciens pv. flaccumfaciens (Cff), was first described in the USA in 1921 by Hedges (6), causing serious problems to the crop. This disease occurs in several European countries, as well as in Australia, Canada, Mexico, and Colombia (2). In Brazil, bacterial wilt has been verified in several regions, resulting in losses in bean production (8, 9, 18).

Typical symptoms of the disease in bean plants are mainly wilting, vascular darkening, and death of the above-ground part of the plant (7). Under field conditions during mild-temperature seasons, infected bean plants have developed that lack bacterial disease symptoms. This was observed by Thomas and Graham (17), who isolated Xanthomonas axonopodis pv. phaseoli and Cff from bean plant stems without external symptoms. Other plants, in addition to beans, have been reported as Cff hosts, including pea, soybean, Phaseolus lunatus, Lupinus polyphyllus, Vigna cylindrica, V. sesquipedalis, Dolichos lablab, Phaseolus radiatus, P. lathyroides, P. calcaratus, and P. acutifolius (7, 15).

In bean plants, control measures employed against bacterial wilt include the use of healthy seeds, since Cff survives in, and is transmitted by, seed (13, 15). Burkholder (3) verified that Cff survived for 24 years in bean seeds stored under natural environmental conditions.

In practice, few methods are described in the literature for Cff detection in bean seeds. The European and Mediterranean Plant Protection Organization recommends a visual examination of seeds, direct or indirect isolation of the bacterium, and a serological test for Cff detection in bean seeds for quarantine purposes (4). In Japan, again for quarantine purposes, Mizuno (11) and Mizuno and Kawai (12) recommended the isolation of Cff from bean seeds on semi-selective culture media, which use specific carbon sources and antimicrobial agents, together with serological tests; but these media are very expensive for routine use in Brazil (10). Tegli et al. (16) developed specific primers for Cff that can be used for the detection of this bacterium in naturally-infected bean seeds by the PCR technique.
Considering the need for a practical, effective, and low-cost method for the routine analysis of bean seeds in Brazil, the present work was carried out using Cff isolation on a semi-selective culture medium developed for this purpose (10).

Comparison of methods for Curtobacterium flaccumfaciens pv. flaccumfaciens detection in bean seeds

Quantitative and qualitative Cff analyses were carried out on six 200 g samples of bean seeds (approximately 1,000 seeds each), cultivar Campeão II, from a commercial field where the occurrence of bacterial wilt had been verified. Each 200 g sample was soaked in 600 mL of distilled, sterilized water for 24 h under refrigeration (5 °C). Following soaking, the seeds were manually stirred in the flasks and the leachate was sampled. For the quantitative analysis, 100 µL of the leachate obtained from the seeds, and of its dilutions (10^-1, 10^-2, and 10^-3), were spread onto the surface of CFFSM culture medium (10) with the aid of a Drigalski spatula (see the sketch after this section for the standard plate-count arithmetic). Four Petri dishes were sown for each concentration. For the qualitative evaluation, the same seed leachate was sown by streaking with a loop onto the surface of the CFFSM semi-selective culture medium. Four Petri dishes were sown, in two halves per plate, totaling eight halves.

The Petri dishes were incubated at 28-30 °C for 96 to 120 h, and colonies with cultural characteristics resembling Cff (circular colonies with yellow to slightly orangish coloration, casein hydrolysis, and slight fading of the dye around the colonies) were compared against the growth of a pure standard isolate (Feij-2634). Six bacterial isolates from each seed sample were selected for identification.

Curtobacterium flaccumfaciens pv. flaccumfaciens isolation from naturally infected bean seeds

Thirty seed samples from several regions of Brazil were analyzed for the presence of Cff. Twenty-two samples were analyzed twice, and eight samples were analyzed once. Five 200 g subsamples were evaluated for each seed sample. Each subsample was transferred into a flask containing 600 mL of distilled, sterilized water. The seeds were left soaking for 24 h under refrigeration. After soaking, the seeds were manually stirred and the resulting suspension was plated by streaking on Petri dishes containing CFFSM culture medium. Four Petri dishes were sown for each 200 g subsample, in two halves per plate, totaling eight halves per subsample. The Petri dishes were incubated for 96 to 120 h at 28-30 °C. After incubation, the plates were examined for bacterial colonies with cultural characteristics resembling Cff, compared with a pure standard isolate (Feij-2634). A given seed sample was considered to carry Cff when Cff growth was observed in at least one of the eight sown fields. Two to four bacterial isolates from each positive sample were selected and submitted for identification.

Identification of bacterial isolates obtained from seeds

Cff-suspected bacterial colonies were initially purified on a nutrient sucrose-agar medium (NSA) containing sodium chloride at 7%, and evaluated using Gram staining, the KOH test, a pathogenicity test on cultivar Pérola bean plants, and identification by the Biolog® method (Biolog, Hayward, USA), as previously described by Maringoni et al. (10).
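As a rough illustration of the two read-outs described above (the paper reports detection rather than explicit titers, so the CFU arithmetic, the example counts, and all names here are illustrative, in Python):

```python
def cfu_per_ml(colony_count: int, dilution: float, volume_plated_ml: float = 0.1) -> float:
    """Standard plate-count estimate of Cff density in the undiluted leachate.

    `dilution` is the dilution factor of the plated suspension (1.0 for the
    raw leachate; 1e-1, 1e-2, 1e-3 for the serial dilutions), and 0.1 mL
    corresponds to the 100 uL spread on each CFFSM plate.
    """
    return colony_count / (volume_plated_ml * dilution)

def sample_carries_cff(growth_in_fields: list[bool]) -> bool:
    """Qualitative rule from the text: a seed sample is positive when
    Cff-like growth appears in at least one of the eight sown half-plates."""
    return any(growth_in_fields)

# e.g., 42 Cff-like colonies on a 10^-2 plate -> 42 / (0.1 * 0.01) CFU/mL
print(cfu_per_ml(42, 1e-2))                        # 42000.0
print(sample_carries_cff([False] * 7 + [True]))    # True
```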
RESULTS AND DISCUSSION

The presence of Cff in the bean seeds analyzed was observed regardless of the method used, quantitative or qualitative (Table 1). Both methods were effective for isolating Cff-suspected colonies from the seeds. All isolates were identified as Cff by the Biolog® method, regardless of their pathogenicity (Table 1). Three Cff isolates were not pathogenic to bean plants.

The analysis of thirty bean seed samples from several localities in Brazil revealed that fifty percent of them were infected with Cff (Table 2). All suspected bacterial isolates were identified as Cff by the Biolog® method. Seven of the ninety-seven bacterial isolates submitted to the pathogenicity test were non-pathogenic to cultivar Pérola bean plants (Table 2). All suspected isolates showed cultural characteristics similar to Cff on the CFFSM culture medium (yellow colonies, casein hydrolysis, and Congo red fading; Figure 1), grew on NSA medium containing sodium chloride at 7%, were Gram-positive rods, and did not form a string in the KOH test.

Although a specific standard method for Cff detection in bean seeds does not exist in Brazil, especially for routine analysis, the methodology employed here was effective for isolating this bacterium from naturally-infected bean seeds. Plating on a semi-selective culture medium is used for the detection of a number of phytopathogenic bacteria in the seeds of several crops, and usually the seed leachate dilutions, whether or not concentrated by centrifugation, are sown onto specific culture media, as for example for Pseudomonas savastanoi pv. phaseolicola in bean seeds (19), Xanthomonas translucens pv. translucens in wheat seeds (14), and Clavibacter michiganensis subsp. michiganensis in tomato seeds (5). Although the literature does not contain references to the direct sowing of seed leachate suspensions by streaking onto the surface of the culture medium (qualitative analysis) in order to isolate phytobacteria from seeds, this procedure proved viable, since the isolation of Cff from the seed samples analyzed was consistent.

Figure 1. Cultural characteristics of Curtobacterium flaccumfaciens pv. flaccumfaciens isolated from bean seeds on the CFFSM culture medium.

Table 1. Recovery of Curtobacterium flaccumfaciens pv. flaccumfaciens from bean seeds by two methods. (a) Range of the similarity index value of the isolates compared against the C. flaccumfaciens database.

Table 2. Recovery of Curtobacterium flaccumfaciens pv. flaccumfaciens from naturally-infected bean seeds collected from several localities in Brazil.
The Solution Structure of Human Hepcidin, a Peptide Hormone with Antimicrobial Activity That Is Involved in Iron Uptake and Hereditary Hemochromatosis*

The antibacterial and antifungal peptide hepcidin (LEAP-1) is expressed in the liver. This circulating peptide has recently been found to also act as a signaling molecule in iron metabolism. As such, it plays an important role in hereditary hemochromatosis, a serious iron overload disease. In this study, we report the solution structures of the hepcidin-20 and -25 amino acid peptides determined by standard two-dimensional 1H NMR spectroscopy. These small cysteine-rich peptides form a distorted β-sheet with an unusual vicinal disulfide bridge found at the turn of the hairpin, which is probably of functional significance. Both peptides exhibit an overall amphipathic structure with six of the eight Cys involved in maintaining interstrand connectivity. Hepcidin-25 assumes major and minor conformations centered about the Pro residue near the N-terminal end. Further NMR diffusion studies indicate that hepcidin-20 exists as a monomer in solution, whereas hepcidin-25 readily aggregates, a property that may contribute to the different activities of the two peptides. The nuclear Overhauser enhancement spectroscopy spectra of the hepcidin-25 aggregates indicate an interface for peptide interactions that again involves the first five residues from the N-terminal end.

Hepcidin is not related to any other previously known peptide family. Independently, hepcidin mRNA was found to be induced in the livers of mice by iron overload or treatment with lipopolysaccharide (3). The likely role of hepcidin in iron metabolism was further suggested by the observation that mice with disruption of the gene encoding the transcription factor USF2 failed to produce hepcidin mRNA and developed spontaneous visceral iron overload (5). Because the USF2 gene is located immediately upstream of the two murine hepcidin genes, and its disruption by neo gene insertion exerts a detectable effect even in a heterozygous state, it is thought that the upstream neo insertion exerts a cis-inhibitory effect on the downstream genes. In contrast, mice engineered to overexpress hepcidin experience severe iron deficiency anemia (6). Based on these observations, it has been suggested that hepcidin is the long-sought signaling molecule that decreases iron absorption in the small intestine and iron release from stores in macrophages (7), in response to increased visceral iron stores or inflammation. The increase of hepcidin by inflammatory stimuli could serve the host defense strategy of denying essential iron to infecting microbes.

Analysis of the sequence of hepcidin (DTHFPICIFCCGCCHRSKCGMCCKT) revealed a very high percentage of cysteines (eight cysteines in both the 20- and 25-residue peptides). This is an unusually high number of Cys when compared with the composition of other cysteine-rich antimicrobial peptides such as the defensins (8), tachyplesin (9), protegrin (10), and, more recently, snakin (11). Mass spectrometry and chemical analysis have revealed that all of the Cys are bridged in the sequence, making this a highly constrained peptide (1). CD spectroscopy in the same study indicated the presence of a loop and a distorted β-sheet. Furthermore, hepcidin-20 was found to be generally more active against Staphylococcus aureus, Staphylococcus epidermidis, group B Streptococcus, and Candida albicans.
Clearly, a complete three-dimensional structural elucidation could give insight into the recognition of this peptide in both an antimicrobial and an iron-regulatory capacity. Here we present an investigation of the structure and a study of the aggregation properties observed by 1H NMR spectroscopy, to show the amphipathic character as well as the unique structural characteristics of the 20- and 25-amino acid peptides (hepcidin-20 and hepcidin-25, respectively).

15N-Phe and 15N-Gly were used in the Fmoc (N-(9-fluorenyl)methoxycarbonyl) synthesis process. Both isotopic forms of the refolded synthetic hepcidin had the predicted masses by mass spectrometry and, when compared with the natural hepcidin (20- and 25-amino acid forms) isolated from urine (1), migrated identically in 12.5% acid-urea PAGE and had an identical retention time on C18 reverse-phase high-performance liquid chromatography.

NMR Spectroscopy

Approximately 2 mg of the 20-amino acid peptide was dissolved in 550 µl of 90:10 H2O:D2O. The unadjusted pH was 3.2, and the concentration was determined to be 0.783 mM using UV absorption at 280 nm and a calculated molar extinction coefficient based on the number of half-cystine residues in the peptide (480 M^-1 cm^-1). The NMR sample of the 25-residue peptide was prepared by dissolving 6.8 mg of purified peptide in 0.5 ml of 40 mM phosphate buffer, pH 3.5 (90% H2O:10% D2O). The concentration of the original aqueous sample was determined to be 1.6 × 10^-3 M using UV absorption at 280 nm.

To determine the NMR structures of both the 20- and 25-amino acid peptides, various NMR field strengths were used. Two-dimensional NOESY (mixing time 200 ms) and TOCSY (mixing time 120 ms) spectra were acquired at 25 °C on Bruker DRX 500 MHz and DRX 700 MHz NMR spectrometers. A separate NOESY spectrum was acquired at 13 °C at 500 MHz. The same experiments were acquired with the D2O sample using the INOVA 800 MHz spectrometer at the National High Field Nuclear Magnetic Resonance Centre (University of Alberta, Edmonton, Alberta, Canada). The two-dimensional NOESY and TOCSY experiments were also repeated at 400 MHz without 15N decoupling. All two-dimensional experiments for the 25-amino acid peptide were 15N-decoupled during the evolution and acquisition periods. A series of NOESY spectra was also collected over a range of mixing times (50, 100, 150, 200, 300, 400, 500, and 600 ms) for the two samples to monitor the NOE buildup. The spectra were acquired at 500 MHz with 2048 × 600 data points in the directly and indirectly detected dimensions, respectively, and spectral widths of 6009 Hz. The 700 MHz spectra were acquired with 2048 × 600 data points, with 80 scans per increment. At 800 MHz, the data were collected with 2048 × 600 data points in the directly and indirectly detected dimensions, respectively, and spectral widths of 6009 Hz. Water suppression was achieved using excitation sculpting (12).

The two-dimensional TOCSY and NOESY NMR spectra were processed with NMRPipe 3.4 and analyzed with the NMRView 4.1.3 (13) software package, on workstations running the Red Hat 7.1 version of the Linux operating system. The two-dimensional data were zero-filled once in each dimension and Fourier-transformed with a shifted, squared sine-bell function. All NMR spectra were referenced externally to sodium 3-(trimethylsilyl)-1-propanesulfonate at 0.0 ppm. To determine which amides were in slow exchange, the sample was dissolved in D2O.
Immediately after dissolution, a series of one-dimensional 1H spectra was acquired over the following 24 h. Twenty minutes after the first 1H spectrum was acquired, a 1-h two-dimensional TOCSY spectrum was collected. A 1H-13C heteronuclear single quantum coherence (HSQC) experiment was acquired at 700 MHz for hepcidin-20 with 1024 × 128 data points over spectral widths of 9765 × 4401 Hz, referenced to internal dioxane. A 1H-15N HSQC was acquired at 500 MHz using hepcidin-25 to confirm the identity of the 15N-labeled amino acids.

NMR Diffusion

For the NMR diffusion experiments, each sample was dissolved in D2O, and peptide diffusion was monitored relative to internal dioxane (14, 15). Approximately 5 µl of a 1% solution of dioxane in D2O was added to the sample as an internal standard. Pulsed field gradient diffusion experiments were collected with the PG-SLED sequence (16). The data were acquired at 700 MHz for hepcidin-20 and 400 MHz for hepcidin-25, using NMR probeheads equipped with proton-observe and 3-axis gradient coils. Samples of peptide were dissolved in 100 µl of D2O in a Shigemi tube (Shigemi Co., Ltd., Tokyo, Japan). The data were acquired by collecting 56 scans of 16,000 data points at each gradient amplitude, incrementing the gradient strength in 64 steps from 1.25% to 80% of the maximum output of the linear gradient amplifier. After data collection was completed, hepcidin-25 was diluted with 100 µl of D2O, and the data were reacquired.

To process the data, a 1 Hz line-broadening value was applied before Fourier transformation with the Bruker XWINNMR package (version 2.6 at 400 MHz and version 3.0 at 700 MHz). From the resulting series of spectra, no fewer than five peptide resonances were chosen, and the decay of the peak intensities as a function of gradient strength was evaluated using the XWINNMR package. The one-dimensional spectra of the 20-amino acid peptide indicated that no spectral overlap occurred between the dioxane resonance and the peptide. However, the dioxane signal overlapped with a portion of the 25-amino acid peptide spectrum. Therefore, an average of the five peptide diffusion rates was used to fit the decay of the reference dioxane peak to a biexponential function. Values for the hydrodynamic radii were calculated using the previously determined empirical relationship (14).

Structure Calculation

The assignment of the protein chemical shifts was determined using standard methods. Upon completion of the proton assignments, NOE-based distance restraints were collected from the NOESY spectra and automatically allocated to close, medium-range, and long-distance interactions based upon intensity. A broad dihedral angle restraint was used to confine the backbone angles (except for Gly) to the allowed Ramachandran space. The structures were determined using the programs CNS 1.1 (17) and ARIA (18). ARIA calculations were initiated using default parameters. In the final ARIA run, the number of structures generated in the seventh and eighth iterations was increased to 40 and 100, and in the eighth iteration the 20 lowest-energy structures were used for statistical analyses. For hepcidin-20, restraints were used from two-dimensional NOESY spectra at mixing times of 200, 400, and 600 ms at 700 MHz; 400 ms at 500 MHz; and 400 ms in D2O solution at 400 MHz. For hepcidin-25, constraints were used from the two-dimensional NOESY spectrum collected with a mixing time of 150 ms at 800 MHz.
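The diffusion analysis above reduces to fitting a Gaussian decay of peak intensity versus gradient strength and ratioing the peptide's decay rate against that of the dioxane standard. A minimal sketch of that arithmetic, assuming idealized, already-integrated peak intensities (the 2.12 Å effective hydrodynamic radius of dioxane follows the empirical calibration cited as ref. 14; the function and variable names are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def pg_sled_decay(g, i0, d):
    # PG-SLED attenuation: I(g) = I0 * exp(-d * g^2); the apparent decay
    # rate d is proportional to the translational diffusion coefficient D.
    return i0 * np.exp(-d * g ** 2)

def hydrodynamic_radius(g, i_peptide, i_dioxane, r_dioxane=2.12):
    """Peptide hydrodynamic radius (angstrom) relative to internal dioxane.

    By Stokes-Einstein, D = kT / (6 * pi * eta * Rh), so D (and hence the
    fitted d) scales as 1/Rh, giving Rh_pep = r_dioxane * d_diox / d_pep.
    """
    (_, d_pep), _ = curve_fit(pg_sled_decay, g, i_peptide,
                              p0=(i_peptide[0], 1e-4))
    (_, d_diox), _ = curve_fit(pg_sled_decay, g, i_dioxane,
                               p0=(i_dioxane[0], 1e-4))
    return r_dioxane * d_diox / d_pep
```

A monomeric peptide returns an Rh close to the value expected for its chain length, while aggregation shows up as an anomalously large radius, which is the comparison drawn for hepcidin-20 versus hepcidin-25 below.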
Molecular structures were viewed using MOLMOL (19) or GRASP (20) and analyzed using PROCHECK (21).

Sedimentation Equilibrium Analyses

Samples were dialyzed against 100 mM NaCl and 50 mM citrate buffer at pH 3.5. Data were obtained at 20 °C using a Beckman XL-I ultracentrifuge equipped with absorbance optics, at spinning speeds of 26,000, 32,000, 38,000, and 44,000 rpm.

Light Scattering

Dynamic light scattering data were obtained at 25 °C with a DynaPro MSTC light scattering instrument (Protein Solutions Inc., Lakewood, NJ) using a laser wavelength of 827.6 nm. Before data acquisition, both a blank and the 0.783 mM hepcidin-20 peptide sample were filtered through 0.02 µm Anodisc 13 (Whatman International Ltd., Maidstone, United Kingdom) filters. For each sample, 100 data points were collected, and the hydrodynamic radii of the prominent species were evaluated using the Stokes-Einstein equation included in the Dynamics software (version 6.1.06).

RESULTS AND DISCUSSION

Nomenclature

To simplify the numbering, the cysteine residues will be referred to by their position in each peptide (i.e. first, second, third, and so forth), with numbering beginning at the N-terminal end of the peptide.

Spectral Assignment and Structure Calculation for Hepcidin-20

The shorter of the two peptides proved the more straightforward to assign, because of the good dispersion and well-resolved peaks in its NMR spectra. Using standard methods, near-complete proton assignments were obtained. The amide proton resonances for the fourth and fifth Cys residues could not be observed in the two-dimensional TOCSY at room temperature or at lower temperatures for this peptide. Only at 500 MHz could very broad, low-intensity correlations be observed between the αH and β-protons of these two residues. The inability to resolve the two amide resonances is consistent with an exchange process, occurring on the NMR time scale, involving the fourth and fifth cysteine residues. In the two-dimensional TOCSY spectrum, two slightly offset amide correlations were observed for the Thr20 residue, consistent with two separate conformations for this C-terminal amino acid. There were several α-protons with chemical shifts consistent with β-sheet structure. To fully evaluate the chemical shift analysis using the chemical shift index, a 1H-13C HSQC experiment was acquired. The α-13C chemical shift values (22) (except for the fourth and fifth cysteine α-13C resonances, which were not detected) are shown in Fig. 1B. The evaluation of the α-proton chemical shifts using the chemical shift index (23) is shown in Fig. 1C. Together, the indices show β-sheet character for significant portions of this peptide.

A previous study confirmed that all eight Cys residues formed intramolecular bonds, but the identity of the pairings between individual Cys residues could not be determined (1). Consequently, results from every NMR experiment were carefully examined to help elucidate the location of the four disulfide bridges. The NOE interactions indicated that the N- and C-terminal ends of the peptide sequence were interacting. Therefore, the initial structural calculation using ARIA contained only these few NOE constraints, along with sequential constraints provided by a single two-dimensional NOESY spectrum, without assigned disulfide linkages or hydrogen bond assignments (24). The lowest-energy structures obtained from these calculations assisted in determining the identity of the other ambiguous assignments.
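The chemical-shift-index evaluation mentioned above follows a simple rule: compare each observed α-proton shift with its residue's random-coil value and read sustained downfield deviations as β-strand. A toy version of that bookkeeping (the random-coil values shown are illustrative placeholders; a real analysis would use the published Wishart tables covering all 20 residue types):

```python
# Illustrative random-coil 1H-alpha shifts in ppm (placeholders only).
RANDOM_COIL_HA = {"G": 3.97, "A": 4.35, "T": 4.35, "C": 4.65, "F": 4.66}

def chemical_shift_index(sequence, observed_ha, tol=0.1):
    """Per-residue CSI: +1 (downfield of random coil, beta-strand-like),
    -1 (upfield, helix-like), or 0 (within +/- tol ppm of random coil)."""
    indices = []
    for aa, shift in zip(sequence, observed_ha):
        delta = shift - RANDOM_COIL_HA[aa]
        indices.append(1 if delta > tol else -1 if delta < -tol else 0)
    return indices

# A run of +1 values over consecutive residues is read as beta-sheet,
# the pattern reported here for both strands of the hepcidin hairpin.
print(chemical_shift_index("FCA", [4.92, 4.80, 4.30]))  # -> [1, 1, 0]
```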
As more constraints were identified, it became obvious that two of the disulfide linkages were between the first and eighth Cys and between the third and sixth Cys residues (Fig. 1E). The first and eighth Cys showed strong αH-αH and αH-βH interactions, whereas the third and sixth Cys showed an interaction between the αH and βH protons of these two cysteines. Additional constraints for the ARIA calculations came from backbone amide NH-αH dihedral J-coupling values measured from the amide protons in the fingerprint region of the two-dimensional TOCSY experiment.

Whereas the NOE evidence alone did not allow a decisive assignment of the remaining two disulfide bonds, several other independent observations demonstrate the second-to-seventh and fourth-to-fifth disulfide pairings. Results from the D2O exchange experiments indicated that the five backbone amide protons that were slow to exchange were those of Ile3, Phe4, Cys5, Gly15, and Cys17. Introducing the three possible pairings for the two disulfide bonds into the ARIA calculation produced structures in which the Ile3 and Cys17 amide protons could establish an antiparallel cross-strand interaction by forming hydrogen bonds with the carbonyl oxygens of the opposing residues. Likewise, a similar double hydrogen bond between Cys5 and Gly15 could easily be seen in the resulting structures. Therefore, these four hydrogen bond restraints were introduced, and the structures were recalculated. However, these additional hydrogen bond constraints did not reduce the overall energy difference between the three remaining possible disulfide arrangements.

Although a rough sketch of the emerging β-sheet pattern would lead to the conclusion that the second and seventh and the fourth and fifth cysteines would be the reasonable choices for the formation of disulfide bonds, visual comparison of the three possible structures indicated that the peptide could easily alter conformation to form one of the other two puckered shapes. Several independent observations supported a structure bridging the second-to-seventh and fourth-to-fifth Cys residues. Assignment of the TOCSY and NOESY spectra from all of the various field strengths and conditions indicated the absence of the backbone amide proton correlations in the fingerprint region only for the fourth and fifth Cys residues. In addition, the broad correlations for the α- and β-protons indicated that exchange is occurring at these two residues on the NMR time scale. The formation of a vicinal cysteine disulfide bridge would result in an eight-membered ring that would be fluxionally mobile. This unusual connectivity, although rare, is not unique among naturally occurring systems (25). The other two possible bridges would link either the second and fourth or the second and fifth Cys residues together. Given the broad line shapes of the amide, α-, and β-protons of the fourth and fifth cysteines, it would be expected that the cysteines involved in the other half of the disulfide bridge would also show indications of resonance broadening caused by chemical exchange if bonded to these cysteines. Such broadening was inconsistent with the uniform sharpness of the resonances of the other peptide protons.
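The "three possible pairings" counted above are easy to verify by brute force. A small illustrative script, not part of the original analysis: once the NOE data fix the 1-8 and 3-6 bridges, the remaining cysteines 2, 4, 5, and 7 admit exactly three perfect matchings.

```python
def disulfide_pairings(cysteines):
    """Yield every way to join an even-sized list of cysteines into bridges."""
    if not cysteines:
        yield []
        return
    first, rest = cysteines[0], cysteines[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in disulfide_pairings(remaining):
            yield [(first, partner)] + tail

# With the 1-8 and 3-6 bridges fixed, only cysteines 2, 4, 5, 7 remain free:
for option in disulfide_pairings([2, 4, 5, 7]):
    print(option)
# [(2, 4), (5, 7)]
# [(2, 5), (4, 7)]
# [(2, 7), (4, 5)]   <- the pairing ultimately supported by the data
```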
Further evidence in favor of disulfide connectivity between the fourth and fifth cysteines comes from a comparison of the inter-residue distances between the Cys5 α- and Cys17 β-protons: the NOE cross-peak intensity detected in the three NOESY spectra collected at 700 MHz is inconsistent with the large distance expected for a disulfide bond between the second and fourth cysteines. Additional evidence supporting linkage of the fourth cysteine to the fifth comes from the slowly exchanging amide proton of Phe4 of hepcidin-20. Inspection of the structures indicates that a possible intramolecular hydrogen bond could only form with the carbonyl oxygen of the first cysteine. The overall energy range calculated for the 20 lowest-energy structures, independently and in a water box, indicates that introduction of the Phe4 hydrogen bond is consistent with the conformation created in the structure resulting from the fourth-to-fifth disulfide pairing. These observations indicate that the cysteines link in the following fashion: first to eighth, second to seventh, third to sixth, and fourth to fifth. This arrangement creates a rare vicinal cysteine linkage.

Spectral Assignment and Structure Calculation for Hepcidin-25

Unlike for hepcidin-20, inspection of the 1H two-dimensional TOCSY NMR spectra indicated that more amide proton correlations were present in the fingerprint region than could be explained by a single structural conformation (data not shown). Furthermore, some of the correlations appeared somewhat broadened or not clearly resolved. The NOESY data lacked an abundance of backbone amide proton to side chain inter-residue correlations at either 500 or 800 MHz. Clearly, the majority of the correlations in the fingerprint region indicated sequential assignment of Hα(i) or Hβ(i) to HN(i+1). The bulk of the cross-strand connectivities were assigned from the few remaining correlations. As with hepcidin-20, the hairpin β-sheet structure agreed well with the two-dimensional NOESY correlations observed (Fig. 2C). After assignment of each resonance, a minimum of two conformations emerged, with two sets of proton backbone and side chain resonances for residues Thr2 to Ile8 inclusive and Cys23 to Thr25 inclusive. Because this region of the peptide is centered about Pro5, contributions from the proline cis- and trans-conformations would explain the doubling of these proton resonances. Similar to the hepcidin-20 spectra acquired at 700 MHz, the dispersion provided by the two-dimensional NOESY acquired at 800 MHz established a strong interaction between the first and eighth Cys α-protons and a slightly weaker interaction between the third and sixth Cys β-protons. Using these linkages to establish two disulfide bridges, along with unambiguous constraints from Cys7β-Cys23α, Cys22NH-Cys10α, and Gly12NH-Lys18α, structural annealing calculations were completed. The NOE correlations for the minor conformation were also used to identify residues, but were not suitable for calculating the structure of the minor conformational isomer. As with hepcidin-20, the proton chemical shift index analysis was completed for hepcidin-25 (shown in Fig. 1D). The results indicate the presence of β-sheet structure on both sides of the peptide, with non-sheet characteristics for the loop.

Structural Evaluation

Results of the ARIA calculations indicate that the 20 lowest-energy structures for both hepcidin-20 and hepcidin-25 displayed good root mean square deviation values of 0.696 and 1.68 Å, respectively (Table I). Both peptides appear as a β-hairpin with the turn portion of the peptide curled toward the N and C termini (Figs. 2 and 3).
The curl in the overall shape of the peptide creates a convex and a concave surface on each side of the β-sheet. The degree of curl of the hairpin loop toward the rest of the molecule was the result of a combination of NOE constraints between the protons of adjacent residues and of the backbone conforming to the constraints introduced by the four disulfide bonds. In the final structures, there were no long-range NOE constraints establishing the degree of curl of the peptide loop. Inspection of the cysteine pairings reveals that the disulfide bridges alternate on each side of the sheet, beginning on the convex side of the molecule at the two termini. Similar pairings have been noted for antimicrobial β-sheet peptides such as tachyplesin (26) and protegrin (9). Following from the N-terminal end, the hairpin turn begins at the vicinal cysteine juncture and ends at the arginine residue for both peptides. Further inspection of the side chain distribution shows that the convex side contains the hydrophobic side chains, whereas the concave side has the positively charged side chains, giving the peptide amphipathic characteristics (Fig. 4). These features have been noted to be typical of antimicrobial peptides (27).

Perhaps the most intriguing feature of these two peptides is the vicinal cysteine disulfide bond between the fourth and fifth cysteine residues. Although the fourth cysteine through to the serine residue are all part of the β-hairpin loop, only the proton and 13C resonances of the fourth and fifth Cys are either significantly broadened or unobserved. The intensity of these two cysteines is in sharp contrast to the NOESY correlations for Cys10, Cys11, His15, Arg16, and Ser17, which make up the remainder of the loop. This difference in contour appearance for the residues comprising the hairpin portion of the peptides suggests that any flexibility in peptide motion is localized at the fourth and fifth cysteine residues, which are involved in the eight-membered vicinal disulfide ring.

The presence of the rare vicinal disulfide bridge has been noted in other peptides and proteins. Methanol dehydrogenase (28), the insecticidal Janus-faced atracotoxin neurotoxins (29), mercuric reductase (30), and the mercuric transport protein (31) contain a vicinal disulfide linkage critical for their activity. For the known structures of methanol dehydrogenase and the Janus-faced atracotoxins, as well as hepcidin-20 and -25, the vicinal Cys residues are part of a distinct turn that places the peptide bond between the two Cys residues in a trans-configuration. Furthermore, the peptide dihedral angles of the fourth Cys (−53° and −50°) and of the fifth Cys (−174° and −171°, and −129° and −120°, for hepcidin-20 and -25, respectively) agree well with the values determined for methanol dehydrogenase and the Janus-faced atracotoxins (29). The presence of the vicinal disulfide in these compounds has been shown to be critical for enzyme and neurotoxin activity, respectively (25, 32).

NMR Diffusion

The NMR diffusion measurements for hepcidin-20 were carried out at 700 MHz using a comparison of the diffusion constants of dioxane and the peptide. The peptide sizing data from NMR diffusion, sedimentation, and dynamic light scattering are shown in Table II. Comparison of the diffusion constants of dioxane and hepcidin-20, together with the hydrodynamic radius, indicates that hepcidin-20 is a monomer in solution at the observed concentrations.
Therefore, all observed NOE interactions from hepcidin-20 would be intramolecular and should be consistent with a monomeric structure. The tabulated results indicate that although hepcidin-20 exists as a monomer over the concentration values tested, hepcidin-25 aggregates as the concentration was increased. Sedimentation studies carried out on hepcidin-25 gave unusual results consistent with hepcidin-25 aggregating to the point of precipitation as the spinning speed was increased. The lowest spinning speed used for data collection is indicated in Table II. DLS studies also indicated that a high molecular weight aggregate was present at 1.61 mM (data not shown). Further indication of aggregation properties for hepcidin-25 was noted from a comparison of the NOE buildup curves for hepcidin-20 and -25 (data not shown). Mode of Aggregation for Hepcidin-25-The presence of additional correlations in the two-dimensional NOESY spectra that could not be assigned to either the major or minor conformations of hepcidin-25 suggests a possible multimer interface between aggregating molecules. Analysis of the NOEs indicates that nine such interactions involved Pro 5 and Phe 4 to Phe 9 and Met 21 (shown in Fig. 5). The presence of the additional correlations indicates that the formation of multimers occurs in a nonsymmetrical manner, with the main interface occurring between the side chains of Phe 4 , Pro 5 , Phe 9 , and Met 21 . The loss of the Phe 4 and Pro 5 residues in hepcidin-20 and the concomitant loss of aggregation would also support the multimeric interface involving predominantly the two phenylalanine residues of hepcidin-25. One possible arrangement satisfying these restraints is indicated in Fig. 5. The interfacial region established between Phe 4 and Phe 9 readily permits further aggregation with increasing concentration. The loss of the first five residues between hepcidin-25 and -20 removes the hydrophobic Pro and Phe and introduces a charged primary amide at Ile 6 . The reduction of hydrophobic character of this portion of the peptide sequence would most likely reduce the propensity to aggregate. Another feature possibly associated with the aggregation is the difference in appearance between hepcidin-20 and -25 with respect to the proximity of the loop portion of the peptide to the rest of the peptide. In the structure of hepcidin-25, the loop is further away or more open in appearance than that seen with hepcidin-20. This difference in conformation may be caused by the stacking of hepcidin-25 molecules in the aggregate because there were no specific NOE interactions defining the proximity of the loop to the rest of the molecule. In summary, the structures of hepcidin-20 and -25 reveal a distorted ␤-sheet shape with a hairpin loop. The ␤-sheet structure is stabilized by disulfide pairing of Cys residues and hydrogen bonding between the two antiparallel strands. This leads to a markedly amphipathic peptide structure, a hallmark of many antimicrobial and antifungal peptides. The aggregation properties of hepcidin-25 may explain the difference in antimicrobial activity when compared with hepcidin-20. The rare vicinal disulfide pairing in the hairpin loop of hepcidin may be a significant characteristic in the function of this peptide. It would be interesting to explore whether the vicinal Cys bridge is critical to the iron uptake activity of hepcidin.
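Returning to the NMR diffusion measurement described above: the sizing argument rests on the Stokes–Einstein relation, by which the translational diffusion coefficient is inversely proportional to the hydrodynamic radius, so the ratio of the dioxane and peptide diffusion constants gives the peptide radius once a radius is assigned to the internal reference. A minimal sketch of that arithmetic follows; the dioxane radius is a commonly used literature value and the example diffusion coefficients are invented for illustration, not the measurements reported in Table II.

```python
# Sketch: estimating a hydrodynamic radius from PFG-NMR diffusion data using an
# internal reference (dioxane), via Stokes-Einstein:
#     D = kT / (6 * pi * eta * Rh)   =>   Rh_peptide = (D_ref / D_peptide) * Rh_ref
# The reference radius and the example diffusion coefficients below are
# illustrative assumptions, not the values measured for hepcidin.

RH_DIOXANE_ANGSTROM = 2.12  # commonly used effective radius for dioxane

def hydrodynamic_radius(d_reference, d_analyte, rh_reference=RH_DIOXANE_ANGSTROM):
    """Radius of the analyte (same units as rh_reference) from the diffusion ratio.

    Temperature and viscosity cancel because both species diffuse in the same
    sample, so only the ratio of diffusion coefficients is needed.
    """
    return (d_reference / d_analyte) * rh_reference

def expected_radius_ratio(oligomer_size):
    """Rough Rh scaling for a compact n-mer relative to the monomer (R ~ M^(1/3))."""
    return oligomer_size ** (1.0 / 3.0)

if __name__ == "__main__":
    # Hypothetical diffusion coefficients in 1e-10 m^2/s (units cancel in the ratio).
    d_dioxane, d_peptide = 9.0, 1.9
    rh = hydrodynamic_radius(d_dioxane, d_peptide)
    print(f"Estimated Rh ~ {rh:.1f} A")
    # A compact dimer would be expected to appear roughly 26% larger than the monomer:
    print(f"Dimer / monomer Rh ratio ~ {expected_radius_ratio(2):.2f}")
```

Because the comparison is made within a single sample, no independent viscosity measurement is needed; a measured radius close to that expected for a monomer, as reported here for hepcidin-20, supports treating all NOEs as intramolecular.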
Outside-In Drilling Allows Avoidance of Two-Stage Surgery in Revision Anterior Cruciate Ligament Reconstruction The presence of preoperative tunnel widening and/or malposition can pose technical challenges for revision anterior cruciate ligament reconstruction. This Technical Note describes the use of outside-in drilling to avoid the need for 2-stage reconstruction in the presence of tunnel widening or semi-anatomic tunnels. T he presence of preoperative tunnel widening and/or malposition can pose technical challenges for revision anterior cruciate ligament reconstruction (R-ACLR). To avoid convergence with previous tunnels and avoid the risk of inadequate graft fixation, many authors advocate for bone-grafting and a 2-stage approach if tunnels are semi-anatomic or >12 mm in diameter. This Technical Note describes the use of outside-in drilling to avoid the need for 2-stage reconstruction in the presence of tunnel widening or semi-anatomic tunnels. The rationale for this strategy is simply that outside-in drilling is anatomically unconstrained. This means that revision tunnels can be drilled at very different trajectories to those drilled for the failed primary reconstruction and even if tunnel convergence occurs at the aperture, graft fixation is not compromised. In contrast, the available tunnel trajectories with transtibial or anteromedial portal techniques are restricted. Therefore, new femoral tunnels drilled at R-ACLR are more likely to converge with previous widened (unless grossly malpositioned), or semianatomic tunnels. However, in contrast to an outsidein technique, when convergence occurs, it does so with a similar trajectory to the original tunnel, potentially exacerbating issues of widening and risking the inability to achieve adequate graft fixation. Surgical Technique (With Video Illustration) This Technical Note presents a 1-stage technique for R-ACL with outside-in drilling to avoid the need for 2-stage reconstruction in the presence of tunnel widening or semi-anatomic tunnels (Video 1). Advantages and disadvantages and pearls and pitfalls of outside-in drilling for R-ACL are presented in Tables 1 and 2, respectively. Preoperative Planning Preoperative imaging is important to confirm graft rupture and evaluate the status of the articular cartilage, menisci, and secondary restraints. Furthermore, it is important to determine whether previous metallic hardware is present and if its location warrants removal at the time of R-ACLR. Preferred preoperative imaging comprises magnetic resonance imaging (MRI) and standard anteroposterior and lateral radiographs of the knee. Computed tomography scans are not routinely obtained because MRI and plain radiographs have been demonstrated to be equally useful for determining tunnel position and widening. 1 Step 1: Positioning and Diagnostic Arthroscopy The patient is placed on the operating table in the standard arthroscopy position with a lateral support at the level of a padded tourniquet and a foot roll positioned to stabilize the leg at 90 s of knee flexion (Fig 1). High anterolateral and anteromedial portals are established. A diagnostic arthroscopy is performed, and meniscal and cartilage lesions are addressed before R-ACLR. The intercondylar notch is debrided of previous graft material and then particular attention is given to the position and size of femoral and tibial tunnels. If widening is present and likely to interfere with new tunnels, then a coring reamer is used to drill the new femoral tunnel. 
This provides a cylinder of cancellous bone graft that can be used to fill the enlarged tunnel and ensure good graft fixation. Step 2: Graft Harvest and Preparation It is our preference to use ipsilateral autograft for R-ACLR. Boneepatellar tendonebone (BPTB) or semitendinosus autograft are frequently used. The main deciding factor is availability, determined by which graft was used for the primary ACL reconstruction. If BPTB is used, harvesting is done through a minimally invasive double-incision technique to avoid the risk of injury to the infrapatellar branches of the saphenous nerve. The typical BPTB graft dimensions are a diameter of 10 mm, with a 10 Â 25-mm bone plug at the level of the tibial tuberosity and a 9 Â 15-mm bone plug at the level of patella. The longer bone block harvested from the tibial tuberosity is prepared for press-fit fixation in the femoral tunnel. Usually, the patellar bone plug is 1 or 2 mm smaller than tibial bone plug. Drill holes are made in each bone block and passing sutures (2-VICRYL) are placed. The patellar tendon defect and paratenon is closed with 0-VICRYL. If the semitendinosus is selected, the graft is harvested using an open-ended tendon stripper (Pigtail Hamstring Tendon Stripper; Arthrex, Naples, FL) and the tibial insertion is preserved. The semitendinosus tendon is then tripled over itself and tagged with no. 1 ETHI-BOND sutures (Ethicon, Somerville, NJ) to tubularize the graft. The goal is to obtain a graft with a diameter of 8 to 10 mm, with a length of 12 cm from its tibial insertion. Rarely, the gracilis tendon is also used to achieve a sufficient ACL graft diameter (>7 mm), but Outside-in drilling is anatomically unconstrained and easily allows divergent tunnels to be created Drilling divergent tunnels obviates problems with tunnel widening and graft fixation Outside-in femoral tunnel drilling typically results in a round, non-oval tunnel, allowing 360 graft healing A coring reamer used for femoral drilling provides an excellent source of cancellous autograft although it is rarely needed Two-stage R-ACLR surgery can reliably be avoided The procedure benefits from cost minimization; specifically, it avoids the expense of 2-stage surgery, uses only routine equipment and interference screw fixation, and does not require the use of allografts or bone graft substitutes Disadvantages An additional incision is needed on the femoral side (compared with a transtibial or transportal technique) Recent systematic review demonstrates that only 15% of ACLRs are performed with outside-in femoral tunnel drilling and therefore it is a new technique with an associated learning curve for the majority of orthopedic surgeons ACL, anterior cruciate ligament; ACLR, anterior cruciate ligament reconstruction; R-ACLR, revision anterior cruciate ligament reconstruction. Palpate and surface mark the lateral collateral ligament to minimize the risk of iatrogenic injury when drilling the femoral tunnel Create the femoral tunnel with a coring reamer to obtain a cylinder of cancellous autograft, if needed Drill the new femoral tunnel so that it is divergent to the tunnel created for the primary ACL reconstruction Position interference screws judiciously in cases of tunnel widening or previous semi-anatomic tunnels, e.g. 
if the tibial tunnel is slightly too posterior, place the screw posterior to the graft to result in an anatomical position Use a footprint guide to ensure that an adequate posterior wall is accounted for and that a blowout will not occur Pitfalls Iatrogenic injury to the LCL Failure to use the anatomically unconstrained nature of outside-in drilling and create tunnels with similar trajectories to those used for primary ACLR. This is most likely to occur if the primary ACLR was also performed with an outside-in technique Failure to plan for removal of previous hardware (if present, and if needed) Collision between the new R-ACLR femoral tunnel and a lateral tenodesis. This is most likely if the ACL femoral tunnel is too proximal on the lateral cortex ACL, anterior cruciate ligament; ACLR, anterior cruciate ligament reconstruction; LCL. lateral cruciate ligament; R-ACLR, revision anterior cruciate ligament reconstruction. e692 more usually it is harvested and set aside for reconstruction of the anterolateral ligament. Step 3: Drilling of the R-ACLR Femoral Tunnel The femoral tunnel is created using an outside-in approach. The drill guide (outside-in ACL guide; Arthrex) is placed intra-articularly at the femoral origin of the ACL, in a mid-anteromedial bundle position. The drill guide is then placed on the lateral femoral cortex. The typical location is slightly anterior and distal to the lateral epicondyle. When revising a primary ACL performed with a transportal or transtibial technique, this location gives near perpendicular divergence between new and old tunnels (Figs 2 and 3). The guide pin is drilled and correct placement confirmed arthroscopically prior to drilling a tunnel of the same diameter as the graft. A coring reamer (trephine; Arthrex) is routinely used to drill the femoral tunnel. This makes available a cylindrical cancellous bone graft. A shaver is inserted into the new tunnel and used to remove any graft remnant or old suture material. OUTSIDE-IN DRILLING FOR ONE-STAGE ACL REVISION Step 4: Drilling of Tibial ACL Tunnel The same principles of outside-in drilling are applied to the tibial tunnel. The tibial guide is set at 55 and a guide pin is placed at the center of the ACL footprint, reaming is performed according to graft diameter. Again, any graft remnant or suture material is removed using a shaver. Using a BPTB Graft The BPTB graft is passed from the femoral side to the tibial side with a suture shuttle. Press-fit fixation is achieved in the femoral tunnel. The knee is placed at 30 of flexion, and tibial fixation is achieved using a BioComposite interference screw (Arthrex), sized according to the diameter of the ACL graft. Secondary fixation is achieved on the tibial side by tying the shuttling suture over a bone bridge. Figure 4 shows the postoperative MRI with the BPTB graft inside the femoral tunnel. Using a Hamstring Graft A suture shuttle is used to pass the ACL graft into the knee and the femoral tunnel, via the tibial tunnel. The graft is fixed with BioComposite interference screws (Arthrex) measuring the same size as the graft Additional Techniques for Achieving Adequate Graft Fixation and Tunnel Position Without the Need for Two-Stage Surgery The routine harvesting of a cylinder of cancellous autograft provides an immediate solution for widened tunnels where graft fixation might otherwise be compromised. 
In the case of semi-anatomic tunnels, this is a potentially useful option because the graft can be placed into the previous tunnel/s in a stacked or "snowman" configuration. However, when using outside-in drilling, the requirement to use graft is exceptionally rare. More routinely, it can often be sufficient to judiciously use screw placement to achieve an anatomic graft position, e.g. if the tibial tunnel is slightly too posterior, the interference screw is placed posterior to the graft to result in an anatomical position. Lateral Extra-Articular Procedures Lateral extra-articular procedures are reported to confer a low-rate of residual laxity and rerupture after R-ACLR. 2 Furthermore, combined anterolateral ligament reconstruction þ R-ACLR is associated with a significantly greater rate of return to the preinjury level of sport when compared with isolated R-ACLR. 3 It is our preference to perform anterolateral ligament reconstruction with a hamstring tendon ACL autograft (combined graft) and a modified Lemaire when using a BPTB ACL autograft, in accordance with previously published techniques (Fig 5). 4 Postoperative Rehabilitation The postoperative rehabilitation is unchanged from that used following primary ACL reconstruction. Bracefree, full weight bearing with crutches is allowed immediately after the procedure. Early rehabilitation is focused on obtaining full extension and quadriceps activation. Pivoting-contact sports are allowed from 9 months and after neuromuscular recovery. Discussion Outside-in drilling offers the major advantage of being anatomically unconstrained. This allows the creation of R-ACLR tunnels that are markedly divergent to the tunnels created for primary ACLR, therefore permitting adequate graft fixation and positioning regardless of the pre-operative presence of tunnel widening or malposition. Other advantages over transtibial and transportal techniques include longer femoral tunnels (increasing the likelihood of adequate graft fixation) and a reduced risk of posterior blowout. Furthermore, using a coring reamer provides cancellous autograft and in combination these techniques allow a single-stage approach to R-ACLR regardless of tunnel widening or malposition. Clinical outcomes of 1-and 2-stage R-ACLR are not significantly different with respect to subsequent revision rates according to 2 recent systematic reviews. 5,6 It therefore seems logical to avoid a 2-stage procedure whenever possible so that the associated increased morbidity of 2 procedures, a prolonged period of knee instability (before definitive surgery), multiple periods of rehabilitation, and increased health care and societal cost also can be avoided. Other techniques that can be used to overcome technical challenges during single-stage R-ACLR include the use of fast setting bone substitutes. 7 Previous authors also have reported the use of stacked screws and/or bioabsorbable screws to fill voids. 8,9 Although it is important to be familiar with a wide range of techniques, in the experience of the senior author they are rarely needed when outside-in drilling is used (only 2/409 consecutive R-ACLR underwent bone grafting of tibial tunnels, no patients underwent femoral bone grafting, and no patients required stacked screws to fill voids or underwent 2-stage revision; unpublished data and forthcoming series) Outside-in drilling is a safe technique with a low risk of adverse events. The main risk is iatrogenic injury to the lateral collateral ligament when creating the femoral tunnel drilling. 
This is easily avoided by palpating and surface marking the lateral collateral ligament before drilling. Additional risks include placing the tunnel too proximal and posterior, resulting in posterior cortex blowout; this, too, is easily avoided by using footprint guides to ensure an adequate posterior wall. It is noteworthy that a recent systematic review demonstrated that only 15% of femoral tunnels at primary ACLR are drilled with an outside-in technique. This finding may reflect earlier suggestions that outside-in drilling is associated with an increased graft bending angle, greater shear stress, and an increased risk of rupture. 10,11 However, a more recent clinical study has debunked this message and instead demonstrated that outside-in drilling reproduces native graft inclination angles in both the sagittal and coronal planes, whereas neither transtibial nor standard anteromedial portal techniques do. 12 It is therefore the opinion of the authors that outside-in drilling is an important technique for R-ACLR surgeons to learn. It offers considerable advantages, particularly its anatomically unconstrained nature and the ability to drill divergent tunnels that obviate the technical issues caused by semi-anatomic tunnels and widening.
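The rationale above is quantitative at heart: the benefit of outside-in drilling is that the revision tunnel can be drilled at a markedly different trajectory from the primary one. Purely as a geometric illustration of how such divergence can be expressed (this is not part of the described technique, and all coordinates are invented), the following sketch computes the angle between two tunnel axes, for example as taken from aperture and exit coordinates on postoperative imaging.

```python
# Trivial geometric illustration (not part of the surgical technique itself):
# tunnel divergence expressed as the angle between two tunnel axes.
# All coordinates below are hypothetical.

import math

def tunnel_axis(aperture, exit_point):
    """Unit vector along a tunnel from its intra-articular aperture to its exit."""
    d = [e - a for a, e in zip(aperture, exit_point)]
    norm = math.sqrt(sum(c * c for c in d))
    return [c / norm for c in d]

def divergence_angle_deg(axis_a, axis_b):
    """Angle (degrees) between two tunnel axes."""
    dot = sum(a * b for a, b in zip(axis_a, axis_b))
    dot = max(-1.0, min(1.0, dot))  # guard against rounding slightly outside [-1, 1]
    return math.degrees(math.acos(dot))

if __name__ == "__main__":
    # Hypothetical coordinates (mm) of tunnel apertures and exit points.
    primary = tunnel_axis((0.0, 0.0, 0.0), (25.0, 5.0, 30.0))
    revision = tunnel_axis((1.0, 2.0, 0.0), (35.0, -20.0, 5.0))
    print(f"Divergence ~ {divergence_angle_deg(primary, revision):.0f} degrees")
```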
Integral Chow rings of toric stacks The purpose of this paper is to prove that integral Chow rings of toric stacks are naturally isomorphic to Stanley-Reisner rings. Introductuon Intersection theory with integer coefficients of Fulton-MacPherson type on smooth algebraic stacks was developed by Kresch, Edidin-Graham and Totaro ([4], [10], [14]). In particular, the integral Chow ring of a smooth stack which has staratifications by quotient stacks (for example, quotient stacks) was defined. The integral Chow ring of a smooth algebraic stack is an interesting and deep invariant of the stack which reflects the geometric structure together with the stacky structure on it. It is challenging to compute them for interesting algebraic stacks. Some examples of smooth Deligne-Mumford stacks were calculated. For example, in [4] Edidin and Graham calculated the integral Chow rings of the moduli stacks of elliptic curves M 1,1 and M 1,1 . Vistoli calculated the integral Chow ring of the moduli stack of the curves of genus 2 ( [16]). The purpose of this paper is to compute the integral Chow rings of toric stacks defined in ( [7], [8]). Our work of the presented paper is motivated by the categoryequivalence between the 2-category of toric stacks and the 1-category of stacky fans (cf. [8, Theorem1.2 and Theorem 1.4]). Before stating our main result, let us recall the result of Chow rings of toric varieties. Theorem (Fulton-Sturmfels, Danilov, Jurkiewicz). Let N = Z d be a lattice. Let ∆ be a non-singular fan in N ⊗ Z R and let X ∆ be the associated toric variety. Let us denote by A * (X ∆ ) the Chow ring of X ∆ . Then there exists a natural isomorphism of graded rings (Stanley-Reisner ring of ∆) ∼ → A * (X ∆ ). If ∆ is a simplicial fan and the base field is in characteristic zero, then A * (X ∆ ) ⊗ Z Q has a ring structure and there exists a natural isomorphism of graded rings Here we would like to invite the reader's attention to the fact that that if ∆ is simplicial and not non-singular, the (operational) Chow group A k (X ∆ ) for k ≥ 1 could differ from the module of the "degree k part" of Stanley-Reisner ring of ∆ (cf. Example 2.7). Furthermore in such case a somewhat surprising point is that the Stanley-Reisner ring of ∆ could be nonzero in degrees higher than the dimension of the toric variety X ∆ (cf. Example 2.7). Since the Chow groups of an algebraic space in degrees higher than its dimension are zero, thus Stanley-Reisner rings gives us a The author is supported by Grand-in-Aid for JSPS fellowships. combinatorial phenomenon which is unaccountable in the framework of schemes and algebraic spaces. Now we state our main result. Here we explain how the usual relations on intersection product of torus-invariant cycles on a simplicial toric variety X Σ (cf. [5, page 100]) are derived from that of the toric stack X (∆,∆ 0 can ) in A * (X (∆,∆ 0 can ) ). There exists a coarse moduli map π (∆,∆ 0 can ) : X (∆,∆ 0 can ) → X ∆ . This functor defines the proper push-forward (π (∆,∆ 0 can ) ) * : A * (X (∆,∆ 0 can ) ) ⊗ Z Q → A * (X ∆ ) ⊗ Z Q. By [10, Theorem 2.1.12 (ii)] and [15, Proposition 6.1], (π Σ ) * induces an isomorphism of groups. Moreover, since a general stabilizer group of a toric stack is trivial, (π Σ ) * defines an isomorphism of rings (cf. [15, (6.7)]). Thus the ring structure of A * (X (∆,∆ 0 can ) ) ⊗ Z Q yields that of A * (X ∆ ) ⊗ Z Q. The proper push-forward (π (∆,∆ 0 can ) ) * : , since the order of stabilizer group of a generic geometric point on V (σ) is mult(σ) (cf. 
[7,Proposition 4.13]). Here mult(σ) is the multiplicity of σ, and V (•) is the torus-invariant subvariety which corresponds to a cone in ∆. Thus, the relation (*) in A * (X (∆,∆ 0 can ) ) induces under (π Σ ) * the relations There are more interesting points to notice. It is known that the operational Chow group of a complete toric variety is torsion-free (cf. [6]). On the contrary, the integral Chow group of a complete toric stack could have a lot of torsion elements. Moreover, as noted above, it could be nonzero in degrees higher than the dimension of the toric stack (cf. Example 2.7). This is a substantial difference to the intersection theory on schemes and algebraic spaces. The presented result can be also viewed as an application of intersection theory with integral coefficients ( [4], [10], [14]) to toric stacks. We hope that the reader finds our computation here shows a nice relation between toric stacks and combinatorics. The presented paper is organized as follows. In section 1, for the computation of the integral Chow rings (cf. [4]), we obtain the quotient presentations of toric stacks defined in [7] and [8]. For this purpose, we generalize the functor defined in [1], which represent a smooth toric variety (in characteristic zero), to a certain groupoid. In section 3, we present the proof of the main result. Finally, we calculate some examples. Notations And Conventions. Set N = Z d and M = Hom Z (N, Z). Let •, • be the dual pairing. Let ∆ be a fan in N R = N ⊗ Z R (we asumme that all fans are finite in this paper) (cf. [5]). Denote by ∆(1) the set of rays. Let us denote by v ρ the first lattice point on a ray ρ ∈ ∆(1). Finally, let ∆ max denote the set of maximal cones in ∆. A pair (∆, ∆ 0 ) is called a stacky fan if ∆ is simplicial fan in N R and ∆ 0 is a subset of ∆ ∩ N such that for any cone σ in ∆, σ ∩ ∆ 0 is a sub-monoid of σ ∩ N which is isomorphic to N r where r = dim σ, such that for any element e ∈ σ ∩ N, there exists a positive integer n such that n · e ∈ σ ∩ ∆ 0 . The initial point of ρ ∩ ∆ 0 is said to be the generator of ∆ 0 on ρ. Let v ρ denote the first lattice point of ρ ∩ N and n ρ the initial point of ρ ∩ ∆ 0 . The positive integer l ρ such that l ρ · v ρ = n ρ is said to be the level of ∆ 0 on ρ. Notice that Σ 0 is completely determined by the levels of ∆ 0 on rays of ∆. Each simplicial fan ∆ has the canonical free-net ∆ 0 can , whose level on every ray in ∆ is 1. Give a stacky fan (∆, ∆ 0 ), we have the associated toric stack X (∆,∆ 0 ) over a base scheme S. If S is the spectrum of a field k of characteristic zero, X (∆,∆ 0 ) is a smooth Deligne-Mumford stack that is of finite type and separated over k. For details, we refer to [7, section 4], [8]. (∆, ∆ 0 )-collections In [1], given a fan ∆ and a scheme Y , Cox defines notions of ∆-collections on Y and equivalences between them. Then he showed that the functorF ∆ : Y → {∆-collections on Y }/ ∼ represents the toric variety X ∆ if ∆ is non-singular. If ∆ is singular, unfortunately, the functor of ∆-collections fails to represent the toric variety X ∆ . The aim of this section is to generalize the notion of ∆-collections and [1, Theorem 1.1] and give a quotient presentation for a toric stack X (∆,∆ 0 ) defined in [7] (cf. Corollary1.9). where L ρ is an invertible sheaf on Y , u ρ ∈ H 0 (Y, L ρ ) and c m is an isomorphism of invertible sheaves with the following additional properties: (1) c m ⊗ c m ′ = c m+m ′ for all m, m ′ ∈ M. 
When ∆ is the empty set, the set of morphisms from and is the empty set if otherwise. Let S be a scheme, and define a fibered category as follows. The objects of F (∆,∆ 0 ) over a S-scheme Y are (∆, ∆ 0 )-collections on Y . A morphism between two objects in Ob(F (∆,∆ 0 ) )(Y ) is a morphism of (∆, ∆ 0 )collections on Y . With the natural notion of pullbacks, F (∆,∆ 0 ) is a fibered category over (S-schemes). By fppf descent theory for quasi-coherent sheaves, F (∆,∆ 0 ) is a stack with respect to fppf topology. Theorem 1.4. Let S be the spectrum of an algebraically closed field k of characteristic zero. Let X (∆,∆ 0 ) be the toric stack (over k) associated to (∆, ∆ 0 ) (cf. [8]). Then there exists an isomorphism of stacks The proof of Theorem 1.4 proceeds in several steps. if and only if for any ray ρ ∈ ∆ both stabilizer groups of geometric points on the generic points of D ρ and D ′ ρ have the same orders. Since the order of the stabilizer group of a geometric point on the generic points of D ρ equals to the level of ∆ 0 on ρ, we have ∆ 0 = Σ 0 . In this subsection, we prove (a) in Lemma 1.5. Unless stated otherwise, we work over k. Consider a collection We shall refer to such collections as linear ∆-collections. Let r for all r ∈ R. The following functor There exists a natural action a : . Then we have a natural 2-isomorphism between the both two composites. Thus there exists a morphism z : [11, (10.7)]), the functor z is essentially surjective. To prove the fully faithfulness, note first that we may work fppf locally on Y and put two linear ∆-collections for any ρ ∈ ∆(1)}. In this case, the set of homomorphism from z( and bijectively corresponds to the set of mor- ) are the empty sets. Thus z is fully faithful. Therefore F (∆,∆ 0 ) is a smooth Deligne-Mumford stack of finite type and separated over k. Remark 1.6. The above argument also implies that the stack F (∆,∆ 0 ) over a general scheme is algebraic. Namely, we conclude that: Remark 1.8. Let (∆, ∆ 0 can ) be a stacky fan with the canonical free-net such that ∆ is non-singular. Let X ∆ (resp. X (∆,∆ 0 can ) ) be the toric variety (resp. the toric stack) over Z associated to ∆ (resp. (∆, ∆ 0 can )) (the definition of toric stacks ( [7], [8]) works over arbitrary base schemes). While X (∆,∆ 0 can ) is isomorphic to the toric variety X ∆ over Z, it is not clear whether or not F (∆,∆ 0 )/Z is isomorphic to X ∆ over Z. 1.2. The coarse moduli space for F (∆,∆ 0 ) . In this subsection, we prove (b) in Lemma 1.5. Clearly, we may suppose that rays in ∆ span the vector space N ⊗ Z R. Set G := G (∆,∆ 0 ) . First by imitating the proof of [2, Theorem 2.1], we see that geometric quotient (in the sense of Mumford [12]) of L ∆ × G a → L ∆ is a toric variety X ∆ associated to ∆. We will show that the toric variety X ∆ is a coarse moduli space for [L ∆ /G] ∼ = F (∆,∆ 0 ) . To this aim, by [13, Theorem 2.6 (iii)], it suffices only to prove that q : and gives a bijection on geometric points. Note first that X ∆ is a geometric quotient and thus q induces a bijection on geometric points. The properness of q follows from the fact that q is a universal submersion, in particular universal closed map (It is easy to see that q is separated and of finite type). Therefore the coarse moduli space for F (∆,∆ 0 ) is a toric variety X ∆ . We complete the proof of Theorem 1.4. 2 Corollary 1.9. Let k be an algebraically closed field k of characteristic zero. Let X (∆,∆ 0 ) be the toric stack (over k) associated to (∆, ∆ 0 ) (cf. [8]). 
Then there exists an isomorphism of stacks over k Integral Chow Rings of Toric Stacks In this section, we calculate integral Chow rings of toric stacks. We shall use notation similar to section 1, and from now on we assume that the base field k is an algebraically closed field of characteristic zero. Our computation is based on intersection theory on stacks due to Kresch, Edidin-Graham, and Totaro. For details, we refer to [4] [10] [14]. Let us fix some notations. If ∆ is a fan (resp. a stacky fan), then for a cone δ ∈ ∆ we denote by V (δ) (resp. V (δ)) the torus-invarint cycle on the toric variety X ∆ (resp. the toric stack X (∆,∆ 0 ) ), which corresponds to δ. If π (∆,∆ 0 ) : X (∆,∆ 0 ) → X ∆ denotes a coarse moduli map, then the cycle V (δ) defines to be the reduced cycle π −1 (∆,∆ 0 ) (V (δ)) red . For a ray ρ ∈ ∆, if no confusion seems likely to arise, we may write D ρ (resp. D ρ ) for the torus-invarinat divisor V (ρ) (resp. V (ρ)). As the first step to Theorem 2.2, we will calculate the Picard goup of a toric stack. Proof. Put G := G (∆,∆ 0 ) . Observe first that every invertible sheaf on L ∆ is trivial. Indeed L ∆ is a smooth toric variety, and any invertible sheaf (line bundle) L on L ∆ is represented by a linear form of torus-invariant divisors with integer coefficients. Every torus-invariant divisor on L ∆ comes from some toric divisor on A ∆(1) (recall L ∆ ⊂ A ∆(1) !). Since every torus-invariant divisor on A ∆(1) is a principal divisor, thus L is trivial. Then we obtain On the other hand, there exists an isomorphism of groups Hence we obtain our claim. Proof of Theorem 2.2. First of all, if two cones σ and τ in Σ span the cone γ, then V (σ) and V (τ ) intersect transversally at V (γ) by [7,Proposition 4.19] (it also follows from the quotient presentation). This implies the relation (*). Next we shall show that the map from Stanley-Reisner ring to the Chow ring is an isomorphism. By Corollary 1.9, the toric stack X (∆,∆ 0 ) has the quotient presentation [ Here H is a finite abelian group, and k = #∆(1) − dim(X (∆,∆ 0 ) ). Set G = G (∆,∆ 0 ) and W := L (∆,∆ 0 ) . Note that W has the form Note that we may view G as the closed subgroup of the maximal algebraic torus in W . According to the definition of Chow groups due to ), first of all, in order to compute the Chow ring of X (∆,∆ 0 ) , we shall construct a certain N(k + r)dimensional representation of G, i.e., an action of G on the affine space V := A N (k+r) such that V has an open set U on which G acts freely and whose complement has codimension more than N − 1. To this aim, by choosing the primitive m i -th root ζ i in the base field for 1 ≤ i ≤ r, we embed G = G k m × Z/m 1 Z × · · · × Z/m r Z into the closed group subscheme of G k m × G r m as follows: G = G k m × Z/m 1 Z × · · · × Z/m r Z ֒→ G k m × G r m , (u, l 1 , . . . , l r ) → (u, ζ l 1 1 , . . . , ζ lr r ), where u ∈ G k m and (l 1 , . . . , l r ) ∈ Z/m 1 Z × · · · × Z/m r Z. This embedding yields the action of G on an affine space A k+r . We extend this action to A k+r × · · · × A k+r = Then the action of G on U is free and A N (k+r) − U has codimension more than N − 1. Then we have A i (X (∆,∆ 0 ) ) = A i+N (k+r) ((W × U)/G) for N > dim(X (∆,∆ 0 ) ) − i (cf. [4], [10]). Here G acts on W × U diagonally (this is a free action). 2 From now on the positive integer N is assumed to be greater than 1. The group is generated by torus-invariant divisors V (ρ) as ρ ranges over Σ(1) (cf. [6, Proposition 2.1]). 
The rational equivalence relations on these divisors are generated by the linear forms Σ_{ρ∈Σ(1)} ⟨m, v_ρ⟩ · V(ρ) as m ranges over M′ = Hom(N′, Z). Here v_ρ is the first lattice point of ρ. We denote by I the subgroup of ⊕_{ρ∈Σ(1)} Z · D_ρ generated by the above linear forms Σ_{ρ∈Σ(1)} ⟨m, v_ρ⟩ · D_ρ. On the other hand, by Proposition 2.3, there exists a natural isomorphism under which η(D_ρ) = V(ρ). Next we show the following Proposition.
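Before that proposition, it may help to see the presentation recalled above in the simplest smooth case. The following display is a standard textbook computation for the fan of the projective plane, included here only for orientation; it is not taken from the source.

```latex
% Worked example (standard, for orientation): the fan of P^2.
% Rays: v_0=(1,0), v_1=(0,1), v_2=(-1,-1); every pair of rays spans a maximal cone,
% while all three rays together span no cone, so the Stanley-Reisner ideal is (x_0 x_1 x_2).
\[
A^{*}(\mathbf{P}^{2})
\;\cong\;
\mathbb{Z}[x_0,x_1,x_2]\,\Big/\,
\Bigl(x_0x_1x_2,\;
\textstyle\sum_{\rho}\langle m, v_\rho\rangle\, x_\rho \ (m\in M)\Bigr)
\;=\;
\mathbb{Z}[x_0,x_1,x_2]/(x_0x_1x_2,\; x_0-x_2,\; x_1-x_2)
\;\cong\;
\mathbb{Z}[t]/(t^{3}).
\]
```

In this non-stacky case the statement reduces to the classical Danilov–Jurkiewicz presentation; the content of the result recalled above is that the analogous Stanley–Reisner presentation holds integrally once the toric variety is replaced by the toric stack X_(Δ,Δ⁰).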
Motives and Dynamic of Community-Based Aquaponics for Urban Farming in Semarang Urban farming through aquaponics has begun in Semarang, since 2016. More than eighty people formed an aquaponics community. The community has conducted routine training on aquaponics to develop urban farming through this system. But the number of aquaponics was unstable. Therefore, the purpose of this research was to find out the motives and dynamics of the aquaponics community in supporting urban farming. The method of this research used a qualitative approach. The results of studies with the analysis of the theory of motives and group dynamics show that this community was in the class of altruism and collectivism motivation so that it has a chance of sustainability even though at certain moments it decreases. As for the dynamics of some actors who think that they weren’t in line with expectation, factors emerge that weaken the community in aquaponic. However, those who have altruism and collectivism motives will tend to survive because they got personal satisfaction and a good impact on their environment. Therefore, when many actors leave this system, they keep trying to return to carry out aquaponic activities in support of urban farming in Semarang. Introduction Food and Agriculture Organization (FAO) as a driver of the world hunger-reduction program has several strategies to increase access to food in urban areas, one of which is urban farming. The urban farming program is a nursery, planting, processing and distribution of a diversity of agricultural products, using human resources, land and water, products and services found in urban areas [1]. Urban farming is the planting, processing, and distribution of food and other products through intensive crop cultivation and animal husbandry in cities and the surrounding area, and reuse of natural resources and urban waste, to obtain a diversity of crops and livestock. In other words, urban farming is a food production activity in urban areas to facilitate access to food for urban communities. This activity has several benefits for the environment, health, and socio-economic community [2]. The benefits of urban agriculture for public health are improving nutrition through fresh food. As a result of research in the United States that shows the population of cities that do urban agriculture with vegetable and fruit commodities affect the diet and nutrition of these people [3]. With these many advantages, FAO has, on various occasions, launched urban agriculture programs, and can be used as a forum to open jobs and increase citizens' incomes [4]. The social benefits of urban agricultural activities include creating a safe and comfortable environment, community development, and building social capital, educating and developing the young generation. The results found that gardens and agriculture can beautify the environment as well as employ and benefit the residents. The maximum effects of urban farming can be achieved by active community participation [5]. Urban farming on a small scale has a land area of 92-1,000 m2, and a large scale has an area of more than 1,000 m2. The agricultural size has a different function where the small scale is used as a garden for educational programs, composting, plantations of medicinal plants and vegetables, and fruits that can be eaten immediately. In contrast, the large size is used as urban community agriculture, nurseries, animal husbandry, and horticulture [6]. 
According to the type of activity, urban agriculture is divided into four general categories, namely community-based, institutional, public, and commercial. [7]. Community-based urban farming has small-scale agriculture and community gardens that are voluntarily managed by non-profit organizations. This community-based urban agriculture aims to develop communities and provide social and educational programs. In the application of community-based urban agriculture, it is essential to recognize, understand, and respect the values, desires, and motives of the actors. Middle-income people in middle-income areas most often practice Urban agriculture. Many of the benefits of urban farming reflect the beliefs and motivations of these practitioners and their communities. Like community-based activities, the best results occur when community members identify priorities and set goals for the project and then work together and with community partners to achieve these goals. Without community support, urban agriculture faces enormous obstacles [6][7][8]. Most urban agriculture uses conventional methods with planting media in the form of soil so that many still use pesticides, which endanger health. Therefore, with the reason to get healthier agricultural products and limited land, the community began to use aquaponics technology. This system is a combination of aquaculture without using plants media in the form of soil which can be placed on limited land [9,10]. The aquaponics is a farming system that utilizes fish manure as a source of nutrition for plants and the utilization of plants as a controller of water quality for fish, hence it is said that aquaponics is a method of sustainable agriculture and aquaculture, both of which form mutualism dependency. The aquaponic results are organic vegetables and fish that are very good for human [10][11][12]. Besides, community-based urban farming activities are driven by the motivation of each individual to achieve the goals they have. Motivation can be defined as the forces within a person who drives to satisfy himself [13]. Everyone has the motivation to reach the goal. Particular interest is a powerful and pervasive motive even though the human capacity to care and participate is not limited to interests. Community involvement will strengthen when understanding what reasons make individuals care for the welfare of others and society in general [14]. In Semarang, the existence of aquaponics urban farming is carried out by people who have a hobby of agriculture and motivation to meet the needs of healthy fish and vegetables. Awareness of the fulfilment of healthy food is developing in aquaponics farming communities in various places, one of which is the aquaponics community in Kandri Village, in the District of Gunungpati Semarang. The residents who attempted to develop an aquaponic in their private house and public area since 2016 [15]. Then, urban farming through aquaponics spread in various places in the city of Semarang, both developed by individuals or groups. The number reached more than 80 aquaponics. Although, urban farmers, through this system will be easier to do by creating an aquaponic community for mutual learning. Then, the existence of the aquaponic community, urban agriculture, can be running well in Semarang. But in reality, there is still an increase and decrease in the number of aquaponics systems in the development of urban agriculture. This condition indicates that the group experienced internal dynamics. 
From a research standpoint, groups are several people who join in interaction with each other at one or a series of meetings, where each member receives impressions or perceptions of other members that are different enough to be able to react to each other. The group define as two or more people involved in psychological relationships with each other so that they can influence each other [16]. The group is a community consisting of one or more individuals who interact with each other to accomplish a specific goal. The groups are created formally and informally within the organization at different times and for different purposes. Those groups have negative and positive influences on the organization structure and function [17]. Therefore the focus of this study is to analyze the motives and dynamics of community-based aquaponics in developing urban farming in Semarang. Methods The study explores the motives and dynamics of community-based aquaponics for urban farming in Semarang so that qualitative methods are expected to be able to explain these issues. This method was chosen to collect descriptively about urban aquaponics phenomena, which provides a more comfortable response to those interviewed. The method was used to examine the condition of a natural object, where the position of the constituent is as a key instrument, sampling the data source. Data collection in this study was conducted both primary and secondary such as literature review, documents, interviews, and observations. This study applied semi-structured interviews where interviews of this type were also included in the category of in-depth interviews [18,19]. Some of the initial interviewees were able to express their motives for engaging in urban agriculture through aquaponics. The participants are key persons who involve in aquaponic urban farming in Semarang. Based on the case study rules, the researcher will be open to all data that can explain the case so that here the data will be combined with triangulation. Tools and techniques such as observation, in-depth interview, preliminary meetings, intergroup meetings, and workshops were used to gain stakeholders' participation [20,21]. Therefore, researchers sought to obtain information through interviews, following members of the aquaponic community social media groups, both WhatsApp, aquaponiksemarang.blogspot.com, and their social learning processes. Result and discussion Semarang urban farming through aquaponics began from Kandri Village in 2016. From this village, the community had done agriculture with a minimal area. They believe urban farming with aquaponics is one of the solutions to meet healthy food needs. The activity shows that the current form of urban agriculture has very diverse systems. In the past, people planted the media using soil in pots in their home yards. At present, people are doing urban farming without using soil media, as seen in this system [15]. Some small aquaponics that is still running can be seen in figure 1. Small vertical aquaponic system and figure 2. Small aquaponic in private area. Aquaponic urban farming activities go hand in hand with programs of the agriculture service to continue to promote urban farming programs in various places. Moreover, Semarang City has been declared as one of the 100 resilient cities programs. The local government continues to encourage people who live in urban areas to utilize the narrow land around the house for urban farming. 
The pattern of farming is beneficial because it can provide additional family income. One of the government's efforts to support these activities is to hold an urban farming environment competition [22]. Community motivation in urban farming through aquaponics In conducting urban farming activities through aquaponics, each individual or group has a motivation. Motivation is an internal condition that arouses a person to action, encourages individuals to achieve specific goals, and keeps individuals interested in certain activities. Other meanings, motivation is an internal and external impulse in someone who is indicated by the existence, passion and interest, encouragement and needs, hopes and ideals, appreciation, and respect [23]. Motivation in the community to participate in group activities which are classified into 4 (four), namely egoism, altruism, collectivism, principlism [14]. See in table 1. Powerful; directly focused on the common good. May be limited to the ingroup. Directed toward universal and impartial good. Often seems weak; vulnerable to rationalization. Source: [14] 3.1.1. There are less egoism and more altruism motives. For aquaponics participant, the egoism motive is not the main thing. The desire of individuals who are members of the aquaponic community is the existence of hobbies and habits in farming and the nature of the farmers, but limited land ownership. The interest of aquaponics farmers then developed into motivation to support the government's urban agriculture program as a goal to fulfil their hobbies and talents. This interest encourages farmers' motivation to get personal satisfaction in pursuing a hobby such as seeing green land that grows well around them. At some stage, aquaponic farmers have motivated altruism to participate in the urban agriculture program in Semarang. They provide fish and vegetable production for neighbours, in the form of fish and vegetables free of charge or sold at low prices. Communities with altruistic motives argue that the impact they feel is more beneficial. They can consume their agricultural products and get healthy food. Local initiators play an essential role in aquaponic development because this urban farming model is technology transfer. There are various disciplines in this system, namely mechanics, biology, nitrification, and others. The desire to implement this technology became a strong initial motive for the community. They often didn't care about the cost of making aquaponics quite expensive, because they didn't expect money profits. Besides, there is a neighbourhood leader who has a motive for improving the quality of their settlement. The urban farming program provides funding for settlement development through competitions and environmental improvement programs. From the results of the forum group, the discussion indicates that aquaponic development cannot be alone, must work together with others. Togetherness through social media and regular meetings as part of the aquaponics community is very important to do. Urban agriculture with altruism motives at a particular stage can provide benefits to the community, resulting in a reasonably big trend reaching more than eighty aquaponic farmers in 2018 in Semarang [15]. Aquaponics activities require collaboration with others. Every person who wants to have aquaponics, he must learn this system from people who are experts, one of which is the aquaponics community, as seen in figure 3, elaborated on the motivational process in altruism. Figure 3. 
The motivational process in altruism However, there are many problems faced by these farmers, which are related to the use of aquaponic technology. Another problem is the difficulty in breeding and aquaponic management costs, which are quite expensive. If they do not find a solution to this problem, then some of them will stop temporarily to do aquaponics. The results of field observations show that various obstacles have brought together aquaponic farmers to solve problems as a group. Groups are factors that have a physical and social order with characteristics that build and unite individuals [17]. The group struggles to survive and protect its existence. They take steps towards the inevitable risks of every living thing, such as separation, disintegration, and making efforts to grow and develop using their environmental opportunities. However, it is the same with living things and individuals; if some problems and risks cannot overcome with them, then discomfort, instability, and disruption can also occur in groups. Over time, this group can separate or join other groups or disappear. According to [24], In group formation, several things underlie it, including (a) social interaction, (b) stable structure; (c) similarity of interests; and (d) see themselves as part of the group. Informing aquaponic groups, the stages described as follows: a. Social interactions. One of the essential characteristics of the group is that more than two people do social interaction. In other words, group members must influence one another. Communication between individuals consists of verbal and nonverbal. Oral is an interaction such as conveying strategies to achieve targets and nonverbals such as smiles, gestures, and facial expressions that can have an impact on each other. Community interest in supporting urban farming through aquaponics has been primarily due to this process of social interaction. b. Stable structure. In theory, groups must have a stable structure even though reactions in groups can change. A stable relationship within the group is needed to keep the group more harmonious and function optimally as a whole. Therefore, the aquaponics community formed The Aquaponic Farmers Group organization. In it, there is a clear division of tasks to carry out activities to support urban farming. There are less egoism and more altruism. c. The similarity of interests. The similarity of interest in a group is one of the characteristics of the group. Equality of interest has a sustainable impact because group members will be more integrated and provide discussion in the group. d. Look at themselves as part of a group. The group consists of several people who are aware that they are part of a community and can distinguish themselves from people outside the group. The character possessed by members of this group can form responsibilities within group members and will have an impact on organizational behaviour. Besides, this aquaponic community has several types of groups, namely formal and informal. Kinds of groups are seen from how groups develop, hierarchies and tasks, and relationships between group members. Legal groups formed with the goals and tasks that bind members to achieve essential goals. While informal solely develop groups to provide solutions to problems. Formal groups work together with various institutions and other legal parties; the form of this official aquaponic group is a cooperative business entity. Collectivism motives are characteristic of the aquaponics community. 
The obstacles faced affecting the behaviour of the aquaponic community. Most stopped temporarily, but there is a small portion of aquaponic activists to continue supporting urban agriculture. Related parties such as the agriculture service, universities, and other associated services still provide support for active farmers such as seed assistance, aquaponic equipment support. The basis of collective and principles motivation causes the community to set aside obstacles and minimal profits from urban farming activities through aquaponics. The existence of the moral tenets and togetherness between members, creating these aquaponics activities, still survives. For those who only have egoism motives, such as an interest in aquaponics to find just profit, their aquaponics activities no longer last. For new participants to become members of the aquaponic community, they will take part in prebasic training activities and workshops. Prospective participants must attend basic training and aquaponic production workshops. As for the next, they will learn together through social media and regular meetings with the term 'Chat with the Aquaponic Communities'. The process of collectivity in the aquaponic community can be seen in Figure 4. one another in achieving common goals. Getting a sense of security in participating in groups is manifested in how members protect one another from conflicts outside the group. The next reason is to meet the social needs that realised in becoming active members who help other members in meeting aquaponics needs with other members. The last reason is to increase self-confidence on aquaponics. As seen in tabel 1. Four Motives for Community Involvement, hence collectivism is characteristic of aquaponics activities to support urban farming. Dynamics of community-based aquaponics At the beginning of the aquaponics activities, there was a central figure who ran this system. Within a few months, many people were interested in learning and having aquaponics. Then formed an aquaponic community to support urban farming activities. Because government funding and other parties support this activity. So there are various dynamics in the management of aquaponics by this community. An understanding of group dynamics in the aquaponic community is to find out how to deal with a group's problems. Failure to see group dynamics can lead to unproductive group meetings and member disharmony. Group dynamics according to [25], are divided into 5 five fundamental concepts that can be seen to overcome group failures, namely (a) communication processes and interaction patterns; (b) interpersonal interest and cohesion; (c) social interference and influence; (d) power and control; and (e) the culture described as follows: The communication process and interaction patterns are fundamental group dynamics and components of social interaction that affect the behaviour and attitudes of group members. Communication includes verbal, nonverbal or virtual. Verbal communication occurs in groups that meet directly, whereas virtual communication occurs in-group members who communicate via mobile phones or through social media. This virtual communication called one-way communication, where the sender only respond after the message has been sent. Verbal communication is done by conducting aquaponics training in community groups after registering for a practice. After getting aquaponics understanding, then they develop in their public or private area since 2016. 
However, there are aquaponics owners who don't participate in the training, so they don't understand the technology. As seen in table 2, the dynamic of community-based aquaponics in Semarang. Table 2 shows that many participants have to understand the system and know how to care for aquaponics, but some other participants did not understand this system. Aquaponics learning community always active on social media; this forms new knowledge about urban farming. Every aquaponics communicator, they send a message that has meaning for changing the perspective of agriculture and raising fish. Current champions read and understand the meaning of the word carefully. In a face-to-face meeting, aquaponic members always communicate, even if they do not communicate verbally, their nonverbal behaviour can be observed to find out the message delivered. In contrast, verbal messages from mobile and virtual groups have essential implications for aquaponics. It happens at various levels, both central and regional, thus creating an individual or interest group aquaponics. In theory, the most effective way of group communication to ensure that the recipient understands the meaning of the sender is for the recipient to provide feedback about the sense that he understands. Group members can give several questions to emphasise the intended purpose to prevent distortion in communication. Toseland and Rivas (2001) in [25] argues that useful feedback must (i) describe the content of communication as felt by members; (ii) immediately sent to members who send messages after the message received; and (iii) expressed tentatively so that it is clear that feedback is intended to clarify the original message rather than confront or attack the sender. For aquaponics, discussion in social media is a significant activity, to increase their understanding. The pattern of interaction is also a fundamental group dynamics process. Some common trends of cooperation include the main pillar, where the aquaponic leader is the central figure, and most of the communication takes place from member to leader to member. Interaction patterns are influenced by the tendency of members to communicate. Some members are friendlier than others and bring more opportunities to interact so that the aquaponics community can running well. Interaction patterns are influenced by verbal and nonverbal cues such as comments, eye contact, and other expressions. The status and power relations in this aquaponic community also influence patterns of interaction. Members with higher status tend to communicate more than members of lower rank. Interpersonal attraction and emotional bonds that form between members also affect patterns of interaction. For example, members of subgroups tend to interact more with each other than with other group members. Group size also affects interactions. The smaller the group, the more opportunities each member has to communicate. Physical arrangements can also have an essential impact on patterns of interaction. The most important thing for the aquaponic community is to have several members who take the role of facilitators to complete tasks and other members who take parts that meet the socialemotional needs of members. Thus the members who guard the group on duty are empathetic, and some help the group develop positively. Although not all members can running as expected. 
Conclusion

Regarding the discussion above, especially the motives and dynamics of community-based aquaponics for urban farming in Semarang, the outcomes are as follows. Community-based aquaponics plays a role in supporting urban farming activities. Members collaborate according to their abilities and capacities in aquaponic development through social learning, so the motives that survive are altruism and collectivism; this motivation characterises an aquaponic participant. In the aquaponics development process, there are differences in knowledge and understanding among participants, which arise from group dynamics. Most members feel comfortable and continue their activities; however, some members lack knowledge of the aquaponic system, and their devices stop running temporarily.
Retinal detachment with a break at pars plicata associated with congenital malformation of the lens–zonule–ciliary body complex

Retinal detachment with a break at the pars plicata associated with congenital malformation of the lens–zonule–ciliary body complex is rare; most reports are of young Japanese male patients with atopic dermatitis. The present case report is the first to describe the condition in a Chinese patient with no history of atopic dermatitis or trauma. A 22-year-old male presented with blurred vision in the left eye for 4 months. Fundus examination revealed shallow lower temporal retinal detachment. Further examination with scleral indentation under maximal pupil dilatation identified a break at the far periphery beyond the ora serrata and pars plana. Gonioscopy revealed a pars plicata break at the nonpigmented ciliary epithelium associated with congenital ciliary process hypoplasia and a subtle lens defect at the same meridian. The retina was successfully reattached after segmental scleral buckling, cryopexy, and laser photocoagulation.

Introduction

Retinal detachment with a break at the pars plicata was first described in 1953 in a case report of perforating trauma [1]. A multicenter study showed that 4.8% of retinal detachments in patients with atopic dermatitis were caused by breaks at the pars plicata [2]. Retinal detachment with a pars plicata break associated with lens coloboma and adjacent hypoplastic ciliary processes is rarely reported, and most reports are of Japanese patients with a history of atopic dermatitis [3,4]. The present case report describes unilateral shallow retinal detachment with a break at the pars plicata and associated congenital malformation of lens coloboma and a rudimentary ciliary process, without atopic dermatitis, in a Chinese patient. The retina was reattached after segmental scleral buckling, cryopexy, and laser photocoagulation.

Case Report

A 22-year-old Chinese male developed progressively blurred vision in the left eye over 4 months. He had previously been examined by several ophthalmologists without a definite diagnosis and was referred to our clinic for further evaluation and management. Upon examination, his best-corrected visual acuity was 6/6 in the right eye and 3/60 in the left eye. The refractive error was −7.5 −0.5 × 180 in the right eye and −8.0 −1.75 × 170 in the left eye. He had no history of ocular trauma or systemic disease, including atopic dermatitis. Slit-lamp microscopy showed clear lenses and silent anterior chambers bilaterally. Binocular ophthalmoscopy of the left eye showed a shallow retinal detachment at the temporal lower quadrant in the 2 to 5 o'clock meridian, with macular involvement but without a definite retinal break (Fig. 1). Fluorescein angiography showed a silent optic disc and macula without any sign of exudative retinal detachment. Optical coherence tomography also revealed a neurosensory retina detached from the retinal pigment epithelium at the macula (Fig. 2A). A detailed binocular retinal examination with a contact lens under microscopy also failed to demonstrate a retinal break up to the ora serrata. After an extensive discussion with the patient, segmental scleral buckling was recommended.
Intraoperatively, scleral indentation revealed a break beyond the ora serrata and pars plana at the temporal lower quadrant, from the 3:30 to 4:30 o'clock meridian. Cryopexy was performed at the peripheral retina and pars plana adjacent to the break, and a high segmental buckle was applied at the ora serrata posterior to the break. The subretinal fluid resolved completely 10 days postoperatively (Fig. 3), and optical coherence tomography showed an attached macula (Fig. 2B).

Fig. 3. Ten days after performing the scleral buckling and cryopexy, the retina is well attached. A break (arrow) anterior to the ora serrata is found at the temporal lower quadrant.

However, postoperative slit-lamp microscopy with a goniolens revealed a break at the pars plicata nonpigmented epithelium with its edge pulled toward the lens (Fig. 4). The surrounding ciliary process was rudimentary, indicating a focal hypoplastic ciliary body. The detached membrane of the pars plicata extended posteriorly and was continuous with the detached pars plana and retina. Diffuse light with retroillumination during maximal pupil dilatation showed a subtle lens defect with segmental flattening adjacent to the pars plicata break (Fig. 5). The patient was diagnosed with retinal detachment with a pars plicata break associated with congenital malformation of the lens–zonule–ciliary body complex. Three months later, the patient experienced head trauma by bumping into a door. Fundus examination revealed localized shallow subretinal fluid surrounding the pars plicata break, and no additional break was noted. Laser photocoagulation was applied directly onto the scleral buckle and its posterior edge to confine the subretinal fluid (Fig. 6A). His condition remained stable during the 3-year follow-up (Fig. 6B), and the best-corrected visual acuity remained at 6/60 owing to long-term macular detachment prior to definite diagnosis and treatment.

Discussion

Retinal detachment with a pars plicata break associated with congenital malformation of the lens–zonule–ciliary body complex is rarely reported [3–5]. It may be overlooked on examination because the detachment is typically shallow and the break is concealed at the distal periphery behind the iris, with only a subtle lens anomaly visible only during maximal pupil dilation. The most important procedure to detect the pars plicata break is binocular indirect ophthalmoscopy with scleral indentation, along with ultrasound biomicroscopy, if available [6]. Previously reported cases of pars plicata breaks are in young Japanese male patients, most of whom have a history of atopic dermatitis [3,4]. To the best of our knowledge, this is the first reported case in a Chinese patient and one without any history of atopic dermatitis. The pathogenesis of spontaneous breaks in the pars plicata nonpigmented epithelium in patients with congenital malformation of the lens–zonule–ciliary body complex remains unknown. All of the reported cases have a pars plicata break located in a region of segmental ciliary process hypoplasia [3–5]. Embryology shows that the nonpigmented and pigmented epithelium of the ciliary body are the anterior extensions of the neural retina and retinal pigment epithelium, respectively. The two ciliary body epithelial layers are normally tightly attached by intercellular junctions [7,8].
Congenital ciliary process hypoplasia may be associated with degenerative epithelium, which was shown to have loose intercellular junctions in an animal study [9]. Thus, the hypoplastic ciliary process may be more vulnerable to traction of the zonules, which is generally believed to cause the pars plicata break and lead to retinal detachment [10,11]. Retinal detachment associated with a pars plicata break shares several similarities with retinal detachment caused by retinal dialysis: both occur predominantly in young males, most have an inferotemporal break, and both are easy to overlook owing to the challenge of examination [12,13]. In retinal dialysis, however, the break occurs at the ora serrata, is usually associated with trauma, and lacks any lens–zonule–ciliary body anomaly [12,13]. Retinal detachment related to pars plicata breaks is usually treated with anterior scleral buckling at the ora serrata and cryopexy or diathermy to seal the breaks [4,14]. Some authors recommend performing segmental intrascleral buckling directly above the break, but this may result in marked astigmatism [3,5,14]. For larger tears, encircling buckling is recommended to reconstruct an artificial ora serrata and release the zonular traction [14]. Pars plana vitrectomy with lensectomy is considered in recurrent cases or cases with particularly large tears to alleviate the tractional force [5]. In this patient, high segmental scleral buckling targeting the ora serrata, combined with cryopexy at the peripheral retina and pars plana adjacent to the break, was adequate to reattach the retina. Additional laser photocoagulation at the buckle confined the shallow subretinal fluid following the later episode of head trauma. In conclusion, the present report marks the first Chinese case of spontaneous retinal detachment with a pars plicata break associated with malformation of the lens–zonule–ciliary body complex. Although the retina was successfully reattached surgically, the visual prognosis is guarded because of the chronicity of the macular detachment. Detailed retinal examination, including binocular indirect ophthalmoscopy with scleral indentation and gonioscopy of the pars plicata, should be considered in cases of retinal detachment without a visible break, even in patients without previous atopic dermatitis.

Conflicts of interest: The authors have no potential financial or nonfinancial conflicts of interest to declare.
Boundary lubrication by adsorption film

A complete understanding of the mechanism of boundary lubrication is a goal that scientists have been striving to achieve over the past century. Although this complicated process is far from fully revealed, a general picture and its influencing factors have been elucidated, not only at the macroscopic scale but also at the nanoscale, which is sufficiently clear to provide effective instructions for lubrication design in engineering and even to efficiently control boundary lubrication properties. Herein, we provide a review of the main advances, especially the breakthroughs, in uncovering the mysterious but useful process of boundary lubrication by adsorption film. Despite the enormous, albeit unsystematic, amount of knowledge acquired in this area, in the present review an effort was made to clarify the main line of leading perspectives and methodologies in revealing the fundamental problems inherent to boundary lubrication. The main content of this review includes the formation of boundary film, the effects of boundary film on the adhesion and friction of rough surfaces, the behavior of adsorption film in boundary lubrication, boundary lubrication at the nanoscale, and the active control of boundary lubrication, generally sequenced based on the real history of our understanding of this process over the past century and incorporating related modern concepts and prospects.

Definition of boundary lubrication

A comprehensive and popular definition of lubrication given by Wikipedia is as follows. Lubrication is the process or technique employed to reduce friction between, and wear of one or both, surfaces in close proximity and moving relative to each other, by interposing a substance called a lubricant between them. The lubricant can be a solid, a solid−liquid dispersion, a liquid, a liquid−liquid dispersion, or a gas. As pointed out by Dorinson and Ludema in their textbook [1], such a dictionary definition of lubrication as the mitigation of friction and wear should be regarded as a description rather than a definition, as it does not answer what lubrication is without understanding what friction and wear are, which are themselves terms of phenomenological description that lack rigorous scientific definitions. In addition, the effects of a lubricant film are not limited solely to friction and wear, but also encompass other aspects of an interface, such as the surface energy, surface forces, electronic and ionic emissions, electric double layer, adhesion, and cohesion, which are of great importance in practice. Although there are no well-accepted rigorous scientific definitions available for the term "lubrication", it has been well explored and clarified that there are two distinctive basic lubrication regimes in the accepted engineering terminology: fluid film lubrication and boundary film lubrication (or boundary lubrication for short). Fluid film lubrication is the lubrication regime in which there is a continuous fluid film separating the solid surfaces [2]. A characteristic of fluid film lubrication is that the external load on the solid surfaces is fully supported by the pressure generated within the fluid film. When a fluid film is created and maintained by an external pump forcing the lubricant within the space or gap between the parts, it operates in the hydrostatic mode.
When the fluid film is formed and maintained by the viscous drag of the moving surfaces themselves as they slide relative to one another with a wedge, we say that it operates in the self-acting hydrodynamic mode. If the pressure in the fluid film is so high that deformation of the solids under the pressure is not negligible, and/or the changes in the viscosity and density of the interposing lubricant within the gap are no longer ignorable, such as under counterformal conditions, we call such a state elastohydrodynamic lubrication (or plastohydrodynamic lubrication when the deformation is plastic). The principle of fluid film lubrication is solely dominated by fluid mechanics and solid mechanics. The thermal effects on fluid film lubrication can also be taken into account. Differing from fluid film lubrication, the lubricant under boundary lubrication does not possess fluidity, regardless of where it originates or whether it is generated from a gas, a solution, or another medium, and thus fluid mechanics fails to adequately describe the behavior of boundary films. Another characteristic of boundary film is that it covers the solid substrate in the form of molecularly thin layers, from a monolayer to multiple layers. In boundary lubrication, if the solid substrate is atomically smooth and/or the local pressure is insufficiently high to break down the intervening boundary film, the load is fully carried by the molecularly thin layer, as illustrated in Fig. 1. For most rough engineering surfaces, however, the external load is jointly borne by the boundary film itself and the distributed contacting asperities that penetrate through the thin boundary film, as shown in Fig. 2. Upon the relative sliding of one rough surface against the other, a series of events rich in variation occur, including stick-slip motion, temperature rise, wear debris generation, material transfer, and tribofilm formation. In the majority of engineering applications, a transition regime between the fluid film and boundary film lubrication regimes is often encountered when the operating conditions are changeable. This transition regime is called mixed lubrication. To deal with mixed lubrication, theories for both fluid film and boundary film lubrication are needed.

The significance of boundary lubrication

The effectiveness of boundary lubrication for the reduction in friction and wear can be demonstrated through a recreation of Leonardo da Vinci's friction experiments, which was recently reported [3]. The authors attempted to construct a nearly faithful recreation of Leonardo da Vinci's apparatus for measuring friction based on his notebook illustrations, and investigated the conditions under which Leonardo da Vinci conducted his experiments. They indeed reproduced Leonardo da Vinci's findings of static friction coefficients with wood of μ_s = 0.25 ± 0.03, but only when sliding blocks were used with roughly cut surfaces sullied by natural oils from fingertips and hands and by dust in the air. The static friction coefficient μ_s measured under dry, clean, smooth, and sanded conditions is 0.72 ± 0.04. Eliminating sanding from the experimental procedure lowers the static friction coefficient (μ_s = 0.44 ± 0.04), and introducing fine olive wood sawdust on the sliding surfaces achieves an average static friction coefficient of μ_s = 0.35 ± 0.03. This study illustrates how sensitive static friction is to changes in surface roughness and contamination, or in other words, to the state of boundary lubrication.
Static friction has an extremely important significance in machine design and operation. A high static friction means that an extra-high resistance needs to be overcome, in addition to the inertial force or torque of the driven parts, when starting or re-starting a machine. This is an especially important issue in the design of heavy-duty mechanical systems, such as those used in the mining and metallurgical industries, in which a soft-start technology is required. For microelectromechanical systems (MEMS), on the other hand, only a very limited driving force is available; if the static friction is high, overcoming the frictional resistance is difficult, giving rise to a stiction failure or causing severe, unacceptable deformation owing to the weak microstructure involved. Even if the frictional resistance can be overcome with an adequate capacity of the driving system, the difference in magnitude between the static friction and kinetic friction, the latter being generally lower than the former, usually causes a stick-slip motion at each start of the machine, which is harmful for the stability of the mechanical system, especially for ultra-precision instruments such as a gyroscope. To obtain a smooth and quiet start, a viscous-type Stribeck relationship (i.e., near zero friction at the initial speed with a gradual increase at moving speeds) rather than a Coulomb-type one (a step change in friction with movement) is desirable. The coverage, microstructure, and behavior of the boundary film, as well as the shear strength of asperity contacts (see Fig. 2), are the key factors determining the static friction, and these are discussed in Sections 3, 4, and 5. The significance of lubrication in reducing friction in industrial applications is often described by a plot of the so-called Stribeck curve (Fig. 3), which shows how the kinetic friction coefficient of a lubricated bearing changes with a lumped operation parameter, B = ηV/P, where η is the dynamic viscosity of the lubricant (Pa·s), V represents the sliding speed (m/s), and P denotes the specific load defined as P = W/DL (W, external load/N; D, bearing diameter/m; L, bearing length/m). Parameter B, also regarded as the bearing number, represents the intensity of the operating condition for achieving full film lubrication. Only when B is greater than a critical value can the lubricated bearing work in the full film lubrication regime, which is preferable to boundary lubrication or mixed lubrication because not only the friction coefficient but also the wear is relatively low. The desired working point is in the vicinity of the minimum point of the curve. If the bearing number is in the right-hand zone, far beyond the desired working point, the viscous energy loss dissipated not only within the bearing but also in the entire lubricant circulation system will increase remarkably. Therefore, lower-viscosity oils, such as 0W/20 or 0W/10, are better than 5W/40 or 5W/30 for engine lubrication, provided that full film lubrication of the engine parts will be maintained. With the development of many smaller and heavy-duty machine components, the specific load is continuously increasing, causing a continual reduction in the bearing number.
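To make the role of the lumped parameter concrete, the following minimal sketch evaluates B = ηV/P for a hypothetical journal bearing; all operating values are illustrative assumptions, not data from the text, and the sketch shows only how a thinner oil shifts the operating point down the Stribeck curve.

```python
# Sketch: computing the lumped operation parameter B = eta*V/P from the text.
# All numerical values are illustrative assumptions, not data from the review.

def bearing_number(eta, V, W, D, L):
    """eta: dynamic viscosity (Pa.s); V: sliding speed (m/s);
    W: external load (N); D, L: bearing diameter and length (m)."""
    P = W / (D * L)          # specific load (Pa)
    return eta * V / P       # lumped operation parameter B

# Halving the viscosity (a thinner oil) halves B, pushing the operating point
# toward the mixed/boundary side of the Stribeck curve unless full film
# lubrication can still be maintained.
B_thick = bearing_number(eta=0.05,  V=2.0, W=5000.0, D=0.05, L=0.05)
B_thin  = bearing_number(eta=0.025, V=2.0, W=5000.0, D=0.05, L=0.05)
print(B_thick, B_thin)       # B_thin < B_thick
```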
Under the harsh conditions of elevated temperature, a vacuum environment, solid lubrication, reciprocating movement around dead positions, starved lubricant supply, and starting and stopping processes, the contact surfaces may experience, or even remain at, a working state at which the bearing number is close to zero, whereby boundary lubrication becomes dominant; the friction coefficient then depends weakly on the bearing number, but strongly on the chemistry and microscopic structures of both the lubricant and the contact surfaces. In the boundary lubrication regime, the coefficient of friction usually increases and fluctuates much more than for full film lubrication, and damage to the contact surfaces therefore becomes inevitable. This results in a worsening of the mechanical efficiency and the durability of the machine components. Unlike in full film lubrication, a quantitative prediction of the friction and wear during the design stage is very difficult for boundary lubrication, if not impossible, as there are no reliable theoretical models available. Therefore, most research studies on boundary lubrication have been conducted experimentally.

Experimental techniques for characterizing boundary lubrication

As described in Section 1.1 and illustrated in Fig. 2, a molecularly thin boundary film is sandwiched between two irregularly rough solids under a particular load. There are at least three significant obstacles to the experimental exploration of boundary lubrication [4]. First, asperity contact occurs principally at a multitude of the summits of the surface roughness, meaning that the basic material volume affected by contact tends to be extremely small, even at the nanoscale, and is thus difficult to detect. Second, the heterogeneous mating interface is embedded deeply between two solids that are normally not transparent, and is thus inaccessible to most scientific characterization techniques. Finally, during the process of sliding, the contacting microstructures are not static, but evolve as the rubbing progresses. To overcome these difficulties, various experimental techniques have been invented and developed, which are used either independently or along with traditional friction testers. Herein, we classify the experimental techniques available for boundary lubrication investigations into three categories, pre-characterization, in-situ characterization, and post-characterization, based on whether the experimental characterization is conducted before, during, or after the friction and lubrication tests, and briefly introduce each. Pre-characterization mainly focuses on one or both of the solid surfaces to be lubricated and contacted. The scope of pre-characterization can cover structural, physical, and chemical properties over a wide range from the macroscopic to the atomic scale. A variety of scientific instruments and tools are available for pre-characterization of a solid surface. Modern surface topography instruments, including various optical microscopes, stylus profilometers, the scanning tunneling microscope (STM), and the atomic force microscope (AFM), enable researchers to quantitatively reveal 3D geometric surface structures down to the nanometer scale. The scanning electron microscope (SEM) and transmission electron microscope (TEM) have become powerful tools for high-resolution inspection and semi-quantitative characterization of surface and sub-surface structures.
Hardness testers, nanoindenters and nanoscratching testers, contact angle meters, surface tension meters, and various types of functional AFM are widely used to characterize the mechanical and other physical properties of a solid surface. To characterize the chemical composition and molecular structure of a solid surface, Auger electron spectroscopy (AES), electron spectroscopy for chemical analysis (ESCA), X-ray photoelectron spectroscopy (XPS), secondary ion mass spectrometry (SIMS), and Raman spectroscopy are also available. However, most electron-, ion-, or X-ray-based tools require a high-vacuum environment to accommodate the sample to be inspected, which means that boundary films adsorbed on the solid surface are cleaned off. To measure the mass and thickness of an adsorbed boundary film, the quartz crystal microbalance (QCM) and ellipsometer are very useful tools. In-situ characterization of boundary lubrication and friction is much more difficult to achieve than pre-characterization because of the second and third obstacles mentioned above. AFM working in contact mode is often used to study boundary lubrication at the nanoscale, or at the microscale by replacing the tip with an attached microsphere, because of its ability to measure normal and lateral forces. By equipping a miniature actuator and force sensor, SEM and TEM can be used to capture atomic images of the deformation and rupture of an asperity contact and the material transfer during the rubbing process. A limitation of such a powerful technique is that it can only be used under vacuum conditions. In a vapor- or liquid-phase environment, which most boundary lubrication systems encounter, optical methods need to be used. Based on the high resolution of film thickness achieved with the method of fringes of equal chromatic order (FECO), the surface force apparatus (SFA) is a powerful instrument for the characterization of boundary film lubrication. Fourier transform infrared spectroscopy (FTIR), Raman spectroscopy, and sum frequency generation (SFG) are also used in studies on boundary lubrication, usually incorporated with a friction tester. A disadvantage of these optical techniques is the necessity of using an optically transparent contacting part. For non-transparent contacts, acoustic emission (AE) sensors and charge emission detectors are useful tools for detecting basic physical events occurring during boundary lubrication. All of the experimental techniques used for pre-characterization of lubricated and rubbed surfaces are suitable for post-characterization. Although important, a point-to-point correlation of the surface characteristics between pre- and post-characterizations is difficult to achieve. In addition, non-intentional contamination and the adsorption/desorption of substances from/into the environment is a problem that needs to be prevented for precise post-characterization.

Adsorption film and the structure of the present review

According to the definition by IUPAC, adsorption refers to an increase in the concentration of a substance at the interface of a condensed and a liquid or gaseous layer owing to the operation of surface forces. The substance, regarded as the adsorbate, is in general tiny particles of atoms, ions, or molecules from a gas, liquid, or dissolved-solid phase. The surface upon which the adsorbate resides once it is adsorbed is relatively much larger in size, and the body of the surface is regarded as the adsorbent. The reverse process of adsorption is desorption.
Adsorption and desorption are surface-based processes driven by changes in the surface energy. They occur spontaneously, obeying the laws of thermodynamics. Therefore, they may appear anywhere and at any time, in an intentional or non-intentional way, regardless of our awareness of them. Under given conditions, the amount of adsorbate, or the adsorption film thickness, is determined by the equilibrium between the rates of adsorption and desorption, and the microstructure of the adsorption film is a result of the balance between the adsorbate–adsorbate intermolecular energy, the adsorbate–surface interaction energy, and the adsorbate–external field interactions. When the adsorbate–surface interaction mainly comes from the weak van der Waals forces, the adsorption is generally classified as physisorption. If the characteristic of the adsorbate–surface interaction is covalent bonding, the adsorption is classified as chemisorption. The formation of boundary films based on physisorption or chemisorption is described in Section 2 of this review. The effects of boundary film on the adhesion and friction of rough surfaces are detailed in Section 3, followed by a discussion in Section 4 of the behavior of the adsorption film in boundary lubrication. In Section 5, research progress on boundary lubrication by adsorption film at the nanoscale is highlighted. Finally, in Section 6, active control of boundary lubrication is introduced, and the role of electrostatic attractions and repulsions between charged adsorbates and a charged surface in the formation of boundary film is emphasized. Each section represents a focal point in revealing the fundamental mechanism of boundary lubrication from a certain era, generally sequenced based on the real history of our understanding of this process during the past century; the modern concepts of these topics are also naturally incorporated into each section. Adsorption films are the common boundary films in a gaseous or liquid-phase environment. Under high-vacuum conditions, however, adsorption film is absent or negligible, and thus solid film lubrication is the only available choice. Although solid film lubrication is an important branch of boundary lubrication, it is not included in this review owing to space limitations. Another important topic of boundary lubrication excluded from this review is the chemical reaction film, as there are already a number of review papers on this topic in the literature; see references [4–6] for examples.

The function of adsorption film

The first systematic study of boundary lubrication was conducted by Hardy [7] in the early 1920s, which was the start of nearly a century of intensive study of the mechanism of boundary lubrication. Hardy measured the static friction using a homologous series of paraffins, alcohols, and acids as lubricants. An amazingly simple relationship was obtained: friction is a function of separate contributions by the solid surfaces, the chemical series to which the lubricant belongs, and the chain length of the lubricant molecules. To interpret these data, he created a picture illustrating the mechanism of boundary lubrication in which the molecules of the lubricant are oriented at each sliding surface to form a monolayer adsorption film (Fig. 1). The solids sink through the bulk lubricant until the surfaces are separated by only the unimolecular films of the lubricant adsorbed on each surface.
As the lubricant molecules consist of a polar head group and an alkyl tail, the head groups adhere to the substrate during sliding, whereas the tails are exposed, forming a closely packed layer, and the two boundary layers rub past each other instead of the surfaces themselves. Owing to the relatively weak shear strength of the van der Waals interactions between the adsorbed layers, lower friction can be achieved. Hardy also assumed that the extent of the reduction in the fields of surface force, which determines the adhesion force between the nonpolar tails of the adsorbed molecules, is a function of the molecular chain length, thereby explaining the linear relationship observed between the friction and molecular weight for different members of a homologous series. Based on his experiments and analyses, Hardy emphasized the necessity of applying modern physical and chemical concepts and methods to the study of the lubrication process. The existence of orientated films was soon confirmed by X-ray scattering experiments [8]. Hardy's theory was also supported by Cameron [9], who used the Kirkwood-Müller formalism to calculate the van der Waals interactions and thus deduced the friction force (1960). It is reasonable to suppose that the interactions between the adsorbed molecules contribute to the total frictional force. However, as Hardy's experiments used an enormous amount of lubricant between the solid surfaces, the model he created could not be validated, and a question therefore remained as to whether a monolayer of adsorbed film is able to provide a boundary lubrication effect. This question was investigated by testing the lubrication properties of artificially prepared surface films, especially ordered molecular films such as Langmuir−Blodgett (LB) films and self-assembled monolayers (SAMs), which will be illustrated in the following sections. Another drawback of Hardy's experiments was that only static friction, and not kinetic friction, was recorded and considered in his paper, which was limited by poor instrumentation. Many complications can arise during motion owing to the complex and varying nature of the surfaces in contact, such as fluctuations of the local pressure and temperature, which were later recognized to contribute to the rich phenomena in boundary lubrication.

Langmuir−Blodgett films and their application in boundary lubrication

LB films, named after the two scientists who invented this technique of film preparation, are single or multiple layers of amphiphilic molecules deposited from the surface of a liquid onto a solid substrate. When a drop of dilute solution containing amphiphilic molecules with a hydrophilic head group and a hydrophobic tail is spread onto a water/air interface, the hydrophilic head group is preferentially immersed in water with the hydrophobic tail pointing toward the air. A monolayer of amphiphilic molecules with large spacing between the molecules (Fig. 4(a)) is then compressed by a barrier, forming a condensed and ordered molecular film (Fig. 4(b)). Finally, the monolayer on the water/air interface is transferred onto a solid substrate either by the vertical deposition (Fig. 4(c)) or the horizontal lifting method. Multilayer LB films can be achieved by repeated deposition of single layers on a solid substrate, with the molecular direction alternating between successive monolayers (Y-type) or remaining in the same molecular direction (X- or Z-type).
Using the LB technique they themselves had newly developed, Langmuir [10] was the first to show that a single layer of fatty acid is sufficient to reduce the friction coefficient of glass surfaces from about 1 for clean glass to about 0.1 (1934). The main development in understanding the role of thin layers in boundary lubrication came from Bowden and Leben [11], who deposited specific layers of stearic acid on a steel plate surface using the same technique and investigated the lubrication properties (1940). A steel slider was run across the coated steel plate repeatedly over the same track, and the friction during each run was observed. The process was continued for 100 runs, or until the surfaces were badly torn and the friction had increased to a significantly high level. The experiment was repeated using different numbers of films on top of the lower steel plate. A marked change in the behavior was observed during the running process, and the results obtained for various film thicknesses were recorded (Fig. 5). It is clear that the friction coefficient is almost as low as 0.1 even when only a single layer of the film is present on the surface during the first run. Not only the friction coefficient but also the wear is quite similar to that found when excess stearic acid is present on the surface. However, upon repeatedly running over the same track, the friction soon begins to rise, and eventually the friction and wear reach the same state as for unlubricated surfaces. It is sufficient to conclude that the monolayer acting as a boundary film is rapidly worn off as a result of the repeated sliding process. Similar results were obtained with other initial film thicknesses, but the rate at which the boundary film was worn off was shown to decrease with increasing film thickness. Eventually, when a sufficiently large number of film layers is present, the well-lubricated condition remains even after 100 runs along the same track. Bowden's experiments confirmed the important role that thin adsorbed layers of lubricant play in boundary lubrication. However, because of their complex preparation procedure and low resistance to sliding, LB films were gradually replaced by SAMs in studies on boundary lubrication.

Self-assembled monolayers and their application in boundary lubrication

Self-assembled monolayers are formed spontaneously through the immersion of an appropriate substrate into a solution of amphiphilic molecules. This technique is one of the most frequently studied systems at the molecular level because of its well-defined structure, strong head group binding on the substrate, and dense packing of the hydrocarbon chains [12], which are ideal for studies on boundary lubrication. Moreover, the way in which SAMs act as a boundary lubrication film is very similar to that of additives functioning in commercial lubricating oils. Indeed, chemical reaction films formed by some kinds of special additives, such as ZDDP, are in many circumstances more effective as boundary films [13]; these have been reviewed in some high-quality papers [4−6] but are not included in this article.
The thiol group is a frequently used head group for attachment to a metal substrate such as gold or silver, and such alkanethiol SAMs are known to form highly ordered structures [14]. In general, a well-functional SAM through chemical bonding requires a special matching between the head group and the substrate, such as hydrolyzed trichlorosilane on silicon-based surfaces [15], and a carboxylic group on metal oxide. Hydrophobic and electrostatic interactions are the dominant driving force for physically adsorbed SAMs formed from surfactant molecules, which are usually composed of a hydrophobic chain and a charged head group [16,17]. The aggregation structures of physically adsorbed SAMs are quite diverse depending on the properties of the surface, the molecular structure of the surfactant, and the concentration of the solution, in which monomers and spherical, cylindrical, and hemicylindrical aggregations are commonly formed [18]. Physically adsorbed SAMs are usually weakly attached, and are easily peeled off from the surface during the sliding process. Fortunately, a physically adsorbed film can recover very soon under a deficit, which may be the main reason for providing efficient boundary lubrication effect; for example, the SAMs formed by SDS in aqueous solution can recover very quickly, i.e., on the order of 10 ms on a graphite surface [19,20]. A physically adsorbed surfactant on a solid/liquid interface was reported to improve boundary lubrication properties significantly, and is especially efficient in water-based lubricants [21−24]. There are some other types of SAMs, such as polymer brushes, charged (polyelectrolyte) [25,26] or neutral [27], adsorbed or grafted, on a solid surface [28], each of which has its own special tribological properties related to its unique structure. There was also an attempt to form an ionically bound polymer layer and mobile multilayer with self-healing effect to provide robust and long-lasting boundary lubrication performance [29]. In addition to friction and lubrication experiments, to elucidate the mechanism of boundary lubrication, a thorough inspection of the boundary film is necessary [24]. Adsorption isotherms have long been used to study the adsorption properties of SAMs on a solid/liquid interface [30], but it was far from satisfactory to investigate the adsorption at an interface at the molecular level. Actually, studies on the adsorption process of different interfaces have long been hampered by the need to discriminate between the few atoms at the interface, and the many more atoms that exist in the two bulk phases involved. A number of modern surface-sensitive techniques were developed in the late 20 th century to overcome this obstacle, such as AFM [31][32][33], electrochemistry [34], QCM [35−37], dual polarization interferometry (DPI) [38], and neutron reflectivity [39−41], to provide more information about SAMs on the solid surface both in air and in a liquid. A significant amount of useful knowledge has been acquired from these useful tools applied to SAMs, such as the physical and chemical properties of the molecules, the film thickness, the adsorbed mass, the aggregation structure, and the interactions between the molecules and between the molecules and solid surface. 
To provide a detailed introduction to the principles of such detection methods, and to provide useful guidelines on the criteria for selecting the most appropriate techniques for studying a specific system, a comprehensive review of these modern techniques was made by Zaera [42]. The boundary lubrication properties of SAMs in both macroscopic and nanoscale tribology experiments are illustrated in the following sections.

Adhesion between lubricated surfaces

Ever since Hardy's study was first released, the important role played by a thin layer of adsorbed lubricant acting as a boundary film has been widely recognized. However, it did not take long for researchers to realize that the rich behaviors of boundary lubrication cannot be solely attributed to the adsorption film. In the 1940s, adhesion between surfaces was found to be inevitable during the sliding process under boundary lubrication conditions, which was of vital importance in understanding the rich phenomena of boundary lubrication, such as wear and stick-slip. It was at nearly the same time that the adhesion between dry surfaces in air was acknowledged to play a major part in explaining Amontons' laws of friction. The coincidence in timing was not only because both these works were led by Bowden and Tabor, but also because the experiments from which the friction laws were derived were themselves under somewhat unintended boundary-lubrication conditions, owing to a boundary layer generated by oxidation of the surface or by contamination [43]. Since then, a framework for boundary lubrication research has been built in which the quality of the boundary film is evaluated based on its ability to reduce direct contact or adhesion between solid surfaces, which is still the most efficient logic for interpreting boundary lubrication from a macroscopic or engineering perspective.

The relationship between adhesion and Amontons' laws of friction

The phenomenon in which the frictional force is directly proportional to the applied load, regardless of the area of the sliding surfaces, known as Amontons' laws of friction (1699), can be traced back to the work of Leonardo da Vinci (1452−1519); it has been verified by an enormous number of engineering applications and everyday experience [44], but its interpretation is much more complicated. As roughness is obvious on any surface, early insights focused mainly on the interlocking of the surface asperities. Based on his observations, Coulomb concluded that friction is mainly determined by the work of lifting the load over the summits of these asperities (1781). A breakthrough was made by Bowden [43], who conducted experiments to measure the real contact area by measuring the electrical resistance across the surfaces of the metals in contact (1950). The results show that the real contact area may be unexpectedly small, perhaps less than 1/10,000th of the apparent area of contact. It is also remarkable that the real contact area is directly proportional to the applied load, nearly independent of the surface size, shape, and roughness. Although contact stress will cause an elastic deformation in the vicinity of the points of contact, the experiments indicate that the summits of the irregularities upon which the bodies are supported flow plastically and are crushed down until their cross-section is sufficient to support the applied load.
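The argument sketched above can be compressed into a single line. Assuming fully plastic asperity contact, so that the real area self-adjusts to A = W/H (H being the indentation hardness), and junctions that shear at a strength s, one obtains the classical adhesion-theory estimate:

```latex
% Classical adhesion theory of friction, under the plastic-contact assumption
% stated in the text: the real contact area grows until it just carries the load.
A = \frac{W}{H}, \qquad F = s\,A = \frac{s}{H}\,W, \qquad \mu = \frac{F}{W} = \frac{s}{H}
```

Friction is then proportional to the load and independent of the apparent area, which is exactly Amontons' observation.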
Based on these experiments, it is reasonable to correlate friction with adhesion, since the real contact area between the solids is proportional to the applied load, independent of the actual size of the surfaces, and Amontons' laws of friction can also be understood in this way. Later, based on measured surface roughness data, Greenwood and Williamson [45], and Whitehouse and Archard [46], proposed contact models to describe the processes of contact between rough surfaces during sliding, including the asperity height distributions, skewness, and waviness of the surfaces. The real contact area was proved to be rigorously proportional to the load for rough surfaces because of elastic deformation, regardless of whether plastic deformation has occurred. A simple and elegant explanation of Amontons' laws of friction was thus achieved, which is still the basic understanding of the friction process. Friction in air also takes place under boundary lubrication conditions, owing to a boundary layer caused by oxidation of the surface or contamination [43]; indeed, the friction coefficient becomes much higher when the surface films are fully driven off, that is, when the contact surface is in a high vacuum.

Adhesion between lubricated surfaces

Bowden also found that plastic deformation and adhesion occur in the boundary lubrication regime as well, contrary to the picture proposed by Hardy in which the surfaces are fully separated by adsorption films, although the adsorption film does play a major role in reducing friction and wear in boundary lubrication. The adhesion between friction pairs under boundary lubrication conditions was confirmed by Bowden and Leben [11], who took photomicrographs of the track and compared them with the unlubricated cases. The results show that the track of a lubricated solid surface is similar to, but somewhat smaller than, that of an unlubricated surface. Based on a systematic study of the friction and wear of paraffins, alcohols, and acids as lubricants, not only in bulk liquid but also as adsorbed monolayers, Bowden and Tabor [11, 47−49] created a widely accepted model for boundary lubrication (1940). When lubricated surfaces are placed in contact, plastic flow of the solid will occur until the area is sufficiently large to support the applied load. However, the pressure distribution is not uniform over the contact region. At certain points, the pressure will be much higher than average, and at these points a local breakdown of the lubricant film may occur (area A1 in Fig. 2). The breakdown of the lubricant film will lead to the formation of contact junctions, or adhesion, between the sliding surfaces, the size of which will be much larger than the lubricant molecule. Thus, the resistance to transverse motion is the sum of the force needed to break the junctions between the surfaces and the shear resistance generated by the lubricant film itself during the sliding process, which can be expressed as

F = A[αs_m + (1 − α)s]

where A is the area supporting the applied load (including A1 and A2 in Fig. 2), α is the fraction of the area over which a breakdown of the lubricant film has occurred, α = A1/(A1 + A2), s_m is the shear strength of the contact junctions, and s is the shear strength of the adsorbed boundary film. For well-lubricated surfaces, the area of direct contact (A1) is very small compared with the real contact area, while the shear strength of these junctions is much higher than that of the lubricant film, so that they are responsible for the major part of the resistance to motion.
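To see how sensitive the total friction is to even a small film-breakdown fraction, the sketch below evaluates the composite expression numerically; the contact area and the two shear strengths are illustrative orders of magnitude, not measured values from the text.

```python
# Sketch: the Bowden-Tabor composite friction force F = A*(alpha*s_m + (1-alpha)*s).
# All numbers are illustrative assumptions chosen only to show the sensitivity
# of the total friction to the film-breakdown fraction alpha.

def friction_force(A, alpha, s_m, s):
    """A: load-bearing area (m^2); alpha: film-breakdown fraction (0..1);
    s_m: junction shear strength (Pa); s: film shear strength (Pa)."""
    return A * (alpha * s_m + (1.0 - alpha) * s)

A, s_m, s = 1e-8, 1.0e9, 2.0e7   # tiny real contact area; strong junctions, weak film

# Even a 1% breakdown fraction already contributes about one third of the friction:
for alpha in (0.0, 0.01, 0.05):
    print(alpha, friction_force(A, alpha, s_m, s))
```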
The role of the adsorbed boundary layer is to reduce the amount of real contact area, or adhesion, between the sliding surfaces by interposing an intermediate layer, while itself possessing a relatively low shear strength during sliding. A framework for understanding boundary lubrication was thus generated. An enormous number of studies regarding the efficiency of the boundary film in avoiding adhesion between solid surfaces, and the behaviors of the molecules in the boundary layer, have since been conducted within this framework.

The relationship between adhesion and stick-slip phenomenon

Bowden observed that stick-slip may occur when the lubricating performance of the boundary film is relatively poor. The transition between stick-slip friction, which is harmful to the friction pair, and continuous sliding, which is desirable, can be controlled by several factors [49]. Under the same conditions, continuous sliding with a low friction coefficient is easier to achieve using stearic acid as the lubricant, which can form a metallic soap on a steel surface, than using physically adsorbed films such as hydrocarbons and alcohols. The chain length is also an important parameter in this transition. Taking steel surfaces lubricated by fatty acids as an example, stick-slip friction occurs when the chain length is short, and continuous sliding is achieved, with an appreciable reduction in friction and wear, when the chain reaches a certain length of 5 or 6 carbon atoms, corresponding to a molecular weight of about 100. Another influencing factor is temperature. A transition from continuous sliding to stick-slip friction upon an increase in temperature may occur when the heating is insufficient to cause appreciable oxidation of the lubricant, and it is reversible upon cooling. For pure paraffins and straight-chain alcohols, the transition occurs at the bulk melting point of the compound, and the transition temperature is sharp and clearly defined. With fatty acids, the transition temperature depends on the load, velocity, and other experimental conditions, but is usually higher than the bulk melting point of the fatty acid. Bowden was aware that the stick-slip phenomenon is related to the intermittent clutching caused by adhesion and breaking away of the surfaces, but he did not provide a more detailed explanation. The stick-slip phenomenon, a central topic in boundary lubrication that has yet to be fully elucidated, has aroused the interest of scientists in many other disciplines. Indeed, research on the stick-slip phenomenon, characterized by non-uniform relative motion between rubbing surfaces, is of vital importance in understanding articular cartilage damage [50], vibrations in vehicle suspensions and brake systems [51], erratic motion in industrial machinery and tools [52], and even active geological faults during an earthquake [53,54]. Although stick-slip may be welcome as a way of generating elegant sound in musical instruments such as the violin, more often its appearance is detrimental, as damage and wear of the materials are often caused by stick-slip motion accompanied by a harsh noise. The stick-slip motion or friction-induced vibration in an engineering system can be described and predicted by dynamic friction modeling and simulations [55−57].
Taking advantage of the high calculating efficiency of modern computers, dynamic friction modeling can reproduce the dynamic motion of a sliding system very closely to the experimental results, even the hysteretic effects of friction, and is sufficiently powerful to provide credible guidance for the design of an engineering system. However, a model of the dependence of the friction coefficient on sliding conditions, such as the relative velocity, internal variables, and relative acceleration, has to be pre-established before the simulation; this is more of a semi-empirical formula of friction, which ignores the nature of the friction process. Another important achievement in understanding the mechanism of stick-slip is the phase transition model [58], established by shearing a confined thin layer of lubricant film in the SFA together with related molecular dynamics simulations, which is illustrated in greater detail in Section 5.1. However, an experimental condition in which there is no breakdown of the boundary film, with atomically smooth sliding surfaces under very low contact pressure, cannot be applied to the normal macroscopic friction process. Using a ZrO2 ball and a stainless steel plate as the friction pair, Zhang and Meng [24] conducted a systematic study of the boundary lubrication performance, especially the stick-slip phenomenon, in SDS aqueous solutions (2014). The SAMs formed on the steel surface by SDS molecules act as a boundary layer to provide lubrication during the sliding contact. As the SDS concentration increases, the mass of the adsorbed SDS molecules increases monotonically, whereas the structure of the adsorbed layer changes from monomers to hemimicelles, as illustrated in Fig. 6. The stick-slip phenomenon was studied (Fig. 7) at four typical concentrations of 0.01, 0.1, 1, and 10 mM, representing adsorption states with different adsorbed masses and adsorption structures. At the SDS concentration of 0.01 mM, regular stick-slip spikes are present and repeat periodically at a constant driving velocity, characterized by the static friction coefficient (μ_s), derived from the force needed to initiate sliding from rest, and the kinetic friction coefficient (μ_k), derived from the force needed to maintain the sliding process. As the concentration increases to 0.1 mM, both the static and kinetic friction coefficients decrease. As the concentration increases to 1 mM, the stick-slip disappears, shifting to continuous sliding with a low friction coefficient of about 0.1. When the concentration reaches 10 mM, at which the adsorbed SDS molecules form hemimicelles on the surface of the stainless steel, stick-slip appears again: the kinetic friction coefficient remains at a low level, but the static friction coefficient becomes quite large. When the driving velocity increases, the kinetic friction coefficient stays nearly steady at a low value of about 0.1, whereas the static friction coefficient decreases until a critical driving velocity is reached, at which point the stick-slip friction turns into continuous sliding (μ_s = μ_k). The dependence of the static friction coefficient on the driving velocity results from the fact that the static friction coefficient depends mainly on the adhesion, which increases over time. The general process of stick-slip friction can be deduced from these characteristics of the stick-slip phenomenon.
During continuous sliding, or during the slip process when stick-slip occurs, the adhesion, or α, is mainly determined by the adsorbed mass of SDS on the steel surface, regardless of the adsorption structure, so that the kinetic friction coefficient decreases monotonically as the SDS concentration increases and remains stable at about 0.1 when the SDS concentration is sufficiently high. During the stick process in stick-slip friction, in contrast, α is related to both the adsorbed mass and the adsorption structure. When SDS molecules are adsorbed as monomers, the static friction coefficient also decreases with SDS concentration. However, when the SDS concentration is 10 mM [24], although the adsorbed mass is high enough, the regular hemimicelles expose some areas of bare surface, which allow the adhesion to grow, so that α is large and the static friction coefficient is high. The influence of sliding velocity on the static friction coefficient stems from the phenomenon that adhesion grows over time. When the driving velocity is low, there is sufficient time for the adhesion, or α, to grow during the stick process, so that there are large stick-slip spikes. As the driving velocity increases, the time available for the adhesion, or α, to grow becomes smaller and smaller until the critical velocity is reached, at which point stick-slip turns into continuous sliding. This research shows that stick-slip arises from the difference in friction between sliding and sticking, and that the boundary layer can affect the two separately by influencing the adhesion between the solid surfaces. The approach of correlating the macroscopic boundary lubrication properties with the adsorption film behaviors at the nanoscale should be emphasized in the future.
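The picture just outlined, breakaway governed by a static threshold that exceeds the kinetic resistance, can be reproduced qualitatively with a minimal single-degree-of-freedom spring-slider model of the kind used in the dynamic friction modeling literature cited above. The sketch below is a generic illustration with assumed parameters, not a reconstruction of the cited SDS experiments:

```python
# Minimal spring-slider sketch of stick-slip: a block of mass m is dragged
# through a spring of stiffness k whose free end advances at speed v_drive.
# Static friction mu_s > kinetic friction mu_k produces the stick-slip cycle.
# All parameters are illustrative assumptions.

m, k, W = 1.0, 100.0, 1.0          # mass (kg), spring stiffness (N/m), normal load (N)
mu_s, mu_k = 0.3, 0.1              # static and kinetic friction coefficients
v_drive, dt, steps = 1e-3, 1e-4, 200000

x = v = x_d = 0.0                  # block position/velocity, driver position
sticking, slips = True, 0

for i in range(steps):
    x_d += v_drive * dt
    spring = k * (x_d - x)
    if sticking and abs(spring) > mu_s * W:   # breakaway: spring beats static friction
        sticking = False
        slips += 1
    if not sticking:
        sign = 1.0 if v >= 0 else -1.0
        a = (spring - mu_k * W * sign) / m    # kinetic friction opposes motion
        v += a * dt                           # semi-implicit Euler
        x += v * dt
        if v <= 0.0:                          # block re-arrests
            v, sticking = 0.0, True

print(f"{slips} slip events in {steps * dt:.0f} s of driving")
```

With a friction law in which the static threshold also grows with sticking time, as in the adhesion-growth picture above, raising the driving velocity shrinks the stick phase and eventually yields continuous sliding.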
The track width data correlate completely with the stress-strain data, in that the track width decreases with an increase in surface hardening. The friction coefficient is lowest with the surface-active liquid during the sliding process, mainly because the shear resistance at the interface is lower with the surface-active liquid than with the surface oxide, which is not directly related to the mechanical properties of the substrate. Careful consideration should be given to these surface effects when a certain type of surface film is present, since plastic deformation must be strictly restricted in the friction pairs of many machines. These surface effects also contribute to the wear between lubricated surfaces, as described in the following section.

Wear between lubricated surfaces

It may be more important to reduce wear than friction in the boundary lubrication regime, as wear can cause the breakdown of mechanical parts directly. Wear is also the core factor in understanding the running-in process, which is inevitable in contacting parts [62]. However, wear in boundary lubrication is a complex process, including both chemical attack and physical damage under many different conditions. Any slight change in the operating conditions may change the nature of the wear process [63]. Our understanding of the wear process is much poorer than our understanding of the friction process, and most studies on wear are largely empirical. When considering the friction force in the boundary lubrication regime, it can generally be concluded that the breakdown ratio of the boundary film, or the adhesion strength between the surfaces, determines the friction force. However, when considering wear, the relationship between the adhesive junction and wear is much more complicated. When a junction is formed between the sliding surfaces, shearing may occur in several different ways according to the strengths of the two substrates and the junction. If the junction is weaker than the two substrates, shearing will occur at the actual interface where the junction is formed, in which case the wear will be very small although the friction may be high. If the weakest point lies in one of the substrates, shearing will often occur within the bulk of the weaker substrate, in which case there will be considerable removal of the softer material. If two similar substrates are used for boundary lubrication, the process of deformation and welding will work-harden them and appreciably increase the shear strength of the junction, so that shearing will rarely occur at the interface but rather within the bulk of the substrates, leading to heavy surface damage on both sides. In fact, any adhesion between surfaces will cause an increase in friction, but not necessarily an increase in wear. Based on previous experimental data on wear rates under different conditions, Archard [64] demonstrated theoretically that the wear rate is proportional to the load and inversely proportional to the hardness of the substrate material, and that the specific wear rate, K, is closely related to the probability of wear when an adhesive junction is formed (1953). Under boundary lubrication conditions, the boundary film decreases the specific wear rate significantly. For metal surfaces under good boundary lubrication conditions, the value of K is as low as 10^-6 to 10^-7, under which condition the adhesion between the surfaces is appreciably reduced and most of the adhesive junctions do not produce wear.
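As a worked illustration of the Archard relation just described, the wear volume can be written as V = K·W·s/H, where W is the load, s the sliding distance, and H the hardness of the softer material. In the sketch below, the load, sliding distance, and hardness are arbitrary illustration values; the two lower K values are those quoted above for well-lubricated metal surfaces, while the higher value used for comparison with poor lubrication is only an assumption.

```python
# Archard relation V = K * W * s / H with illustrative numbers.
W = 100.0     # normal load, N (assumed)
s = 1000.0    # sliding distance, m (assumed)
H = 2.0e9     # hardness of the softer metal, Pa (~2 GPa, assumed)

cases = [("poorly lubricated (assumed K)", 1e-4),
         ("good boundary lubrication",     1e-6),
         ("good boundary lubrication",     1e-7)]

for label, K in cases:
    V = K * W * s / H                     # worn volume, m^3
    print(f"K = {K:.0e} ({label}): V = {V * 1e9:.3f} mm^3")
```

Even with these arbitrary inputs, the three to four orders of magnitude spanned by K dominate the outcome, which is why the effect of the boundary film on the specific wear rate matters far more than modest changes in load or hardness.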
The lubricant films themselves also exhibit a marked difference in their resistance to wear [65]. For example, fatty acid films a few molecular layers thick are more resistant to wear than a cholesterol film of the same thickness. Generally speaking, a film with a better capability of reducing friction is also more efficient at reducing wear. The surface mobility of the film and its ability to re-adsorb rapidly from the bulk of the lubricant are also important factors in wear resistance. Under different boundary lubrication conditions, the wear process may be influenced by many other factors beyond adhesion, such as chemical reactions, oxidation, corrosion, fatigue, and scuffing [66]. The specific wear rate, K, can change by several orders of magnitude according to these factors. In 1971, Beerbower [66] conducted a comprehensive survey of the influencing factors, models, and mechanisms of wear in the boundary lubrication regime, and drew a diagram illustrating the various modes of wear and lubrication mechanisms according to the specific film thickness and specific wear rate. This diagram, shown in Fig. 10, is based on a large number of experiments under the relevant conditions, and is very instructive for engineering and for further studies of the wear mechanism.

Summary of the adhesion between lubricated surfaces

It is generally accepted that the mechanism of boundary lubrication between macroscopic sliding surfaces is the welding and shearing of rough opposing surfaces caused by a local breakdown of the boundary film, and this has been the standard framework for research on boundary lubrication. A number of complicated processes during boundary lubrication, such as stick-slip and wear, can be understood within this scenario. However, there is no general agreement on how to precisely predict the breakdown ratio α of the boundary film in a given system, since values differing by more than an order of magnitude have been reported for different systems [44]. In fact, the adhesion between surfaces is a dynamic process, as welding and shearing always occur during sliding. Other factors such as the temperature and pressure distributions, as well as the behavior of the boundary film, should also be considered to achieve a more in-depth understanding of the boundary lubrication process.

The behavior of adsorption film in boundary lubrication

Owing to the persuasive studies conducted during the first half of the 20th century, the important role of an adsorbed thin film as a lubricant in boundary lubrication became generally accepted, whereas the behavior of the boundary film itself during the sliding process was not well understood and soon afterward became a focus of research. Methods based on mechanics, thermodynamics, and other considerations were introduced to describe the boundary film before sufficient experimental techniques became available to directly probe the behavior of boundary film molecules at the nanoscale.

The dynamic properties of adsorption film

Based on Bowden's adhesion model for boundary lubrication, Kingsbury [67] proposed a thermodynamic method (1958) to determine the breakdown ratio α of the boundary film, which was treated as related to the ratio of the time it takes a single counterface asperity to traverse the distance between two adsorbed molecules on the surface to the time a molecule remains in the adsorbed state according to Frenkel's formula.
In its simplest form, this relationship can be expressed as

α = (Z / v) / [t0 exp(E / RT)]    (2)

where Z is the distance between the adsorbed molecules, v is the velocity of the relative motion, t0 is the period of vibration of the molecules, on the order of about 10^-14 s, E is the adsorption heat, R is the gas constant, and T is the absolute temperature. The friction coefficient in the boundary lubrication regime can then be calculated by substituting α from Eq. (2) into Eq. (1); it is influenced mainly by v and T, which are known to be important parameters in the friction process. Since Eq. (2) was derived from gas-phase properties, which do not necessarily apply accurately to liquid or solid molecules, the calculation of the friction coefficient on this basis is quite approximate. This model has also been adopted and modified, for example, by Rowe [68] for calculating the wear rate of rubbing bodies, and by Wang and Huang [69] for calculating the friction coefficient in boundary lubrication, and these modifications were verified experimentally to some extent under their respective sliding conditions.

The behavior of adsorbed lubricant molecules under load

A speculation regarding the pressure exerted on the lubricant film was made by Adamson [70], who was inspired by the difficulty of Bowden's adhesion model in explaining the electrical conductivity behavior in boundary lubrication (1960). Under small loads, the conductivity is very small, as would be expected from Bowden's model, because of the small value of A1 and the high electrical resistance of the film. However, at higher loads at which the lubrication performance is not diminished, the electrical conductivity increases to about the same level as for unlubricated surfaces. From Adamson's viewpoint, the boundary film itself is under mechanical pressure, as the load is also supported by the boundary film. The pressure exerted on the boundary film most probably arises physically through deformation of the uneven substrate surface, where the local pressure is insufficient to displace the film and form direct contact between the surfaces, but is sufficient to place a certain constriction, and hence pressure, upon it. As the real contact area is only a small fraction of the total area of apparent contact, only occasional small patches of the film undergo mechanical pressure. As a result, there is a tendency for pressurized film molecules to escape from the pressurized region to adjacent normal regions, as illustrated in Fig. 11. At the same time, the pressurized film may consist of molecules that are more or less lying flat. It is generally true that part of the boundary film is put under pressure, which changes the adsorption state of the film molecules during the sliding process; this should be considered and, where possible, correlated with the boundary lubrication properties.

The behavior of lubricant molecules when sliding

As early as the work of Hardy, the friction coefficient in boundary lubrication was found to decrease as the molecular weight of the lubricant molecules increases. An illustrative model called the cobblestone model was created to correlate this phenomenon with the behavior of the lubricant molecules during sliding; it was first proposed by Tabor [71] (1981) and developed further by Homola [72], and has been efficient at explaining the interfacial friction at low loads with little or no wear.
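Before turning to the cobblestone picture in detail, the Kingsbury-type estimate can be made concrete with rough numbers. The sketch below takes Eq. (2) as the simple ratio of the asperity passage time to the Frenkel residence time and assumes that Eq. (1) reduces to a two-term mixing rule between a bare-contact friction coefficient and a film-covered one; the adsorption heat, molecular spacing, and the two limiting friction coefficients are all hypothetical illustration values, not data from any cited study.

```python
import math

Z = 5e-10      # distance between adsorbed molecules, m (assumed ~0.5 nm)
t0 = 1e-14     # molecular vibration period, s
E = 40e3       # adsorption heat, J/mol (assumed)
R = 8.314      # gas constant, J/(mol K)
MU_DRY, MU_FILM = 0.5, 0.1   # assumed friction coefficients of bare and film-covered contact

def alpha(v, T):
    """Breakdown ratio of the boundary film from the ratio form of Eq. (2), capped at 1."""
    residence = t0 * math.exp(E / (R * T))   # Frenkel residence time of an adsorbed molecule
    return min((Z / v) / residence, 1.0)

for T in (300.0, 400.0):
    for v in (0.01, 1.0, 100.0):
        a = alpha(v, T)
        mu = a * MU_DRY + (1.0 - a) * MU_FILM   # assumed two-term mixing rule for Eq. (1)
        print(f"T = {T:5.0f} K, v = {v:7.2f} m/s: alpha = {a:.4f}, mu = {mu:.3f}")
```

The output merely illustrates how strongly α, and hence the calculated friction coefficient, responds to the sliding velocity and temperature through the exponential residence time, which is the qualitative point of the model.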
The cobblestone model can be illustrated as pushing a cart over a road of cobblestones, in which the wheels of the cart (representing the molecules of the lubricating film) must be made to roll over the cobblestones (representing the molecules of the lower surface) before the cart can move. For the cart, the downward force of gravity represents the attractive intermolecular forces between the two material surfaces. When at rest, the wheels find grooves between the cobblestones where they sit in potential energy minima, and thus the cart is in stable mechanical equilibrium. A lateral force is required to raise the wheels against gravity to initiate motion. Pushing is needed for the motion to continue, during which energy is dissipated by the liberation of heat each time a wheel hits the next cobblestone. In this model, both the externally applied load and the attractive intermolecular forces are considered to have an effect on the boundary lubrication properties. It is intuitively clear that the larger the liquid molecules, the smoother the surfaces appear to the wheels, and hence the easier it is for them to roll over the surfaces. This is a good descriptive model for understanding the chain-length effect under certain boundary lubrication conditions, especially with spherical lubricant molecules. Furthermore, the modern concept of energy dissipation can also be incorporated into such a model [73].

Summary of the behavior of adsorption film in boundary lubrication

The above studies and speculations regarding the behavior of the thin layer of adsorbed lubricant molecules were endeavors aimed at understanding boundary lubrication at the molecular level. Although very crude, with many unsatisfactory features, these attempts at interpreting boundary lubrication in different ways each fit certain aspects of the observed boundary lubrication properties; together, they clearly show the complicated nature of boundary lubrication and the need for additional cross-disciplinary studies. These models also represent the transition from a macroscopic view of boundary lubrication to the microscopic or nanoscale perspective, which became the focus of later scientists, as discussed in the following section.

Boundary lubrication at the nanoscale

The most significant development during the past several decades for boundary lubrication came from experiments conducted to acquire information regarding the lubrication properties at the molecular level. As illustrated in Section 3, at most interfaces of technological relevance, contact occurs at numerous asperities. Consequently, the importance of investigating single asperity contacts in studies on the fundamental tribological properties of an interface has long been recognized [74,75]. The emergence and improvement of the AFM and SFA have allowed systematic investigations into these interfacial problems with sufficiently high resolution. Microscopic information on boundary lubrication is important not only from a fundamental perspective but also because of the increasing demand for understanding the lubrication behavior of ultra-thin lubricant films on smooth solid surfaces, especially in high-density magnetic recording technology and micro/nanoelectromechanical systems (MEMS/NEMS) [76].

Experimental techniques for investigating boundary lubrication at the nanoscale

Studies on boundary lubrication at the nanoscale are quite different from those on conventional tribology or macrotribology. Experiments are conducted with a relatively small mass under lightly loaded conditions.
Under these conditions, negligible wear occurs, and the surface properties dominate the boundary lubrication performance. Interfacial phenomena at atomically smooth surfaces involving ultra-thin films (as thin as 1 nm, which is close to the size of a lubricant molecule) are a main concern [77]. As the relative sliding velocity between the surfaces is very small, the sliding under lubricated conditions consistently falls into the boundary lubrication state, far from the hydrodynamic lubrication regime. The use and skilled operation of special experimental equipment are of vital importance in acquiring accurate boundary lubrication information at the nanoscale. The AFM and SFA are the main facilities used in the study of boundary lubrication at the nanoscale. The AFM was first developed by Binnig et al. [78] in 1986 based on the design of the scanning tunneling microscope (STM) [79], and was used to measure the ultra-small forces present between the AFM tip and the sample surface; its principle is shown in Fig. 12. Later, the AFM was modified to measure friction force as well, a version that is generally called a friction force microscope (FFM) or lateral force microscope (LFM) [74,80] and is used in studies of tribology at the nanoscale. The normal and lateral forces are usually measured by detecting the bending and torsional deflection of the cantilever supporting the tip. These two types of deformation can be recorded simultaneously by processing the signals of a four-quadrant photodetector irradiated by a laser beam reflected from the back of the cantilever. The spring constants of the cantilever and the sensitivity of the photodetector should be determined a priori to derive the load and friction force from the measured deformations. The adhesion force can also be obtained by measuring the force-distance curve [81], in which the cantilever bending is plotted versus the sample displacement, and by recording the largest tensile load on the cantilever during retraction [82]. The SFA, first developed in the late 1960s [83], is commonly employed to study both the static and dynamic properties of molecularly thin liquid films sandwiched between smooth surfaces [84]. The SFA [85] consists of a pair of atomically smooth sheets (usually mica) mounted on crossed cylindrical lenses that can be pressed into contact; its principle is shown in Fig. 13. Actuators attached to the surface supports are used to exert normal or shear forces and to control the spacing between the surfaces down to the angstrom level. The contact area and surface separation are usually measured by an optical interference method with angstrom precision. Piezo bimorphs are attached to one of the surfaces to impart a lateral displacement or oscillation for friction and viscosity measurements. The atomic flatness of the surfaces is a necessity for achieving atomic resolution. The materials usually used in studies on nanotribology include mica [86] and graphite [74], which offer large areas of atomically smooth surface obtainable by peeling, owing to the layered structure of these materials, and silicon and silicon-based materials such as silica [15], which are obtained by polishing a silicon surface to atomic smoothness and by related post-treatment processes. Gold, an inert metal, can be prepared with an atomically smooth surface, either by annealing [87] or by sputtering/evaporation [33]. Other metals such as steel [23] can be highly polished and are thus also candidates for these types of studies.
In particular, the material used in an SFA experiment should be a flexible thin layer that can be mounted on the cylindrical lens and should be optically transparent, which makes mica the best choice. On such atomically or relatively smooth surfaces, with roughness at the nanometer or sub-nanometer scale, LB films and SAMs can be prepared in a more ordered and densely packed manner, which is ideal for studies on boundary lubrication at the nanoscale.

Phase transition of a confined thin layer of lubricant and related boundary lubrication properties

When confined as a thin layer with a thickness close to a small multiple of the molecular diameter, the behavior of a liquid can no longer be described, even qualitatively, in terms of its bulk properties. With the high-precision separation control of the SFA, the film shows a stepwise, repulsive thinning with increasing pressure, in which the liquid film is squeezed out of the contact area one molecular layer at a time. This layering transition was first observed by Gee et al. [88] (1990), and has since been explored extensively both experimentally [89] and theoretically [90]. Furthermore, the effective viscosity of the thin film may be several orders of magnitude larger than the bulk value, exhibiting solid-like behavior. If this occurs, the molecular configuration during the sliding process will be complicated when one of the surfaces is made to move laterally or to shear past the other. Under such conditions, the film alternately melts and freezes during the motion, resulting in stick-slip friction, as schematically shown in the lower part of Fig. 13. Stick occurs when the film is solid-like, giving rise to the static friction force, and slip occurs in the shear-induced molten state, giving rise to the kinetic friction force [58]. The stick-slip friction can be shifted to continuous sliding depending on the sliding velocity, temperature, and load, which are the factors influencing the first-order transition between the solid-like and liquid-like states of the film [91]. For example, there is a critical velocity above which the stick-slip turns into continuous sliding. In fact, the phase transition is a much more complicated process, showing several types of ordered structures, in which the parameters driving the dynamic transitions, such as the velocity, temperature, and load, are highly correlated with one another [58]; for more systematic information, refer to the work by Ruths and Israelachvili [92]. The phase transition model for boundary lubrication has been widely accepted for this special type of system, as supported by a large number of experiments and computer simulation results [92]. Compared with the stick-slip behavior of a macroscopic friction system, the stick-slip originating from a thin layer of confined film shows certain similarities. For example, a transition from the stick-slip friction regime to continuous sliding can be induced by an increase in velocity. However, the mechanisms in the two systems are completely different. Under macroscopic friction conditions, the stick-slip phenomenon is mainly related to the breakdown ratio of the boundary film or the adhesion strength, whereas the stick-slip that occurs during sliding lubricated by a film of nanometer thickness is mainly caused by the solid-like to liquid-like transition of the film.
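In contrast to the adhesion-growth sketch given earlier for macroscopic stick-slip, the toy model below places the stick-slip mechanism entirely in the film: the slider is stuck only while the film is frozen, and the film refreezes only if it is sheared slowly enough for long enough. The overdamped dynamics and every parameter value are hypothetical simplifications for illustration; this is not a molecular model of the confined film.

```python
def count_melt_events(v_drive, t_end=100.0, dt=1e-3, k=10.0, gamma=1.0,
                      F_yield=2.0, F_melt=0.5, v_freeze=0.05, tau_freeze=1.0):
    """Count how often the confined film melts while driven through a spring.

    The film is either frozen (slider stuck until the spring force exceeds
    F_yield) or molten (overdamped sliding against a low friction force
    F_melt).  The film refreezes only if the sliding velocity stays below
    v_freeze for longer than the nucleation time tau_freeze.
    """
    x, frozen, t_slow, events = 0.0, True, 0.0, 0
    for i in range(int(t_end / dt)):
        f_spring = k * (v_drive * i * dt - x)
        if frozen:
            if f_spring > F_yield:               # frozen film yields -> melts, slip starts
                frozen, t_slow, events = False, 0.0, events + 1
        else:
            v = max((f_spring - F_melt) / gamma, 0.0)   # overdamped sliding on the molten film
            x += v * dt
            t_slow = t_slow + dt if v < v_freeze else 0.0
            if t_slow >= tau_freeze:             # film stayed slow long enough -> refreezes
                frozen = True
    return events

for vd in (0.02, 0.2, 2.0):
    n = count_melt_events(vd)
    print(f"v_drive = {vd:4.2f}: {n} melt events "
          f"({'stick-slip' if n > 1 else 'continuous sliding after start-up'})")
```

Here the critical velocity is set by v_freeze, the drive speed below which the sheared film can slow down and renucleate, rather than by any growth of adhesion between the solid surfaces, which is precisely the qualitative distinction drawn above between the two mechanisms.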
Hydration lubrication

Since the boundary lubrication properties of a confined thin layer of organic lubricant were intensively studied using the SFA, a question naturally arose regarding the behavior of a thin layer of water in such a system, which led to the theory of hydration lubrication. The nature of hydration lubrication is closely tied to the unique properties of water [93]. For most nonassociating liquids, such as organics and oils, the solid phase is denser than the liquid phase, so the liquid has a tendency to become solid-like, with a much higher viscosity, when confined between two surfaces sliding past each other across films that are only a few monolayers thick. In contrast, the liquid phase of water is uniquely denser than the solid phase, so water retains a liquid-like fluidity even when confined between solid surfaces in films as thin as a single layer, as the densification of the confined thin water film tends to suppress its tendency to solidify. A hydration layer, in which water molecules are tightly bound to ions or ionized surfaces in an aqueous medium because of their large dipoles, can be viewed as a special form of an extremely thin water film. The hydration of charges leads to a strong repulsion when they approach each other to within a few nanometers or less, arising from the reluctance of the ions or surfaces to shed their hydration sheath. The shear properties of such a system were first studied by Raviv and Klein [94] in 2002, and a remarkably low friction coefficient of no greater than 0.0002 was achieved at a mean contact pressure of 0.3 MPa. During the sliding process, as the shear rate is much lower than the relaxation rate of the hydration shells, the hydrated ions respond to shearing in a liquid-like fashion [95]. The ultra-low friction coefficient arises from the hydration lubrication mechanism, in which the hydration layer sustains a high normal pressure while at the same time behaving as a fluid under shearing, as shown in Fig. 14. This type of hydration lubrication mechanism was later found to operate in aqueous systems with charged ions, such as hydrated polymer brushes [96], amphiphilic surfactants [97], and liposomes [98]. Lubrication in biology is also expected to be related, at least in part, to the hydration lubrication mechanism [99], in addition to other frictional dissipation processes. The hydration lubrication mechanism, research into which has been led by Klein's group during the past decade, offers a new perspective for understanding and controlling the boundary lubrication process in an aqueous medium. The critical point here is that the hydration shells surrounding charged, zwitterionic, or polar groups are tenaciously attached and are thus able to support large normal stresses without being squeezed out; at the same time, they are also fluid, so that the shear stresses can be very low when they slide past each other or past surfaces. The appeal of this system comes from the fact that an ultra-low friction coefficient can be achieved in such a simple system, using a liquid found in everyday life. Problems remain, as the theory is currently unable to systematically identify and rank charged groups, or their combinations, in terms of their hydration lubrication efficiency [100].
When salt ions are combined with other lubrication additives, it is not easy to distinguish whether the lubrication performance comes from hydration lubrication or from another mechanism, such as the influence of the salt ions on the adsorption state [101] or conformation [102] of an organic adsorption film. Another challenge is whether and how such a system can be applied to normal macroscopic boundary lubrication experiments, or even to real machines, as a way to dramatically reduce the amount of energy dissipation.

Boundary lubrication in single asperity contact at the nanoscale

Using the AFM and its friction-measuring variant, the LFM, the boundary lubrication properties of a single asperity contact at the nanoscale can be investigated between a nanoscale tip and a smooth surface coated with SAMs. For well-bonded SAMs on a substrate, the friction process is much like a tip sliding on top of molecular springs or a brush [103], which is compliant and can undergo reorientation and compression under load. The friction and wear in such a process are usually very low until a critical load is reached, at which point the SAMs are worn away by the tip (Fig. 15). Interesting results have been achieved regarding the effect of chain length on friction, which is one of the most fundamental questions regarding the nanotribological properties of alkylsilane SAMs.

Fig. 15 Dependence of friction on load for a C18 alkylsilane monolayer on mica, in which four distinct regimes exist: (Ⅰ) elastic regime; (Ⅱ) distortion and displacement of the SAMs; (Ⅲ) tip in contact with the mica substrate; (Ⅳ) wear of the mica substrate [104].

Salmeron's group [104] conducted early studies on the chain-length effect using alkylsilane chains of different lengths on mica probed by a Si3N4 tip (1996). The general results show that the friction decreases with an increase in the chain length. As the load increases beyond 300 nN, the plot of the friction response shows distinct regimes corresponding to the elastic response of the coating, plastic deformation and displacement of the coating, contact of the tip with the substrate, and substrate damage, as shown in Fig. 15. The main reason was concluded to be that the increased molecular ordering, stabilized by van der Waals attractions as the chain length increases, promotes a reduction in friction. Additional studies were carried out on the chain-length effect on friction, in which similar [105,106] or different [107] results were reported. Although the chain-length effect on friction shows a similar trend at the macroscopic scale and the nanoscale, the mechanisms are different, as molecular dynamics simulations have shown that the poor lubrication performance of shorter-chain alkanes at the macroscopic scale may lie in their inability to effectively separate the asperity contact between the sliding surfaces [108]. Several other characteristics of SAMs have been investigated with regard to their boundary lubrication properties. Loosely packed and disordered SAMs were shown to exhibit higher interfacial friction than well-packed, highly ordered ones, which is attributed to the stronger interaction with the tip through an enhanced contact area, and to the increased van der Waals interactions of liquid-like SAMs compared with crystalline SAMs [14]. The terminal functionality was also found to influence the friction at the nanoscale. For example, the nanoscale friction of SAMs with the same chain length was measured to follow the order −COOH > −OH > −CH3 [12].
The higher friction of the −COOH functionality at the nanoscale was explained by the formation of hydrogen bonds between the contacting functional groups. It was also reported that mixed SAMs composed of two molecules with different chain lengths may have much lower friction than pure SAMs, which is caused by the structure of a highly ordered under-layer and a disordered, mobile outer layer or canopy in the mixed SAMs [109]. A question quickly arises regarding how these factors influence the macroscopic boundary lubrication properties when the scale is changed by several orders of magnitude.

The application of boundary lubrication at the nanoscale

Generally speaking, studies on boundary lubrication at the nanoscale are far from the working conditions of machines in the boundary lubrication regime. However, there are some areas of industry in which SAMs at the nanoscale can act as an efficient boundary film. The first is MEMS devices, which are utilized in a large number of successful products such as accelerometers, gyroscopes, and pressure sensors. The study of the nanotribology of SAMs is in fact closely related to the development of MEMS, where the high surface-to-volume ratio makes interfacial interactions a main factor in the wear and lifetime of such devices [110,111]. The materials used in MEMS are mainly silicon based, and the fabrication process was developed from the microelectronics industry, so mass production can be achieved. The release process used to free the movable parts always leads to undesirable adhesion, the effect of which is generally referred to as stiction. SAMs are ideal for reducing stiction, as the head groups can be chosen to attach to the oxidized silicon substrate, while the hydrophobic tail groups greatly reduce the adhesion and interfacial friction. SAMs have been used to control stiction and friction in the contacting parts to achieve better operational performance [112]. The ability of SAMs to withstand processing environments, such as high temperatures and mechanical stresses from contact, as well as their packing densities, are also important concerns in the design of SAMs for MEMS products [76]. Another successful example is the hard disk drive, which has become a necessary device in our daily lives. A disk drive consists of a magnetic recording disk and a slider containing a magnetic recording head to read and write the data. During operation, the disk is rotated by a centrally located motor running at between 7,200 and 15,000 rpm, while the slider flies over the disk, borne by an air bearing within a few nanometers of the surface, in order to detect the magnetic domain orientation. For reliable operation of the slider-disk interface, SAMs of a perfluoropolyether (PFPE) lubricant are usually coated on the disk with a thickness of about 1 nm [113]. With the continual decrease in the head-disk clearance to increase the areal density, there is an increasing probability of lubricant transfer to the slider even in the absence of any head-disk contact, which is a severe hazard to the stable operation of the system and must therefore be avoided. The transfer mechanism is believed to involve lubricant evaporation from the disk surface and condensation onto the slider, a process that can be reduced by decreasing the thickness of the coated SAMs, increasing the number of polar hydroxyl end groups per lubricant molecule, increasing the film bonding ratio, or using lubricants with a stiffer backbone [114].
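To get a feel for the mechanical stresses a boundary SAM must survive in a single-asperity contact of the kind probed by AFM, and at contacting asperities in MEMS devices, a simple Hertzian estimate can be made. The tip radius, load, and reduced modulus below are assumed illustration values, and real tip-monolayer contacts deviate from Hertz because of adhesion and the compliance of the organic film, so this is only an order-of-magnitude sketch.

```python
import math

R = 20e-9        # tip radius, m (assumed)
W = 10e-9        # normal load, N (assumed, ~10 nN)
E_star = 5.0e10  # reduced elastic modulus of the tip/substrate pair, Pa (assumed)

a = (3.0 * W * R / (4.0 * E_star)) ** (1.0 / 3.0)   # Hertz contact radius for a sphere on a flat
A = math.pi * a * a                                  # contact area
p_mean = W / A                                       # mean contact pressure

print(f"contact radius a      = {a * 1e9:.2f} nm")
print(f"contact area   A      = {A * 1e18:.2f} nm^2")
print(f"mean pressure  p_mean = {p_mean / 1e9:.2f} GPa")
```

Even a nanonewton-scale load concentrated on a contact of a few square nanometers produces pressures on the order of gigapascals, which is why the load-bearing capability and packing density of the monolayer are such prominent design concerns.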
Summary of boundary lubrication at the nanoscale

Facilitated by the high precision of the AFM and SFA, a number of advances in both theory and application have been achieved over the past several decades. With the arrival of nanotechnology, a new perspective on boundary lubrication at the nanoscale has come into view, and our understanding of boundary lubrication at the molecular level has been expanding rapidly. Ultra-low friction coefficients and wear-free stick-slip are rarely observed in macroscopic boundary lubrication experiments. This trend will continue, owing to the relatively young age of the studies conducted in this area and the rich phenomena that emerge when scaling down to the molecular level. Because many complications of macroscopic systems, such as multi-asperity contact, can be avoided, the relative simplicity of such systems also contributes to the elegance of these theories. However, it may take a great deal of effort before the newly discovered knowledge regarding boundary lubrication at the nanoscale can be applied to the real engineering of macroscopic systems. Connecting boundary lubrication at the nanoscale to the macroscopic scale presents a genuinely exciting and so far largely unexplored area of research. Based on the enormous number of studies conducted on the behavior of the boundary film at the nanoscale, it is time for scientists to reconsider the behavior of the boundary film in macroscopic systems and the related boundary lubrication properties. New methodologies, theories, and experimental designs are needed in such studies to bridge the gap between boundary lubrication at the nanoscale and at the macroscopic scale.

6 Active control of boundary lubrication

6.1 Significance of active control

From the viewpoint of engineering, finding a way to control boundary lubrication is as important as understanding its mechanisms. The goal of active control of boundary lubrication is not only to control the magnitude of friction and wear [23], but also to control their distributions over the contact surfaces, as well as their variations over time [115]. Although lower friction and wear are beneficial in most engineering applications, appropriate levels of friction and wear are preferable in many practical cases. For example, dangerous skidding will occur during high-speed rolling contact if the frictional traction force provided at the contact point is insufficiently high, such as in the roller-race contact of a rolling bearing, the tire-road contact of a passenger car, or the wheel-rail contact of a high-speed train. Bolt connections in a machine may gradually loosen during operation if the static friction coefficient at the screw thread interface is too low. Zero wear is not always ideal either. A deliberate running-in process, with controlled wear at the level of the asperities of the mating surfaces, is beneficial for all subsequent machine operation. In some advanced surface finishing processes, such as lapping and chemical mechanical planarization (CMP), a proper wear rate is definitely required to maintain an acceptable level of productivity [116,117]. In a machinery system, there are numerous contacting interfaces distributed at many locations, and the interfaces at different locations of the machine have different roles and require different friction coefficient values.
Therefore, it is meaningful to be able to finely tune the friction coefficient, not just the contact force, according to the design requirement for the specific function of each interface. For certain motion mechanisms, such as a clutch, a gecko-mimicking motion mechanism [118], or a robot arm joint, a quick switch between smooth motion and reliable dead-locking is required. In the detachment and movement stage, lower kinetic friction and wear are preferable, while higher adhesion and static friction are better during the attachment and sticking stage, which means that the friction should be changeable in a timely manner. Conventional boundary lubrication technologies are unable to fulfill such complicated tasks of controlling friction in magnitude, position, and time. Applying certain types of external fields provides the possibility of achieving active control of boundary lubrication, the progress of which is described below.

Potential controlled boundary lubrication in aqueous solutions

Since the early use of electricity, people have dreamed of modulating friction by means of electricity. In 1875, Edison patented a telegraphy device that utilized electricity to change the friction between metal and paper moistened with bromo chloralum or alcohol [119]. Although he claimed that the friction could be changed by the passage of electricity through the surfaces in contact, in his patent he neither showed experimental results nor explained the possible mechanism of his findings. The first fundamental study on the active control of boundary lubrication was conducted by Bowden and Young [120] in the early 1950s. They introduced the latest potentiostatic technique developed in electrochemistry into the field of tribology, and found that the static coefficient of friction between a platinum rider and a platinum wire in a dilute solution of sulfuric acid depends on the interfacial potential of the platinum electrodes relative to a standard reference electrode. When the interfacial potential is within the region of either hydrogen or oxygen evolution, the static coefficient of friction is lower than at an intermediate potential. The authors attributed the reduction in friction to the adsorption of a hydrogen or oxygen gas film, which plays the role of a boundary lubricant. Based on the progress made in electrical double-layer theory during the 1960s, some scientists tried to explain Bowden's experimental findings in terms of the difference in the repulsive force between the electrical double layers on the rubbing surfaces upon applying an electric potential [121], rather than the change in the boundary film suggested by Bowden. Such speculation on the electrical double-layer repulsive effect on friction was also adopted by Zhu et al. [122] in the early 1990s, and was used to analyze their experimental results obtained with iron/iron and iron oxide/alumina rubbing contacts in Na2SO4 solution. However, their experimental results showed that the effect of the interfacial potential on friction became progressively weaker with increasing normal load when an inorganic electrolyte was used as the lubricant.
On the other hand, when a small amount of carboxylic acid was added to the aqueous solution, adjusting the interfacial potential within the range of −1.3 to 1.2 V versus a saturated calomel electrode (SCE) resulted in a remarkable change in the coefficient of friction, from about 0.15 at a positive potential to 0.31 at a potential more negative than −0.8 V, even under a relatively high contact pressure of 150 MPa. The authors reported that positive interfacial potentials produce iron carboxylate on the surfaces, thereby reducing the friction significantly, whereas negative interfacial potentials give rise to higher friction. During the same period, Brandon et al. [123] also presented similar experimental curves of the static coefficient of friction versus interfacial potential for iron in contact with mild steel in an aqueous solution containing octanoic acid, under a light load, using a tilting electrochemical cell. The authors presumed that the effect of the interfacial potential on friction is the result of electrostatic interactions between the negatively charged octanoate species and the charged contact surfaces. In 2002, Chang et al. [124] found that the electrolysis of water might be the key trigger of the abrupt change in friction coefficient at a sufficiently negative electric potential, because under their test conditions the transition from low to high friction coincided with the onset of electrolysis of the solution as the electrode potential was adjusted. They attributed the observed change in friction to the removal of the laurylsulfonate species by the locally high pH produced by electrolysis at severely negative interfacial potentials; this species may form a good boundary lubricating film under positive or mildly negative potentials. These observations are consistent with previous experimental results for similar aqueous lubrication [120−123]. To clarify which of the above-mentioned mechanisms, i.e., gas film lubrication owing to the electrolysis of water proposed by Bowden and Young [120], electrical double-layer repulsion proposed by Bockris and Argade [121], electrostatic interaction between the charged lubricious species and the contact surfaces proposed by Brandon et al. [123], or cleaning of the surfactant film owing to the increased pH value caused by electrolysis of the aqueous solution proposed by Chang et al. [124], is responsible for the observed potential-dependent change of the friction coefficient in aqueous lubrication, experiments were conducted repeatedly with refined adjustment of the concentration of the sodium dodecyl sulfate (SDS) surfactant and of the applied electrical potential. Meanwhile, an electrochemical QCM technique was used to monitor the adsorption and desorption of the surfactant when the electrical potential was changed [23]. It was revealed that electrolysis of the solution is not a necessary condition for the friction transition to occur if the concentration of the surfactant additive is much lower than its critical micelle concentration (CMC) [23]. As shown in Fig. 16, for a blank solution, the measured friction coefficient under the same load and speed conditions is 0.45 and is independent of the electrical potential. For the low concentration group (SDS concentration < 5 mM), the friction transition occurs within the potential range of −0.4 to 0.05 V, which differs slightly for different concentrations.
It should be noted that, when the electrode potential is in the positive region, a layer of DS− anions is adsorbed on the stainless steel surface, as shown in Fig. 17, and the macroscopic friction is relatively low. On the other hand, when the potential is lower than −0.4 V, the DS− anions desorb from the surface of the stainless steel, as shown in Fig. 17, and the macroscopic friction is relatively high. For the high concentration group (SDS concentration > 5 mM), however, the potential for the friction transition shifts to the range of −1.2 to −1.0 V, which is close to the equilibrium potential for hydrogen evolution [124], implying that the desorption of DS− anions and the electrolysis of water proceed simultaneously at high SDS concentrations.

Fig. 17 Relationship between the adsorbed mass change of SDS on a stainless steel surface and the controlled potential, for an SDS solution with a concentration of 1 mM, using a negative-going potential scan from 0.2 V to −0.1 V at a scan rate of 5 mV/s [23].

Therefore, it can be concluded that the coefficient of friction can be controlled within the range from the low value in the presence of an adsorbed SDS layer to the high value in the absence of the adsorbed SDS layer, depending on the polarity and magnitude of the charge on the contact surface. This provides a reasonable explanation of the potential-dependent friction phenomena from the standpoint of boundary lubrication. Moreover, it was demonstrated that the friction coefficient can follow the change of the applied potential in any prescribed form, such as a square, sinusoidal, or triangular wave, or under feedback control [125,126]. The response time of the friction to the potential is determined by the charge/discharge time and the mass transfer of the surfactant in the tribosystem. Local friction control is also realizable by separating the contact surface into several zones with insulating layers and applying different potential changes in the different zones [115], or by using a bipolar technique with a uniform electrode.

Potential controlled boundary lubrication in non-aqueous solutions

The tunability of the boundary lubrication of aqueous surfactant solutions described above is attractive because of the feasibility of smarter friction control. However, the application of aqueous lubrication in industry is limited by its narrow temperature range, evaporation, and corrosiveness toward most metallic engineering materials, although it is dominant in biological systems. In recent years, an analogous potential effect on boundary lubrication has been found in non-aqueous systems. The primary difficulty in extending the potential-controlled friction technique to non-aqueous lubrication lies in the high electrical resistivity of most lubricating oils, which makes the electrochemical techniques for monitoring and maintaining electrode potentials that are suitable for aqueous solutions inapplicable to oil lubrication. To overcome this problem, Zhu et al. [127] selected some polar fluids, including propylene carbonate (PC), tetrahydrofuran (THF), diethyl adipate (DEA), and butylbenzene (BB), as model base lubricants, and mixed them with three types of additives, namely ferrocene, dibenzyl disulfide, and carboxylic acids of different chain lengths, respectively. The authors also added supporting electrolytes to increase the conductivity of the non-aqueous fluids.
Their experimental results indicate that the friction coefficient of an iron-on-iron contact maintains a lower value in the negative potential range and increases by 50% as the electrode potential is shifted from the rest potential of about 0.1 V (SCE) to a positive potential of 1.5 V when lubricated with a 0.05 wt% octadecanoic acid solution in a PC-based fluid. The tendency of the change in friction with the electrode potential under this non-aqueous lubrication condition, however, is contrary to what they reported under aqueous lubrication conditions [122]. Because no experimental data on the adsorption of the boundary film at different electrode potentials are provided in Refs. [122,123,127,128], the inconsistency of the potential-dependent lubrication behavior between the aqueous and non-aqueous carboxylic acid solutions cannot be satisfactorily explained solely by the effect of the electrode potential on the adsorption of carboxylic acid on iron surfaces. Yang et al. [129] chose the SDS surfactant in pure propylene carbonate as a model lubricant to investigate the effect of potential on friction. Propylene carbonate is a polar aprotic solvent and is frequently used as a high-permittivity component of electrolytes in lithium batteries. Because of its high dielectric constant (ε = 65.0 at 25 °C), many types of inorganic and organic salts, including the SDS surfactant, can be dissolved and dissociated in PC. No supporting electrolytes other than the SDS surfactant were added to the solution, to protect the SDS boundary film from possible effects of other ions. Water was also excluded from the solution, suppressing the effect of water on SDS aggregate formation and any hydration effect on boundary lubrication. The boundary lubrication properties of the SDS/PC solution within the potential range of −1.5 V to +1.5 V versus Ag/AgCl were explored using a ball-on-disk friction tester. Meanwhile, the adsorbed mass of the SDS surfactant on a stainless steel surface within the same potential range was measured using an electrochemical quartz crystal microbalance with dissipation monitoring (EC-QCM-D). The experimental results showed potential-dependent changes in the boundary lubrication performance of the propylene carbonate solution. When the applied potential was positive, both the friction and the wear of the tested stainless steels were relatively low. As the potential was shifted to the negative regime, the friction coefficient increased by 100% or more, depending on the load condition and the hardness of the stainless steels. The experiments verified that the changes in the friction coefficient coincide with the desorption of the SDS film, a mechanism consistent with that found in the aqueous SDS solution. When the SDS surfactant was replaced with the ionic liquids (ILs) 1-octyl-3-methylimidazolium tetrafluoroborate ([OMIm]BF4), 1-octyl-3-methylimidazolium hexafluorophosphate ([OMIm]PF6), and 1-decyl-3-methylimidazolium hexafluorophosphate ([DMIm]PF6), a similar effect of the potential on boundary lubrication was found [129], suggesting that the adsorption of ions contributes greatly to the boundary lubrication by the IL additives, and that the boundary lubricating ability can be maximized by applying a proper electrical potential to the steel surface. This mechanism even applies to pure ILs as lubricants, with different types of ions adsorbed on the friction pair at different surface potentials [130].
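Across the aqueous and non-aqueous systems described above, the common picture is that the friction coefficient tracks the adsorbed coverage of the anionic additive, which in turn responds to the electrode potential. The sketch below encodes that picture with assumed first-order Langmuir-type adsorption/desorption kinetics whose rate constants switch with the sign of a square-wave potential; the friction levels, rate constants, and switching period are hypothetical, and in a real tribosystem the response time would be set by the charge/discharge and mass-transfer processes mentioned earlier.

```python
MU_BARE, MU_FILM = 0.45, 0.15    # friction coefficient without / with the adsorbed layer (assumed)
K_ADS, K_DES = 2.0, 5.0          # adsorption / desorption rate constants, 1/s (assumed)
PERIOD = 10.0                    # square-wave period of the applied potential, s (assumed)
dt, t_end = 0.01, 30.0

theta = 0.0                      # surfactant coverage, starting from a bare surface
half = int(PERIOD / 2.0 / dt)    # steps per half-cycle

for i in range(int(t_end / dt)):
    t = i * dt
    positive = (t % PERIOD) < PERIOD / 2.0                     # applied square-wave potential
    rate = K_ADS * (1.0 - theta) if positive else -K_DES * theta
    theta += rate * dt                                         # first-order Langmuir-type kinetics
    mu = MU_BARE - (MU_BARE - MU_FILM) * theta                 # friction tracks the coverage
    if (i + 1) % half == 0:                                    # report at the end of each half-cycle
        print(f"t = {t + dt:5.1f} s  potential {'(+)' if positive else '(-)'}"
              f"  coverage = {theta:.3f}  mu = {mu:.2f}")
```

The friction simply follows the potential waveform once each half-cycle is long compared with the adsorption and desorption time constants, which is the regime in which square-wave, sinusoidal, or feedback control of friction is straightforward.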
Propylene carbonate is not the only candidate base fluid suitable for potential-dependent boundary lubrication. Other polar ester oils, such as diethyl succinate (DES), can also be effective when mixed with surfactants or ionic liquids. The results are similar to those found in PC solutions. This similarity confirms the mechanism of potential-dependent boundary lubrication for ester-based lubricants. In addition, it is not surprising to expect similar potential-dependent boundary lubrication in a polyelectrolyte (PE)-lubricated tribosystem.

Other approaches for active control of boundary lubrication

One example of electric-field-induced friction reduction and control in PE lubrication was reported by Drummond [131], who studied the interactions between molecularly smooth mica surfaces coated with PE brushes produced by the self-assembly of amphiphilic diblock polystyrene-polyacrylic acid copolymers (PS36-b-PAA125) from an aqueous solution at pH 10. AFM and QCM measurements in a liquid environment were used to confirm the adsorption of a pH-responsive, unstructured polymer layer, and the normal and lateral forces under electric field conditions were measured using the SFA. The adsorption of the poorly solvated hydrophobic PS blocks (the anchor) is driven by dispersion forces. In contrast, the water-soluble PAA moieties (the buoy) are negatively charged at a pH above neutral and are electrostatically repelled from the mica surfaces. In contrast to the direct-current potentials applied in the work described in Sections 6.2 and 6.3, the author applied an alternating voltage between the lubricated surfaces. A remarkable friction reduction of up to 80% was observed with an increase in the amplitude of the alternating voltage. Furthermore, the friction reduction depends not only on the amplitude but also on the frequency of the alternating voltage. A substantial reduction in friction is observed when the frequency is lower than about 1 kHz, but no effect is detectable at higher frequencies; the change occurs within a narrow frequency window around 500 Hz, below which the effect is quite substantial and above which it is progressively diminished. The author described the frequency dependence of the friction in terms of dynamic changes in the conformation and ionization of the PE under the electric field. Recently, Strelcov et al. reported that boundary lubrication properties can be actively controlled between an AFM tip and a salt surface when the relative humidity is above a certain threshold and a sufficiently strong electric field is applied [132]. A plausible mechanism was proposed in which an ordered electric double-layer structure, formed as a result of water condensation and the dissolution of polarizable ions, accounts for the reduction in friction force. Another way to control boundary lubrication is to modulate the inherent surface forces, including short-range van der Waals forces and the long-range Casimir force, between attractive and repulsive values by means of an external field. Since the Casimir force depends on the magnetic permeability of the interacting objects, which can be controlled by an external magnetic field, the Casimir force can in principle be adjusted by such a field. However, for most magnetic materials, with their low-frequency magnetic responses, the adjustment range is insignificant. Recently, Ma et al.
proposed a superparamagnetic metamaterial (MMM) to realize a nontrivial high-frequency permeability as well as a small permittivity, so as to obtain a repulsive Casimir force between the MMM plate and a metal plate [133]. The authors predicted that the Casimir force can be adjusted continuously between repulsion and attraction by an external magnetic field. Magnetic fluid (or magnetorheological fluid), a colloidal suspension of single-domain ferromagnetic particles dispersed in a carrier liquid, is another system that can be actively controlled by an external magnetic field. Using a magnetic fluid as the lubricant, the friction coefficient in the boundary lubrication regime can be magnified four-fold when a certain external magnetic field is applied, as reported by Hu et al. [134] using a four-ball tribological tester. Using a magnetic surface, Chen et al. designed a lubricating system in which the magnetic fluid exhibits better lubricating properties with an applied magnetic field than without one, by forming a magnetically arrayed film on the surface [135]. UV light, as a type of electromagnetic wave, has also been used to achieve active control of boundary lubrication. In such a system, a chiral-nematic polymer-network coating containing photosensitive azobenzene molecules acts as the boundary film; its conformation is sensitive to UV irradiation, which accounts for the light-controlled boundary lubrication properties. There is also evidence that boundary lubrication properties can be regulated by changing the solvent, electrolyte, pH, and temperature of the solution, as reviewed in Ref. [136]; however, compared with the previously discussed methods, these cannot be deemed active control of boundary lubrication, because fast yet reversible control of the boundary lubrication properties is not easy to achieve.

Summary of the active control of boundary lubrication

Active control represents the future of boundary lubrication, although studies in this area are still at an early stage and the technology is far from industrial application. The achievements in this area mainly consist of revealing the active control mechanisms in different systems, as comprehensively reviewed in this section. Control of boundary lubrication by an applied potential remains the most convenient and most intensively investigated type of active control [136], the mechanism of which is summarized in Fig. 18. Generally speaking, when an anionic surfactant is used as the lubricating additive, the friction coefficient is low at a relatively positive applied potential, where the adsorption film provides lubrication, and is high at a relatively negative applied potential, where the adsorption film is desorbed by electrostatic repulsion. The critical potential for the friction transition varies with the surfactant concentration, as the adsorbed amount, the adsorption structure, and the response of the adsorption film to the applied potential all change, as illustrated in Fig. 18. This mechanism provides a model lubricating system, using water or oil as the base lubricant with surfactant or ionic liquid additives, whose boundary lubrication behavior can be remarkably controlled by the applied surface potential. New methods for the active control of boundary lubrication are also emerging, such as boundary lubricating systems based on polyelectrolytes, pure ILs, a salt surface in a vapor environment, photosensitive polymer networks, or magnetic fluids, each of which has its own unique response to an external field.
Fig. 18 Mechanism of potential-controlled friction with an adsorbed boundary film.

Compared with the enormous effort devoted to illustrating the nature of boundary lubrication, the active control of boundary lubrication is a relatively new area that should be given more attention. The development trends of this new field are further discussed in the conclusion of this review.

Conclusions and remarks

The present review of boundary lubrication by adsorption films cannot include all of the knowledge regarding this vigorous field uncovered by scientists during the past century, but it does provide a view of the main line of leading perspectives and methodologies for revealing the fundamental problems of boundary lubrication. Unlike some areas of natural science, such as electromagnetism, in which the abundant phenomena involved can be fully described by the elegant Maxwell equations, no simple conclusions can be drawn in research on boundary lubrication. Whenever a boundary lubrication model has been established based on an enormous number of experiments on a particular system, strong evidence has emerged indicating that the model is far from satisfactory in fully revealing the nature of boundary lubrication. The complexity of the boundary lubrication system is the main reason for the inherent difficulties in related studies, but it is also part of the field's appeal, attracting the attention of a significant number of researchers ever since boundary lubrication was categorized as an independent lubrication regime. Generally speaking, most research on boundary lubrication has been conducted experimentally, as quantitative prediction of friction and wear in boundary lubrication is difficult to achieve. Thus, our understanding of boundary lubrication, or our ability to model boundary lubrication in theory, depends greatly on the experimental facilities available in the contemporary laboratory. For example, based on the meticulous investigations he conducted using the macroscopic friction tester he developed, Bowden's model dominated the interpretation of boundary lubrication for decades, and a real look at boundary lubrication properties at the nanoscale became possible only after sufficiently precise equipment such as the SFA and AFM was invented. Thanks to the proliferation of modern techniques available not only for boundary lubrication studies but also for characterizing the properties of the boundary film, correlating the boundary lubrication performance with the behavior of the boundary film during the sliding process has become an efficient protocol for revealing more detailed aspects of the boundary lubrication process. One central topic has dominated past discussions of the nature of boundary lubrication, and will likely remain an area of focus in future research: how to link the macroscopic boundary lubrication properties with the underlying processes at the nanoscale. As early as the first study on boundary lubrication, conducted by Hardy, the main strategy was to explain the boundary lubrication properties at the macroscale by a speculative model involving mainly the interaction of boundary films at the molecular level. Later, Bowden's model emphasized the role of direct contact and adhesion between lubricated sliding surfaces, and used a parameter α to generalize the breakdown ratio of the boundary film.
Bowden's framework for understanding the process of boundary lubrication is simple yet efficient from a macroscopic perspective, but it ignores the real distribution of the areas where breakdown of the boundary film occurs, as well as the behavior of the boundary film during the sliding process. Fortunately, the behavior of the boundary film during sliding has been intensively investigated by AFM and SFA over the past several decades, through which it can be surveyed directly at the molecular level. However, in the boundary lubrication process at the macroscopic level, the behavior of the boundary film may be related to, but still differs from, that under an ideal single-asperity contact at the nanoscale. Although we have come close, we still have not fully uncovered the complex multi-scale process that occurs during boundary lubrication at the macroscopic level. The lubrication process is macroscopic as a whole, whereas the basic volume of material affected through contact tends to be extremely small, even at the nanoscale, with an uneven distribution of factors such as the contact pressure and temperature, and with these factors changing during the sliding process. Statistical mechanics, which efficiently connects macroscopic quantities with microscopic molecular motion in systems such as a gas, may also play a part in solving a similar problem for boundary lubrication, at least in relating the breakdown ratio of the boundary film to the distribution function of the contact pressure and temperature, as one simple example; a numerical sketch of this idea follows below. There have also been attempts at correlating the boundary lubrication properties of sliding rough surfaces at the macroscopic level with the behavior of lubricant molecules by combining a phase transition model with a thermodynamic method [137]. A multi-scale analysis of the interfacial interactions has also been employed to determine the initiation of unstable wear under boundary lubrication [138].
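The following snippet sketches the statistical-averaging idea just mentioned: estimate the macroscopic breakdown ratio α as the mean of a local, thermally activated breakdown probability over assumed distributions of asperity contact pressure and flash temperature. The distributions, activation energy, and activation volume are all hypothetical placeholders.

```python
# Purely illustrative sketch of the statistical-mechanics idea above:
# average a local, temperature-activated film-breakdown probability over
# distributions of asperity contact pressures and flash temperatures.
import numpy as np

rng = np.random.default_rng(0)

# Assumed distributions over asperity contacts (not measured data):
pressures = rng.lognormal(mean=np.log(0.5e9), sigma=0.4, size=100_000)   # Pa
temperatures = 300.0 + rng.gamma(shape=2.0, scale=40.0, size=100_000)    # K

def local_breakdown_probability(p, T, E0=0.8, v_act=2e-29, kB=1.381e-23):
    """Probability that the boundary film fails at one contact.

    Arrhenius-like form: pressure lowers the effective activation
    barrier E0 (eV) through an activation volume v_act (m^3).
    """
    barrier = E0 * 1.602e-19 - v_act * p          # barrier in joules
    return np.exp(-np.maximum(barrier, 0.0) / (kB * T))

alpha = local_breakdown_probability(pressures, temperatures).mean()
print(f"estimated macroscopic breakdown ratio alpha ~ {alpha:.2e}")
```

The point of the sketch is only structural: once the pressure-temperature distribution over contacts is known, the macroscopic breakdown ratio follows by averaging, which is exactly the bridge between scales that the text calls for.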
As an engineering science, despite the abundant theories describing the complicated process of boundary lubrication, the ultimate goal of this research topic is to efficiently control the boundary lubrication properties so as to accommodate their application in machinery systems. During the past century, the main focus of attention was on the synthesis and characterization of lubricating additives, which readily found application in engineering systems and in the marketplace. However, there is no denying that the mechanisms of active control of boundary lubrication in certain model lubricating systems have been nearly clarified. For example, water- or oil-based lubricating systems, with surfactants or ionic liquids as additives, can be actively controlled under an applied direct or alternating voltage, owing to the response of the boundary film to the applied field. It is expected that the range of lubricating systems amenable to active control can be widely expanded, building on, but not limited to, the constitution of the existing model systems. New mechanisms and model systems with better performance will also be discovered. The general goal of an active control system is to regulate the tribological properties over a larger magnitude and more quickly. To find application in industry, for example in an automobile clutch, other factors such as stability, compatibility, and economic efficiency will also have to be taken into consideration. Moreover, the goal of active control of boundary lubrication is not only to control the magnitude of friction and wear, but also to control their distribution over a contact surface, which may also find industrial application. As prospects for research on boundary lubrication, the trends listed herein are not comprehensive, but represent several important and meaningful breakthroughs that may be achieved in the near future.
2019-04-11T13:14:03.384Z
2015-06-30T00:00:00.000
{ "year": 2015, "sha1": "f324b38abae38b6a0695baedf7e454b894a5176a", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40544-015-0084-4.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c55b1ca172387924f4ca83e8b0589a1fb772290e", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
13932111
pes2o/s2orc
v3-fos-license
Caspase-9 mediates Puma activation in UCN-01-induced apoptosis The protein kinase inhibitor 7-hydroxystaurosporine (UCN-01) is one of the most potent and frequently used proapoptotic stimuli. BH3-only molecules of the Bcl-2 family proteins have been reported to contribute to UCN-01-induced apoptosis. Here we have found that UCN-01 triggers the Puma-induced mitochondrial apoptosis pathway. Our data confirmed that the Akt-FoxO3a pathway mediates Puma activation. Importantly, we elucidate the detailed mechanisms of Puma-induced apoptosis. Our data have also demonstrated that caspase-9 is a decisive molecule for Puma induction after UCN-01 treatment. Caspase-9 mediates apoptosis through two kinds of feedback loops. On the one hand, caspase-9 enhances Puma activation by cleaving Bcl-2 and Bcl-xL independently of caspase-3. On the other hand, caspase-9 directly activates caspase-3 when caspase-3 is present. Caspase-3 can then cleave XIAP in another positive feedback loop to further sensitize cancer cells to UCN-01-induced apoptosis. Therefore, caspase-9 mediates Puma activation to determine the threshold for overcoming chemoresistance in cancer cells. The apoptosis pathway is closely related to the Bcl-2 family proteins, in which antiapoptotic members sequester multidomain proapoptotic proteins, thereby inhibiting their active role in apoptosis. In contrast, BH3-only proteins, which are considered stress sensors, can dissociate Bax-like proteins from their antiapoptotic sequestrators, thus leading to apoptosis. 1 The expression of Bcl-2 family proteins is regulated during carcinogenesis, 1 and the expression of both the Bcl-2 and Bcl-xL antiapoptotic proteins is associated with resistance to antitumor agents such as cisplatin (CP). 2 The inhibition of the protective function of antiapoptotic Bcl-2 members can either restore the normal apoptotic process in cancer cells or circumvent resistance to chemotherapy. 3,4 In this regard, enhanced expression of BH3-only proteins can effectively bind the antiapoptotic members and prevent their function. Some reports suggest that the BH3-only protein Puma has important roles in p53-dependent and -independent apoptosis in human cancer cells and mediates cell death through the Bcl-2 family proteins Bax/Bak and the mitochondrial pathway. 5,6 Our studies also reveal that Puma upregulation induces apoptosis in chemoresistant ovarian cancer cells, 7,8 confirming the requisite role of Puma in chemosensitivity. 7-Hydroxystaurosporine (UCN-01) is a protein kinase C-selective inhibitor that has been used successfully in phase I and II clinical trials. 9,10 As a modulator, UCN-01 enhances the cytotoxicity of other anticancer drugs such as DNA-damaging agents and antimetabolite drugs by putative abrogation of the G2- and/or S-phase accumulation induced by these anticancer agents. 11 As a single agent, UCN-01 exhibits two key biochemical effects, namely accumulation of cells in the G1 phase of the cell cycle and induction of apoptosis. 12 Both these effects may be important for its anticancer activity. Previous studies have demonstrated that UCN-01 potently decreases the phosphorylation level of Akt (p-Akt) in in vitro and in vivo systems. [12][13][14] Some researchers have also shown that UCN-01 can modulate Bcl-2 family members to potentiate apoptosis in cancer cells. 15,16 These reports suggest that Akt and the Bcl-2 family proteins may be the potent targets through which UCN-01 triggers cancer cell apoptosis.
In this study, we also investigate the role of Puma in UCN-01-induced apoptosis and confirm that p53-independent Puma induction is pivotal for the anticancer effects of UCN-01. Moreover, we elucidate for the first time the detailed mechanism of Puma-induced apoptosis after UCN-01 treatment. We found that Puma expression mediated caspase-9 and caspase-3 activation. Among the caspase proteins, caspase-9 has a key role in Puma-induced apoptosis. Our data demonstrated that caspase-9 could mediate Puma-induced apoptosis through two feedback pathways. On the one hand, activated caspase-9 initiated caspase-3 activity, and activated caspase-3 cleaved XIAP in a positive feedback loop to strengthen Puma expression. On the other hand, caspase-9 itself cleaved antiapoptotic Bcl-2 and Bcl-xL to positively enhance Puma induction. These results provide detailed mechanistic insight into the therapeutic response to UCN-01 and a theoretical basis for its applications. Results Puma is induced by UCN-01 in an Akt-FoxO3a-dependent and p53-independent manner. Our study revealed that UCN-01 treatment resulted in Puma induction in a variety of tumor cells, such as human colon cells (HCT116, HCT116 p53 KO, HT29 and DLD1), human ovarian cells (A2780/S and A2780/CP), leukemia cells (K562/S and K562/CP) and breast cancer cells (MCF-7 and MDA-MB-231) (Figure 1). These results further demonstrated that Puma induction was not dependent on p53 activation and revealed that UCN-01 could trigger Puma expression in different cancer cell lines regardless of their chemosensitivity. We further studied whether the Akt-FoxO3a pathway was involved in regulating Puma induction in our experimental systems. As illustrated in Figure 2a and Supplementary Figure 1A, FoxO3a small interfering RNA (siRNA) efficiently decreased FoxO3a expression and Puma induction, as well as cell apoptosis, after UCN-01 treatment. We then determined whether FoxO3a regulates Puma expression. We used the chromatin immunoprecipitation (ChIP) assay to detect the interactions between FoxO3a and the Puma promoter, as described previously. 7 Our results revealed that FoxO3a could act on the Puma promoter after UCN-01 treatment (Figure 2b). Moreover, gel analysis also proved that the binding of FoxO3a to the Puma promoter was significantly enhanced after treatment. These results suggest that FoxO3a can directly bind to the Puma promoter to activate its transcription following UCN-01 treatment. We then determined whether Akt mediates FoxO3a-induced Puma expression. Transfection with the Akt1 vector increased p-Akt and total Akt expression. Meanwhile, Akt1 transfection decreased FoxO3a nuclear translocation and increased its cytosolic localization after UCN-01 treatment. Furthermore, Akt1 overexpression suppressed Puma expression (Figure 2c). Therefore, the induction of Puma by FoxO3a following UCN-01 treatment appears to be mediated through Akt inhibition. Puma mediates UCN-01-induced apoptosis. We first determined the apoptotic effect of UCN-01 in various cancer cell lines. Cells were treated with UCN-01 at the indicated concentrations, and apoptosis was confirmed by a DNA fragmentation ELISA assay. These results revealed that UCN-01 effectively induced apoptosis in A2780/CP and HT29 cells (Figure 3a). Flow cytometry analysis with PI staining further demonstrated the effect of UCN-01 on apoptosis in breast cancer cells (Figure 3b). The other cancer cells showed the same results (data not shown).
Further experiments demonstrated that UCN-01 induced the release of Cyt c as well as nuclear condensation and fragmentation in murine embryonic fibroblast (MEF) cells (Figures 4a and b). Moreover, UCN-01 also triggered Puma expression and caspase-3 cleavage (Figure 4b). Puma KD by siRNA confirmed that Puma is necessary for caspase-3 and -9 activation and subsequent cell apoptosis induced by UCN-01 (Figures 4c and d). Caspase-9 regulates Puma and caspase-3 activation in apoptosis. Next, we detected the downstream events of Puma activation. We found that the apoptotic rate of MCF-7 was apparently lower than that of MDA-MB-231 after UCN-01 treatment (Figure 3b). Because MCF-7 lacks caspase-3 expression, 17 we speculated that molecules other than caspase-3 are involved in Puma-induced apoptosis; that is, there is likely a caspase-3-independent pathway in apoptosis. A recent study confirms that caspase-9 triggered caspase-3-independent apoptosis after UCN-01 treatment. 18 Moreover, caspase-9 is critical for caspase-3 activation. 19 We then examined whether caspase-9 contributes to Puma-induced apoptosis. We found that Puma indeed initiated caspase-9 activation (Figure 5a). Puma KO or KD by siRNA prevented caspase-9 cleavage (Figure 5a and Supplementary Figure 1B). Conversely, caspase-9 inactivation by its inhibitor (zLEHD) decreased Puma expression after UCN-01 treatment (Figure 5b). Moreover, caspase-9 inhibition impeded caspase-3 activation. Flow cytometry assays revealed that caspase-9 inhibition decreased cell apoptosis (Figure 5c). We then used siRNA to interfere with caspase-9 expression, which efficiently decreased caspase-9 levels (Supplementary Figure 2A). Moreover, the inhibition of caspase-9 induction obviously decreased Puma expression and cell apoptosis (Supplementary Figures 2A and C). These results indicate that caspase-9 has a key role in Puma-induced apoptosis. Puma activates caspase-9 through the mitochondrial pathway. Caspase-9 then mediates the activation of downstream apoptotic factors, such as caspase-3. Furthermore, when caspase-3 is deficient, caspase-9 positively regulates Puma expression through a certain feedback loop. Caspase-9 mediates Puma activation by cleaving Bcl-2 and Bcl-xL. We then examined the feedback pathway that contributed to Puma activation. A previous study has pointed out that caspase-9 induces feedback apoptosis at the mitochondrion by the cleavage of antiapoptotic Bcl-2 and Bcl-xL. 20 Our experiments proved that Bcl-2 and Bcl-xL were degraded during the time course of UCN-01 treatment (Figure 6a). Furthermore, caspase-9 inactivation restrained the cleavage of Bcl-2 and Bcl-xL (Figure 6b). Similarly, the inhibition of caspase-9 also prevented the degradation of Bcl-2 and Bcl-xL (Supplementary Figure 2A). We next determined whether the cleavage of Bcl-2 and Bcl-xL is indeed involved in caspase-9-dependent apoptosis. We generated the Bcl-2 and Bcl-xL mutants (D/A) as described previously. [20][21][22] Wild-type (WT) or caspase-resistant mutant Bcl-2 or Bcl-xL was stably transfected into HCT116 p53 KO cells, and we examined the changes in the HA-Bcl-2 or HA-Bcl-xL transfectants. As expected, these mutants were resistant to degradation after UCN-01 treatment (Figures 6c and d).
[Figure 2 legend (c-e): ChIP-qPCR of FoxO3a binding to the Puma promoter in cells left untreated (con) or treated with UCN-01 (1 μM for HCT116 p53 KO, 250 nM for A2780/CP) for 24 h, normalized to Ct values from input samples (means ± S.D. from three independent experiments); (d) ChIP on fixed cells after 12 h of UCN-01 treatment, with a FoxO3a-specific antibody or no antibody to show specificity, using primers surrounding the FoxO3a binding sites in the Puma promoter; (e) cells transfected with Ctrl or constitutively active Akt1 (T308D and S473D) vector for 48 h before treatment, with nuclear (N-FoxO3a) and cytosolic (C-FoxO3a) fractions analysed by subcellular fractionation and Lamin B1 as a nuclear marker. Data are representative of at least three independent experiments.]
Overexpression of the mutant Bcl-2 or Bcl-xL potently inhibited caspase-9-induced apoptosis, whereas cells expressing WT Bcl-2 or Bcl-xL still underwent cell death (Figures 6e and f). Caspase-3 regulates Puma activation and XIAP cleavage in apoptosis. We then examined the functional role of caspase-3 in Puma-induced apoptosis. We noted that caspase-9 activation resulted in the loss of XIAP during apoptosis (Figures 5a and b). However, when caspase-3 was deficient, the degradation of XIAP was halted. These results raise the question of whether caspase-3 regulates cell death through XIAP degradation. We first determined the effect of caspase-3 deficiency on Puma expression and cell death. We used HCT116 p53 KO and A2780/CP cells as cancer models, with CP treatment as a control (Ctrl). We used the caspase-3 inhibitor (zDEVD) to specifically inhibit caspase-3 activity and found that caspase-3 inactivation decreased cell apoptosis after UCN-01 treatment alone, as well as with CP treatment. It is noteworthy that caspase-3 inactivation decreased Puma expression. Moreover, caspase-3 activation indeed resulted in the loss of XIAP during apoptosis after treatment (Figures 7a and b). We transfected Ctrl or caspase-3 vector into MCF-7 cells and found that Puma expression increased following caspase-3 expression compared with Ctrl (Figure 7c). Moreover, caspase-3 induction resulted in PARP cleavage and apoptosis (Figure 7c). To further confirm the function of caspase-3 in Puma-induced apoptosis, we used siRNA to KD caspase-3 expression in HCT116 p53 KO and A2780/CP cells. Deficiency of caspase-3 decreased Puma expression and subsequent apoptosis. Meanwhile, caspase-3 inactivation led to the inhibition of XIAP degradation in apoptosis after treatment (Figure 7d and Supplementary Figure 3A). To determine whether XIAP depletion during apoptosis is dependent on proteasomal degradation, HCT116 p53 KO and A2780/CP cells were treated with the proteasome inhibitor MG132. These results were consistent with a reported study 23 and suggest that caspase-3 cleaves XIAP into a truncated fragment, which is subsequently committed to proteasomal degradation. To confirm the effect of caspase-3 on XIAP, we transfected WT-XIAP-Flag or XIAP (D242E)-Flag into HCT116 p53 KO and A2780/CP cells.
Ectopic expression of the XIAP (D242E)-Flag variant, in which the caspase cleavage motif was mutated, prevented the depletion of XIAP compared with WT-XIAP-Flag (Figure 7f), and thus failed to show the cleaved product of XIAP following UCN-01 treatment upon proteasomal inhibition by MG132. Correspondingly, XIAP (D242E)-Flag appeared to be more potent in inhibiting apoptosis compared with WT-XIAP-Flag (data not shown). These results demonstrated that caspase-3 contributes to Puma-induced apoptosis, in which caspase-3 can cleave XIAP and positively regulate Puma activation to enhance apoptosis. Smac contributes to XIAP depletion, caspase processing and apoptosis enhancement. We then investigated whether second mitochondria-derived activator of caspase (Smac) is involved in UCN-01-induced apoptosis, since it can bind XIAP to release caspase-9 and -3. 23,24 We used HCT116 p53 KO and A2780/CP cells as examples, with CP treatment as a Ctrl. Other cells showed the same results (data not shown). Our data revealed that Smac was released from the mitochondria to the cytosol after UCN-01 treatment (Figure 8a). Moreover, siPuma transfection inhibited Smac release, confirming that Smac release is a downstream event of Puma activation (Figure 8b and Supplementary Figure 3B). We next transfected Ctrl, WT-Smac-Flag or Smac-ΔMTS-Flag into HCT116 p53 KO cells. Smac-ΔMTS-Flag lacks the mitochondrial targeting sequence and localizes to the cytosol. 23 We found that both WT-Smac and truncated Smac expression enhanced caspase-9 and -3 cleavage and XIAP depletion (Figure 8c). Moreover, truncated Smac expression had a greater effect on caspase processing and XIAP degradation, as well as on apoptosis, compared with WT-Smac. It was noteworthy that Smac expression had little effect on Cyt c release (Figure 8c), indicating that Smac does not mediate Cyt c activation in UCN-01-induced apoptosis. Further siRNA experiments revealed that Smac inhibition prevented UCN-01-induced apoptosis (Figure 8d), demonstrating that Smac contributes to XIAP depletion, caspase processing and apoptosis enhancement. Puma and caspase-9 mediate the antitumor activity of UCN-01 in xenograft models. To determine whether the caspase-9- and Puma-dependent apoptotic effect of UCN-01 contributes to its antitumor activity in vivo, we cloned the Puma siRNA (Si Puma-1) or Si Casp-9-1 into the pSilencer 2.1-U6 hygro plasmid to generate Puma shRNA or Casp-9 shRNA, and stably transfected Puma shRNA or Casp-9 shRNA into HCT116 p53 KO or A2780/CP cells, respectively. We thus obtained the different HCT116 p53 KO and A2780/CP cell lines (Supplementary Figures 4A and B). These cells were injected into nude mice to establish xenograft tumors. Tumor-bearing mice were treated with UCN-01 as described previously, 25 and tumor volumes were measured every 3 days for 30 days. We found that HCT116 p53 KO and A2780/CP tumors responded to UCN-01 treatment with slower growth and were generally half the size of the untreated tumors following treatment. In contrast, HCT116 p53 KO/Puma KD, HCT116 p53 KO/caspase-9 KD, A2780/CP/Puma KD and A2780/CP/caspase-9 KD tumors were indistinguishable from those of the untreated mice and did not respond to UCN-01 treatment (Figures 9a and b). Similar results were also found for tumor weight (Figures 9c and d). To evaluate the possible adverse effects of UCN-01, the weight of the mice was monitored every 3 days throughout the experiment.
The weight curve of the UCN-01-treated group closely paralleled that of the Ctrl group (Figures 9e and f). No ruffled fur or toxic death was observed in the UCN-01-treated group. Tumor cell apoptosis was assessed by terminal deoxynucleotidyltransferase-mediated dUTP nick-end labeling (TUNEL) assay. As shown in Figures 10a and b, a significantly greater percentage of TUNEL-positive nuclei was observed in HCT116 p53 KO and A2780/CP tumors treated with UCN-01 when compared with tumors from the Ctrl group (Figures 10a and b). However, HCT116 p53 KO/Puma KD, HCT116 p53 KO/caspase-9 KD, A2780/CP/Puma KD and A2780/CP/caspase-9 KD tumors revealed almost the same percentage of TUNEL-positive nuclei as the Ctrl group (Figures 10c and d). These data clearly show the necessity of Puma and caspase-9 for the in vivo antitumor and apoptotic effects of UCN-01. Discussion A previous study revealed that UCN-01 could induce Puma-related apoptosis in a p53-independent manner. 25 In this study, we confirmed these results and further provided evidence that UCN-01 can induce Puma expression in other cancer cells with dysfunctional p53, such as A2780/CP 7 and K562. 26 Our recent study provides evidence that JNK and Akt cooperatively mediate Puma induction in apoptosis. 7 In this study, we also confirmed that Akt-FoxO3a regulates Puma expression. However, although JNK was activated in UCN-01-induced apoptosis, we found that JNK was not involved in regulating Puma expression. The JNK inhibitor SP600125 inhibited JNK activation but had little effect on Puma induction and apoptosis (data not shown). These results indicate that different agents may allow for simultaneous Puma induction via multiple pathways, thereby lowering the threshold of proapoptotic activity required for apoptosis induction. 8 Of course, further experiments are still needed to clarify why JNK is not involved in apoptosis induced by UCN-01 treatment in our study. As far as we know, Puma initiates apoptosis mainly through the mitochondrial apoptotic pathway, via Cyt c release and subsequent caspase activation. 6 In this study, our data indeed revealed that Puma triggered this apoptotic pathway. Interestingly, however, Puma still induced apoptosis in MCF-7 cells, which are caspase-3 deficient. 17 This result confirms at least two things. First, caspase-3 participates in the process of apoptosis but does not play a decisive role. Second, another pathway is involved in Puma-induced cell death. A recent study has proved that caspase-9 contributes to UCN-01-induced apoptosis: caspase-9 induced Apaf-1-independent apoptosis in MCF-7 cells after UCN-01 treatment. 18 We speculated that caspase-9 has a key role in Puma-induced apoptosis after UCN-01 treatment, and our data demonstrated that Puma initiated caspase-9 activation. Moreover, caspase-9 not only participates in Puma-induced apoptosis but also regulates Puma expression through cleaving Bcl-2 and Bcl-xL. Caspase-9 inactivation efficiently inhibited Puma expression, caspase-3 cleavage and cell death. Therefore, our results may elaborate the molecular mechanism of Apaf-1-independent caspase-9 activation in apoptosis. After UCN-01 treatment, activated Puma mediates Cyt c release and subsequently caspase-9 activation. Caspase-9, as a central molecule, transmits the apoptotic signal in cell death. On the one hand, caspase-9 enhances Puma activation by feedback amplification independently of caspase-3. Puma activation could induce the release of AIF and Endo G, 27,28 which induce caspase-3-independent cell death. 28 On the other hand, caspase-9 also triggers the caspase-3 signaling pathway when caspase-3 is present. Our data also revealed that caspase-3 could mediate Puma-induced apoptosis through a feedback loop. A previous study demonstrated that caspase-3 regulates Cyt c release and Bak activation through a feedback amplification loop. 6,29 It is possible that caspase-3 also unleashes further mitochondrial changes that are necessary for full execution of apoptotic events and enhance Puma activation. Indeed, Puma induction triggered Smac release from the mitochondria, and Smac induced the dissociation of XIAP from caspase-3, resulting in the initial activation of caspase-3. Activated caspase-3 then cleaved XIAP for proteasomal degradation, thereby initiating a positive amplification loop causing enhanced apoptosis.
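The interlocking loops summarized above (Puma → Cyt c/Smac → caspase-9 → caspase-3 → XIAP cleavage, feeding back on both caspases) lend themselves to a toy dynamical sketch. The following snippet is purely illustrative: the model form and every rate constant are assumptions, not quantities fitted to any data in this paper.

```python
# Purely illustrative toy ODE sketch of the feedback loops discussed above:
# a constant Puma-driven input activates caspase-9, caspase-9 activates
# caspase-3, caspase-3 degrades XIAP, and falling XIAP de-represses both
# caspases. All rate constants are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def caspase_feedback(t, y, k9=0.5, k3=1.0, kx=0.8, ki=8.0, kd=0.1):
    c9, c3, xiap = y                          # active casp-9, casp-3, XIAP
    derepression = 1.0 / (1.0 + ki * xiap)    # XIAP inhibits both caspases
    dc9 = k9 * derepression - kd * c9         # Puma-driven activation input
    dc3 = k3 * c9 * derepression - kd * c3    # caspase-9 activates caspase-3
    dxiap = -kx * c3 * xiap                   # caspase-3 cleaves XIAP (loop)
    return [dc9, dc3, dxiap]

sol = solve_ivp(caspase_feedback, (0.0, 60.0), [0.0, 0.0, 1.0],
                t_eval=np.linspace(0.0, 60.0, 7))
for t, c3, x in zip(sol.t, sol.y[1], sol.y[2]):
    print(f"t = {t:4.0f}  active caspase-3 = {c3:5.2f}  XIAP = {x:5.3f}")
```

Even in this crude form, the system shows the switch-like amplification that the experiments suggest: caspase activity stays low while XIAP is intact and rises sharply once the positive loop begins degrading it.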
Cell viability and apoptosis assays. Four methods were used to assess UCN-01-induced apoptotic cell death: detection of DNA fragmentation with the Cell Death Detection ELISA Kit (Roche Diagnostics, Basel, Switzerland), western blot analysis of caspase activation, PARP cleavage, and measurement of apoptotic cells by flow cytometry (PI staining for sub-G1 or Annexin V/PI). The Cell Death Detection ELISA Kit quantified the apoptotic cells by detecting the histone-associated DNA fragments (mono- and oligonucleosomes) generated by the apoptotic cells, as described previously. 30 Cell fractionation and immunoblotting. Mitochondrial and cytoplasmic cell fractions were obtained by differential centrifugation as described previously. 30 Nuclear and cytoplasmic lysates were prepared according to a previously published protocol. 7 Immunoblotting was carried out as described previously. 31 Western blotting was carried out with antibody dilutions as follows: actin at 1 : 20 000; Puma, XIAP, Flag, p-Akt (Ser 473), Akt, caspase-3, HA, caspase-9, Bcl-2 and Bcl-xL at 1 : 2000; and PARP, FoxO3a, Cyt c, Smac and Lamin B1 at 1 : 1000. ChIP assay. The ChIP assay was performed using the Chromatin Immunoprecipitation Assay Kit (Upstate Biotechnology, Lake Placid, NY, USA) according to the manufacturer's protocol. ChIP was performed with 3 μg of FoxO3a antibody (Sigma) incubated with protein G-coated magnetic beads overnight with rotation at 4 °C. The DNA fragments that co-immunoprecipitated with the target protein FoxO3a were subjected to quantitative real-time PCR (QRT-PCR) analysis using various primer sets. The Ct value of each sample was normalized to the Ct value obtained from the PCR reaction using the corresponding input genomic DNA as a template; a short sketch of this normalization is given below. Primer sequences for Puma and FoxO3a binding were designed as described previously. 7 Primer sequences used were as follows: Puma FoxO3a, 5′-GCCGCCACTGCAGTTAGAG-3′ and 5′-AACAGCCGGTTATTGGCC-3′.
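The input normalization described above is the standard percent-input calculation for ChIP-qPCR. The following short sketch shows it with hypothetical Ct values (the actual Ct values, input fraction, and dilutions of this study are not given here).

```python
# Percent-input calculation for ChIP-qPCR, as described in the ChIP
# methods above. The Ct values and the 1% input fraction below are
# hypothetical placeholders, not data from this study.
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """Percent-input for one ChIP-qPCR reaction.

    ct_ip:          Ct of the immunoprecipitated sample.
    ct_input:       Ct measured on the input genomic DNA template.
    input_fraction: fraction of chromatin used as input (assumed 1%).
    """
    # Adjust the input Ct so it represents 100% of the chromatin
    adjusted_input = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input - ct_ip)

# Hypothetical example: FoxO3a ChIP at the Puma promoter, con vs. UCN-01
for label, ct_ip in [("con", 30.5), ("UCN-01", 27.8)]:
    print(f"{label}: {percent_input(ct_ip, ct_input=25.0):.3f}% of input")
```

The lower Ct of the treated sample translates into roughly a 2^(30.5 − 27.8) ≈ 6.5-fold enrichment, matching the qualitative statement that FoxO3a binding to the Puma promoter increases after treatment.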
Immunostaining. The experiments were performed according to our previous report. 30 MEF cells were seeded in 24-well plates with Lab-Tek Chamber Slides with a Cover (Nalge Nunc International, Naperville, IL, USA) in 500 μl medium and incubated overnight. Cells were then treated with UCN-01 (6 μM) for 48 h. The medium was removed and cells were fixed in 4% formaldehyde containing 0.1% glutaraldehyde for 15 min at room temperature (RT). After rinsing with cold PBS (pH 7.4), cells were permeabilized with 0.5% Triton X-100 for 10 min at RT. After blocking with 5% goat serum, Cyt c antibody (7H8.2C12; BD Pharmingen, San Diego, CA, USA; 1 : 100 dilution) was added, and the fixed cells were incubated with antibodies at 37 °C for 1 h, followed by incubation with anti-mouse IgG-Cy5 (Millipore, Boston, MA, USA; 1 : 128 dilution) for 1 h. After the removal of antibodies, cells were rinsed with PBS and mounted with 90% glycerol. Fluorescence was immediately observed using an Olympus DP72 microscope (Olympus Corporation, Tokyo, Japan). In vivo tumor experiments. In vivo experiments were performed according to our previous reports with some modifications. 8,32 To study the antitumor activities of UCN-01 in vivo, HCT116 p53 KO and A2780/CP models were established. We cloned Si Puma-1 or Si Casp-9-1 into the pSilencer 2.1-U6 hygro plasmid to generate Puma shRNA or Casp-9 shRNA, and then transfected Puma shRNA or Casp-9 shRNA into HCT116 p53 KO or A2780/CP cells, respectively. We screened the cell lines to obtain stably transfected lines, and thus obtained HCT116 p53 KO/Puma KD, HCT116 p53 KO/caspase-9 KD, A2780/CP/Puma KD and A2780/CP/caspase-9 KD cells. In brief, 4 × 10⁶ cells of the different HCT116 p53 KO lines or 2.5 × 10⁶ cells of the different A2780/CP lines were subcutaneously injected into the right dorsal flank of 6- to 8-week-old female athymic nude BALB/c mice. Following tumor growth for 7 days, the tumor-bearing mice were randomly assigned into the following two groups (10 mice per treatment group): (a) Ctrl group; (b) UCN-01-treated group. Mice were injected intraperitoneally for 5 consecutive days with 9 mg/kg UCN-01 diluted in 20 mmol/l sodium citrate buffer (pH 6). 25 Tumor volumes were evaluated according to the following formula: tumor volume (mm³) = 0.52 × length × width². The weight of the mice was measured at 3-day intervals. At the end of the experiment, the mice were killed and the net tumor weight of each mouse was measured. The tumor tissues were collected for subsequent TUNEL experiments (see below). All studies involving mice were approved by the Institutional Animal Care and Treatment Committee of Sichuan University (Chengdu, China). TUNEL assay. The presence of apoptotic cells within the tumor sections was evaluated by the TUNEL technique using the DeadEnd Fluorometric TUNEL System (Promega, Madison, WI, USA) following the manufacturer's protocol. Percent apoptosis was determined by counting the number of apoptotic cells and dividing by the total number of cells in the field (5 high-power fields/slide). Statistical analysis. Statistical analysis of the differences between the groups was performed using Student's t-test, with P < 0.05 considered statistically significant. Conflict of Interest The authors declare no conflict of interest.
2016-05-18T13:29:42.276Z
2014-10-01T00:00:00.000
{ "year": 2014, "sha1": "b2a475c188a52111197ba87bb35d53c8f8bd463f", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/cddis2014461.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b2a475c188a52111197ba87bb35d53c8f8bd463f", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119158166
pes2o/s2orc
v3-fos-license
Gluing theory of Riemann surfaces and Liouville conformal field theory We study the gluing theory of Riemann surfaces using formal algebraic geometry, and give computable relations between the associated parameters for different gluing processes. As an application to the Liouville conformal field theory, we construct the sheaf of tempered conformal blocks on the moduli space of pointed Riemann surfaces, which satisfies the factorization property and has a natural action of the Teichmüller groupoid. Introduction The Liouville conformal field theory is well studied since it is an important example of non-rational conformal field theories, and there are remarkable relations, pointed out by physicists, with the quantum Teichmüller theory (cf. [V, T3, T5]) and with 4-dimensional gauge theory (cf. [AGT]). A basic tool in the study of the Liouville theory is to consider expansions of local conformal blocks in the gluing parameters of Riemann surfaces. For example, the AGT correspondence conjectures the coincidence between these expansions and instanton partition functions. Furthermore, Teschner [T1, T2, T3, T4, T5] claims that by studying analytic continuations of these expansions, one may obtain spaces of global conformal blocks satisfying the factorization principle. The aim of this paper is to apply the gluing theory of Riemann surfaces to the study of Liouville conformal blocks, and especially to Teschner's considerations. First, we study the gluing theory of Riemann surfaces using formal algebraic geometry, and give computable relations between the associated parameters for different gluing processes. By this result, one can study the arithmetic geometry of Teichmüller groupoids, which were introduced by Grothendieck [G] and studied by Moore-Seiberg [MS] and others [BK1, BK2, FG, G, HLS, NS]. Second, by studying analytic continuations of (local) gluing conformal blocks, we construct (generally infinite dimensional) Hilbert spaces consisting of "tempered" Liouville conformal blocks. More precisely, these Hilbert spaces give a sheaf of conformal blocks, namely a vector bundle with projectively flat connection on the moduli space of pointed Riemann surfaces, which satisfies the factorization property and has a natural action of the Teichmüller groupoid. Therefore, we can give a mathematical foundation to the considerations of Teschner [T3, T4, T5] on the "modular functor conjecture", namely that there exists a global theory of Liouville conformal blocks which gives a modular functor in the sense of Segal [Se]. The organization of this paper is as follows. In Section 2, we recall results of [I1, I2] on computable relations between deformation parameters of degenerate (algebraic) curves, which are used in Section 3 to construct Teichmüller groupoids in the category of arithmetic geometry. In Section 4, by combining these results with results of Teschner [T1, T2, T3] and Hadasz-Jaskólski-Suchanek [HJS], we construct the sheaf of tempered Liouville conformal blocks. Deformation of degenerate curves 2.1. Degenerate curve. We recall the well-known correspondence between certain graphs and degenerate pointed curves, where a (pointed) curve is called degenerate if it is a stable (pointed) curve and the normalizations of its irreducible components are all projective (pointed) lines. A graph ∆ = (V, E, T) means a collection of 3 finite sets, V of vertices, E of edges and T of tails, together with 2 boundary maps, such that the geometric realization of ∆ is connected. A graph ∆ is called stable if each of its vertices has degree ≥ 3, i.e. has at least 3 branches.
Then a degenerate pointed curve determines its dual graph ∆ = (V, E, T) by the correspondence: V ←→ {irreducible components of the curve}, E ←→ {singular points on the curve}, T ←→ {marked points on the curve}, such that an edge (resp. a tail) of ∆ has a vertex as its boundary if the corresponding singular (resp. marked) point belongs to the corresponding component. Denote by |X| the number of elements of a finite set X. Under a fixed bijection ν : T ∼→ {1, ..., |T|}, which we call a numbering of T, a stable graph ∆ = (V, E, T) becomes the dual graph of a degenerate |T|-pointed curve of genus rank_Z H_1(∆, Z), such that each tail h ∈ T corresponds to the ν(h)-th marked point. In particular, a stable graph without tails is the dual graph of a degenerate (non-pointed) curve under this correspondence. If ∆ is trivalent, i.e. any vertex of ∆ has exactly 3 branches, then a degenerate |T|-pointed curve with dual graph ∆ is maximally degenerate. 2.2. Generalized Tate curve. Let ∆ = (V, E) be a stable graph without tails, and under an orientation of ∆, i.e., an orientation of each e ∈ E, denote by v_h the terminal vertex of h ∈ ±E (resp. the boundary vertex b(h) of h ∈ T). Take a subset of ±E = {e, −e | e ∈ E}, denoted again by E, whose complement E_∞ satisfies the condition that (…) and that v_h ≠ v_{h′} for any distinct h, h′ ∈ E_∞. We attach variables α_h for h ∈ E and q_e = q_{−e} for e ∈ E. Let A_0 be the Z-algebra generated by α_h (h ∈ E), 1/(α_e − α_{−e}) (e, −e ∈ E) and 1/(…). According to [I1, Section 2], we construct the universal Schottky group Γ associated with the oriented ∆ and E as follows. For h ∈ ±E, let φ_h be the element of PGL_2(B) = GL_2(B)/B^× given by (…) (see the sketch below), where PGL_2 acts on P^1 by linear fractional transformations. For any reduced path ρ = h(1) · h(2) ⋯ h(l), i.e. a product of oriented edges h(1), ..., h(l) such that h(i) ≠ −h(i + 1) and v_{h(i)} = v_{−h(i+1)}, one can associate an element ρ* of PGL_2(B) having reduced expression φ_{h(l)} φ_{h(l−1)} ⋯ φ_{h(1)}. Fix a base point v_b of V, and consider the fundamental group π_1(∆, v_b), which is a free group of rank g = rank_Z H_1(∆, Z). Then the correspondence ρ → ρ* gives an injective antihomomorphism π_1(∆, v_b) → PGL_2(B), whose image is denoted by Γ. It is shown in [I1, Section 3] (and had been shown in [IN, Section 2] when ∆ is trivalent and has no loop) that for any stable graph ∆ = (V, E) without tails, there exists a stable curve C_∆ of genus g over A which satisfies the following: • The closed fiber C_∆ ⊗_A A_0 of C_∆, given by putting q_e = 0 (e ∈ E), is the degenerate curve over A_0 with dual graph ∆ which is obtained from P_v := P^1_{A_0} (v ∈ V) by identifying α_e ∈ P_{v_e} and α_{−e} ∈ P_{v_{−e}} (e ∈ E), where α_h = ∞ if h ∈ E_∞. • C_∆ gives a universal deformation of C_∆ ⊗_A A_0. • C_∆ ⊗_A B is smooth over B and is Mumford uniformized (cf. [M]) by Γ. • Let α_h (h ∈ E) be complex numbers such that α_e ≠ α_{−e} and that (…). Then for sufficiently small complex numbers q_e ≠ 0 (e ∈ E), C_∆ becomes a Riemann surface which is Schottky uniformized (cf. [S]) by Γ. We apply the above result to construct a uniformized deformation of a degenerate pointed curve, which had been done by Ihara and Nakamura (cf. [IN, Section 2, Theorems 1 and 10]) when the degenerate pointed curve is maximally degenerate and consists of smooth pointed projective lines.
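The display defining φ_h did not survive extraction. For orientation only, the following is the standard normal form of a loxodromic Möbius transformation used in Schottky/Tate-type constructions; the exact normalization of φ_h in [I1] may differ.

```latex
% Standard loxodromic normal form (for orientation; the precise
% normalization of \varphi_h in [I1] may differ): a Mobius
% transformation \gamma with attractive fixed point \alpha,
% repulsive fixed point \alpha', and multiplier q satisfies
\[
  \frac{\gamma(z) - \alpha}{\gamma(z) - \alpha'}
  \;=\; q \, \frac{z - \alpha}{z - \alpha'},
  \qquad 0 < |q| < 1,
\]
% so that \gamma^{n}(z) \to \alpha as n \to \infty for z \neq \alpha'.
% In constructions of this type, the generator attached to an oriented
% edge h is of this shape, with fixed points \alpha_h, \alpha_{-h}
% and multiplier q_{|h|}.
```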
Let ∆ = (V, E, T) be a stable graph with numbering ν of T. We define its extension ∆̃ = (Ṽ, Ẽ) as a stable graph without tails by adding, for each tail h ∈ T, a vertex with a loop to the end distinct from v_h. Then from the uniformized curve associated with ∆̃, by substituting 0 for the deformation parameters which correspond to e ∈ Ẽ − E and by replacing the singular projective lines which correspond to v ∈ Ṽ − V with marked points, one obtains the required universal deformation. 2.3. Comparison of deformations. A rigidification of an oriented stable graph ∆ = (V, E, T) with numbering ν of T means a collection τ = (τ_v)_{v∈V} of injective maps (…). One can see that any stable graph has a rigidification, by induction on the number of edges and tails. Let ∆ = (V, E, T) be a stable graph with a numbering of T such that only one vertex, which we denote by v_0, has 4 branches and the other vertices have 3 branches. Fix an orientation of ∆, and denote by h_1, h_2, h_3, h_4 the mutually different elements of ±E ∪ T with terminal vertex v_0. Then one can take a rigidification τ = (τ_v)_{v∈V} of ∆ such that (…), and hence x = x_{h_1} gives the coordinate on P^1_Z − {0, 1, ∞}. Denote by C_{(∆,τ)} the uniformized deformation given in 2.2, which is a stable |T|-pointed curve over the corresponding base ring. Let ∆′ = (V′, E′, T′) (resp. ∆″ = (V″, E″, T″)) be the stable graph obtained from ∆ by replacing v_0 with an edge e′_0 (resp. e″_0) having two boundary vertices, one of which is a boundary of h_1, h_2 (resp. h_1, h_3) and the other a boundary of h_3, h_4 (resp. h_2, h_4). Then one can identify T′ and T″ with T naturally, and it is easy to see that as x → 0 (resp. x → 1), the degenerate |T|-pointed curve corresponding to x becomes the maximally degenerate |T|-pointed curve with dual graph ∆′ (resp. ∆″). Let ∆′ (resp. ∆″) without e′_0 (resp. e″_0) carry the orientation naturally induced from that of ∆, and let h′_0 (resp. h″_0) be the edge e′_0 (resp. e″_0) with an orientation. For i = 1, 2, 3, 4, we denote by h′_i (resp. h″_i) the oriented edge in ∆′ (resp. ∆″) corresponding to h_i, and identify the invariant part of E with that of E′ and E″. Then, as seen above, for a rigidification τ′ (resp. τ″) of ∆′ (resp. ∆″), we have the uniformized deformation C_{(∆′,τ′)} (resp. C_{(∆″,τ″)}), which is a stable |T|-pointed curve over the corresponding base ring. We will consider two isomorphisms, of C_{(∆,τ)} to C_{(∆′,τ′)} and to C_{(∆″,τ″)}. Note that under these isomorphisms, the comparison between the parameters of the base rings depends on whether some of the h_i (1 ≤ i ≤ 4) are loops or not. In Theorem 2.1 below, we make the comparison in restricted cases to save space, since the other cases can be treated similarly, as is seen from the proof. Theorem 2.1. Denote by y_i the deformation parameters associated with h_i for i ∈ I, and by s_j (resp. t_j) the deformation parameters associated with h′_j (resp. h″_j) for j ∈ {0} ∪ I. Then we have (1) an isomorphism C_{(∆,τ)} ≅ C_{(∆′,τ′)}, under which the variables of the base rings A_{(∆,τ)} and A_{(∆′,τ′)} are related as (…); and (2) an isomorphism C_{(∆,τ)} ≅ C_{(∆″,τ″)}, under which the variables of the base rings A_{(∆,τ)} and A_{(∆″,τ″)} are related as (…). Remark 1. In (1) and (2) above, the constant terms of the ratios in A_{(∆′,τ′)}, A_{(∆″,τ″)} are clearly either 1 or −1, and these signs can be easily determined from the data of the rigidifications. If |h_1| = |h_2| in (2) in particular, then y_1/(t_0 t_1) belongs to A_{(∆″,τ″)}², and hence this constant term is 1, since by [I1, Proposition 1.3], the reduced element (…). Remark 2. From the properties of generalized Tate curves given in 2.2, one can see that the assertion in (1) (resp. (2)) holds in the category of complex geometry when x, y_e and s_{e′} (resp. 1 − x, y_e and t_{e″}) are sufficiently small.
Proof. We review the proof given in [I2], since it also gives the method of comparing deformation parameters of degenerate curves. We prove the theorem when ∆ has no tail, from which the assertion in the general case follows, and we only prove (1), since (2) can be shown in the same way. Over a certain open subset of {(x, y_e (e ∈ E)) | x ∈ C^×, y_e ∈ C} with sufficiently small absolute values |x| and |y_e|, C_{(∆,τ)} gives a deformation of the degenerate curve with dual graph ∆. Hence there exists an isomorphism (…) under which the degenerations of C_{(∆,τ)} given by y_i → 0 (1 ≤ i ≤ 4) and y_e → 0 (e ∈ E_inv) correspond to those of C_{(∆′,τ′)} given by s_i → 0 and s_e → 0, respectively. Since these two curves are Mumford uniformized, a result of Mumford [M, Corollary 4.11] implies that the uniformizing groups Γ_{(∆,τ)} and Γ_{(∆′,τ′)} are conjugate; denote by ι the isomorphism defined by this conjugation. Since eigenvalues are invariant under conjugation and the cross ratio of 4 points a, b, c, d is invariant under linear fractional transformations, one can see the following: (A) For any γ ∈ Γ_{(∆,τ)}, the multiplier of γ is equal to that of ι(γ) via the above isomorphism. (B) For any γ_i ∈ Γ_{(∆,τ)} (1 ≤ i ≤ 4), the cross ratio [a_1, a_2; a_3, a_4] of the attractive fixed points a_i of the γ_i is equal to that of the ι(γ_i) via the above isomorphism. We consider the case that the |h_i| (1 ≤ i ≤ 4) are mutually different. Put A_1 = Z[[x, y_1/x, y_2/x, y_3, y_4, y_e (e ∈ E_inv)]], whose quotient field is denoted by Ω_1, and let I_1 be the ideal of A_1 generated by x, y_1/x, y_2/x, y_3, y_4 and y_e (e ∈ E_inv). Then from (A) and (B) above and results in [I1, §1], we will show that the isomorphism descends to A_1 ≅ A_{(∆′,τ′)}, where the variables are related as in the statement of Theorem 2.1 (1). We take local coordinates ξ_h as (…); if γ ∈ Γ_{(∆,τ)} and z ∈ P^1(Ω_1) with ξ_h(z) ∈ I_1, then by [I1, Lemma 1.2], the attractive fixed point a of γ is given by lim_{n→∞} γ^n(z), and hence ξ_{h(1)}(a) ∈ I_1. For each v ∈ V, fix a path ρ_v in ∆ from the base point v_b to v. If ρ_i ∈ π_1(∆, v_0) has reduced expression ⋯ h_i (1 ≤ i ≤ 4), then the attractive fixed points a_i of the associated γ_i satisfy [a_1, a_3; a_2, a_4] ∈ x · (A_1)^×. Furthermore, by Proposition 1.4 and Theorem 1.5 of [I1], the attractive fixed points (…); hence from (B), we have (…), and the comparison of y_1/x and s_1 follows from applying (B) to (…) for distinct oriented edges h_5, h_6 with terminal vertex v_{−h_1}. Similarly, we obtain the comparison of y_2/x (resp. y_3, y_4, y_e (e ∈ E_inv − {loops})) and s_2 (resp. s_3, s_4, s_e); and further, if e ∈ E_inv is a loop, then the comparison of y_e and s_e follows from applying (A) to γ = (ρ*_{v_e})^{−1} · φ_e · ρ*_{v_e}. Therefore, the above isomorphism descends to A_1 ≅ A_{(∆′,τ′)}. One can show the assertion in the case that |h_1| = |h_2| similarly. The following result was given in substance in [I2, 3.1] and [I3, 1.2], and is shown explicitly here since it is crucial for proving the results in Section 4. Theorem 2.2. Let the notation be as above. Then there are elements u_i such that x and the u_i (resp. 1 − x and the u_i) give deformation parameters of the closed fibers C′_0 (resp. C″_0) of C_{(∆′,τ′)} (resp. C_{(∆″,τ″)}), namely one has (…). Proof. Assume that all branches starting from v_0 are not loops. We put y_i = y_{|h_i|} as in Theorem 2.1. Take u_i = (…) or −y_4 (i = 4), and u_i (i ≥ 5) by specifying one of y_e and −y_e for each e ∈ E_inv. Then by Theorem 2.1, under x → 0 (resp. 1), x (resp. 1 − x) and the u_i (1 ≤ i ≤ 3g − 4) are deformation parameters of the maximally degenerate pointed curve C′_0 (resp. C″_0). In the case that there are loops with boundary vertex v_0, one can also obtain the required deformation parameters using Theorem 2.1.
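For reference, the two conjugation invariants used in assertions (A) and (B) above are standard; we record them in one common convention (the cross-ratio normalization in [I1] may differ by a permutation of entries).

```latex
% The multiplier of a loxodromic \gamma \in PGL_2 is the ratio of the
% eigenvalues of any matrix representative (well defined in PGL_2):
\[
  \operatorname{mult}(\gamma) = \lambda_{-}/\lambda_{+},
  \qquad |\lambda_{-}| < |\lambda_{+}| ,
\]
% and the cross ratio of four points is
\[
  [a_1, a_2; a_3, a_4]
  = \frac{(a_1 - a_3)(a_2 - a_4)}{(a_1 - a_4)(a_2 - a_3)} .
\]
% Both are unchanged under \gamma \mapsto \delta \gamma \delta^{-1} and
% a_i \mapsto \delta(a_i) for \delta \in PGL_2, which is exactly what
% allows the deformation parameters to be read off from the conjugate
% Schottky groups in the proofs above.
```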
2.4. Ihara-Nakamura's deformation. We consider deformations of maximally degenerate curves using "standard" local coordinates on P^1, as studied by Ihara and Nakamura [IN] with applications to Galois theory on arithmetic fundamental groups of curves. Let ∆ = (V, E, T) be a trivalent graph such that rank_Z H_1(∆, Z) = g and |T| = n, and let C_0 denote the associated degenerate curve over Z, which is a union of the P_v (v ∈ V). We take a local coordinate z_h on each P_{v_h} such that {marked points and singular points on P_{v_h}} = {0, 1, ∞} and z_h(α_h) = 0. Then one can define the Ihara-Nakamura deformation C_IN of C_0 over the ring Z[[q_e]] of integral formal power series in the variables q_e (e ∈ E) by the relations z_h z_{−h} = q_{|h|} (h ∈ ±E). As is shown in [IN, 2.4.2], for any v_0 ∈ V and each γ ∈ π_1(∆, v_0), one can associate an element γ* ∈ PGL_2(Z[[q_e]]) as follows. Let (…), where g_0, g_d are defined by z_{−h_0} = g_0(z), z = g_d(z_{h_{d−1}}). Then γ → γ* gives a representation π_1(∆, v_0) → PGL_2(Z[[q_e]]) whose image is a Schottky group over Z[[q_e]] as in 2.2. In the same way as in the proof of Theorem 2.1, especially using assertions (A) and (B), one can compare the deformation parameters q_e with those given in Theorem 2.1. 3. Teichmüller groupoid 3.1. Moduli space of curves. We review fundamental facts on the moduli space of pointed curves and its compactification [DM, KM, K]. Let g and n be non-negative integers such that n and 2g − 2 + n are positive. Let M_{g,n} (resp. M_{g,n⃗}) denote the moduli stack over Z of proper smooth curves of genus g with n marked points (resp. with n marked points having non-zero tangent vectors). Then M_{g,n⃗} becomes a principal (G_m)^n-bundle on M_{g,n}. Furthermore, let M̄_{g,n} denote the Deligne-Mumford-Knudsen compactification of M_{g,n}, defined as the moduli stack over Z of stable curves of genus g with n marked points, and let M̄_{g,n⃗} denote the (A^1)^n-bundle on M̄_{g,n} naturally containing M_{g,n⃗}. For these moduli stacks M_{*,*} and M̄_{*,*}, we denote by M^{an}_{*,*} and M̄^{an}_{*,*} the associated complex orbifolds. A point at infinity on M_{g,n} (resp. M_{g,n⃗}) is a point on M̄_{g,n} (resp. M̄_{g,n⃗}) which corresponds to a maximally degenerate n-pointed curve, and a tangential point at infinity is a point at infinity with tangential structure over Z. We describe the boundary of M̄_{g,n}. Denote by D_0 the divisor of M̄_{g,n} corresponding to singular stable marked curves which are desingularized to stable curves of genus g − 1 with n + 2 marked points. For an integer i with 1 ≤ i ≤ [g/2] and a subset S of P = {1, ..., n} such that 2i − 2 + |S| and 2(g − i) − 2 + n − |S| are positive, denote by D_{i,S} the divisor of M̄_{g,n} corresponding to singular stable marked curves which are desingularized to pairs consisting of a stable curve of genus i with |S| marked points and a stable curve of genus g − i with n − |S| marked points. Then M̄_{g,n} − M_{g,n} consists of the normal crossing divisors D_0, D_{i,S}, and hence M̄_{g,n⃗} − M_{g,n⃗} consists of the pullbacks of D_0, D_{i,S} under the natural projection M̄_{g,n⃗} → M̄_{g,n}, which we denote by the same notation. 3.2. Teichmüller groupoid. The Teichmüller groupoid for M_{g,n⃗} is defined as the fundamental groupoid of M^{an}_{g,n⃗} with tangential base points at infinity.
Its fundamental paths, called basic moves, are half-Dehn twists, fusing moves and simple moves, defined as follows. Let ∆ = (V, E, T) be a stable graph as above, and assume that ∆ is trivalent. Then for any rigidification τ of ∆, ±E ∪ T = ⊔_{v∈V} Im(τ_v), and hence A_{(∆,τ)} is the formal power series ring over Z in the 3g + n − 3 variables q_e (e ∈ E). First, the half-Dehn twist associated with e is defined as the deformation of the pointed Riemann surface corresponding to C_∆ given by q_e → −q_e. Second, a fusing move (or associativity move, A-move) is defined through the different degeneration processes of a 4-holed Riemann sphere. A fusing move changes (∆, e) to another trivalent graph (∆′, e′) such that ∆ and ∆′ become the same graph, which we denote by ∆″, if e and e′ shrink to a point. We denote this move by ϕ(e, e′). As is done in [I2, Section 3] and [I3, Theorem 1], one can construct this move using Theorem 2.2. Finally, a simple move (or S-move) is defined through the different degeneration processes of a 1-holed complex torus. Then the following Theorem 3.1, called the completeness theorem in [MS], was conjectured in [G] and shown in [BK1, BK2, FG, HLS, MS, NS] (especially in [NS, Sections 7 and 8], using the notion of quilt decompositions of Riemann surfaces). (…) Put S = Q/2 + √−1 · R_+, and for each α ∈ S, denote by V_α the irreducible highest weight representation of Vir_c with generator e_α, which is annihilated by L_n (n > 0) and has L_0-eigenvalue ∆_α = α(Q − α), where c = 1 + 6Q². Then for any v ∈ V_α, L_n(v) = 0 if n ≫ 0. There exists a unique inner product ⟨·, ·⟩_{V_α} on V_α such that (…). Under the assumption 2g − 2 + n > 0, let C be a Riemann surface of genus g with n marked points P_1, ..., P_n and local coordinates t_i (i = 1, ..., n) vanishing at P_i. We associate highest weight representations V_{α_i} of Vir_c to the P_i (i = 1, ..., n), and define the action of (…). Denote by D_C the Lie algebra of meromorphic differential operators on C which may have poles only at P_1, ..., P_n. Then (invariant) conformal blocks associated with (C; P_i, t_i) are linear maps F_C : ⊗_{i=1}^n V_{α_i} → C satisfying the invariance property (…), where χ is regarded as an element of ⊕_{i=1}^n C((t_i))∂_{t_i} (cf. [FB, T3, T5]). If (g, n) = (0, 3), then F_C is uniquely determined by the values F_C(e_{α_1} ⊗ e_{α_2} ⊗ e_{α_3}). Let C_i (i = 1, 2) be two Riemann surfaces with n_i + 1 marked points and associated local coordinates, and denote by C_1♯C_2 the pointed Riemann surface obtained by gluing the C_i at their (n_i + 1)-th points via the gluing parameter q. The gluing of the conformal blocks F_{C_1}, F_{C_2} by β ∈ S is defined as (…), which is the product of q^{∆_β} and a formal power series in q with constant term F_{C_1}(v_1 ⊗ e_β) F_{C_2}(e_β ⊗ v_2). For a Riemann surface C with n + 2 marked points and associated local coordinates, the gluing F^β_{C♯} of the conformal block F_C can be defined in a similar way, where C♯ denotes the pointed Riemann surface obtained by gluing the (n + 1)-th and (n + 2)-th points on C. Let σ be a pants decomposition of a Riemann surface of genus g with n marked points and local coordinates, and let β be an S-valued function on the set E(σ) of edges associated with σ. Then we define the gluing conformal block F^β_σ as the gluing of conformal blocks on 3-pointed Riemann spheres; it is represented as a formal power series in the deformation parameters of the degenerate curve associated with σ, as sketched below.
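The displayed definition of the glued block was lost in extraction. The following records the standard sewing expansion from the CFT literature, which is consistent with the stated leading behavior; the paper's precise normalization may differ.

```latex
% Standard sewing expansion (conventions may differ from the paper's):
% sum over a homogeneous basis {u} of V_\beta, with dual basis {u^\vee}
% for the inner product on V_\beta:
\[
  F^{\beta}_{C_1 \sharp C_2}(v_1 \otimes v_2)
  \;=\; \sum_{u} q^{\,\Delta_\beta + |u|}\,
        F_{C_1}\!\bigl(v_1 \otimes u\bigr)\,
        F_{C_2}\!\bigl(u^{\vee} \otimes v_2\bigr),
\]
% where |u| is the L_0-level of u. The leading term u = e_\beta
% reproduces the stated constant term
% F_{C_1}(v_1 \otimes e_\beta) F_{C_2}(e_\beta \otimes v_2)
% after factoring out q^{\Delta_\beta}.
```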
Let C/S be a family of stable curves over C of genus g with n marked points P_i and local coordinates t_i (1 ≤ i ≤ n). Denote by σ_i : S → C the section corresponding to P_i. Then it is shown in [BK2, 7.4] that, in the category of algebraic geometry, one can let T_S act on the sheaf of conformal blocks as follows. For a vector field θ on S, there exists a lift θ̃ as a vector field on C − ∪_{i=1}^n σ_i(S), since the latter is affine over S. Take (…). Then, by the definition of conformal blocks, this gives the action of T_S on the sheaf of conformal blocks on C/S. We denote by ∇ the corresponding connection. The following result is then more or less well known, and can be checked using the statements and proofs of [BK2, Sections 7.4 and 7.8]: (1) (…); (2) the residue of ∇ around the singular locus of C/S is given by the action of L_0; (3) F^β_σ is a flat section of ∇. Proof. The assertions (1) and (2) are shown in [BK2, Proposition 7.4.8] and [BK2, Example 7.4.12 and Corollary 7.8.9], respectively. We prove (3). By [BK2, Propositions 7.8.6 and 7.8.7] and their proofs, F^β_σ is the image of a constant section under a T_S-equivariant map to the sheaf of conformal blocks over S = Spec C[[q_e (e ∈ E(σ))]], and hence is a flat section of ∇. Tempered conformal blocks. We recall results of Teschner [T1, T2, T3] on analytic continuations of Liouville conformal blocks on 4-pointed Riemann spheres. We normalize N(α_1, α_2, α_3) = F_C(e_{α_1} ⊗ e_{α_2} ⊗ e_{α_3}) as in [TV, (8.3) and (12.22)], and let σ, σ′ be pants decompositions of P^1_C − {0, 1, ∞, x}, x ∈ (0, 1), which are connected by a fusing move. Then it is shown in [T1, T2, T3] that for each β ∈ S, the associated conformal block F^β_σ : V_{α_1} ⊗ V_{α_2} ⊗ V_{α_3} ⊗ V_{α_4} → C can be analytically continued along (0, 1) to a meromorphic form around x = 1 which is represented as (…), where S = Q/2 + √−1 · R_+, for a kernel function Φ_{β,β′} and a measure dµ(β′) explicitly given in [T2, 5.2] and [T3, 2.1]. Therefore, the analytic continuation along (0, 1) gives rise to a canonical isomorphism between the Hilbert spaces (…). The analytic continuation of F^β_σ along a simple move in M^{an}_{1,1⃗} is given by Hadasz-Jaskólski-Suchanek [HJS], and that along the half-Dehn twist associated with an edge e is multiplication by exp(π√−1 ∆_{β(e)}). Using the above results, we now define the space of tempered Liouville conformal blocks. Let C be a Riemann surface of genus g with n marked points P_i and local coordinates t_i (1 ≤ i ≤ n). Take a tangential point p_∞ at infinity, and a path π in M^{an}_{g,n⃗} from p_∞ to the point p_C corresponding to (C; P_i, t_i). Denote by σ the pants decomposition corresponding to p_∞, and take an S-valued function β on the set E(σ) of edges associated with σ. Then (…) becomes a formal power series in the q_e (e ∈ E(σ)); denote its constant term by C^β_σ(v). Let F^β_C be a conformal block associated with (C; P_i, t_i) defined by the condition (…), where p ∈ π approaches p_∞ as t ↓ 0, and Trans^{p_C}_p denotes the parallel transport by ∇ along π from p to p_C. Then its analytic continuation as a flat section of ∇ along π to p_∞ has main part C^β_σ(v) ∏_{e∈E(σ)} t^{∆_{β(e)}}, and hence is equal to F^β_σ(v) as a formal power series multiplied by ∏_{e∈E(σ)} q_e^{∆_{β(e)}}. We define the space CB_temp(⊗_{i=1}^n V_{α_i}, C) of tempered conformal blocks associated with (C; P_i, t_i) as the direct integral (…), which is isomorphic to the Hilbert space of square-integrable functions on {(β(e))_{e∈E(σ)} | β(e) ∈ S} ≅ (R_+)^{3g−3+n}.
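The direct-integral display was also lost. The following is a plausible shape of the dropped formula, reconstructed from the L²-description stated just above; it is a guess at the formula, not a quotation of the paper.

```latex
% A plausible shape of the dropped direct-integral definition,
% reconstructed from the stated isomorphism with L^2 functions on
% S^{E(\sigma)} \cong (\mathbb{R}_+)^{3g-3+n} (a guess, not a quotation):
\[
  CB_{\mathrm{temp}}\Bigl(\textstyle\bigotimes_{i=1}^{n} V_{\alpha_i},\, C\Bigr)
  \;=\; \int^{\oplus}_{\beta \in S^{E(\sigma)}}
        \mathbb{C}\, F^{\beta}_{C} \; d\mu(\beta),
\]
% i.e. square-integrable families of coefficients of the blocks
% F^\beta_C over the parameter space of the pants decomposition \sigma.
```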
Theorem. (1) The Hilbert space CB_temp(⊗_{i=1}^n V_{α_i}, C) is independent of π and p_∞. (2) The Hilbert space CB_temp(⊗_{i=1}^n V_{α_i}, C) satisfies the factorization property in the following sense: for Riemann surfaces C_i (i = 1, 2) with n_i + 1 marked points and local coordinates, (…); similarly, for a Riemann surface C with n + 2 marked points and local coordinates, one has a canonical isomorphism (…). (3) Via the connection ∇, CB_temp(⊗_{i=1}^n V_{α_i}, C) carries a projective action of the Teichmüller groupoid for M_{g,n⃗}, such that the action of fusing moves and simple moves is given by the action in the cases (g, n) = (0, 4) and (1, 1), respectively. Proof. First, we prove (1). Since ∇ is projectively flat, CB_temp = CB_temp(⊗_{i=1}^n V_{α_i}, C) is independent of the homotopy class of π. Then, by Theorem 3.1, to prove (1) it is enough to show that CB_temp is independent of moving p_∞ by fusing moves and simple moves. Let σ and σ′ be pants decompositions of Riemann surfaces of genus g with n marked points such that σ and σ′ are connected by a fusing move ϕ. Then the gluing conformal block F^β_σ is represented as the gluing F^β_{C_1♯C_2} of F^{β_1}_{C_1} and F^{β_2}_{C_2}, where C_1 denotes a 4-pointed Riemann sphere associated with ϕ. By the above result of Teschner [T1, T2, T3], there exists a form F′_{C_1} which is the parallel transport of F^{β_1}_{C_1} along the fusing move in M^{an}_{0,4⃗} associated with ϕ. Then the parallel transport of F^β_σ along ϕ becomes the gluing of F′_{C_1} and F^{β_2}_{C_2} via the deformation parameters u_i given in Theorem 2.2. Therefore, CB_temp is independent of moving p_∞ by fusing moves. By the result of Hadasz-Jaskólski-Suchanek [HJS], the space of tempered conformal blocks for 1-pointed curves of genus 1 is stable under a simple move. Therefore, in a similar way as above, one can show that CB_temp is independent of moving p_∞ by simple moves. Second, we prove (2) in the former case (the latter case can be shown in a similar way). Take pants decompositions of the Riemann surfaces C_i (i = 1, 2) which together give a pants decomposition of C_1♯C_2, and denote by p_∞ the associated tangential point at infinity. Then one can obtain the required isomorphism from the description by p_∞ of the space of tempered conformal blocks associated with C_1♯C_2. The assertion (3) follows from the construction of the space of tempered conformal blocks and the proofs of (1) and (2).
A Deep Survey of the Fornax dSph I: Star Formation History Based on a deep imaging survey, we present the first homogeneous star formation history (SFH) of the Fornax dwarf spheroidal (dSph) galaxy. We have obtained two-filter photometry to a depth of B ~ 23 over the entire surface of Fornax, the brightest dSph associated with the Milky Way, and derived its SFH using a CMD-fitting technique. We show that Fornax has produced the most complex star formation and chemical enrichment histories of all the Milky Way dSphs. This system has supported multiple epochs of star formation. A significant number of stars were formed in the early Universe, however the most dominant population are the intermediate age stars. This includes a strong burst of star formation approximately 3 to 4 Gyr ago. Significant population gradients are also evident. Similar to other dSphs, we have found that recent star formation was concentrated towards the centre of the system. Furthermore, we show that the central region harboured a faster rate of chemical enrichment than the outer parts of Fornax. At the centre, the ancient stars (age>10 Gyr) display a mean metallicity of [Fe/H] ~ -1.4, with evidence for three peaks in the metallicity distribution. Overall, enrichment in Fornax has been highly efficient: the most recent star formation burst has produced stars with close to solar metallicity. Our results support a scenario in which Fornax experienced an early phase of rapid chemical enrichment producing a wide range of abundances. Star formation gradually decreased until ~4 Gyr ago, when Fornax experienced a sudden burst of strong star formation activity accompanied by substantial chemical enrichment. Weaker star forming events followed, and we have found tentative evidence for stars with ages less than 100 Myr. introduction The dwarf spheroidal galaxies (dSphs) are the least luminous galaxies known. They display remarkably high mass-to-light ratios (∼100 to 1000), and the stars in each system are known to reside at the centre of a massive dark halo (M vir ∼ 10 8 − 10 9 M ⊙ ) which extends far beyond the observed limiting radii (Walker et al. 2007;Simon & Geha 2007). In terms of stellar population, these systems can have surprisingly complex star formation histories (SFHs). All dSphs contain a population of ancient stars , however some (such as Fornax) have been able to maintain multiple epochs of star formation and chemical enrichment over a Hubble time. These systems are relatively simple environments compared to larger galaxies, and are therefore a starting point in the study of star formation and enrichment. Simulations suggest a cyclical process, in which the gas collapses to form stars, is then chemically enriched and blown out by pockets of massive star formation, and then collapses again to repeat the cycle. Salvadori et al. (2008) propose a time frame of ∼250 Myr for a single cycle. There are, however, open questions regarding star formation in dwarf galaxies. Population gradients suggest that the most recent bursts of star formation in each dSph were concentrated towards the object's centre (Harbeck et al. 2001). Also, the Milky Way Halo contains a population of extremely metal-poor stars, with abundances of [Fe/H] < −5 (Christlieb et al. 2002;Frebel et al. 2005), whereas the dSphs do not contain a population with metallicities below [Fe/H] ∼ −3 (Helmi et al. 2006). This suggests that the gas sourcing the first generation of stars in these systems was pre-enriched. 
Moreover, Grebel & Gallagher (2004) have argued that the variety of SFHs in dSphs is not the result of reionization, and hence 'local processes' have influenced star formation in each object. These include the regulation of gas dynamics due to internal feedback (Dekel & Silk 1986), while ram pressure stripping and tidal interaction with the Milky Way are also important factors (Mayer et al. 2006). With an integrated absolute magnitude of M_V = −13.1 (Mateo 1998), Fornax is the brightest dSph associated with the Milky Way (excluding the tidally disrupting Sagittarius system). It lies at a distance of 138 kpc, and proper motion measurements indicate its orbit around the Milky Way is roughly circular, with an eccentricity of e = 0.13 (+0.25/−0.02) and an orbital period of 3.2 (+1.4/−0.7) Gyr (Piatek et al. 2007). Walker et al. (2006) have completed a large kinematic survey of this system, collecting radial velocities for 206 member stars over the entire surface of Fornax. They measured a flat velocity dispersion profile, indicating a significant dark component, and find a mass-to-light ratio of M/L_V ∼ 15 within 1.5 kpc (approximately half the tidal radius). Compared to other dSphs, the star formation history of Fornax is unusually complex. All dSphs contain some number of ancient stars (Grebel & Gallagher 2004), however early studies of Fornax revealed an extended giant branch, including carbon stars, suggesting a strong intermediate-age component (Demers & Kunkel 1979; Aaronson & Mould 1980, 1985). Additionally, Fornax contains a significant young stellar component. Originally discovered by Beauchamp et al. (1995), this young population was confirmed by the photometry of Stetson et al. (1998) and Saviane et al. (2000), who identified main sequence stars with ages as young as ∼200 Myr. The analysis of Gallart et al. (2005) indicates a burst of star formation in the centre of Fornax 1-2 Gyr ago, which has continued almost to the present day. Indeed, more than half the stars on the RGB are thought to be younger than 4 Gyr (Battaglia et al. 2006). Of all the dSphs, Fornax has experienced the most recent star formation. Fornax also contains population gradients, in which the young stars are preferentially located towards the centre (Stetson et al. 1998; Battaglia et al. 2006). This property is common in the dSph population (Harbeck et al. 2001) and suggests that the gas required for subsequent star formation episodes was more successfully retained in the core of the dark halo than in the outer regions. In Fornax, the young component is not aligned with the main body and is highly structured, including a shell-like feature which may indicate an accretion event ∼2 Gyr ago (Coleman et al. 2004; Olszewski et al. 2006). In addition to an age spread, the stars in this system cover a significant range in metallicity. Tolstoy et al. (2001), Pont et al. (2004) and Battaglia et al. (2006) have examined the chemical abundances of Fornax red giants using spectra of the Ca II triplet features, and found a metallicity range of −2.5 ≤ [Fe/H] ≤ 0.0. The data set of Battaglia et al. contained enough Fornax members (562) such that stellar metallicities could be accurately related to kinematics. They found that the metal-rich stars have a colder velocity dispersion, and the metal-poor component shows signs of non-equilibrium kinematics towards the centre of Fornax (r < 2r_c).
Additionally, high resolution spectra of 81 red giants in the centre of Fornax indicate that s-process elements are unusually strong, hence stellar winds (such as those from AGB stars) have dominated the chemical enrichment of Fornax in the last 2-4 Gyr (Letarte 2007). Despite the progress described above, Fornax has been lacking a homogeneous study of its SFH over the entire surface of the system. The HST photometry of Buonanno et al. (1999) confirmed the presence of young stars at the centre of Fornax and included evidence of separate bursts of star formation. By combining these results with Ca II triplet metallicities, Tolstoy et al. (2001) created schematic star formation and chemical enrichment histories. The preliminary SFH produced by Gallart et al. (2005) was based on VLT/FORS1 photometry (depth I ∼ 24.5) also located at the centre of Fornax, and another field approximately one core radius from the centre. Young stars were present in both fields, however the authors noted a significant difference in SFH between the two. In summary, although the general trend of star formation at the centre of Fornax is known, the aggregate history is yet to be determined. Hence, we present the first results of a deep, homogeneous photometric data set over the entire surface of Fornax. We have extracted the SFH from this photometry using the CMD-fitting techniques developed by Dolphin (2002). Not only has this allowed us to derive a global SFH for this system, we have also examined the SFH as a function of position to search for population gradients. Our data have a limiting magnitude of B ∼ 23.0, thus we are sensitive to main sequence stars in Fornax with ages of 3 Gyr and less. Also, the red giant branch, red clump and horizontal branch stars allowed us to track the ages and metallicities of the populations with ages > 3 Gyr. This is the first complete SFH of this system derived from deep photometry. 2. the survey 2.1. Data Reduction Images encompassing the surface of the Fornax dSph were obtained using the ESO/MPG 2.2m Telescope equipped with the Wide-Field Imager. This instrument provides a 34′ × 33′ field of view using a 4 × 2 mosaic of 2048 × 4096 pix² CCDs and a pixel resolution of 0.24′′ pix^−1. The survey aim is to obtain photometry to 23rd magnitude over the entire body of Fornax. The first stage presented here contains Fornax itself and the outer shell noted by Coleman et al. (2005). Thus far, 21 pointings have been obtained, covering a sky area of 5.25 deg². An overlap region of ∼4′ between each field was chosen to ensure the photometric zeropoint was constant across the survey. A schematic diagram of the fields is shown in Fig. 1. We obtained three 600 s dithered exposures in both B and R for all fields, allowing us to reach magnitudes of B = 23.0 and R = 23.5 (50% completeness limits) in all fields. The images were taken during 11 nights in Oct/Nov 2006 in median seeing conditions of 1.4′′ (range 0.9′′-2.1′′). The data were reduced using standard procedures in the mscred package in IRAF: the overscan region and the bias frames were used to subtract the pedestal current from each science image, which was then trimmed. Twilight flat fields were combined to produce a master flat frame in B and R for each night, which were then used to flat-field the science images.
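The reduction described above is, at the pixel level, simple arithmetic: estimate and subtract the pedestal from the overscan and the bias frames, then divide by a normalised flat. The Python sketch below only illustrates that arithmetic under stated assumptions (all frames already trimmed to a common shape, one CCD at a time); it is not the IRAF mscred pipeline that was actually used, and every name in it is hypothetical.

import numpy as np

def reduce_frame(raw, overscan_cols, master_bias, master_flat):
    """Illustrative single-CCD reduction: pedestal/bias subtraction, then flat-fielding."""
    # Pedestal level estimated from the overscan columns of this exposure.
    pedestal = np.median(raw[:, overscan_cols])
    science = raw.astype(float) - pedestal
    # Remove residual two-dimensional bias structure.
    science -= master_bias
    # Divide by the twilight flat, normalised to unit median response.
    return science / (master_flat / np.median(master_flat))

def combine_dithers(frames):
    """Median-combine registered, dithered exposures; the median suppresses cosmic rays."""
    return np.median(np.stack(frames, axis=0), axis=0)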
Fig. 1. - Schematic diagram detailing the layout of the fields currently observed. The red ellipses represent the core and tidal radii (Mateo 1998). The dashed lines are the limit of our survey and the dotted lines represent the overlap regions between fields. The shaded circles represent the five globular clusters associated with Fornax and are labelled accordingly. These circles have radii of 1.5′, approximately equal to the tidal radii of each cluster (Mackey & Gilmore 2003). The two arcs represent the shell-like features noted by Coleman et al. (2004, 2005), and the dashed ellipses represent the boundaries of our radial bins.
Three dithered images were taken in each field, and we used routines in the mscred package to combine the images and remove gaps between the CCDs (Valdes 2002). An astrometric solution was constructed for all images by matching them to the first USNO CCD Astrograph Catalogue (UCAC1), which has an average precision of 31 mas in the magnitude range 8 < R < 16 (Zacharias et al. 2000). The rms of our solution was <0.2′′ in all fields. The individual CCD images were then combined using the mscimage routine to produce three single (i.e. non-mosaic) images for each field in both filters. These three images were then median combined with the mscstack routine, which matches images based on their astrometry and removes the CCD gaps. Finally, the combined image for each field was then corrected for zeropoint gradients across the field of view using the mscskysub routine. Photometry Photometry was derived using DAOPHOT (Stetson 1987). We measured the background level in each field to calculate a standard deviation of the sky, σ. Each image was then searched for all sources 4σ above the background level, and aperture photometry was used to estimate their brightnesses. In a crowded field such as the centre of Fornax, the PSF-fitting technique in DAOPHOT provides a more accurate measure of stellar magnitudes compared to aperture photometry, as it allows the signal of adjacent stars to be disentangled. Hence, we examined the brightest 60 stars in each image and used those with no apparent neighbours and a well-defined Gaussian shape to construct a master PSF for each science frame. This was then fitted to every source in the image using a fitting radius of 8 pixels (1.9′′) or 1-2 half-width, half-maxima of the PSF (depending on seeing). To measure the completeness and photometric accuracy of our photometry, we performed artificial star tests on all science frames in both filters (i.e. 42 science images). We placed 1600 artificial stars in the image and attempted to recover them with DAOPHOT, where the photometric uncertainty was then determined as the dispersion of the returned magnitudes about the mean (that is, not the input). This was repeated for artificial stars at every 0.25 magnitudes in the B and R frames. To ensure a constant zeropoint across the survey, we matched stars in the overlap regions between fields using their astrometry and measured the mean inter-field difference in B and R. This is the same technique used in our previous Fornax survey (Coleman et al. 2005). The inter-field corrections were accurate to ∼0.02 mag in both B and R. As an example, the final match between the fields F6 and F11 is shown in Fig. 2. A final zeropoint correction was made by matching our data to the catalogue of Stetson et al. (1998), which contains photometry of the core of Fornax in B and R to a similar limiting magnitude as our survey. The R filter attached to the 2.2m/WFI is a standard Cousins filter, and the R-band photometry was well matched between the two datasets.
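The artificial-star procedure just described reduces to bookkeeping over injected and recovered magnitudes. The sketch below shows one way to tabulate completeness and photometric scatter per 0.25 mag bin; the array and function names are hypothetical, and this stands in for, rather than reproduces, the DAOPHOT-based tests used in the survey.

import numpy as np

def artificial_star_stats(m_injected, m_recovered, bin_width=0.25):
    """Completeness fraction and magnitude scatter per injected-magnitude bin.

    m_recovered should contain np.nan where an artificial star was not recovered.
    """
    edges = np.arange(m_injected.min(), m_injected.max() + bin_width, bin_width)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (m_injected >= lo) & (m_injected < hi)
        rec = m_recovered[in_bin]
        found = rec[~np.isnan(rec)]
        completeness = len(found) / max(in_bin.sum(), 1)
        # Uncertainty taken as the dispersion of recovered magnitudes about
        # their mean (not about the injected value), as in the text.
        scatter = found.std() if len(found) > 1 else np.nan
        rows.append((0.5 * (lo + hi), completeness, scatter))
    return np.array(rows)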
In contrast, the B filter (BB#B/123 ESO878) covers a larger wavelength range than the standard Johnson filter, and therefore requires a colour correction to match standard B magnitudes. The ESO website provides a correction for the B-band photometry using the (B − V) colour; we determined a (B − R) colour correction by comparing our data to the Stetson et al. catalogue, yielding a colour-dependent correction term with a bootstrap error of 0.02 mag. The overall photometric zeropoints are accurate to 0.03 mag. Colour Magnitude Diagram The colour magnitude diagram (CMD) of Fornax reinforces the complex star formation history (SFH) of this object: it contains multiple stellar populations with a vast range in age and chemical abundance. A full description of the stellar populations at the centre of Fornax is given by Stetson et al. (1998) and Saviane et al. (2000). However, the outer regions of Fornax are not well known, and in Fig. 3 we present the first deep CMDs for the entirety of Fornax. It has been theorised that Fornax may contain strong radial population gradients (e.g. Saviane et al. 2000), hence we have divided the data set into the four elliptical regions shown in Fig. 1. The regional boundaries are at radii of r_c, 2r_c, 3r_c and r_t, where r_c = 13.8′ and r_t = 76.0′ are the core and tidal radii listed by Mateo (1998). The distribution of young stars in Fornax is not aligned with the system's major axis and it is also known to contain strong asymmetries (e.g. Stetson et al. 1998), however the majority of these stars are contained in the core region (this will be shown in the next paper in this series; Coleman, in prep.) and hence the assumption of elliptical regions is not vital for this sub-population. The upper four panels show the photometry for sources in Region 1 (r < r_c), Region 2 (r_c < r < 2r_c), Region 3 (2r_c < r < 3r_c) and Region 4 (3r_c < r < r_t). The photometry is 95% complete to a magnitude of B = 23.0, however this number improves slightly (∼0.25 mag) in the outer regions due to less stellar crowding. We have removed all sources with a non-stellar sharpness and a large photometric uncertainty (σ_(B−R) > 0.3 mag). The four lower panels represent the 'signal' of each subset above the background. Region 1 (the core region) contains approximately 28,000 sources. Moving outwards, another 41,000 stars were selected in Region 2, 31,000 in Region 3, and 51,000 in Region 4. For convenience, we have also performed a background subtraction on each CMD, removing the field population using the 'signal-to-noise' technique provided by Grillmair et al. (1995). A full description of this technique as applied to the current data set will be given in the next paper in this series. In summary, we have divided the CMD into a grid of cells, counted the number of Fornax and field stars in each cell, and then used this to remove the field contamination (mostly foreground Milky Way stars). The results are shown in the lower panels of Fig. 3. The central CMD (upper left panel) shows all sources in Region 1, drawn from within the inner red ellipse shown in Fig. 1. Immediately visible is the red giant branch extending downwards from B ∼ 20, which contains old and intermediate age stars (age > 1 Gyr) and is thickened due to the age and metallicity spread. The most densely packed feature is the red clump (RC), centred at (B − R) = 1.3, B = 22. This is made up of core helium burning stars, and is essentially a young-to-intermediate age, metal-rich horizontal branch (HB).
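Because the explicit colour correction is not reproduced above, the following sketch only illustrates how a (B − R)-dependent B-band term can be derived by comparison with a reference catalogue such as that of Stetson et al. (1998); the function names are hypothetical and the coefficients it returns are placeholders, not the published relation.

import numpy as np

def fit_colour_term(b_instr, b_ref, br_colour):
    """Least-squares fit of B_ref - B_instr = zp + c * (B - R) for matched stars."""
    A = np.vstack([np.ones_like(br_colour), br_colour]).T
    coeffs, *_ = np.linalg.lstsq(A, b_ref - b_instr, rcond=None)
    return coeffs  # [zeropoint, colour coefficient]

def bootstrap_colour_error(b_instr, b_ref, br_colour, n_boot=1000, seed=0):
    """Bootstrap uncertainty on the colour coefficient (cf. the 0.02 mag quoted above)."""
    rng = np.random.default_rng(seed)
    n = len(b_instr)
    samples = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        samples.append(fit_colour_term(b_instr[idx], b_ref[idx], br_colour[idx])[1])
    return float(np.std(samples))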
A hint of a HB extending towards the blue is also visible. We also note that the HB appears to extend redward from the RC, however these objects are artefacts of the observational dithering pattern: a CMDselection indicated that they lie pre-dominantly in regions in which only a single frame of B-band data was available. This effect is most prominent in the HB star-rich central four fields, where a few of the HB stars display large photometry errors in the B filter (and hence an artificial colour spread in the HB itself is created). Although it is a small artefact (it exists in less than 1% of the surveyed area), its effect on our star formation history results is discussed further in §3.2.3. Returning to our discussion of the central region CMD, a slight overdensity lies approximately 0.6 mag above the RC, identified as the colour-magitude clumping at the start of the AGB cycle . Below the RC lies the sub-giant branch, which is also thickened by the age and metallicity spread. Finally, the blue column of stars extending downwards from (B − R) = −0.2, B = 20 is the young main sequence. Beauchamp et al. (1995) discovered this feature in Fornax, and Saviane et al. (2000) subsequently identified main sequence stars as young as 200 Myr. Indeed, of all the Milky Way dSphs, Fornax has the most recent star formation. At first glance, the thick red giant and sub-giant branches are common to all four regions, indicating that the wide range of stellar ages and metallicities continues well beyond the core radius. However, the differences encountered in the stellar population when moving outwards from the centre of Fornax are remarkable. We see a decrease in the prominence of the young main sequence, possibly indicating that recent star formation (i.e. less than 4 Gyr ago) was preferentially located towards the centre of Fornax. Another clear difference between the four regions lies in the morphology of the HB. The red clump is present in all four regions, however the HB extends further into the blue region as we move outwards; it terminates at (B − R) ∼ −0.2 in the Region 4 CMD. This indicates that the outer regions of Fornax contain a significant (if not dominant) population of old, metal-poor stars. Further emphasis of this point is provided by the red giant branch (RGB): the lower panels of Fig. 3 show that as we move outwards, the RGB shifts towards the blue, thus indicating a decrease in mean metallicity with increasing radius. Overall, a comparison of the four CMDs shown in Fig. 3 would suggest that later bursts of star formation and chemical enrichment were preferentially located towards the centre of Fornax. 3. star formation history Numerical fitting of CMDs allows a study of the SFH of a dwarf galaxy (e.g. Gallart et al. 1996;Tolstoy & Saha 1996;Aparicio et al. 1997;Dolphin 1997;Holtzman et al. 1999;Olsen 1999;Hernandez et al. 2000;Harris & Zaritsky 2001). Moreover, the large mosaic presented in this paper provides an opportunity to study the spatial variation of the SFH within Fornax. To obtain a detailed picture of the SFH we use the CMD-fitting software MATCH (Dolphin 2002), which applies maximumlikelihood methods to fit photometric data with simple model CMDs. By converting data and models to so-called Hess-diagrams (2-D histograms of the stellar density as a function of colour and magnitude; Hess 1924) a direct pixel-by-pixel comparison is possible. The model CMDs are based on theoretical isochrones from Girardi et al. 
(2002) and include realistic photometric errors and completeness, which are obtained from the artificial star tests described earlier. By determining the best-fitting linear combination of model CMDs for different age and metallicity bins, the SFH and metallicity evolution are then constrained. The accuracy of the recovered metallicities depends not only on the data quality, but also on the quality of the isochrones and the stellar evolution tracks on which they are based. de Jong et al. (2008) tested MATCH on a set of six globular clusters with varying metallicities using isochrones based on the same stellar evolution tracks from Girardi et al. (2002). They show that the recovered metallicities are always within 0.2 dex of the spectroscopic values. In all results presented in this paper we therefore include an additional contribution to the metallicity uncertainties of 0.2 dex. Since there are variations in seeing and sky brightness between the different fields in the mosaic, the SFH fits are done separately for each field. Furthermore, the four radial bins are treated separately to enable an analysis of the radial variation of the stellar populations in Fornax. CMD fitting method The main free parameters in CMD fitting are distance, age, metallicity and extinction, although the binary fraction and the assumed initial mass function (IMF) also play a role. To limit the number of free parameters to the age and metallicity, we assume reasonable priors on the other parameters. For all our fits we assume a binary fraction of 0.5 and a Salpeter IMF (Salpeter 1955), which to the CMD depth probed here is practically equal to, for example, a Kroupa IMF (Kroupa et al. 1993). During the past decade, several different studies have all found consistent distances to Fornax of 138 ± 4 kpc (Mateo 1998; Bersier 2000; Rizzi et al. 2007), justifying a prior on the distance in our SFH fits. To account for the small uncertainty in the distance, we perform all fits for three fixed distances, namely 135, 138 and 141 kpc. Foreground extinction estimates were taken from the dust extinction maps of Schlegel et al. (1998). For each individual fit, the distance and two kinds of extinction are fixed, while the star formation rates (SFRs) for the age and metallicity bins serve as the free parameters. Since the isochrones are spaced more evenly in log(t) than in t, the age bins are defined in log(t). They have bin widths of ∆ log t = 0.15, with the oldest bin corresponding to ages between 11 and 16 Gyr and the youngest to ages between 10 and 16 Myr, for a total of 21 age bins. The full metallicity range spanned by the isochrones, [Fe/H] = −2.4 to 0.0, is covered with 16 bins of 0.15 dex width. To account for contamination by foreground stars and faint background galaxies, a control field CMD is created from all star-like sources outside the limiting radius of Fornax, and used as an additional model CMD in the fits. All stars with −1 < B − R < 3 and B < 23 and R < 23 are fit, using Hess-diagram bin sizes of 0.16 in magnitude and 0.08 in colour. Figs. 4 and 5 show the Hess diagrams of Regions 1 and 3 recovered from Field 15 with the corresponding best-fit models and residuals. In general the fits are good, but some systematic problems arise in reproducing the exact shape of the HB/RC and its extent towards the red. This imperfect modeling of the HB/RC region is a general problem for theoretical isochrones, due to the complicated processes taking place during this phase in stellar evolution. Newer isochrone sets than the one currently used by MATCH (Girardi et al.
2002) should improve this situation, but our current results are not strongly affected by these problems. However, as the details of the theoretical isochrones used do influence the exact values of metallicities and ages we recover, some extra uncertainty should be taken into account when interpreting the fit results. After running the 60 fits (3 distances, 4 foreground and 5 internal extinction values) for each region, the values of the maximum-likelihood measure of goodness-of-fit Q (see Dolphin 2002; de Jong et al. 2008) of all fits are compared. The expected random (i.e. not due to actual SFH differences) variance in this Q parameter, σ, when fitting a specific CMD, is calculated using Poisson statistics and from Monte Carlo simulations using random drawings from the best model. All fits that have a value of Q within 1σ of the best fit are considered as 'good' fits and used to construct the SFHs presented in the remainder of this paper. A Global View As a first pass, we examine the aggregate SFH of the Fornax system. Combining the SFHs obtained from all regions and fields gives the total SFH of Fornax, presented graphically in Fig. 6 and in tabular form in Table 1. Clearly, Fornax has a complex history and has been forming stars continuously over most of the age of the universe, with non-zero SFRs being found in most age bins. After the first stars formed more than 10 Gyr ago, a slow decline is detected until approximately 4 Gyr ago, when the star formation rate sharply increased for a period of ∼1 Gyr. After this sudden intensified star formation episode, a very low level of star formation has continued until the present day. For comparison, we also show in Fig. 6 the schematic SFH derived by Tolstoy et al. (2001) based on photometry and Ca II triplet results. We have shifted the peak of their relative star formation rate to match our peak SFR. The results are well matched for the last few Gyr, however our CMD-fitting code is better able to extract detailed star formation rates from the early history of Fornax. We also show the total stellar mass formed during each bin in Table 1 and Fig. 7. From this, we calculate the total stellar mass formed in Fornax to be 6.1 (+0.8/−0.7) × 10^7 M_⊙, where we have integrated the Salpeter IMF down to a mass of 0.15 M_⊙. Fig. 6. - Total star formation history (upper panel) and age-metallicity relation (lower panel) for Fornax. In both panels, the horizontal error bars indicate the widths of the age bins. The error bars on the star formation rates are not independent; a higher SFR in one bin would be compensated by a higher SFR in adjacent bins. Metallicities are only shown for age bins with a SFR greater than 0.0005 M_⊙ yr^−1 and are SFR-weighted means. Both panels contain a comparison with previous work, represented by the dotted lines. The upper panel contains the schematic SFH derived by Tolstoy et al. (2001) scaled to match our peak star formation rate detection, and the lower panel contains our estimate of the evolution of the average metallicity derived by Battaglia et al. (2006). Fig. 7. - Stellar mass formed during each age bin, taken directly from the total star formation history (Fig. 6). Initially, the average metallicity of the stars shows little evolution. A large fraction of the ancient stars seems to have formed from pre-enriched gas, as the mean metallicity in the oldest age bin (11-16 Gyr) is already [Fe/H] = −1.4. The metallicity only starts to increase significantly from this value at an age of around 5 Gyr.
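The fitting machinery summarised above amounts to maximising a Poisson likelihood for the observed Hess diagram over non-negative weights attached to model Hess diagrams, one per age-metallicity bin. The sketch below captures that idea in a few lines of Python; it is a simplified stand-in for, not a reimplementation of, the MATCH code of Dolphin (2002), and the binning and model grid are whatever the caller supplies.

import numpy as np
from scipy.optimize import minimize

def hess_diagram(colour, mag, colour_edges, mag_edges):
    """Flattened 2-D histogram of stellar density in colour and magnitude."""
    counts, _, _ = np.histogram2d(colour, mag, bins=[colour_edges, mag_edges])
    return counts.ravel()

def fit_sfh(observed, models):
    """Non-negative weights (SFR per age/metallicity bin) maximising the Poisson likelihood.

    observed : flattened observed Hess diagram (counts per bin)
    models   : array of shape (n_models, n_bins); each row is the Hess diagram
               predicted for unit SFR in one bin (a foreground/background model
               can be included as an extra row, as in the text).
    """
    def neg_log_like(weights):
        expected = weights @ models + 1e-12        # guard against log(0)
        return np.sum(expected - observed * np.log(expected))

    x0 = np.full(models.shape[0], observed.sum() / max(models.sum(), 1.0))
    res = minimize(neg_log_like, x0, method="L-BFGS-B",
                   bounds=[(0.0, None)] * models.shape[0])
    return res.x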
Overall, these results are in good agreement with previous studies of the SFH and metallicity of Fornax. We show in the lower panel of Fig. 6 the approximate age-metallicity relation derived by Battaglia et al. (2006) based on spectra of 562 RGB stars extending from the centre of Fornax to the tidal radius. This line represents our estimate of the evolution of the average iron abundance from Fig. 23 of Battaglia et al. (2006), however we note that each age bin encompasses a wide range of stellar metallicities and thus this line should be regarded as a guide only. Although they are well within the uncertainties, our results are slightly more metal-rich than the spectroscopic results. Such a small offset in metallicity might be caused by slight inaccuracies in the isochrones, the photometric calibration, or the distance to Fornax. A Refined View Population gradients are a common feature of dSphs (Harbeck et al. 2001). Generally, more recent star formation events are more centrally concentrated and produce stars with a greater metallicity compared to the older stellar population. In Fig. 8 we show the total SFHs (left) and metallicity evolutions (right) for all four radial regions (e.g. for Region 1 the four SFHs of the portions in fields 10, 11, 15, and 16 have been combined). Overplotted in each panel are the SFHs and metallicities for the individual fields, to give an idea of the field-to-field variation. Since the panels on the right only show the average metallicities and provide no insight into the metallicity distribution, the combined fit results for the four radial regions with the complete age-metallicity grid are shown in Fig. 9. There is a striking difference in the SFH when moving from Region 1 (top panels) to Region 4 (bottom panels). In the center of Fornax, the burst of star formation that occurred 3-4 Gyr ago stands out strongly. Moving outwards, the proportion of stars produced during this burst decreases, and in the outskirts such stars are not found at all. Stars younger than 1 Gyr are also mostly found in the center, with hardly any significant star formation at these ages in the outermost radial bin. This kind of radial dependence, with young stars more centrally concentrated than old stars, is a common characteristic of dSphs, and evidence for this was found before in Fornax (Stetson et al. 1998; Battaglia et al. 2006). At all radii, the metallicity evolution follows the same pattern. It starts off at [Fe/H] ∼ −1.5 and remains constant until ∼5 Gyr ago. Then the metallicity starts to increase rapidly to [Fe/H] ∼ −0.5, after which there is little indication of further enrichment. This rapid increase in metallicity coincides with the strong burst of star formation. Fig. 9. - CMD fitting results for the four radial bins. From left to right the panels are for the central bin (labeled with a "1") to the outermost bin (labeled with a "4"). The grayscale indicates the star-formation rate in each age-metallicity bin, with a darker value corresponding to a higher value. Comparing the SFHs of individual fields within a radial bin (i.e. the coloured lines in Fig. 8) shows no significant difference in the central bin. Since in the central part the dynamical timescales are shortest, the populations are well-mixed. In the outermost region there is also no sign of spatial variation, which is probably because there are mostly ancient stars which have had sufficient time to diffuse throughout the galaxy.
In Regions 2 and 3, however, significant field-to-field variation is visible in the strength of the 4 Gyr burst. The detailed structural properties of the stellar populations in Fornax are the subject of the next paper in this series (Coleman, in prep.). Abundance Variations in the Ancient Stars Although Fornax is found to be relatively metal-rich, Fig. 9 shows that the spread in metallicities is significant. At all ages the metallicity spread is at least ∼1 dex, and especially the oldest stars (>10 Gyr) show a very large spread, with metallicities ranging from [Fe/H] = −0.5 down to our metallicity cut-off at −2.4. Note that in all regions there is a non-zero SFR in the two most metal-rich age bins. This is caused by MATCH attempting to fit the redward extension of the HB (see Fig. 3) with an additional, very red RC. This redward extension is an artefact of the observational dithering pattern in which some (less than 1%) of the HB stars have poor B-band photometry, hence this detection of very metal-rich, ancient stars is also an artefact, and they are excluded from our star formation history. Based on spectroscopy of 562 RGB stars in Fornax, Battaglia et al. (2006) found two distinct populations: a metal-rich ([Fe/H] ∼ −0.9) component, and a metal-poor ([Fe/H] ∼ −1.7) component with a large metallicity spread. Consistent with observations of other dSphs, they found the metal-rich stars to be more centrally concentrated than the metal-poor population. To examine the metallicities of the oldest stars, we show histograms of the metallicity distribution of the stars in the oldest age bin in Fig. 10. In Region 1, the metallicity distribution of these ancient stars peaks at [Fe/H] ≃ −1, but also contains many stars with lower metallicities. The peak at [Fe/H] ≃ −1 is also apparent in Region 2, but becomes less strong in Region 3 and is practically absent in Region 4. This outer bin appears to harbour a broader peak at [Fe/H] ≃ −1.5, and a third peak at [Fe/H] ≃ −2. It should be noted that this third peak is most likely an accumulation of more metal-poor stars outside our metallicity range. However, the histograms of the inner three regions are consistent with the presence of three peaks at metallicities of [Fe/H] ≃ −1.0, −1.5, and −2.0 dex. To summarise, we have recovered the metal-rich component (centred at [Fe/H] ∼ −1.0) of Fornax, however our results support the presence of three distinct peaks in the metallicity distribution of the ancient stars rather than the two proposed by Battaglia et al. (2006). To test the statistical significance of the three peaks, we created ten Monte Carlo realisations of the Fornax stellar population using our SFH and age-metallicity relation. A comparison of the resulting metallicity distributions indicated that, despite some slight differences, the three-peaked function was present in every synthetic population. We therefore argue that the metal-poor component of Battaglia et al. (2006) can be sub-divided into two separate populations. This suggests that Fornax experienced three main star formation events in the period >10 Gyr ago, a hypothesis to be tested with further spectroscopic data. The Inner Shell In a previous, wide-area survey, Coleman et al. (2004) noticed a shell-like feature near the center of Fornax in our field 15 (see Fig. 1). Subsequently, Olszewski et al. (2006) confirmed the presence of the feature based on deep photometry obtained with Magellan, and determined an age of 1.4 Gyr and a metallicity of [Fe/H] ∼ −0.7 for its stars.
Because of the relatively small number of stars in this overdensity and the large contamination by the overall Fornax stars, obtaining a SFH is difficult, as also shown by Olszewski et al. (2006). Therefore, we opt for constraining the properties of the stars in the feature using a single component (SC) fitting strategy, described in detail in de Jong et al. (2008). In short, simple stellar population models with a narrow age and metallicity range are fit to the observed CMD and their goodness-of-fit is compared to that of the best-fitting single component model. All stars in a 2.5′-wide elliptical annulus containing the inner shell were extracted from the survey. This annulus was then divided into two regions: the 17° arc containing the shell, and the remainder of the annulus, which was used as a control field. The control field-subtracted Hess diagram of the shell is shown in panel (a) of Fig. 11, where the MSTO and RC of the shell population clearly stand out. Panel (b) of the same figure shows the areas in the age-metallicity plane that lie within 1, 2, and 3σ of the best-fit values, indicated with an asterisk. We find the age of the stars in the shell to be 1.6 ± 0.4 Gyr and the metallicity [Fe/H] = −0.9 (+0.3/−0.2) dex. Our SFH results indicate a very low SFR for this age, and the average age-metallicity relation predicts a higher metallicity of [Fe/H] ≃ −0.5 for stars of the general Fornax population of the same age. Thus, these results seem consistent with the interpretation of the shell resulting from an accretion event (Coleman et al. 2004, 2005), rather than being part of the underlying stellar populations. This accretion hypothesis will be discussed further in a later publication. Luminosity History With the SFH in hand, it is possible to construct the CMD of Fornax as it would have looked at some time in the past. In this way, the total luminosity of the system can be traced as a function of time. Combining the SFH fits of all fields, the overall SFR as a function of age and metallicity was used to construct artificial CMDs for Fornax at various points in the last 10 Gyr, assuming in all cases a binary fraction of 0.5 and a Salpeter IMF. By extending the CMDs far enough down the LF, the flux of all stars can be used to calculate the total luminosity. Although several systematic effects hamper a very precise measurement of M_V, the value we obtain for the present day, M_V ≃ −13.0, is very close to the literature value of M_V = −13.1 (Mateo 1998). The evolution of the V-band luminosity of Fornax during the past 10 Gyr is shown in Fig. 12. This figure is for illustrative purposes only, and any uncertainties given would be estimates at best. However, we can see that while Fornax has experienced a general trend towards increasing brightness as more gas is converted to stars, there are significant variations caused by bursts of star formation. We find that the total luminosity of Fornax has a range of at least 1 magnitude over a Hubble time, hence at some point it was less than 50% of its current brightness. discussion The mechanisms of star formation and chemical enrichment in low-mass galaxies are not well understood. Qualitatively, the process is simple to envision. Initially, gas collapses at the centre of a dark halo to form the first generation of stars. The heaviest of these quickly evolve and blow out material, injecting chemicals and energy into the surrounding gas cloud.
This stellar feedback causes the enriched gas cloud to expand and, if the potential well is deep enough, then collapse back to the centre of the dark halo to create a new generation of chemically enriched stars. Thus, a cyclical process is established in which multiple generations of stars form with a progressive increase in chemical abundance. However, there are still many unresolved questions regarding dSph star formation. Helmi et al. (2006) have shown that, in contrast to the Galactic Halo, the dSphs are conspicuously lacking stars with [Fe/H] < −3.0. This suggests that the gas sourcing the oldest stars in dSphs was pre-enriched. Given that all dSphs are thought to contain some fraction of ancient stars (i.e. with ages > 13 Gyr; Held et al. 2000) the initial enrichment must have been an extremely rapid process, via some mechanism which is unclear. Furthermore, the SFHs for each dSph are remarkably different. Some dSphs contain purely old stars (ages > 10 Gyr) and are characterised by a simple SFH (e.g. Draco), whereas others are dominated by intermediate-age stars and have been able to maintain multiple epochs of star formation and chemical enrichment (e.g. Fornax). This is the leading question concerning star formation in dwarf galaxies: why do Draco and Fornax reside at the centres of dark halos with similar masses (Walker et al. 2007), yet they differ in brightness by a factor of more than ten? Mayer et al. (2006) discussed this point, and described two possible scenarios: (i) Either Fornax and Draco initially contained the same amount of gas, hence local effects have allowed Fornax to produce stars with a greater efficiency, or, (ii) Fornax had access to a tenfold greater reservoir of gas, and Draco experienced a similar (but scaled down) initial star formation. We note that these are extreme scenarios, and something in between is not precluded. Grebel & Gallagher (2004) found that the reionization of the universe did not cause the expected reduction in dSph star formation, hence 'local effects' are thought to be the dominant factor producing the variety of dSph SFHs. Differences in the level of tidal distortion, mechanical feedback and gas infall experienced by each system are cited, however the precise nature of these local effects is uncertain (e.g. Dekel & Silk 1986;Mayer et al. 2006). In this regard, Ferrara & Tolstoy (2000) have noted that a dSph's dark matter fraction will influence its SFH. External forces (e.g. tidal and ram pressure stripping) can remove blownout gas from a dSph, however a massive dark halo will allow the satellite to retain its gas. Ancient Stars Fornax contains a large number of stars and has a complex star formation history, hence it is an ideal object to compare to simulations. Firstly, our best fits show that the ancient stars (age > 10 Gyr) in this system have a mean metallicity of [Fe/H] ≈ −1.4 (Fig. 6). This indicates that the first few Gyr contained an intense period of star formation and chemical enrichment. At some level, this enrichment appears to have occurred throughout the whole body of Fornax: Fig. 9 indicates that the oldest stars in every radial bin contain a number of [Fe/H] ∼ −1.0 stars. However, we also note a metallicity gradient in this ancient population, such that the central stars display a mean iron abundance approximately 0.3 dex greater than those in the outer regions. This is consistent with the spectroscopic results of Battaglia et al. (2006). The three peaks in the metallicity distribution function (Fig. 
10) are possibly evidence for three main bursts of star formation in the early Universe. In summary, while our results indicate that the first few Gyr saw a swift chemical enrichment process in Fornax, they also show that this enrichment was enhanced towards the centre of the system. In general, our results for the first few Gyr of Fornax are well reproduced by models. Marcolini et al. (2006) constructed a 3D hydrodynamic model describing gas dynamics and chemical enrichment in a dwarf galaxy including the contribution of supernovae. The metallicity distribution function we find for the ancient Fornax stars is similar to that produced by the Marcolini et al. model. Furthermore, they find that stars located towards the centre of a dSph are the product of a more efficient chemical enrichment (Marcolini et al. 2008), as is seen in Fornax (Battaglia et al. 2006). Salvadori et al. (2008) presented a semi-analytic cosmological model following star formation in a dSph galaxy in a Milky Way-type environment, pre-enriched to an abundance of [Fe/H] ∼ −3. They showed that a dSph experiences intense star formation in the first few hundred Myr, with multiple cycles of starbursts followed by gas blowout and infall, with accompanying chemical enrichment. Their metallicity distribution function and mean metallicity are roughly equivalent to those we measured for the old stars in Fornax. It is clear that simulations are able to accurately reproduce the first epoch of star formation in a dSph environment. Salvadori et al. (2008) find a rapidly decreasing star formation rate with time, such that approximately 2.5 Gyr after virialization the rate has fallen well below 10^−4 M_⊙ yr^−1. This corresponds to an age of ∼9 Gyr, or the second data point in our global SFH for Fornax (Fig. 6), where we measure a star formation rate of approximately 3 × 10^−3 M_⊙ yr^−1. Hence, although the Salvadori et al. (2008) model provides an excellent reproduction of star formation in a dSph such as Sculptor, they note that a more complex SFH (such as that seen in Fornax) requires a different set of conditions. Intermediate Age Stars In this context, we find the star formation rate in Fornax to be an approximately constant value of 3 × 10^−3 M_⊙ yr^−1 in the period from 9 to 4 Gyr ago. This period also witnessed a slow, monotonic increase in iron abundance. However, star formation in Fornax experienced a sudden increase approximately 3 to 4 Gyr ago, jumping threefold to ∼10^−2 M_⊙ yr^−1 with an accompanying spike in chemical enrichment. These results all agree with the Ca II triplet results of Pont et al. (2004). Our results also suggest that this epoch of star formation was relatively short-lived, lasting 1-2 Gyr, and was confined to the central ∼0.5 r_t (1500 pc) of Fornax (Fig. 8). There are a variety of explanations for this continued star formation. Salvadori et al. (2008) state that a refinement of the reionization criterion would allow massive dSphs with a lower initial gas-to-dark matter ratio in their models, thereby leading to less efficient mechanical feedback and more regular star formation activity. This could explain the steady star formation seen in the period from 9 to 4 Gyr ago, however it cannot account for the subsequent burst. As an alternative scenario, Fornax may have experienced an injection of new gas to fuel this next generation of stars.
We have already presented evidence for a merger in Fornax, proposing that a gas-rich dwarf galaxy merged with this system to fuel strong star formation activity (Coleman et al. 2004, 2005). However, the timing is problematic: the original scenario requires that this merger occurred approximately 2 Gyr ago, or 2 Gyr after the sudden burst of star formation found here. Furthermore, our chemical enrichment history shows that the new burst of stars was accompanied by a sudden increase in abundance, hence this would imply that, prior to star formation, the metal abundance of the gas was at least that of Fornax and therefore unlikely to be of foreign origin. We would therefore argue that the 4 Gyr burst was fueled by gas originating in Fornax. This scenario requires gas blown away by star formation to have resided in the outer regions of Fornax for at least 5 Gyr before collapsing back to the dSph. Gas expelled to the outer regions (or even the halo) of a satellite system is generally expected to be removed by ram pressure stripping and tidal distortion. However, Blitz & Robishaw (2000) presented evidence that H I clouds exist in many Local Group dSphs, situated up to 10 kpc (or, approximately three tidal radii in the case of Fornax) from the centre of the system. This is supported by the strong evidence for H I associated with Sculptor (Carignan et al. 1998; Bouchard et al. 2003). Indeed, Bouchard et al. (2006) discovered an H I cloud located to the North of Fornax. Whether this is associated with the dSph is not certain (the radial velocity of Fornax is inconveniently close to that of the Milky Way in this direction), however Bouchard et al. (2006) state a minimum mass of 1.5 × 10^5 M_⊙ at the distance of Fornax. Mayer (2005) noted that an object following an orbit with a low ellipticity (such as Fornax; Piatek et al. 2007) will be better able to retain its gas. It therefore seems possible that the massive, extended dark halo of Fornax could have allowed gas in the outer regions to remain bound to the system for an extended period. An external influence on star formation in a satellite system is tidal interaction with the larger host galaxy (Mayer et al. 2006). Tidal forces experienced by a satellite system as it orbits its host can induce bursts of star formation (Barton et al. 2000) as the interstellar gas clouds are compressed (Mihos & Hernquist 1996). Using HST images over a four-year epoch, Piatek et al. (2007) measured the proper motion of Fornax and derived an orbital period of 3.2 (+1.4/−0.7) Gyr with an eccentricity of e = 0.13 (+0.25/−0.02). Tidal forces scale as R^−3, hence this orbit implies that Fornax will experience a distortion force change of at least 50% as it moves from pericentre to apocentre. A pericentric passage approximately 4 Gyr ago is possible within the current solution (we are grateful to S. Piatek for sharing his orbital code), however the uncertainties are currently too large for a fair comparison between orbit and SFH. Recent Activity Finally, we examine the recent activity in Fornax. Following the 4 Gyr burst, there was another star forming event 400-600 Myr ago, and a more recent event approximately 100 Myr ago (Fig. 6). These 100 Myr old stars have been previously noted by Stetson et al. (1998) and Saviane et al. (2000), and are the youngest stars yet observed in a dSph. However, as a new result, we have also detected a small number of younger stars. Table 1 indicates that in the period 10-100 Myr ago, ∼1500 M_⊙ were formed in Fornax.
This is minute compared to Fornax as a whole (see Fig. 7), yet it is tentative evidence that Fornax may have been forming stars almost to the present day. We ignore the abundance measurements for this youngest population, given that they are based on only a few stars in each age bin and have significant errors. To demonstrate the difficulty in detecting this small, age < 100 Myr population, we show an artificial CMD of the Fornax young stars in Fig. 13. The left panel is a 'typical' prediction for the young stars based on our SFH. Extinction, photometric errors and completeness for the survey have been included. Fig. 13. - Artificial CMD of the young stars in Fornax (left panel); the blue points represent the <100 Myr stars. The right panel shows the CMD of the core region of Fornax. Note that the distribution of the blue points (age < 100 Myr) is almost identical to that of the red points (age = 100 Myr), thus these populations are degenerate and it is not possible to distinguish between them based only on photometry. However, it shows that such a very young population could easily have evaded detection up to now. Given the small number of young stars predicted by our SFH, and the difficulty in separating them from slightly older stars, we classify this as tentative evidence for an ultra-young population in Fornax. Nonetheless, we expect very little foreground contamination blueward of B − R = 0.4, hence the brightest blue stars shown in the right panel would argue that Fornax did indeed experience star formation less than 100 Myr ago. High resolution spectra of these young stars are required to accurately determine their age. Lithium is an important age diagnostic for late-type stars as it is easily destroyed in stellar interiors, hence a follow-up survey could target the resonance doublet of Li I at 6708 Å (e.g. Montes et al. 2001) in these young stars to demarcate the age of the most recent star forming event in Fornax. summary and conclusions Based on two-filter photometry to a magnitude of B ∼ 23, we used a CMD-fitting technique to derive the star formation history for the Fornax dSph. All dSphs contain some number of ancient stars, however these systems are known to display a wide variety of SFHs. Fornax formed a significant number of its stars in the early Universe (age > 10 Gyr) and subsequently experienced a constant star formation rate. This behaviour can be reproduced by the simulations. However, in the period 3-4 Gyr ago, Fornax experienced a sudden burst of star formation, approximately three times the rate for the previous 5 Gyr. The cause of this activity is unclear. The star formation rate has been decreasing ever since, with smaller events 400-600 Myr and 100 Myr ago. We also find tentative evidence for a small number of stars (total mass ∼1500 M_⊙) which have formed in the past 100 Myr. Fornax contains the most recent star formation activity of any Local Group dSph. Strong radial gradients in the SFH are also evident. As noted by previous authors (Stetson et al. 1998; Saviane et al. 2000), recent star formation has been concentrated towards the centre of Fornax. This trend is seen in other dSphs (Harbeck et al. 2001). Furthermore, we have found that chemical enrichment was more efficient at the dSph's centre. Even the oldest stars display a metallicity gradient, such that the inner stars have a mean iron abundance approximately 0.3 dex greater than the outer stars.
Indeed, the first few Gyr in Fornax appear to have been a time of intense star formation and chemical enrichment: the age > 10 Gyr stars display a mean abundance of [Fe/H] = −1.4. The oldest stars in Fornax also show three peaks in the metallicity distribution, possibly evidence for three main bursts of star formation in the period >10 Gyr ago. We find the metallicity to have increased monotonically in the period 9-4 Gyr ago, and then experienced a sharp increase in conjunction with the intense burst of star formation described above. Thus, while the first few Gyr of star formation in Fornax can be reproduced by models, the cause of the burst 4 Gyr ago is unclear. We have previously proposed that a gas-rich dwarf galaxy merged with Fornax approximately 2 Gyr ago to produce sub-structure (Coleman et al. 2004, 2005), however we cannot reconcile the timing of this event with the observed peak in the SFH. Therefore, we suggest that gas enriched and blown out by earlier star forming events settled in the outer regions of Fornax, and then re-collapsed to fuel an intense period of star formation at the centre of the dSph. This event may have been caused by tidal interactions with the Milky Way during a pericentric passage.
Deep Brain Stimulation for Obsessive-Compulsive Disorder: A Meta-Analysis of Treatment Outcome and Predictors of Response Background Deep brain stimulation (DBS) has been proposed as an alternative to ablative neurosurgery for severe treatment-resistant Obsessive-Compulsive Disorder (OCD), although with partially discrepant results probably related to differences in anatomical targetting and stimulation conditions. We sought to determine the efficacy and tolerability of DBS in OCD and the existence of clinical predictors of response using meta-analysis. Methods We searched the literature on DBS for OCD from 1999 through January 2014 using PubMed/MEDLINE and PsycINFO. We performed fixed and random-effect meta-analysis with score changes (pre-post DBS) on the Yale-Brown Obsessive Compulsive Scale (Y-BOCS) as the primary-outcome measure, and the number of responders to treatment, quality of life and acceptability as secondary measures. Findings Thirty-one studies involving 116 subjects were identified. Eighty-three subjects were implanted in striatal areas—anterior limb of the internal capsule, ventral capsule and ventral striatum, nucleus accumbens and ventral caudate—27 in the subthalamic nucleus and six in the inferior thalamic peduncle. Global percentage of Y-BOCS reduction was estimated at 45.1% and global percentage of responders at 60.0%. Better response was associated with older age at OCD onset and presence of sexual/religious obsessions and compulsions. No significant differences were detected in efficacy between targets. Five patients dropped out, but adverse effects were generally reported as mild, transient and reversible. Conclusions Our analysis confirms that DBS constitutes a valid alternative to lesional surgery for severe, therapy-refractory OCD patients. Well-controlled, randomized studies with larger samples are needed to establish the optimal targeting and stimulation conditions and to extend the analysis of clinical predictors of outcome. Introduction Obsessive-compulsive disorder (OCD) is characterized by the presence of upsetting, persistent thoughts, images, or impulses, which are experienced as intrusive and senseless (obsessions) and/or excessive repetitive behaviors or mental acts (compulsions) intended to neutralize the anxiety induced by the obsessions [1]. OCD has a lifetime prevalence of 2.3% [2] and causes substantial dysfunction in social adjustment, employment, marriage, family relationships and socioeconomic status [3]. Despite exhaustive use of optimal behavioral and pharmacological treatments, an estimated 10% of OCD patients remain resistant to all therapies and suffer from severe symptoms leading to marked functional impairment [4]. Deep brain stimulation (DBS) has been proposed as a last-resort option and an alternative to stereotactic lesional neurosurgery for this group of extremely disabled patients. DBS permits focal, adjustable and reversible neuromodulation through the implantation of electrodes that send electrical impulses to specific locations in the brain. In recent years DBS has been tested as a therapeutic option for several neuropsychiatric conditions including OCD, depression, anorexia nervosa and addictions [5]. 
In the case of OCD, the therapeutic effect of DBS has been tentatively related to its capacity to modulate abnormal activity and synaptic connectivity in circuits involving the orbitofrontal cortex (OFC), anterior cingulate cortex (ACC) and striatum [6], brain areas that have been implicated in the pathophysiology of the disorder [7]. Reductions in OCD severity in response to DBS range from 52-54% in patients receiving ventral capsule/ventral striatum (VC/VS) or nucleus accumbens (NA) stimulation to 41% in those with electrodes implanted in the subthalamic nucleus (STN) [8]. The percentage of responders (subjects with a reduction in symptom severity of at least 35%) varies from 10% [9] to 61.5% [10]. These discrepant results may be at least partially related to the differences in anatomical targeting, electrode design and stimulation protocols used. Certain data also suggest that some manifestations of the disorder, e.g. "just-right" experiences or the need for symmetry, may be less likely to respond to DBS [11, 12], although the low number of patients included in each study has complicated the identification of clinical markers of response. In view of the clinical heterogeneity of the disorder, the analysis of these predictors would be extremely helpful in facilitating the selection of candidates for DBS, since the technique is not free from potentially severe adverse effects and is demanding in terms of economic and human resources. The goals of the current meta-analysis were therefore 1) to systematically record the treatment effects of DBS in severe therapy-refractory OCD patients, and 2) to identify any clinical variables associated with a better response to this therapeutic approach. Search strategy for identification of studies We performed a comprehensive PubMed/MEDLINE and PsycINFO search from January 1999 through January 30, 2014, including the following terms: "deep brain stimulation" or "DBS" in association with "obsessive-compulsive", "obsessive-compulsive disorder" or "OCD". These words were searched as key words, title, abstract and Medical Subject Headings. Reference lists from retrieved reports were reviewed for additional relevant studies. Selection of studies Candidate studies, judged on the basis of their title and abstract, had to satisfy the following criteria to be eligible for inclusion in this review: 1) human studies assessing the efficacy of DBS in OCD according to changes in Yale-Brown Obsessive Compulsive Scale (Y-BOCS) scores or percentage of responders defined by standardized criteria; 2) subjects aged 18-75 years with a diagnosis of OCD according to the Diagnostic and Statistical Manual of Mental Disorders IV [1] or International Classification of Diseases criteria [13]; 3) studies published in English in peer-reviewed journals. Data extraction Data were recorded as follows:
- sample characteristics: age, gender, age at onset of OCD, duration of OCD, OCD symptom dimensions.
- DBS-related: brain target, lead model, duration of stimulation.
- study-related: single or double-blind; sham-controlled; parallel or crossover designs
- primary outcome measure: score changes (pre-post DBS) on the Y-BOCS
- secondary outcome measures: number of responders to treatment based on standardized criteria (> 35% reduction in post-treatment Y-BOCS scores) and changes on quality of life (QOL) measures
- acceptability of treatment: overall dropout rates and side effects
Data synthesis and analyses Analyses were performed using the statistical software R 3.0.1 [14] with the meta package for meta-analysis [15] and IBM SPSS Version 20 (IBM Corporation, Chicago, IL, USA). Weighted proportion meta-analysis was used to adjust for study size using the DerSimonian-Laird model to allow for the inclusion of heterogeneity in the analysis. Effect sizes were calculated with fixed- and random-effects models, and risk ratios were presented as a forest plot. The forest plot shows study-specific risk ratios (and their 95% CIs) and the relative weighted contribution of each study, as well as the risk ratio estimate pooled across all studies. Heterogeneity was assessed using the Q statistic and the I² index [16]. Values of p < 0.1 for the former and > 35% for the latter were deemed indicative of between-study heterogeneity [17]. Student's t test and Spearman's rank correlation were used to analyze the association of age and age at OCD onset with response to DBS. Pearson's chi-squared test and the Wilcoxon test were used to study gender differences and the influence of neuroanatomical target on response to DBS. Finally, Fisher's exact test was used to assess differences in response to DBS according to OCD symptom dimensions. Literature search The flow of information according to the PRISMA statement, study selection and reasons for exclusion are provided in Fig 1. Our electronic and reference list search found 301 potentially relevant studies after discarding duplicates. Of these, 270 were not included in the analysis because they met exclusion criteria. Thirty-three articles met the eligibility criteria (see Table 1). One was excluded because it was the abstract of a poster presentation [18] and another reported results on comorbid anorexia nervosa in an OCD patient treated with DBS but did not provide information on changes in OCD symptoms [19]. Studies included: main characteristics Thirty-one studies were included in this meta-analysis [9][10][11][12], comprising 116 subjects with OCD treated with DBS (S1 Table). The main characteristics of the included studies are described in Table 1. Twenty-four studies including 83 patients addressed DBS of "striatal areas", including the anterior limb of the internal capsule (ALIC), the ventral capsule and ventral striatum (VC/VS), the nucleus accumbens (NA) or the ventral caudate nucleus; five studies including 27 patients reported results on stimulation of the subthalamic nucleus, and two studies from Mexico, including six patients, described results of DBS applied at the inferior thalamic peduncle. Stimulation parameters were highly heterogeneous between studies: although all of them employed high-frequency stimulation (from 100 to 130 Hz), pulse width ranged from 60 to 450 μs and voltage from 2 to 10.5 V; different models of electrodes (3387, 3887, 3487; Medtronic Inc, Minneapolis, Minnesota) as well as active contact points were also used in the different samples. Authors of some of the articles included in the meta-analysis were contacted to gather further information.
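Before the pooled results are presented, a rough illustration of the pooling computation described under Data synthesis and analyses may be useful: the Python sketch below applies DerSimonian-Laird random-effects weighting and reports Cochran's Q and the I² index. The per-study percentages of improvement and standard errors are made-up placeholders, not data from the included studies.

```python
import numpy as np

def dersimonian_laird(means, ses):
    """Random-effects pooling of per-study estimates (DerSimonian-Laird).

    means : per-study mean % Y-BOCS reduction
    ses   : corresponding standard errors
    Returns the pooled estimate, its 95% CI, Cochran's Q and I^2 (%).
    """
    means = np.asarray(means, dtype=float)
    ses = np.asarray(ses, dtype=float)
    w = 1.0 / ses**2                        # fixed-effect (inverse-variance) weights
    mu_fixed = np.sum(w * means) / np.sum(w)

    # Heterogeneity: Cochran's Q and the I^2 index
    q = np.sum(w * (means - mu_fixed) ** 2)
    df = len(means) - 1
    i2 = max(0.0, (q - df) / q) * 100.0

    # Between-study variance tau^2 (method-of-moments estimator)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights and pooled estimate
    w_re = 1.0 / (ses**2 + tau2)
    mu = np.sum(w_re * means) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    return mu, (mu - 1.96 * se_mu, mu + 1.96 * se_mu), q, i2

# Three hypothetical studies: (% improvement, SE)
mu, ci, q, i2 = dersimonian_laird([40.0, 55.0, 30.0], [6.0, 8.0, 5.0])
print(f"pooled = {mu:.1f}%, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f}), Q = {q:.1f}, I2 = {i2:.1f}%")
```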
Pre-post severity of OCD symptoms Patient-level data relating to Y-BOCS score changes were available for 13 studies, including 66 patients. Mean percentages of reduction, standard deviations and standard errors for each study were calculated to perform the meta-analysis on the percentage of improvement. The fixed-effect model could not be used since it overestimates the percentage of improvement due to the excessive weight of Mallet et al.'s results [21] in two patients with high and almost identical percentages of improvement. The random-effects model estimates the global percentage of improvement at 45.1% (95% CI = 29.4% to 60.8%). This wide confidence interval can be attributed to the reduced sample size of the studies as well as to their heterogeneity (Q = 734.6, df = 12, p < 0.0001; I² = 96.4%) (see Fig 2 for the associated forest plot). Percentage of responders Response to treatment, defined by operationalized criteria as a reduction in Y-BOCS scores > 35%, was analyzed in studies including more than one subject in order to estimate its variability. Patient-level data were available from 12 studies, while four provided results on pooled data (percentage of responders in the study); overall, the global percentage of responders was estimated at 60.0%. Age and gender. No significant differences were detected between responders and non-responders to DBS in terms of age (responders 38.6 years ± 11.1 versus non-responders 37.2 years ± 8.4, t = -0.6, df = 74.9, p = 0.5) or gender (responders: 26/19 male/female; non-responders: 20/14 male/female, χ² = 0.009, p = 0.9). Current age was not correlated with percentage of Y-BOCS score reduction (Spearman's Rho = 0.07, p = 0.5). No significant differences were detected between males and females in percentage of Y-BOCS score reduction (male: 41.7% ± 27.1 versus female: 43.4% ± 27.0, Wilcoxon test W = 272.5, p = 0.2) (S2 Table). Age at OCD onset. Responders to DBS reported a significantly older age at onset of OCD than non-responding patients (responders 17.1 years ± 7.9 vs non-responders 13.7 years ± 6.9, t = -2.0, df = 67.1, p = 0.04, 95% CI = -6.7 to -0.03). A trend toward a significant positive correlation between age at onset of OCD and percentage of Y-BOCS score reduction after DBS was detected (Spearman's Rho = 0.2, p = 0.05). Acceptability of treatment Five patients dropped out from DBS without completing the planned period of stimulation, representing 4.7% of the implanted patients. Two of these cases were in the early study by Nuttin et al. [23]; both patients finally underwent anterior capsulotomy due to the limited benefits of DBS and extremely fast battery depletion. The other three subjects were from the Mexican group who received DBS at the inferior thalamic peduncle [35]. One died of a cocaine overdose, another presented tuberculous meningitis and was explanted, and the last one stopped attending follow-up controls after 18 months of DBS. Side effects reported in the various studies are presented in Table 2. Discussion The aim of this study was to measure the response to DBS in severe treatment-resistant OCD patients using meta-analysis. The data available from 116 subjects produced a global percentage of Y-BOCS score reduction of 45.1% and a global percentage of responders of 60.0%. Better response to DBS was associated with older age at OCD onset and with the presence of sexual/religious obsessions and compulsions.
No significant differences were detected in the percentage of responders or in Y-BOCS score reduction between patients who received stimulation of striatal areas and those with STN-implanted electrodes. These results confirm that DBS appears to have an efficacy comparable to that reported for capsulotomy or cingulotomy, ablative techniques after which 64% and 56% of patients respectively are rated as significantly improved [48,49]. Nevertheless, severe adverse effects seem to be less frequent with DBS than with lesional neurosurgery. Three cases of intracranial hemorrhage were reported, representing 2.6% of the total number of patients, compared with figures of 15.8% in some studies of ablative interventions [50]. Five subjects presented an infection of the scalp, chest or abdominal wound, but these were controlled with antibiotic therapy, and just one patient suffered a tonic-clonic seizure. Interestingly, no persistent frontal syndrome, cognitive impairment or personality changes have been described for OCD patients receiving DBS. The most frequent stimulation-related adverse effect was a hypomanic state, or at least some kind of mood disinhibition, reported in nearly one in five patients. Transient worsening of anxiety while searching for optimal stimulation parameters has also been frequently described. Nevertheless, almost all studies describe these stimulation-related adverse effects as mild, transient and reversible after adjustment of the stimulation parameters. Five drop-outs were registered among the 116 implanted patients worldwide. Two of them were patients in the early study in Belgium by Nuttin et al. [23], when experience in the use of the technique was still limited, while the last three were from the Mexican group implanted at the inferior thalamic peduncle [35]. This Mexican sample is not directly comparable to the others included in this meta-analysis, since 50% of the subjects presented alcohol and cocaine dependence, a comorbidity generally considered an exclusion criterion for DBS use in OCD. So DBS, although not an innocuous procedure, appears to constitute a safe therapeutic option for severe treatment-resistant OCD patients, associated with mild and transient emotional and somatic side effects. On the other hand, DBS imposes its own burdens, including the need for programming by an expert center, battery depletion, device failures, the need for urgent interventions in the event of an emergent DBS-related side effect, and high economic cost. Most published studies focus their attention on symptom reduction after DBS, and scarce data are available on changes in quality of life in these highly resistant, chronically and severely ill patients [9,37,47]. Although results are not easily comparable because of the heterogeneity of the assessment tools, studies suggest that despite the invasive nature of the treatment and the discomfort derived from the surgical procedure and the stimulation process, most patients report a significant improvement in at least some aspects of their quality of life. Interestingly, this improvement was not directly correlated with the reduction of symptom severity and was reported even by non-responding patients.
Moreover, QOL keeps on improving years after DBS initiation, even when no further reduction of OCD severity is evident, suggesting that factors other than OCD intensity (anxiety release, reward processing and motivation, affective status) influence QOL and that patients need time to adapt to and benefit from their new situation. Distant DBS effects on abnormal neural connectivity in the cortico-striato-thalamo-cortical circuit involved in OCD might explain why stimulation of different brain regions finally achieves similar percentages of improvement. Stimulation of the STN has been reported to decrease OFC and mPFC metabolism as well as ACC activity [51], while stimulation of the ALIC has similarly been associated with decreased OFC [23,29], subgenual ACC and right DLPFC metabolism [52]. Interestingly, while STN stimulation did not significantly modify comorbid depressive and anxiety symptoms [33], a significant and early improvement in mood and anxiety levels, preceding any change in OCD severity, is commonly reported in patients receiving stimulation in striatal areas [11]. Future studies should address the local and distant effects responsible for the shared as well as distinct mechanisms of action of DBS depending on the specific target, in order to personalize the choice of the optimal implantation area according to the individual presentation of the illness. Better response to DBS was associated with older age at OCD onset. Age of onset has been postulated as an important marker for subtyping OCD. Patients with early-onset OCD show more severe forms of the disorder, poorer prognosis for pharmacological treatment, higher familial aggregation of both OCD and tic disorders, and a specific comorbidity pattern, mainly with ADHD, Tourette's syndrome and bipolar disorder [53,54]. A few studies have directly addressed the existence of differences in neuroimaging findings between OCD patients with early and late onset of the disorder, with inconclusive results. Pediatric studies suggest that children and adolescents with OCD show abnormalities of the putamen, globus pallidus and thalamus [55]. A recent study by Correia et al. [56] addressing the concentration of iron in the basal ganglia suggested a neurobiological distinction between early- and late-onset OCD: late-onset patients, but not early-onset ones, showed significantly higher iron concentrations than healthy controls, particularly in this area, although it is not clear whether iron metabolism plays a direct role in OCD or is just a correlate of other dysfunctions such as altered serotonergic neurotransmission. Therefore, further studies are needed to determine whether any specific structural or functional brain difference associated with early-onset OCD mediates its poorer response to DBS. According to the results of this meta-analysis, the presence of sexual/religious obsessions and compulsions was associated with a significantly better response to DBS. Recent studies have associated this OCD clinical dimension with specific brain functional connectivity patterns: in a study addressing alterations of ventral corticostriatal functional connectivity in OCD, patients with more sexual/religious obsessions demonstrated relatively greater connectivity between the ventral caudate and the middle and anterobasal insular cortex than patients with other symptom dimensions as well as healthy controls [57]. Since Figee et al.
[58] recently reported that the reduction of OCD symptoms after DBS was correlated with a fall in excessive frontostriatal connectivity recorded at baseline, it might be hypothesized that abnormal insulostriatal connectivity is especially sensitive to the capacity of DBS to normalize brain connectivity. Further neuroimaging studies focusing on changes in connectivity patterns in relation to the response to DBS of different OCD symptom dimensions are needed to confirm this hypothesis. The present manuscript has a number of limitations. First, the small sample sizes in the studies included complicate the assessment of inter-study heterogeneity. The studies were heterogeneous in terms of anatomical targeting, electrode design and stimulation parameters. This makes the comparison between studies difficult, and reflects the fact that DBS for OCD is a tool that is still under development. Second, we decided to address the response to DBS in all available patients worldwide instead of restricting our analysis to double-blind sham-controlled studies, since only six of the published studies (with only 45 subjects) reported this design in their methodology. Moreover, even in this small body of studies, the duration of active and sham periods was not easily comparable since it lasted from minutes [20] to three months [33], including 15 days [11], 21 days [29] or 30 days [37]. Nevertheless, in all these six studies, active stimulation was significantly more effective than the sham condition, which had to be shortened or cancelled in many patients due to severe clinical deterioration. There is a need for further well-controlled randomized trials to compare active versus sham DBS. Third, information on OCD symptom dimension, which emerged as one of the clinical predictors of response, was not assessed using specifically designed tools in any study even though it was available for 95 patients. The information must therefore be extracted from clinical descriptions, which limits its replicability. Fourth, no meta-regression analyses could be conducted to establish predictors of response, owing to the small number of patients included. Statistical comparison of subgroups was used, instead, as an exploratory method to address this important clinical issue. Finally, as in all meta-analyses, a potential publication bias and the risk of including limited-quality trials must be considered. We tried to address these concerns by the comprehensive and systematic review of the literature and the use of stringent inclusion criteria. Although the number of severe OCD treatment-resistant patients treated with DBS is still low, and optimal targeting and stimulation parameters are still under debate, the results of this meta-analysis confirm that DBS constitutes an alternative to ablative surgery for this group of extremely ill patients and presents an acceptable adverse effect profile. Further well-controlled randomized studies in larger samples are needed to confirm and extend our findings on clinical predictors of response, and thus to improve both patient selection and response rate. Supporting Information S1
An Audit of Use of Supraglottic Airway Devices in Pediatric Patients Aim: Pediatric patients have unique anatomical, physiological and pharmacological characteristics. The process of administering anesthesia for pediatric surgeries is quite challenging. Such cases are usually performed under general anesthesia using face masks, endotracheal tubes (ETT) or supraglottic airways (SGA), depending upon the type and duration of surgery. Use of SGA has various advantages over the other two, and their use is increasing day by day. We carried out a retrospective audit to extract data on surgeries where SGA were used over a duration of six months. The primary objective was to delineate the percentage of usage of SGA, and the secondary objectives were to study associated complications and identify areas of improvement, if any. Materials and methods: Subsequent to Institutional Ethics Committee (IEC) approval, all perioperative details related to patients and surgeries were collected from anesthesia records. A number of other parameters were also recorded. Results: The number of patients managed under SGA during the 6-month duration was 120 out of a total of 400. Thus, the usage was 30%. There was no difficult SGA placement. Neuromuscular blockers were used in 10% of cases. Dislodgement of the device was noted in 12.5% of patients and laryngospasm in 10%. Change of size of the device was required in seven patients weighing 10 kg. Conclusion: The practice of using these devices has revolutionized the field of pediatric anesthesia, with advantages such as avoidance of the use of muscle relaxants. They are very tachydidactic (quick to learn) and friendly to use. Some vigilance is required to prevent and treat complications associated with their use. Clinical significance: The applications of SGAs are becoming wider day by day, and in the near future, with more advanced devices, they might have still wider applications than endotracheal tubes. INTRODUCTION Pediatric patients are not just small adults. They have unique anatomical, physiological and pharmacological characteristics. The process of administering anesthesia for pediatric surgeries is quite challenging. In our institute a number of such cases are usually performed under general anesthesia using face masks, ETT or SGA depending upon the type and duration of surgery. Use of SGA has various advantages over the other two, and their use is increasing day by day. Advantages of SGA over face mask:
• Provides a more secure and reliable means of ventilation
• Hands of the anesthetist are free
• Can be easily inserted by clinical/nonclinical staff as it requires minimal training
• Lower risk of aspiration and less operating room pollution.
Advantages of SGA over ETT:
• Avoids need for laryngoscopy and the associated stress response
• Avoids need for muscle relaxants
• Reduced requirement of anesthetic agents for airway insertion
• Provides effective ventilation similar to ETT
• Easier to insert, with a short learning curve
• Lower incidence of postoperative sore throat as compared to ETT.
We carried out a retrospective audit to extract data from 6 months of surgeries where SGA was used. The primary objective was to delineate the percentage of usage of SGA and the secondary objectives were to study associated complications and identify areas of improvement, if any. MATERIALS AND METHODS Subsequent to IEC approval, all perioperative details related to patients and surgeries for 6 months were collected from anesthesia records.
Inclusion Criteria
• All elective surgeries where SGA was used
• Pediatric patients less than 12 years of age
• ASA grade 1 and 2 patients.
Exclusion Criteria
• Surgeries where alternative devices were used for airway management
• Emergency surgeries
Outcome Variables The parameters surveyed were:
• Type of SGA (I-gel, ProSeal LMA, Classic) used
• Age and weight of the patient
• Number of attempts required to insert the SGA
• Type and duration of surgery
• Position of the patient
• Whether the patient was maintained on spontaneous or controlled ventilation
• Associated complications
• Postoperative complaints (sore throat, hoarseness)
• Time to discharge
A standard proforma was used for recording these parameters. The SGA was inserted by anesthesia trainees, and by consultants only if the first attempt failed. Successful device placement and adequate ventilation were evidenced by bilateral chest excursion, a square-wave capnogram tracing with positive pressure ventilation and the absence of airway obstruction. Difficult SGA placement was defined as more than three attempts taken to insert the SGA. Desaturation was defined as a fall in saturation to less than 92% and hypercarbia as a rise in end-tidal CO2 to more than 50 mm Hg. Power Calculation and Statistical Analysis This was a purely observational study with no power calculations. For extracting the percentage of usage of SGA, the total number of surgeries performed during the same period was taken as the denominator. Data were assessed for normality and presented as mean (SD), median, range, and frequency (percentage). RESULTS The number of patients managed under SGA during the 6-month duration was 120 out of a total of 400. Thus, the usage was 30%. I-gel was the SGA used in 91% of cases, ProSeal in 28%, and the classic SGA was used in just one case. Ninety-five percent of SGAs were inserted in the first attempt by anesthesia trainees. Four patients required a second attempt, and only two cases required a third attempt. There was no difficult SGA placement. Nearly 72.5% of cases were maintained on controlled ventilation, whereas spontaneous ventilation was used in 27.5% of cases. Neuromuscular blockers were used in 10% of cases (Table 2). Dislodgement of the device was noted in 12.5% of patients. Laryngospasm, on the other hand, was observed in 10% of patients. Change of the size of the device was required in seven patients weighing 10 kilograms. Desaturation, aspiration and hypercarbia were not seen in any patient. Trauma, defined as the presence of blood on the device, was also absent (Table 3). The mean duration of surgery was 1.47 hours, and the mean time till discharge was 3.48 days. DISCUSSION The SGA is a device that facilitates oxygenation and ventilation while sitting immediately outside the larynx to form a perilaryngeal seal. SGAs are an established part of routine and emergency pediatric airway management, and of neonatal resuscitation. The first SGA was invented by Dr Archie Brain in 1988; 1 since then, over the past 30 years, many variations and many new SGAs have come into practice. Early trials found that the pediatric LMA was a scaled version of the adult design and not anatomically designed for children. Since then, improvements in design and in the availability of sizes [size 1 (0-5 kg) to size 3 (30-50 kg)], together with favorable clinical experience, have led to the increasing use of SGAs. The classic, ProSeal, and I-gel sizes 1, 1.5, 2, 2.5 and 3 are suitable for children of various ages. Fastrach and CTrach are not available in pediatric sizes. Size selection is weight-based, as sketched below and elaborated next.
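The weight-based size selection discussed next can be written down as a small helper; in the Python sketch below, the intermediate cut-offs (5, 10, 20 and 30 kg) are the commonly cited LMA weight bands consistent with the size 1 (0-5 kg) and size 3 (30-50 kg) limits quoted above, and resolving a boundary weight to the larger size mirrors this audit's recommendation for borderline weights. It is an illustration only, not clinical guidance.

```python
def select_sga_size(weight_kg: float) -> float:
    """Suggest a pediatric SGA size from body weight.

    The weight bands follow commonly cited LMA sizing (illustrative only);
    a boundary weight resolves to the larger size, in line with the audit's
    recommendation for borderline weights such as 10 kg.
    """
    # (upper weight limit in kg, size); boundaries fall into the next band
    bands = [(5, 1.0), (10, 1.5), (20, 2.0), (30, 2.5), (50, 3.0)]
    for upper, size in bands:
        if weight_kg < upper:
            return size
    raise ValueError("weight outside pediatric SGA sizing range")

print(select_sga_size(4))    # 1.0
print(select_sga_size(10))   # 2.0 -- borderline 10 kg rounds up to size 2
print(select_sga_size(18))   # 2.0
```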
The size of the device suitable for a child is decided by the child's weight. The reference range is written on the LMA tube close to the distal end, along with the cuff volume to be used. We did a retrospective audit of 6 months of data to identify the trends in our practice of using SGAs in the pediatric age group. The percentage of usage of SGA was 30%. Out of 400 surgeries undertaken in the 6-month period, 120 were performed under SGA, 80 cases under face mask and 200 cases under ETT. The usage of ETT was 50%. Thus, over the years SGAs have come to be used for a variety of cases where ETTs were used earlier. Out of 120 patients, 14.1% were female and 85.83% male, urogenital issues being more common in males. The I-gel was used in most patients in view of ease of insertion, availability, and the presence of a gastric drain. Most SGAs were inserted in the first attempt. Only 3% required a second attempt and 1.66% a third attempt. There was no difficult SGA placement. Most SGAs were placed by anesthesia trainees with little experience with the device, proving that the learning curve is short and that they are quite easy to insert. 2 The ones requiring second and third attempts were handled by consultants. Patients were maintained on controlled ventilation in 72.5% and on spontaneous ventilation in 27.5% of subjects. Muscle relaxants were required in 12 cases, demonstrating that SGAs can be used efficiently without the need for a paralyzing agent. Airway obstruction occurs due to malpositioning, folding of the epiglottis, biting on the tube, or laryngospasm. Lingual edema following extubation can lead to a difficult airway situation. Trauma to the lips, gingiva, teeth or tongue can occur because of an inappropriate size. Aspiration of stomach contents is a potential hazard, as these devices do not form an airtight seal around the larynx. Limiting their use to fasted patients and preventing gastric distention can avoid this problem. The smaller the child, the higher the risk of complications. 3,4 Most problems have been reported with the use of the size 1 SGA. Our audit showed similar results. Change of airway device was documented in 10%. Three cases required a shift to ETT, of which two were posted for cystoscopy in the lithotomy position; the change in position must have caused dislodgement of the SGA. Jagannathan et al. did a study comparing the I-gel with the Supreme LMA in sizes 1.5, 2, 2.5 and 3 and concluded that the I-gel required a greater number of manipulations to maintain a patent airway. 5 We found that at borderline weights (for example, 10 kg), when a size 1.5 I-gel was used, it had to be replaced by a size 2 I-gel because of a leak or inefficient ventilation. So, we recommend the use of the bigger size whenever such overlapping of weights is seen. Seven cases were noted which required a change of the size of the device. Abukawa et al. did a study in 70 children with I-gel sizes 1, 1.5 and 2 and concluded that complication rates were higher with the size 1.5 I-gel. 6 Dislodgement of the device was seen in 12.5%. It was mainly seen when the patient's position was changed. To ensure a clear airway, considerable vigilance is required when fixing the I-gel in the mouth and avoiding the negative effects of flexion of the proximal tubing. 7 Careful positioning, fixation, and handling of the device are the key to this problem. Laryngospasm occurred in 10% of patients: in one patient at induction, and in the remainder at the time of removal of the device. The SGA should not be removed in light planes, as this may cause coughing, laryngospasm, hypersalivation or desaturation.
8,9 We recommend removal in a deeper plane of anesthesia, when the patient is not wide awake but breathing spontaneously. In most cases, CPAP helped in relieving the spasm. Three cases were treated with injection propofol 1 mg/kg and two with succinylcholine 0.5 mg/kg to break the laryngospasm. Hypercarbia is mainly associated with spontaneous ventilation: spontaneous ventilation (increased work of breathing, low FRC) and the use of a closed circuit can all contribute to it. Most patients were maintained on controlled ventilation with an inhalational agent and propofol, which explains the absence of hypercarbia. The shorter duration of surgeries can explain the absence of airway edema, sore throat, and hoarseness in our audit. About 29.1% of patients were discharged on the same day and 28.3% on a subsequent day. In our practice, social reasons such as living at a long distance and lack of immediate access to medical help prevent early discharge. There were a few limitations, such as the data being collected retrospectively, which may have resulted in underreporting. ENT surgeries were not included; airway complications would possibly have been more frequent in that case. CONCLUSION The overall usage of SGAs in pediatric patients was found to be 30%. The practice of using these devices has revolutionized the field of pediatric anesthesia. They have a number of advantages, including avoidance of the use of muscle relaxants. SGAs are very tachydidactic and friendly to use. Some vigilance is required to prevent and treat complications associated with their use. Their applications are becoming wider day by day, and in the near future, with more advanced SGAs, they might have still wider applications than endotracheal tubes.
Carbon nanotubes in the surfactants dispersion: formation of the microenvironment The chemical shifts of the protons belonging to surfactants differing only in the nature of their counterions (Li, Na, Cs) are considered in relation to their interaction with the surface of carbon nanotubes. It is shown that the dominant mechanisms of interaction of the surfactant molecules with the nanotube surface in aqueous solutions depend on the nature of the counterions. In particular, sodium dodecyl sulfate molecules interact with the nanotube surface mainly through their head groups rather than their tail groups, and thus it can be assumed that the surface of the nanotubes is coated with a layer of flattened micelles. In the other cases, a structureless random adsorption of surfactant molecules with a partially ordered arrangement of head and tail groups is more likely: loose for the molecules of lithium dodecyl sulfate, or densely wrapped ("cling swarm round") for the molecules of cesium dodecyl sulfate. Introduction In systems of nanoparticles, a significant percentage of the constituent atoms reside at the surface, whereby the properties of nanomaterials are extremely sensitive to the microenvironment. Any change in the microenvironment, such as surface doping, functionalization of the surface by means of organic functional groups, or its modification by coating with molecular layers, causes a change in the electronic and photophysical properties of the nanoparticles. This sensitivity to external influences opens up the possibility not only of creating materials with desired properties, but also of the controlled variation of these properties. In particular, carbon nanotubes, because of the nature of their structure, are considered one of the most versatile nanomaterials. Single-walled carbon nanotubes, consisting of atoms constituting a graphene plane rolled into a tube, are almost perfect one-dimensional systems in which there is quantum confinement and high correlation between the charge carriers. Depending on the arrangement of carbon atoms with respect to their axes (viz. chirality), the nanotubes can be direct-bandgap semiconductors or metals with nearly ballistic conduction. The features of the structure of carbon nanotubes allow their use in creating various devices in photonics and optoelectronics. Of particular interest are the unique optical properties of semiconducting nanotubes: their ability to emit in the near infrared under electrical or optical excitation with certain parameters. Other intensively developed applications are nanotransistors, field emission displays, fuel cells, solar cells, photodetectors, switches, and gas and chemical sensors based on carbon nanotubes [1][2][3][4][5]. Note that the electronic and optical properties of nanotubes, all of whose atoms are surface atoms, are extremely sensitive to the atoms of the nearest microenvironment. This offers a way to change the electronic properties of nanotubes by purposefully forming the local atomic environment in a certain way. The appearance of methods that allow a stable suspension of carbon nanotubes to be obtained in surfactant dispersions [6] has led to the possibility of controlling the optical characteristics of nanotubes through changes in the dielectric properties of the local environment (by surface modification) [7].
Another area of application of carbon nanotubes stabilized by surfactants is the creation of systems that can limit nonlinear laser radiation of increasing intensity, which is important in developing practical tools for protecting eyes and sensors. The designed suspension limits laser light in a broad spectral range by scattering the incoming radiation on light-induced inhomogeneities of the medium. Possibilities for controlling the parameters of nonlinear optical limiting by varying the composition and properties of the system components are considered in papers [8,9]. Nevertheless, despite the fact that carbon nanotubes have attracted extensive research interest, their practical usage in many areas is still hampered by the difficulty of separating them into fractions monodisperse in diameter, length and chirality. In papers [10][11][12], it was shown how surfactants can be used to achieve the desired separation. However, it is necessary to know the mechanisms of surfactant adsorption on carbon nanotubes in order to further improve this procedure, as well as for the controlled change of the properties of systems containing the nanotubes. Methods and materials The method of nuclear magnetic resonance (NMR), which has been successfully applied to solve many problems of colloid chemistry, was used as the main method for the experimental study of the morphology of multicomponent systems [13][14][15]. The chemical shifts of proton NMR signals, carrying information about their immediate environment and the dynamic processes occurring within the system, were studied. All experiments were carried out on ¹H nuclei using an AVANCE III spectrometer (Bruker, Germany) operating at a proton-resonance frequency of 600.03 MHz. The parameters of spectrum registration were as follows: spectrum width, 7211.5 Hz; number of points, 64K; number of scans, 8; relaxation delay, 5 s; and duration of the 90° pulse, 8.3-10.7 µs. Under the experimental conditions, due to the short lifetime of the surfactant molecules in the various states (~10⁻⁶ s), the observed signal was a weighted sum of contributions from molecules in different states: a) monomer, b) micellar, c) associated with the nanotube surface. The resonance frequency was determined with an error of 0.1 Hz, which corresponds to 0.17 × 10⁻³ ppm. For this study we chose multiwall carbon nanotubes of the carbon nanomaterial "Taunit" dispersed in solutions of three surfactants differing in the nature of the counterion (sodium, lithium and cesium dodecyl sulfates) in deuterated water (Deuteriumoxid, 99.9%) at a temperature T = 30 °C (Na, Li) and T = 40 °C (Cs). For the preparation of suspensions, ~5 mg carbon nanotube samples were filled with surfactant solutions in deuterated water of the desired concentration to a volume of 1 mL. The need to use deuterated water was dictated by the possibility of removing the strongest contribution to the signal, coming from the protons of ordinary water, which makes it possible to study the signals from protons that are part of the hydrocarbon chains of the surfactant. Solutions with nanotubes were subjected to ultrasonication for 15 minutes with an Elma Sonic S 40H device, then centrifuged in an ELMI centrifuge for 10 minutes at 10,000 g. The upper part of the solution above the dense sediment was taken for measurements. The surfactant content was varied in the range of 1-100 mM. Sodium dodecyl sulfate, SDS (Sigma, L4509), and lithium dodecyl sulfate, LiDS (ACROS Organics), with a main substance content of 99%, were used as received.
Cesium dodecyl sulfate, CsDS, was obtained from SDS by ion exchange in a CsCl (from ECROS) solution. A mixed solution of SDS (0.5 M) and CsCl (1.0 M) was prepared at 50 °C. The solution was then allowed to stand at 50 °C for 2 h and at room temperature for the next 12 h. The sediment was separated from the solution using a Buchner funnel and again dissolved in CsCl solution. After three consecutive recrystallizations, the sediment was washed with acetone and dried over sulphuric acid until constant mass was obtained. The product yield was about 85%. The degree of ion exchange for Na⁺ and Cs⁺ was determined by flame photometry (air-acetylene mixture) at 2,300 °C and was not lower than 96%. Experiment In this work, an experimental study of the processes occurring on the surface of carbon nanotubes upon their modification by surfactants was carried out. In the case of SDS (NaDS) dispersions, the maximum deviations of the chemical shifts due to the presence of carbon nanotubes were observed for the protons of the α-CH₂ group of the alkyl chain, which are closest to the SO₄ head group. The concentration dependence of the difference in proton chemical shifts for the α-CH₂ groups of SDS in the presence and absence of carbon nanotubes is shown in figure 1 (left). Some changes, though much weaker, were observed for the β-CH₂ group. At the same time, no changes in proton chemical shifts were observed for the other groups. In the case of LiDS dispersions, the maximum deviations of the chemical shifts due to the presence of carbon nanotubes were observed for the protons of the β-CH₂ group rather than for the protons of the α-CH₂ group. The concentration dependence of the maximum response of proton chemical shifts for the β-CH₂ groups in the presence of carbon nanotubes is shown in figure 1 (right). Proton resonance line shifts corresponding to all four non-equivalent proton positions are clearly seen in cesium dodecyl sulfate (see figure 2). Results and discussion The fact that in the case of sodium dodecyl sulfate the maximum deviations of the chemical shifts were observed for protons of the α-CH₂ group of the alkyl chain, which are closest to the SO₄ head group, suggests that the adsorption of SDS does not occur as originally proposed in [6] and still considered valid by many authors, for example [16]. Previously, it was thought, for reasons of chemical affinity, that the hydrophobic molecules of SDS interact through their hydrocarbon tails with the hydrophobic surface of the carbon nanotubes, shielding them from contact with water, while the hydrophilic head groups are turned to the aqueous surfactant phase, facilitating solubilization of the nanotubes. Some authors suggest that structureless random adsorption with no preferential arrangement of the head and tail groups is responsible for the stabilization of the dispersions [17,18]. On the one hand, since the nanotubes are dispersed well enough with SDS [19], we may conclude that the head groups of the surfactant molecules are indeed turned toward the aqueous medium. On the other hand, our data suggest that the surfactant molecules interact with the surface of the nanotubes by means of their head groups. The tail groups are not involved in the interaction. This means that the dominant adsorption mechanism is a different one.
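To make the fast-exchange averaging invoked in the Methods concrete (the observed shift being a population-weighted sum over the monomer, micellar and surface-associated states), the following Python sketch may help; all populations and state shifts are hypothetical placeholders, not measured values.

```python
def observed_shift(populations, shifts):
    """Fast-exchange average: delta_obs = sum_i p_i * delta_i (p_i sum to 1)."""
    assert abs(sum(populations) - 1.0) < 1e-9, "populations must sum to 1"
    return sum(p * d for p, d in zip(populations, shifts))

# Hypothetical state shifts (ppm) for an alpha-CH2 proton
shifts = [4.05, 4.00, 3.90]          # monomer, micelle, adsorbed (placeholders)

# Without nanotubes only the monomer/micelle states are populated;
# with nanotubes part of the surfactant becomes surface-associated.
d_free = observed_shift([0.30, 0.70, 0.00], shifts)
d_cnt  = observed_shift([0.25, 0.55, 0.20], shifts)

print(f"shift change on adding nanotubes: {d_cnt - d_free:+.4f} ppm")
```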
It can be assumed that at concentrations above the critical micelle concentration, the surface of the nanotubes is coated with a layer of flattened micelles, as during the adsorption of SDS molecules on an aluminum oxide surface [20]. Images of similar formations on a nanotube surface functionalized by another amphiphilic compound are presented in [21]. Incidentally, the dense arrangement of flattened micelles is to some extent similar to a bilayer. In addition, we can assume the adsorption on carbon nanotubes not only of surfactant macroions, but also of a certain amount of sodium counterions, with possible recovery of the integrity of some dissociated macromolecules. A significant relative change in the conductivity of SDS dispersions observed by us (unpublished data) at low concentrations of surfactant in the presence of nanotubes can only be explained by a significant decrease in the number of carriers of both signs in the solution. It should be noted that the adsorption of positive sodium counterions is not surprising, because the surface of nanotubes in aqueous solutions is charged negatively, in accordance with existing literature data. A sodium counterion adsorbed due to the Coulomb interaction attracts a surfactant macroion, restoring for some time (since all the processes possess a dynamic nature) the integrity of the molecule. That is why the micelles may have a flattened shape, being essentially a transitional form between hemimicelles and micelles. In the case of lithium dodecyl sulfate, the following picture can be proposed. The strongly hydrated lithium counterions cannot be adsorbed on the surface of the nanotubes. However, the presence of lithium counterions near the surface causes the head groups to detach from the surface to interact with them. That is why the maximum response of the chemical shift was observed not for the α-CH₂ groups but for the β-CH₂ groups. For the rest of the surfactant ions, the most probable picture is structureless random adsorption with a partially ordered arrangement of head and tail groups, loose in the case of lithium dodecyl sulfate because the head and tail parts are not strongly associated with the nanotubes. A slightly different picture was observed for cesium dodecyl sulfate. The addition of cesium ions to SDS can isolate the carbon nanotubes from direct contact with water molecules more effectively, as noticed in [7]. According to our data, the dominant mechanism of adsorption of CsDS is characterized by a strong interaction of all parts of the molecule with the nanotube surface. This means that some counterions can be adsorbed on the surface of the nanotubes, thus connecting with the surfactant ion sufficiently to form a stable and electrically neutral molecule. The surfactant molecules then lie almost parallel to the plane of the carbon sheet owing to the weaker interactions, so that the surfactant ions lie densely on the surface, characterizing a densely wrapped ("cling swarm round") structureless adsorption. Therefore, the use of CsDS leads to surface modification of the carbon nanotubes at much lower concentrations. Conclusion A study of the chemical shifts of the protons belonging to three surfactants differing only in the nature of the counterion, in their interaction with the surface of carbon nanotubes, was carried out. It was shown that the dominant mechanisms of interaction of the surfactant molecules with the nanotube surface depend on the nature of the counterions.
In particular, sodium dodecyl sulfate molecules interact with the nanotube surface mainly through their head groups rather than their tail groups, and thus it can be assumed that the surface of the nanotubes is coated with a layer of flattened micelles. In the other cases, the most likely mechanism is structureless random adsorption of surfactant molecules with a partially ordered arrangement of head and tail groups, loose for the molecules of lithium dodecyl sulfate or densely wrapped ("cling swarm round") for the molecules of cesium dodecyl sulfate. It is suggested that the modification of nanotube surfaces is related to processes of a dynamic nature: the adsorption of some of the counterions and the restoration of the integrity of a certain number of dissociated surfactant molecules.
Design of experiment-driven stability-indicating RP-HPLC method for the determination of tofacitinib in nanoparticles and skin matrix Tofacitinib, an oral JAK inhibitor, has recently been approved by the US FDA to treat moderate to severe RA. The delivery of tofacitinib to the specific site of inflammation at the joint via the topical route using nanoformulations helps in managing the potential adverse effects. The objective was to develop and validate a simple, specific, and sensitive stability-indicating HPLC method for quantification of tofacitinib in topical nanoformulations and different matrices (adhesive tape, and skin layers, i.e., stratum corneum, viable epidermis, and dermis). A major objective was to avoid the use of instruments like LC-MS/MS and to ensure widespread applicability of the method. A 3² factorial 'design of experiments' was applied to optimize process variables and to understand the effect of the variables on peak properties. The calibration curve showed a regression coefficient (R²) of 0.9999 and linearity in the concentration range of 50 to 15,000 ng/mL, which is suitable for the analysis of conventional dosage forms and nanoformulations. Method validation was performed as per ICH guideline Q2 (R1). The accuracy by recovery studies ranged between 98.09 and 100.82%. The % relative standard deviations of the intraday and interday precisions were in the ranges of 1.16-1.72 and 1.22-1.80%, respectively. Forced degradation studies indicated the specificity of the method and showed its stability-indicating potential for the tofacitinib peak. The validated method provides quantification of tofacitinib in the presence of formulation excipients, dissolution media, and skin tissues in detail. In addition, the method was successfully utilized for the determination of the dermatokinetic profile of tofacitinib. Tofacitinib ointment has shown good therapeutic activity for treating psoriasis in clinical trials [5]. The topical administration of tofacitinib can overcome the limitations of oral therapy, such as pre-systemic metabolism, gastrointestinal problems, dose escalation, non-target tissue distribution and systemic side effects (decrease in neutrophil count). Moreover, compared to the oral route, the topical and transdermal routes require a low dose of tofacitinib, attenuating systemic side effects. In the case of topical and transdermal delivery, the outermost layer of the skin (stratum corneum) acts as a barrier [6]. The stratum corneum consists of corneocytes in an intercellular matrix that forms a barrier to the permeation of drugs through the skin. Therefore, lipid-based nanocarriers, viz. solid lipid nanoparticles (SLNs), have been designed to enhance the permeation of drugs through the skin [7]. Further, the formulated lipid nanocarriers need to be evaluated for entrapment efficiency, drug release, and permeation through skin layers. Most importantly, to evaluate topical drug delivery, there is a need to assess the permeation of tofacitinib through the different layers of the skin (dermatokinetics). Thus, to understand the dermatokinetics, it is essential to develop a method for the determination of tofacitinib in skin tissues. A thorough literature survey revealed that a few tandem mass spectrometry (LC-MS/MS) [8][9][10][11][12][13][14] and matrix-assisted laser desorption ionization mass spectrometry imaging (MALDI-MS) [15] methods are available for the estimation of tofacitinib alone and in combination with other analytes.
These mass spectrometry methods are sensitive, but the routine estimation of in vitro release samples and stability samples would be difficult and expensive when the number of samples is large. Additionally, these methods involve very complicated procedures and require additional sample treatment. The reported HPLC methods for tofacitinib determination showed lower limits of quantification at the microgram level only and were not fully validated as per ICH guidelines with respect to stability studies and robustness [8,[16][17][18][19][20][21][22][23]. The reported spectrometric methods are economical and adequate for the estimation of tofacitinib in pure form and in its dosage forms [24,25]. However, these spectrometric methods were not explored for nanogram sensitivity, purity of the target peak, or stability. Recently, a fully automated in situ ultraviolet fiber optic system with 10-mm-arch probes for the estimation of tofacitinib has been reported [26]. However, this system is quite expensive, requires accurate installation and a trained operator, and it might be difficult to design acceptable measurement systems for routine analysis. Further, the above reported methods were not investigated and validated in the presence of skin tissue or for complex nanoformulations. Therefore, in the current work, we developed and validated a rapid, accurate, and precise stability-indicating reverse-phase HPLC method for the estimation of tofacitinib down to the 50 ng/mL level in nanocarrier formulations and skin tissues. Further, design of experiments (DoE) was applied to study factor interactions and their impact on chromatographic properties [27]. The developed method is suitable for industry and academia for the quantification of tofacitinib in nanoformulations in an easy and economical way. It was employed for the evaluation of entrapment efficiency, stability, and the in vitro release profile of tofacitinib-loaded SLNs. The % recovery of tofacitinib in the skin matrix was calculated, and the method was successfully applied to studying the dermatokinetics of tofacitinib on topical application. Solvents and chemicals Methanol and acetonitrile (HPLC grade) were acquired from Merck Limited, Mumbai (India). Ammonium acetate, potassium phosphate monobasic, orthophosphoric acid, and tetrahydrofuran were acquired from Merck. Milli-Q water was obtained in-house from a Milli-Q water purification system, Millipore (USA). Instrumentation and chromatographic conditions The chromatographic experiments were conducted on an HPLC system (Shimadzu, Kyoto, Japan) comprising a model LC-10AT binary pump, a SIL-HTA autosampler (Shimadzu, Kyoto, Japan), a column oven (CTO-10AS) compartment, and an SPD-M20A photodiode array (PDA) detector. Chromatographic separation was carried out at 30 ± 0.5 °C using a LiChrospher® 100 RP-18 analytical column (Hibar® 250-4.6; 5 μm; Merck®). In the first instance, preliminary trials were conducted to acquire knowledge about the method performance and to identify the various critical independent parameters and their effects on the dependent variables. Systematic method development strategies were applied to identify the independent parameters with a smaller number of trials. Primarily, acetonitrile and methanol were tried with various % ratios of ammonium acetate and phosphate buffers (pH 5, 10 mM) to obtain the desired peak symmetry. A 3² full factorial experimental design was applied for the optimization of the mobile phase composition; the nine runs of such a design are enumerated in the sketch below. Detection was carried out with a PDA detector with a 30 µL injection volume at a wavelength of 285 nm.
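The nine runs of such a 3² full factorial design can be enumerated directly, as in the Python sketch below; the coded pH levels follow the values stated in the next section, while the % organic-phase values are hypothetical placeholders.

```python
from itertools import product

# Coded levels of the 3^2 full factorial design: two factors, three levels each
levels = (-1, 0, +1)

# Mapping from coded to actual values -- the pH levels are those stated in
# the text; the % organic-phase values are illustrative placeholders.
actual = {
    "% organic (X1)": {-1: 40, 0: 50, +1: 60},    # hypothetical
    "pH (X2)":        {-1: 3.5, 0: 4.5, +1: 5.5},
}

runs = list(product(levels, repeat=2))            # 9 runs in total
print(f"{'run':>3} {'X1':>4} {'X2':>4} {'% organic':>10} {'pH':>5}")
for i, (x1, x2) in enumerate(runs, start=1):
    print(f"{i:>3} {x1:>4} {x2:>4} "
          f"{actual['% organic (X1)'][x1]:>10} {actual['pH (X2)'][x2]:>5}")
```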
The data acquisition and HPLC system were controlled by the LC solution software version 1.24 SP1. Preparation of stock, calibration, and quality control standards A stock solution of tofacitinib (1 mg/mL) was prepared by dissolving an accurately weighed amount of tofacitinib in methanol. The working standard solution (100 µg/mL) and the calibration standards (50-15,000 ng/mL) were prepared through serial dilution with methanol. The quality control (QC) standards were prepared from the standard stock at three concentration levels: low QC (250 ng/mL), medium QC (8000 ng/mL) and high QC (12,000 ng/mL), together with the lower limit of quantification (LLOQ) (50 ng/mL). DoE methodology and optimization of analytical method From the trials executed above, the % organic phase and the pH of the phosphate buffer demonstrated a high impact on retention time and tailing factor. Therefore, to investigate the effect of the organic phase composition and the phosphate buffer pH on drug retention time and tailing factor, DoE methodology was applied [28]. A 3² factorial design consisting of 2 factors at 3 levels was considered for the experimental plan with Design-Expert 8.0 (Stat-Ease Inc., Minneapolis, USA). The two independent variables, % organic phase (X1) and pH of phosphate buffer (X2), with the 3 coded levels -1 (3.5), 0 (4.5) and +1 (5.5) taken as the actual values, were studied. The retention time and tailing factor were considered the dependent variables (responses Y1 and Y2, respectively). Validation of the developed method The validation of the analytical method was performed for system suitability, linearity, range, detection limit, quantification limit, specificity, accuracy, precision, carryover effect, and robustness according to the ICH Q2 (R1) guideline (2005). System suitability The system suitability test is recommended for chromatographic methods to ensure that the system is capable of giving reproducible results. The performance of the system was evaluated by injecting six replicates of a 10 µg/mL concentration under the optimized chromatographic conditions. Linearity, limit of detection and quantification limit Linearity was determined over a concentration range of 50 to 15,000 ng/mL with six calibration standards. The obtained data were fitted by linear regression analysis, and the calibration curve was plotted with the concentration of analyte on the x-axis against the analyte peak area on the y-axis. The detection limit, or limit of detection (LOD), and the quantification limit, or limit of quantification (LOQ), were decided based on the signal-to-noise (S/N) ratio. Initially, in the system suitability test, the signal-to-noise ratio was obtained based on the detector response. The preferred S/N ratios were 3:1 and 10:1 for LOD and LOQ, respectively, and LOD and LOQ were calculated accordingly [29]. From these, the LLOQ was determined and considered as the lowest standard of the calibration curve. Accuracy and precision The measurement of an observed value's proximity to a given value is known as accuracy. Precision, on the other hand, relates to the closeness of measured values to one another. The accuracy and precision of the quality control samples LQC, MQC, and HQC, as well as the LLOQ, were determined in six replicates on intra- and interday bases. Accuracy was denoted as % bias and precision as % relative standard deviation (RSD). The acceptance requirements for the precision and accuracy of quality control samples according to regulatory criteria were ≤ 2% RSD and ≤ ± 10% bias, respectively [30].
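A minimal Python sketch of the calibration and the S/N-based limit estimation described above follows. The peak areas and the baseline noise value are invented; the LOD and LOQ are taken as the concentrations at which S/N reaches 3:1 and 10:1, assuming a near-zero intercept.

```python
import numpy as np

# Hypothetical calibration data (concentration in ng/mL vs. peak area);
# real values would come from the six calibration standards.
conc = np.array([50, 250, 1000, 5000, 10000, 15000], dtype=float)
area = np.array([410, 2050, 8300, 41200, 82500, 123900], dtype=float)

# Least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1.0 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

# S/N-based limits: concentrations giving S/N of 3:1 (LOD) and 10:1 (LOQ).
# 'noise' is the baseline noise in peak-area units (placeholder value).
noise = 25.0
lod = 3.0 * noise / slope
loq = 10.0 * noise / slope        # candidate LLOQ

print(f"slope = {slope:.3f}, R^2 = {r2:.5f}")
print(f"LOD ~ {lod:.1f} ng/mL, LOQ ~ {loq:.1f} ng/mL")
```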
Carryover effect
Carryover was assessed by analyzing successive samples (10 µg/mL, 12 µg/mL, and 15 µg/mL) of the linearity curve followed by a blank. As acceptance criterion, the carryover should not exceed 20% of the LLOQ response.

Robustness
Robustness is the ability of the developed method to give reproducible results under the same laboratory conditions with slight modifications of the chromatographic conditions, and on different HPLC systems under the specified conditions. Initially, the robustness of the developed analytical method was assessed by changing the column oven temperature by ± 5 °C and the mobile phase pH by ± 0.5. Further, the optimized method was tested under different laboratory conditions on another Shimadzu system (model LC-2010CHT).

Specificity
The specificity of the developed analytical method was studied in the presence of the formulation excipients (lipids). A known concentration of tofacitinib within the linearity range was spiked into the lipids and analyzed using the developed HPLC method. The interference of the lipids with the retention time of the analyte and the peak purity were observed.

Stability-indicating property of the developed method
The stability-indicating property of the developed analytical method was studied by exposing the tofacitinib solution to stress conditions as per ICH Q1A (R2) guidelines. The stress studies of the tofacitinib solution were conducted under acidic hydrolysis, basic hydrolysis, oxidative, and thermolytic conditions [31]. Acid and base hydrolysis were carried out by preparing the tofacitinib solution (200 µg/mL) in 0.5 M hydrochloric acid or 0.5 M sodium hydroxide, respectively, and keeping it on a water bath at 60 °C for 6 h. Similarly, for oxidative degradation, the tofacitinib solution was prepared in 2% hydrogen peroxide and heated at 60 °C under reflux for 3 h. For thermal degradation, the tofacitinib solution was heated at 80 °C under reflux for 6 h. After subjecting the tofacitinib solution (200 µg/mL) to the above stress conditions, a 10 µg/mL dilution was prepared in methanol. The acid and base samples were neutralized before dilution with methanol to protect the column. All samples were filtered through a 0.2-µm filter before injection into the HPLC [32,33]. The chromatograms obtained under the different stress conditions were recorded and compared with the normal condition. The retention times of the degradant and drug peaks were identified, and the % degradation was calculated.

Applicability of the developed method for skin studies of tofacitinib and dermatokinetic assessment
The validated method for the quantification of tofacitinib in the presence of the skin tissue matrix was used to evaluate tofacitinib penetration through the skin. Initially, skin tissue was homogenized using an Ultra-Turrax type homogenizer. The homogenized tissue was then spiked with a predetermined concentration of tofacitinib solution (500 µg/mL) and centrifuged. The supernatant (1 mL) was collected, and from this a 10 µg/mL concentration was prepared. The samples were filtered through a 0.22-µm membrane filter before analysis and checked for peak specificity, and the percent recovery was calculated [34]. An in-house tofacitinib cream (0.5 mg/g, with stearic acid as base) was prepared and applied topically (350 mg). After topical application, skin samples were collected at each time point (2, 4, 6, 8, 12, and 24 h). The collected skin samples were washed with phosphate buffer and gently wiped with cotton.
Tape stripping analysis was performed to separate the epidermis and dermis layers. The collected tapes and skin were soaked in methanol for 6 h. The samples were then filtered through a 0.25-µm filter and stored at −20 °C until analyzed as per the validated method. Tofacitinib concentration-time profiles in the epidermis and dermis were analyzed by a non-compartmental approach to determine t1/2 (half-life), C0 (tofacitinib concentration in the epidermis and dermis at t = 0), AUC0-t (area under the curve from zero to the last measurable point), and AUC0-∞ (area under the curve from time 0 extrapolated to infinity).

Applicability in characterization of nanoformulation
The validated analytical method was applied to the characterization of SLNs. Tofacitinib-loaded SLNs were prepared by the hot emulsification technique [35,36], using Precirol as the solid lipid and Poloxamer 407 as the surfactant, as per the reported method. The prepared formulation was analyzed for its entrapment efficiency, in vitro drug release (pH 7.4), and stability.

Entrapment efficiency
The entrapment efficiency of the prepared SLN formulation was estimated by the indirect method. In brief, the formulation was subjected to ultracentrifugation using a Remi cooling centrifuge (Mumbai, India). The clear supernatant was diluted with methanol and analyzed by the validated method.

In vitro drug release studies
The in vitro release of tofacitinib from the SLN dispersion was estimated using the dialysis bag technique. The study was performed in pH 7.4 phosphate buffer with 0.15% sodium lauryl sulfate. A known amount of formulation was transferred into the dialysis bag and maintained at 32 ± 0.5 °C. Samples were collected at regular intervals (1, 2, 4, and 6 h) and replaced with fresh buffer to maintain sink conditions. The samples were filtered through a 0.22-µm filter, and the tofacitinib concentration was assessed using the validated method.

Stability studies
Pharmaceutical formulation development requires stability under storage conditions to maintain drug activity. Thus, short-term stability of the prepared SLN formulation was assessed at room temperature and under controlled conditions for three months. The formulation was tested for entrapment efficiency after the three-month storage period using the validated analytical method.

DoE methodology and optimization of analytical method
Preliminary studies were performed for the development of the HPLC method. Initially, different combinations of mobile phases were explored using methanol, acetonitrile, and aqueous buffers (potassium phosphate and ammonium acetate, 10 mM). Peak splitting and broad peaks were observed with acetonitrile, whereas peak symmetry was acceptable with methanol. Methanol was then combined with different proportions of the aqueous buffers, and the chromatogram properties (peak area, tailing factor, and retention time) were observed. Analysis of the peak properties showed that methanol in combination with 10 mM phosphate buffer gave good peak properties. These preliminary studies were performed with a 50:50 v/v organic:aqueous mobile phase. Injection volumes of 20 µL and 30 µL were screened, and 30 µL gave a good peak area with a reduced tailing factor. These trials revealed that the peak properties were mostly influenced by the methanol percentage and the pH of the aqueous phosphate buffer.
Further, to optimize these parameters, a full factorial design was applied. A total of 11 experimental runs were performed based on the 3² factorial design, and the results were analyzed for retention time and tailing factor. The results of the responses are given in Table 1. To evaluate the relationship between the dependent and independent variables, response surface methodology plots were produced with the Design-Expert software, as reported in Fig. 1. The responses of the experimental runs were fitted to linear, second-order, and quadratic models; the quadratic model proved to be the best fit (p < 0.0001). The model summary statistics also suggested the quadratic model for both responses because of its low prediction error sum of squares (PRESS) value; the low standard deviation and high adjusted R² indicate good agreement between the fitted model and the experimental data. The model was analyzed and validated by analysis of variance (ANOVA), and the results are shown in Table 2.

The independent variables exhibited a considerable impact on retention time. The ANOVA analysis (Table 2) showed a model F-value of 80.51 (p < 0.0001), indicating that the model was significant. The 3D plots, 2D contour plots, and the final polynomial equation (1) for actual coded variables showed the relationship of the % of organic phase (X1) and the pH of the phosphate buffer (X2) with the retention time (Y1) (Fig. 1). Figure 1a, b shows that the methanol ratio in the mobile phase had a notable effect on retention time: the retention time decreased with an increase in the methanol percentage in the mobile phase composition. In contrast, increasing the pH of the phosphate buffer resulted in an increased retention time. The combination of the two variables X1 and X2 showed a parabolic effect on the retention time, as confirmed by the polynomial equation (1), in which the % of organic phase (X1) carried a negative coefficient, the pH of the phosphate buffer (X2) a positive coefficient, and the X1X2 interaction a negative coefficient for the retention time (Y1).

The independent variables also showed a considerable impact on the tailing factor. The ANOVA analysis showed a model F-value of 50.03 (p = 0.0003), indicating that the model was significant (Table 2). The 3D plots and the final polynomial equation (2) for actual coded variables showed the relationship of the % of organic phase (X1) and the pH of the phosphate buffer (X2) with the tailing factor (Y2) (Fig. 1c, d): the methanol ratio in the mobile phase had a prominent effect on the tailing factor, as confirmed by the polynomial equation (2). According to equation (2), the % of organic phase (X1) showed a negative effect, the pH of the phosphate buffer (X2) a positive effect, and the combination of the two independent variables a positive effect on the tailing factor (Y2):

Tailing factor (Y2) = +3.42085 − 0.060278 X1 (2)

The 3² factorial design presented 50 candidate solutions for the optimized chromatographic conditions, which were narrowed down by setting goal values.

Fig. 1 a, b: 3D response curves and 2D contour plots for response 1 (retention time); c, d: 3D response curves and 2D contour plots for response 2 (tailing factor).
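For readers who wish to reproduce this kind of fit outside Design-Expert, the sketch below fits a full quadratic response-surface model by ordinary least squares. The coded design matrix matches an 11-run 3² plan, but the response values are invented, chosen only to be directionally consistent with the effects described above, since the source reports just two coefficients of Eq. (2).

```python
# Sketch: least-squares fit of
# Y = b0 + b1*X1 + b2*X2 + b12*X1*X2 + b11*X1^2 + b22*X2^2
import numpy as np
from itertools import product

X = np.array(list(product((-1, 0, 1), repeat=2)) + [(0, 0), (0, 0)], dtype=float)
x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Dummy retention times: decrease with X1 (methanol), increase with X2 (pH).
y = np.array([7.4, 8.2, 9.0, 6.2, 7.0, 7.8, 5.0, 5.8, 6.6, 7.0, 7.1])

coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, b in zip(["b0", "b1", "b2", "b12", "b11", "b22"], coef):
    print(f"{name} = {b:+.4f}")
```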
The optimized chromatographic conditions were found to be methanol and 10 mM phosphate buffer (pH 3.5) in a 50:50% v/v ratio with a flow rate of 0.8 mL/min.

Validation of the developed method

System suitability
System suitability was assessed by six replicate injections of tofacitinib at 10 µg/mL under the optimized chromatographic conditions. The tailing factor was found to be 0.997, indicating acceptable peak properties, and the % RSD for the peak area was within ± 2%, indicating the suitability of the system.

Linearity, limit of detection and quantification limit
The standard calibration curve of tofacitinib was constructed. Linearity between the tofacitinib concentration and the peak area was obtained in the range of 50 ng/mL to 15 µg/mL, with an excellent regression coefficient of 0.999. The regression equation was y = 47.34x − 749.10, where 'x' is the concentration of tofacitinib and 'y' is the peak area at 285 nm. The calibration curve of tofacitinib is presented in Fig. 2. The LOD and LOQ of the developed analytical method, determined using the signal-to-noise ratio, were 16.5 and 49.5 ng/mL, respectively, and the LLOQ was 50 ng/mL (n = 6). These results show that the developed method is sensitive enough to detect and quantify tofacitinib at nanogram levels. The method would therefore be beneficial for the routine analysis of tofacitinib in nanoformulations, where the entrapped drug is present at low concentrations.

Accuracy and precision
Accuracy and precision were estimated by the standard addition method at three concentrations: LQC (250 ng/mL), MQC (8000 ng/mL), and HQC (12,000 ng/mL). Chromatograms representing the LQC and HQC are presented in Fig. 3. The method demonstrated acceptable % recovery and reproducibility, with % RSD and % bias less than 2%. Intra- and inter-day accuracy and precision data for tofacitinib are given in Table 3.

Carryover effect
The carryover effect was estimated by analyzing successive samples of tofacitinib (10 µg/mL, 12 µg/mL, and 15 µg/mL) followed by a blank sample. No tofacitinib peak was observed in the blank at the retention time of 6.1 min. The result demonstrates that no carryover effect occurred; thus, this method can be used for uninterrupted runs with large numbers of samples.

Robustness
The optimized chromatographic conditions were evaluated for robustness with deliberate changes in the oven temperature and mobile phase pH. The effect of these two variables on the analysis of LQC, MQC, and HQC was found to be insignificant; the RSD (%) was less than 2, showing good reproducibility of the peak properties. These results imply that the developed method is stable against small variations in intrinsic parameters. The robustness of the analytical method was also verified on another Shimadzu system (model LC-2010CHT) with the optimized chromatographic conditions. There was no change in peak area or tailing factor, and the observed differences were insignificant; hence, this method can easily be transferred from one laboratory to another.

Specificity
The specificity study was performed in the presence of the lipids used for SLN preparation.
No peak interference was observed at the retention time of the tofacitinib peak, and the peak purity was found to be 99.99%.

Stability-indicating property of the developed method
The forced degradation studies (acid hydrolysis, base hydrolysis, oxidation, and thermal stress) were successfully carried out. The stability results for tofacitinib are shown in Table 4. The chromatogram of the acid hydrolysis sample of tofacitinib (Fig. 3c) showed degradation peaks at 3.67, 3.8, and 4.6 min, with 14.78% degradation of tofacitinib observed under acidic hydrolysis. The chromatogram of the base hydrolysis sample (Fig. 3d) showed peak splitting, with a prominent impurity peak eluting at a retention time of 5.5 min, indicating that nearly 75% of the tofacitinib was degraded. This base degradation result agrees with the study by Younis et al., who reported that above pH ~9 tofacitinib citrate shows the highest degree of degradation [37]. Under oxidative stress conditions, the chromatogram (Fig. 3e) showed impurity peaks at 3.7 and 4.6 min, with 20.25% degradation of tofacitinib. The thermal degradation sample did not show any impurity peak (Fig. 3f) and exhibited only 1.5% degradation. These results confirm that the developed analytical method can discriminate the degradation products from the tofacitinib peak: the method is selective for tofacitinib and possesses a stability-indicating property, and it can be applied to the stability testing of tofacitinib in in-process and finished products.

Applicability of the developed method for skin studies of tofacitinib and dermatokinetic assessment
The appropriateness of the method was confirmed in dermatokinetic studies after topical administration of tofacitinib. The half-life of tofacitinib was found to be 6 h. The trapezoidal rule was employed for the calculation of AUC0-last for tofacitinib. Considering the obtained results, the proposed and validated method can be widely used for routine analysis and for understanding in vivo dermatokinetics.

Applicability in characterization of nanoformulation
The applicability of the validated analytical method was appraised by determining the percentage entrapment efficiency, the cumulative drug release of tofacitinib from the SLNs, the stability, and the amount of tofacitinib in skin tissues. Tofacitinib-loaded SLNs were analyzed in triplicate for entrapment efficiency, which was found to be 96.44 ± 0.62%. Further, the stability of the prepared nanoparticles was determined by estimating the entrapment efficiency of the tofacitinib-loaded SLNs after 3 months: it was 76.37 ± 1.31%, whereas the controlled samples retained 99.65% of the initial value. The validated method thus successfully detected the change in the entrapment efficiency of the formulation and can be used to estimate the shelf life of tofacitinib nanoformulations. The in vitro release test is acquiring considerable interest as a surrogate test of product performance. The in vitro drug release samples were collected and filtered through a 0.22-µm membrane, and the concentration of tofacitinib was determined by the developed method (n = 3). The cumulative in vitro drug release from the nanoparticles was found to be 34.45 ± 1.65% at 6 h.
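The bookkeeping behind the numbers reported in this section reduces to a few formulas: % degradation from peak areas, entrapment efficiency by the indirect (free-drug) method, and the trapezoidal AUC with a terminal-slope half-life for the dermatokinetic profiles. The sketch below illustrates them with placeholder values, not the study's raw data.

```python
# Hedged helpers mirroring the calculations described above (dummy inputs).
import numpy as np

def percent_degradation(area_stressed, area_control):
    """% drug lost under stress, estimated from drug-peak areas."""
    return 100 * (area_control - area_stressed) / area_control

def entrapment_efficiency(total_drug, free_drug):
    """Indirect method: drug not found free in the supernatant is entrapped."""
    return 100 * (total_drug - free_drug) / total_drug

def nca(t, c):
    """AUC0-t (linear trapezoid), terminal half-life, and AUC0-inf."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    auc_t = np.trapz(c, t)
    lam_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]  # terminal log-linear slope
    return auc_t, np.log(2) / lam_z, auc_t + c[-1] / lam_z

print(percent_degradation(area_stressed=852_000, area_control=1_000_000))
print(entrapment_efficiency(total_drug=10.0, free_drug=0.36))
print(nca(t=[2, 4, 6, 8, 12, 24], c=[3.1, 7.4, 9.9, 8.2, 5.0, 1.6]))
```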
The developed method appropriately quantified tofacitinib in the lipid nanoformulation from the first hour until the end of the release study. The total amount of drug accounted for was 98.9%, with the unreleased drug remaining in the SLNs amounting to 64.55 ± 0.98%. The developed method showed good agreement in the characterization of drug release in the dissolution medium; therefore, it can be used for the routine assessment of the effect of formulation factors and for quality control. The skin tissue samples were analyzed in six replicates to determine the % recovery of tofacitinib, which was found to be 95.61 ± 0.96%, with a peak purity of 99.99 ± 0.54%. A specific peak was observed at 6.1 min without any interference from the skin matrix. These results confirm that the developed method can be applied for the quantification of tofacitinib during skin permeation, skin retention, and dermatokinetic studies.

Discussion
The aim of this work was to develop and validate an RP-HPLC method for estimating tofacitinib in nanoparticles and in dermatokinetic studies. The preliminary trials revealed that the peak properties were mostly influenced by the percentage of the organic phase and the pH of the aqueous phosphate buffer, and these parameters were therefore optimized systematically. The application of the DoE approach allowed a systematic and easy screening of the method variables [38]. A response surface methodology-based 3² factorial design was used to confirm the optimal organic phase composition of the mobile phase and the buffer pH for the analytical method, and to establish a mathematical relationship between the variables and the responses (tailing factor and retention time) [39]. The tailing factor decreased with an increase in the methanol percentage in the mobile phase composition, and the pH of the phosphate buffer also had a significant effect on the tailing factor [40]. The optimized chromatographic conditions gave a retention time of 6.1 min, making this a time-saving RP-HPLC method. Analytical methods involving expensive equipment and complicated sample preparation procedures are difficult to apply in routine laboratories [10,14,15], and the previously reported spectrometric methods could not confirm nanogram-level sensitivity, peak purity, or tofacitinib-specific chemical stability. The current validated method has additional advantages over previously reported methods, such as a short run time, a low flow rate, and a simple sample preparation procedure, and it was validated as per ICH guidelines with respect to its stability-indicating property, robustness, and specificity [20,25]. The intra-day and inter-day precision at all QC levels of tofacitinib showed % RSD values in the ranges of 1.1666 ± 0.019% to 1.7295 ± 0.0423% and 1.2219 ± 0.014% to 1.8017 ± 0.032%, respectively. The % bias on intra-day and inter-day analysis was in the ranges of 0.818 ± 0.39% to 1.908 ± 0.20% and 0.824 ± 0.39% to 1.685 ± 0.43%, respectively. These % RSD and % bias values, all below 2%, indicate that the validated RP-HPLC method is reliable and precise and in excellent accordance with the regulatory guidelines [41]. The obtained LOD and LOQ values of 16.5 and 49.5 ng/mL indicate that this method has a higher sensitivity than previously reported RP-HPLC methods for tofacitinib [16].
The specificity study confirmed that the developed analytical method can quantify tofacitinib in the presence of the formulation excipients. The appropriateness of the method was also confirmed in dermatokinetic studies after topical administration of tofacitinib [42]. Finally, the validated method was applied to quantify tofacitinib during solid lipid nanoparticle characterization, skin retention studies, and dermatokinetic studies [43], and it successfully quantified tofacitinib in the skin layers after application of the topical formulation. After topical application of the conventional cream (350 mg), the epidermal and dermal concentrations of tofacitinib slowly reached Cmax values of 9.88 µg/cm² and 19.95 µg/cm², respectively, at 6 h (tmax). The AUC0-∞ in the epidermis and the viable skin layers was 224.78 µg·h/cm² and 345.04 µg·h/cm², respectively. Based on these findings, the proposed and validated method can be widely utilized for routine analysis and for quantifying tofacitinib in in vivo dermatokinetic experiments [44].

Conclusion
A simple, sensitive, reproducible, robust, and cost-effective stability-indicating analytical method was developed using design of experiments for the quantification of tofacitinib in conventional pharmaceutical dosage forms and nanoformulations. The optimized chromatographic conditions were fully validated as per ICH Q2 (R1) guidelines and found to be economical for routine laboratory analysis compared with the reported LC-MS and HPLC methods. The stability-indicating studies showed degradant peaks distinct from the drug peak under acid, base, oxidative, and thermal stress conditions. The validated analytical method proved its utility in the estimation of tofacitinib in pharmaceutical nanoformulations and in different skin layers (viable epidermis and dermis). Thus, the developed method can be utilized by industry and academia for the characterization and stability evaluation of tofacitinib-loaded nanoformulations.
CRISPR/Cas9 and piggyBac Transposon-Based Conversion of a Pathogenic Biallelic TBCD Variant in a Patient-Derived iPSC Line Allows Correction of PEBAT-Related Endophenotypes
Induced pluripotent stem cells (iPSCs) have been established as a reliable in vitro disease model system and represent a particularly informative tool when animal models are not available or do not recapitulate the human pathophenotype. The recognized limit in using this technology is linked to some degree of variability in the behavior of the individual patient-derived clones. The development of CRISPR/Cas9-based gene editing solves this drawback by obtaining isogenic iPSCs in which the genetic lesion is corrected, allowing a straightforward comparison with the parental patient-derived iPSC lines. Here, we report the generation of a footprint-free isogenic cell line of patient-derived TBCD-mutated iPSCs edited using the CRISPR/Cas9 and piggyBac technologies. The corrected iPSC line had no genetic footprint after the removal of the selection cassette and maintained its "stemness". The correction of the disease-causing TBCD missense substitution restored proper protein levels of the chaperone and mitotic spindle organization, as well as reduced cellular death, which were used as read-outs of the TBCD KO-related endophenotype. The generated line represents an informative in vitro model to understand the impact of pathogenic TBCD mutations on nervous system development and physiology.

Introduction
Induced pluripotent stem cells (iPSCs) are stem cells reprogrammed from adult somatic cells with different embryonal origins (endoderm, ectoderm, or mesoderm) that can be differentiated in vitro into functional multilineage mature cells [1]. iPSCs meet the defining criteria of pluripotent stem cells (i.e., self-renewal ability and in vitro differentiation potential). They can be differentiated into any cell lineage in an ontogeny-recapitulating manner [1], providing the opportunity to generate informative in vitro disease models. iPSCs have been successfully used to explore the pathogenetic mechanisms and pathophysiology of a wide range of neurological diseases [2]. Notwithstanding this unique potential, there are issues that need to be addressed for their reliable use. A major concern is the variability in the differentiation potential of different clones obtained from the same parental cell line [3]. This limitation is intrinsic to this model system and may result in an inappropriate interpretation of the disease endophenotype. Variability is also regularly observed in control iPSCs derived from healthy individuals, which complicates the interpretation of the findings when comparing iPSC-derived differentiated cells obtained from patients and healthy subjects. The generation of isogenic iPSC lines (i.e., cells with a genome identical to the parental ones in which the disease-causing mutation(s) has (have) been corrected) allows this drawback to be overcome [4]. For this reason, an increasing number of studies aimed at understanding the mechanism of disease use patient-derived iPSCs and their isogenic controls obtained by gene-editing techniques [5][6][7][8][9][10][11]. The clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 system is a widely used gene-editing tool that is considered fast, easy, and cheap.
It requires the generation of a guide RNA (gRNA) containing a 20-nucleotide homologous sequence followed by a trinucleotide (NGG) protospacer adjacent motif (PAM) in the target, and the expression of a CRISPR-associated (Cas) endonuclease [12]. Despite these acknowledged strengths, the application of CRISPR/Cas9 technology to iPSC modeling still does not represent an easy endeavor, as homologous recombination (HR), which is its rate-determining step, occurs with extremely low efficiency in iPSCs [13], and the recovery of correctly targeted clones without positive selection is labor-intensive and has unpredictable efficiency. Moreover, genome editing in iPSCs is also difficult due to their tendency to undergo programmed cell death when cultured as single cells, requiring the use of reporter systems or selectable markers to facilitate the identification of the rare recombination events. After selection, the reporter or selection marker needs to be removed, and tools (e.g., the piggyBac (PB) transposon system) have been designed to accomplish this goal in a footprint-free manner [14][15][16][17][18]. We previously reported that biallelic hypomorphic/loss-of-function variants in TBCD, the gene encoding the tubulin folding cofactor D, one of the five co-chaperones required for the assembly and disassembly of the α/β-tubulin heterodimer, perturb microtubule dynamics and cause a recessive neurodevelopmental/neurodegenerative disorder (PEBAT; MIM 617193) [19]. The clinical phenotype is relatively homogeneous, even though variability in onset and progression is observed. The presence of microcephaly in a large proportion of affected individuals, the progressive nature of the disease, and the occurrence of cortical atrophy and hypomyelination followed by cerebellar atrophy indicate that this disorder is characterized by both neurodevelopmental and neurodegenerative features, in which the less severe phenotypes are characterized by early-onset neurodegeneration [19][20][21][22][23][24][25]. In patient-derived fibroblasts, defective TBCD function affects the assembly and disassembly of αβ-tubulin polymers, resulting in a shift toward a more rapidly growing and more stable microtubule population [19]. At the centrosome, TBCD is required for the initiation of microtubule growth and the organization of the mitotic spindle [26,27]. Consistently, we documented an altered spindle structure in patients' fibroblast lines, with disorganized, tangle-shaped mitotic microtubules and markedly reduced aster formation [19]. Here, we combined the CRISPR/Cas9 and piggyBac transposon technologies to generate a footprint-free knock-in isogenic iPSC line from a parental iPSC line carrying a homozygous inactivating variant in TBCD, and we validated the conversion event by assessing features of the PEBAT-associated endophenotype.

Results

CRISPR/Cas9-Induced HR in iPSCs
We used the CRISPR/Cas9 nuclease system together with the piggyBac transposase approach to correct the homozygous pathogenic TBCD variant, c.3365C>T (p.Pro1122Leu), in a previously generated and characterized iPSC line (Compagnucci C. and Tartaglia M., unpublished data). To ensure proper targeting in the TBCD gene, we selected one specific guide-RNA sequence (gRNA1) using Benchling's CRISPR tool and the Wellcome Trust Sanger Institute Genome Editing database.
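As a hedged illustration of the target-site requirement just described (a 20-nucleotide protospacer followed by an NGG PAM), the snippet below scans a DNA string for candidate SpCas9 sites. The input sequence is a made-up fragment, not the actual TBCD locus or the gRNA1 used in the study; strand handling and on/off-target scoring, which tools like Benchling and the WGE database provide, are deliberately omitted.

```python
# Toy SpCas9 site finder: 20-nt protospacer + NGG PAM on the given strand.
import re

def find_spcas9_sites(seq):
    """Yield (start, protospacer, PAM) for every 20 nt + NGG match."""
    seq = seq.upper()
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
        yield m.start(1), m.group(1), m.group(2)

demo = "TTGACCTGTAGCCATTCGGAAGTCCTGGAATACGTACGGTCCAAGG"  # hypothetical fragment
for start, protospacer, pam in find_spcas9_sites(demo):
    print(start, protospacer, pam)
```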
To correct the TBCD c.3365C>T change in the parental patient-derived iPSCs (TBCDmut-iPSCs), we constructed a targeting donor plasmid using two ~700 bp-long segments (including the mutation site) upstream and downstream of the TTAA site as homologous arms (Figure 1). The TBCD-PB-PGK-PU∆TK donor vector was nucleofected together with gRNA1 and the Cas9 nuclease into the TBCD-mutated iPSCs. Following incubation with puromycin for positive selection, 18 drug-resistant colonies were obtained (Figure 2). Among these, 15 colonies were expanded and tested for homologous recombination by PCR amplification using primers P3 (annealing in a region of the TBCD gene upstream of the targeting construct) and P4 (annealing in the piggyBac cassette). The presence of the integrated selection cassette in the correct genomic position was verified in 11 iPSC colonies (11/15; 73.3%) (Figure 3A). Sanger sequencing was then performed to select the clones in which HR-mediated gene correction had occurred, identifying seven clones in which the correction occurred in one allele (7/15; 46.6%) and one clone (clone 18) in which the correction occurred in both alleles (1/15; 6.6%) (Figure 3B).

Transposon Excision in the Corrected iPSCs
Since the parental iPSC line carries a homozygous inactivating variant in TBCD, we performed the transposon excision only in the homozygous clone. Specifically, for the removal of the selection cassette, clone 18 was transiently transfected with a modified PB transposase [hyPB int(−)], which is unable to re-integrate a sequence flanked by the PB terminal repeats [28]. After negative selection using ganciclovir, we obtained 4 resistant clones, which were screened by PCR using the primer pair P3/P4. Only one clone (clone 18.3) was not amplified by these primers and was therefore considered free of the PB transposon (Figure 3C).
Subsequently, primers P3 and P5, which map to a region of the TBCD gene located outside of the targeting construct (~2000 bp), were used to confirm the presence of the edited TBCD genomic region by PCR (Figure 3C). By DNA sequencing, we validated the introduction of the conversion event and the absence of other sequence changes (Figure 3D). The excision efficiency was 25% (1/4 clones examined).

No Occurrence of Off-Target Event by CRISPR/Cas9 Gene Editing
The targeted gene correction is expected to have a low impact on the whole-genome mutational load in human ES and iPSC lines [29,30]. To confirm this assumption, we excluded off-target events in the edited iPSC line by checking the occurrence of the top nine potential off-target sites predicted by the Wellcome Trust Sanger Institute Genome Editing database (https://wge.stemcell.sanger.ac.uk//, accessed on 19 October 2021). For each site, we designed pairs of primers covering the predicted indel, and Sanger sequencing analysis demonstrated the absence of any off-target event (Figure 4).

Isogenic iPSCs Retain Their Pluripotent Behavior and Genomic Integrity
To confirm the pluripotency of the corrected isogenic iPSC line, we tested its positivity to alkaline phosphatase (Figure 5A) and the expression of a panel of stem cell markers. Immunostaining of OCT4, SOX2, TRA-1-60, and SSEA4 (Figure 5D) and qPCR analysis of OCT4 and SOX2 (Figure 5E) indicated that the selected iPSC line retained pluripotency following gene editing. DNA integrity was verified by assessing the eight most common karyotype abnormalities reported in iPSCs with a genetic analysis assay (Figure 5B). A multiplex competitive PCR using TBCD- and GAPDH-specific primer pairs confirmed the occurrence of two corrected copies of the gene in the isogenic cell line (Figure 5C). The isogenic cells also preserved the capability to proliferate (Figure S1A) and to differentiate into the three embryonic germ layers, as shown by the expression of NCAM (ectoderm), SOX17 (endoderm), and brachyury (mesoderm) (Figure S1B).
Correction of the Pathogenic TBCD Variant Restores the Level of the Protein
Previous work showed that TBCD levels are significantly decreased in fibroblasts homozygous for the p.Pro1122Leu amino acid substitution compared with control cells, due to accelerated degradation of the mutated protein [19]. TBCD protein levels were assessed by Western blot analysis in the isogenic cell line, in the parental iPSC line carrying the homozygous disease-causing mutation, and in control iPSCs, indicating that the correction of the pathogenic variant was associated with restored levels of the TBCD protein (Figure 6C).

Correction of the Pathogenic Homozygous TBCD Variant Restores the Alteration of Mitotic Spindle Structure Associated with Loss of TBCD Function
TBCD localizes at the centrosome and midbody, where it participates in centriole biogenesis, spindle organization, and cell abscission [26,27]. We previously demonstrated that patient-derived fibroblasts expressing biallelic pathogenic TBCD variants exhibit disorganized, tangle-shaped mitotic microtubules and an altered spindle structure [19]. This endophenotype was confirmed in the parental patient-derived iPSC lines (Figure 6A). To investigate the rescue of this feature in the edited iPSC line, we performed confocal microscopy analysis using β-tubulin as a marker of the mitotic spindle. In contrast to what was observed in the parental iPSCs, the altered spindle microtubule organization was rescued in the isogenic iPSCs (Figure 6A). Moreover, the increased apoptotic rate of the parental iPSCs associated with the altered spindle microtubule organization was reversed in the corrected iPSC line (Figure 6B). Overall, these findings provide evidence that the correction of the homozygous pathogenic TBCD variant results in a rescue of the pathological endophenotypes associated with PEBAT.

Figure 4. Off-target analysis of the designed CRISPR/Cas9 system. By direct sequencing analysis, no off-target events were detected at nine candidate sites within exonic regions of the genome.
Figure 5. (C) The bar graph represents the ratio of the densitometry values of the target region to the reference region (TBCD/GAPDH product ratio); data are normalized to control and presented as mean ± SEM, n = 2; Kruskal-Wallis followed by Dunn's post hoc tests. (D) Immunofluorescence assays demonstrating positive immunostaining for the pluripotency markers SOX2, TRA-1-60, OCT4, and SSEA4 in control iPSCs, parental iPSCs, and the isogenic line; scale bar = 50 µm. The bar graphs show the signal quantification of the pluripotency markers relative to the total number of cells; data are presented as mean ± SEM, n = 3 biological replicates; ordinary one-way ANOVA parametric test. (E) The bar graph shows the maintenance of pluripotency of the gene-edited iPSC lines, as demonstrated by the expression of the SOX2 and OCT4 genes; data are normalized to control and presented as mean ± SEM, n = 3 biological replicates; one-way ANOVA parametric test.

Discussion
The advent of iPSC technology has revolutionized the use of human in vitro models for neurodevelopmental/neurodegenerative disorders [31]. Indeed, iPSC-derived cells have been increasingly used for investigating the molecular and cellular pathophysiological mechanisms underlying inherited diseases. In particular, iPSC modeling has successfully been employed to model neurologic disorders and diseases in which the pathophysiology is not recapitulated by animal models. Nevertheless, a drawback of using iPSCs as a model system is the difficulty of properly ascribing the observed phenotype to the disease-causing mutation(s). This issue can be overcome by genome editing, introducing the disease-associated mutation(s) of interest into control iPSCs or, alternatively, correcting the genetic lesion(s) in patient-derived iPSCs, in order to generate isogenic cell line pairs with identical genetic backgrounds that differ only by the presence/absence of the disease-causing variant of interest [32]. Precise genome editing in human iPSCs has historically been challenging; however, in the past decade, researchers have performed many studies to improve the efficiency of genome editing [33]. Among these approaches, CRISPR/Cas9 technology represents the most powerful strategy, allowing the introduction or correction of specific mutations [34]. Hence, the combination of iPSC technology with CRISPR/Cas9 gene editing offers unprecedented opportunities to develop in vitro disease models. To name a few, CRISPR-Cas9 technology has successfully been applied to generate informative models for amyotrophic lateral sclerosis [35], Huntington's disease [36], Duchenne muscular dystrophy [37], and inherited retinal degeneration [38].
Recently, the same technology has been applied in human hematopoietic stem and progenitor cells (HSPCs) as a precise genome-editing tool for treating beta-thalassemia and sickle cell disease [39]. Despite the great potential of iPSC genome engineering using the CRISPR/Cas9 system, the laborious clonal selection remains a critical step. While this problem can be solved by the introduction of reporter systems or selectable markers, the safe removal of the selection cassette remains a critical issue. Given its ability to excise an exogenous DNA sequence completely from the genome in a footprint-free manner, the CRISPR/Cas9-associated piggyBac transposon system has become a valuable tool for targeted genetic manipulation [17,18,[40][41][42]. TBCD is a microtubule (MT)-assembly protein that, in concert with four additional chaperones (TBCA, TBCB, TBCC, and TBCE) and Arl2, is part of the molecular machinery required for the polymerization/depolymerization of MTs and is thus essential to MT dynamics [43]. TBCD is involved in centriole biogenesis and participates in the assembly of cilia and flagella, which are important for cell proliferation and differentiation during development [26,27,44]. Moreover, the roles of TBCD in MT dynamics appear crucial to the production of neuronal progeny, neuronal migration, and the development of synaptic connectivity between cortical postmitotic neurons, glial cells, and oligodendrocytes [45,46]. Variants in genes encoding tubulins and microtubule-associated proteins, which alter microtubule function and dynamics, have been associated with human cortical malformations and neurodevelopmental disorders [47][48][49][50][51][52]. In particular, pathogenic mutations affecting TBCD have been shown to underlie PEBAT, an early-onset progressive encephalopathy characterized by brain atrophy. There is biochemical and functional evidence supporting the pathogenic effects of the TBCD variants on protein synthesis, stability, and function, resulting in aberrant microtubule dynamics and altered mitotic spindle organization [19,22,23]. Despite the central importance of genes encoding tubulins and microtubule-associated proteins in MT dynamics, which play important roles in neuronal function and its maintenance, relatively few in vitro models of tubulin variants are available. ENU mutagenesis experiments in murine models have identified alleles in Tuba1a and Tubb2b, and a single mouse model for CFEOM (Tubb3) [53] and a zebrafish model for PEBAT (tbcd) [21] have been generated. Here, we generated an isogenic human iPSC model from a patient with PEBAT using CRISPR/Cas9 genome editing. We show that the generated isogenic iPSC line retains pluripotency and a normal karyotype and is capable of differentiating into cells of the three embryonic layers. Importantly, the correction of the homozygous TBCD mutation restored proper TBCD protein levels, which are crucial for normal neuronal morphogenesis [21], and rescued the aberrant spindle morphology associated with defective TBCD function. These results further support the notion that defective TBCD function underlies the disease mechanism of this rare neurodevelopmental and neurodegenerative disorder and indicate a fundamental function for TBCD in the fine-tuning of the assembly and disassembly of the microtubule network. The use of the presently generated in vitro model provides unique opportunities to explore the pathophysiological mechanisms underlying TBCD loss of function in the proper cellular context. This model is also expected to provide an experimental tool to identify and/or validate effective targeted therapeutic approaches directed at counterbalancing the aberrant MT dynamics characterizing PEBAT.

Materials and Methods

Generation of the Donor Vector
To correct the TBCD-mutated iPSCs, the donor plasmid was generated by amplifying a segment of ~1400 bp containing the causative variant (chr17:80,896,008), using the patient's genomic DNA as a template. The genomic segment was subcloned into the pGEM-T Easy Vector (Promega, Madison, WI, USA), and the wild-type (WT) allele was introduced by site-directed mutagenesis using the QuikChange II Site-Directed Mutagenesis Kit (Agilent Technologies, Santa Clara, CA, USA). The selection cassette, flanked by the enhanced piggyBac (ePB) terminal repeats and containing an independent promoter (PGK) driving the expression of the PU∆TK bifunctional protein [54], which confers resistance to puromycin and sensitivity to ganciclovir (GCV), was amplified using a targeting donor plasmid as previously described [55]. Subsequently, exploiting the restriction site for the HpaI enzyme (GTTˆAAC), the donor vector was digested with HpaI. The drug-mediated selection cassette was flanked by the PB terminal repeats and inserted between two TTAA sites, one on each homologous arm (HA) (Figure 1). Primers used for donor vector generation are summarized in Table S1.

Generation of Patient-Derived iPSCs
The studies were conducted in compliance with the Code of Ethics of the World Medical Association (Declaration of Helsinki) and with national legislation and institutional guidelines (local institutional ethical committee, Ref. 2357_OPBG_RC_2020, date of approval 19 February 2021). TBCD-mutated iPSCs were obtained, with informed consent, from the primary skin fibroblasts of an affected male individual (c.3365C>T, p.Pro1122Leu), and control iPSCs were purchased from System Biosciences. Cells were reprogrammed in-house using non-integrating episomal technology, as described in Borghi R. et al. (2021) [56].
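Returning to the donor-vector design above, the small sketch below shows how one might locate candidate TTAA piggyBac integration sites and the HpaI recognition site (GTT^AAC) in a donor-arm sequence. The sequence is a made-up placeholder, and this is an illustration of the sequence logic only, not the authors' cloning pipeline.

```python
# Toy motif finder for TTAA (piggyBac) and GTTAAC (HpaI, blunt cut GTT^AAC).
def find_all(seq, motif):
    """0-based start positions of every (possibly overlapping) motif match."""
    seq, hits, i = seq.upper(), [], 0
    while (i := seq.find(motif, i)) != -1:
        hits.append(i)
        i += 1
    return hits

donor_arm = "ACGTTAACGGTTAATTAACCGTTAAC"  # hypothetical fragment
print("TTAA sites:", find_all(donor_arm, "TTAA"))     # candidate PB sites
print("HpaI sites:", find_all(donor_arm, "GTTAAC"))   # restriction sites
```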
This model is also expected to provide an experimental tool to identify and/or validate effective targeted therapeutic approaches directed to counterbalance the aberrant MT dynamics characterizing PEBAT. To correct TBCD-mutated iPSCs, the donor plasmid was generated by amplifying a segment of~1400 bp using the patient's genomic DNA as a template, in which the causative variant is present (chr17:80,896,008). The genomic segment was subcloned into the pGEM-T Easy Vector (Promega, Madison, WI, USA), and the wild-type (WT) allele was introduced by site-directed mutagenesis using the QuikChange II Site-Directed Mutagenesis Kit (Agilent Technologies, Santa Clara, CA, USA). The selection cassette flanked by the enhanced piggyBac (ePB) terminal repeats and containing an independent promoter (PGK) driving the expression of the PU∆TK bifunctional protein [54], which confer resistance to puromycin and sensitivity to ganciclovir (GCV), was amplified using a targeting donor plasmid as previously described [55]. Subsequently, exploiting the restriction site for the HpaI enzyme (GTTˆAAC), the donor vector was digested by HpaI. The drug-mediated selection cassette was flanked by the PB terminal repeats and inserted between two TTAA sites on each homologous arm (HA) (Figure 1). Primers for donor vector generation are summarized in Table S1. Generation of Patient-Derived iPSCs The studies were conducted in compliance with the Code of Ethics of the World Medical Association (Declaration of Helsinki), and with national legislation and institutional guidelines (local institutional ethical committee, Ref. 2357_OPBG_RC_2020, date of approval 19 February 2021). TBCD-mutated iPSCs were obtained from primary skin fibroblasts of an affected male individual (c.3365C>T, p.Pro1122Leu) with informed consent, and control iPSCs were purchased from System Biosciences. Cells were reprogrammed in house using non-integrating episomal technology as described in Borghi R. et al. (2021) [56]. Maintenance of Human iPSCs The iPSC lines derived from the patient, those genetically corrected, and control iPSCs were all grown in feeder-free conditions using matrigel (Corning Inc., Corning, NY, USA) in mTeSR Plus (Stem Cell Technologies, Vancouver, BC, CA) and incubated at 37 • C, 5% CO 2 . The medium was changed every other day and the cells were split and transferred to new 6-well plates when they were 70-80% confluent. CRISPR-Cas9 Gene Editing To correct the pathogenic mutation, TBCD-mutated iPSCs were nucleofected using an IDT protocol. Briefly, crRNA (200 µM) and tracrRNA (200 µM) were assembled at 95 • C for 5 min to a final duplex concentration of 100 µM and then incubated with Cas9 nuclease (60 pmol) (IDT Corporation) for 20 min at room temperature. RNP complex and donor vector (2 µg) were then mixed with 200,000 single-cell TBCD-mutated iPSCs in P3 Primary Cell Nucleofector solution (Lonza, Morrisville, NC, USA). Samples were subsequently nucleofected using a 4D-Nucleofector System (Lonza). iPSCs were then seeded on a 6-well plate in mTeSR Plus medium added with 10 µM Y27632 (Sigma Aldrich, St. Louis, MO, USA). Two days after nucleofection, 0.5 mg/mL puromycin (Sigma Aldrich) was added for positive drug selection. After 14 days, visible colonies of iPSCs were mechanically isolated and expanded. PCR amplification and Sanger sequencing were performed to test for homologous recombination. Table S1. 
RNA Extraction and Real-Time qPCR
Total RNA was extracted from the iPSCs using TRIzol reagent (ThermoFisher, Waltham, MA, USA) according to the manufacturer's protocol. The reverse transcription reaction was performed with 1 µg of total RNA, and cDNA was generated with the SuperScript IV First-Strand Synthesis System (ThermoFisher) using random hexamers. RT-qPCR was performed using Fast SYBR Green Master Mix (Applied Biosystems) and a QuantStudio 7 Pro Real-Time PCR System (ThermoFisher) according to the manufacturer's instructions. The primers used for qPCR are summarized in Table S1. Relative changes in gene expression were calculated using the 2^(−ΔΔCt) method. Quantitative RT-PCRs were run in triplicate in at least two independent experiments.

Removal of the Selection Cassette from Corrected TBCD-Mutated iPSCs
To remove the PB-PGK-PU∆TK selection cassette, cells were nucleofected with 5 µg of the hyPB int(−) piggyBac transposase [39]. After 7 days, 40 µM ganciclovir (Sigma Aldrich) was added to the medium for negative drug selection, and the surviving cells were cultured until iPSC colonies appeared. The clones were then individually isolated for expansion and characterization.

Off-Target Analysis
We used the CRISPR/Cas9 target prediction tool of the Wellcome Trust Sanger Institute Genome Editing database (https://wge.stemcell.sanger.ac.uk//, accessed on 19 October 2021). The top 9 potential exonic off-target sites were analyzed. We chose off-target sites in exonic regions with 4 mismatches, as there were no off-target sites with fewer mismatches. For each site, we designed PCR products spanning ~500 bp of the predicted indel region and performed Sanger sequencing analysis to confirm the absence of mutations.

Alkaline Phosphatase Assay
iPSCs were plated on slides, washed with PBS, and fixed with 4% PFA for 10 min at room temperature. ALP staining was carried out using the Phosphatase Alkaline Kit (Sigma Aldrich). Cells were incubated for 30 min at room temperature with a solution based on naphthol AS-BI and fast red violet LB. The cells were photographed using a Leica DM1000 microscope (Leica Microsystems, Wetzlar, Germany) with Leica LAS X software (Leica Microsystems).

Trilineage Differentiation Assay
A trilineage differentiation assay was performed with the STEMdiff Trilineage Differentiation Kit (Stem Cell Technologies, Vancouver, BC, Canada), following the manufacturer's instructions. iPSCs were plated onto Matrigel, and the appropriate trilineage medium was added to the wells for 5 days to induce endoderm or mesoderm differentiation, or for 7 days to induce ectoderm differentiation. Cells were fixed, stained, and imaged to document their positivity to anti-SOX17 (1:3200, rabbit), anti-NCAM (1:400, rabbit), and anti-BRACHYURY (1:1600, rabbit) antibodies (Cell Signaling).

Genome Integrity Assay
Molecular karyotyping of the isogenic iPSCs was performed using the hiPSC Genetic Analysis Kit (Stem Cell Technologies) following the manufacturer's instructions, in order to detect the most common karyotype abnormalities reported in human iPSCs (Chr 1q, Chr 4p, Chr 8q, Chr 10p, Chr 12p, Chr 17q, Chr 18q, Chr 20q, Chr Xp). DNA was extracted from the iPSCs with the QIAamp DNA Blood Mini Kit (Qiagen, Hilden, Germany) and quantified with a NanoDrop 2000/2000c spectrophotometer (ThermoFisher). Data were analyzed with the Genetic Analysis Application (Stem Cell Technologies).
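The relative-expression arithmetic referenced in the qPCR subsection above is compact enough to spell out. The sketch below implements the 2^(−ΔΔCt) calculation with illustrative Ct values; the choice of GAPDH as the reference gene is an assumption for the example (the text confirms only that the 2^(−ΔΔCt) method was used).

```python
# Minimal 2^-(ddCt) relative-expression calculation (illustrative Ct values).
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs. a control sample, 2^-(ddCt)."""
    dd_ct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-dd_ct)

# Example: OCT4 in edited iPSCs relative to control iPSCs (GAPDH reference).
fold = relative_expression(ct_target=21.3, ct_ref=17.0,
                           ct_target_ctrl=21.5, ct_ref_ctrl=17.1)
print(f"fold change ~ {fold:.2f}")
```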
TUNEL Assay
Nuclear DNA fragmentation was measured with the DeadEnd Fluorometric TUNEL System Kit (Promega) following the manufacturer's instructions. Briefly, iPSCs were fixed with 4% paraformaldehyde for 10 min at room temperature and permeabilized with 0.2% Triton X-100 (Sigma Aldrich) for 10 min. Cells were incubated with Equilibration Buffer, Nucleotide Mix, and rTdT Enzyme at 37 °C for 1 h in order to label DNA strand breaks with fluorescein-12-dUTP. Nuclei were stained with Hoechst 33342 dye (Invitrogen).

MTT Assay
Thiazolyl blue tetrazolium (MTT) (Sigma Aldrich) was used as a colorimetric indicator, as MTT is reduced to formazan (a violet-blue, water-insoluble molecule) by metabolically active cells. The cells were seeded in 96-well cluster plates, and the following day medium containing MTT (5 mg/mL) was added. The cells were then incubated for 2 h 30 min at 37 °C. After incubation, the formazan crystals that appeared at the cell surface were quantified with an EnSpire Multimode Plate Reader (Perkin Elmer, Boston, MA, USA).

Statistical Analyses
Multiple technical and biological replicates were used for all experiments, and three independent experiments were performed for each assay. Data are presented as mean and standard error (mean ± SEM), and significance was tested using ANOVA (parametric test) for normally distributed data and the Kruskal-Wallis test (non-parametric) when a normal distribution could not be assumed. GraphPad Prism software (Prism 8.0.2, GraphPad Software) was used to analyze the data.

Institutional Review Board Statement: This study was approved by the Ethical Committee of the Ospedale Pediatrico Bambino Gesù (Ref. 2357_OPBG_RC_2020). DNA specimens from the subjects included in this study were collected following procedures in accordance with the ethical standards of the Declaration of Helsinki protocols and approved by the Review Boards of all involved institutions, with signed informed consent from the participating subjects/families.

Informed Consent Statement: The biological material from the subjects included in this study was collected following procedures in accordance with the ethical standards of the Declaration of Helsinki protocols, with signed informed consent from the participating subjects.

Data Availability Statement: The data that support the findings of this study are available on request. Requests for the generated lines should be addressed to C.C. and M.T.
From Phineas Gage and Monsieur Leborgne to H.M.: Revisiting Disconnection Syndromes
On the 50th anniversary of Norman Geschwind's seminal paper entitled 'Disconnexion syndromes in animals and man', we pay tribute to his ideas by applying contemporary tractography methods to understand white matter disconnection in 3 classic cases that made history in behavioral neurology. We first documented the locus and extent of the brain lesion from the computerized tomography of Phineas Gage's skull and the magnetic resonance images of the brains of Louis Victor Leborgne, Broca's first patient, and Henry Gustave Molaison. We then applied the reconstructed lesions to an atlas of white matter connections obtained from diffusion tractography of 129 healthy adults. Our results showed that in all 3 patients, disruption extended to connections projecting to areas distant from the lesion. We confirmed that the damaged tracts link areas that in contemporary neuroscience are considered functionally engaged for tasks related to emotion and decision-making (Gage), language production (Leborgne), and declarative memory (Molaison). Our findings suggest that even historic cases should be reappraised within a disconnection framework whose principles were plainly established by the associationist schools in the last 2 centuries.

Introduction
Much of our knowledge about higher cognitive functions and complex behaviors derives from the description of historic seminal cases that helped shape neuroscience (Finger 1994; Compston 2009, 2011a,b, 2014). These cases reinforced localizationist ideas of cognitive functions related to the activity of discrete and fairly localized brain regions (Catani, Dell'acqua, Bizzi et al. 2012), including the association of social behavior with the orbitofrontal cortex (Harlow 1868; Damasio et al. 1994), speech production with Broca's area (Broca 1861a,b), and declarative memory with medial temporal lobe structures (Scoville and Milner 1957). Localization by area is an oversimplification of the actual workings of the brain (Mesulam 1999; Catani, Dell'acqua, Bizzi et al. 2012). The localizationist bias stems from 2 main limitations. First, the overall idea of equating localization of symptoms with localization of functions may be incorrect. This point was already raised by several authors defending associationist theories, who argued that it is entirely possible that some symptoms can be explained by a secondary effect on other regions distant from the site of the damage but functionally impaired (Wernicke 1874; Jackson 1881; Lichtheim 1885; Jackson 1894; Dejerine 1895; Monakow 1914). Second, for a long time we have been unable to map lesions onto discrete circuits due to a lack of methods for visualizing single tracts in the living human brain. Indeed, modern neuroimaging has shown that many complex functions rely on the coordinated activity of distant regions connected by long-range fibers coursing through the cerebral white matter. Damage either to cortical areas or to underlying connections has far-reaching consequences on distant regions (Baron et al. 1981) through either diaschisis (i.e., dysfunction of a distant region connected to the damaged area) (Monakow 1897, 1914; Carrera and Tononi 2014) or disconnection (i.e., dysfunction of 2 intact areas connected by a damaged tract) (Wernicke 1874; Geschwind 1965a,b; Catani and ffytche 2005).
However, the understanding of neuroanatomy during the first half of the 20th century was insufficient to capture the complexity of psychological functions. For many researchers, anatomy became largely irrelevant to the development of psychological models of function and dysfunction (Marie 1906; Brodmann 1909; Head 1926; Lashley 1929). For others, detailed cortical parcellation and localization became the only way to understand cognitive functions (Von Economo and Koskinas 1925; Vogt and Vogt 1926). Inspired by this vivid debate between localizationism and holism, and motivated by the recent reports on the behavioral manifestations in animals with interhemispheric disconnection (Myers and Sperry 1953; Schrier and Sperry 1959; Gazzaniga et al. 1963), Norman Geschwind reappraised the associationist ideas in his Brain paper entitled "Disconnexion syndromes in animals and man":

"In the pages which follow I hope to give an account of the implications of thinking in terms of disconnexions for both clinical practice and research. The synthesis presented here was developed piecemeal out of study of the literature and clinical observation. I will not, however, present it in the order of its development but rather will try to organize the facts and theories along simple anatomical lines. There is, I believe, a unity in the theory which justifies this approach, and I hope that it will significantly contribute to clarification of the presentation. There are many facts recorded in the following pages; there is also much speculation which is, however, nearly all subject to the checks of future experiment and clinical observation." (Geschwind 1965a,b)

In his 2-part paper, Geschwind proposed a way of thinking that would influence future generations. The "facts recorded", to which he refers, came mainly from 2 sources: 1) data on the anatomy of connections derived, when possible, from pioneering primate studies conducted during the middle of the 20th century (Glees et al. 1950; Adey and Meyer 1952; Nauta 1964) and 2) clinical observation of patients who underwent postmortem autopsy examinations (Geschwind and Kaplan 1962). The merit of his approach was the aim to bridge the gap between these 2 fields in a pre-imaging era. Well aware of the limitations of this approach, he acknowledged that his speculations were to be verified by future experiments and clinical observation. This test had to wait another 10 years before early PET, SPECT, and CT scanners became available for clinical anatomical correlation studies (Gainotti et al. 1972; Naeser et al. 1982). Patients were studied with extensive neuropsychological batteries of tests to document their symptoms in detail, coupled with lesion mapping and group analysis. Both cortical and subcortical lesion localization were considered a crucial contribution to the clinical presentation. Thanks to Geschwind's vision, the anatomy of white matter connections derived from 19th century postmortem dissections was revisited in the living human brain, and disconnection lesions to specific tracts were considered a valid mechanism for newly described syndromes (e.g., tactile agnosia; Geschwind and Kaplan 1962). Geschwind's premature death in 1984 robbed him of the opportunity to appreciate the tremendous impact that his ideas, coupled with methodological advancements in the field of white matter imaging, had on contemporary behavioral neurology.
This advance is particularly striking when we consider the development of diffusion MRI, which provides unprecedented access to the anatomy of white matter pathways in the living human brain. One advantage of this approach is the possibility of mapping the anatomical trajectories of cortico-cortical and cortico-subcortical pathways and correlating anatomical variation with cognitive performance (Rojkova et al. 2015). Tractography provides anatomical information about white matter organization, and functional correlates can be proposed through tractography combined with other functional methods, or through its application to brain-lesioned patients. For many historic patients with unique brain lesions, however, information on the extent of white matter damage is unavailable. On the 50th anniversary of Geschwind's seminal contribution, we pay tribute to his work by revisiting the clinico-anatomical correlations of 3 famous neurological patients. For the first time, we combine advanced diffusion methods (Catani, Dell'acqua, Vergani et al. 2012; Dell'Acqua and Catani 2012) and meta-analysis of functional MRI studies (Spaniol et al. 2009; Yarkoni et al. 2011) to propose damage to connections and explain symptoms using a network approach. The 2 complementary approaches allow identification of the overlap between the altered structural networks and the impaired functional networks in each patient. Our findings contribute to a better understanding of brain networks and the effect of disconnection on classic neurobehavioral syndromes.

Methods

In this study, we collected original datasets of 3 cases that represent true milestones in the history of neurology. The case of Phineas Gage described by John Harlow (1868) marked the beginning of modern clinical investigations of the frontal lobe and related behaviors (Mesulam 1990, 2002; Damasio et al. 1994; Ratiu and Talos 2004a,b). Louis Victor Leborgne, also known as Monsieur Leborgne or "tan tan," was the first non-fluent aphasic patient reported by Paul Broca in 1861 (Dronkers et al. 2007; Domanski 2013). The case of Henry Gustave Molaison, known in the scientific community as H.M., has helped us to understand the link between medial temporal lobe damage and memory deficits (Corkin 2013). For these 3 cases, we were able to obtain digital data for their skull (Phineas Gage) or brain MRI (postmortem for Monsieur Leborgne and in vivo for H.M.). Below we give a brief account of these 3 cases and details on the datasets we obtained. We then explain how their lesions were reconstructed, the process of normalizing their data sets to a common space of reference, and the method of mapping their lesions onto an atlas of white matter connections from the tractography of 129 healthy subjects.

Phineas Gage, Louis Victor Leborgne, and Henry Gustave Molaison: Clinical History and Imaging Processing

Phineas Gage (1823-1860)

Gage was 25 years old when he made a costly mistake at his workplace that resulted in an iron bar passing through the left side of his skull. Despite extensive damage to his forehead, he survived the accident, but not without consequences. According to John Harlow, the local doctor who followed Gage throughout his recovery, he became "fitful, irreverent, indulging at times in the grossest profanities (which was not previously his custom), manifesting little deference for his fellows, impatient of restraint or advice when it conflicts with his desires."
In this regard, his mind was radically changed, so decidedly that his friends and acquaintances said he was "no longer Gage" (Harlow 1868). Harlow argued that the behavioral changes in Gage's personality were the direct result of the damage to the left frontal lobe. Unfortunately, there is no detailed psychological assessment of Gage at the time of the incident or at any point in the following years of his short life, but his clinical manifestations have been interpreted as resulting from deficits in rational decision-making and emotion processing (Damasio et al. 1994). Since his death, Gage's skull and tamping iron have been housed in the Warren Museum of Anatomy in Boston. In our study, we used the axial (0.5 × 0.5 × 0.5 mm) computed tomography scan (CT scan, Siemens AG, Erlangen, Germany) of Gage's skull acquired by Ratiu and Talos (2004a,b) and the dimensions of the original tamping iron to create a tridimensional model of the bar passing through Gage's skull (diameter 31.75 mm). The bar entered under the left zygomatic bone and passed through the greater and lesser wings of the sphenoid bone, creating the hole we observed near the midline of the skull in the left frontal bone. We then registered Gage's skull to the Montreal Neurological Institute (MNI) space (MNI152 nonlinear 6th generation; http://www.bic.mni.mcgill.ca) using the affine and elastic deformation provided in the MIPAV v5.3.4 software package (http://mipav.cit.nih.gov) and the following anatomical landmarks: vertex, nasion, subnasal point, left and right supra-auricular point, maximum occipital point, lateral pterygoid plate, and external occipital protuberance. After registration, the simultaneous display of the trajectory of the bar and the MNI152 brain enabled us to estimate the extent of the lesion (Fig. 1).

Louis Victor Leborgne (1809-1861)

In 1839, Leborgne, a 30-year-old Frenchman, was admitted to Bicêtre Hospital following the sudden loss of his ability to speak. Leborgne was born in Morêt-sur-Loing and lost his mother at the age of 3. He moved to Paris with his family when he was 11 years old, and it is very likely that he received a formal education, his father being a teacher and all his siblings literate (Domanski 2013). He had epilepsy in his youth and lived at home until he became mute. At the time of his admission to the hospital he was unmarried, and his father died shortly thereafter, which may explain why he remained an inmate at Bicêtre Hospital for 21 years. His condition eventually deteriorated, and he became paralyzed on his right side, spending the last 7 years of his life bedridden. In 1861, he developed gangrene in his right leg and was transferred to the surgical ward. Here, he was seen by the attending doctor, Paul Broca, who could do nothing to save his life. Broca had just returned from a meeting of the Société d'Anthropologie de Paris where Ernest Auburtin presented the case of Monsieur Cullerier, a patient who had shot himself in the head and was admitted to Saint-Louis hospital with an open wound to his forehead. Auburtin took this opportunity to test the hypothesis that speech was localized in the frontal lobe, as suggested by his father-in-law, Jean-Baptiste Bouillaud, and by Franz Joseph Gall before him. He applied light pressure with a blade to the wounded man's frontal lobe, and his speech "suddenly terminated; a word that had been commenced was cut in two. The faculty of speech reappeared as soon as the compression ceased" (Auburtin 1861).
Broca saw in Leborgne the opportunity to confirm at the autopsy table Auburtin's prediction about speech localization. Indeed, he found a lesion in the posterior third of the left inferior frontal gyrus (Fig. 2). Broca presented his work to the Société d'Anthropologie and published his findings the same year (Broca 1861a,b). Broca's report, although certainly not the first on the topic (see Auburtin 1863 for a review of the cases reported before Broca), served as a signpost for the beginning of the modern study of cerebral localization. Leborgne's brain has been preserved in the Dupuytren Museum in Paris for the past 150 years. In 2007, Dronkers and colleagues used a 1.5 T MRI scanner (General Electric Signa Echospeed HDX LCC Magnet 8.2.5) to acquire T1-weighted (1 × 1 × 1 mm) images of Leborgne's brain (Dronkers et al. 2007). With these images, it was possible to define Leborgne's lesion using automated methods for lesion identification (ALI) (Seghier et al. 2008). ALI automatically classifies T1 maps into a set of probabilistic maps of gray matter, white matter, cerebrospinal fluid, and atypical tissue. The probability of damage refers to the likelihood that the damage occurred within a given voxel; a probability of 50%, for example, means an even chance that the tissue in that voxel is damaged. A voxel was therefore classified as damaged when a probability >50% was detected (Fig. 2). Leborgne's brain was registered to the MNI152 using the affine and elastic deformation provided in the Statistical Parametric Mapping 8 software package (SPM8; http://www.fil.ion.ucl.ac.uk); a mask of the lesion was used to exclude the contribution of the damaged voxels to the registration (Friston et al. 1995; Brett et al. 2001).

Henry Gustave Molaison (1926-2008)

Molaison had petit mal seizures that began at age 10 and grand mal seizures that began at age 15. During adolescence, the epileptic attacks became more severe and were uncontrolled with pharmacological treatment (Mauguiere and Corkin 2015). His family doctor advised his parents to consult William Beecher Scoville, a neurosurgeon at the Hartford Hospital in Connecticut, USA. At that time, Scoville was performing psychosurgical procedures on patients with psychosis, consisting of the unilateral removal of medial temporal lobe structures. Scoville made the fortuitous observation that the operation was effective in reducing seizures in 2 psychotic women who had epilepsy (Scoville et al. 1953). Molaison's EEG results did not indicate an epileptic focus but showed diffuse bilateral activity, on the basis of which Scoville decided to perform the experimental procedure in both the left and right medial temporal lobes. The treatment palliated his seizures, but unexpectedly left him with a severe and lasting anterograde amnesia (Scoville and Milner 1957). This declarative memory impairment affected his ability to record new events and facts postoperatively. Molaison was able to maintain information online for about 30 s, but his ability to convert short-term memories into long-term memories was lost (Corkin 1984). In 1993, Corkin and colleagues collected T1-weighted MRI images (1 × 1 × 1 mm) of Molaison's brain using a 1.5 T scanner (General Electric Signa, Milwaukee, WI, USA) (details of the acquisition in Corkin et al. 1997). We used these images to define the lesions in Molaison's brain using ALI (Seghier et al. 2008) (Fig. 3). The lesion analysis and registration to the MNI152 were the same as for Leborgne's data set.
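As an illustration of the >50% classification rule described above, here is a minimal sketch (our own illustration, not ALI's implementation; the filename is hypothetical, and the nibabel library is assumed for image I/O):

```python
import numpy as np
import nibabel as nib  # widely used neuroimaging I/O library

# Hypothetical ALI output: per-voxel probability of atypical (damaged) tissue.
prob = nib.load("ali_atypical_tissue_probability.nii.gz").get_fdata()

# Classify a voxel as damaged when its probability of damage exceeds 50%.
lesion_mask = prob > 0.5
print("lesion extent (voxels):", int(lesion_mask.sum()))
```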
Mapping Disconnections in Phineas Gage, Louis Victor Leborgne, and Henry Gustave Molaison

The next step in the analysis was to map lesions from each patient onto tractography reconstructions of white matter pathways obtained from a group of healthy controls. We first obtained diffusion data sets and tract reconstructions, then used complementary approaches to map the disconnections (tract specific vs. data-driven), and later conducted meta-analyses for each patient to validate the disconnection results with complementary fMRI activation studies published in the literature.

Diffusion-Weighted Imaging Acquisition

We recruited 129 healthy, right-handed volunteers (59 male, 70 female) aged 18-79 years, and diffusion MRI scans were obtained from each participant. We acquired 60 contiguous near-axial slices on a 3T GE Signa HDx TwinSpeed system (General Electric, Milwaukee, WI, USA) with the following parameters: rostrocaudal phase encoding, voxel size 2.4 × 2.4 × 2.4 mm, matrix 128 × 128, slices 60, NEX 1, TE 93.4 ms, b-value 3000 s/mm², 60 diffusion-weighted directions, and 7 non-diffusion-weighted volumes, using a spin-echo EPI sequence. Cardiac gating was applied with an effective TR of 20/30 R-R intervals. At each slice, raw diffusion-weighted data were simultaneously registered and corrected for subject motion and geometrical distortions using ExploreDTI (http://www.exploredti.com; Leemans and Jones 2009). Standard diffusion tensor tractography does not allow the reconstruction of the 3 branches of the superior longitudinal fasciculus (SLF I, II, and III) because of the crossing of the dorsal association fibers with commissural and projection fibers (Thiebaut de Schotten, ffytche et al. 2011; Dell'Acqua and Catani 2012). Hence, we used spherical deconvolution to estimate multiple orientations in voxels containing crossing fibers and visualize the 3 branches of the SLF (Alexander 2006). A modified (damped) version of the Richardson-Lucy algorithm for spherical deconvolution (Dell'acqua et al. 2010) was employed using StarTrack software (http://www.natbrainlab.com), a freely available MATLAB 7.8 (http://www.mathworks.com) toolbox. Algorithm parameters were chosen as previously described. A fixed fiber response corresponding to a shape factor of α = 1.5 × 10⁻³ mm²/s was chosen. Fiber orientation estimates were obtained by selecting the orientation corresponding to the peaks (local maxima) of the fiber orientation distribution (FOD) profiles. To exclude spurious local maxima, we applied an absolute and a relative threshold. A first "absolute" threshold excluded small local maxima due to noise or isotropic tissue; this threshold was set at 3 times the amplitude of a spherical FOD obtained from an isotropic gray matter voxel. A second "relative" threshold of 8% of the maximum amplitude of the FOD was then applied to remove the remaining spurious local maxima among those exceeding the absolute threshold (Dell'acqua et al. 2010).

Tractography

Whole brain tractography was performed from brain voxels with at least 1 fiber orientation. Streamlines were reconstructed using a modified Euler integration algorithm. In regions with crossing white matter bundles, the algorithm followed the orientation vector of least curvature (Schmahmann et al. 2007). Streamlines were halted when a voxel without fiber orientation was reached or when the curvature between 2 steps exceeded a threshold of 45°.
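The two-threshold rule for FOD peaks can be summarized with a short sketch (our paraphrase of the procedure described above, not the StarTrack source code):

```python
import numpy as np

def filter_fod_peaks(peak_amplitudes, gm_isotropic_amplitude,
                     abs_factor=3.0, rel_fraction=0.08):
    """Discard spurious FOD local maxima using an absolute threshold
    (abs_factor times the amplitude of an isotropic gray matter FOD)
    followed by a relative threshold (rel_fraction of the largest
    peak surviving the absolute threshold)."""
    peaks = np.asarray(peak_amplitudes, dtype=float)
    surviving = peaks[peaks >= abs_factor * gm_isotropic_amplitude]
    if surviving.size == 0:
        return surviving
    return surviving[surviving >= rel_fraction * surviving.max()]

# Example: three candidate peaks in one voxel, isotropic reference 0.1.
# The smallest peak (0.05) falls below the absolute threshold of 0.3.
print(filter_fod_peaks([0.9, 0.35, 0.05], gm_isotropic_amplitude=0.1))
```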
Atlas-Based Analysis of Disconnection

For this analysis, we created an atlas of the human brain connections from the 129 healthy participants according to methods described in previous work (Thiebaut de Schotten, ffytche et al. 2011; Catani, Dell'acqua, Vergani et al. 2012). Tractography dissection of the fornix, cingulum, uncinate fasciculus, inferior longitudinal fasciculus, and inferior fronto-occipital fasciculus was performed using a multiple region-of-interest (ROI) approach (Catani and Thiebaut de Schotten 2008). The anterior, posterior, and long segments of the arcuate fasciculus were dissected using a 2-ROI approach as described by Catani et al. (2007), and the cortico-spinal tract and optic radiations were dissected following the guidelines provided by Thiebaut de Schotten, ffytche and colleagues (2011). The anterior thalamic and fronto-striatal projections were reconstructed using previously published methods (Behrens et al. 2003; Cohen et al. 2009). We followed earlier reports in dissecting the frontal aslant tract, orbitopolar tracts, and frontal superior and inferior longitudinal fasciculi (Catani, Dell'acqua, Vergani et al. 2012; Thiebaut de Schotten et al. 2012). We isolated the 3 branches of the superior longitudinal fasciculus (SLF I, II, and III) using a multiple ROI approach (Thiebaut de Schotten, Dell'Acqua et al. 2011). In total, 22 tracts were reconstructed (see Fig. 4 for a visual summary of all these connections). For each tract, binary visitation maps were created by assigning each voxel a value of 1 or 0, depending on whether the voxel was intersected by the streamlines of the tract. For each participant, convergence (CS) maps with contrast for white matter (Dell'acqua et al. 2006) were registered to the MNI152 template provided with the FMRIB Software Library package (FSL, http://www.fmrib.ox.ac.uk/fsl/). For the registration, we combined affine with diffeomorphic deformations (Avants et al. 2007; Klein et al. 2009) using Advanced Normalization Tools (ANTs, http://www.picsl.upenn.edu/ANTS/). Binary visitation maps of each dissected tract were normalized to MNI space using both affine and diffeomorphic deformations. Normalized binary visitation maps were then averaged to create percentage overlap maps, and we used 50% overlap maps for the localization and quantification of the lesions. We quantified the severity of the disconnection by measuring the proportion of the tract disconnected (Thiebaut de Schotten et al. 2008) using Tractotron software (http://www.brainconnectivitybehaviour.eu). The severity of the disconnection was converted into z-values, allowing us to assess which tracts were statistically more damaged (z > 1.96; 2-tailed P < 0.05). All tracts were included in the z-score calculation.

Lesion-Based Approach to Mapping Disconnection

The inverse of the affine and diffeomorphic deformations calculated as described above was used to register the normalized lesions of Gage, Leborgne, and Molaison to the native space of the 129 participants. In each healthy dataset, Tractotron software (http://www.brainconnectivitybehaviour.eu) allowed us to use the registered lesions as seed points to track streamlines passing through the damaged regions. We created a binary visitation map of the streamlines intersecting the lesion. These maps were normalized to MNI space using the affine and diffeomorphic deformations calculated above.
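A minimal sketch of this quantification (our illustration of the logic, not Tractotron's code): for each tract's binarized 50% overlap map, compute the proportion of voxels intersected by the lesion, then z-score those proportions across all 22 tracts.

```python
import numpy as np

def disconnection_severity(tract_masks, lesion_mask, z_crit=1.96):
    """tract_masks: dict of name -> boolean 3-D array (binarized 50% overlap map);
    lesion_mask: boolean 3-D array in the same MNI space.
    Returns name -> (proportion disconnected, z-score, significant?)."""
    names = list(tract_masks)
    prop = np.array([(tract_masks[n] & lesion_mask).sum() / tract_masks[n].sum()
                     for n in names])
    z = (prop - prop.mean()) / prop.std()  # all tracts enter the z computation
    return {n: (p, zz, zz > z_crit) for n, p, zz in zip(names, prop, z)}
```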
We created percentage overlap maps by summing at each point in MNI space the normalized visitation maps of each subject; hence, the value in each voxel of the visitation maps varied according to intersubject variability. These maps were projected onto the average 3D rendering of the MNI152 using the Brainvisa 4.3 package (http://brainvisa.info) (Figs 6-8).

Meta-analyses

To identify the functional networks damaged in the 3 cases, we applied a meta-analysis approach to functional MRI studies using the method described by Yarkoni et al. (2011; http://neurosynth.org). We searched for brain regions that are consistently activated in studies that load highly on 3 features: "decision-making" (see Supplementary References 1) and "emotion" (see Supplementary References 2) for Gage, and "fluency" (see Supplementary References 3) for Leborgne. For Molaison, we used a previously published meta-analysis, which reported areas separating activation related to encoding and retrieval of episodic memories (Spaniol et al. 2009). The results were superimposed on the 3D reconstruction of the MNI152 images (Figs 6-8).

Results

Table 1 lists for each case the percentage of damage to each tract (visually represented in Fig. 4) and the corresponding z-score. For Gage, the lesion-based analysis indicated direct damage to the orbitofrontal cortex, dorsolateral prefrontal cortex, and temporopolar cortex. In addition, the lesion disconnected several areas not directly affected by the tamping iron. These areas include the frontal pole, posterior inferolateral frontal cortex, anterior and posterior cingulate, pre-supplementary motor area, precuneus, posterior temporal, and dorsolateral occipital cortices. Other subcortical structures that were partially disconnected from the frontal lobe were the thalamus, the striatum, and the amygdala (Fig. 6a). The meta-analysis of fMRI studies on decision-making and emotional processing showed an extended network that overlapped the tractography-derived structural networks, except for the inferior parietal regions (Fig. 6b). The lesion-based analysis indicated that the damage in Leborgne's brain affected an extended network involving not only Broca's territory but also distant regions in the frontal, parietal, and temporal lobes (Fig. 7a). The meta-analysis of fMRI studies reporting activations for verbal fluency tasks showed a functional network included in the structural network identified by tractography (Fig. 7b). Other affected regions (e.g., primary motor cortex) were not part of the verbal fluency meta-analysis and may account for Leborgne's right hemiplegia. The lesion-based analysis showed that damage in Molaison's brain affected a network of areas, including the medial temporal cortices, retrosplenial cortex, orbitofrontal cortex, and gyrus rectus (Fig. 8a). Other disconnected subcortical regions included the mammillary bodies and septal nuclei. Using the meta-analysis of fMRI studies carried out by Spaniol et al. (2009), we found that the areas activated during encoding and retrieval of declarative memories overlapped the cortical projections of the tracts compromised in Molaison's brain. These areas included the posterior parahippocampal and retrosplenial cortices, anterior and posterior cingulate gyrus, the dorsomedial prefrontal cortex, precuneus, orbitofrontal cortex, mammillary bodies, and anterior thalamic nuclei (Fig. 8b).
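The overlap between the structural (disconnection) and functional (meta-analytic) maps is reported qualitatively in the figures; one simple way such overlap could be quantified, offered here only as an illustration and not as the authors' procedure, is a Dice coefficient between the two binarized maps:

```python
import numpy as np
import nibabel as nib

def dice(map_a_path, map_b_path, thr_a=0.0, thr_b=0.0):
    """Dice overlap between two binarized maps in the same (MNI) space.
    Paths and thresholds are placeholders for the user's own data."""
    a = nib.load(map_a_path).get_fdata() > thr_a
    b = nib.load(map_b_path).get_fdata() > thr_b
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```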
Discussion

Modern approaches to brain function rely on the ability to map the complexity of brain networks underlying cognition and behavior. In our study, we revisited 3 seminal cases in the history of neurology to better understand the contribution of disconnection to their cognitive and behavioral syndromes. Our analysis showed that in all 3 cases, the damage compromised both short- and long-range connections, suggesting that disconnection mechanisms operated beyond the lesion site. Below we discuss these findings in greater detail for each patient.

Phineas Gage

The uncinate fasciculus connects the anterior temporal regions (entorhinal cortex, amygdala, temporopolar cortex) with the medial and lateral orbitofrontal cortex (Crosby et al. 1962). Temporal regions connected by the uncinate fasciculus are involved in episodic and semantic memory and emotion (Von Der Heide et al. 2013; Murray et al. 2014). The orbitofrontal regions are associated with response inhibition, mood regulation, and reward (Berlin et al. 2004; Kramer et al. 2013; Kumfor et al. 2013). Lesions to the anterior temporal and orbitofrontal regions or their connections often cause mood and behavioral symptoms. In traumatic brain injury, for example, patients with lesions to this anterior orbitofrontal-temporal network show socially inappropriate and disinhibited behavior, impulsivity, compulsive eating, reduced control of emotional response, reduced empathy, rigidity, and perseveration (Zappala et al. 2012; Dal Monte et al. 2014). Patients with anterior temporal lobe epilepsy may also manifest delusions and hallucinations. Damage to the uncinate fasciculus and its cortical projections has been reported in children with conduct disorder (Sarkar et al. 2013) and adults with psychopathy (Craig et al. 2009). Hence, the disconnection of the left uncinate fasciculus in Gage may account for some of the behavioral manifestations reported by Harlow (1868). The uncinate fasciculus has also been associated with semantic deficits in patients with neurodegenerative disorders. In the original accounts, Harlow did not report whether Gage showed impairment in naming or semantic knowledge. One may speculate that damage to the uncinate fasciculus was limited to the medial "limbic" portion of the uncinate fasciculus, leaving the most lateral projections to Broca's area intact. The frontal intralobar networks include 3 sets of connections between different regions of the frontal lobe: the fronto-orbitopolar tract, the frontal aslant tract, and the frontal superior and inferior longitudinal tracts (Catani, Dell'acqua, Vergani et al. 2012; Thiebaut de Schotten et al. 2012). The frontal orbitopolar tract represents a transmodal network for binding memories and emotions with olfactory, taste, visual, and auditory inputs. Multisensory association and limbic integration are important to guide complex cognitive and behavioral functions, such as reward behavior associated with sensory and abstract reinforcers (e.g., monetary gain and loss) (Kringelbach 2005) or response inhibition (e.g., go/no-go tasks) (Iversen and Mishkin 1970). The frontal aslant tract connects Broca's territory with medial frontal areas (including the pre-supplementary motor area and cingulate cortex) (Lawes et al. 2008; Oishi et al. 2008; Ford et al. 2010; Guevara et al. 2011). In patients with traumatic brain injury, damage to the frontal aslant tract is correlated with impaired
response inhibition (Bonnelle et al. 2012).

Figure 5. Major tracts that were damaged in Gage (damage affected at least 30% of the tract's volume, z-score = 1.7), in Leborgne (damage affected at least 55% of the tract's volume, z-score = 1.29), and in Molaison (damage affected at least 5% of the tract's volume, z-score = 1.39).

Interestingly, the strength of activation of the inferior frontal gyrus and the pre-supplementary motor area during fMRI-based response inhibition tasks has been associated with recurrent antisocial behavior (Aharoni et al. 2013). Other deficits include speech initiation problems, from which patients often recover due to the bilateral distribution of this tract (see the discussion of Leborgne below). The frontal superior and inferior longitudinal tracts connect regions of the frontal lobe involved in decision-making at different levels, from a low processing level in the posterior frontal regions to a high processing level in more anterior frontal regions (Badre and D'Esposito 2007; Badre 2008; Christoff et al. 2009). Overall, these longitudinal tracts permit the anatomical binding necessary for complex cognitive control (Koechlin et al. 1999, 2003; Koechlin and Summerfield 2007). While more posterior frontal regions appeared intact in Gage's brain, damage to the connections between posterior and anterior frontal regions could explain his deficits in high-level cognitive control. The associative circuit subsumes the dorsolateral prefrontal cortex, dorsal caudate nucleus, internal pallidum, and ventral anterior thalamic nuclei. Lesions to the associative circuit impair attention, working memory, strategy formation, and cognitive flexibility (Stuss and Benson 1984). The limbic circuit incorporates the medial and orbitofrontal cortices, ventral striatum (i.e., nucleus accumbens), external and internal pallidum, and mediodorsal thalamic nucleus. Functions of the limbic loop overlap with those of the fronto-orbitopolar tract described above. Gage had significant damage to this loop, although the exact extent of the lesion is difficult to quantify using our indirect approach. Overall, our analysis uncovered extensive frontal lobe damage in Gage's brain. This abnormality extended beyond the orbitofrontal and dorsolateral cortices, which were directly damaged by the bar. The atlas-based approach identified several tracts affected by the lesion, and the lesion-based approach showed that the dysfunction impacted an extended network of areas that are commonly activated during the performance of decision-making, emotion processing, and reward tasks.

Louis Victor Leborgne

Soon after Broca's (1861a,b) publication, the concept of a center for spoken language was harshly criticized by Pierre Marie (1906), Henry Head (1926), and many others. Their dissent was based on empirical evidence of the existence of patients with non-fluent aphasia without damage to Broca's area. Broca was also criticized for not performing dissections of the whole brain but limiting his investigation to the cortical surface. Indeed, when computerized tomography (CT) and magnetic resonance imaging (MRI) scans of Leborgne's brain were published, it was evident that the lesion extended well beyond the inferior frontal gyrus to include large regions of the underlying white matter (Castaigne et al. 1980; Signoret et al. 1984; Cabanis et al. 1994; Dronkers et al. 2007).
Our study confirmed that the extensive lesion in Leborgne's brain affected almost all dorsolateral tracts of the left hemisphere, including the arcuate fasciculus and the frontal aslant tract, both of which support language. The long segment of the arcuate fasciculus connects Wernicke's with Broca's region, whereas the anterior segment of the arcuate fasciculus (or third branch of the superior longitudinal fasciculus) connects Broca's to Geschwind's territory (in the inferior parietal lobule). In addition, the frontal aslant tract connects Broca's territory to the pre-supplementary motor area. These 3 tracts constitute a complex network dedicated to speech production (Roelofs 2014). In Leborgne, whose only verbal output was limited to a few words, the lesion to these 3 tracts explains his poor verbal fluency. Further, our analysis suggested that Leborgne's pathology extended to tracts that are not part of the language system. The left cortico-spinal tract, for example, was damaged at different levels (corona radiata, internal capsule), which accounts for his right hemiplegia. It is difficult to say whether damage to other tracts, such as the uncinate fasciculus, frontal inferior longitudinal fasciculus, fronto-orbitopolar tracts, and superior longitudinal fasciculus, had an impact on Leborgne's behavior. Broca did not report any other significant impairment and noted that Leborgne had normal intelligence. There is no mention, for example, of limb apraxia, which is usually associated with left hemisphere damage to the superior longitudinal fasciculus. Similarly, damage to the frontal orbitopolar tract may have caused behavioral problems that were not reported in the case notes. It is also true that Broca's knowledge of the patient was minimal and limited to a surgical consultation for the gangrenous leg. In the absence of more detailed clinical notes, speculation about possible symptoms caused by tracts not directly involved in language is risky.

Henry Molaison

Molaison's groundbreaking case established that bilateral medial temporal lobe lesions cause severe amnesia (Scoville and Milner 1957). MRI studies carried out in 1992 and 1993 showed that the resection included the medial temporal polar cortex, most of the amygdaloid complex, and all of the entorhinal cortex (Corkin et al. 1997). Also removed were the anterior ∼2 cm of the dentate gyrus, hippocampus, and subicular complex, and the rostral portions of the perirhinal and parahippocampal cortices. Molaison's memory impairment was more severe than that of amnesic patients with selective hippocampal lesions (Zola-Morgan et al. 1994), suggesting that damage to his entorhinal, perirhinal, and parahippocampal cortices exacerbated the deficit. Lesion studies in monkeys and fMRI studies in humans provide abundant evidence that these areas are recruited during the performance of declarative memory tasks, yet the contribution to memory processes from the preserved caudal portion of Molaison's perirhinal and parahippocampal cortices could not support normal memory performance (Corkin et al. 1997; von Allmen et al. 2014). Our findings are consistent with the view that medial temporal lobe structures are part of an extended network of cortical and subcortical structures that support memory consolidation, storage, and retrieval (Scoville and Milner 1957; Warrington 1985; Markowitsch 2000; Gaffan et al. 2001, 2002; Moulin et al. 2013; Annese et al. 2014).
The fornix is a medial structure composed of commissural and projection fibers. The majority of the fibers of the fornix connect the hippocampus with the mammillary bodies, the anterior thalamic nuclei, and the hypothalamus; the fornix also has a small commissural component known as the hippocampal commissure (Crosby et al. 1962; Aggleton 2008; Nieuwenhuys et al. 2008). Damage to fibers of the fornix in Molaison may have contributed to his anterograde memory deficits, but the profound memory impairment cannot be explained entirely by the fornix disconnection. Aggleton (2008) showed that the anterograde amnesia observed with disconnection of the fornix without damage to the medial temporal lobe is not as severe as that seen in patients with bilateral hippocampal damage. The milder impairment with fornix lesions may stem from the fact that information from the hippocampus can travel to other structures of the limbic system through alternative pathways, such as the ventral cingulum (Vann and Albasser 2011) and the uncinate fasciculus (Metzler-Baddeley et al. 2012). Both pathways were affected in Molaison. The damage to the connections to the orbitofrontal cortex (via the uncinate fasciculus) and precuneus (via the posterior cingulum) is noteworthy, because these areas are affected in neurodegenerative disorders involving memory, such as mild cognitive impairment and early Alzheimer disease (Acosta-Cabronero et al. 2012). For the most part, Molaison's preoperative semantic knowledge was intact and did not deteriorate from 1953 (preoperatively) to 2000 (Kensinger et al. 2001; Steinvorth et al. 2005). He did, however, show deficits on category and letter fluency tasks and tests of definitions (Kensinger et al. 2001; Schmolck et al. 2002). Several factors may account for his poor performance: low socioeconomic background, substandard education, slow response times, and damage to the temporal neocortex. Semantic memory relies on a large, distributed cortical network that includes areas in the inferior frontal gyrus and the temporal pole bilaterally (Martin and Chao 2001; Hoffman et al. 2014). The latter 2 regions are interconnected through the uncinate fasciculus and the anterior commissure, which were damaged in Molaison's brain. Bilateral abnormalities of the uncinate fasciculus have been associated with semantic deficits in neurodegenerative disorders (Mummery et al. 1999; Compston 2011a,b; Catani, Mesulam et al. 2013), and the disconnection of these tracts may have contributed to his shortcomings on certain semantic tasks. Axonal tracing studies in animals show that the anterior fibers of the anterior commissure also project to olfactory regions (e.g., olfactory bulb, anterior perforated substance) (Crosby et al. 1962). These anterior olfactory-linked fibers seem to be present in humans (Kiernan 1998; Di Virgilio et al. 1999), and the damage to these tracts that we have identified in H.M. may have contributed to his deficit in odor quality discrimination and recognition (Eichenbaum et al. 1983). MRI scans and a more recent postmortem examination of Molaison's brain confirmed that the medial temporal stem was partially excised and that white matter anatomy appeared significantly altered (Augustinack et al. 2014). It is likely that the surgery damaged other small white matter pathways in addition to those detected by our tractography analysis.
In particular, it is highly probable that the damage included the perforant fibers between the hippocampus and the entorhinal cortex, which are important in memory processes (Witter et al. 2000), and the fibers of the stria terminalis, which link the amygdala to the hypothalamus and are involved in the regulation of the adrenergic response to acute stress. In addition, the mammillary nuclei, which receive projections from the hippocampi through the fornix, were recently reported as shrunken in a postmortem study of Molaison's brain (Annese et al. 2014).

General Discussion and Limitations

This study demonstrated the validity of applying an atlas-based approach to reappraise the effects of disconnection in 3 historic patients for whom data on lesion location and clinical deficits were available. Still, a note of caution is in order because each step used in our analysis presented challenges that could have generated artifactual results. The absence of the brain, as in the case of Phineas Gage, and the deformation of Leborgne's brain due to preservation in a jar for more than 150 years posed difficulties when we tried to map the real extent of the lesions (Clark et al. 2003). Further, Leborgne's case is problematic because 21 years elapsed between the onset of his language deficits and his death. This is a limitation, as Leborgne's brain lesion may have become more extensive, or additional lesions unrelated to his language deficits may have occurred in the subsequent years of his adult life. In addition, delineating a precise margin between pathological and normal tissue is particularly challenging (Seghier et al. 2008). Once the borders of the lesion were reconstructed, mapping the lesion onto the tracts was prone to biases related to misregistration, incorrect estimation of the size of the tracts, and interindividual variability in white matter anatomy (Crinion et al. 2007; Jones et al. 2013). The lack of diffusion-weighted images for the 3 cases led us to gather indirect anatomical information from a data set of 129 normal men and women aged 18-79 years. To obtain an average representation of the anatomy of each tract from the whole sample, several steps were necessary, including registration, overlapping, and thresholding. Thus, while the atlas generated after these steps provided an overall estimation of the tracts' anatomy, it may not precisely match the exact individual anatomy of the 3 patients. Further, the diffusion-weighted imaging that we used for this analysis is based on a population of 18- to 79-year-old healthy participants and is not age- or sex-matched to each historic patient. Age-related changes in volume and diffusion indices of white matter pathways have been reported in previous studies (Stadlbauer et al. 2008; Rojkova et al. 2015). Similarly, sex-related differences have been reported for the right arcuate fasciculus (Catani et al. 2007). These differences can lead to under- or overestimation of the exact extension of the lesion. For this reason, we have included in our interpretation those tracts that showed a trend towards significant disconnection. It should also be noted that tract estimation was based on tractography, which has many flaws. Even with more advanced diffusion methods, like spherical deconvolution, artifacts can occur due to partial volume effects, difficulty in reconstructing complex anatomical configurations (e.g., crossing, kissing, and fanning fibers), and low spatial resolution (Dell'Acqua and Catani 2012; Kristo et al. 2013).
These drawbacks could lead to under- or overestimation of the real anatomy of the tracts, with consequences for the atlas-based approach. For example, if the real extent of a tract is underestimated, the atlas-based analysis may incorrectly indicate a relative sparing of the underestimated tract. White matter pathways also show a descending gradient of intersubject variability going from the stem portion (>90% of the population studied) of the white matter pathways to the most peripheral zones (<50% of the population studied; Thiebaut de Schotten, ffytche et al. 2011). In our analysis, we chose probabilities above 50% in order to consider only the almost invariable anatomical core of each single tract and not its periphery (Thiebaut de Schotten, ffytche et al. 2011). Another limitation is the lack of a precise quantification of the severity of the disconnection. We used the proportion of the tract intersected by the lesion as a surrogate measure of tract damage. While this measure can provide an approximate estimate of the overall involvement of the tract, it does not indicate whether the lesion affected critical fibers. For example, a small lesion located in the internal capsule could lead to greater functional impairment of the functions related to the cortico-spinal tract than a larger lesion in the corona radiata. Unfortunately, the historic descriptions of the behavioral symptoms manifested by Gage and Leborgne remain incomplete. Indeed, these observations occurred at the dawn of behavioral neurology, and it is very difficult to back-trace definitive clinico-anatomical conclusions. Further, many connections of the human brain are unknown, and some tracts were not included in the analysis (most of the U-shaped fibers). Clinico-anatomical correlations were particularly difficult in Gage and Leborgne due to the lack of detailed information on the clinical manifestations and evolution of their disorders (e.g., extent of recovery). Lesions and symptoms change over time and could lead to modifications of the link between brain and behavior (Kolb and Gibb 2014; Papagno and Vallar 2014). For example, while Gage recovered his behavioral functions to some extent and could even hold a job, he eventually developed epilepsy and other symptoms (Macmillan 2002). Recovery might be related to "reserve" networks, such as preserved structures in the right hemisphere (Geschwind 1965a,b; Forkel et al. 2014), which might account for symptom improvement (Duffau 2014). Finally, mapping symptoms onto single tracts is subject to some criticisms. Cortical lesions by definition destroy the white matter tracts associated with them (Geschwind 1965a,b), and in pure white matter lesions the cortex is not affected. This suggests that a network dysfunction is the common denominator for all brain disorders, and that a tract-based nomenclature should be preferred to cortical localizationism. However, syndromes certainly result from a dysfunction of an extended network of cortical and subcortical areas connected by several tracts. Hence, mapping the disconnection in patients should not lead to an underestimation of the role of the cortex. Indeed, our lesion-based analysis (Figs 6a, 7a, and 8a) revealed cortical regions that were directly or indirectly affected by the disconnection.

Conclusions

Today, as 50 years ago, the clinico-anatomical correlation method remains pivotal in our understanding of the complex relations between brain and behavior (Mah et al. 2014).
The disconnection paradigm, as envisaged by Geschwind in his landmark paper and revitalized today by the availability of methods for mapping connections in the living human brain, is key to a comprehensive approach to probing its complexity. Our findings suggest that social behavior, language, and memory depend on the coordinated activity of different regions rather than on single areas in the frontal or temporal lobes. While the exact contribution of cortical and disconnection mechanisms remains to be defined with more precision (Critchley 1953; Mesulam 1981, 2005; Damasio 1989), our findings suggest that insights from famous cases that greatly contributed to the advance of neurological knowledge should not be considered to be set in stone.

Funding

This study represents independent research in part funded by the National Institute for Health Research (NIHR) Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King's College London. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health. Funding to pay the Open Access publication charges for this article was provided by the Wellcome Trust.

Notes

We thank the Natbrainlab (www.natbrainlab.com) for insightful discussion, the French Agence Nationale de la Recherche for their support of this project (project PHENOTYPES, no. ANR-13-JSV4-0001-01), and the Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and Institute of Psychiatry, King's College London. M.C. is the recipient of a New Investigator Award from the Wellcome Trust (103759/Z/14/Z). Additional funding comes from the program "Investissements d'avenir" ANR-10-IAIHU-06. N.F.D. received a Research Career Scientist Award from the US Department of Veterans Affairs Clinical Sciences Research and Development Program; the content is solely the responsibility of the authors and does not necessarily represent the official views of the Department of Veterans Affairs or the United States government. This article was also prepared within the framework of the Basic Research Program at the National Research University Higher School of Economics (HSE) and supported within the framework of a subsidy granted to the HSE by the Government of the Russian Federation for the implementation of the Global Competitiveness Program. Conflict of Interest: None declared.
Design of Digital Twin Cutting Experiment System for Shearer

This study presents an advanced simulated shearer machine cutting experiment system enhanced with digital twin technology. Central to this system is a simulated shearer drum, designed based on similarity theory to accurately mirror the operational dynamics of actual mining cutters. The setup incorporates a modified machining center equipped with sophisticated sensors that monitor various parameters such as cutting states, forces, torque, vibration, temperature, and sound. These sensors are crucial for precisely simulating the shearer cutting actions. The integration of digital twin technology is pivotal, featuring a real-time data management layer, a dynamic simulation mechanism model layer, and an application service layer that facilitates virtual experiments and algorithm refinement. This multifaceted approach allows for in-depth analysis of simulated coal cutting, utilizing sensor data to comprehensively evaluate the shearer's performance. The study also includes tests on simulated coal samples. The system effectively conducts experiments and captures cutting condition signals via the sensors. Through time domain analysis of these signals, gathered while cutting materials of varying strengths, it is determined that the cutting force signal characteristics are particularly distinct. By isolating the cutting force signal as a key feature, the system can effectively distinguish between different cutting modes. This capability provides a robust experimental basis for coal rock identification research, offering significant insights into the nuances of shearer operation.

Introduction

While new clean energy technologies are experiencing rapid development, fossil energy continues to serve as the predominant global energy source. In 2022, there was a 1% increase in global energy demand compared to the previous year, with fossil fuels contributing 82% of the total energy supply. China's overall energy consumption rose to 5.41 billion metric tons of standard coal in 2022, marking a 2.9% increase from the preceding year. Longwall mining with a shearer stands out as the most widely used method among underground coal mining techniques. The efficiency and productivity of longwall mining operations hinge upon the cutting performance of the shearer along the longwall face [1,2]. As a crucial component of the fully mechanized mining system, the shearer plays a key role in enabling efficient and concentrated coal extraction processes. This advancement has notably diminished the frequency of safety incidents in coal mines and enhanced the occupational environment for miners. The successful deployment of intelligent control mechanisms for the shearer is essential for the potential automation of fully mechanized mining operations. Consequently, the technology for distinguishing between coal and rock has been the subject of widespread study and focus [3-5]. Current methods for identifying coal and rock primarily utilize the operational signals produced during the shearer's cutting of various coal and rock formations for classification purposes [6,7]. Thus, to yield significant insights, the experimental systems designed to simulate the shearer's cutting process must adhere to specific standards of resemblance and dependability.
The methodologies employed by researchers to conduct experimental investigations for coal and rock identification predominantly encompass:

• In situ experiments, where sensors and measurement instruments are deployed directly on the mining face to gather operational signals from coal and rock for subsequent analysis and evaluation [8];
• Surface experiments, involving the use of an actual coal mining apparatus to perform real-life cutting tests on coal and rock on the surface, thereby collecting a range of performance metrics [9-11];
• Simulated shearer drum cutting tests, wherein scholars have devised and fabricated a simulated shearer drum apparatus to replicate the cutting actions on coal and rock materials [12-14].

In situ collection directly at the coal mining face guarantees the authenticity of the data acquired, yet it is subject to the constraints of a high-risk and severe work setting. At the same time, devices used for signal collection must comply with explosion-proof standards, which escalates the experimental risk. Conducting experiments on the ground necessitates the availability of coal mining machinery, genuine coal and rock mediums, and pertinent apparatus, leading to relatively high testing expenses. On the other hand, the simulated shearer drum cutting experiment offers a cost-efficient approach to simulation experiments, although the reliability of this system remains a topic that warrants further examination. The simulated shearer drum experiment serves as an effective tool for analyzing and investigating the diverse information produced during the coal mining process, particularly in the study of coal and rock identification. For a precise replication of the shearer's operational state while cutting, both the experimental apparatus and the simulation materials, constructed based on similarity theory, must closely resemble the original model. Currently, comprehensive and systematic research on experiments for identifying coal and rock is lacking.

Digital twin technology involves creating a virtual model of a physical entity in a digital format, enabling bidirectional mapping, dynamic interaction, and a real-time connection between the physical and digital spaces, as shown in Figure 1. Through digital twins, the attributes, structure, state, performance, functionalities, and behaviors of physical entities are mapped into the digital world, forming highly realistic, dynamic, multi-dimensional, multi-scale, and multi-physical models. The evolution of the digital twin is divided into three stages [15]: the virtual model stage, the basic digital twin stage, and the adaptive digital twin stage. In the first stage, digital twin modeling of the physical entity is achieved; in the second stage, data intercommunication between the twin and the physical domain is achieved, including data transfer and interaction between different digital twin models as well as between the digital model and the physical model; the third stage is the adaptive digital twin, which aims to achieve precise prediction of the physical entity by the digital twin and full-process collaborative optimization control. Therefore, drawing on existing research and on similarity-theory principles, this paper outlines the development and construction of a digital twin cutting experiment system for shearers, aiming to collect performance data and realize the first two stages of digital twinning.
The digital twin cutting system is primarily divided into physical and digital spaces, with data exchange between the two spaces facilitated through IoT technology. The physical space consists of three parts: the cutting section, the experimental platform feed device, and the electrical control section. The experimental platform is modified from a machining center, with major modifications including the design of a simulated drum; a cutting-state detection sensor system; and the addition of three-dimensional cutting force, drum torque, vibration, and sound sensors. A grating scale sensor has been added to the feed device section of the experimental platform. The electrical control section has been enhanced with a worktable motor inverter and a cutting motor inverter, enabling speed control of these motors. The digital twin cutting experiment system block diagram is shown in Figure 2.
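The paper does not specify the transport used for the IoT link between the physical and digital spaces; as one plausible realization, here is a minimal sketch using the paho-mqtt client, where the broker address, topic name, and simulated sensor values are all our assumptions:

```python
import json
import random
import time

import paho.mqtt.client as mqtt  # common Python MQTT client


def read_sensors():
    # Stand-in for the real data acquisition layer; values are simulated here.
    return {
        "t": time.time(),
        "cutting_force_N": random.gauss(5000.0, 200.0),
        "drum_torque_Nm": random.gauss(300.0, 15.0),
        "vibration_g": abs(random.gauss(0.0, 0.5)),
    }


client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.connect("192.168.1.10", 1883)  # hypothetical broker on the lab network

for _ in range(1000):
    client.publish("shearer/cutting_state", json.dumps(read_sensors()))
    time.sleep(0.01)  # ~100 Hz stream from the physical to the digital space
```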
Principle of Similarity Theory
The similarity π theory can be stated as: "For a physical system that contains n physical quantities and k fundamental dimensions, these n physical quantities can be expressed as a functional relationship among (n − k) independent similarity criteria π_1, π_2, ..., π_{n−k}". This means that any physical equation

$f(x_1, x_2, x_3, \ldots, x_n) = 0$ (1)

can be rewritten according to the similarity π theory as

$F(\pi_1, \pi_2, \ldots, \pi_{n-k}) = 0$ (2)

Through this transformation, the original physical equation is converted into a criterion relation, simplifying the problem. When the prototype and the model are similar, if the similarity criteria maintain the same value at corresponding points and corresponding moments, then their π relations should also be identical, that is

$\pi_i = \pi_i^{\prime}, \quad i = 1, 2, \ldots, n-k$ (3)

The second similarity theorem indicates that in mutually similar phenomena, the similarity criteria do not need to be derived using similarity indicators. As long as the relationship equations of the various physical quantities are converted into the form of dimensionless equations, the terms of these equations are the similarity criteria. Let p_i represent the ith physical quantity in a system and m_i represent the corresponding physical quantity in another, similar system. The ratio of these two physical quantities is called the similarity coefficient (or transformation coefficient), denoted as C_i:

$C_i = p_i / m_i$ (4)

Equation (4) indicates that every physical quantity of a system is converted into the corresponding physical quantity in another system through the linear transformation of the parameter C_i. In the transformation, the transformation coefficients C_i for different physical quantities (such as the modulus of elasticity E and length L) can be different, but within the set of similar systems each transformation coefficient C_i is strictly constant. In similarity analysis, the different similarity coefficients C_i play the role of assigning values to different physical quantities (including geometric quantities). The choice of the similarity coefficients C_i depends on the nature of the problem under study and the experimental conditions, among other factors. Moreover, the similarity coefficients are constant between two similar systems but have different values for a third system that is similar to these two systems.

Design of the Simulated Drum
As the core cutting component in the coal mining machine's cutting process, designing a similar simulation around the cutting drum is key to ensuring that the experimental system and the prototype machine's working conditions are similar. The build process of the simulated cutting drum, based on similarity theory, is shown in Figure 4. Therefore, this thesis focuses on the drum structure of the coal mining machine as the main research subject, derives similarity criteria through MLT dimensional analysis, and studies the cutting mechanism of the drum during operation, as well as the related motion and structural parameters.
Parameters defining the drum's geometric structure, listed in Table 1, include its overall diameter (D), the external diameter of the blade (D_y), the depth of cut made by the drum (B), the angle of elevation of the spiral blade (α_y), the leading distance of the blade (L), the total number of blade heads (Z), the spacing between blades (S_y), the angle of wrap of the blade around the hub (β_y), the spacing of the picks (T_c), the angle at which the picks are mounted (γ), and the angle of pick inclination (λ_s). For material parameters, the focus lies on simulating the compressive strength (σ) and the density (ρ) of coal and rock, in line with the criteria for coal and rock identification. Operational parameters encompass the range of the swing angle (θ) of the rocker arm, the rotational velocity of the drum (n), and the speed of traction (v) [16]. Using the MLT basic dimensional system (i.e., mass, length, time), the values and dimensions of the relevant parameters of the prototype are listed. In the design of similar models, dimensionless physical quantities have the same values in the prototype and the model, making it unnecessary to derive related similarity criteria for them. From the analysis above, similarity criteria must be derived for a total of five parameters, listed in Table 2: D, n, ρ, v, and σ. According to the second theorem of similarity (the π theorem), with five similarity parameters and three basic dimensions, the number of π groups is two (5 − 3 = 2). Dimensional analysis shows that D, n, and ρ together include the three basic dimensions M, L, and T, and the determinant formed by them is not zero. They correspond, respectively, to a parameter related to the structure of the cutting part, a motion parameter of the coal mining machine, and a characteristic of the cut material. Therefore, these are selected as the basic physical quantities listed in the dimensional matrix exponent table. The exponential method is then used to analyze the dimensions of the system and obtain the linear homogeneous equations for the M, L, and T exponents.
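As an illustration of the exponent method just described, the short sketch below assembles the MLT dimensional matrix for D, n, ρ, v, and σ and extracts the dimensionless π groups from its null space. The MLT exponents encoded in the matrix follow directly from the units of the five parameters; the script is only a worked example of the calculation, not part of the experimental system.

```python
# Sketch of the exponent (pi-theorem) method for the five drum parameters.
# Assumed MLT exponents: D [L], n [1/T], rho [M/L^3], v [L/T], sigma [M/(L*T^2)].
from sympy import Matrix

params = ["D", "n", "rho", "v", "sigma"]
# Rows are the M, L, T exponents; columns follow the parameter order above.
dim = Matrix([
    [0,  0,  1,  0,  1],   # M
    [1,  0, -3,  1, -1],   # L
    [0, -1,  0, -1, -2],   # T
])

# Each null-space vector of the dimensional matrix gives the exponents
# of one dimensionless pi group.
for k, vec in enumerate(dim.nullspace(), start=1):
    terms = [f"{p}^{e}" for p, e in zip(params, vec) if e != 0]
    print(f"pi_{k} =", " * ".join(terms))
```

Running the sketch yields groups proportional to v/(nD) and σ/(ρn²D²), which is where the similarity criteria and coefficient relations quoted below come from.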
The similarity criteria obtained for the remaining dimensional parameters are

$\pi_1 = \dfrac{v}{nD}, \qquad \pi_2 = \dfrac{\sigma}{\rho n^2 D^2}$

Each π term is an invariant, and the similarity index is one according to the first similarity theorem. The similarity coefficient expressions therefore follow as

$C_v = C_n C_D, \qquad C_\sigma = C_\rho C_n^2 C_D^2$

The similarity coefficient for drum diameter, C_D, requires that the model drum diameter be ≤400 mm; therefore C_D is set to 1/3, as listed in Table 3. The simulated cutting material is made from coal dust and rock dust cut by the prototype machine, hence C_ρ equals 1. Based on the structure of the prototype shearer, it is determined that the simulated cutting drum is configured with teeth arranged in a sequential manner. The number of blade heads on the drum is set to two, corresponding to three cutting lines, with two teeth configured on each cutting line. On this basis, the model of the cutting drum is established, as shown in Figure 5.

The Device Structure of the Experimental Platform
The physical experimental platform was constructed through a comprehensive modification of the machining center model machine tool, involving multiple key components: the base, bed, lifting platform, saddle, workbench, crossbeam, and tool post support, among others. The base provides fundamental support for the machine tool, with its stability being crucial to the overall work efficiency and precision of the machine; the bed serves as the main frame of the machine tool, supporting the installation of the various parts. Positioned at the top of the bed and connected through dovetail guides, the crossbeam is equipped with a drum support at its front, facilitating the installation of other tools or devices. The lifting platform enhances the flexibility and functionality of the machine tool by enabling vertical movement of the workbench through a vertical screw connected to the nut on the base. The workbench, including the rotary table and saddle, plays a key role in performing specific tasks and can accommodate a variety of work demands.
This modification focuses on three main aspects:
• The design and improvement of the spindle part: the primary task is to develop a spindle part suitable for the experimental device, including designing an appropriate drum device and connecting it to the machine tool's spindle. This step is vital to ensuring that the machine tool can perform the required experimental operations.
• The integration of sensors and data acquisition systems: to accurately monitor and evaluate various parameters during the experimental process, the experimental device integrates efficient sensors and data acquisition systems. This includes measuring physical parameters such as force, temperature, and vibration, and involves the real-time collection, processing, and analysis of data to ensure the accuracy and reliability of experimental results.
• The development and application of the numerical control system: an advanced numerical control system has been developed to precisely control the spindle cutting motor and the workbench motor. This system is not only easy to use but also provides high-precision and rapid-response control, meeting complex experimental requirements and changing work conditions.
Through these modifications, the physical experimental platform has significantly improved in flexibility, precision, and efficiency.

Design of the Experimental Platform Spindle
The main feature of the physical experimental platform is its complex main motion transmission system. This system transmits the rotary power of the cutting motor through a series of precisely configured transmission shafts (Shafts I to IV) to the main spindle. The spindle then drives the simulated cutting drum mounted on it to perform cutting actions, simulating the operation of a real coal mining machine. The power transmission process starts from the cutting motor and ends with the simulated drum, with multiple mechanical components working together to complete the power transmission and speed change.
In the initial stage, the power of the cutting motor is transmitted to Shaft I through a flexible coupling, ensuring that Shaft I rotates at the same speed as the motor. Shaft I uses a fixed-ratio gear pair to transmit power to Shaft II. Shaft II is equipped with a triple sliding gear device, which can provide three different speeds to Shaft III as needed. Similarly, the triple sliding gears on Shaft IV mesh with the gears on Shaft III, allowing Shaft IV to achieve three speeds based on the speed of Shaft III. Thus, Shaft IV can achieve nine different speed changes. Moreover, the double sliding gears at the right end of Shaft IV mesh with the gears on the spindle, allowing the spindle to reach eighteen different speeds to meet various cutting conditions.
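To make the speed-combination arithmetic above concrete, the sketch below enumerates the 3 × 3 × 2 sliding-gear combinations and counts the resulting spindle speeds. The motor speed and all gear ratios are placeholder values chosen only for illustration; they are not the platform's actual gearing.

```python
# Illustrative count of spindle speeds from the sliding-gear combinations
# described above (3 options for Shaft II-III, 3 for Shaft III-IV, 2 for Shaft IV-spindle).
from itertools import product

motor_speed_rpm = 1450.0          # assumed cutting-motor speed
fixed_ratio_I_II = 0.8            # assumed fixed gear pair, Shaft I -> Shaft II

ratios_II_III = (0.5, 0.7, 1.0)   # triple sliding gear, Shaft II -> Shaft III (placeholders)
ratios_III_IV = (0.4, 0.6, 0.9)   # triple sliding gear, Shaft III -> Shaft IV (placeholders)
ratios_IV_sp  = (0.3, 0.8)        # double sliding gear, Shaft IV -> spindle (placeholders)

speeds = sorted(
    motor_speed_rpm * fixed_ratio_I_II * a * b * c
    for a, b, c in product(ratios_II_III, ratios_III_IV, ratios_IV_sp)
)
print(f"{len(speeds)} selectable spindle speeds")   # -> 18
print([round(s, 1) for s in speeds])
```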
The spindle is a carefully designed hollow shaft, equipped with a special centering cone hole, end plane, and external cylindrical surface at the front end, along with two end-face keys, intended to ensure effective torque transmission and precise positioning of the equipment. The through-hole of the spindle is used for installing the tensioning tool rod and provides a pathway for the sensor cables on the drum to pass through to the rear signal collection device. To accurately monitor the cutting torque of the simulated drum, a torque sensor is installed between the spindle and the drum. To minimize the impact of tangential forces on the measurement, the sensor is externally equipped with a bearing seat, fixed to the top beam, effectively bearing the tangential forces and ensuring accurate measurement. The flanges at both ends of the torque sensor are connected to the external cylindrical surface of the spindle and to the simulated drum through couplings, ensuring efficient power transmission and precise control. The model and physical structure of the platform spindle are shown in Figure 6.

Design of the Sensor and Acquisition System
This article illustrates the working principles and operational frequency bandwidths of the various types of sensors on the experimental platform by analogizing the sensors' functioning and frequency responses with the human body's visual, tactile, and auditory models, as shown in Figure 7. The experimental platform includes thermal imaging, three-dimensional force, torque, vibration, and sound sensors, which are used to detect visual, force, tactile, and auditory signals, respectively. The thermal imaging sensor operates within a frequency range of 0 to 2 Hz, mainly for visual signal detection. The three-dimensional force and torque sensors have a bandwidth of 0 to 2 kHz, used for force detection. The vibration sensor's bandwidth ranges from 0 to 10 kHz, for tactile signal detection. The sound sensor operates over a wider bandwidth, from 20 Hz to 20 kHz, for auditory signal detection. The differences in these bandwidths reflect the capabilities of the sensors to capture the respective physical signals.
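One practical consequence of these bandwidths is the choice of acquisition sampling rates. The sketch below applies the Nyquist criterion with a 2.56× oversampling margin, a common data-acquisition convention; the margin is an assumption for illustration, not a value specified for this platform.

```python
# Choosing acquisition sampling rates from the sensor bandwidths listed above.
# A 2.56x oversampling factor is assumed here (a common DAQ convention),
# not a value taken from the experimental platform's configuration.
sensor_bandwidth_hz = {
    "thermal_imaging": 2,
    "three_d_force": 2_000,
    "torque": 2_000,
    "vibration": 10_000,
    "sound": 20_000,
}

OVERSAMPLE = 2.56  # assumed margin above the signal bandwidth

for name, bw in sensor_bandwidth_hz.items():
    fs = OVERSAMPLE * bw
    print(f"{name:16s} bandwidth {bw:>6} Hz -> sample at >= {fs:,.0f} Hz")
```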
The experimental platform utilizes multimodal sensors to comprehensively monitor the operation of the experimental platform, including temperature changes during the cutting process, the three-dimensional force on the cutting teeth, drum torque, simulated drum vibration, cutting noise, drum rotation speed, worktable displacement, and cutting motor current. This monitoring network consists of a three-dimensional force sensor, a torque sensor, two grating scale sensors, an axial encoder sensor, three Hall current sensors, a vibration acceleration sensor, and a sound sensor, ensuring precise monitoring of the coal mining machine's operational status. Through this system, researchers can accurately record and analyze the operational data of coal mining machines in a simulated environment, which is crucial for understanding the working principles of coal mining machines, identifying potential issues, optimizing design, and improving efficiency.
Drum-Monitoring Sensor System
The simulated drum parameter monitoring sensor system includes a cutter tooth three-dimensional force sensor, a drum torque sensor, a vibration sensor, and a sensor acquisition system. The cutter tooth three-dimensional force sensor is specifically designed for coal and rock cutting conditions and is capable of detecting the three-dimensional force on the cutter tooth, with X, Y, and Z signal output channels [17]. Each channel has its own independent signal collection circuit, ensuring no interference between any two channels and guaranteeing the sensor's authenticity and reliability. In simulated cutting tests, it is necessary to measure the cutting torque of the multi-tooth experimental drum. The torque sensor is installed on the main drive shaft of the experimental stand to capture torque data during the cutting process, supporting the analysis of cutting performance. An IEPE vibration sensor is chosen for its strong anti-interference capability and wide frequency response range up to 15 kHz (±3 dB), with a measurement range of ±50 g. The sensors are connected to magnetic bases through threads, and the magnetic bases are adhered to the surface with polishing glue. Both the cutter tooth three-dimensional force sensor and the drum torque sensor are based on the strain gauge principle and have undergone sensitivity and linearity checks before leaving the factory. To ensure data accuracy, the collection system still requires calibration, as shown in Figure 8.
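As a concrete picture of what that calibration step involves, the minimal sketch below fits a linear gain and offset for one strain-gauge channel from known applied loads and reports the residual nonlinearity. The load and voltage values are illustrative only; they are not measurements taken on the platform.

```python
# Minimal sketch of a static calibration for a strain-gauge channel:
# fit a linear map from sensor output voltage to a known applied load.
# The voltage/load pairs below are illustrative, not measured data.
import numpy as np

applied_load_n = np.array([0, 200, 400, 600, 800, 1000], dtype=float)   # reference loads
output_voltage = np.array([0.02, 0.51, 1.00, 1.48, 1.99, 2.47])         # channel output, volts

# Least-squares fit: load = gain * voltage + offset
gain, offset = np.polyfit(output_voltage, applied_load_n, deg=1)
predicted = gain * output_voltage + offset
nonlinearity = np.max(np.abs(predicted - applied_load_n)) / applied_load_n.max()

print(f"gain = {gain:.1f} N/V, offset = {offset:.1f} N")
print(f"full-scale nonlinearity = {nonlinearity:.2%}")
```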
To match the input signals of the selected data acquisition card, all of the sensors' output signals were uniformly converted to voltage signals. Appropriate signal conditioners were chosen based on the characteristics of each signal for processing. Then, the sensors' output voltage signals were transmitted to the computer through the data acquisition board. The hardware architecture of the experimental setup's measurement system is shown in Figure 9.
Platform Monitoring Sensor System
The experimental platform sensor monitoring system is meticulously designed to capture and analyze a wide range of parameters that are essential for evaluating the performance and condition of mechanical systems. It includes several advanced components, as shown in Figures 10 and 11:
• Grating Ruler Sensor: acquires high-precision positional information related to displacement, operating on precise measurements from a grating scale. This is crucial for tasks demanding high accuracy.
• Drum Rotary Encoder: gathers data on the rotational angle of the main spindle, key for understanding the dynamics of drum rotation and providing insights into the spindle's speed and direction.
• Cutting Sound Sensor: an IEPE (integrated electronics piezoelectric) sound sensor, powered by a constant current source, captures ambient noise, including the sounds generated by the cutting operations.
Additionally, the system incorporates a data acquisition converter essential for converting analog signals into digital data for computer analysis, as shown in Figure 12. This feature enables comprehensive analysis of sensor data, supports real-time monitoring, and aids in post-operation evaluation.

Design of the Control System
Due to the design limitations of its spindle transmission system and feed transmission system, the machining center can only select from a few fixed gear ratios, which fails to meet the diverse requirements for spindle cutting speed and feed speed in cutting tests. Cutting tests are a crucial part of determining machining conditions, including choosing the optimal spindle speed and coal rock sample movement speed, to ensure high efficiency and precision in the machining process. To address this issue, a specialized numerical control (NC) system for the cutting experimental apparatus was developed. This system is designed to provide precise control over the cutting process, enabling adjustments to the spindle cutting speed and workpiece feed speed beyond the original fixed gear ratios of the machining center.
The core of this NC system consists of two main parts: control of the spindle cutting motor and control of the worktable motor, as shown in Figure 13. The spindle cutting motor control is responsible for adjusting the cutting speed of the spindle, allowing for a wide and precise range of speeds. This flexibility is crucial for conducting cutting tests under various conditions to identify the most efficient cutting parameters. Similarly, the control of the worktable motor plays a key role in managing the feed speed of the workpiece. By precisely controlling the movement of the worktable, the system ensures that the coal rock samples are fed at the optimal speed. The electrical control device for the cutting experimental platform is shown in Figure 14.
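Since the cutting and worktable motors are driven through inverters, speed commands ultimately reduce to stator-frequency set-points. The sketch below shows one way such a mapping could look for the cutting motor; the pole-pair count, slip, and overall gear ratio are assumptions made only for illustration and are not the platform's actual values.

```python
# Sketch of mapping a target drum speed to an inverter frequency command
# for the cutting motor.  Pole pairs, slip, and the overall reduction ratio
# are assumed values, not the experimental platform's specifications.
POLE_PAIRS = 2          # assumed induction-motor pole pairs
SLIP = 0.03             # assumed rated slip
GEAR_RATIO = 24.0       # assumed overall reduction, motor shaft : drum

def inverter_frequency_hz(target_drum_rpm: float) -> float:
    """Return the stator frequency needed for the requested drum speed."""
    motor_rpm = target_drum_rpm * GEAR_RATIO
    sync_rpm = motor_rpm / (1.0 - SLIP)      # synchronous speed before slip
    return sync_rpm * POLE_PAIRS / 60.0      # from n_sync = 60 f / p

for drum_rpm in (30, 45, 60):
    print(f"drum {drum_rpm:3d} r/min -> inverter ≈ {inverter_frequency_hz(drum_rpm):.1f} Hz")
```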
Application Service Layer
The evolution of the physical cutting experimental model is achieved by setting rollers with different rotation speeds and X, Y, Z sliding tables with different feed speeds to cut n groups of coal and rock samples with different hardness and distribution characteristics, as shown in Figure 15. This process allows for the collection of corresponding sensor data, which is then used to continuously revise and update the model of the cutting mechanism on the experimental platform. The planning and control algorithm training for virtual cutting tests begins with setting an initial coal and rock model, as shown in Figure 16. Then, based on the cutting planning and control algorithms, virtual cutting is conducted in a digital twin environment. By analyzing the state perception data of the digital twin, the cutting state is identified, and the cutting planning and control algorithms are adjusted accordingly. In the virtual environment, a closed-loop system of planning-control-cutting-feedback is formed, allowing for continuous optimization and updating of the planning and control algorithms.
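The closed loop just described can be pictured schematically as in the sketch below. Every function is a placeholder standing in for the corresponding digital-twin component (planner, virtual cutting step, state identification, parameter adjustment); the numbers are arbitrary, and the logic is only meant to show the planning-control-cutting-feedback cycle, not the actual algorithms.

```python
# Schematic sketch of the closed-loop training described above
# (planning -> control -> virtual cutting -> state feedback).
# All functions are placeholders for the digital-twin components.

def plan_cut(coal_rock_model, params):
    """Placeholder: produce drum speed / feed speed set-points from the model."""
    return {"drum_rpm": params["drum_rpm"], "feed_m_min": params["feed_m_min"]}

def virtual_cutting(twin_state, setpoints):
    """Placeholder: run one cutting step in the digital-twin environment."""
    twin_state["torque"] = 0.9 * twin_state["torque"] + 0.1 * setpoints["drum_rpm"]
    return twin_state

def identify_state(twin_state):
    """Placeholder: classify the cutting state from twin perception data."""
    return "rock" if twin_state["torque"] > 55 else "coal"

def adjust(params, state):
    """Placeholder: adapt planning/control parameters from the identified state."""
    if state == "rock":
        params["feed_m_min"] *= 0.9    # slow the feed when harder material is met
    return params

params = {"drum_rpm": 60.0, "feed_m_min": 0.5}
twin_state = {"torque": 40.0}
coal_rock_model = {}

for step in range(5):                  # a few closed-loop iterations
    setpoints = plan_cut(coal_rock_model, params)
    twin_state = virtual_cutting(twin_state, setpoints)
    state = identify_state(twin_state)
    params = adjust(params, state)
    print(step, state, round(twin_state["torque"], 1), round(params["feed_m_min"], 3))
```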
The synchronous cutting experiment of the twin-physical system is divided into two main areas, the digital space and the physical space, as shown in Figure 17. The digital space cutting planning algorithm refers to the initial stage, where a digital algorithm is used to plan how the cutting of the coal rock samples will be executed. After planning, the control algorithm is responsible for the actual execution of the cutting process in the digital twin system. The experimental platform digital twin represents a virtual replica of the physical experimental platform, where the cutting algorithms are tested. Based on feedback and results from the digital twin, the coal rock model is updated to reflect new insights or to improve the cutting process. The physical space contains the set of actual physical samples that will be cut in the experiment. The experimental platform is the physical counterpart to the digital twin, where the actual cutting takes place. Real-time sensors on the experimental platform provide real-time data on the cutting process, and the system recognizes the state of the cutting process using these real-time sensor data.
The synchronization (labeled "Sync") between the digital twin and the physical platform means that data and insights are shared between the two to ensure that the digital planning and control algorithms are accurate and reflective of the real-world physical cutting process. Overall, the system is designed to use a digital-physical twin approach to simulate, plan, and control the cutting of coal rock samples, aiming for optimization of the process and better prediction of outcomes.

Model Layer
The model layer mainly analyzes and stores the following models: the geometric model of the cutting experiment platform, the geometric model of the coal rock sample, and the mechanism model of the cutting experiment platform.
Geometric Model
The geometric model refers to a mathematical representation method used to describe the shape and structure of objects. Geometric models are three-dimensional (such as solid objects) and aim to mathematically capture and express the geometric features of objects, enabling computers to process, analyze, render, and simulate them. Geometric models are usually represented by data structures consisting of vertices, edges, and faces, which can construct complex geometric models, from simple geometric bodies to highly detailed 3D models.
The geometric models of the cutting experiment platform mainly include the X- and Y-axis feeding and Z-axis lifting models and the cutting unit model, as shown in Figure 18. The X- and Y-axis feeding and Z-axis lifting models describe the motion control of the cutting device in three orthogonal directions, including the adjustment of the feeding speed and lifting speed. The cutting unit model refers to the design and functionality of the single cutting drum that completes the coal rock cutting task. The coal rock sample model includes the 3D model of the coal rock sample and the coal rock interface model of the coal rock sample. The 3D model of the coal rock sample represents the three-dimensional appearance of the coal rock sample, containing the shape, size, and internal structure of the coal rock. The coal rock interface model studies the properties of the interface between coal and rock, which is crucial for understanding mechanical behavior and cutting efficiency during the cutting process.
Mechanism Model
The mechanism model is a type of model used to describe and explain the behavior of a phenomenon or system. It is based on an understanding of the system's internal mechanisms, principles, and interactions, revealing the rules and processes of the system's operation at a microscopic level. Mechanism models are usually established on the basis of a professional field, focusing on the components of the system and their interactions. They describe the dynamic characteristics and behavior of the system through mathematical equations, logical relationships, or graphics. Mechanism models emphasize an understanding of the internal mechanisms and processes of the system, offering stronger interpretability. The mechanism models of the cutting experiment platform include: the single-tooth force model, the simulated drum force model, the cutting power transmission model, and the cutting and feeding motor model.
The single-tooth force model focuses on the mechanical behavior and force conditions of a single cutting tooth during the coal rock cutting process. The simulated drum force model simulates the force generated by the drum during coal rock cutting, including the drum's dynamic parameters and force. The cutting power transmission model analyzes the energy transfer path from the power source to the cutting head, and how the power is transmitted and acts on the coal rock.
This system transmits the rotary power of the cutting motor through a series of precisely configured transmission shafts (Shafts I to IV) to the main spindle. Taking a single-stage parallel-axis system as an example, the translational-rotational model is presented in Figure 19, and its corresponding dynamical equations are formulated as shown in Equation (8). The lumped parameter model for the multistage gearbox of the cutting experimental platform is shown in Figure 20.
The variable $r_i$ denotes the base circle radius of gear $i$ (where $i = 1, 2$); $\theta_i$ represents the rotational angle of the gear; $k_{12}$, $C_{12}$, $e_{12}$, and $\alpha_{12}$, respectively, signify the time-varying mesh stiffness, mesh damping, cumulative mesh error, and mesh angle of the gear pair. $k_{xi}$, $k_{yi}$, $c_{xi}$, and $c_{yi}$ correspond to the radial support stiffness and damping in the x and y directions for gear $i$; $T_i$ refers to the torque acting on gear $i$. $E_{12}$ and $E_i$, respectively, represent the amplitudes of the meshing frequency error of the gear pair and the rotational frequency error of gear $i$; $\zeta_{12}$ and $\eta_i$ are the initial phases of the meshing frequency error and the rotational frequency error, respectively; $\delta_{12}$ is the meshing deformation on the tooth surface engagement line considering the comprehensive error.
The equations of motion for the cutting motor model, the cutting drive system dynamic model, and the drum load model are compiled and organized into matrix form. This yields the electromechanically coupled system dynamics mathematical model for the cutting section, as shown in Equation (9). In the equation, $X$ represents the generalized coordinate vector, with $X = [x_i\; y_i\; \theta_i]$, where $i = m, 1, 2, \ldots, 5, d$; $M$, $T_L$, and $T_e$ are the generalized mass matrix, the load vector, and the electromagnetic torque vector of the cutting motor, respectively. $K_m$, $K_t$, and $K_b$ represent the mesh stiffness matrix, the torsional stiffness matrix, and the bearing stiffness matrix, respectively; $C_m$ and $C_t$ represent the mesh damping matrix and the torsional damping matrix, respectively. The equations in Equation (9) describe a system of second-order differential equations for the mechanical transmission system. For ease of computation, it is first necessary to reduce the order by introducing the state vector $Z = [X^{T}\;\dot{X}^{T}]^{T}$, so that the second-order system in $X$ becomes a first-order system in $Z$. This can further be expressed in matrix form as

$\dot{Z} = \begin{bmatrix} 0 & I \\ -M^{-1}K & -M^{-1}C \end{bmatrix} Z + \begin{bmatrix} 0 \\ M^{-1} \end{bmatrix} T$

where $K$ and $C$ collect the stiffness and damping matrices defined above and $T$ collects the torque and load vectors.
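The sketch below illustrates this order reduction and its numerical integration on a deliberately simplified two-inertia torsional stand-in for the full gear-train model. All parameter values are assumptions, and SciPy's adaptive RK45 solver is used here as the counterpart of the MATLAB ode45 solver employed in the following subsection.

```python
# Order reduction and adaptive Runge-Kutta integration, illustrated on a
# simplified two-inertia torsional model (a stand-in for the full gear train).
# Inertias, stiffness, damping, and torques below are assumed values.
from scipy.integrate import solve_ivp

J1, J2 = 0.05, 0.20        # motor-side and drum-side inertias (kg*m^2), assumed
k, c = 800.0, 0.5          # shaft torsional stiffness (N*m/rad) and damping, assumed
T_e, T_L = 20.0, 15.0      # electromagnetic torque and drum load torque (N*m), assumed

def rhs(t, z):
    """First-order form z = [theta1, theta2, w1, w2] of the 2nd-order model."""
    th1, th2, w1, w2 = z
    tw = k * (th1 - th2) + c * (w1 - w2)     # shaft (coupling) torque
    return [w1, w2, (T_e - tw) / J1, (tw - T_L) / J2]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0, 0.0, 0.0], method="RK45",
                rtol=1e-6, atol=1e-9)
print("final angular velocities (rad/s):", sol.y[2, -1], sol.y[3, -1])
```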
Based on the aforementioned mathematical model, simulation models of the cutting motor and the cutting drive system are constructed separately on the MATLAB/Simulink (R2022b) platform, as shown in Figure 21. The angular displacement and angular velocity of the cutting motor are used as shared variables to transfer data between the cutting motor and the cutting drive system. These data are then used to calculate in real time the torsional load on the motor output shaft, which is directly fed back to the cutting motor. Consequently, this process establishes an electromechanically coupled simulation model for the cutting drive system. During computation, the ode45 solver provided in MATLAB is utilized (employing the fourth- and fifth-order Runge-Kutta method, which uses a fourth-order method to generate candidate solutions and a fifth-order method to control errors, constituting an adaptive step-size numerical solution technique for ordinary differential equations). This allows the system's differential equations to be solved, thereby obtaining the dynamic response of each component of the system.

Data Layer
The data layer involves the composition of data and its flow process. Its core functions include data management, data forwarding, data storage, and data collection. Data management, as the foundation of the data layer, includes the organization, cleaning, and transformation of data, and ensures data quality and consistency. Data forwarding is mainly responsible for transferring data from one part of the system to another, such as from a storage system to an application server. Data storage focuses on storing the collected data in databases for subsequent querying, analysis, and processing. Data collection is the process of acquiring data from the various sensor terminals.
The composition of the data layer includes coal rock sample model data, cutting experimental platform model data, algorithm model data, and sensor data. Coal rock sample model data refers to the attribute data used to create three-dimensional models and to predict and analyze the characteristics of coal rock samples. Cutting experimental platform model data involves the parameters of the three-dimensional model of the cutting experimental platform, which are used to build the simulation model of the cutting process. Algorithm model data is generated by processing and analyzing collected data through machine learning or other data analysis methods, producing data used for prediction or decision support. Sensor data refers to the information collected in real time from the experimental platform sensors, which is crucial for monitoring and controlling the cutting process. In summary, the data layer spans the entire process from data collection and processing to application, forming a complete data management and analysis system.
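To make the collection-to-storage path tangible, the sketch below writes a few time-stamped sensor records into a local SQLite table. The table layout, field names, and values are illustrative assumptions, not the system's actual schema or data.

```python
# Minimal sketch of the data layer's collection -> storage path:
# time-stamped sensor records written to a local SQLite table.
import sqlite3, time

conn = sqlite3.connect("cutting_experiment.db")
conn.execute("""CREATE TABLE IF NOT EXISTS sensor_data (
    ts REAL, channel TEXT, value REAL, unit TEXT)""")

def store(channel: str, value: float, unit: str) -> None:
    """Insert one sensor sample with the current timestamp."""
    conn.execute("INSERT INTO sensor_data VALUES (?, ?, ?, ?)",
                 (time.time(), channel, value, unit))
    conn.commit()

store("drum_torque", 182.4, "N*m")      # illustrative values
store("cut_force_x", 1.35, "kN")

for row in conn.execute(
        "SELECT ts, channel, value, unit FROM sensor_data ORDER BY ts DESC LIMIT 5"):
    print(row)
conn.close()
```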
Figure 22 illustrates the architecture and process of a data application developed using Node.js (v20.13.1), which, with its event-driven and non-blocking I/O capabilities, can efficiently handle a large number of concurrent operations, making it highly suitable for web applications and systems that require high-performance I/O operations. The entire architecture is based on an event-driven and non-blocking I/O model to optimize performance and concurrency handling. Node.js is used to build the WebSocket services for the different applications. It is a server-side JavaScript runtime environment based on Google's Chrome V8 engine, capable of executing JavaScript code. Node.js inherently supports the TCP/IP and HTTP protocols, while building WebSocket services with Node.js requires the additional use of the core HTTP Server library provided by Node.js. This article has chosen the WS library for construction, which can be directly downloaded and installed using NPM. Node.js is not limited to server-side operations but can also be used in the Internet of Things (IoT). Applications can interact with the physical space of the real world, such as collecting sensor data and controlling motors through I/O operations.
The core of the software is the Event Loop, supported by the libuv library. libuv is a library specially designed for asynchronous I/O operations, which works across platforms and plays a key role in Node.js. The Event Loop is responsible for coordinating all asynchronous operations in the program. The Event Queue lists the various types of operations waiting to be processed, including the Internet of Things (IoT), databases (MySQL), three-dimensional engines (Unity 3D), the application logic layer (Application), and the data model layer (Model). These operations are queued and wait for the Event Loop to process them in order. Worker threads are responsible for handling operations that may block the Event Loop. In Node.js, these operations are usually performed through built-in modules or extension modules, such as file system access, network requests, or executing some CPU-intensive processing tasks.
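For readers who want a feel for the publish path, the sketch below streams simulated sensor samples over WebSocket using Python's `websockets` package. It is only an analogue of the Node.js/WS service described above, not the paper's implementation; the port number and message fields are assumptions.

```python
# Python analogue of the WebSocket sensor-data service described above.
# (The platform's own service is built with Node.js and the WS library;
# this sketch only illustrates the same publish pattern.)
import asyncio, json, random, time
import websockets

async def stream_sensors(ws):
    """Push one simulated sensor sample to a connected client every 100 ms."""
    while True:
        sample = {
            "ts": time.time(),
            "drum_torque": 180 + random.uniform(-5, 5),   # stand-in values
            "vibration_g": random.uniform(0, 2),
        }
        await ws.send(json.dumps(sample))
        await asyncio.sleep(0.1)

async def main():
    async with websockets.serve(stream_sensors, "0.0.0.0", 8765):   # assumed port
        await asyncio.Future()   # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```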
Digital Twin System Interaction Interface
The experimental platform digital twin system is created using Unity3D (2020.3.43f1c1), where the three-dimensional models are imported and then driven based on sensor data. The steps for importing the 3D models are as follows: First, prepare the model file and use 3ds Max (20.2.0.2320) to convert the experimental platform model into the fbx format supported by Unity3D, as shown in Figure 23. Then, drag the model file into the Assets folder of the Unity3D editor to complete the import of the model file; Unity3D will automatically process the imported model. Next, drag the imported model from the Assets folder into the scene, and adjust the model's position, rotation angle, and scale to meet the requirements of the scene.
Programming is required to process the sensor data. First, determine the type and interface of the sensor used. In Unity3D, C# scripts can be written to read sensor data. In addition, the WebSocket library's API is used to read sensor data transmitted over the network. Based on the obtained sensor data, Unity3D's Transform component is used to adjust the position and rotation angle of the experimental platform model. The digital twin experimental platform's system interaction is shown in Figure 24.
The experimental platform digital twin system also integrates visible light and infrared video camera streams. By using Unity's WebCamTexture class, a WebCamTexture object can be created and the camera's name specified. Then, the WebCamTexture object is assigned to the Material's Texture property to display the video on a GameObject in the scene. In addition, control of the WebCamTexture's playback and pause is required.
The system uses the LineRenderer component to display a dynamic curve graph of real-time sensor data. The operation process is as follows: Create an empty GameObject in the Unity editor and add a LineRenderer component to it. Then, write a new C# script to control the updating and rendering of the curve data. In the script, calculate the position of each point on the curve based on the dynamically updated data, and use the LineRenderer's SetPositions() method to update these points, thereby achieving dynamic updating of the curve.
Preparation of Simulated Coal Sample
In this experiment, the mass ratios of coal dust, cement, sand, and water were used as control variables to study the variation in compressive strength. Initially, the simulated coal samples were prepared for each experimental scheme, as shown in Figure 25. The preparation process included the following steps: first, the coal dust, sand, and cement were sifted through a 20-mesh screen. Then, according to the established experimental scheme, the coal dust, cement, sand, and water were measured in sequence, thoroughly mixed, and stirred evenly. The mixed materials were then filled into molds and compacted to form. The standard experimental coal column calibration samples used 100 mm cubic columns. Two days later, after the samples had basically solidified, demolding and labeling were performed. After demolding, calipers were used for measurement. Because the initial compression area has a significant impact on the actual compressive strength results during the compression process, it is necessary to ensure adequate parallelism of both ends. The coal column was placed horizontally on the platform, and a dial indicator was used to measure the height, ensuring the coal column's surface was smooth to avoid stress concentration.
The simulated coal samples needed to be cured at room temperature for 14 days. After curing, calipers and an electronic balance were used for measurement and weighing, and the density of the materials was calculated and recorded. The experiment used a 20 kN microcomputer-controlled electronic universal concrete compressive strength testing machine to test the uniaxial compressive strength of the simulated coal samples, as shown in Figure 26. The experiment used a displacement-controlled load application mode with the loading rate set at 1.5 mm/min. As the test machine gradually increased the applied load, the simulated coal samples were compressed until failure, and the compressive strength was recorded, as shown in Figure 27.
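The post-curing bookkeeping for one sample amounts to a density calculation from the caliper dimensions and mass, and a strength calculation from the peak load and cross-sectional area; the sketch below shows that arithmetic. The measured values are illustrative placeholders, not results from the experiment.

```python
# Density and uniaxial compressive strength for one 100 mm cubic sample.
# All measured values below are illustrative, not experimental results.
length_mm, width_mm, height_mm = 100.2, 99.8, 100.1   # caliper measurements
mass_g = 1835.0                                       # electronic balance reading
peak_load_kn = 9.4                                    # failure load from the press

volume_cm3 = (length_mm / 10) * (width_mm / 10) * (height_mm / 10)
density_g_cm3 = mass_g / volume_cm3

area_mm2 = length_mm * width_mm
strength_mpa = peak_load_kn * 1e3 / area_mm2          # N / mm^2 = MPa

print(f"density  = {density_g_cm3:.2f} g/cm^3")
print(f"strength = {strength_mpa:.2f} MPa")
```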
Finally, the experimental data were averaged to obtain the average compressive strength, and according to the selected mass ratios of coal dust, cement, sand, and water, the sample material was poured into the coal rock holder designed for the experimental platform, as shown in Figure 28.

The experimental cutting test conditions of the test platform were set as follows: the simulated drum speed was set to 60 r/min, the translation speed of the cutting sample was 0.5 m/min, the drum outer edge diameter was 385 mm, the pick installation angle was 40°, and the pick inclination angle was 0°. This experiment aims to conduct cutting tests on three samples with different hardnesses, which are:
• Experimental mode one: cutting ratio-simulated coal seam material, with a compressive strength of 2.71 MPa and a density of 1388.46 kg/m³;
• Experimental mode two: cutting ratio-simulated coal seam material, with a compressive strength of 3.46 MPa and a density of 1506.56 kg/m³;
• Experimental mode three: cutting ratio-simulated coal seam material, with a compressive strength of 4.13 MPa and a density of 1658.45 kg/m³.
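As a rough check on the cutting settings listed before the three modes (a worked example added here, not a calculation reported by the authors), the advance of the sample per drum revolution follows directly from the traverse speed and the drum speed; the maximum chip thickness taken by an individual pick additionally depends on how many picks share the same cutting line, which is not restated here, so only the per-revolution feed is evaluated.

```latex
% h_rev: sample feed per drum revolution (assumed definition);
% v_q: traverse speed of the cutting sample; n: drum rotational speed.
\[
  h_{\mathrm{rev}} = \frac{v_q}{n}
  = \frac{0.5\ \text{m/min}}{60\ \text{r/min}}
  \approx 8.3\ \text{mm per revolution}
\]
% With m picks in the same cutting line, the maximum cutting thickness per pick
% would be on the order of h_rev / m.
```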
Analysis of Infrared Thermal Imaging

Infrared thermal imaging typically uses false colors to represent different temperature intervals, creating an intuitive visual representation that shows the relative temperature distribution in different areas of the image. However, this method cannot directly provide accurate temperature measurements and requires further analysis through algorithmic processing. In contrast, grayscale images intuitively display temperature information using black and white, with brightness variations from white to black indicating the spectrum from high to low temperatures. As shown in Figure 29, converting the color image of infrared thermal imaging into a grayscale image and then calculating the temperature is an effective method. This process includes two steps: first converting the color image into a grayscale image, and then deriving the temperature values based on the grayscale levels.

The grayscale value of a pixel is linearly related to a certain range of temperatures. Therefore, the first step involves converting the color image into a grayscale image. This conversion is achieved by calculating the weighted values of the three channels of the color image, as demonstrated by the following equation:

Y = w_r M_r + w_g M_g + w_b M_b   (commonly w_r = 0.299, w_g = 0.587, w_b = 0.114)

In the equation, Y represents the converted grayscale value and M_i represents the matrix of the extracted color channel i, where r, g, and b denote the red, green, and blue color channels, respectively.

During direct contact between the pick and coal rock samples, heat is generated due to the impact, compression, and friction between them, leading to a rise in temperature of the pick and its cutting area. When the translation speed of the cutting sample and the drum rotation speed are constant, the properties of the coal rock become the key factors affecting the temperature changes in the pick and the coal rock wall. This means the temperature variations in the pick and the coal rock wall after cutting will also differ. At the beginning of the cutting phase, the temperature of the contact surface between the pick and the coal rock sample gradually increases. As the cutting progresses, the thickness of the cut taken by the pick increases, leading to a rapid rise in temperature of the cutting surface. At this point, the rate of heat exchange between the cutting surface and the air also accelerates. Since the pick is embedded in the coal rock sample during the cutting phase, the infrared thermal imaging system cannot capture the real-time temperature of the pick. However, when the pick rotates out of the coal rock sample with the drum, the infrared thermal imaging system can capture the highest temperature region on the pick.

The purpose of this experiment is to measure the temperature changes in the cutting teeth when cutting samples of different hardness. In order to clearly capture the temperature information of the cutting teeth and reduce the interference of coal rock debris on temperature measurement, we reduced the advance speed of the coal rock samples. At the beginning of the experiment, the temperature changes rapidly; as the experiment progresses, the temperature changes gradually stabilize. At this point, the thermal imaging results are more representative. Figure 30 shows the grayscale change curve of the temperature after stabilization when cutting coal rock samples of different hardness. As the hardness increases, both the mean and the fluctuation of the grayscale also increase. The figure averages the top 25% and bottom 25% of grayscale values, with fluctuations increasing by 17% and 14.6%, respectively.
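As an illustration of the two-step procedure just described (added here as a sketch, not the authors' code), the following Unity C# fragment converts an RGB thermal frame to grayscale with the weighted-channel formula and maps each grayscale value linearly onto an assumed calibration range; the weights are the common luminance weights and the calibration bounds are placeholders, both assumptions rather than values taken from the paper.

```csharp
using UnityEngine;

public static class ThermalImageUtil
{
    // Weighted RGB-to-grayscale conversion followed by a linear grayscale-to-
    // temperature mapping. Weights and calibration range are assumptions.
    public static float[] ToTemperature(Texture2D frame,
                                        float tempMin = 20f, float tempMax = 120f)
    {
        Color32[] pixels = frame.GetPixels32();
        float[] temperature = new float[pixels.Length];

        for (int i = 0; i < pixels.Length; i++)
        {
            // Y = w_r*R + w_g*G + w_b*B, with Y in 0..255
            float y = 0.299f * pixels[i].r + 0.587f * pixels[i].g + 0.114f * pixels[i].b;

            // Assumed linear relation between grayscale value and temperature range.
            temperature[i] = tempMin + (y / 255f) * (tempMax - tempMin);
        }
        return temperature;
    }
}
```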
Analysis of the Force Sensor Signals

In the initial stage, as the cutting thickness of the coal sample taken by the cutter bit is relatively thin, the amplitude of the cutting force on the load-time domain curve is relatively small. With the continuous rotation of the drum, the cutter bit penetrates deeper into the coal sample, leading to a gradual increase in the instantaneous cutting thickness. This process is manifested as an increase in the amplitude of the cutting force on the load-time domain curve.

According to the pattern of cutting force fluctuation, a complete cutting cycle can be divided into four stages: the initial elastic deformation stage, the plastic deformation stage, the main crack formation stage, and the crack propagation stage. After the end of the crack propagation stage, the collapse of the coal block causes a sharp decrease in the cutting force. Once the cutting thickness reaches its maximum value, it will gradually decrease. During this process, the cutting load of a single cutter bit shows an overall trend of gradually decreasing from the maximum peak value until it finally exits the cut and enters the no-load stage. The change in load during this stage is primarily due to the gradual decrease in cutting thickness caused by the rotation of the drum, which in turn reduces the corresponding cutting load.

The calibration results depict sensitivities of 0.748 mV/V, 2.367 mV/V, and 2.83 mV/V for the three-dimensional force [17], respectively. Furthermore, the cross-sensitivity error was lower than 5.02%. The cutting load Fi can be solved by means of the coupling matrix K. Figure 31 averages the top 20% of force values, with fluctuations increasing by 40% and 27.6%, respectively.
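The decoupling step referred to above can be written out explicitly. The relation below is a hedged reconstruction based only on the stated three-axis sensitivities and cross-sensitivity error, not an equation quoted from the calibration report: the bridge outputs are modelled as a linear combination of the three force components, so the loads follow by inverting the coupling matrix.

```latex
% U: vector of the three bridge outputs (mV/V); F: cutting load components;
% K: 3x3 coupling matrix. Treating the calibrated sensitivities as its diagonal
% terms and the (<5.02%) cross-sensitivity as the off-diagonal terms is an assumption.
\[
  \mathbf{U} = \mathbf{K}\,\mathbf{F}
  \quad\Longrightarrow\quad
  \mathbf{F} = \mathbf{K}^{-1}\,\mathbf{U},
  \qquad
  \mathbf{K} \approx
  \begin{pmatrix}
    0.748 & k_{12} & k_{13}\\
    k_{21} & 2.367 & k_{23}\\
    k_{31} & k_{32} & 2.83
  \end{pmatrix}\ \text{mV/V per unit load}
\]
```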
Analysis of the Torque Sensor Signals

The torque fluctuations in drum cutting are relatively strong, and the load changes are irregular, with load peaks varying significantly. This is closely related to the number of cutting teeth involved in the cutting process and the properties of the material being cut. Different cutting teeth have different cutting entry angles and cutting thicknesses at the same moment, so it is not feasible to study the overall drum cutting load using a simple direct proportionality based on a single cutting tooth. During drum cutting, it is not just an ideal scenario of cutting teeth engaging with the material; other components on the drum also come into contact with the specimen. The cumulative load from these components significantly affects the drum load. Figure 32 shows the torque sensor values from experiments with the three different cutting modes. It is very difficult to distinguish between the three cutting modes based on these time domain indicators.

Conclusions

This paper elaborates on the design and experimental validation of a digital twin cutting experiment system for a shearer, focusing on simulating the cutting process for coal and rock identification. This encompasses the development of a simulated shearer drum based on the principle of similarity theory, the establishment of a comprehensive experimental platform, and the application of digital twin technology to bridge the gap between physical experiments and digital simulations. The key components of the study include:
• The design of a simulated shearer drum: employing similarity theory to ensure the simulated drum accurately mirrors the cutting actions of a real coal mining machine, thus enhancing the reliability of the simulation experiments.
• The experimental platform device structure: modifying existing machinery to meet experimental requirements, including the integration of sensors and a data acquisition system for real-time monitoring and analysis.
• The software system design in digital space: developing a digital twin that comprises data layers for management and analysis, models for simulation, and application layers for interactive experimentation and algorithm training.
• The simulated cutting experiments: performing tests with prepared coal samples to collect data on various physical forces, torque, thermal imaging, vibration, and sound, aimed at analyzing the cutting process and improving efficiency and safety in coal mining operations.

By conducting time domain analysis of the sensor signals collected during the cutting of materials of different strengths, it was found that the characteristics of the cutting force signal were the most distinct. Extracting the cutting force sensor signal as a characteristic value can effectively distinguish various cutting modes, providing a reliable experimental solution for coal rock identification research.
The digital space comprises the data layer, the model layer, and the application layer, including the visualization of the experimental platform's digital twin. The data layer is responsible for data forwarding, storage, management, and collection, covering coal rock sample data, experimental platform operational data, algorithm model data, and sensor data. The mechanism model layer includes the mechanism model, the geometric model of the cutting experimental platform, and the coal rock sample model. The application layer involves model evolution physical cutting experiments and algorithm training virtual cutting experiments.

To match the input signals of the selected data acquisition card, all of the sensors' output signals were uniformly converted to voltage signals. Appropriate signal conditioners were chosen based on the characteristics of each signal for processing. Then, the sensors' output voltage signals were transmitted to the computer through the data acquisition board. The hardware architecture of the experimental setup's measurement is shown in Figure 9.

Cutting Sound Sensor: captures noise, including sounds from cutting operations; powered by a constant current source, it converts acoustic signals to voltage signals for noise analysis, useful in monitoring tool wear or detecting operational anomalies. Motor Current Transformer: monitors electric currents across three channels (A, B, and C) of the motor, aiding in the detection of phase imbalances. Motor Voltage Transformer: tracks the voltage signals from the inverter to the motor.

Additionally, the system incorporates a data acquisition converter essential for converting analog signals into digital data for computer analysis, as shown in Figure 12. This feature enables comprehensive analysis of sensor data, supports real-time monitoring, and aids in post-operation evaluation.

Figure 1. The physical space and the digital space.
Figure 2. Digital Twin Cutting Experiment System Block Diagram.
Figure 3. Digital Twin Cutting Experiment System Design Process Diagram.
Figure 4. The simulation cutting drum build process based on similarity theory.
Figure 5. (a) The model of the cutting drum; (b) the physical cutting drum.
Figure 6. (a) The model of the platform spindle; (b) the physical platform spindle.
Figure 7. The sensors' functioning and frequency responses.
Figure 8. (a) Arrangement of sensors; (b) force sensor calibration; (c) torque sensor calibration.
Figure 9. The Drum Signal Acquisition System Block Diagram.
Figure 11. (a) Cutting Sound Sensor; (b) Drum Rotary Encoder.
Figure 12. The Platform Signal Acquisition System Block Diagram.
Figure 13. The Platform Control System Block Diagram.
Figure 14. Electrical Control Device for the Cutting Experimental Platform.
Figure 15. Model evolution by physical cutting experiment.
Figure 16. Algorithm training by virtual cutting experiments.
Figure 17. Synchronous cutting experiment of twin-physical system.
Figure 18. The geometric models of the cutting experiment platform.
Figure 19. Translational-rotational model of a fixed-shaft gear set.
Figure 20. Lumped parameter model for the multistage gearbox.
Figure 21. The mechanism model of the cutting experiment platform.
Figure 22. The architecture and process of a data application developed using Node.js.
Figure 24. Digital Twin Experimental Platform System Interaction Interface.
Figure 28. Injection of Mixed Materials into Coal Rock Sample Holder.
Figure 30. Grey value analysis of experimental images: (a) Experimental mode one; (b) Experimental mode two; (c) Experimental mode three.
Figure 31. Experimental pick force analysis: (a) Experimental mode one; (b) Experimental mode two; (c) Experimental mode three.
Figure 32. Experimental torque value analysis: (a) Experimental mode one; (b) Experimental mode two; (c) Experimental mode three.
Table 1. Parameters of the simulated shearer cutting system.
Table 2. Dimensional matrix of spiral drum design parameters.
Table 3. Similarity coefficients of the simulated shearer cutting system.
2024-05-19T15:20:42.585Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "4cdb94c2d26438b4e5e4be946182337324159baf", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "f326765ecb8caf1483258b63e6274a167760abc7", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
190533794
pes2o/s2orc
v3-fos-license
Standardized patient methodology in mainland China: a nationwide survey

Background To describe the current status of standardized patient (SP) practice in mainland China.

Methods We conducted a nationwide survey in 2016. One hundred and eighty-three SP educators (SPEs) responded to the questionnaire, representing 80 medical centers from 25 provinces in mainland China. All of these centers were affiliated with China Standardized Patients Practice Teaching Guidance. In the survey, we assessed the methods of SP recruitment, hourly wages, how SPs were used, and challenges of the SP role. We also compared these data among the 4 different regions in China.

Results In mainland China, the most frequent range of SPs' age was between 30 and 40 years (24.8%). The SPs were usually recruited by recommendations from the SPEs or a current SP (43.8%), as well as advertising in the hospitals (43.8%). The mean hourly wage was US$12.60 for teaching activities and US$18.82 for medical examinations. The median frequency for training SPs was 12.9 times per year. The SPs were used in areas such as internal medicine (89.6%), surgery (79.2%) and pediatrics (56.3%). The most challenging parts for the SPs were to remember all of the key points of the cases (51.9%) and to portray the emotions of the case (51.9%). Almost half of the SPs reported that, when interacting with medical students, they had difficulty in providing feedback consistent with students' learning objectives. SPs' gender, age, rewards and scenario playing differed significantly among the 4 geographic regions in China (P < 0.05).

Conclusions This survey provided reliable data on the current situation of SP application in China. SP activities have made encouraging progress, but regional development remains imbalanced.

Background A standardized patient (SP), also known as a simulated patient, sample patient, or patient instructor, is an individual trained to act as a real patient in order to simulate a set of clinical symptoms or problems. Standardized patients have been widely used to support teaching and evaluation of medical students in developed countries since Barrows' original description in 1968, and studies have reported the validity and reliability of the use of SPs [1][2][3]. Although SPs are employed extensively in developed countries, little is known about how individual schools use and evaluate SPs in developing countries [4]. An initial survey conducted in 2009 described the functions and program structures of SPs in the US and Canada [5]. In 2010, a Japanese survey demonstrated that SP satisfaction is high but challenges in case mastery and feedback tasks are evident [6]. A European study proposed a clear need for collaboration among different centers [7]. SPs were introduced in China for medical education in 1991 by Paula Stillman [8] and were officially put into use in medical teaching and assessment in 1994. Although a growing number of medical centers have launched SP programs since then, there is still a lack of descriptive data on the ways in which SPs are used in China. In 2016, we conducted a national survey to understand how SP programs are operated in individual schools across China. We collected information on the demographic features of the SPs, the methods of SP recruitment, the payment for SPs, and challenges of SP roles. We also described the differences in SP programs among the different geographic regions of China.

Survey design and data collection We conducted the survey in mainland China in September 2016.
Using the membership list of China Standardized Patients Practice Teaching Guidance (CSPC), we included 80 of the 86 medical centers, consisting of medical colleges, academic hospitals and medical examination centers. The medical centers are located in 25 of the 34 administrative divisions of China (including provinces, municipalities, and autonomous regions). Questionnaires were sent to 243 standardized patient educators (SPEs), and 183 (75.3%) of them agreed to participate and completed the questionnaires. For the analysis, we combined the "North" and "Northeast" regions into "North", and the "Northwest" and "Southwest" regions into "West", as the respective regions have similar medical educational institutes. The final data come from 80 medical centers across all 4 geographic regions in China (North, East, South-central, and West). The study has been approved by the Ethics Committee of Peking Union Medical College Hospital (S-K705).

Questionnaires The survey questionnaire was drafted based on published SP survey questionnaires [9], and was modified according to the questionnaires for SPs and SP educators developed in PUMCH. The questionnaire was then reviewed by experts, including three senior medical teachers, one epidemiologist and one statistician. A pilot survey of 10 SPEs was performed in PUMCH and another academic medical center. All scales showed adequate reliability with Cronbach's Alpha (> 0.74).

Statistical analysis Continuous data were presented as mean (SD) or median (IQR) as appropriate, and categorical variables were presented as n (%). Groups were compared using one-way ANOVA or the Kruskal-Wallis test for continuous variables and the χ2 test for categorical variables. All p values were two-sided, and a p value of less than 0.05 was deemed significant. Analyses were performed with IBM SPSS Statistics (version 21.0, SPSS Inc., Chicago, IL, USA).

Demographic features Among the 80 medical centers affiliated with CSPC, 48 centers have provided SP-based medical education and examinations, indicating that more than half of the medical centers (60%) have employed the SP program. The median number of SPs in each center was 18 (14) persons, and the median time since SP program launch in the medical centers was 5 (6) years. The ratio of female to male SPs was 2.1:1. The most frequent age range of the recruited SPs was between 30 and 40 years (24.8%), followed by 20-30 (18.4%), 50-60 (18.0%), 40-50 (17.8%), 16-20 (16.1%) and 60-80 years (4.8%) (Table 1).

SPs' recruitment and rewards In mainland China, SPs were recruited in various ways. The most common way of recruiting SP members was referrals by the SPEs or a current SP (21, 43.8%). Other ways of recruitment included advertisement posting in the hospitals (21, 43.8%), recruitment among hospital patients (17, 35.4%) and through public media channels (16, 33.8%). In 42 (87.5%) of the 48 medical centers, more than 3 methods were used to recruit SPs. The rewards of SPs varied by the different medical activities performed. The mean hourly wage was RMB 85.7 (approximately US$12.60) for teaching activities, while a higher hourly wage of RMB 128.0 (approximately US$18.82) was paid for medical examinations (Table 2).

SPs' training and application In the responding medical centers, the median frequency of training for SPs was 12.9 times per year.
The most frequent way of SP training was by giving lectures, with a median frequency of 8.1 times per year, followed by clinical practice (3.1 times per year) and video training (1.8 times per year). The well-trained SPs were certified to participate in medical education. Most of them were used to train medical students in medical history taking and physical examination during the pre-clinical stage. The SPs were used in areas including internal medicine (89.6%), surgery (79.2%) and pediatrics (56.3%). The SPs were employed in the initial visit scenario in over 90% of the centers. Meanwhile, other scenarios, including return visit (33, 68.8%), patient education (21, 41.7%), telling bad news (14, 29.2%) and conversation with relatives (20, 41.7%), were also designed for advanced medical education (Table 3). Although 40 (83.3%) centers agreed that it is very important or important to apply SPs for physical examination, only a few centers (9, 18.8%) did so. Further, only 9 (18.8%) centers considered it feasible to apply female SPs for breast examination. The use of female versus male SPs for rectal examination was also low, at a rate of 4.2% (n = 2) for female versus 16.7% (n = 9) for male SPs. In this survey, 28 (58.3%) medical centers reported their current use of SPs in OSCEs, and the other 20 centers were preparing to apply SPs in OSCEs.

Training challenges in SPs' performance and feedback SPEs identified several challenges in SPs' performance, including having the SPs remember all the key points of the case (95, 51.9%) and portray the emotions of the patient (95, 51.9%), followed by role shaping (76, 41.5%) and the use of appropriate vocabulary (23, 12.6%). Training SPs to give appropriate feedback to medical students is an advanced stage of SP training and is a challenge for SPEs. Indeed, SPEs reported that almost half of the SPs had difficulty in providing feedback consistent with the student's learning goals. Other challenges expressed by SPEs included consistently maintaining the emotion of the case throughout the role playing (76, 41.5%), avoiding general comments (69, 37.7%), expressing well-balanced positive and negative points (48, 26.2%), and emotional control (29, 15.8%) (Table 4).

Comparisons among geographic regions Data were compared among the different geographical regions in China (shown in Tables 1-4). We found that SPs' gender, age, rewards and SPs' application scenarios differed significantly among the 4 geographic regions (P < 0.05). More female SPs were recruited in the North/East regions than in the South-central/West regions. The highest rewards for SPs (teaching activities and medical examinations) were reported in the North region. As for scenario topics for SP cases, the North/East regions designed more cases regarding conversations with relatives. There were no significant differences in the rest of the items collected in this study (P > 0.05).

Different Chinese medical centers had similar ways of recruiting SPs, with recommendation by the practitioners being the most common. Recruitment using media channels was effective but not widely used in China. The use of SPs varied largely across centers, covering areas such as internal medicine, surgery, pediatrics, psychiatry, gynecology and obstetrics, neurology and emergency medicine.
However, some traditional scenarios (such as first and return visits) and patient interactions such as patient education, telling bad news and conversation with relatives were also included for SP performance in most centers. It is noted that many more centers in the northern and eastern areas utilized SPs in scenario practice for interactions with relatives. This may be due to the more developed SP programs in these regions that covered a wider range of topics.

One of the unique findings in this survey was the payment of SPs. Compared to developed countries, the employment of SP practice in mainland China is still in early development. SPs in Canada and the USA were paid at $16/h [5], while in China payment varied from $6 to $35/h, suggesting a lack of a mature and consistent reward system. Interestingly, SPs tended to be paid more in the medical centers that launched their SP programs earlier, suggesting that medical centers with mature SP programs are willing to pay more to retain SPs. With regard to regional differences, northern centers tended to pay SPs more than the other regions.

Chinese SPEs expressed that having the SPs remember the key points and portray emotion that matched the case was challenging. Training SPs to give appropriate feedback to medical students was also a challenge, as 51.1% of SPEs expressed that having the SP provide feedback consistent with the medical student's learning objective was the most difficult part of the job. Several scales such as the Maastricht Assessment of Simulated Patients (MASP) [10] have suggested helpful ways to assess SP performance. In our survey, only a few centers in China reported the application of such evaluation tools, reflecting the need to improve the assessment system of SP training in China.

The objective structured clinical examination (OSCE) was proposed in 1975 and is now a well-recognized approach to evaluate clinical skill performance and competence in skills such as clinical examination, communication, etc. The OSCE was introduced to China in the 1990s, and there were 40 medical centers using OSCEs in medical examinations by 2013. It should be noted that most of the Chinese medical centers tended to use SP cases for medical history taking. The application of SPs in physical examination was not widely accepted. Only a few centers are using SPs in this area, although some of the centers expressed that they were planning to use SPs in physical examination. This situation is different from Europe, in which the application rate of SPs in physical examination was as high as 73.8% [7]. Traditional Chinese culture might be the main reason for the lower use of SPs in physical examination, as physical examination is considered highly private, especially for females.

Our study revealed potential for cooperation among Chinese medical centers, as only 14 (29.2%) had experience of sharing SP education resources. A majority of the centers, 29 (60.4%), designed cases independently. The independent operation of SP programs could potentially increase the overall costs. In 2016, CSPC was established in mainland China. In the future, it is believed that more collaboration and exchange activities will be conducted among Chinese medical centers, as well as with institutions outside China. Although this study represents the first nationwide survey conducted in China, it has its limitations, such as selection bias.
As this study was based on questionnaires received from the SPEs, the results may not reflect the complete situation of SPs. Additionally, some medical centers in undeveloped districts did not participate in this survey, which could lead to overestimation of the development status of SP programs in China. Future studies are needed to address these issues in order to provide a more complete data set of SP practice in China.

Conclusions SP activities in China have made encouraging progress, although there are still some aspects that remain to be improved. More educational resources should be provided to support the development of SP programs in China.

Abbreviations MASP: Maastricht Assessment of Simulated Patients; SP: Standardized patient
2019-06-19T13:13:34.703Z
2019-06-17T00:00:00.000
{ "year": 2019, "sha1": "6c1186404c5e33013e4301534c7b3399a369c7e1", "oa_license": "CCBY", "oa_url": "https://bmcmededuc.biomedcentral.com/track/pdf/10.1186/s12909-019-1630-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "07a17d31e359e5cbdd325374cfe82c426aa00f54", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine", "Geography" ] }
258831217
pes2o/s2orc
v3-fos-license
Burnout Among Psychiatry Residents and One Program's Approach to Creating a Culture of Wellness

Psychiatry residency training includes unique characteristics that can predispose trainees to burnout, including vicarious traumatization, prevalence of patient suicide and violence in the workplace, and social stigma surrounding mental health. For the purposes of this article, the authors examine these contributing factors and address how psychiatry residency training programs, specifically the Kaiser Permanente Oakland program, are responding to these unique challenges with wellness initiatives. Initiatives to promote wellness at Kaiser Permanente Oakland include a resident- and faculty-led wellness committee, work-hour limits, reasonable call schedules, a robust mentorship program, funded social and networking event programs, and comprehensive mental health services.

Introduction Burnout rates among medical residents vary depending on specialty. Within psychiatry residencies, there are multiple factors more prevalent in the field that increase the risk of residents experiencing burnout. Individuals seeking psychiatric services have a greater likelihood of significant predisposing psychosocial stressors and increased risk of substance-use disorders, suicide, chronic pain, and strained interpersonal relationships. 1 Elucidating the details of adverse events from patients can result in vicarious traumatization and compassion fatigue, in addition to moral injury, for practitioners delivering empathetic care, which may create negative changes in a practitioner's view of self, others, and the world. 1 Vicarious traumatization, in turn, can diminish resiliency, increasing the risk of practitioner burnout. Emotional resilience can further be impaired by high-stress situations that psychiatry residents are more likely to encounter. For residents in psychiatry, the prevalence of experiencing the death of a patient by suicide is estimated to be between 31% and 69%, staggeringly high numbers that result in significant psychosocial stress for practitioners. 2 Surveys conducted on psychiatrists to study emotional responses following these tragic events highlight themes of helplessness, feelings of horror, anxiety, and lasting negative impacts. 2 Additionally, 25% to 64% of psychiatry residents report experiencing physical assault by patients at some point during their 4 years of training. 3 Crisis prevention and de-escalation training is a required component of many residency programs; however, the perception of training quality, duration, and utility can vary, with hours of training varying between 0 and 27 hours; training modules might include didactics only, active instruction on restraint procedures, or evaluation of safety procedures such as room searches and metal detectors, for example. 4 Programs where residents feel underprepared can worsen exhaustion, raising the risk of chronic workplace stress. 4 Social stigma toward mental health disorders and psychiatry as a profession can create an additional burden for practitioners. In 2020, approximately 20% of individuals in the United States reported having 1 or more mental health disorders, but fewer than half reported receiving any mental health treatment. 5 This mismatch results from multiple factors, including cultural stigma related to mental illness, limited available mental health services, and prohibitively high out-of-pocket costs.
5 The stigma exists within the culture of medicine as well, as psychiatry is occasionally dismissed as an "art" or a "soft science," diminishing the importance of evidence-based treatments and advances in the field. Although these notions are evolving as the stigma toward mental health issues has lessened, the stigma remains a significant barrier that can negatively impact morale and increase burnout in resident training. Furthermore, the persistence of this narrative highlights the importance of the hidden curriculum (implicit norms and standards conveyed through words and actions instead of formal instruction) in psychiatry residency programs to demonstrate to both trainees and their patients the primacy of mental health awareness and treatment.

Burnout Interventions in Psychiatry Residency Programs Regarding specific interventions to mitigate burnout in residency, multiple studies have addressed ways residents can change their own behavior and relationship with stress. 6 Hence the locus of control has typically been placed on the resident physician. 7 Within psychiatry residencies specifically, several studies demonstrate time-limited (eg, 8 wk or 12 wk) programming around mindfulness-based interventions or wellness curricula that report a positive reception, at least for the short term. 8,9 The long-term impact of these interventions on wellness has not typically been measured, and likely relies on significant ongoing individual commitment. Other strategies, including focus groups for residents on improving connectivity, have demonstrated some longitudinal positive effect on burnout. 10 Overall, however, there has been less investigation of modifiable factors in the workplace environment that could be improved to enhance the resident experience. A few reports have pointed to the need for organization- and systems-level strategies to improve clinician well-being more effectively. 11,12 The minority of the literature that does assess programmatic interventions has largely been limited to the study of work-hour restrictions, as other interventions have been difficult to evaluate. 6 That said, work hours should not be overlooked in the evaluation of a residency program's wellness climate. The association between excessive work hours and adverse occupational health consequences is demonstrated by a recent World Health Organization and International Labor Organization report on excess ischemic heart disease mortality risk in workers who work more than 55 hours per week. 13 Additionally, studies of resident physicians demonstrate there is a dose-response relationship between work hours and depression. 14 A recent paper by the Yale Department of Psychiatry emphasizes the importance of shifting focus from individual trainees to the training program and provides a means (via a validated scale, "Residency Community Well-Being" [RCWB]) to measure residency climate factors associated with burnout. 15 The authors found that residents working more than 60 hours per week had significantly lower scores on the RCWB, including all meaningful subscales of wellness (program leadership, structures, and practices; resident interpersonal relationships; and resident mistreatment). This association is consistent with the hypothesis that effective physician well-being efforts must both balance time demands and improve culture-of-wellness factors. 16 The RCWB was developed based on a deductive approach after a thorough analysis of the literature focusing on community satisfaction.
The survey was administered to residents representing 18 specialties. RCWB domain scores were found to correlate with scores on other assessment tools (eg, the Copenhagen Burnout Inventory, the Maslach Burnout Inventory) measuring factors associated with the RCWB domains, including burnout. In addition to manageable work-hour demands, significant contributors to lowering burnout included whether residents reported experiencing a sense of belonging, pride, freedom from bullying and microaggressions, and high-quality leadership. These findings provide a good start toward validating the RCWB measure, an assessment tool that captures important aspects of community well-being factors affecting resident well-being. Researchers working with the RCWB and those developing other tools to assess community well-being can contribute much to the field with further validation work. Predictive validity can be established by assessing the correlation between community well-being assessment scores and scores on burnout assessment tools.

Wellness Initiatives at Kaiser Permanente Oakland's Psychiatry Residency Program As a newer residency program, the Kaiser Permanente Oakland Psychiatry Adult Residency Training Program has drawn on lessons learned from the literature on resident wellness and the observed efforts of other Kaiser Permanente training programs. To start, work-hour limits of 80 hours averaged per week are strictly enforced. While seemingly an unambiguous requirement, unfortunately this metric is not always guaranteed in programs that rely on residents to self-report their hours for program compliance surveys, resulting in findings with questionable validity. 16 Therefore, to increase accurate duty-hour reporting among residents, the Kaiser Permanente Oakland program leaders clearly frame reporting as a means to gauge resident workload and, when duty hours are exceeded, a nonpunitive approach is taken consisting of problem-solving discussions between the resident and supervising physician. Additionally, overnight calls are not required, and day calls are limited to weekends only (on average one weekend per month), almost entirely in the first and second years. By the third year, Kaiser Permanente Oakland Psychiatry residents in good standing are allowed to moonlight at a variety of local inpatient hospitals for an hourly stipend to supplement their residency salary. Although moonlighting is permitted in many psychiatry residencies, the regularly scheduled work hours demanded by a resident's home hospital do not always allow ample time off (especially while balancing personal wellness activities) to take advantage of supplemental work. In a survey of 238 psychiatry residents across the US, the majority who moonlight agreed that moonlighting enhances their clinical education 17 ; this, along with the extra income, may increase resident job satisfaction and reduce financial distress. At Kaiser Permanente Oakland, the establishment of moonlighting relationships with outside hospitals was an initiative led by residents with the Program's support. Having this additional level of autonomy (both in terms of clinical practice and in having control over one's schedule) available midway through training has been highly celebrated by the senior-level residents.
Certainly, the benefits of moonlighting opportunities (eg, extra income, additional training in novel environments) must be weighed against the risk of increasing burnout (eg, excessive workload leading to emotional exhaustion). The voluntary nature of signing up for moonlighting helps mitigate burnout from an institutionally directed standpoint; however, residents must also be cognizant of and accountable to their personal wellness barometers.

As a systems-level intervention, Kaiser Permanente Oakland Psychiatry Residency provides residents with access to the Maslach Burnout Inventory (MBI) and prompts residents at regular intervals to complete the assessment. The Program Director has repeatedly announced an open-door policy for any resident who feels they are experiencing burnout. Positive scores on the MBI can help to elucidate the domains where burnout is occurring and guide the discussions for individual mitigation action plans. Furthermore, PTO (paid time off) is protected at 4 weeks/year, with an additional 5 days of protected time for educational leave (conferences, seminars, special projects), and an additional "wellness" day off that can be used during the year for any personal reason. Sick leave is protected, in contrast to many residency programs where utilized sick days are required to be "paid back" on future dates when the resident is scheduled to be off. Furthermore, as a program that is primarily driven by direct attending-resident working interactions, when a resident is sick or needs to attend a medical appointment, the attending covers the patients. This means that other residents are not pulled into work from time off or an elective rotation to cover the sick resident, reducing potential stress or guilt associated with taking time off. In a recent informal survey of the Kaiser Permanente Oakland Psychiatry PGY-2 class, 100% stated they felt supported by the attendings and the Program Director regarding sick/medical visit leave, which was felt to be as important as having the policies in place to begin with.

For more holistic measures, Kaiser Permanente Oakland has developed many systemic practices to contribute to wellness. These initiatives were informed by several resources, including learnings from the literature on burnout and wellness, and data from periodic resident needs assessments, including both direct and anonymous feedback channels. A summary of the implemented measures may be found below (Table 1). From the beginning, Kaiser Permanente Oakland's Psychiatry Residency Program has had a Wellness Committee chaired by representatives from each resident class, the Chief Residents, and three faculty members. The yearly budget provided by Graduate Medical Education has grown every year as the resident body has multiplied and the COVID-19 in-person gathering restrictions have begun to loosen. Funds are mainly used toward community-building events that take place outside of work hours to foster interests and connection outside of the residency bubble (eg, dinners, social events, wellness-related activities). The committee also serves as an incubator for cultivating a culture of wellness by employing an iterative process of soliciting and incorporating feedback, program development and implementation, and evaluation that has resulted in many of the systemic wellness measures. There are certainly areas for improvement regarding reduction of burnout and promotion of wellness in the Kaiser Permanente Psychiatry Residency Program.
The Committee is currently performing an updated review of the literature on residency wellness programs and creating a comprehensive survey for current residents to determine what factors within the domain of community well-being are most important to them, and to perform an evaluation of validated indicators of occupational well-being such as burnout and professional fulfillment. These values will then be aligned with readiness for change as defined by current strategy and available resources.

Conclusions Evidence-based, comprehensive strategies and specific interventions to improve wellness factors are critically needed at a time of increasing burnout among all physicians working on the front lines; psychiatry residents are in a unique position of working to improve the mental health of their patients specifically, while often sacrificing their own in the process. Interventions are often promoted at the individual level without meaningful systemic change. Our program has used the literature to inform the initiatives we have implemented to address resident wellness and will continue to develop these initiatives. To succeed, this process requires collaboration between program directors, faculty, residents, GME, and wellness leaders, who should work together to assess community well-being systematically, develop strategies to improve relevant wellness factors, assess progress, and continue to reevaluate our efforts.

Table 1 (excerpt). Individual mental health and well-being maintenance:
• Distribution of wellness resources at orientation and periodically throughout the year (including dedicated time to review the concept of burnout: remind residents of the free Maslach Burnout Inventory, review wellness practices and resources, and provide a list of sliding-scale psychotherapists in the community)
• Weekly catered resident lunches
2023-05-23T06:17:19.133Z
2023-05-22T00:00:00.000
{ "year": 2023, "sha1": "d64db5142234547296d8c677a8cf825c212a2825", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "47d2a1ba723ce1a4b04240d486b8c7ea8d61bc3a", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
245020391
pes2o/s2orc
v3-fos-license
Optimization of follow-up in patients with papillary thyroid cancer who show no evidence of disease 9–12 months after treatment

Abstract Background Papillary thyroid cancer (PTC) has an excellent prognosis, and recurrence is rare in patients with no evidence of disease (NED) after initial treatment. Despite this, several guidelines recommend long and costly follow-up, with limited evidence of improved patient outcomes. This study aims to examine the value of follow-up in patients with NED after treatment for PTC, by determining the rate of recurrence, recurrence-associated morbidity, and death, and whether any recurrence was diagnosed through the follow-up programme.

Methods Patients operated for PTC at Lund University Hospital between January 2004 and December 2016 were eligible. Patients with T1a N0/NX were excluded, as well as patients with any other thyroid malignancy. Data were collected retrospectively by searching the patients' medical records. NED was defined as thyroglobulin less than 1 ng/ml, thyroglobulin antibodies less than 20 kIU/l, and negative imaging. Biochemical recurrence was defined as thyroglobulin greater than 1 ng/ml, and/or thyroglobulin antibodies greater than 20 kIU/l. Structural recurrence was defined as a strong suspicion of recurrence on imaging and/or histological proof of recurrence.

Results Out of a cohort of 187 patients, there were 90 patients with NED who were followed for a median of 6.3 years. Three patients had biochemical recurrence; none of them had symptoms, nor were they treated for their recurrence. Three had structural recurrence; all were above 75 years old and only one was diagnosed through the follow-up programme. No patient died of PTC; five patients died during the follow-up.

Conclusion Follow-up as it is designed today cannot identify recurrences accurately and seems to be of questionable benefit in younger patients with NED after treatment for PTC.

Introduction Papillary thyroid cancer (PTC) is the most common malignancy of the thyroid, and its incidence is rising 1 . Despite this, the mortality rate has remained low, and the prognosis is excellent 2 . PTC is treated by surgery with or without postoperative radioiodine treatment. After treatment, PTC can recur, most often in locoregional lymph nodes, but there is also a risk of distant metastasis after treatment. Previous studies have indicated that the recurrence rate for PTC is high and that it can occur a long time after initial treatment [3][4][5] . Recurrence rates were also shown to be higher in patients with more advanced disease at diagnosis, that is, higher-stage disease [3][4][5] . However, more recent studies have shown that when stratifying patients according to their treatment response, the recurrence rate in patients with no evidence of disease (NED) after treatment is very low, including patients with advanced disease at diagnosis [6][7][8][9][10][11] . Despite this, most guidelines still recommend long follow-up [12][13][14][15][16] . Furthermore, guidelines also recommend suppression treatment with levothyroxine, resulting in iatrogenic hyperthyroidism. Suppression treatment results in low levels of thyroid-stimulating hormone, which has been shown to reduce recurrence risk [17][18][19] , but it has also been associated with long-term adverse effects, such as an increased risk of atrial fibrillation and osteoporosis 20,21 . Thus, the present guidelines may lead to overtreatment and unnecessary follow-up of patients with NED after treatment for PTC.
It also seems that previous results have not been fully implemented in clinical guidelines. Therefore, this study aimed to determine whether follow-up is necessary for patients with NED after initial treatment by investigating the risk of recurrence and death. The study also aimed to identify risk factors for recurrence and to examine whether the recurrences were found inside or outside the follow-up programme.

Study design and included patients

A single-centre retrospective observational study was performed at Skåne University Hospital Lund, Sweden. This unit is a tertiary referral centre for patients with malignancy in the thyroid. Patients who had primary surgery for PTC between 1 January 2004 and 31 December 2016 were included. Patients with a postoperative histopathological diagnosis of any other malignancy in the thyroid, such as follicular, medullary, anaplastic, poorly differentiated, or other cancers, were excluded. Patients who were lost to follow-up, that is, patients who moved to another healthcare region, were excluded, as were patients with stage T1a N0/X, since the current Swedish guidelines indicate that the latter group should not be followed up due to a very low risk of recurrence [22].

Data collection

Patients were identified by cross-linking data from two clinical registers: the Scandinavian Quality Register for Thyroid, Parathyroid, and Adrenal Surgery (SQRTPA) and INCA, the Swedish national register for thyroid cancer [23,24]. SQRTPA has registered patients since 2004 and INCA since 2015 in Skåne. Registration in SQRTPA is carried out after written informed consent is given by patients. After identification of patients, data for the present study were collected by searching patients' electronic hospital records and entered into a study-specific database. This study was approved by the regional ethics committee (DNR 2019-02060) as well as the committee for matters regarding quality registers, medical databases and preparation in Skåne Region (case number 130-139), both of which waived the need for further patient consent. The variables collected from hospital records, SQRTPA and INCA were: age at diagnosis, sex, date of surgery, TNM stage, multifocality, the extent of primary treatment, disease status including unstimulated thyroglobulin (Tg) levels, thyroglobulin antibody (TgAb) levels, imaging at the follow-up visit 9-12 months after treatment and at all subsequent follow-up visits, type and treatment of, and status after, recurrence if any, disease status after treatment of recurrence, whether the recurrence was found through the follow-up programme or not, and cause of death in patients who died during follow-up.

Swedish national guidelines for papillary thyroid cancer

The patients included in this study were treated and followed up according to the Swedish national guidelines for PTC. The first national guidelines for PTC in Sweden were published in 2012. They were based on the guidelines published by the ETA in 2006 [25]. According to the Swedish guidelines from 2012, all tumours except T1a were recommended for total thyroidectomy followed by radioiodine treatment and suppression therapy. The patients were then recommended a 1-year follow-up visit with neck ultrasonography and Tg and TgAb tests. Low-risk patients (pT1b, T2, pN0-X, M0-X) were recommended annual follow-up with Tg and TgAb tests, and if these showed undetectable levels after 2 years they could be referred for continued follow-up by their family doctor.
High-risk patients (pT3, T4, pN1a, pN1b, M0-X) were recommended follow-up visits with Tg and TgAb tests every other year at a specialist centre for a minimum of 10 years before referral to their family doctor [22].

Classification of no evidence of disease, endpoints and risk groups

Response to treatment was classified at the scheduled follow-up visit 9-12 months after treatment, that is, thyroid surgery with or without postoperative radioiodine remnant ablation (RRA). Patients were classified as having NED if they had unstimulated Tg levels less than 1 ng/ml and TgAb levels less than 20 kIU/l, as well as no signs of disease on imaging. Imaging was performed according to hospital protocols and included cervical ultrasonography. Recurrence was classified as: biochemical recurrence only, with Tg greater than 1 ng/ml and/or TgAb greater than 20 kIU/l without imaging or clinical suspicion of recurrence; or structural recurrence, with imaging strongly suspicious for and/or biopsy-verified recurrence or metastasis, with or without biochemical evidence of recurrence. Patients were followed from the date of their primary surgery until the last scheduled visit in the follow-up programme. Every patient's medical record was searched for signs of recurrence at any visit to Skåne University Hospital until and including the end date of 1 December 2020. Time to recurrence was calculated using the date of primary surgery as the starting date and the first record of the recurrence as the end date. Death was ascertained either through patients' medical records or through INCA. Death was defined as overall or disease-specific, using information available either from death certificates or patients' hospital records. Patients were stratified into groups of low, intermediate or high risk of recurrence according to the American Thyroid Association (ATA) 2015 guidelines [12], based on the patients' pathological TNM (pTNM) stage at diagnosis, using the TNM 7th edition [26].
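Because the NED and recurrence definitions above are simple thresholds, the classification logic can be stated compactly. The following is a minimal sketch in Python; the function and argument names are our own illustrations and not taken from the study database:

```python
def classify_disease_status(tg_ng_ml, tgab_kiu_l, imaging_suspicious,
                            biopsy_verified=False):
    """Classify disease status using the study's thresholds: NED requires
    unstimulated Tg < 1 ng/ml, TgAb < 20 kIU/l, and negative imaging.
    (Function and argument names are illustrative only.)"""
    if imaging_suspicious or biopsy_verified:
        # Strong imaging suspicion and/or biopsy-verified disease,
        # with or without biochemical evidence.
        return "structural"
    if tg_ng_ml > 1.0 or tgab_kiu_l > 20.0:
        # Elevated markers without imaging or clinical suspicion.
        return "biochemical"
    return "NED"
```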
Statistical analysis

Statistical analysis was performed using STATA/MP, version 16.1 for Mac (StataCorp, College Station, Texas, USA). Medians and interquartile ranges were reported for continuous variables; numbers and percentages for discrete variables. Characteristics of patients with and without recurrence were compared using Pearson's chi-squared or Fisher's exact test, where appropriate. All tests were two-sided, and a difference with P < 0.050 was considered significant.

Characteristics of patients and clinical outcomes

The inclusions and exclusions of patients in this study are illustrated in Fig. 1. There were 295 patients treated with thyroid surgery and with PTC as the only malignant thyroid diagnosis during the study period; nine were lost to follow-up, and a further 108 had T1a N0/X. Of the remaining 178 patients, 88 had either biochemical or structural persistent disease. Thus, 90 patients remained who had NED after treatment, and who constituted the study cohort of the present report. Baseline clinicopathological characteristics of the included patients are shown in Table 1. The median age was 48 (i.q.r. 40-64) years. There were 74 women and 16 men, who were followed for a median of 6.3 years; a median period of 4.7 years was within the scheduled follow-up programme. Their clinical outcomes are presented in Table 2.

Eighty-four patients (93.3 per cent) remained free of tumour during follow-up, while three (3.3 per cent) patients developed exclusively biochemical signs of recurrence, and a further three (3.3 per cent) developed a confirmed structural recurrence. Five patients (5.6 per cent) died during follow-up. They were all above 75 years of age at the time of death. According to medical records and death certificates, all died of causes other than PTC.

Risk factors for recurrence

Due to the low number of included patients and the low number of events, no multivariable analysis to identify risk factors for recurrence was possible. Patients with structural recurrence were more often aged 75 years or older at treatment (P = 0.004), had a higher risk group (stage) at diagnosis (P = 0.04), and more often had a multifocal primary tumour (P = 0.04) (Table 2).

Recurrences

The details of the patients who had recurrence are summarized in Table 3. The biochemical recurrences were detected 3.1, 5.1 and 15.0 years after initial diagnosis. These patients were 32, 39 and 72 years old at diagnosis of the primary disease. The structural recurrences were detected 2.3, 2.7 and 6.9 years after the initial diagnosis. Patients with structural recurrence were 76, 85 and 86 years old at diagnosis of their primary disease.

Biochemical recurrences

Of the three patients with biochemical recurrence, two presented exclusively with elevated concentrations of TgAb; the third had elevated levels of Tg. None of them presented with any symptoms of recurrence during their follow-up and none of them were treated further. All of them were recommended continued follow-up, and during this study none of them had any structural evidence of disease and none of them died.

Structural recurrences

All patients with structural recurrence of disease were female and 75 years or older at the time of diagnosis, and all of them were 80 years or older at the time of recurrence. One structural recurrence was detected at a scheduled follow-up visit; the other two were detected due to symptoms leading the patient to seek care outside the follow-up programme. All three patients with structural recurrence were scheduled for or had undergone treatment for their recurrence. None of these patients remained disease-free at the end of follow-up, and two of the patients with structural recurrence died. One died from a heart attack, while the other had several potential causes of death stated in the medical records (renal failure, infection and possibly stroke), none related to PTC.

Discussion

The present study found that of 90 patients with NED after treatment for PTC, only three developed a structural recurrence. Of these three patients, the follow-up programme detected only one; the other two had symptomatic recurrences and sought medical care between follow-up visits. A further three patients had elevated levels of Tg or TgAb, which were classified as biochemical recurrences. The rate of structural recurrence of 3.3 per cent in the present study is in line with rates reported in previous studies [6-11]. The finding that only one out of three recurrences was detected through the follow-up programme has not been published previously. The definition of biochemical recurrence varies. Some authors have suggested a suppressed Tg greater than 1.0 ng/ml [6]; others also include raised TgAb, above 60 [7] or 100 kIU/l [8].
Two out of three patients with biochemical recurrence in this study had levels of TgAb between 20 and 60 kIU/l, which would not have been classified as recurrences in previous studies [6-8]. None of the patients with biochemical recurrence in this study had symptoms or received treatment, but they continue to be followed up. Their risk of any future structural recurrence is unclear. In the study by Han and colleagues [8], of ten patients converting from TgAb levels less than 100 kIU/l after treatment to levels above 100 kIU/l during follow-up, none developed structural recurrence. Rising Tg seems to impart a higher risk of later structural recurrence. Thus, in the same study, five out of 37 (13.5 per cent) patients with rising Tg later developed structural recurrence. Scheffel and co-workers [27] found six structural recurrences in 90 patients with NED and rising levels of Tg, a rate of 6.7 per cent. These results suggest that the risk of structural recurrence after biochemical signs of recurrence may not be very high. The third patient in this study with biochemical recurrence demonstrated a Tg elevated to 4.6 ng/ml. This patient did not undergo RRA at baseline, indicating that the rising levels of Tg might have been due to regrowth of benign thyroid remnants, and not recurrence of malignant disease. In contrast to Tuttle and colleagues [6], who found that structural recurrences occur 4 to 11 years after primary treatment, two of the three structural recurrences in the present study were detected after 2.3 and 2.7 years. Given this relatively short time to recurrence, it is possible that these were not true recurrences but instead persistent disease. Both recurrences were metastatic lymph nodes in the neck, and both patients were initially diagnosed with lymph node involvement at primary diagnosis, one in the central neck (N1a) and the other in the lateral neck (N1b). A study by Llamas-Olier and colleagues [28] in 2018 showed that N1b at diagnosis is a risk factor for early recurrence or persistent disease. Another study, from Memorial Sloan Kettering Cancer Center in 2015, of 3664 patients with differentiated thyroid cancer showed that the risk of recurrence increases with age, regardless of stage [29]: a 37-fold increase in the risk of recurrence in patients above 70 years of age compared with patients below the age of 40. This aligns with the results of the current study, where patients with structural recurrences were 76, 85 and 86 years old at the time of diagnosis, compared with the median age of 48 years in the cohort. Some studies have investigated whether molecular markers can be used to determine prognosis in thyroid cancer [30]. The most studied marker is mutation of the BRAF gene. A study from 2013 published by Xing and co-workers [31] showed that the risks of metastasis and death, as well as older age at diagnosis, are all associated with PTC harbouring a specific BRAF mutation. Thus, it is plausible that PTC is not a single disease but consists of several different molecular types of cancer, and that prognosis is determined by early genetic events. Unfortunately, data on BRAF mutations were not available in the present study. The results of the present study contrast with earlier studies, which found high rates of recurrence after treatment for PTC.
For instance, Mazzaferri and Kloos [5] in 2001 found a PTC recurrence rate of 23.5 per cent, and a study from the Mayo Clinic of 800 patients who had PTC surgery between 1946 and 1970 found a recurrence rate of 18 per cent [3]. Early studies such as these are often cited to support the value of extensive follow-up of PTC. However, their results are in stark contrast to the 1 to 4 per cent recurrence rate presented in more recent studies of patients with NED after treatment [6-11]. One reason for the lower contemporary recurrence rate might be improved surgical treatment: today, surgeons perform more compartment-oriented lymph node dissection instead of so-called 'berry picking', where only suspicious-looking lymph nodes are excised [32]. Another is improved RRA [33]. Most importantly, the evaluation of disease status after surgical treatment with Tg, TgAb and imaging makes prediction of recurrence much more precise than previously [34,35]. In this regard, it is important to note that of 295 patients with PTC in this study, 88 did not have NED after primary treatment, but showed signs of persistent disease. Thus, the very low risk of recurrence relates only to patients with NED after treatment. A strength of the present study is its single-centre design, which minimizes confounding factors regarding differences in treatment and follow-up between different clinics. This study is the first northern European study on how to follow patients with PTC and NED optimally after treatment. Some limitations of the present study need to be mentioned. First, the Tg assay has changed over time. Before 2016, Skåne University Hospital Lund used an assay with a sensitivity of 1.0 ng/ml. From 2016, a more sensitive assay of 0.1 ng/ml was introduced. Thus, patients who were considered biochemically free of disease before 2016 could theoretically still have had levels of Tg between 0.1 and 1.0 ng/ml. Although the ATA 2015 guidelines [12] recommend a suppressed Tg level of less than 0.2 ng/ml as the definition of NED, the sensitivity of 1.0 ng/ml of the Tg assay used at Skåne University Hospital Lund in the period 2004-2016, which was also used in the present study, made an inclusion criterion of less than 0.2 ng/ml impossible; therefore, Tg less than 1.0 ng/ml was used to define NED. In 2009, all thyroid cancer treatment was centralized to Lund. Due to this centralization of healthcare, many more of the patients included in this study had surgery after 2009 than before 2009. The detection of more indolent PTC in later years could also have affected the results. Other limitations of this study include its retrospective design and the low number of recurrences, which decrease the statistical power and made it impossible to perform multiple regression analysis to identify risk factors. However, age above 75 years was clearly a significant risk factor for recurrence, since no patient below 75 years of age with NED had a structural recurrence. A further limitation is the lack of molecular analysis in the present paper. It can only be speculated that the old age of patients with structural recurrence is a confounder for a specific molecular type of cancer associated with higher recurrence risk. Further studies are needed to explore this. Only three out of 90 patients who had NED after primary treatment experienced structural recurrence, none of them was below 75 years of age, and the follow-up programme accurately identified only one of them.
Thus, follow-up, as it is designed today, seems to be of questionable benefit in younger patients with PTC and NED 9 to 12 months after treatment.

Funding

The Anna-Lisa and Sven Eric Lundgren Foundation for Medical Research.
Scaling Vision-Language Models with Sparse Mixture of Experts

Abstract

The field of natural language processing (NLP) has made significant strides in recent years, particularly in the development of large-scale vision-language models (VLMs). These models aim to bridge the gap between text and visual information, enabling a more comprehensive understanding of multimedia data. However, as these models become larger and more complex, they also become more challenging to train and deploy. One approach to addressing this challenge is the use of sparsely-gated mixture-of-experts (MoE) techniques, which divide the model into smaller, specialized sub-models that can jointly solve a task. In this paper, we explore the effectiveness of MoE in scaling vision-language models, demonstrating its potential to achieve state-of-the-art performance on a range of benchmarks compared with dense models of equivalent computational cost. Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-off between compute and performance when scaling VLMs. We hope our work will inspire further research into the use of MoE for scaling large-scale vision-language models and other multimodal machine learning applications.

Introduction

The ability to understand and generate natural language from visual information is a critical component of many real-world applications, including visual question answering (VQA), visual reasoning, and multimodal information retrieval. In recent years, the success of deep learning in natural language processing (NLP) has led to the development of large-scale vision-language models (VLMs) (Tan and Bansal, 2019; Chen et al., 2020; Li et al., 2021b; Gan et al., 2020; Kim et al., 2021a; Alayrac et al., 2022; Wang et al., 2022c; Shen et al., 2022b; Li et al., 2021a; Shen et al., 2022a; Jia et al., 2021; Li et al., 2022; Yu et al., 2022) that leverage powerful neural network architectures to encode and decode multimodal information. However, state-of-the-art vision-language models like Flamingo-80B (Alayrac et al., 2022), BEIT-3-1.9B (Wang et al., 2022b), and PaLI-17B (Chen et al., 2022) can be computationally expensive and difficult to train, which has motivated researchers to explore ways of improving their efficiency and effectiveness.

Recently, sparsely activated Mixture of Experts (MoE) models have been successfully employed to scale both vision (Riquelme et al., 2021; Lou et al., 2021; Mustafa et al., 2022) and text models (Shazeer et al., 2017; Lepikhin et al., 2020; Zoph et al., 2022; Du et al., 2022). These models are motivated by the need to increase model parameters while controlling compute costs. In addition, these models provide other advantages, including sparsity that can mitigate catastrophic forgetting in continual learning (Collier et al., 2020; Komatsuzaki et al., 2022), and an inductive bias that can enhance performance in multitask learning (Ma et al., 2018; Kudugunta et al., 2021; Kim et al., 2021b). Overall, the use of MoEs has proven to be a promising strategy for scaling deep learning models across various domains.
Building on the success of MoEs in individual domains, and applying the intuition that sparse models may handle different tasks better than their dense counterparts, we investigate the potential of MoEs for vision-language modeling. To this end, we take the first step in this direction and explore models that can process both images and text for vision-language tasks. One similar effort has been studied in LIMOE (Mustafa et al., 2022), where the authors proposed a modal-agnostic CLIP-style (Radford et al., 2021) multimodal MoE architecture, but their focus is mainly on the contrastive pre-training objective and vision-only downstream tasks. There are two limitations in this setting: (1) the increasing model capacity of MoEs under the simple contrastive objective can easily lead to over-fitting issues; (2) the vision-only benchmarking does not reveal the full power of scaling up multimodal models. Alternatively, our goal is to demonstrate the effectiveness of MoEs under generative modeling for vision-language tasks and provide a more comprehensive foundation for future research in this area. Specifically, we propose a novel VLM architecture that employs MoE to scale both the text-based and vision-based feed-forward networks (T-FFN and V-FFN, respectively) in a unified framework. Our approach divides the model into multiple sub-models, each of which is responsible for processing a modal-specific subset of the input data. The text and vision input representations are then aligned via three masked data modeling objectives (Wang et al., 2022b).

We train a range of VL-MoE models and evaluate them on vision-language classification, vision-language retrieval, vision-only and language-only tasks. Our experiments demonstrate that MoE can significantly improve the efficiency and effectiveness of VLMs, enabling them to handle large-scale, real-world multimedia data. We scale a BASE-size model up to a 1.8B-parameter VL-MoE LARGE/16E, which only applies 560M parameters per token and achieves competitive performance with dense models that make use of similar or more pre-training image-text pair data and apply 3-4x more parameters per token.

In summary, our contributions are as follows:

• We propose VL-MoE, the first large-scale generative MoE multimodal model for vision-only and language-only, as well as vision-and-language tasks.
• We explore various scaling strategies, including increasing dense model size, increasing the number of experts, and scaling either T-FFN or V-FFN alone, to investigate the trade-offs between model complexity and performance on various downstream tasks.
• We present ablations to understand VL-MoE's behavior, interpretability, and our design choices.
Related Work

For model architecture, there are two main designs. The first design, utilized by models such as (Radford et al., 2021; Jia et al., 2021; Yuan et al., 2021), separately encodes each modality with different encoders. While this approach performs well for image-text retrieval tasks, it struggles with complex vision-language tasks like visual reasoning. The second design, employed by models like (Tan and Bansal, 2019; Li et al., 2021a; Lu et al., 2019; Li et al., 2019; Kim et al., 2021a; Chen et al., 2022; Alayrac et al., 2022), uses a complex fusion module with cross-modal attention to combine modalities. However, this design sacrifices efficiency for improved performance. Recently, a new design has emerged with the MOME Transformer used in both VLMO and BEIT-3. This design unifies the dual-encoder and fusion-encoder models by introducing a mixture-of-modality-experts technique. With MOME, the various modalities are encoded within a shared Transformer block, allowing for improved scalability and achieving state-of-the-art performance on vision-language tasks. There is increasing interest in growing VL model capacity within an affordable compute budget, including MoE (Mustafa et al., 2022) and the injection of new trainable modules into pre-trained models (Alayrac et al., 2022; Shen et al., 2022a; Liu et al., 2023b; Li et al., 2023d,b; Koh et al., 2023); the former remains less studied.

For pretraining objectives, multiple cross-modal pretraining objectives have been studied. They can be categorized into two classes: (1) discriminative modeling, including image-text contrastive learning (Radford et al., 2021; Jia et al., 2021), image-text matching (Tan and Bansal, 2019; Kim et al., 2021a; Li et al., 2021a; Bao et al., 2022b) and word-patch/region alignment (Chen et al., 2020; Kim et al., 2021a); (2) generative modeling, including masked language modeling (Tan and Bansal, 2019; Su et al., 2020; Kim et al., 2021a) or prefix language modeling (Wang et al., 2022c), masked region modeling (Tan and Bansal, 2019), and multimodal prefix language modeling (Wang et al., 2022c). Recently, BEIT-3 showed strong scaling results by unifying the generative multimodal pretraining objective with masked data modeling, which comprises masked image modeling and masked language modeling on the monomodal encoders and masked multimodal modeling on the multimodal encoder. In this paper, we perform our MoE study by adopting the MOME Transformer as the backbone dense network and generative (masked data) modeling as the pretraining objective, given its simplicity and scaling ability.
Sparse Mixture of Experts models. We build upon the concept of deep sparse MoEs, which have been studied independently in both Computer Vision (Riquelme et al., 2021; Lou et al., 2021; Mustafa et al., 2022) and Natural Language Processing (Shazeer et al., 2017; Lepikhin et al., 2020; Fedus et al., 2021; Du et al., 2022; Zoph et al., 2022; Clark et al., 2022; Zhou et al., 2022; Komatsuzaki et al., 2022; Kudugunta et al., 2021; Shen et al., 2023) in the context of conditional computation. The goal of conditional computation is to increase the number of model parameters without a proportional increase in computational cost, which is achieved by selectively activating only relevant parts of the model based on input-dependent factors (Bengio, 2013; Chen et al., 1999; Davis and Arel, 2013). MoE models use a learned gating mechanism that activates only a subset of k experts out of E ≫ k for a given input, allowing an input to select either all experts (Eigen et al., 2013) or only a sparse mixture thereof, as in recent massive language models (Fedus et al., 2021; Du et al., 2022). While many works aim to improve the gating mechanism itself (Hazimeh et al., 2021; Lewis et al., 2021; Roller et al., 2021; Zhou et al., 2022), MoE models have also been studied for multitask learning (Hazimeh et al., 2021; Kudugunta et al., 2021) with per-task routers (Ma et al., 2018), although a shared pool of experts is typically used. MoE models have been explored for multimodal learning as well, with LIMOE (Mustafa et al., 2022) and Uni-MoE (Zhu et al., 2022) being most relevant to our work. However, LIMOE considers the CLIP-style contrastive objective for pre-training, and vision/retrieval tasks for downstream evaluation. Uni-MoE focuses on routing decisions with limited experts and evaluates on caption/vision/language/retrieval tasks. To the best of our knowledge, the proposed VL-MoE is the first MoE scaling study to consider a generalized generative modeling objective in VL pre-training, and we evaluate its scaling performance in a more comprehensive manner, including vision-only and language-only, as well as vision-and-language tasks.

Method

We first describe the masked data modeling pretraining objectives. We next discuss MoEs and sparse MoEs and present how we apply the sparse MoE methodology to vision-language models, before explaining our design choices for the routing algorithm and the implementation of VL-MoE.

Vision-Language Masked Data Modeling

We utilize a unified masked data modeling objective (Wang et al., 2022b) to pretrain VL-MoE on monomodal (i.e., images and texts) and multimodal data (i.e., image-text pairs). This approach has been demonstrated to be scaling-friendly with small batch sizes. Our pretraining process involves masked image modeling on monomodal image data, masked language modeling on monomodal text data, and masked vision-language modeling on multimodal image-text pairs.

Masked Language Modeling. We use masked language modeling (MLM) to learn language representations from large-scale text-only data. For MLM, 15% of tokens in monomodal text data are randomly masked, and the model is trained to recover the masked tokens from the corrupted input text. Masked tokens are replaced by a [MASK] token 80% of the time, by a random token 10% of the time, and kept unchanged 10% of the time, following BERT (Devlin et al., 2019).
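To make the corruption scheme concrete, here is a minimal sketch of BERT-style 80/10/10 masking; the function name and the flat `vocab` list are illustrative simplifications, not the tokenizer actually used:

```python
import random

def corrupt_for_mlm(tokens, vocab, mask_token="[MASK]", mask_ratio=0.15):
    """Apply BERT-style MLM corruption: of the selected 15% of tokens,
    80% become [MASK], 10% become a random token, 10% stay unchanged."""
    corrupted, targets = list(tokens), [None] * len(tokens)
    for i in range(len(tokens)):
        if random.random() < mask_ratio:
            targets[i] = tokens[i]          # model must recover this token
            r = random.random()
            if r < 0.8:
                corrupted[i] = mask_token   # 80%: replace with [MASK]
            elif r < 0.9:
                corrupted[i] = random.choice(vocab)  # 10%: random token
            # else 10%: keep the original token
    return corrupted, targets
```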
Masked Image Modeling. In addition to masked language modeling, VL-MoE uses masked image modeling (MIM) to learn vision representations from large-scale image data. For MIM, block-wise masking is applied to 40% of image patches, and the pretraining objective is to reconstruct the discrete visual tokens of masked patches, following BEiT (Bao et al., 2022a). The image tokenizer of BEITv2 (Peng et al., 2022) is used to obtain the discrete tokens as the reconstruction targets.

Masked Vision-Language Modeling. To learn aligned vision-language representations, we use masked vision-language modeling (VLM), which extends masked language modeling and masked image modeling to multimodal data. The task aims at recovering masked image patches and text tokens based on visual and linguistic clues. In VLM, text tokens (with a 50% mask ratio) are randomly masked as in MLM, and the model is trained to recover the masked text tokens based on the joint image-text representations. Image patches are also masked with the same ratio as in MIM, and the corresponding visual tokens are predicted based on the image-text pair. The VLM task further encourages the model to learn alignments between image and text pairs.

VL-MoE Architecture

Input Representation. To obtain text representations, the input text is tokenized and projected onto word embeddings ({w_i}_{i=1}^M), where M is the length of the tokenized text sequence. Two special tokens, a start-of-sequence token ([T_CLS]) and a special boundary token ([T_SEP]), are added to the sequence. Text representations are obtained by summing the word embeddings and text position embeddings, resulting in H_0^w = [w_{[T_CLS]}, w_1, ..., w_M, w_{[T_SEP]}] + T_pos^w, where T_pos^w denotes the text position embeddings. For image representations, the input 2D image v ∈ R^{H×W×C} is split and reshaped into N = HW/P^2 patches v^p ∈ R^{N×(P^2 C)}, where C is the number of channels, (H, W) are the height and width of the input image, and P is the patch size. These patches are then flattened into vectors and linearly projected to obtain patch embeddings, following vision Transformers (Dosovitskiy et al., 2020; Touvron et al., 2020; Bao et al., 2022a). We prepend a learnable special token [I_CLS] to the sequence. The resulting image input representations are given by H_0^v = [v_{[I_CLS]}, V v_1^p, ..., V v_N^p] + V_pos, where V is the linear projection and V_pos denotes the image position embeddings. To form image-text input representations, we concatenate the image and text input vectors, resulting in H_0^{vl} = [H_0^w; H_0^v].

The dense backbone network of VL-MoE is a shared multimodal Transformer, illustrated in Figure 1. To encode different modalities, we utilize a mixture-of-modality-experts (MOME) Transformer (Bao et al., 2022b; Wang et al., 2022b), which takes image and text representations of monomodal data, as well as representations of image-text pairs, as input. The MOME Transformer comprises multiple blocks, each consisting of a multi-head self-attention layer and a feed-forward expert layer. While the self-attention module is shared across modalities, each feed-forward expert layer contains a pool of modality-specific experts (V-FFN, T-FFN, or VL-FFN) that act as a substitute for the feed-forward network in standard Transformers. This allows hard routing over the pool of feed-forward networks based on the modality of the input tokens.
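As a concrete sketch of the patchify-and-project step described above (N = HW/P^2 patches, each linearly projected), the following ViT-style module uses the standard strided-convolution trick; the class and argument names are our own, not the released implementation:

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an image into P x P patches and linearly project each one."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2  # N = HW / P^2
        # A strided convolution performs patchify + linear projection.
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size,
                              stride=patch_size)

    def forward(self, x):                    # x: (B, C, H, W)
        x = self.proj(x)                     # (B, dim, H/P, W/P)
        return x.flatten(2).transpose(1, 2)  # (B, N, dim)
```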
Conditional Computation with MoEs. The concept of conditional computation involves selectively activating different parts of a neural network based on the input (Bengio, 2013). One specific approach is to use a mixture-of-experts (MoE) model, where different "experts" handle different regions of the input space (Jacobs et al., 1991). In this paper, we adopt the MoE layer proposed in (Shazeer et al., 2017), which consists of E experts and is defined as

MoE(x) = Σ_{i=1}^{E} g(x)_i · e_i(x).

Here, x is the input to the layer, e_i: R^D → R^D is the function computed by expert i, and g: R^D → R^E is the "routing" function that determines the input-dependent weights for the experts. Both e_i and g are implemented as neural networks. Although this formulation still involves a dense network, it can be made sparse by restricting g to assign only k ≪ E non-zero weights, thereby eliminating the computation of unused experts. This approach allows for super-linear scaling of the number of model parameters in both training and inference.

VL-MoE. We apply sparse MoE to vision-language models in the context of MOME. As illustrated in Figure 1, inputs from different modalities are routed to V-FFN and T-FFN in the first (L - F) layers, and to V-FFN, T-FFN, or VL-FFN in the last F layers. To avoid the instability due to modality input imbalance observed when applying MoEs to modal-agnostic VL modules in V-MOE (Riquelme et al., 2021), we only use MoE for V-FFN and T-FFN in the first (L - F) layers. V-FFN and T-FFN have two layers and a GeLU (Hendrycks and Gimpel, 2016) non-linearity: V/T-FFN(x) = W_2 σ_gelu(W_1 x). For VL-MoE, we replace a subset of V-FFN and T-FFN with V-MoE and T-MoE layers, where each expert is an FFN with the same architecture e_i(x) = FFN_{θ_i}(x) but different weights θ_i. This design pattern is similar to that of the GShard (Lepikhin et al., 2020) and V-MOE (Riquelme et al., 2021) models. In V-MoE and T-MoE layers, each token x ∈ R^D is processed sparsely by k out of E available experts. To select which ones, a lightweight V/T-Router predicts gating weights per token: g(x) = softmax(W_g x) ∈ R^E, where W_g ∈ R^{D×E} is learned. The outputs of the k activated experts are combined linearly according to the gating weights:

MoE(x) = Σ_{i ∈ top-k} g(x)_i · FFN_{θ_i}(x).

To ensure computational efficiency and satisfy implementation constraints, each expert in VL-MoE has a fixed buffer capacity, which determines the maximum number of tokens it can process. The assumption is that tokens are approximately balanced across experts. If the capacity is exceeded, some tokens are not processed by the expert and are dropped, leading to a decrease in the success rate. This rate is a vital indicator of balanced routing and training stability. To mitigate this problem, we employ Batch Priority Routing (BPR) (Riquelme et al., 2021; Mustafa et al., 2022), which selectively skips tokens based on their routing weights. BPR prioritizes tokens with larger routing weights, as they are deemed more informative. Our results show that BPR is crucial for stable training of VL-MoE. We further analyze token routing decisions in Section 5 and dropped tokens in the Appendix.
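The routing and combination rules above can be illustrated with a short sketch of a sparse MoE layer in PyTorch; this is our own simplified illustration (expert capacity, token dropping, and BPR are omitted), not the released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Sketch of a sparse MoE layer: a router picks the top-k of E expert
    FFNs per token and mixes their outputs by the gating weights."""
    def __init__(self, dim=768, hidden=3072, num_experts=16, k=1):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, num_experts)  # g(x) = softmax(W_g x)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                          nn.Linear(hidden, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                          # x: (num_tokens, dim)
        gates = F.softmax(self.router(x), dim=-1)
        weights, idx = gates.topk(self.k, dim=-1)  # k non-zero weights
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                sel = idx[:, slot] == e            # tokens routed to e
                if sel.any():
                    out[sel] += weights[sel, slot].unsqueeze(-1) * expert(x[sel])
        return out
```

A production implementation would dispatch tokens to experts in parallel (e.g., via expert parallelism) rather than looping, but the mathematics is the same as in the formulas above.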
Pretraining Setup

Pretraining Data. Our pretraining process uses both monomodal and multimodal data. The monomodal data comprises ImageNet-22K for images, and English Wikipedia and BookCorpus (Zhu et al., 2015) for text. The multimodal data combines four datasets of image-text pairs: Conceptual Captions (Sharma et al., 2018), SBU Captions (Ordonez et al., 2011), COCO (Lin et al., 2014), and Visual Genome (Krishna et al., 2017), containing a total of 4 million images and 10 million image-text pairs.

Pretraining Setting. For the large-size model, we employ a 24-layer Transformer network with 1024 hidden size and 24 attention heads, following VIT (Dosovitskiy et al., 2020), BEiT (Bao et al., 2022a), and VLMO (Bao et al., 2022b). The use of VL-FFN starts at the 20th layer. The base/small-size model is a 12/8-layer Transformer network with 768/384 hidden size and 12/6 attention heads, where VL-FFN is used from the 10/8th layer. We randomly initialize the model parameters using the method described in BEiT (Bao et al., 2022a). The image resolution is set to 224 × 224, and the patch size is 16 × 16. The maximum sequence length for text is 96. We use a batch size of 6,144 and train the model from scratch for 200k steps, which is equivalent to 40 epochs over the image-text pairs. Each batch contains 2,048 images, 2,048 texts, and 2,048 image-text pairs. We perform image augmentation using random resized cropping, horizontal flipping, and color jittering, following the same method as BEiT (Bao et al., 2022a). The text data is tokenized using a SentencePiece (Kudo and Richardson, 2018) tokenizer with a vocabulary size of 64k. We use the Adam optimizer (Kingma and Ba, 2015) with β_1 = 0.9 and β_2 = 0.999 to optimize the model. The peak learning rate is 2e-3, and we use linear warmup for the first 10,000 steps and cosine learning rate decay. The weight decay is 0.05, and we disable dropout and use stochastic depth (Huang et al., 2016) with a rate of 0.1. The three pretraining losses are equally weighted, as in BEIT-3 (Wang et al., 2022b).

MoE Setting. For the default setting of MoEs in VL-MoE BASE/16E, we use E = 16 experts for T-FFN and V-FFN, respectively. All VL-MoEs activate k = 1 expert per token, similar to Switch Transformer (Fedus et al., 2021) and LIMoE (Mustafa et al., 2022). We replace every second dense T-FFN or V-FFN sublayer with an MoE sublayer, following GShard (Lepikhin et al., 2020) and Switch Transformer (Fedus et al., 2021). We use BPR for stability, as in V-MoE (Riquelme et al., 2021). For the auxiliary loss, we use the loading loss of (Shazeer et al., 2017) for T-FFN's MoE, and the averaged loading loss and importance loss of V-MoE (Riquelme et al., 2021) for V-FFN's MoE. The combination ratio for the auxiliary loss is set to 0.01 in all our experiments. We use 32-way expert parallelism and TUTEL (Hwang et al., 2022) for fast routing and computation. All the models are based on DeepSpeed (Rasley et al., 2020). Pre-training experiments are done on 32 Nvidia Tesla V100-32GB GPUs. Following ST-MoE (Zoph et al., 2022), we freeze all the MoE modules (router and expert network) during the finetuning process. The capacity factor C is set to 1.05 during training and 1 during inference, following (Riquelme et al., 2021).
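For reference, the MoE hyperparameters stated above can be collected in one place; this is a plain restatement in dictionary form, not a configuration file from the released code:

```python
# Default MoE settings for VL-MoE BASE/16E, restated from the text above.
vl_moe_base_16e = {
    "experts_per_modality": 16,   # E, for T-FFN and V-FFN separately
    "experts_per_token": 1,       # k = 1, as in Switch Transformer / LIMoE
    "moe_every_n_ffn": 2,         # every second dense FFN becomes MoE
    "aux_loss_weight": 0.01,      # combination ratio for auxiliary losses
    "capacity_factor_train": 1.05,
    "capacity_factor_eval": 1.0,
    "router_frozen_in_finetune": True,  # following ST-MoE
}
```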
Table 1: Finetuning results of different models on vision-language classification tasks and image-text retrieval tasks. We report vqa-score on the VQA test-dev and test-standard splits, accuracy for the NLVR2 development and public test set (test-P), and top-1 recall for image retrieval (IR) and text retrieval (TR). (* denotes a model that was reproduced by us and trained with the same setting as VL-MoE.)

Vision-and-Language Downstream Tasks

In our study, we explore the performance of VL-MoE on vision-and-language downstream tasks through finetuning experiments on three standard tasks: visual question answering (Goyal et al., 2017), natural language for visual reasoning (Suhr et al., 2019), and image-text retrieval (Plummer et al., 2015; Lin et al., 2014). Following BEIT-3, we use 480 × 480 image resolution for VQA fine-tuning and 384 × 384 for the other tasks.

Visual Question Answering (VQA). For VQA, the task is to generate/choose the correct answer given a natural image and a question. Following previous work (Kim et al., 2021a; Bao et al., 2022b; Wang et al., 2022b), we utilize the VQA 2.0 dataset (Goyal et al., 2017) and formulate it as a classification problem over the 3,129 most frequent answers. We finetune VL-MoE as a fusion network to encode both the image and the question. We use the final encoding vector of the [T_CLS] token as the representation of the image-question pair, and feed it into a classifier layer to predict the label.

Natural Language for Visual Reasoning (NLVR2). The visual reasoning task aims to predict whether a text description is true about a pair of images. We use the NLVR2 (Suhr et al., 2019) dataset for evaluation. Following OSCAR (Li et al., 2020), VinVL (Zhang et al., 2021) and VLMO (Bao et al., 2022b), we reformulate the triplet input into two image-text pairs, each containing the text description and one image. We use VL-MoE as a fusion network to jointly encode the image and text. The concatenated final vectors of the [T_CLS] token from the two pairs are then fed into a classification layer to predict the label.

Image-Text Retrieval. Image-text retrieval comprises both image-to-text retrieval and text-to-image retrieval, for different target modalities. We use the widely used COCO (Lin et al., 2014) and Flickr30K (Plummer et al., 2015) datasets to evaluate the model, and adopt the Karpathy split (Karpathy and Fei-Fei, 2015) following common practice. Note that the architectures of VL-MoE and BEIT-3 (Wang et al., 2022b) do not involve an image-text matching module as in CLIP (Radford et al., 2021). During inference, VL-MoE is used to encode images and text separately, and matching scores are computed as the dot product of image and text vectors to obtain the top-k candidates.

Table 2: Results of base-size models on image classification (ImageNet-1K) and natural language inference (MNLI-m). We report top-1 accuracy for both.
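The retrieval inference procedure just described reduces to a dot product followed by a top-k selection; here is a minimal sketch (the encoder outputs are assumed to be given):

```python
import torch

def retrieve_topk(image_vecs, text_vecs, k=5):
    """Dual-encoder retrieval: score every image-text pair by the dot
    product of the separately encoded vectors, then take the top-k
    text candidates for each image."""
    scores = image_vecs @ text_vecs.T          # (num_images, num_texts)
    topk_scores, topk_idx = scores.topk(k, dim=-1)
    return topk_scores, topk_idx
```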
Table 1 presents the results of our vision-language model on classification and retrieval tasks, including VQA, NLVR2, COCO, and Flickr30K. To ensure a fair comparison, we provide details on the amount of pretraining image-text pair data, the number of pretraining steps, and the number of parameters per input token. Following LIMOE (Mustafa et al., 2022), we define the number of parameters per input token as the number of parameters that the model applies to each image-text token pair. Notably, VL-MoE LARGE/16E contains 2 billion parameters in total, but only applies 560 million parameters per token. Additionally, all routers combined account for less than 0.5 million parameters. Our model outperforms previous large/base-size models on VQA, NLVR2, COCO, and Flickr30K by a significant margin, particularly when compared to a reproduced BEIT-3 (Wang et al., 2022b) pretrained using the same settings as VL-MoE. Moreover, to the best of our knowledge, VL-MoE is the first to demonstrate that a mixture-of-experts architecture can successfully scale at a comparably modest architecture size and training count while achieving strong generalization on a range of vision-language tasks. Interestingly, Switch Transformer (Fedus et al., 2021) struggles with generalization for language MoE, while V-MOE (Riquelme et al., 2021) and LIMOE (Mustafa et al., 2022) only evaluate on downstream vision tasks. Additionally, VL-MoE even outperforms VLMO LARGE and ALBEF, which are pretrained with more image-text pair data and initialized from pretrained models, on COCO and Flickr30K, and achieves competitive performance on VQA and NLVR2. We assume that this may be because the capacity of VL-FFN has not been scaled in VL-MoE, as reflected in the pretraining plot in Figure 2 (the difference in VLM loss between VL-MoE and the dense BEIT-3 model is smaller compared to that of the MLM and MIM losses). We leave scaling the VL-FFN module for future work, considering the increasing instability of modal-agnostic MoE architectures demonstrated in LIMOE (Mustafa et al., 2022).

Vision/Language-Only Downstream Tasks

Image Classification. We use the image classification task to evaluate the model on a vision-only downstream task, where the objective is to categorize an input image into its corresponding class. We employ the ILSVRC-2012 ImageNet dataset (Russakovsky et al., 2015), which consists of 1.3M images with 1k classes. Following BEIT (Bao et al., 2022a) and VLMO (Bao et al., 2022b), we perform average pooling over the final vectors and feed the resulting vector into a linear classifier layer to predict the label.

Natural Language Inference. We use the natural language inference task to evaluate the model on a language-only downstream task. The task involves determining the relationship between two pieces of text: given a premise sentence and a hypothesis sentence, the model needs to determine whether the hypothesis is true, false, or undetermined based on the information provided in the premise. We use the Multi-Genre Natural Language Inference (MNLI) (Williams et al., 2018) dataset, which contains 433k sentence pairs annotated with textual entailment information. We evaluate on the matched (MNLI-m) setting only.
As shown in Table 2, we compare VL-MoE with two base-size vision Transformers and V-MOE-B/16-E16 on image classification. For BEIT, BEIT-3 BASE and VL-MoE BASE/16E, we perform intermediate finetuning on ImageNet-22k to compare with VIT pretrained on ImageNet-22k. The model performs competitively with previous state-of-the-art supervised and self-supervised models on ImageNet-1k. Besides its dense counterpart BEIT-3 BASE, VL-MoE also outperforms other strong vision-language models (SIMVLM) pretrained with more data and more steps on MNLI-m.

Discussions

We conduct ablation studies to analyze the contributions of the Mixture-of-Experts modules used in VL-MoE from different perspectives. We evaluate the models on visual reasoning (NLVR2), image-text retrieval (Flickr30k), image classification (ImageNet-1k) and natural language inference (MNLI-m).

Table 3: Ablation studies of scaling strategies (all results are based on VL-MoE SMALL/16E models). All the *-MoE variants use 16 experts (where T/V stands for applying MoE on the T/V-FFN).

Scaling Strategy. In addition to scaling both T-FFN and V-FFN, we have also explored different scaling strategies by applying Mixture-of-Experts (MoE) modules to either T-FFN or V-FFN alone. The results of our experiments are presented in Table 3. Our findings indicate that scaling a single modality can improve the downstream performance on the corresponding modality as well as on overall vision-language tasks. However, we observed that scaling both the vision and language modalities leads to the most balanced performing model, with 70.6% averaged performance. This may be attributed to the fact that we employ three different pretraining objectives for each modality, and scaling each modality contributes to better optimization of the specific modality pretraining loss as well as the VLM loss. For further evidence, we include the pre-training loss in the Appendix.

Number of Experts. The optimal number of experts in Mixture-of-Experts (MoEs) is still a topic of debate, as there is no agreement on the ideal number. Previous NLP research has experimented with a wide range of expert numbers, ranging from thousands in early studies (Shazeer et al., 2017; Fedus et al., 2021) to as low as 32 or 64 in more recent research (Zoph et al., 2022; Du et al., 2022; Zhou et al., 2022), which has become the standard for vision models (Riquelme et al., 2021; Mustafa et al., 2022). In Figure 5, we investigate this further with VL-MoE, and our findings suggest that larger expert pools consistently yield performance improvements.

Effects of the Auxiliary Losses. As previously mentioned, experts in MoEs have a fixed buffer capacity, and without intervention, top-k MoEs tend to collapse, leading to poor performance as most tokens are dropped (Shazeer et al., 2017; Zhou et al., 2022). To prevent this, prior research has employed auxiliary losses to promote balanced routing (Riquelme et al., 2021; Zoph et al., 2022; Zhou et al., 2022; Mustafa et al., 2022). However, as shown in LIMOE (Mustafa et al., 2022), new challenges emerge in multimodal settings, such as modality imbalance, where one data type may be more prevalent than the other. We design VL-MoE in a modal-specific fashion to prevent the instability caused by the imbalance of multimodal data, and experiment with different auxiliary losses for V-MoE: the loading balance loss (Shazeer et al., 2017), the averaged loading balance and importance loss ("vloss") (Riquelme et al., 2021), and the z-loss (Zoph et al., 2022).
We present the results on VL-MoE SMALL/16E in Figure 4, which suggest that the z-loss hurts the vision-and-language pretraining of VL-MoE, and that using the loading balance loss alone leads to unstable training and underperforming models. The "vloss" turns out to give the most stable training, which is consistent with V-MOE (Riquelme et al., 2021) and LIMOE (Mustafa et al., 2022). BPR also helps in stabilizing training.

Token Routing Examples in VL-MoE. In Figure 3, we provide a qualitative analysis of token routing decisions on COCO. For vision tokens, their specialization is clear, as they are routed to specific experts such as food and vegetable experts, eye experts, OCR experts, etc. On the other hand, language tokens show signs of syntactic specialization, with some experts processing mostly padding tokens, while others focus on nouns and adjectives (and some padding), excluding prepositions, determiners, or verbs.

Comparison with LIMOE. In LIMOE (Mustafa et al., 2022), the single-modality MoE architecture and the employed contrastive loss are the two main building blocks. To directly compare the two components of multimodal LIMOE under our setting, we thoroughly experimented with optimizing either the single-modality MoE architecture or VL-MoE with the contrastive or masked data modeling (MDM) loss. However, we found that the models fail to converge when optimizing the LIMOE architecture with the MDM loss, likely because the MDM loss consists of three losses aimed at different modalities, which may exacerbate the modality imbalance problem and make it difficult to optimize MoEs even when equipped with the entropy balancing loss of (Mustafa et al., 2022). Therefore, we focused on optimizing VL-MoE and LIMOE with the contrastive loss, as it yielded more stable results. It should be noted, however, that while LIMOE uses 1.8B image-text pairs, our setting only has 4M. We then report the training and validation loss across steps when optimizing VL-MoE or LIMOE with the contrastive loss in Figure 8. The batch size is set to 2k. From the zero-shot validation results, it can be seen that both models quickly overfit to the 4M image-text pairs, but the single-modality MoE architecture in LIMOE inherits more instability.
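For completeness, the contrastive objective used in this comparison is the standard symmetric CLIP-style loss; the sketch below is a generic formulation with an illustrative temperature value, not the exact training code:

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_vecs, text_vecs, temperature=0.07):
    """Symmetric CLIP-style contrastive loss over a batch of pairs."""
    img = F.normalize(image_vecs, dim=-1)
    txt = F.normalize(text_vecs, dim=-1)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix
    labels = torch.arange(len(img), device=img.device)
    # Matched pairs sit on the diagonal; contrast each image against all
    # texts in the batch and vice versa.
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2
```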
Furthermore, we use the 4M data to enrich the experiments using the contrastive loss with different model settings in Table 5. We can see that LIMOE seems to exhibit a trend where performance does not improve much, or even decreases, as the number of training steps increases (from 75k to 100k), especially in the 105M parameter setting. This could be a sign of overfitting, where the model is starting to fit the training data more closely but is not generalizing as well to the validation/test data. Increasing the number of experts for LIMOE does not lead to significant performance gains, especially in the 105M parameter setting. This might likewise reflect overfitting to the limited image-text data.

Conclusion

In this paper, we have explored the use of Mixture-of-Experts (MoE) for scaling vision-language models. Our experiments demonstrate that MoE can be a promising technique for improving the efficiency and effectiveness of vision-language models. Specifically, we have shown that dividing a large vision-language model into smaller, specialized sub-models through MoE can achieve state-of-the-art performance on several benchmarks while reducing computational costs. Our experiments have also shown that larger expert pools yield consistent performance improvements. Furthermore, we have explored the impact of MoE on model interpretability and found that it can improve the interpretability of vision-language models by providing better insights into how the model processes different inputs.

In conclusion, our findings suggest that MoE is a valuable technique for scaling vision-language models, enabling them to handle large-scale, real-world multimedia data. Our work opens up new research directions for exploring the effectiveness of MoEs in other vision-language tasks, such as visual question answering, visual reasoning and image-text retrieval, and we hope our findings will inspire further investigations into this research area.

A Appendix

A.1 Further Analyses

"Dropped" Tokens. In MoE training, the issue of "dropped tokens" is inherited (Lepikhin et al., 2020; Shazeer et al., 2017; Mustafa et al., 2022; Riquelme et al., 2021; Zhou et al., 2022) and is caused by the limited capacity of each MoE expert, which can lead to instability. To provide a detailed analysis of this issue, we present Figure 6, which illustrates the distribution of dropped tokens in VL-MoE BASE/16E across different pre-training tasks. The figure shows that the MLM and MIM tasks exhibit a more balanced distribution of tokens compared to the VLM task, which may explain the improved performance of using MoEs in the former two pre-training tasks, as depicted in Figure 2. Additionally, the problem of dropped image tokens is more severe than that of dropped text tokens, which aligns with the results of the different scaling strategies presented in Section 5 and the findings in (Mustafa et al., 2022; Riquelme et al., 2021).

Pretraining Losses for Different Scaling Strategies. We additionally report the effect of the different scaling strategies of Section 5 for VL-MoE SMALL/16E on the masked language modeling (MLM), masked image modeling (MIM), and masked vision-language modeling (VLM) pre-training tasks across training steps in Figure 7. The results support our hypothesis that using three distinct pretraining objectives for each modality and scaling each modality leads to improved optimization of both the specific modality pretraining loss and the VLM loss.
A.2 Additional Results

We conduct experiments using COCO captions following (Wang et al., 2022b), where VL-MoE achieves 139.2 CIDEr and 23.1 SPICE, outperforming BEIT-3 with 137.5 CIDEr and 22.7 SPICE at base size. We also observe interesting routing specialization in the T-MoE of VL-MoE when generating the final word "cake" in Figure 3: "NN: lady" and "NN: slicing" route to experts 1 and 13, respectively; "DT: A, a" both route to expert 1; "JJ: hairnet, big" route to expert 7. These routings underscore the inherent expert specialization in the VL-MoE model, potentially highlighting its advantages.

Natural Language for Visual Reasoning (NLVR2). For the results of Table 1, the base/large-size models are fine-tuned for 10 epochs with batch size 128. The peak learning rate of the base-size models is set to 5e-5. The input image resolution is 384 × 384. For ablation experiments, we fine-tune the models for 10 epochs with batch size 128 and choose learning rates from {5e-5, 1e-4}. The input image resolution is 224 × 224. All the ablation results on NLVR2 are averaged over 3 runs.

COCO. We fine-tune the base/large-size model for 20 epochs with batch size 2048. The peak learning rate is 2e-5 and the input image resolution is 384 × 384.

Flickr30K. For the results of Table 1, the base/large-size models are fine-tuned for 40 epochs with a batch size of 2048 and a peak learning rate of 1e-5. We use the model fine-tuned on COCO as the initialization. The input image resolution is 384 × 384. For all ablation experiments, we fine-tune the models for 10 epochs with batch size 1024. The peak learning rate is set to 5e-5, and the input image resolution is 224 × 224.

ImageNet-1k. We fine-tune the base-size VL-MoE with V-MoE and V-FFN only for 15 epochs with batch size 2048. The peak learning rate is 3e-5 and the input image resolution is 384 × 384.

MNLI. We fine-tune the base-size VL-MoE with T-MoE and T-FFN only for 10 epochs with batch size 32. The peak learning rate is 3e-5.

A.3 Formulas of the Auxiliary Losses

Given a token x ∈ R^D, we denote by g(x) = softmax(W x) ∈ R^E the gating weights across the E experts, with W ∈ R^{E×D} being the routing parameters. When we deal with a batch of multiple tokens {x_i}_{i=1}^n, we use the notation X ∈ R^{n×D}.

Importance loss. We follow the definition from (Riquelme et al., 2021; Mustafa et al., 2022). The importance loss Ω_imp ensures that the gating weights are evenly distributed among the experts, maintaining a balanced profile. For any expert e ∈ {1, ..., E}, we have imp_e(X) = Σ_{x∈X} g(x)_e, and the loss Ω_imp is defined via the squared coefficient of variation of imp(X) = {imp_e(X)}_{e=1}^E:

Ω_imp(X) = (std(imp(X)) / mean(imp(X)))^2.
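The importance loss above translates directly into code; a minimal sketch, assuming `gates` holds the softmax router outputs for a batch of tokens:

```python
import torch

def importance_loss(gates):
    """Importance loss: squared coefficient of variation of the per-expert
    summed gating weights. `gates` has shape (n, E)."""
    imp = gates.sum(dim=0)                 # imp_e(X) = sum_x g(x)_e
    return (imp.std() / imp.mean()) ** 2
```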
To compute the selection probability, the expert e ∈ {1, ..., E} is considered selected if it remains among the top-k when only the noise is resampled:

p_e(x) = 1 − Φ((η_k − (Wx)_e) / σ),

with Φ the cumulative distribution function of a standard Gaussian. The load loss is then defined by

Ω_load(X) = (std(load(X)) / mean(load(X)))^2, where load(X) = {load_e(X)}_{e=1}^E and load_e(X) = Σ_{x∈X} p_e(x).

The notation "v-loss" used in Section 5 is the final loss employed in V-MoE (Riquelme et al., 2021): Ω_vloss(X) = 0.5 · Ω_imp(X) + 0.5 · Ω_load(X).

Z-loss. The z-loss Ω_zloss introduced in (Zoph et al., 2022) aims at controlling the maximum magnitude of the router activations A = {Wx_i}_{i=1}^n ∈ R^{n×E}, with entries a_{i,e} = (Wx_i)_e:

Ω_zloss(X) = (1/n) Σ_{i=1}^n (log Σ_{e=1}^E exp(a_{i,e}))^2.

Figure and table captions:
Figure 1: The encoding process of VL-MoE for various modality inputs; gray and colored blocks indicate non-activated and activated modules, respectively. (a) For image-only input, the encoding switches to V-MoE or V-FFN. (b) For text-only input, it switches to T-MoE or T-FFN. (c) For image-text pair input, it switches among V-MoE, T-MoE, and VL-FFN. (d) In the early layers, we scale the V-FFN and T-FFN with sparse Mixture-of-Experts into V-MoE and T-MoE, respectively; VL-MoE uses conditional computation to allocate tokens in a modality-specific fashion, with V/T-MoE converting multiple V/T-FFNs into experts among which image/text inputs are conditionally routed by the V/T-router network.
Figure 2: Effect of VL-MoE scaling on the masked language modeling (MLM), masked image modeling (MIM), and masked vision-language modeling (VLM) pre-training tasks across training FLOPs.
Figure 3: Token routing decisions on COCO: examples of vision-token routing decisions and a breakdown of language-token routing decisions at the V/T-MoE layer placed in the 6th encoder block, i.e., the middle of the network, for VL-MoE LARGE/16E.
Figure 4: Effect of the auxiliary loss on training stability.
Figure 5: Effect of the number of experts.
Figure 8: Comparison of Dense, VL-MoE, and LIMOE on the contrastive pre-training task across training steps.
Table 4: Efficiency results of base-size VL-MoE models with different optimizations.

Efficiency. In Table 4, we use one node with 16 V100 GPUs to benchmark the efficiency of VL-MoE under various optimizations. EP stands for the expert parallelism provided by the DeepSpeed library, and KN denotes the specialized kernel-fusion operations we implemented (expert dispatch as well as bias-GELU fusion).
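For readers who want to experiment, the three auxiliary losses can be written in a few lines of PyTorch. This is a minimal sketch that follows the formulas above; the tensor sizes, the seed, and the use of a single noise draw are illustrative assumptions, not VL-MoE's actual training code.

```python
import torch
from torch.distributions import Normal

def sq_cv(t):
    # Squared coefficient of variation: (std / mean)^2.
    return (t.std(unbiased=False) / t.mean()) ** 2

def importance_loss(gates):                  # gates: (n, E) softmax router outputs
    return sq_cv(gates.sum(dim=0))           # imp_e = sum_x g(x)_e

def load_loss(clean, noisy, k, sigma):
    eta_k = noisy.topk(k, dim=-1).values[:, -1:]        # k-th largest noisy logit
    p = 1.0 - Normal(0.0, sigma).cdf(eta_k - clean)     # P(expert stays in top-k)
    return sq_cv(p.sum(dim=0))                          # load_e = sum_x p_e(x)

def z_loss(logits):
    # Penalize large router activations: mean of (log sum exp)^2 per token.
    return torch.logsumexp(logits, dim=-1).pow(2).mean()

torch.manual_seed(0)
n, E, k = 256, 16, 1
x = torch.randn(n, 768)
w = torch.randn(768, E) / 768 ** 0.5
clean = x @ w
noisy = clean + torch.randn(n, E) / E                   # noise std sigma = 1/E
v_loss = 0.5 * importance_loss(torch.softmax(noisy, -1)) \
       + 0.5 * load_loss(clean, noisy, k, 1.0 / E)
print(f"v-loss {v_loss.item():.4f}, z-loss {z_loss(noisy).item():.4f}")
```

In practice such regularizers are added to the task loss with a small weight (e.g., 0.01 in V-MoE); the sketch omits that weighting.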
A unified information-theoretic model of EEG signatures of human language processing

We advance an information-theoretic model of human language processing in the brain, in which incoming linguistic input is processed at two levels: in terms of a heuristic interpretation and in terms of error correction. We propose that these two kinds of information processing have distinct electroencephalographic signatures, corresponding to the well-documented N400 and P600 components of language-related event-related potentials (ERPs). Formally, we show that the information content (surprisal) of a word in context can be decomposed into two quantities: (A) heuristic surprise, which signals the processing difficulty of a word given its inferred context, and corresponds to the N400 signal; and (B) the discrepancy signal, which reflects divergence between the true context and the inferred context, and corresponds to the P600 signal. Both of these quantities can be estimated using modern NLP techniques. We validate our theory by successfully simulating ERP patterns elicited by a variety of linguistic manipulations in previously reported experimental data from Ryskin et al. (2021). Our theory is in principle compatible with traditional cognitive theories that assume a 'good-enough' heuristic interpretation stage, but with a precise information-theoretic formulation.

Introduction

Human language comprehension is linked to (at least) two distinct and robust event-related potential (ERP) components detectable through electroencephalography: the N400 and the P600. The N400 is a negative-going waveform that peaks at around 400 ms after the onset of a linguistic signal, whereas the P600 is a positivity at around 600 ms. Since their discovery, a great deal of research has attempted to ascertain the functional interpretation of the N400 and P600 signals in order to shed light on the neural mechanisms of human language processing [e.g., 16, 9, 11, 13, 25, 26, 14, 15].

Recent psycholinguistic theories have proposed a heuristic interpretation stage of language comprehension, in which comprehenders form a plausible interpretation based on a subset of the information in the input signal [13, 15, 25]. In such theories, the N400 reflects the degree of semantic mismatch in the heuristic interpretation, and the P600 indexes the effort of resolving conflicts between the heuristic interpretation and the veridical signal. This idea has been formalized in a noisy-channel framework, where comprehenders rationally infer a probability distribution over the intended utterance given the received input, taking into account the fact that the input may contain errors ("noise"). In support of this idea, prior work has established that there is a reduced N400 and a larger P600 when a plausible corrected sentence can be recovered from an original sentence containing a semantic error [8, 23]. However, none of the proposed theories can currently explain the full range of empirical ERP patterns (see [2]), and existing models are not integrated with more general computational neuroscientific models.

We propose an information-theoretic, computational-level model of the N400 and P600 ERP components in language processing, formalizing the noisy-channel intuition described above and integrating multiple strands of psycholinguistic research into a quantitative model that explains previously reported results while making successful novel predictions about linguistic ERPs.
Model

Our model builds on Surprisal Theory, an empirically successful theory of behavioral signatures of language comprehension such as reading time [10, 17, 3, 24, 27], which is in line with recent computational neuroscientific proposals to quantify cognitive effort information-theoretically [21, 28, 7, 12, 4]. Surprisal Theory holds that the magnitude of processing effort for a word x_t given a context of previous words x_{<t} should be proportional to the information content, or surprisal, S_t of the word given its context:

S_t = -log p(x_t | x_{<t}).    (1)

Our model maintains the idea that the total amount of processing effort is given by surprisal, but we partition the surprisal into two parts, corresponding to different forms of information processing and to the two distinct ERP signals.

Figure 1: The comprehender's generative model. T is the speaker's intended structure. At time t, the structure T contains the words W_{<t} (the past context) and W_t (the current word). The comprehender observes a noisy form of the past context, x_{<t}, and of the current word, x_t.

Surprisal decomposition

Consider a comprehender perceiving a sentence at time t, currently observing word x_t in the context of (a memory trace of) previously observed words x_{<t}. We formalize the idea of a 'heuristic interpretation' in the generative model shown in Figure 1. Here the comprehender is trying to infer the value of a variable T representing the speaker's intended structure, for example a complete parse tree. Crucially, the link between the intended structure T and the input words x is not deterministic: speakers may make errors in production, or environmental noise may disrupt the signal, and comprehenders should be able to correct for these factors. We formalize this idea by introducing random variables for heuristic words W_{<t} and W_t, corresponding to the values of the past words and the current word within the speaker's intended structure T. The heuristic words give rise to the input words through a noise model, a distribution p_N(x | W) representing all kinds of errors that might occur during language production and transmission.

We propose that, with each incoming word, the comprehender updates her representations of the heuristic words W and the structure T. Within the generative model of Figure 1, the surprisal S_t can be partitioned into two parts, corresponding to (A) the new information content of the heuristic words themselves, termed heuristic surprise, and (B) the update to beliefs about the heuristic words given the input words, termed the discrepancy signal:

S_t = A_t + B_t,  with  A_t = <-log p(W_t | W_{<t})>,    (2)

where <.> indicates an average with respect to the probability distribution p(W_{<=t} | x_{<=t}), and B_t = S_t - A_t collects the terms quantifying the belief update about the heuristic words induced by the observed input. The heuristic surprise is an upper bound on the information provided by the heuristic words about the structure T. We propose that the N400 magnitude is proportional to the heuristic surprise A and the P600 magnitude is proportional to the discrepancy signal B, for distinct positive scalars α and β:

N400_t = α A_t,   P600_t = β B_t.    (3)

Noise model

The model quantities A and B are both averages with respect to the comprehender's probability distribution over heuristic words given input words, p(W | x). This distribution can be written using Bayes' rule as

p(W | x) = p_N(x | W) p(W) / Σ_{W'} p_N(x | W') p(W').    (4)

Fully specifying the model therefore requires us to specify (1) a noise model p_N representing likely errors in production and/or transmission, and (2) a prior probability distribution p(W), which reflects the probability that a speaker would want to produce a sequence of words W.
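A toy numerical example may help make the decomposition concrete. Everything below, the two-word vocabulary, the prior, and the exponential edit-distance noise model, is invented purely for illustration:

```python
import numpy as np

vocab = ["anecdote", "antidote"]

# p(W_t | W_<t): the comprehender's expectation over intended words in context.
p_w = np.array([0.9, 0.1])

# Noise model p_N(x | W) ~ exp(-lam * edit_distance): an assumed simple form.
lam = 1.5
dist = np.array([[0.0, 2.0],
                 [2.0, 0.0]])       # toy edit distances between the two words
p_noise = np.exp(-lam * dist)
p_noise /= p_noise.sum(axis=1, keepdims=True)   # rows: p_N(x = j | W = i)

x_idx = vocab.index("antidote")     # observed (possibly corrupted) input word

# True surprisal S_t = -log p(x_t | x_<t) = -log sum_W p(W) p_N(x | W).
p_x = float(p_w @ p_noise[:, x_idx])
S = -np.log(p_x)

# Posterior over heuristic words: p(W | x) proportional to p(W) p_N(x | W) (Eq. 4).
post = p_w * p_noise[:, x_idx]
post /= post.sum()

# Heuristic surprise A_t = <-log p(W_t | W_<t)> under the posterior (Eq. 2).
A = float(-(post * np.log(p_w)).sum())
B = S - A                           # discrepancy signal as the exact remainder
print(f"S = {S:.3f}, A (heuristic, ~N400) = {A:.3f}, B (discrepancy, ~P600) = {B:.3f}")
```

On this toy input, the posterior shifts mass toward the high-prior word, so part of the total surprisal is attributed to the belief-update term B rather than to the heuristic surprise A.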
Implementation

To generate heuristic words for experimental stimuli, we apply Eq. 4 independently to individual words. Here p(W) for a single word is calculated using the Masked Language Model RoBERTa [18], and the noise likelihood p(x | W) for a single word is taken to be a decreasing function of the Levenshtein edit distance d(W, x) between the input word x and the heuristic word W, controlled by a constant free parameter λ: the larger λ, the more heavily edits are penalized. In order to generate candidate corrections, we replace the target word in the input sentence with a special token <mask> and use RoBERTa to generate the probability distribution over fillers for the masked token. We select the top 100 predictions as our candidate set for W. After that, we calculate the posterior probability by multiplying the RoBERTa probability by the edit-distance likelihood, following Eq. 4. We calculate the conditional probability of the current word W_t given the context W_{<t} with the autoregressive transformer GPT-2 [22]. The conditional probability of the veridical target x_t given the veridical context x_{<t} is also calculated with GPT-2.
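A rough sketch of this pipeline with the Hugging Face transformers library might look as follows. The model choices ("roberta-base", "gpt2"), the exponential form of the edit-distance likelihood, and all helper names are assumptions for illustration, not the authors' released code:

```python
import math
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

fill = pipeline("fill-mask", model="roberta-base", top_k=100)
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def gpt2_logprob(context: str, word: str) -> float:
    # log p(word | context), summed over the word's subword tokens.
    ctx = tok(context, return_tensors="pt").input_ids
    wrd = tok(" " + word, return_tensors="pt").input_ids
    ids = torch.cat([ctx, wrd], dim=1)
    with torch.no_grad():
        logp = lm(ids).logits.log_softmax(-1)
    # Logits at position p predict the token at position p + 1.
    return sum(logp[0, ctx.shape[1] + k - 1, wrd[0, k]].item()
               for k in range(wrd.shape[1]))

def heuristic_posterior(prefix: str, target: str, suffix: str, lam: float = 1.0):
    # p(W | x) proportional to p_RoBERTa(W) * exp(-lam * d(W, x)), top-100 fillers.
    cands = fill(f"{prefix} <mask>{suffix}")
    logits = {c["token_str"].strip():
              math.log(c["score"]) - lam * levenshtein(c["token_str"].strip(), target)
              for c in cands}
    m = max(logits.values())
    unnorm = {w: math.exp(s - m) for w, s in logits.items()}
    z = sum(unnorm.values())
    return {w: u / z for w, u in unnorm.items()}

ctx = "The storyteller could turn any incident into an amusing"
post = heuristic_posterior(ctx, "hearse", ".")
A = -sum(p * gpt2_logprob(ctx, w) for w, p in post.items())
print("heuristic surprise (nats):", A)
```

Depending on λ, the posterior either keeps the observed word or shifts mass toward plausible corrections, which is exactly the trade-off the hyperparameter search in Appendix A.2 explores.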
Dataset

We validate our model using ERP data from [23], who report experiments designed to test how well noisy-channel error correction can explain linguistic ERP patterns. The experiments have four conditions (see Table 1): one with a semantic violation (Sem), one with a syntactic violation (Synt), one semantic critical condition (SemCrit) with a semantic violation that could be attributed to noise, and a control sentence without any error (Control). The ERP effects in the three experimental conditions are all calculated as differences from the ERP signal in the control condition. In the N400 time window, there is a significant N400 effect in the Sem and SemCrit conditions, where the N400 effect in the SemCrit condition is reduced (see Fig. 2a). In the P600 time window, there is a significant P600 effect in Synt and a smaller but significant P600 effect in the SemCrit condition (see Fig. 2c).

Table 1: List of conditions, sample sentences, and ERP effects in the dataset.
Condition | Sample sentence | ERP effects
Sem     | The storyteller could turn any incident into an amusing hearse.    | N400
Synt    | The storyteller could turn any incident into an amusing anecdotes. | P600
SemCrit | The storyteller could turn any incident into an amusing antidote.  | N400, P600
Control | The storyteller could turn any incident into an amusing anecdote.  | -

Fig. 2 shows the simulated and empirical N400 and P600 effect sizes across conditions in the dataset, with λ = 400 (see Appendix A.2 for the results of a hyperparameter search on λ). The simulated effect sizes from the RoBERTa-based model implementation are similar to the effect sizes in real human ERP experiments. One potential issue in the use of language models for word probabilities is that, although large language models are sensitive to syntactic violations [5, 6], the surprisal penalty associated with syntactic violations is smaller than for semantic violations (see Appendix A.1). This means that, in order to get corrections in the Synt condition, it is necessary to set λ to a very high value, 400. Model results with a smaller value of λ = 300 are shown in Appendix A.2; these show a better fit to the SemCrit condition for the N400.

We statistically confirmed the relationship between the empirical ERP amplitudes (N400 and P600) and our information-theoretic measures (heuristic surprise A and discrepancy signal B) in maximal linear mixed-effects models including by-subject and by-item intercepts and slopes [1]. We find a significant main effect of heuristic surprise on N400 amplitude (t = -6.57, p < .001), and a significant main effect of the discrepancy signal on P600 amplitude (t = 3.64, p < .001). In comparison, we find no significant effect of true surprisal on the P600 (t = 0.20, p = 0.84), suggesting that our proposed decomposition of surprisal provides a better fit to the overall ERP components than true surprisal alone [20, 19].

Conclusion

We presented a neuro-computational model of the N400 and P600 ERP components in language processing, based on a generalized theory of surprisal. We argue that the surprisal of a word can be decomposed into two parts, a heuristic surprise and a discrepancy signal, which correspond to the N400 and P600 respectively. The two measures have a clear cognitive interpretation: the heuristic surprise signals the processing difficulty at the target word position given the heuristic context, and the discrepancy signal represents the effort of updating the discourse from the inferred to the true structure. We approximate the distribution over heuristic interpretations via a noisy-channel process and implement it with large-scale language models. The theory is validated against experimental results.

Our model provides an information-theoretic, quantitative theory of language-related neural signals. By linking ERP components to Surprisal Theory, our model creates a precise formal link between theories of ERPs and other behavioral measures of language processing. The work calls for co-registration of brain and behavioral experiments to better understand the underlying cognitive processes. Our model highlights the role of probabilistic inference in language processing and provides a computational implementation of it. The noisy-channel model of heuristic interpretations abstracts away from how different linguistic cues are weighted and combined, by evaluating the heuristic interpretation based on a balance between prior belief and its divergence from new evidence. Furthermore, we leverage recent computational models from the field of natural language processing to implement our theory, which allows us to take into account the statistical variation in real experimental inputs. While we provide an implementation using pre-trained language models and edit distance, we acknowledge that this model is at the computational level, and further work could address the precise algorithmic nature of the heuristic interpretation generation process.

A Appendix

A.1 Comparison between human- and LM-generated word probabilities

Table 3 shows the cloze probabilities obtained from a human sentence-completion task and from GPT-2 generations. All three experimental conditions (Sem, SemCrit, and Synt) have close-to-zero human cloze probabilities; however, GPT-2 assigns a lower surprisal to the Synt condition than to the two semantically anomalous conditions. The discrepancy between human and language-model probabilities indicates that language models might underestimate the surprisal of syntactically anomalous sentences.
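The LM side of Table 3 could be probed with a few lines, reusing the hypothetical gpt2_logprob helper sketched earlier; the context string matches the sample stimuli in Table 1:

```python
import math

# Probing the LM column of Table 3: the probability GPT-2 assigns to each
# condition's final word, via the hypothetical gpt2_logprob helper above.
context = "The storyteller could turn any incident into an amusing"
for cond, word in [("Sem", "hearse"), ("Synt", "anecdotes"),
                   ("SemCrit", "antidote"), ("Control", "anecdote")]:
    print(f"{cond:8s} p_GPT2({word}) = {math.exp(gpt2_logprob(context, word)):.2e}")
```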
A.2 Hyper-parameter Tuning

We explored the effect of λ with a grid search from 100 to 500, with a step size of 100, plus two extreme settings (λ = 0 and λ = 1000). The simulated N400 and P600 across our selection of λ, together with the true surprisal of the stimuli, are summarized in Table 4. When λ = 0, the heuristic surprise is simply the surprisal of the most predictable word given the previous context, regardless of the true target received. As λ increases, error correction becomes more difficult, with increased heuristic surprise and less error correction. Importantly, sentences in different conditions have different sensitivities to λ: sentences in Synt and SemCrit have an easy fix and are therefore more likely to be corrected even at an increasingly large λ. After visual inspection, we chose λ = 400.

Fig. 3 shows a comparison between the true ERP effect sizes in the human experiment and the simulated effect sizes when λ = 300. The model prediction aligns well with the real N400 effect sizes, but it underestimates the size of the P600 effect. This is because GPT-2 underestimates the total surprisal of a word when it has a syntactic violation.
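The grid search itself amounts to a simple loop. This sketch reuses the hypothetical heuristic_posterior helper from the implementation sketch; note that with the assumed exponential noise model the useful λ range need not match the paper's 0-1000 scale:

```python
# Grid search over lambda, reusing the hypothetical heuristic_posterior helper.
stimuli = [("The storyteller could turn any incident into an amusing", "hearse", "."),
           ("The storyteller could turn any incident into an amusing", "anecdotes", "."),
           ("The storyteller could turn any incident into an amusing", "antidote", ".")]

for lam in [0.0, 0.5, 1.0, 1.5, 2.0]:        # scale depends on the assumed noise model
    corrected = 0
    for prefix, target, suffix in stimuli:
        post = heuristic_posterior(prefix, target, suffix, lam=lam)
        best = max(post, key=post.get)
        corrected += (best != target)        # posterior mode differs from the input
    print(f"lambda={lam}: corrected {corrected}/{len(stimuli)} targets")
```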
The Stock Price Prediction Performance of Hidden Markov Models in the Luxury Category

The stock market is the place where issued stocks are transferred, traded, and circulated, encompassing both the exchange market and the over-the-counter market. Because it builds on the issuance market, it is also called the secondary market. The structure and trading activities of the stock market are more complex than those of the issuance market (the primary market), and its role and influence are also greater. Precisely because of its complex systems and processes, achieving accurate predictions is very difficult and challenging. The Hidden Markov Model is not a commonly used model for predicting the next day's stock price. Hence, I focus on the Hidden Markov Model applied to four luxury giants, to test whether the HMM is suitable for that industry and which company it fits best.

1. Introduction

Notably, the head of one of these luxury giants was recently crowned the world's richest man. In other words, we can hardly neglect the contribution these luxury companies make to the world stock market, which makes them well worth studying.

2. Literature Review

There are hundreds of studies applying the Hidden Markov Model, and in recent years the relevant reviews have shown the accuracy and uniqueness of the HMM. When predicting protein-coding genes, the HMM has provided researchers with great accuracy and foresight, breaking the stereotype [4]. Apart from modelling the internal workings of human beings, the HMM is also a good tool for analyzing outward human production, including speech recognition [1] and face identification [3]. In the post-pandemic period, COVID-19 detection is also a highlighted issue, and an HMM-based cough recognition system has emerged to assess COVID-19 [6]. When it comes to finance, the Hidden Markov Model can also contribute greatly, and several studies have addressed it. For instance, researchers have used both the Hidden Markov Model (HMM) and the Support Vector Regression model (SVR) to forecast stocks [5,7]. The improved Hidden Markov Model and deep learning have been used for financial information extraction [8]. Xing Gu at the University of Ontario has shown that an HMM-driven approach can serve as an early-warning alert system for financial-instability detection [2].

Hidden Markov Model

The Hidden Markov Model (HMM) is a statistical model used to describe a Markov process with hidden, unknown parameters. The difficulty is to determine the implied parameters of the process from the observable parameters. These parameters are then used for further analysis, such as pattern recognition.

Hidden Markov Model for stock trend judgment

The Hidden Markov Model is often reckoned one of the most powerful tools for predicting non-stationary systems, including stock markets. At the same time, the stock market exhibits data and prices continuously, an essential feature that fits the HMM. Suppose O_t is a vector of four numbers, the daily close, open, high, and low prices, and S_t is the hidden state on day t.

Time Prediction of Stock Prices

The key to predicting the coming day's stock price is to calculate the log-likelihood of the latest K observations and then move a window of the same size backwards through the past data, computing the log-likelihood of every subsequence of the same size for comparison.
Then, we locate the past subsequence whose log-likelihood is closest to that of the latest K observations, and use the day that follows it to predict the next day's price. We calculate the change in price from the last day of that subsequence to its following day, and add this change to today's price to obtain our forecast for the next day. Subsequently, after we obtain the real observations, we incorporate them into our data set and re-estimate our model parameters to ensure that the model does not diverge. In short, we fix the size of the subsequence, locate another subsequence in the past data that shows a similar pattern, and map the behavior of the identified subsequence onto the subsequence used for prediction.

Selecting the number of hidden states for an HMM is a critical task. In this section, we use two commonly used criteria, AIC and BIC, to evaluate HMM performance with different numbers of states. These criteria apply to HMMs because the model training algorithm, namely the Baum-Welch algorithm, uses the EM method to maximize the log-likelihood of the model. We limit the number of states from two to six to keep the model simple and feasible for stock prediction. The criteria are calculated using the following formulas:

AIC = -2 ln(L) + 2k,
BIC = -2 ln(L) + k ln(M),

where L is the maximized likelihood of the model, M is the number of observations, and k = N^2 + 2N - 1 is the number of free parameters for a model with N hidden states. In this paper, I use BIC as the measure of model performance to select, among the four companies, the one best fitted by the HMM.

4. Experimental Analysis

The Mean Absolute Percentage Error (MAPE) can be defined as

MAPE = (100%/n) Σ_{i=1}^{n} |y_i - ŷ_i| / y_i,

where y_i is the actual price, ŷ_i the predicted price, and n the number of predictions. In this project, the main objective is to determine the efficiency of HMMs in predicting stock prices. We use the open-source Python library hmmlearn to train the model and compute the likelihood of observations. I collected 2,520 daily observations (open, close, high, and low) from August 23, 2013 to June 27, 2023 for each of Dior, Europe's high-end eyewear giant EssilorLuxottica (ESLX), the French luxury giant Kering (PRTP), and the Swiss luxury giant Richemont (CFR). We forecast prices for the last 100 days starting at day 100, then refit the model with the true observations to forecast the price at day 99, and so on. I plotted the predicted and actual prices in the following line graphs for direct comparison, and then optimized the model by choosing the number of states with the smallest BIC value.

The following four figures (Figures 1 to 4) each contain four line charts showing the close, open, high, and low prices of DIOR, ESLX, CFR, and PRTP; in every chart the predicted and actual prices are drawn as a black solid line and a red dotted line, respectively. In these graphs, readers can distinguish which of the four companies are more suitable for the Hidden Markov Model by observing the difference between the two lines in each chart. However, to assess accuracy more rigorously, I then apply the MAPE. From Figures 1 to 4, we can see that the red dotted (actual) line closely tracks the black solid (predicted) line, which indicates that the HMM performs well in predicting both the levels and the trends of the stock prices. The MAPE values are reported in Table 1 below; from the definition of the MAPE, the higher its value, the larger the error of the HMM.

Result

Implementing the HMM on the trends revealed by the corresponding true figures, I obtain predictions for the open, close, high, and low prices and conclude that the final results give similar MAPE values across the series.
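A minimal version of this pipeline can be assembled with hmmlearn. The CSV file and its column names, the window length K = 50, and the single up-front fit are simplifying assumptions; the paper refits the model after each observed day, which here would only add a fit call inside the loop:

```python
import numpy as np
import pandas as pd
from hmmlearn.hmm import GaussianHMM

df = pd.read_csv("dior_ohlc.csv")            # assumed columns: open, high, low, close
X = df[["open", "high", "low", "close"]].to_numpy()

def bic(model, X):
    k = model.n_components ** 2 + 2 * model.n_components - 1   # k as in the paper
    return -2 * model.score(X) + k * np.log(len(X))

# Select the number of states (2..6) by the smallest BIC.
best = min((GaussianHMM(n_components=n, n_iter=100, random_state=0).fit(X)
            for n in range(2, 7)), key=lambda m: bic(m, X))

K = 50                                        # window length (assumed)

def predict_next_close(model, X, t):
    # Find the past K-day window whose log-likelihood is closest to the latest
    # window's, then add that window's next-day close change to today's close.
    target = model.score(X[t - K:t])
    _, s = min((abs(model.score(X[s - K:s]) - target), s) for s in range(K, t - 1))
    return X[t - 1, 3] + (X[s, 3] - X[s - 1, 3])

preds, actual = [], []
for t in range(len(X) - 100, len(X)):         # forecast the last 100 days (slow but simple)
    preds.append(predict_next_close(best, X, t))
    actual.append(X[t, 3])
preds, actual = np.array(preds), np.array(actual)
mape = 100 * np.mean(np.abs((actual - preds) / actual))
print(f"states = {best.n_components}, close-price MAPE = {mape:.2f}%")
```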
The HMM is more accurate at anticipating the direction of the trend than the exact level of fluctuating stock prices. Although the HMM's predictions track the stock's volatility, it struggles to pin down the exact prices at turning points, and a certain level of error remains in the fluctuating regions. In general, however, the HMM gives investors the right trend for future stock prices. In terms of MAPE values, PRTP generally performs best of the four, with the lowest average across the four MAPE measures, followed by DIOR. Nevertheless, in practice investors usually care more about the errors in the high and low prices; on this measure, both DIOR and PRTP still perform well. All in all, DIOR and PRTP are well fitted by the HMM, while ESLX and CFR require further inspection.

Conclusion

Although model choices, including the number of hidden states in the HMM, can strongly affect the results, finding the optimal number of states is not especially challenging when the BIC method is used to select the model. Volatility is clearly captured by the HMM. Across the four companies, the HMM proves a good method for predicting stock prices and for selecting which one to invest in. Nevertheless, for traders deciding which of the four companies is best suited to the Hidden Markov Model, the one whose price series shows fewer turning points should be chosen. Based on the conclusions above, the graphs, and the MAPE table, PRTP is the company best suited to the HMM.
The Matthew Effect in Running: An Analysis of Elite Endurance Athletes Over 23 Years

The purpose of this study was to investigate the frequency of countries represented in the TOP20 long-distance elite runners ranking during 1997-2020, taking into account the countries' Human Development Index (HDI), and to verify whether the Matthew effect can be observed in countries' representativeness in the ranking over the years. The sample comprised 1852 professional runners, ranked in the Senior World TOP20 half-marathon (403 female and 487 male) and marathon (480 female and 482 male) races between 1997 and 2020. Information about the countries' HDI was included and categorized as "low HDI", "medium HDI", "high HDI", and "very-high HDI". Athletes were categorized according to their ranking positions (1st-3rd; 4th-10th; > 10th), and the number of athletes per country/year was summed and categorized as "total number of athletes 1997-2000", "total number of athletes 2001-2010", and "total number of athletes 2011-2020". The chi-square test and Spearman correlation were used to verify potential associations and relationships between variables. Most of the athletes were from countries with medium HDI, followed by low HDI and very-high HDI. The chi-square test results showed significant differences among females (χ² = 15.52; P = 0.017) and males (χ² = 9.03; P = 0.014) in the half-marathon and marathon, respectively. No significant association was verified between HDI and the total number of athletes, but an association was found for the number of athletes across the years (1997-2000 to 2001-2010: r = 0.60; P < 0.001; 2001-2010 to 2011-2020: r = 0.29; P < 0.001). The Matthew effect was observed, but the results should not be generalized.

Introduction

The Human Development Index (HDI) is an international index used to provide additional information beyond the economic information provided by the Gross Domestic Product [37]. The HDI is defined as a general measure of human development in a given population, determined by socioeconomic status, life expectancy (i.e., health), per capita income (i.e., income), and access to formal education (i.e., education) [37]. The HDI, developed by the United Nations Development Programme (UNDP), has been cited as one of the most powerful variables that, along with cultural factors, play a relevant role in promoting a favourable environment for sporting development [10]. From an ecological perspective, the interplay between subject and environment has been highlighted as an essential key to human development across several domains (e.g., cognitive, behavioural, social, motor) [8,35]. Since athlete development is a non-linear process [19], different variables act together in its expression, such as personal characteristics, motivational aspects, social/economic facilities, and cultural factors, so support at different levels is required [19]. This means that individual factors (i.e., anthropometric, physiological, technical-tactical, psychological) [20,39], economic characteristics and training structure [13], and context-specific characteristics are all relevant to sports participation and performance [3,11,28,32]. Studying Brazilian swimming athletes, Gomes-Sentone et al.
[16] reported that HDI, income, and education level were important social indicators for sports performance, and similar results were reported by Costa et al. [10] among soccer players. On the other hand, Santos et al. [30], studying junior, elite professional, and masters athletes present in the Athletics World Rankings (i.e., the 100 m and 10,000 m running distances), found that countries with a very high HDI were the most represented in the 100 m ranking, while countries with a moderate/low HDI were the most represented in the 10,000 m ranking. Previous studies have also highlighted environmental characteristics as important predictors of sports participation [5]. In summary, the published studies highlight that environmental characteristics are relevant to both the development and the maintenance of athletic performance [11,18,25]. Thus, considering the context-specific differences between countries, it is possible to postulate that these differences are associated with between-country performance differences, which can lead one country to be more competitive than the others, achieving higher international performance and recognition [6,33]. Furthermore, this scenario can lead these countries to receive more financial sports investment from their governments and/or stakeholders/sponsors, allowing them to increase their visibility and success at the international level, owing to better conditions for sports development and support for elite athletes. This may illustrate the concept of the "Matthew effect" [1], which highlights that initial advantage tends to beget further advantage, and disadvantage further disadvantage. Over time, these differences tend to create widening gaps between those who have more and those who have less [1]. The "Matthew effect" has been studied in a wide variety of contexts and institutional settings, such as sociology, education, biology, and economics [14,22,27]. In the context of sport, it is possible to observe this phenomenon in the Brazilian setting, where differences between regions tend to favour those with more favourable socioeconomic indicators [10,16,29]. These regions, which receive higher amounts of economic investment [26], tend to invest more in sports and talent development programs, in a higher number of sports clubs, and in competition events, which contributes to these regions usually concentrating the highest number of elite athletes at the national level [34]. Furthermore, athletes identified as talented tend to move to these regions or are recruited by development programs and sports clubs from these regions, with the purpose of succeeding in the sport [2]. For example, in endurance running, it was reported that socioeconomic variables (i.e., sports investment and gross domestic product) and competition venue were associated with countries' likelihood of having athletes in the top 10 rankings on the European continent [33]. Furthermore, in the Brazilian context, a relationship of population size and the states' Gross Domestic Product with the number of athletes in the national ranking was observed [33]. Moreover, at the international level, the success of African endurance runners has been extensively studied [17], but information regarding the relationship between countries' HDI and success in this modality is still not conclusive.
Thus, the purpose of this study was to investigate the frequency of countries represented in the TOP20 long-distance elite runner rankings from 1997 to 2020 whilst taking into account countries' HDI. We also aimed to verify whether the "Matthew effect" can be observed in countries' representativeness in the ranking over 23 years, i.e., whether having a high number of athletes in the ranking in the first years is associated with a high number of athletes in the following years. We hypothesized that the number of athletes in the first decade would be associated with the number of athletes in the last decade, and that countries' HDI would be negatively associated with the number of athletes each country has in the ranking of the modality.

Study design and data source

The study used a cross-sectional design. All data were collected in November 2020 from the official results section of the Tilastopaja website (www.tilastopaja.eu/). All available results for the world's best half-marathon and marathon marks in official outdoor events between 1997 and 2020 were compiled for both sexes. The available information included the athlete's name, date of birth, sex, race time, citizenship, date of the competition, and venue. Athletes' age was computed from the date of birth and the date of the competition.

Determination of the "Matthew effect"

To identify the existence of the "Matthew effect", information about the number of athletes ranked in the TOP20 per country was considered. Previous studies have shown a decrease in participation and in presence in the ranking for European athletes, in comparison to African athletes, in recent years [24]. Taking into account the temporal interval available for the present study and the number of athletes by country, we decided to cluster the data into three time intervals ("1997-2000"; "2001-2010"; "2011-2020").

Statistical analysis

Descriptive statistics are presented as mean (standard deviation) and frequencies (%). Normality was tested by the Kolmogorov-Smirnov test. The chi-square test, followed by the Tukey test for multiple pairwise comparisons, was performed in WinPepi to verify the association between athletes' ranking position (1st-3rd; 4th-10th; > 10th) and countries' HDI classification (low HDI; medium HDI; high HDI; very-high HDI), considering race distance for both sexes. Spearman correlation (r) was used to estimate the relationship between countries' HDI and the total number of athletes per country across the year intervals, and also considering the full range of years. The magnitude of the correlation was interpreted using the scale proposed by Batterham and Hopkins [4]: r < 0.1, trivial; 0.1 <= r < 0.3, small; 0.3 <= r < 0.5, moderate; 0.5 <= r < 0.7, strong; 0.7 <= r < 0.9, very strong; 0.9 <= r < 1.0, almost perfect; and r = 1.0, perfect. Bootstrap results were based on 1000 bootstrap samples. The Statistical Package for the Social Sciences (SPSS), version 26, was used, adopting a 95% confidence level.

Results

The athletes' mean age was 27.5 (4.5) and 25.6 (3.6) years for female and male half-marathoners, and 28.4 (4.1) and 28.0 (4.0) years for female and male marathoners, respectively. The sample distribution according to HDI was: 50.1% (n = 927) from medium-HDI countries, 24.8% (n = 459) from low-HDI countries, 22.2% (n = 412) from very-high-HDI countries, and 2.9% (n = 54) from high-HDI countries. Figure 1 presents the athletes' distribution per race distance, according to countries' HDI.
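Outside SPSS, the Spearman-plus-bootstrap step described under "Statistical analysis" could be reproduced with a few lines of SciPy; the CSV layout and column names below are assumptions about the data format:

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("top20_by_country.csv")   # assumed columns: n_1997_2000, n_2001_2010, n_2011_2020

def boot_ci(x, y, n_boot=1000, seed=0):
    # Percentile bootstrap CI for Spearman's r, mirroring the 1000-sample setup.
    rng = np.random.default_rng(seed)
    idx = np.arange(len(x))
    rs = [spearmanr(x[s], y[s])[0]
          for s in (rng.choice(idx, size=len(idx), replace=True) for _ in range(n_boot))]
    return np.percentile(rs, [2.5, 97.5])

x = df["n_1997_2000"].to_numpy()
y = df["n_2001_2010"].to_numpy()
r, p = spearmanr(x, y)
lo, hi = boot_ci(x, y)
print(f"Spearman r = {r:.2f} (P = {p:.3g}), 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```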
The majority of athletes were from countries with medium HDI, except for female marathoners, most of whom were from very-high-HDI (36%) and low-HDI (30.4%) countries. Table 1 presents the chi-square results. For the "1st-3rd" category, in both distances and for both sexes, the highest frequency was observed for countries classified as medium HDI, and this frequency was significantly higher than for the other country classifications. In general, countries with medium HDI were more highly represented in the ranking, except for the "4th-10th" and "> 10th" groups among female marathoners, where the highest representativeness was observed for countries with a very high HDI. Table 2 and Fig. 2 show the Spearman correlation results for the association between countries' HDI and the total number of athletes in the ranking across the year intervals. Non-significant associations were observed between HDI and the total number of athletes, regardless of the year interval. A positive, moderate, and significant association was found between the numbers of athletes in consecutive time intervals.

Discussion

The purpose of this study was to investigate the frequency of countries represented in the TOP20 long-distance elite runners ranking during 1997-2020, taking into account countries' HDI, and to verify whether the "Matthew effect" can be observed in countries' representativeness in the ranking over the years. The main findings reveal that (i) most of the endurance athletes were from countries with medium and/or low HDI in the last 20 years; (ii) most female marathoners were from countries with a very high HDI, followed by a low HDI; and (iii) no correlation between HDI and the number of athletes was found, but a positive and significant association was verified for the number of athletes across different years. Although most of the female athletes came from Kenya and Ethiopia, countries such as Japan, Romania, China, Russia, Germany, Great Britain, and Italy together represent approximately 28% of the ranking, which can explain these results.

There seems to be a consensus in the available literature regarding the role of economic and social factors in sports participation and high-level sports performance [7,25]. At the international level, a previous study highlighted the role of states in Olympic success, 50% of which can be associated with countries' HDI, population size, and political regime [6]. Similar results were observed by Thuany et al. [33], who, studying Brazilian elite endurance athletes, reported that population size and GDP were related to the states' representativeness (determined by the number of athletes) in the national ranking of the modality. At the European level, an economic proxy (i.e., sports investment) and the place of competition (i.e., hosting running events) seem to increase the chances of a runner being ranked among the 10 best athletes in endurance running [33]. A previous study conducted by Santos et al. [30] identified a positive correlation between countries' HDI and the number of athletes in the IAAF (World Athletics) ranking from 2006 to 2016; however, in the present study, no significant association between HDI and the number of athletes in the ranking was observed. The disagreement between these results can be related to differences in sample characteristics (i.e., sex and competitive level) and in the years considered (2006-2016 vs. 1997-2020 in the present study).
In addition, the results of the present study can be related to the fact that most of the athletes in the TOP20 ranking during the last 20 years are from African countries, especially Kenya (47% over the whole temporal range, and 50.8% over the last 10 years) and Ethiopia (22.4% over the whole temporal range, and 38.2% over the last 10 years). Neither country has a large population size, nor even a high HDI, since they are ranked in the 143rd and 173rd positions, respectively (http://hdr.undp.org/). Available data indicate that in 2015 about 2.98% of the Kenyan adult population was unemployed and about 37.1% of the population lived below the poverty line (https://data.worldbank.org/country/kenya). However, a different economic scenario was described by Onywera et al. [23], whose results showed that about 40% of the Kenyan population was unemployed and at least 50% lived below the poverty line. These socioeconomic data do not reflect the country's representativeness at the international level in endurance running, since most of the best athletes come from countries classified as having a low HDI, such as Kenya and Ethiopia [21]. Given the significant poverty experienced by Kenyans, sports participation could be motivated by the possibility of economic empowerment [23]. Another relevant fact is that running is a salient aspect of Kenyan life, being part of the country's sporting culture [15]. These results reinforce the idea that sports performance is "country-specific" [12]: although Kenya is not a wealthy country, it is still one of the nations with the best endurance-running athletes. The hegemony of a country in a given sport is not a recent phenomenon (e.g., sprinting: Jamaica; soccer: Brazil; basketball: USA; hockey: Canada), and a country's success is associated with cultural and environmental characteristics (e.g., athlete development programs, number of competitions, number of clubs, sports investment), or even with athletes' prospects of social ascension through sport [23]. The "Matthew effect" was indirectly tested, and the study hypothesis was supported, showing a direct relationship between the numbers of athletes across the year intervals. The results demonstrated that a high number of athletes in the ranking in the first decade was positively associated with the number of athletes in the second and third decades. Since 1968 (the Mexico City Olympic Games), Kenya and Ethiopia have dominated the long-distance running events in track and field [21,38]. The hegemony of African athletes among the best runners worldwide has been associated with a plethora of factors, especially genetic characteristics, environmental factors (i.e., altitude), and morphological and physiological indicators [9,23,31], in addition to motivational characteristics. Since participation in sport can potentially lead to better living conditions for athletes (in part due to possible economic empowerment and better access to facilities and care), this can contribute to a higher number of youths becoming interested in track and field as a potential career in these countries [15].
This increases the number of potential athletes able to achieve elite status and, as a consequence, keeps these countries in the highest positions in the ranking of the modality: the higher the number of athletes internationally ranked, the more the sport is practised in the country, and the higher the chances that the country keeps its position in the ranking. It is interesting to note that, in this case, the "Matthew effect" is observed not because of better economic conditions in a country, but because having well-ranked athletes can be a positive stimulus for youths to take part in the modality, maintaining and/or increasing the country's presence in the ranking.

There were some limitations to the current study. First, we considered only the TOP20 athletes worldwide, which means that differences could emerge if other athlete categories were considered, or if a different ranking classification, distance (i.e., middle-distance running), or temporal range (i.e., before 1997) were used. Second, to identify the "Matthew effect", we considered only the last 20 years, and it could be of relevance to investigate a longer period. Third, the time lag between the HDI reference year (2014) and the years over which athletes were summed could contribute to the absence of a significant association.

Conclusion

Half of the elite endurance athletes ranked in the TOP20 between 1997 and 2020 are from countries with a medium HDI, followed by a low HDI and a very high HDI. Although no significant association was observed between countries' HDI and athletes' ranking positions, an illustration of the Matthew effect was observed, since a positive and significant relationship in the number of ranked athletes over the years was found: the countries with the highest number of athletes in the first decade were the most represented in the subsequent decades. However, these results do not allow generalization, and future studies should consider investigating a longer time interval, comprising years before 1997, and the relationship with other socioeconomic factors.

Acknowledgements Not applicable.

Authors' contributions All authors contributed to the study's conception and design. Material preparation, data collection, and analysis were performed by MT and TNG. The first draft of the manuscript was written by MT, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Funding Open access funding provided by the University of Zurich. The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.

Availability of data and materials The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.

Conflict of interest The authors have no relevant financial or non-financial interests to disclose.

Ethical approval Not applicable.

Consent to participate Informed consent was obtained from all individual participants included in the study.
NP models with extended gauge groups and extra dimensions: Impact on flavour observables in RS$_c$

Deviations with respect to Standard Model predictions have recently shown up in angular distributions of the FCNC induced mode $B^0 \to K^{*0} \mu^+ \mu^-$. Within New Physics models, such tensions might be explained by new contributions to the Wilson coefficients of the effective Hamiltonian governing this decay. I discuss the issue in the framework of the Randall-Sundrum model with custodial protection (RS$_c$), giving also predictions for other rare $B$ decays.

Introduction

Among rare B decays, the mode B → K* ℓ⁺ℓ⁻ plays a prominent role. Being a loop-induced process within the Standard Model (SM), possible new particles in the loops can modify the predictions for the numerous observables that can be measured, namely the branching ratio, the forward-backward lepton asymmetry, and the K* longitudinal polarization fraction in a few bins of q², the ℓ⁺ℓ⁻ invariant mass squared, which were measured at the B factories for ℓ = e, µ. Recently, LHCb has found discrepancies with respect to SM predictions that could be hints of New Physics. Here I discuss this issue, describing the study performed in [1] within the Randall-Sundrum model [2] with custodial protection (RS_c) [3]. I also review the results obtained within RS_c for the related modes B → K^(*) ν ν̄ [4], for which only upper bounds on the branching ratios are available [5,6,7].

2. B → K* ℓ⁺ℓ⁻ and B → K^(*) ν ν̄ decays: effective Hamiltonians and general features

The b → s ℓ⁺ℓ⁻ transition is described by the effective Hamiltonian

H_eff = -(4 G_F/√2) V_tb V*_ts Σ_i C_i(µ) O_i(µ),

whose basis comprises, in particular, the dipole and semileptonic operators

O_7 = (e/16π²) m_b (s̄_α σ^{µν} P_R b_α) F_{µν},
O_8 = (g_s/16π²) m_b (s̄_α σ^{µν} P_R (λ^a/2)_{αβ} b_β) G^a_{µν},
O_9 = (e²/16π²) (s̄_α γ^µ P_L b_α)(ℓ̄ γ_µ ℓ),
O_10 = (e²/16π²) (s̄_α γ^µ P_L b_α)(ℓ̄ γ_µ γ_5 ℓ).

The corresponding primed operators are obtained by reversing the quark field chirality. α, β are colour indices, λ^a the Gell-Mann matrices; F_{µν} and G^a_{µν} denote the electromagnetic and gluonic field strength tensors, e and g_s the electromagnetic and strong coupling constants, and m_b is the b quark mass. Operators proportional to the strange quark mass have been neglected. Only the unprimed operators appear in the SM. Taking into account the subsequent K* decay into Kπ, the fully differential decay width can be written in terms of q² and of the decay angles, from which the observables listed above are obtained.

The b → s ν ν̄ transition is instead governed by

H_eff^{νν̄} = (G_F α/(2√2 π)) V_tb V*_ts [C_L O_L + C_R O_R] + h.c., with O_{L(R)} = (s̄ γ^µ P_{L(R)} b)(ν̄ γ_µ (1 - γ_5) ν),

where α is the fine structure constant at the Z⁰ scale and θ_W the Weinberg angle. In the SM only O_L appears, with C_L^{SM} = -X(x_t)/sin²θ_W; the function X depends on the ratio of the top and W masses, x_t = m_t²/M_W² [14]. In NP scenarios also O_R can be present, and C_{L,R} assume model-specific values. It is useful to introduce the parameters [6]

ε = √(|C_L|² + |C_R|²)/|C_L^{SM}|,   η = -Re(C_L C_R*)/(|C_L|² + |C_R|²):

η probes the presence of O_R, while ε measures the deviation of C_L from its SM value. Predictions in NP extensions can be expressed in terms of η and ε. In [4] the branching fractions and the spectra in the normalized neutrino-pair invariant mass s_B = q²/m_B² have been computed and, for the decay B → K* ν ν̄, also the polarization fractions F_{L,T} for longitudinally and transversely polarized K*. Denoting by R_{K/K*} = B(B → K ν ν̄)/B(B → K* ν ν̄) the ratio of the two branching fractions, this quantity is expected to be affected by a small hadronic uncertainty [6].

Randall-Sundrum model with custodial protection

The RS model is defined in a five-dimensional spacetime with metric ds² = e^{-2ky} η_{µν} dx^µ dx^ν - dy², where η_{µν} = diag(+1, -1, -1, -1), x denotes the ordinary 4D coordinates, and y varies in the range 0 ≤ y ≤ L (y = 0 is called the UV brane, y = L the IR brane). The parameter k is fixed to k = 10^19 GeV to address the hierarchy problem through a geometrical mechanism. The custodially protected variant of the model is based on the group SU(3)_c × SU(2)_L × SU(2)_R × U(1)_X × P_{L,R} [3]. The discrete P_{L,R} symmetry implies a mirror action of the two SU(2)_{L,R} groups, preventing large corrections to the Z couplings of left-handed fermions.
The group is broken down to the SM gauge group by boundary conditions (BC) on the UV brane; moreover, Higgs-driven spontaneous symmetry breaking occurs, as in the SM. All fields can propagate in the bulk, except for the Higgs, which is localized close to the IR brane. Due to the compactification of y, towers of Kaluza-Klein (KK) excitations exist for all particles, and the zero modes are identified with the SM particles. To distinguish particles having a SM counterpart from those without one, Neumann BC on both branes (++) are imposed on the former, while Dirichlet BC on the UV brane and Neumann BC on the IR brane (-+) are chosen for fields without SM partners.

The enlarged gauge group leads to new gauge bosons. For SU(2)_L and SU(2)_R they are W_L^{a,µ} and W_R^{a,µ} (a = 1, 2, 3), respectively, while the U(1)_X gauge field is X^µ. Charged gauge bosons are defined as W^±_{L(R)µ} = (W^1_{L(R)µ} ∓ i W^2_{L(R)µ})/√2. As for the neutral fields, W_R^3 and X mix to give Z_X and B; B mixes with W_L^3, giving the Z and A fields. Zero modes and higher KK modes of the gauge fields also mix; neglecting modes with KK number larger than 1, mixing occurs among the zero modes and the first KK modes of the gauge fields [15]. In the Higgs sector, the Higgs field H(x, y) transforms as a bidoublet under SU(2)_L × SU(2)_R and as a singlet under U(1)_X. It contains two charged and two neutral components; only one of the two neutral fields, h⁰, has a non-vanishing vacuum expectation value, v = 246.22 GeV, as in the SM.

Moving to fermions, the SM left-handed doublets fit into bidoublets of SU(2)_L × SU(2)_R, together with two new fermions. Right-handed up-type quarks are singlets; neutrinos are only left-handed. Right-handed down-type quarks and charged leptons transform as (3, 1) ⊕ (1, 3) multiplets of SU(2)_L × SU(2)_R, in which additional new fermions are also present. The relation Q = T³_L + T³_R + Q_X holds among the electric charge Q, the third components of the SU(2)_L and SU(2)_R isospins T³_{L,R}, and the charge Q_X. The profiles of the zero-mode fermions involve the fermion bulk mass, which is the same for fermions in the same SU(2)_L × SU(2)_R multiplet. As in the SM, quark flavour eigenstates undergo a rotation to give the mass eigenstates. Denoting by U_{L(R)} and D_{L(R)} the rotation matrices of up-type and down-type left (right) quarks, respectively, the CKM matrix is V_CKM = U†_L D_L. Their matrix elements enter the Feynman rules of the tree-level flavour-changing neutral currents that exist in the model, mediated by Z, Z', Z_H, and by the first KK modes of the photon and of the gluon. Such elements depend on the 5D Yukawa couplings λ^{u,d}_{ij} of up- and down-type quarks, constrained to reproduce the quark masses and CKM elements. Adopting the assumption of real and symmetric λ^{u,d} matrices, one is left with six independent entries, namely λ^u_{12}, λ^u_{13}, λ^u_{23}, λ^d_{12}, λ^d_{13}, λ^d_{23}, which, together with the bulk mass parameters, represent the set of numerical inputs of our study.

In the RS_c model the Wilson coefficients, C^RS = C^SM + ∆C, have been derived in [16], except for C_7 and C'_7, computed in [1] under the same assumptions adopted in [16]. Different computational schemes for C^(')_7 were used in [17]. The new contributions ∆C are obtained by scanning the parameter space. In [1,4] the quark bulk mass parameters and the independent entries of the matrices λ^{u,d} have been fixed by imposing the quark masses and CKM constraints, as well as the constraints derived in [18] using the measurements of the Zbb̄ coupling, of the b-quark left-right asymmetry parameter, and of the forward-backward asymmetry for b quarks.
The parameter space is further reduced by imposing that B(B → K* µ⁺µ⁻) and B(B → X_s γ) lie within the 2σ range of the measurements [19,20]. For further details I refer to [1]. In Fig. 1 the SM and RS_c predictions for A_FB and P_5 are compared, varying the model parameters and including the uncertainty on the form factors computed in [21] using light-cone QCD sum rules [22]. The form factor uncertainty has an impact on the SM results, except for the position of the zero of A_FB(q²), which is almost free of uncertainty. In RS_c, deviations from the SM are small, and the discrepancy with data persists as well. For B → K* τ⁺τ⁻ no data are available at present [1]. Considering the modes B → K^(*) ν ν̄, the corresponding predictions are shown in Fig. 2. The pattern of correlations among the various observables is interesting [4]: B(B⁰ → K*⁰ ν ν̄) and F_L are correlated, while B(B⁰ → K*⁰ ν ν̄) and A_T are anticorrelated, as are R_{K/K*} and F_L, a pattern that can be viewed as a specific feature of RS_c. Similar features appear in the decays B_s → (φ, η, η', f_0(980)) ν ν̄ [4].

Conclusions

In the RS_c model, deviations with respect to SM predictions are found in several observables relative to the modes B → K* ℓ⁺ℓ⁻ and B → K^(*) ν ν̄, even though they are small. Correlations among observables exist that can be used to discriminate this model from other NP scenarios.
2015-10-09T13:04:45.000Z
2015-10-09T00:00:00.000
{ "year": 2015, "sha1": "49ae8c64aec3dc539a35c14e71c0ffe2bd93875d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "bf1de28140b2b746114f80ecaab0d799e88cfd51", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119111309
pes2o/s2orc
v3-fos-license
Fractional quantum Hall states of dipolar fermions in a strained optical lattice

We study strongly correlated ground states of dipolar fermions in a honeycomb optical lattice with spatial variations in hopping amplitudes. Similar to strained graphene, such nonuniform hopping amplitudes produce valley-dependent pseudomagnetic fields for fermions near the two Dirac points, resulting in the formation of Landau levels. The dipole moments polarized perpendicular to the honeycomb plane yield a long-range repulsive interaction. By exact diagonalization in the zeroth-Landau-level basis, we show that this repulsive interaction stabilizes a variety of valley-polarized fractional quantum Hall states such as Laughlin and composite-fermion states. The present system thus offers an intriguing platform for emulating fractional quantum Hall physics in a static optical lattice. We calculate the energy gaps above these incompressible states, and discuss the temperature scales required for their experimental realization.

I. INTRODUCTION

The fractional quantum Hall (FQH) effect [1,2], which was first discovered in GaAs/AlGaAs heterostructures [3], is a remarkable manifestation of strong correlations between electrons. It arises from fractional filling of a massively degenerate Landau level in a high magnetic field, where the interaction effect is significantly enhanced. Consequently, the ground states are highly entangled in both real and momentum spaces, as exemplified by Laughlin wave functions [4]. FQH states are examples of topologically ordered states of matter with long-range entanglement, and exhibit anyonic excitations with fractional charge and statistics [5]. The statistics obeyed by anyons form a representation of the braid group, and can be non-Abelian. Possible non-Abelian anyons in the half-filled second Landau level [6-8] offer candidate building blocks for a fault-tolerant topological quantum computation [9]. Since the realization of FQH states requires an extremely clean two-dimensional system with high mobility, their studies have been limited to silicon, III-V, and oxide heterostructures [3,6,10-12] and graphene [13,14]. Laser-cooled atomic systems, which have unprecedented cleanness, can offer a new platform for the studies of FQH states [15,16]. While a usual magnetic field does not produce a Lorentz force for neutral atoms, different methods of engineering synthetic magnetic fields that do produce such a force have been developed [17,18]. Such methods include rotation [19-22] and optical dressing [23-27] of atoms in the continuum and laser-induced tunneling in optical lattices [28-31] and synthetic dimensions [32-34]. On the theoretical side, a variety of FQH states have been predicted to appear in scalar Bose gases in synthetic magnetic fields, which include a bosonic Laughlin state [35] and non-Abelian Read-Rezayi states [22,36,37]. High controllability of ultracold atoms offers a potential advantage in the manipulation of non-Abelian excitations over solid-state devices. While high synthetic magnetic fields have already been realized with the technique of laser-induced tunneling [28-31, 33, 34], the Raman processes used in this technique involve heating of the system, which crucially limits the time scale of experiments. Recently, Tian, Endres, and Pekker [38] have proposed an interesting scheme that is free from this difficulty.
Their theoretical proposal is inspired by the fact that in graphene [39-41] and molecular graphene [42], nonuniform strain induces valley-dependent high pseudomagnetic fields for fermions near the two Dirac points [43]. The authors of Ref. [38] have proposed a method of generating spatially varying hopping amplitudes in a honeycomb optical lattice, which can mimic these systems. It is based on a simple configuration where three Gaussian laser beams intersect at 120° but their centers are displaced from the center of the system. This scheme can realize quasiuniform high pseudomagnetic fields in a static optical lattice, and significantly enlarge the time scale of experiments. It is interesting to ask what quantum phases emerge by loading interacting fermions in such a "strained" honeycomb optical lattice. For strained graphene, where electrons interact via a Coulomb interaction, the emergence of valley-polarized (fractional) quantum Hall states and valley-symmetric topological states has been discussed [44,45]. For ultracold spin-1/2 fermionic atoms, the dominance of an intercomponent s-wave interaction is likely to lead to spontaneous spin polarization; the resulting system is essentially noninteracting due to the absence of an intracomponent s-wave interaction, and cannot stabilize a topologically ordered state. By contrast, if the fermions possess large electric or magnetic dipole moments [46], they interact via a long-range interaction even when the spin state is polarized. There has recently been remarkable progress in the creation and manipulation of dipolar Fermi gases. Fermionic polar molecules such as 40K87Rb [47-50] and 23Na40K [51,52] have been prepared in their absolute ground states, while magnetic atoms such as 161Dy [53], 167Er [54], and 53Cr [55] have been brought to Fermi degeneracy. In this paper, we study strongly correlated ground states of dipolar fermions in a strained honeycomb optical lattice, which can be realized with the scheme of Ref. [38]. The dipole moments are taken to be polarized perpendicular to the honeycomb plane, yielding a long-range repulsive interaction. The low-energy effective theory of this system is given by interacting Dirac fermions near two valleys in mutually antiparallel magnetic fields. We simulate this theory by exact diagonalization (ED) in the zeroth-Landau-level (ZLL) basis in a spherical geometry. We find that there appear a variety of valley-polarized FQH states such as Laughlin [4] and composite-fermion states [56-58] of particles and holes. The present system thus offers an intriguing platform for emulating FQH physics in a static optical lattice. We calculate the energy gaps above these incompressible states, and discuss the temperature scales required for their experimental realization. The rest of the paper is organized as follows. In Sec. II, we describe our system and explain how spatially varying hopping amplitudes in a honeycomb optical lattice generate pseudomagnetic fields. We then derive a low-energy effective theory of this system, which has the form of interacting two-species Dirac fermions in antiparallel magnetic fields. In Sec. III, we formulate the problem using the ZLL basis in the spherical geometry, which is useful in numerical analyses. In Sec. IV, we present our ED results. In particular, we perform an extensive search for FQH states, and calculate the energy gaps above these ground states.
We discuss the possibility of realizing these states in a particular optical lattice setup. In Sec. V, we present a summary of this paper, and discuss an outlook for future studies. In Appendix A, we give details of the calculation of the pseudopotentials.

II. DIPOLAR FERMIONS IN A STRAINED HONEYCOMB OPTICAL LATTICE

We consider a system of fermions loaded into a honeycomb optical lattice with an effective "strain" due to spatially varying hopping amplitudes. Each fermionic atom or molecule possesses an electric or magnetic dipole moment polarized perpendicular to the honeycomb plane, yielding a long-range dipole-dipole interaction V(r) = C r^{-3}, where C is a constant. We review how spatially varying hopping amplitudes produce valley-dependent pseudomagnetic fields for fermions near the two Dirac points. We then introduce the continuum description of the system in terms of interacting two-species Dirac fermions in antiparallel magnetic fields. The honeycomb lattice consists of two sublattices A and B. We introduce three vectors δ_j (j = 1, 2, 3) which connect any A site to its three neighboring B sites; here, a is the length of a nearest-neighbor bond, and e_x and e_y are the unit vectors along the x and y directions, respectively. The triangular Bravais lattice is generated by the basis vectors a_1 = δ_1 − δ_2 and a_2 = δ_1 − δ_3, and the area of the unit cell is A_c = |a_1 × a_2| = 3√3 a²/2. For a sufficiently deep optical lattice, the kinetic part H_kin of the Hamiltonian is well described by a tight-binding model on a honeycomb lattice. We first consider a spatially uniform optical lattice with hopping amplitudes t_j (> 0) along δ_j (j = 1, 2, 3) [59]. In this case, H_kin couples the two sublattices through the off-diagonal amplitude f(k) = Σ_j t_j e^{ik·δ_j}, where c_X(k) annihilates a fermion with wave vector k on the sublattice X (= A, B). When t_1 = t_2 = t_3, the energy bands ±|f(k)| exhibit two Dirac cones at the two Brillouin zone corners K_ξ = ξK = −ξ(4π/(3√3 a)) e_x, where ξ = ± is the valley index. When the t_j's are not equal, the Dirac points are shifted from K_ξ. To see it, we set k = ξK + q and expand f(k) in terms of qa. Assuming t_j ≈ t (j = 1, 2, 3), we find a linearized amplitude (Eq. (4)) in which v_F = 3ta/(2ℏ) is the velocity of the Dirac fermions and the vector A is fixed by the deviations of the hopping amplitudes from t (Eq. (5)). As seen in Eq. (4), the two Dirac points shift in mutually opposite directions by the vectors ξA/ℏ. When the Fermi level is close to the Dirac points, H_kin can be effectively described at low energies by two-species massless Dirac fermions, H_kin ≈ Σ_{ξ=±} ∫ d²r Ψ_ξ†(r) v_F τ_ξ·(p − ξA) Ψ_ξ(r) (Eq. (7)), restricted to wave numbers below a high-wave-number cutoff ∆, where p = −iℏ(∂_x, ∂_y) is the momentum operator and τ_ξ = (ξσ_x, σ_y) with (σ_x, σ_y, σ_z) being the Pauli matrices. When going from Eq. (6) to Eq. (7), we have performed suitable replacements and a Fourier transformation (Eq. (9)). The fermionic operator c(r) on the original lattice site r ∈ X (= A, B) is related to Ψ_ξ(r) = (Ψ_{ξ,A}(r), Ψ_{ξ,B}(r))ᵀ through c(r) = Σ_ξ e^{iK_ξ·r} Ψ_{ξ,X}(r) (Eq. (10)). When the hopping amplitudes vary slowly in space, the shift ξA in the Dirac Hamiltonian (7) also varies spatially and plays the role of a pseudovector potential. Tian et al. [38] have proposed that such spatially varying hopping amplitudes t_j(r) (j = 1, 2, 3) can be generated in a honeycomb optical lattice by starting from a standard configuration of three Gaussian laser beams intersecting at 120° [60,61] and displacing the centers of the beams from the center of the system. The induced pseudovector potentials ξA(r) are shown to lead to quasiuniform pseudomagnetic fields ξB(r) = ξ[∂_x A_y(r) − ∂_y A_x(r)] for fermions near the two valleys; the short numerical sketch below illustrates how unequal hoppings shift the Dirac points.
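The following Python sketch evaluates f(k) = Σ_j t_j e^{ik·δ_j} numerically and locates the shifted Dirac point. The nearest-neighbor vectors below follow one common convention (not necessarily that of the original figures), and the anisotropy value is an arbitrary placeholder.

```python
import numpy as np
from scipy.optimize import minimize

# Nearest-neighbor vectors of the honeycomb lattice (a = 1); one common convention.
a = 1.0
delta = np.array([[np.sqrt(3) / 2, 0.5], [-np.sqrt(3) / 2, 0.5], [0.0, -1.0]]) * a

def f(k, t):
    """Off-diagonal tight-binding amplitude f(k) = sum_j t_j exp(i k . delta_j)."""
    return np.sum(t * np.exp(1j * delta @ k))

K = np.array([4 * np.pi / (3 * np.sqrt(3) * a), 0.0])  # Brillouin-zone corner

t_uniform = np.array([1.0, 1.0, 1.0])
print(abs(f(K, t_uniform)))  # ~0: the Dirac point sits exactly at the corner

# Slightly unequal hoppings move the zero of f(k) away from the corner,
# mimicking the constant pseudovector potential of Eq. (4).
t_strained = np.array([1.02, 1.0, 1.0])
res = minimize(lambda k: abs(f(k, t_strained)), K, method="Nelder-Mead")
print(res.x - K)  # displacement of the Dirac point, playing the role of A/hbar
```

Making t_j vary slowly in space turns this constant displacement into the spatially varying pseudovector potential discussed above.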
The mutually opposite signs of the pseudomagnetic fields for the two valleys come from the fact that the nonuniform hopping amplitudes do not break the time-reversal symmetry. We note that here time reversal is defined as the complex conjugation operator for polarized fermions (and does not involve a spin rotation). Next we consider the interaction part of the Hamiltonian, H_int = (1/2) Σ_{r≠r′} V(r − r′) : n(r) n(r′) :, where n(r) = c†(r)c(r) is the number operator at the site r, and the colons : · : indicate normal ordering. By performing the replacement Σ_{X=A,B} Σ_{r∈X} → Σ_X ∫ d²r/A_c and using Eq. (10), we obtain the effective interactions between Dirac fermions (Eq. (12)). Here, valley-converting processes with ξ_1 ≠ ξ_2 or η_1 ≠ η_2 involve highly oscillating factors e^{±2iK·r} or e^{±2iK·r′}, and can be neglected if the interaction V(r − r′) varies slowly over the scale of the lattice constant. In such a case, the interaction Hamiltonian can be recast into the sum of intra-valley and inter-valley density-density interactions, H_int = (1/2) Σ_{ξ,η=±} ∫ d²r d²r′ V(r − r′) : ρ_ξ(r) ρ_η(r′) : (Eq. (13)), where ρ_ξ(r) = Ψ_ξ†(r)Ψ_ξ(r) is the density operator for the valley ξ = ±. At each valley K_ξ, a spatially uniform pseudomagnetic field ξB leads to the formation of relativistic Landau levels E_n = sgn(n) √(2|n|) ℏv_F/l_B (n = 0, ±1, ±2, ...) [62,63], where l_B = √(ℏ/B) is the magnetic length. Each level has the degeneracy N_φ = A/(2πl_B²), where A is the area of the system. Below we consider the case when the Fermi level lies near the ZLL n = 0 and this level is partially populated as in Fig. 1. When the interaction energy scale is much smaller than the Landau-level spacing √2 ℏv_F/l_B, we can analyze the interaction Hamiltonian (13) within the restricted manifold spanned by the ZLL states. The number of fermions, N_ξ, in the ZLL is independently conserved at each valley K_ξ since H_int does not involve inter-valley tunneling within our approximation in Eq. (13). Exact diagonalization calculations can thus be performed separately in each sector with fixed N_±. Similar to the case of graphene, we define the filling factor ν as ν = (N_+ + N_− − N_φ)/N_φ (Eq. (15)), which ranges over −1 < ν < 1 in the present case. The case of ν = 0 corresponds to the half-filled ZLL. Because the particle-hole transformation relates the physics for ±|ν|, we focus on the case of −1 < ν < 0 (i.e., 0 < ν̄ < 1, where ν̄ ≡ 1 + ν).

III. SPHERICAL GEOMETRY

To analyze the interaction Hamiltonian (13) in the ZLL, it is useful to adopt the spherical geometry [64,65], which is uniform and has no edge. In Sec. III A, we review the relativistic Landau model on a sphere, which has been solved in Refs. [66,67]. We describe the derivation of the single-particle eigenstates in the ZLL, based on an algebraic method formulated recently by Hasebe [68]. In Sec. III B, we construct Haldane's pseudopotentials [64] in the ZLL for a power-law-decaying interaction, with particular focus on the case of a dipole-dipole interaction.

A. Single-particle eigenstates

We consider two-species Dirac fermions labeled by the valley index ξ = ± and subject to antiparallel pseudomagnetic fields on a sphere. Each species has two pseudospin states, which correspond to the two sublattices of the honeycomb lattice. We introduce the polar coordinates (r, θ, φ) and the associated unit vectors e_r, e_θ, e_φ. We place magnetic monopoles of valley-dependent integer charges ξN_φ ≡ ξ(2S) (in units of the flux quantum 2πℏ) at the center of the sphere. These monopoles produce uniform magnetic fields ξB e_r on the sphere of radius R = √S l_B, where l_B = √(ℏ/B) is the magnetic length. The corresponding vector potentials in the Schwinger gauge are given by ξA(r), with A(r) = −(ℏS/r) cot θ e_φ. As an analogue of the Dirac Hamiltonian in Eq.
(7), we consider a single-particle Hamiltonian H_ξ on the sphere (Eq. (17)), constructed from τ_ξ = (ξσ_x, σ_y, ξσ_z) and an analogue of the dynamical momentum for the relativistic problem (Eq. (18)). Here, the last term in Eq. (18) originates from the spin connection [66-68], and has the effect of modifying the monopole charge ξ(2S) by ξ(∓1). Using Eq. (16) and the representation of ∇ in spherical coordinates, we obtain its explicit form (Eq. (19)). To reveal the algebraic aspect of the problem, it is useful to introduce the edth differential operators ð^{(S)} [69-71] and the orbital angular momentum operator L^{(S)} (Eqs. (20) and (21)). Here, L^{(S)} is the generator of the spherical symmetry in the non-relativistic Landau problem on a sphere with a monopole charge 2S [64,65], and obeys the standard algebra of an angular momentum. Furthermore, the edth and angular momentum operators obey the algebra of Eqs. (22a)-(22d). In terms of the edth operators (20), the Hamiltonian (17) is expressed simply as an off-diagonal combination of them (Eq. (23)) [68]. Using Eqs. (22c) and (22d), one can show that a total angular momentum operator J_ξ (Eq. (24)) commutes with H_ξ. The asymmetric form of this operator for the two species arises from the effective shifts in the monopole charge due to the spin connection in Eq. (18). Using Eqs. (22a) and (22b), we find the square of the Hamiltonian (Eq. (25)), from which we obtain the energy spectrum E_n = sgn(n) (ℏv_F/R) √(|n|(2S + |n|)) (Eq. (26)). The n-th level has the magnitude j = S − 1/2 + |n| of the angular momentum J_ξ, and is (2j + 1)-fold degenerate. The sphere spectrum (26) coincides with the disc spectrum (14) when |n| ≪ S. The single-particle eigenstates in the ZLL (n = 0) have the total angular momentum j = S − 1/2 ≡ S̄, and are given by Eq. (27) for the valleys ξ = ±1, respectively. Here, m = −S̄, ..., S̄ is the z component of the total angular momentum, and r is constrained to the surface of the sphere (r = R e_r). We have also introduced the spinor coordinates u = cos(θ/2) e^{iφ/2} and v = sin(θ/2) e^{−iφ/2} (Eq. (28)) and their complex conjugate counterparts ū and v̄, where c ≡ cos(θ/2) and s ≡ sin(θ/2) are shorthand notations. The normalization factor N_{S̄m} is given in Eq. (29). Both of the states in Eq. (27) are localized on the sphere according to m (Eq. (30)); in particular, the m = S̄ state is localized around the south (north) pole of the sphere for ξ = + (−). Such reversed locations between the two valleys arise from the fact that mutually antiparallel pseudomagnetic fields are induced around the two valleys.

B. Pseudopotentials

We consider the interaction Hamiltonian (13) within the restricted manifold spanned by the ZLL states (27). In this restricted manifold, the interactions can be conveniently represented in terms of Haldane's pseudopotentials [64,65]. We calculate the pseudopotentials for both the intra- and inter-valley interactions. Expressions of the pseudopotentials for a general interaction potential V(r) are derived in Appendix A. Here we calculate them for a power-law-decaying potential V(r) = C r^{-α}, in particular, for a dipole-dipole interaction with α = 3. We note that the calculation of the pseudopotentials goes basically in the same way as in the non-relativistic case [64,65], since the ZLL states (27) have essentially the same form as the lowest-Landau-level states in the non-relativistic case. To introduce the pseudopotentials, we first note that because of the spherical symmetry, two-body eigenstates for a general interaction potential V(|r_1 − r_2|) are constructed through the angular momentum coupling of Eq. (27) as |I, M⟩ = Σ_{m_1+m_2=M} ⟨S̄, m_1; S̄, m_2|I, M⟩ Φ^{(ξ)}_{m_1}(r_1) Φ^{(η)}_{m_2}(r_2) (Eq. (31)), where ⟨S̄, m_1; S̄, m_2|I, M⟩ is the Clebsch-Gordan coefficient and ξ, η = ±. Here, I and M are the magnitude and z-component, respectively, of the total angular momentum of the two particles; the short sketch below illustrates this coupling.
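The angular-momentum coupling in Eq. (31) can be reproduced with standard Clebsch-Gordan routines. The sketch below uses sympy for two particles with j_1 = j_2 = S̄ = 5/2 (an arbitrary small example, not the 2S̄ = 12 case of the text), and its comments note the exchange symmetry that restricts intra-valley channels to odd 2S̄ − I.

```python
from sympy import S
from sympy.physics.quantum.cg import CG

jbar = S(5) / 2   # toy value of S-bar
I_tot, M = 3, 0   # one total-angular-momentum channel

for two_m1 in range(-5, 6, 2):
    m1 = S(two_m1) / 2
    m2 = M - m1
    c12 = CG(jbar, m1, jbar, m2, I_tot, M).doit()
    c21 = CG(jbar, m2, jbar, m1, I_tot, M).doit()
    # Exchanging the particles gives a factor (-1)^(2*jbar - I); antisymmetric
    # (fermionic) combinations therefore exist only for odd 2*S-bar - I,
    # as stated in the text. Here 2*jbar - I = 2, so this channel is symmetric.
    print(m1, m2, c12, c21)
```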
The pseudopotential V^{(ξ,η)}_I is defined as the eigenvalue of the interaction potential V(|r_1 − r_2|) for the state (31). Since the interaction Hamiltonian (13) consists only of two-body scattering processes, we can decompose H^{(ξ,η)}_int in terms of the two-body eigenstates (31) as a sum over the channels I (Eq. (32)). Here, we have introduced the pair creation operator A^†_{IM} (Eq. (33)), where c^{(ξ)†}_m is the fermionic creation operator for the m-th state (27) in the ZLL at the valley K_ξ. For intra-valley interactions (ξ, η) = (+, +) and (−, −), the sum in Eq. (32) is restricted to odd 2S̄ − I because of the Fermi statistics. Remarkably, while the interaction Hamiltonian (13) is originally specified by the continuous function V(r), it is now represented by a finite number of parameters, {V^{(ξ,η)}_I}. As described in Appendix A, for a general interaction V(r), the pseudopotentials are obtained as a finite sum (Eq. (34)) over integrals Ṽ_k (Eqs. (35a) and (35b)), with c ≡ cos(θ/2) and s ≡ sin(θ/2) as defined in Eq. (28). Below we calculate Eq. (34) for a power-law-decaying potential V(r) = C r^{-α}. We note that the integrals in Eq. (35) diverge for some k owing to the short-distance singularity of V(r), and need careful treatment. We first consider the intra-valley interactions. For 4S̄ − 2k + 1 − α > −1, i.e., k < 2S̄ + 1 − α/2, the integral in Eq. (35a) converges, and is calculated as a ratio of factorials (Eq. (36)); the integral in Eq. (35a) diverges otherwise. Here, the factorial x! for a real number x > 0 is defined via the Gamma function as x! = Γ(x + 1). For I < 2S̄ + 1 − α/2, the sum in Eq. (34) involves only convergent numbers, and is calculated in closed form (Eq. (37); see Appendix A). For a dipole-dipole interaction (α = 3), V^{(+,+)}_I diverges for I = 2S̄, but this does not correspond to an allowed scattering channel for fermions. Thus we can use Eq. (37) for all the allowed scattering channels of the intra-valley interactions. We note that for a Coulomb interaction (α = 1), Eq. (37) coincides with the result of the non-relativistic case in Ref. [65]. We next consider the inter-valley interactions. For 2k + 1 − α > −1, i.e., k > α/2 − 1, Eq. (35b) is calculated analogously (Eq. (38)). For k ≤ α/2 − 1, the integral in Eq. (35b) diverges, and we need to regularize it appropriately. A natural short-distance cutoff for the interaction potential V(r) is given by the length a of a nearest-neighbor bond introduced in Eq. (1). Setting V(Rθ) = 0 for 0 ≤ θ ≤ a/R, we find the regularized value (Eq. (39)). For a dipole-dipole interaction (α = 3), this cutoff dependence occurs for k = 0, which contributes to Eq. (34) for all 0 ≤ I ≤ 2S̄. For the experimental condition considered in Sec. IV C, we have a/l_B ≈ 0.25; here we use this ratio in evaluating Eq. (39). Setting 2S̄ = 2S − 1 = 12, we plot the pseudopotentials for the intra- and inter-valley interactions in Fig. 2 (obtained by substituting Eqs. (36), (38), and (39) into Eq. (34); for the intra-valley interactions, only the channels with odd (even) I are allowed for even (odd) 2S̄ because of the Fermi statistics, and the divergent intra-valley value at I = 2S̄ = 12 is not shown, since in any case it does not correspond to an allowed channel for fermions). We first find that the inter-valley pseudopotentials have a far larger scale than the intra-valley ones. This comes from the diverging contribution of the short-distance part of V(r), as found in Eq. (39), and causes a spontaneous valley polarization, as we discuss later in Sec. IV A. We further find that the intra- and inter-valley pseudopotentials depend differently on I: the former (latter) monotonically increases (decreases) with increasing I. This can be understood as follows.
Equation (30) indicates that a particle having an average angular momentum ⟨J_ξ⟩ is localized in the direction of −ξ⟨J_ξ⟩ on the sphere. Thus, an intra-valley (inter-valley) repulsive interaction on the sphere implies an "antiferromagnetic" ("ferromagnetic") interaction between the angular momenta of the two particles, resulting in a larger energy cost for larger (smaller) I.

IV. NUMERICAL INVESTIGATION OF FRACTIONAL QUANTUM HALL STATES

We consider the situation in which the ZLL is partially populated as in Fig. 1, and numerically investigate the FQH states stabilized by a dipole-dipole interaction. We have performed ED calculations for the interaction Hamiltonian (13) in a spherical geometry, using the pseudopotential representation described in Sec. III B. We demonstrate that owing to the strong inter-valley pseudopotentials, the ground state is spontaneously fully valley-polarized for an arbitrary filling factor. We then carry out an extensive search for incompressible states in the valley-polarized case, and find that a variety of FQH states such as Laughlin and composite-fermion states are stabilized. We estimate the energy gaps above these states, and discuss the possibility of realizing these states in experiments. When specifying the filling factor in this section, we use ν̄ rather than ν in Eq. (15), since the former corresponds directly to the filling factor in the non-relativistic case. As mentioned at the end of Sec. II, we focus on the case of 0 < ν̄ < 1 (i.e., −1 < ν < 0). We note that Ref. [44] has analyzed strongly correlated phases in strained graphene through ED of a lattice model in a torus geometry. Compared to their work, our approach based on a continuum theory on a sphere can significantly simplify the search for candidate incompressible states through the use of the total angular momentum of the ground state, as we demonstrate in Sec. IV B. Furthermore, there is no topological degeneracy of the ground state on a sphere, which also simplifies the analysis of the gap above the ground state. Meanwhile, our approach is not suitable for treating short-distance details of interactions on a lattice as in Ref. [44].

A. Valley polarization

We first demonstrate that the ground state is spontaneously fully valley-polarized for an arbitrary filling factor. Figure 3 presents the dependence of the ground-state energy on the population imbalance N_+ − N_− between the two valleys, for each fixed value of N = N_+ + N_−. We find that for each N, the lowest-energy state is found for the fully imbalanced case N_+ − N_− = N. This indicates that the ground state is spontaneously fully valley-polarized. This occurs for all values of the monopole charge 2S that we have investigated. This can be understood from the enhanced role of inter-valley interactions as found in the behavior of the pseudopotentials in Fig. 2. We note that a similar behavior has been discussed as a phase separation instability in Ref. [73].

B. Numerical search for FQH states

Focusing on the fully valley-polarized sector with (N_+, N_−) = (N, 0), we have carried out an extensive search for incompressible ground states in the (2S, N) plane as shown in Fig. 4. We note that ED in this sector is insensitive to the choice of the short-distance cutoff (the intra-valley pseudopotential V^{(+,+)}_{2S̄} does depend on the cutoff, but does not correspond to an allowed scattering channel for fermions). The sketch below illustrates the sizes of the fixed-(N_+, N_−) sectors involved in such calculations.
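For orientation, here is a minimal Python count of the many-body basis states in a fixed-(N_+, N_−) sector, before the block diagonalization by total angular momentum that the text exploits. The orbital number corresponds to 2S̄ = 12 used above; this is only a back-of-the-envelope illustration of why the search is numerically feasible.

```python
from math import comb

def sector_dim(n_orb, n_plus, n_minus):
    """Dimension of the ED sector with fixed valley populations (N+, N-):
    fermions fill n_orb = 2*Sbar + 1 ZLL orbitals independently in each valley."""
    return comb(n_orb, n_plus) * comb(n_orb, n_minus)

n_orb = 13  # 2*Sbar + 1 with 2*Sbar = 12, as in Fig. 2 of the text
N = 5       # total particle number for the nu-bar = 1/3 state

for n_plus in range(N + 1):
    print((n_plus, N - n_plus), sector_dim(n_orb, n_plus, N - n_plus))
# The fully polarized sector (5, 0) has only C(13, 5) = 1287 states,
# while the balanced sector (3, 2) already has 22308.
```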
Furthermore, in this sector, the results are symmetric around ν̄ = 1/2 because the particle-hole transformation relates the filling factors ν̄ and 1 − ν̄. Because of the spherical symmetry, the total angular momentum, which is defined as the sum of the J_ξ operator in Eq. (24) over all the particles, commutes with the Hamiltonian. The ED calculation can thus be performed separately for different values of the magnitude J of the total angular momentum. Incompressible states in general appear as unique ground states with J = 0, which are indicated by filled circles in Fig. 4. The area of each filled circle is proportional to the neutral gap, which is defined as the excitation gap for fixed (2S, N_+, N_−). In the thermodynamic limit, the filling factor is given by ν̄ = N/(2S̄) = N/(2S̄ + 1). However, for incompressible states on a finite sphere, the relation between N and 2S̄ involves a characteristic shift δ [74]: 2S̄ = ν̄^{-1} N − δ (Eq. (40)), where δ depends on the individual candidate wave function. For example, the Laughlin state at ν̄ = 1/(2p + 1) has the shift δ = 2p + 1. The relation (40) can be used to identify different FQH states. The FQH states with 0 < ν̄ < 1/2 that can be identified in Fig. 4 are summarized in Table I (each state has its characteristic filling factor ν̄ and shift δ; for each state in the table, at least three filled circles are found to lie on the corresponding line (40) in Fig. 4). All the states can be interpreted as integer quantum Hall states of composite fermions [56-58], which have the filling factor and shift ν̄ = n/(2pn + 1), δ = 2p + n (Eq. (41)); the corresponding p and n values are also shown in the table. These states include the Laughlin states (n = 1) [4] and Jain's principal sequence (p = 1) as special cases. We note that the counterparts of the states in Table I under the particle-hole transformation, which have the filling factors ν̄ = 3/5, 2/3, etc., are also seen in Fig. 4. We note that the appearance of the Laughlin state at the 1/3 filling for dipolar fermions has also been discussed in Refs. [75-77]. Among the FQH states listed in Table I, the most prominent gaps are found for the principal sequence ν̄ = n/(2n + 1) with n = 1, 2. In order to discuss the experimental realizability of these states, we estimate the excitation gaps above the ground states in the thermodynamic limit. In Fig. 5, we plot the ED data for the neutral excitation gap as a function of 1/N. For the ν̄ = 1/3 state, we fit the data with a quadratic function in 1/N, as was done for non-relativistic fermions interacting via a Coulomb interaction [65]; for the ν̄ = 2/5 state, we perform a simple linear fit, since the dependence of the data on N is less smooth. The fits give the extrapolated gaps quoted in Eqs. (42a) and (42b). In order for ∆_{ν̄=1/3} ≡ ∆^{(∞)}_{ν̄=1/3} to be the lowest excitation energy above the ground state, it must be smaller than the excitation energies to the non-fully-valley-polarized sectors. Figure 3 indicates that this is true. For N = 5, which corresponds to the ν̄ = 1/3 FQH state, the gap to the sector with (N_+, N_−) = (N − 1, 1) is given by δE ≈ 0.103 (C/l_B³); this is slightly larger than ∆_{ν̄=1/3}, and should be at least comparable to it even if it is extrapolated to the thermodynamic limit. (A minimal sketch of such a 1/N extrapolation is given below.)
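A minimal sketch of the finite-size extrapolation described above, using numpy polynomial fits in 1/N. The gap values here are placeholders, not the actual data of Fig. 5.

```python
import numpy as np

# Hypothetical finite-size neutral gaps (units of C/l_B^3) vs particle number N;
# placeholder values only.
N = np.array([5, 6, 7, 8, 9])
gap = np.array([0.081, 0.078, 0.076, 0.075, 0.074])

x = 1.0 / N

# Quadratic fit in 1/N, as done for the nu-bar = 1/3 state;
# the constant term is the thermodynamic-limit estimate Delta^(inf).
c2, c1, c0 = np.polyfit(x, gap, 2)
print("quadratic extrapolation:", c0)

# A simple linear fit, as used for the nu-bar = 2/5 state.
b1, b0 = np.polyfit(x, gap, 1)
print("linear extrapolation:", b0)
```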
Using trial wave functions, Ref. [76] has estimated the quasihole excitation energy of the Laughlin state at the 1/3 filling to be (0.0132 ± 0.0020) C/l_B³, which is several times smaller than ∆_{ν̄=1/3}. Meanwhile, a natural excitation from the ground state is a "quasiexciton" pair of a quasihole and a quasiparticle [74]. The neutral excitation gap estimated in Eq. (42a) can be interpreted as the lowest excitation energy of such a pair. We thus expect that the properties of the Laughlin ground state can be observed if the temperature is below the scale of ∆_{ν̄=1/3}. For the other FQH states listed in Table I, we do not have a sufficient number of finite-size data to make a reliable extrapolation of the excitation gap to the thermodynamic limit. Yet, we find that as we decrease ν̄, the gap tends to decrease more rapidly than in the case of a Coulomb interaction: for example, the ratio ∆_{ν̄=1/5}/∆_{ν̄=1/3} is around 0.05 and 0.2 [65] for the dipole-dipole and Coulomb cases, respectively (we have used the N = 6 value to estimate ∆_{ν̄=1/5} in the former). This can be understood from the behavior of the pseudopotentials in Fig. 2. The intra-valley pseudopotential decreases more rapidly than in the Coulomb case as I decreases, which reflects the fact that a dipole-dipole interaction decays more rapidly at long distances. Specifically, using Eq. (37), we find lim_{S̄→∞} V^{(+,+)}_{2S̄−3}/V^{(+,+)}_{2S̄−1} = 1/8 and 5/8 for α = 3 and 1, respectively, which is expected to be the main origin of the reduced gap ratio ∆_{ν̄=1/5}/∆_{ν̄=1/3} in the former case. We note that at sufficiently low filling factors ν̄, the gap would close, leading to the formation of a Wigner crystal [78]. The stability of the Wigner crystal for ν̄ < 1/7 has been discussed in Ref. [79]. A more detailed analysis of the competition with the Wigner crystal is beyond the scope of the present paper. We have also calculated the pair distribution function for the ν̄ = 1/3 state; see Fig. 6. For a uniform system of area A, it is defined through the density-density correlator of the ground state (Eq. (43)), where {r_i} are the positions of the N fermions and the expectation value is taken with respect to the ground state. Because of the spherical symmetry, this function does not depend on the direction of r, and thus G(r) = G(r), where r is the chord distance corresponding to r. For ν̄ = 1/3, a very good approximation to the ground state is given by the Laughlin wave function Ψ_{1/3} ∝ Π_{i<j} (u_i v_j − u_j v_i)³ [4,64] (Eq. (44)). In this wave function, the pair distribution function obeys a power-law dependence G(r) ∝ r⁶ as r → 0. This suppression of G(r) for small r marks the effect of a repulsive interaction, and can also be found in the numerical data in Fig. 6. As we increase r, the numerical data show a hump around r/l_B = 4, and gradually approach unity, which corresponds to the uncorrelated case. We thus estimate the correlation length to be around 4 l_B.

C. Experimental realization

Here we evaluate the energy gap ∆_{ν̄=1/3} for some experimentally relevant situations. We first note that the displacement of the Dirac cones in Eq. (4) is at most the size of the Brillouin zone: |A(r)|/ℏ ≲ 1/a. Thus the pseudomagnetic field B/ℏ = 1/l_B², obtained from the curl of A(r)/ℏ, is at most of the order of 1/(R₀a) in a sample of radius R₀. Through a more detailed analysis [38], the maximum pseudomagnetic field is estimated to be B/ℏ = 2.7/(R₀λ), where λ = (3√3/2)a is the wavelength of the lasers used to create the honeycomb optical lattice. In this case, the gap, which scales as C/l_B³ with l_B = √(R₀λ/2.7), can be expressed in terms of R₀ and λ (Eq. (45)). To achieve a larger gap, it is advantageous to reduce the ratio R₀/l_B.
Meanwhile, the sample radius R₀ should be larger than the correlation length 4 l_B estimated from Fig. 6 in order to observe bulk properties around the center of the sample. For concreteness, let us consider 23Na40K fermionic polar molecules, for which a large electric dipole moment of d = 0.8 Debye has been achieved [51,52]. The coefficient of the dipole-dipole interaction is given by C = d²/(4πε₀), where ε₀ is the vacuum permittivity. For λ = 500 nm and R₀/l_B = 4 [80], the gap is estimated to be ∆_{ν̄=1/3} ≈ k_B × 0.89 nK. Since this value is still smaller than the typical temperature scale of ultracold atom experiments, we propose to use the recently proposed methods of subwavelength lattices [81-84]. In this technique, one can create an optical lattice whose spacing is reduced by a factor of an integer N [83]. If we decrease the sample radius R₀ by a factor of N at the same time by tightening the trap potential, we can keep the ratio R₀/l_B unchanged. In this case, Eq. (45) indicates that the gap can be enhanced by a factor of N³. If we take N = 4, for example, the gap is lifted to about 57 nK, which is in a reasonable range for experimental observation. In view of the rapid development in the creation and manipulation of polar molecules, molecules with a larger electric dipole moment are likely to be achieved, which would provide another route to a larger gap. If FQH states are realized, they can be probed via density plateaus in an in situ image of the trapped atoms, as proposed for integer quantum valley Hall states in Ref. [38]. Finally, we comment on the case of magnetic dipolar atoms. To be specific, let us consider 161Dy atoms, for which Fermi degeneracy has been achieved [53]. These atoms have a large magnetic dipole moment of d = 10 µ_B, where µ_B is the Bohr magneton. The coefficient for the dipole-dipole interaction is given by C = µ₀d²/(4π), where µ₀ is the vacuum permeability. Comparing this coefficient with that for the polar molecules considered above, we find C(161Dy)/C(23Na40K) ≈ 0.013. Therefore, the excitation gap is two orders of magnitude smaller than in the case of polar molecules. (The short sketch below checks these numerical estimates.)
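The quoted estimates can be checked with a few lines of Python using CODATA constants; the Debye conversion factor is the only hard-coded assumption.

```python
import scipy.constants as const

debye = 3.33564e-30  # C*m per Debye (assumed conversion factor)

# 23Na40K molecules: C = d^2 / (4*pi*eps0), with d = 0.8 Debye.
d_mol = 0.8 * debye
C_mol = d_mol**2 / (4 * const.pi * const.epsilon_0)

# 161Dy atoms: C = mu0 * d^2 / (4*pi), with d = 10 Bohr magnetons.
d_dy = 10 * const.physical_constants["Bohr magneton"][0]
C_dy = const.mu_0 * d_dy**2 / (4 * const.pi)

print(C_dy / C_mol)  # ~0.013, as quoted in the text

# Subwavelength-lattice enhancement: shrinking lambda and R0 by an integer
# factor (4 here) at fixed R0/l_B scales the gap C/l_B^3 by that factor cubed.
gap_nK = 0.89
print(gap_nK * 4**3)  # ~57 nK, as quoted
```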
V. SUMMARY AND OUTLOOK

We have studied strongly correlated ground states of dipolar fermions in a honeycomb optical lattice with an effective strain due to spatially varying hopping amplitudes. The low-energy effective theory of this system is given by interacting Dirac fermions near two valleys in mutually antiparallel magnetic fields. We have simulated this theory by ED in the ZLL basis in a spherical geometry. In this basis, the interaction Hamiltonian can be conveniently represented in terms of pseudopotentials. We have shown that owing to the enhanced inter-valley pseudopotentials, the ground state is fully valley-polarized for all the filling factors. We have then carried out an extensive search for FQH states in the fully valley-polarized sector, and have found signatures of several FQH states, including Laughlin and composite-fermion states of particles and holes. The present system can thus emulate FQH physics in a static optical lattice. We have calculated the energy gaps above these incompressible states, and discussed the temperature scales required for their experimental realization. We have shown that by using the methods of subwavelength optical lattices, we can obtain a reasonable gap for observation. We note that the use of Rydberg atoms, which have a large dipole moment and interact strongly through an enhanced van der Waals force [85-87], may further enhance the energy gap. It is interesting to compare the present system with the pseudospin-1/2 Bose gas in antiparallel fields studied in Ref. [72]. In Ref. [72] (see also Ref. [88]), it was found that fractional quantum spin Hall states composed of a pair of nearly independent quantum Hall states are remarkably robust and persist even when the intercomponent s-wave scattering is comparable with the intracomponent one. In the present study, we have found that dipole-dipole interactions of equal magnitudes within each valley and between the valleys lead to fully valley-polarized ground states. This difference from the bosonic case can be understood from the reduced effect of intra-valley interactions due to the prohibition of the scattering channel with I = 2S̄ for fermions. In the present work, we have focused on the case of a partially filled ZLL as in Fig. 1. By changing the density of the system, one can tune the chemical potential to higher Landau levels with n = ±1, ±2, ..., and investigate the FQH states realized in those Landau levels. It would be interesting to explore the possibility of a non-Abelian quantum Hall state as in the case of the half-filled second Landau level in GaAs heterostructures [6-8]. The realization of a non-Abelian state in the highly controlled setting of ultracold atoms would offer a step toward fault-tolerant topological quantum computation [9].

Appendix A

Here we describe some details of the calculation of the pseudopotentials presented in Sec. III B. We essentially follow the method of Ref. [65]. We first derive Eq. (34), which is an expression for the pseudopotentials for a general interaction potential V(r). As described in Sec. III B, the pseudopotential V^{(ξ,η)}_I is given by an expectation value of V(|r_1 − r_2|) in the coupled state (Eq. (A1)), where the normalization factor M_{S̄I} is given by Eq. (A2), and we have introduced the spinor coordinates (u_i, v_i) for r_i (i = 1, 2) as in Eq. (28). We first calculate the pseudopotentials for the intra-valley interaction. Substituting Eq. (A3a) into Eq. (A1), we obtain an expression (Eq. (A4)) in which the integrations are taken over the solid angles Ω_i formed by r_i (i = 1, 2). We then perform a unitary transformation of the spinor coordinates of the second particle (Eq. (A5)), where (θ_2, φ_2) and (c_2, s_2) are defined for (u_2, v_2) as in Eq. (28). The integration over φ_1 trivially yields the constant 2π, while that over φ_2 gives a finite sum (Eq. (A6)), where C(I, k) is the binomial coefficient. Therefore, the intra-valley pseudopotentials are calculated as in Eq. (A7). The inter-valley pseudopotentials can likewise be calculated, as in Eq. (A8); there, the summation has been carried out with a resummation trick (Eq. (A9)).
2016-07-18T20:00:01.000Z
2016-07-18T00:00:00.000
{ "year": 2016, "sha1": "34e21e8cba5e898f5836e87a35e75ba91a0f60a4", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1607.05275", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "34e21e8cba5e898f5836e87a35e75ba91a0f60a4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
242264401
pes2o/s2orc
v3-fos-license
Prevalence of Orthodontic Malocclusion in School Children in the North East of Slovenia: Retrospective Epidemiology Study

Background: Dental malocclusions exhibit the third-highest prevalence among oral pathologies. The occlusion is evaluated in the primary, mixed and permanent dentition. Most orthodontic patients were treated in the early permanent dentition. Early detection of dental anomalies is important to prevent complications and can have short-term and long-term benefits. Epidemiological data on the prevalence of malocclusion are an important determinant in planning appropriate levels of orthodontic services. Dentists have the responsibility to recognize, diagnose, and manage or refer abnormalities. Published data show that the incidence of malocclusions ranges from 11% to 93%. The aim of the study is to determine the prevalence and types of malocclusion among school children over 4 school years, as registered by general dentists. Results: The research was conducted over 4 consecutive school years on schoolchildren from the 1st to the 9th class. Dentists registered the presence and type of malocclusion. We calculated the percentage of children with malocclusion, the distribution of malocclusion types, and the statistical difference between genders. The percentage of malocclusion was lowest in the 1st class and highest in the 7th class. The most common types of malocclusion were deep bite, crowding, crossbite and Angle class II. The upper lateral incisor was the most commonly missing tooth (aplasia). Conclusion: There is a high percentage of malocclusion, about 50%, at the end of schooling (age 15), and a low number of children in orthodontic therapy. A systematic and well-organized dental care program for any target population in a community requires some basic information, such as the prevalence of the condition (16). Epidemiological data on the prevalence of malocclusion are an important determinant in planning appropriate levels of orthodontic services (17). Dentists have the responsibility to recognize, diagnose, and manage or refer abnormalities (18). The source of data is the systematic examination of school children, performed by dentists in children's and youth dentistry. They are the first step in the detection of malocclusions and referral to an orthodontist. It is important that some malocclusions are recognized early and referred to an orthodontist for therapy. Sometimes, however, a malocclusion is noted and referred that does not require orthodontic therapy; it may be a temporary condition during physiological tooth exchange. This can make access to orthodontic therapy difficult for those who really need it. Children are the ideal population for prevention, for monitoring development, and for determining the effectiveness of prevention programs, and preventive programs can be modified/updated based on epidemiological studies. Published data show that the incidence of malocclusions ranges from 11% to 93% (19,20), and these variations are difficult to explain (17). The prevalence of malocclusions in the primary dentition of Brazilian children was 75.8% (21). Gelgor found in his study that only 10% of adolescents had correct occlusion (22). How is it in the area of Murska Sobota, Slovenia? In Slovenia, the preventive program has its legal basis (23). The Health Center in the city of Murska Sobota, in the north east of Slovenia, takes care of the implementation of preventive examinations for 11 elementary schools and 5 branch schools.
The health center has been performing regular systematic examinations of elementary school pupils for several decades. Unfortunately, in 2020, due to the COVID-19 pandemic, not all pupils could be examined. Dentists for children and youth identify anomalies at the systematic examination and, if necessary, refer the child to an orthodontist. They are a key factor in identifying malocclusions and referring to a specialist; therefore, their knowledge and involvement are essential. The aim of the study is to determine the prevalence and types of malocclusion among school children over 4 school years, as registered by general dentists.

Methods

The research was conducted as a retrospective epidemiological study over 4 consecutive school years, beginning with 2015/16 (through 2018/19). The school year in Slovenia always starts in September and ends in June of the following year. Students start attending school at 6 years of age (1st class) and end their primary education at approximately 15 years (9th class); in the 6th class they are 12 years old. Data were obtained from systematic examinations of students, which were performed once a year. The systematic examinations were carried out by 4 dentists in their practices. Pupils were examined in the dental chair with a dental mirror, a probe, an air syringe, and a light. Anomalies were registered as Angle classification (class II and III), edge-to-edge (tête-à-tête) bite, deep bite, open bite, crossbite, aplasia, crowding, ectopic eruption and supernumerary teeth. The condition was noted on a form that includes the following data: the type of anomaly, whether the pupil is in orthodontic therapy, the pupil's name and surname, school, class, date, and the signature of the dentist. Children with systemic diseases that may affect the occurrence of anomalies were not registered. Children in orthodontic therapy were registered as »in orthodontic therapy«; their anomalies were not registered. The completed forms were sent to a specialist in paediatric dentistry, who performed the data processing.

Results

School year 2015/16: The highest percentage of anomalies was in the 7th class (13-year-olds). The lowest percentage of anomalies was in the 1st class (6-year-olds). The most common occlusal anomalies were deep bite (21.7%), crossbite (20%) and crowding (14.7%) of the registered anomalies (Table 1). There were 12 children with aplasia in the population: 1 child was missing teeth 15, 35, 12 and 22; 5 children were missing teeth 12 and 22; 2 children were missing teeth 35 and 45; 1 child was missing tooth 12; 1 child was missing tooth 22; 1 child was missing tooth 31; and 1 child was missing tooth 42. There were 29 children with Angle class III in the population. Table 2 shows the number of children who were in orthodontic therapy. Children have been treated from the 4th class (age 10) onward; the largest number of treated children is in the 8th class (age 14) (Table 2). The chi-square test showed that the percentage of girls with malocclusions was higher than that of boys (Table 3).

School year 2016/17: There were 19 children with Angle class III in the population. Table 5 shows the number of children who were in orthodontic therapy (Table 5). The chi-square test showed that both genders were equally represented (Table 6).

School year 2017/18: The highest percentage of anomalies was in the 7th class (13-year-olds). The lowest percentage of anomalies was in the 1st class (6-year-olds).
The most common occlusal anomalies were Angle class II (29%), deep bite (21.8%) and crowding (14.2%) of the registered anomalies (Table 7). There were 54 children with Angle class III in the population. Table 8 shows the number of children who were in orthodontic therapy. Children have been treated from the 5th class (age 11) onward; the largest number of treated children is in the 9th class (age 15) (Table 8). The chi-square test showed that the percentage of boys with malocclusions was higher than that of girls (Table 9).

School year 2018/19: The highest percentage of anomalies was in the 3rd class (9-year-olds). The lowest percentage of anomalies was in the 1st class (6-year-olds). The most common occlusal anomalies were Angle class II (30.4%), deep bite (22.3%) and crowding (16.9%) of the registered anomalies (Table 10). There were 16 children with aplasia in the population: 7 children were missing teeth 12 and 22; 1 child was missing tooth 31; 1 child was missing tooth 41; 2 children were missing tooth 12; 2 children were missing tooth 22; 1 child was missing teeth 22 and 42; 1 child was missing tooth 15; and 1 child was missing tooth 45. There were 43 children with Angle class III in the population. Table 11 shows the number of children who were in orthodontic therapy. Children have been treated from the 5th class (age 11) onward; the largest number of treated children is in the 7th class (age 13) (Table 11). The chi-square test showed that the percentage of boys with malocclusions was higher than that of girls (Table 12).

Discussion

The highest percentage of anomalies was at the age of 13, and the lowest at the age of 6. The most common anomalies were deep bite, crossbite, crowding and Angle class II. The most commonly missing teeth (aplasia) were 12 and 22 (together or separately) and 35 and 45 (together or separately). Orthodontic therapy started at the age of 10/11, and the largest number of children in orthodontic therapy were 15 years old. The chi-square tests showed a statistically significant difference between the two groups: boys had more anomalies. A weakness of the study is that orthodontic measurements were not used, but this was not the aim; it was also not known how many children were referred to orthodontists. The strength of the study is that it lasted four years and determined the prevalence and variants of orthodontic anomalies as found by general dentists, who had the obligation to recognize anomalies and refer them to an orthodontist. The study illustrates the fluctuation of orthodontic anomalies over the 9 school years, the years when the greatest changes in growth and development occur. The study found cases of aplasia; the most commonly missing teeth were the upper lateral incisors and the lower second premolars. Even if several anomalies are attributed to a transitional phase of physiological tooth exchange (in the early mixed dentition), there was a large disparity between the percentage of anomalies and the number of children in orthodontic therapy, especially since early orthodontic treatment would be beneficial and desirable to address skeletal and dental discrepancies and to correct habits, dysfunction and malocclusion in their early stages (24). Many studies have been published describing the prevalence and types of malocclusion in particular populations, but they are difficult to compare (varying methods and indices for assessing occlusal relationships) (25). Our study showed a higher percentage of malocclusion than Bandaru's study
(26). The percentage of hypodontia in this study is lower than in Kazanci's study (27). To clarify the need for orthodontic treatment and to plan services, it would be useful to add orthodontic findings; we would then more likely get an accurate answer as to why the number of children in orthodontic therapy is small. Monitoring the same children over the years could be a good source of information on the development of occlusion (perhaps an idea for the next study).

Conclusion

The lowest percentage of malocclusion was in the deciduous dentition. There was a high percentage of malocclusion, about 50%, at the end of schooling (age 15), while the number of children in orthodontic therapy at age 15 was low. Systematic examinations are a good source of prevalence information and a starting point for further planning. The preventive program needs to be updated to reduce the high prevalence of malocclusion.

Declarations

Ethics approval and consent to participate: The study used the data from systematic examinations. The parents/legal guardians of the students signed a statement that the students were participating in the implementation of the program. Written consent was thus obtained from the parents/legal guardians.
2021-08-27T16:19:45.357Z
2021-01-08T00:00:00.000
{ "year": 2021, "sha1": "0c1add23247790a93e3bb52123dbc9446d343f93", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-138568/v1.pdf?c=1631884494000", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "6692418cc82598487f0041cd2c79d185c8677bd2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
27834389
pes2o/s2orc
v3-fos-license
Test of Equivalence Principle at $10^{-8}$ Level by a Dual-species Double-diffraction Raman Atom Interferometer

We report an improved test of the weak equivalence principle by using a simultaneous $^{85}$Rb-$^{87}$Rb dual-species atom interferometer. We propose and implement a four-wave double-diffraction Raman transition scheme for the interferometer, and demonstrate its ability in suppressing common-mode phase noise of Raman lasers after their frequencies and intensity ratios are optimized. The statistical uncertainty of the experimental data for the E\"{o}tv\"{o}s parameter $\eta$ is $0.8\times10^{-8}$ at 3200 s. With various systematic errors corrected, the final value is $\eta=(2.8\pm3.0)\times10^{-8}$. The major uncertainty is attributed to the Coriolis effect.

The equivalence principle, including the weak equivalence principle (WEP), also known as the universality of free fall, is one of the two assumptions of Einstein's general relativity. Theories which try to unify gravity and the standard model generally require violation of the WEP [1]. To explore the applicable extent of the WEP and to help the birth of new quantum gravity theories, it is very important to precisely test the WEP both with macroscopic objects and with microscopic particles. The WEP has been tested experimentally with large objects by lunar laser ranging [2] and torsion balances [3] at the 10^{-13} level, while with atoms it has been tested only at the 10^{-7} level. The test with atoms relies on atom interferometry, which has been developed for over 20 years [4] and has been widely used in measurements of gravity [5] and its gradient [6], the Newtonian gravitational constant [7], gravitational redshift [8], and post-Newtonian gravity [9]. Fray et al. [10] performed the first atom-based WEP test using an atom interferometer (AI), with an Eötvös value of η = (1.2 ± 1.7) × 10^{-7}, by measuring the gravitational accelerations of the isotopic 85Rb and 87Rb atoms. Ten years later Bonnin et al. [11] reported the same test to a similar accuracy of η = (1.2 ± 3.2) × 10^{-7} by using simultaneous dual-species (85Rb and 87Rb) AIs. A non-isotopic pair of atoms, 87Rb and 39K, was also used recently by Schlippert et al. [12]; they tested the WEP with η = (0.3 ± 5.4) × 10^{-7}. In addition, the bosonic and fermionic isotopes of strontium atoms were also used to test the WEP; the value was (0.2 ± 1.6) × 10^{-7} [13]. On the other hand, the current single-species AI technique has reached very high resolution [14,15], which could in principle push the AI-based WEP test to a much higher accuracy than 10^{-7}. The main obstacles are complex noise that is difficult to reject as common mode, and the crosstalk of different laser frequencies in a dual-species AI. Here we propose a simultaneous dual-species double-diffraction Raman AI and demonstrate a new WEP test with it. We design and realize a four-wave double-diffraction Raman transition (4WDR) scheme by carefully selecting the frequencies and intensity ratio of the Raman beams to avoid the crosstalk among different lasers. The 4WDR is based on the single-species double-diffraction Raman AI [16,17], but extended to two species (85Rb and 87Rb).
In the 4WDR scheme the frequencies and intensity ratios of the Raman beams are chosen to meet the following requirements: 1) the four frequencies are far off-resonant from all of the resonance lines of the rubidium isotopes; 2) the intensities of the two chirped lasers (ω_1 and ω_2) are equal, to ensure that the Rabi frequencies corresponding to the two counter-propagating wave vectors in each double-diffraction Raman transition are equal, so that atoms recoil onto the two interference paths with the same probability; 3) the corresponding Rabi frequencies of the two species' AIs are the same; 4) for the dual-species Raman transitions, the total AC Stark shift caused by the four Raman beams is zero. To find the optimal parameters, we calculate the AC Stark shift spectrum of rubidium atoms (see Fig. 1(a)). To cancel the AC Stark shifts in both species' AIs, some Raman frequencies should lie between the cooling-laser frequencies. Here, δ_i is the detuning of ω_i (i = 1, 2), with δ_1 = δ_2 = υk_eff/2π, where υ is the projection of the atomic velocity along the direction of the wave vector, and k_eff = k_1 + k_2 + 2k_3 (for 85Rb) or k_1 + k_2 + 2k_4 (for 87Rb) are the effective wave vectors of the Raman lasers; υk_eff/2π equals the Doppler shift of the atoms. ω_1 and ω_2 are detuned by ∆_1 = 971 MHz and ∆_2 = 2097 MHz, respectively, to the blue sides of the transitions F = 3 to F′ = 4 of 85Rb and F = 2 to F′ = 3 of 87Rb. Shown in the upper row of Fig. 1(b) is a polarization spectrum of rubidium atoms [18] for reference. Having fixed the above frequency locations, we then determine the intensities. We find that the optimal intensity ratios of the four Raman beams are I_1 : I_2 : I_3 : I_4 = 1.0 : 1.0 : 3.1 : 14.3, where I_i is the intensity of ω_i (i = 1 ∼ 4). A pronounced advantage of the 4WDR scheme is its capability to suppress the common-mode phase noise of the Raman lasers. This can be seen by writing the total phase shift [19] of a single-species interferometer (taking 85Rb as an example; see Fig. 1(c)) [16], in which ∆ϕ_j (j = A ∼ D) is the initial phase shift at site j. Since the Raman pairs (k_1, k_3) and (k_2, k_3) supply recoil momentum in opposite directions, the atom interference loop formed by the Raman pulse sequence is spatially symmetric [17]. By careful calculation we find that the initial phases of k_3 are canceled due to the opposite recoil processes in the interference loop, and the phase shift at each site only depends on the initial phases ϕ^j_{i0} of k_i (i = 1, 2). Similarly, for 87Rb atoms, the total phase shift of the lasers is independent of k_4 and is only sensitive to ϕ^j_{i0} (i = 1, 2). In other words, the 4WDR scheme is immune to the phase noise of both k_3 and k_4. The residual noise of ϕ^j_{i0} (i = 1, 2) can be common-mode rejected, since the 85Rb and 87Rb AIs share the same k_1 and k_2 (a toy simulation of this rejection is sketched below).
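A toy numerical illustration of the common-mode rejection (not the full 4WDR phase calculation): both species are given identical per-shot phase noise from the shared beams k_1 and k_2, plus small uncorrelated detection noise, and the differential phase comes out far quieter than either single-species phase. All noise levels are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_shots = 10_000

common_noise = rng.normal(0.0, 0.5, n_shots)   # rad, from shared k1/k2 phases
det85 = rng.normal(0.0, 0.01, n_shots)         # rad, uncorrelated detection noise
det87 = rng.normal(0.0, 0.01, n_shots)

phi85 = 1.234 + common_noise + det85  # arbitrary signal phases
phi87 = 1.334 + common_noise + det87

print(np.std(phi85))          # ~0.5 rad: a single-species phase is noisy
print(np.std(phi85 - phi87))  # ~0.014 rad: the differential phase is quiet
```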
The experimental setup [20] is a modified version of our earlier AIs [21,22]. Briefly, the magneto-optical trap (MOT) chamber is at the bottom of the setup and on the top is the fountain pipe, with the detection chamber in between. A pair of rectangular windows for two parallel probe beams are arranged along the horizontal direction of the detection chamber. Two round windows (Window-A, Window-B) are perpendicular to the axis of the two rectangular windows; they are used for collecting laser-induced fluorescence from 85Rb and 87Rb simultaneously for each shot of the fountain. Window-A is 30 mm higher than Window-B. All laser beams are supplied by the laser system, which is composed of a seed laser, a tapered laser amplifier, and several acousto-optic modulators (AOMs). The seed laser is stabilized by saturated absorption spectroscopy and its frequency is shifted by AOMs. The blue detuning of the Raman beams is realized by an electro-optic modulator [18]. Cold 85Rb and 87Rb atom clouds are prepared in the MOT, and then launched simultaneously by a moving-molasses process to form atom fountains. During the launching and falling process the 4WDR pulse sequence is applied. At the end, 85Rb and 87Rb are detected in parallel at Window-A and Window-B. By scanning δ_1 and δ_2 simultaneously at chirp rates of α_1 and α_2, respectively, the phase shifts of both the 87Rb and 85Rb AIs are obtained. By switching the frequencies of the two probe beams, 87Rb and 85Rb atoms in Window-B and Window-A are detected alternately. To evaluate the phase-noise suppression capability of the 4WDR scheme, a comparison experiment was performed. First, we shut off the Raman beam with frequency ω_2 and carried out a simultaneous 85Rb-87Rb dual-species atom interferometry experiment using the usual single-diffraction Raman transition method. An AOM driven by a triangle wave was used to modulate the phase of ω_3 to introduce a rapid phase change for the 85Rb atoms. The experimental data are shown in Fig. 2(a). Due to the complicated phase variation from the modulation, the 85Rb atom interference fringes disappear, while the visibility of the unperturbed 87Rb atom interference fringes is 48%. As a comparison, we then switched on the Raman beam of ω_2, so that the AI is in the double-diffraction configuration. The visibility of the 85Rb atom interference fringes, as shown in Fig. 2(b), is now about 20% even though they still suffer from the phase modulation of ω_3. This visibility is comparable with that of the 87Rb atoms. Meanwhile, as already demonstrated in [16,17], the phase sensitivity of the interference fringes obtained by the 4WDR method is improved by a factor of two (see Fig. 2(b)). Using the 4WDR Raman AI we made differential gravity measurements. For each fringe we repeat 40 measurements, and a single measurement takes 2.5 s. By sine curve fitting we determine the chirp rates corresponding to the centers of the fringes; they are α_1 = 25.10408 MHz/s for 85Rb atoms and α_2 = 25.10420 MHz/s for 87Rb atoms, respectively (a minimal sketch of such a fit is given below). The difference is mainly caused by the difference of the effective wave vectors.
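A minimal sketch of the sinusoidal fringe fitting used to locate the fringe centers; the fringe amplitude, period, and noise level below are placeholders rather than the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def fringe(alpha, A, B, period, alpha0):
    """Fringe model: population vs chirp rate alpha (MHz/s)."""
    return A + B * np.cos(2 * np.pi * (alpha - alpha0) / period)

alpha = np.linspace(25.1036, 25.1046, 40)          # chirp-rate scan (MHz/s)
true = fringe(alpha, 0.5, 0.2, 4e-4, 25.10408)     # synthetic fringe
data = true + np.random.default_rng(0).normal(0, 0.01, alpha.size)

# Initial guess close to the synthetic values; the fitted alpha0 is the
# chirp rate at the fringe center.
popt, pcov = curve_fit(fringe, alpha, data, p0=[0.5, 0.2, 4e-4, 25.1040])
print("fitted central chirp rate:", popt[3], "MHz/s")
```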
We set T = 70.96 ms, and the corresponding fitted phase difference is near π/2. The frequency difference between ω_3 and ω_4 causes a systematic error of −494.4 × 10^−8 g in the gravity differential measurement. The relative gravity difference (namely, the Eötvös parameter) can be obtained by η = (g_85 − g_87) / [(g_85 + g_87)/2]. The Allan deviation of the measurements of η is shown in Fig. 4. The deviation value σ_η in the dual-logarithm chart decreases as the square root of the averaging time τ. At τ = 3200 s the deviation is 0.8 × 10^−8. The well-behaved Allan deviation indicates that white noise is the dominant noise source in the experiment. This again shows that the 4WDR scheme has good common-mode noise suppression ability, at least as demonstrated here at the 10^−8 level.

To give an uncertainty budget of errors other than the direct experimental measurement, i.e., Type B errors, we make the following estimates. The frequency difference between ω_3 and ω_4 is still a major systematic error, but because the uncertainty of the laser frequency difference is less than 10 Hz, the uncertainty after correcting this error is only 3 × 10^−11. The fluctuation of the bias magnetic field in our experiment is less than 1 mG, so the uncertainty of η due to the second-order Zeeman shift is less than 1 × 10^−10. Due to the tiny but nonzero differences between 85Rb and 87Rb atoms in mass, launch velocity, and recoil velocity, the central positions of the two species' atom clouds do not completely overlap during the free-fall process. The Coriolis effect, caused by Earth's rotation coupling with the free-falling atoms through their horizontal velocity distribution and the fluctuations of the initial positions and velocities of the two species, is another uncertainty source for the Eötvös parameter. The uncertainty of the horizontal position difference of the two clouds is less than 2 mm, and the uncertainty of the velocity difference is less than 1 mm/s. Considering the latitude of our laboratory (north latitude 30.54°), the calculated uncertainty caused by the Coriolis effect is 2.9 × 10^−8. The vertical position difference of the 85Rb and 87Rb atom clouds is 0.23 ± 1.00 mm, thus the gravity-gradient-based systematic error is less than 7 × 10^−11, and its uncertainty is 3 × 10^−10. In our experiments, the fluctuation of the laser intensities is less than 10%, and the uncertainty of η due to AC Stark shifts is measured in independent experiments to be less than 2 × 10^−9. All of the above-mentioned main contributions affecting the differential acceleration measurement are listed in Table I. Including all statistical uncertainties and errors (Type A and B) together, the total uncertainty of the η value is 3.0 × 10^−8.

To further reduce the uncertainty, the Coriolis effect should be canceled. This can be done by rotating the mirrors [23] reflecting the Raman beams. The signal-to-noise ratio should then be increased in our experiment by involving more and further-cooled atoms, and by suppressing residual noises such as seismic vibration with active vibration isolation [24]. Finally, 10-meter fountain AIs [15,20] or even AIs in space [25] will come into play with their ultrahigh sensitivity.
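The quoted total uncertainty can be cross-checked against the individual contributions listed above. The sketch below assumes a simple quadrature (root-sum-square) combination of the statistical term and the Type B terms; the exact combination rule used for Table I is not spelled out in the text, so this is only an illustrative consistency check.

```python
import math

# Uncertainty contributions to the Eotvos parameter, in units of 1e-8,
# taken from the values quoted in the text.
contributions = {
    "statistical (Allan deviation at 3200 s)": 0.8,
    "Coriolis effect":                         2.9,
    "AC Stark shifts":                         0.2,    # < 2e-9
    "second-order Zeeman shift":               0.01,   # < 1e-10
    "gravity gradient":                        0.03,   # 3e-10
    "omega_3/omega_4 frequency difference":    0.003,  # 3e-11
}

# Assumed quadrature combination of all terms
total = math.sqrt(sum(v ** 2 for v in contributions.values()))

for name, value in contributions.items():
    print(f"{name:42s} {value:6.3f} x 1e-8")
print(f"{'combined (quadrature)':42s} {total:6.3f} x 1e-8")
# -> about 3.0 x 1e-8, consistent with the total quoted in the text
```

Under this assumption the Coriolis term dominates the budget, which is consistent with the text's emphasis on canceling the Coriolis effect as the next step.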
In summary, we developed a simultaneous dual-species (85Rb-87Rb) cold AI in which the proposed 4WDR scheme was used and demonstrated to have obvious advantages in rejecting common-mode noise. The 4WDR AI carries forward all the features revealed in its single-species counterpart, including a larger interference loop, better phase sensitivity, and suppression of the phase noise of external fields. It also has the new ability of suppressing the common-mode phase noise of the Raman lasers in the dual-species case. With this new type of AI, we made a new WEP test at the 10^−8 level and found no violation of the WEP. This work advances WEP tests with atoms by improving the accuracy by about one order of magnitude.

We acknowledge Jun Luo, Tianchu Li and Jun Ye for their helpful discussions and suggestions on error evaluation and data analysis. This work was supported by the National Basic Research Program of China under Grant No. 2010CB832805, by the National Natural Science Foundation of China under Grant Nos. 11227803 and 91436107, and also by funds from the Chinese Academy of Sciences.

FIG. 1. (color online) Schematic diagram of the 4WDR scheme. (a) AC Stark shift spectrum of rubidium atoms. (b) Lasers with frequencies ω_i (i = 1 ∼ 4) are used as Raman beams for the 85Rb-87Rb dual-species AI; δ_1 is the detuning of ω_1, δ_2 is the detuning of ω_2. ω_1 and ω_2 are detuned to the blue side of the transitions F = 3 to F′ = 4 of 85Rb and F = 2 to F′ = 3 of 87Rb. (c) Diagram of a double-diffraction Raman AI using k_1, k_2, and k_3, where the blue lines are the paths of atoms in the ground state F = 2, while the red dashed lines are for the excited state F = 3.

FIG. 2. (color online) Phase noise suppression by the 4WDR method. A rapid phase modulation is applied to the 85Rb atoms. (a) Simultaneous 85Rb-87Rb interference fringes obtained by the single-diffraction Raman transition method and (b) simultaneous 85Rb-87Rb interference fringes obtained by the 4WDR scheme. The red triangles are experimental data points of 85Rb atoms, and the red dotted line is a sine curve fit. The blue dots are experimental data points of 87Rb atoms, and the blue solid line is a sine curve fit.

FIG. 3. (color online) Population of 87Rb in the F = 2 state vs. population of 85Rb in the F = 3 state (a) and data for gravity differential measurements (b). The systematic error caused by the difference of the effective wave vectors of 85Rb and 87Rb is corrected. Data A are obtained by probing 87Rb atoms at Window-A while probing 85Rb atoms at Window-B; Data B are obtained by exchanging the probe positions. The black line and the blue line are the average values of Data A and Data B, respectively. The average of Data A and Data B is 2.8 × 10^−8 g, shown as the red line.

TABLE I. Main contributions affecting the differential gravitational acceleration measurement.
2015-03-02T03:18:29.000Z
2015-03-02T00:00:00.000
{ "year": 2015, "sha1": "7ee2e6f46f1ffb63ebafc87c0aaa464fded7a299", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1503.00401", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7ee2e6f46f1ffb63ebafc87c0aaa464fded7a299", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
53013550
pes2o/s2orc
v3-fos-license
Responses of persons at risk of suicide: A critical interpretive synthesis Abstract Aim Several nursing studies focus on suicidal persons; yet, a synthesis of such research is unavailable. The aim of this review was to give an inclusive understanding of responses of persons at risk for suicide that guides clinical nursing practice and research. Design A reflexive and iterative study design was used in this study. Method A qualitative content analysis and a systematic review of literature guided the six‐phase Critical Interpretive Synthesis . A sample consisting of 24 nursing studies published during 1994–2017 were included in this study. Results Key concepts found were “Disengaged while fraught with affliction”; “Readiness to engage in life”; and “Engaging through caring and confirming humanity.” Contextually, there are gaps in global nursing knowledge. Conceptually, three key concepts can guide the nursing practice and give an impetus for the research. Methodologically, the Critical Interpretive Synthesis served as a helpful way to summarize and synthesize a small sample size into an aggregate body of knowledge. An evidenced‐based understanding of responses of persons at risk for suicide can guide nurses to ensure safety, promote hopeful recovery, and foster resilience. "turning points,"; and "suicide and coping." A follow-up article (Lakeman, 2010) described how these five themes informed about nursing practice and empathic care. Of note is that the inclusion criteria for the Lakeman and Fitzgerald study (2008) was not solely nursing research; 16 of the 20 articles were authored by non-nurses. An aggregate study of evidencedbased nursing research on this topic is unavailable in published literature. An updated inclusive view of this topic based on a synthesis of qualitative and quantitative nursing research could increase and enhance nursing's global evidenced-based knowledge, guide nursing practice while ensuring patient safety and serve as an impetus for future nursing research. | AIM The aim of this review was to give an inclusive understanding of responses of persons at risk for suicide based on the published nursing research. The intent was to guide nurses as they encounter persons at risk of suicide in clinical nursing practice and to inform future nursing research. | ME THOD A Critical Interpretive Synthesis (CIS) was conducted (Dixon-Woods, et al., 2006). This six-phase iterative, reflexive review approach consisted of formulating the review question, search the literature, sampling, determining quality, extracting data, and developing an interpretive synthesis. Such reviews guide selecting relevant, quality works from a large amount of available information and increasing clarity of current knowledge; systematic reviews can offer workable practice solutions (Sandelowski, 2008). | Formulating the review question The review question was: "What is a CIS of nursing research addressing responses of persons at risk for suicide?" The aim was to increase the understanding that would benefit nursing practice and give an impetus for future research. The review question was initially apparent to the researchers because they had previously conducted a CIS on nurses' experiences with suicidal patients (Talseth & Gilje, 2011). | Searching the literature The search strategy proceeded in several phases. To begin, we developed inclusion and exclusion criteria. 
Inclusion criteria were nursing research studies with at least one nurse as the author, published in peer-reviewed nursing journals and peer-reviewed healthcare journals between 1990 and 2007 in English language, and available in electronic databases. Content criteria for inclusion were studies that focused on patients' responses to being suicidal. Exclusion criteria were all studies in non-English languages, and those classified as reviews, published in books, and dissertations. Identification of records occurred though searching electronic databases CINAHL Medline, Ovid, Psychinfo, and PubMed using the following search terms for each database: CIS, experiences, nursing research, patients, suicide, suicidal. These terms were searched individually and collectively resulting in 86 studies. Eight additional studies identified through other sources (i.e., reference lists of the 86 studies) were also included for screening. The screening was carried out by examining titles and abstracts of the 94 studies. A total of 20 duplicate studies were found and were removed. With further screening, 37 of the 74 studies were excluded because they did not fit the inclusion criteria. The remaining 37 full-text studies were then assessed by both authors for eligibility based on relevancy to the aim of this study and the inclusion criteria. Of the 37 eligible studies, 13 were excluded based on sampling, authorship, secondary studies, or did not meet quality appraisal. Based on results of PRISMA, 24 nursing research studies were included in the sample (see Figure 1). | Sampling From the systematic review of articles, the purpose sample comprised of 24 studies published from 1994 -May 2017. While our literature search started with the publication year 1990, the first published study found was dated 1994. Our rationale for including this study in the sample was: to focus on an inclusive perspective of the research topic of interest; the study's research question and results were relevant; and the study met inclusion criteria (Table 1). Sixteen studies were published in peer-reviewed nursing journals. Four were published in refereed health-related journals (i.e., Gerontologist, Psychiatric Services, Suicide and Life-Threatening Behavior and International Journal of Mental Health). Publication dates ranged from 1994 to May 2017; most were published since 2000. Most study designs were qualitative (N = 23) and published after 1994. The only quantitative (N = 1) study was published prior to 2008. Most studies were reportedly situated in Europe and North America, and a few in Asia, United Kingdom, and Australia. Inpatient psychiatry was the most common setting; only some were in general and psychiatric settings. Other settings included emergency and psychiatric, nursing home, veterans' home, substance abuse treatment, and education. One study location was unreported ( Table 2). The sample participants totalled 1,273. The majority (N = 849) of the participants were high school students; the remainder were mostly psychiatric patients. More than half (N = 18) of the study was conducted in psychiatric settings (inpatient, general and psychiatric, and emergency and psychiatric; Table 3). | Determining quality All studies met the inclusion criteria. Eighteen were published in nursing journals and two in medical-psychiatric journals. The one quantitative study was evaluated using the Jadad scale (Jadad, et al., 1996); the 23 qualitative studies were evaluated using a Critical Appraisal Skills Program (CASP) (2006). 
The Jadad scale was developed to evaluate the quality of reports of randomized trials. It is used in meta-analyses and systematic reviews. It involves a three-point questionnaire ranging from 0 to 3 assessing randomization, blinding, and withdrawals/dropouts. Points are added if randomization and blinding are appropriately described. Critics of the scale have identified 10 flaws (Berger, 2006), noting that it overemphasizes blinding and that inter-rater reliability needs to be further evaluated (Clark, Huët, Salmi, & Laupacis, 1999). The authors agreed on including the Jadad scores, which ranged from 1-3 points. The CASP (2006) is a 10-item tool addressing the aim, method and design, sampling, data collection and analysis, ethical issues, validity, and relevance of results in qualitative studies. This tool calls for the rating of each of the 10 questions as "yes," "no," or "can't tell." The authors chose to rate responses as "yes" or "no" because they found that these were the most helpful responses. Of the 10 questions, 9 were rated as "yes" responses in all studies. The question about ethical issues lacked the most positive responses. This question includes explaining the research to participants, adequately considering the researcher-participant relationship, discussing informed consent and confidentiality, and whether ethical committee approval had been sought. Eight of the studies had a CASP score of at least 8/10 because they did not report ethical standards. Reporting ethical standards is of concern because of its importance in all research (Lakeman & Fitzgerald, 2009), and most qualitative research views participants as co-researchers (Table 4). As we examined the quality determinants, contemplated their variability, and considered the historical evolution of nursing research, we concluded that all studies in the sample would add to an interpretive synthesis. For example, we were aware that eight of the 24 studies were published before 2000, when quality measures in nursing research were not emphasized. Over time, quality measures have become highlighted and commonly explicated in nursing research. Inherent in qualitative designs is interpretation; we acknowledge that qualitative findings are one of many interpretations. In view of these considerations, we came to a shared understanding to include all 24 studies in the CIS, weighing them equally as we synthesized them into a conceptual understanding of responses of persons at risk for suicide.

| Extracting data

Qualitative content analysis processes of organizing and summarizing data were used for extracting data (Granheim & Lundman, 2004). Further qualitative content analysis focused on condensing the extracted text into subthemes. Through the authors' dialogical conversations and in-depth reflections, consensus on themes emerged. These themes were "Struggling desperately losing touch with self, others and the world; Grasping engagement releasing affliction; Pondering ways of being kept safe while moving from affliction toward the future; Contemplating meaningfulness of nurses' relating and care that fosters desire to live; Valuing support of nurses, family and systems" (Table 5). Through the unfolding and enfolding iterative and reflexive process, a CIS emerged.

| RESULTS

Relating, reflecting, translating, and weaving subthemes with themes resulted in three interpreted, synthesized concepts which describe responses of persons at risk of suicide.
These concepts are: "Disengaged while fraught with affliction"; "Readiness to engage in life"; and "Engaging through caring and confirming humanity."

| Concept 1. Disengaged fraught with affliction

Concept 1 emerged from the theme "Losing touch with self, others and the world." This theme had five subthemes. The first subtheme describing persons at risk conveyed "deep struggles with turbulent disconnectedness with self and others." Disconnectedness was portrayed as, for example, psychological pain, inability to adjust, and cognitive constriction (Valente, 1994). Disconnectedness with others happened through isolation from families, conflicts with family and coworkers, poor role models, and death (Haight & Hendrix, 1998; Ku, Tsai, Lin, & Lin, 2009). The second subtheme was "being alienated from self and others while striving to live." Alienation involved being controlled and being rebuffed by family instead of being connected to others, while being caught between being responsible for family yet responsible to strive to live for one's self (Tzeng, 2001). "Losing touch" with the world was another description that conveyed alienation (Vatne & Nåden, 2012). The third subtheme, "Being ashamed, consumed by shame and desperation," was related to an impulse to hide or escape from shame (Wicklander, Samuelsson, & Asberg, 2003). Amidst struggling with disconnectedness, persons at risk reflected on "being perplexed about meaning in life," the fourth subtheme. This subtheme referred to questioning meaning. Questioning meaning related to psychache, powerlessness, and perceiving that no one cared (Moore, 1997; Biong & Ravndal, 2007). The fifth subtheme, "Struggling to grasp self, self-responsibility and self-development," involved searching for strength, seeking to be understood, refusing to be violated, and being responsible for one's own safety (Holm & Severinsson, 2011). This concept addresses struggling desperately to connect yet being disconnected.

| Concept 2: Readiness to engage in dialogue

Concept 2 was synthesized from one theme and two subthemes. These were "Pivoting from being disconnected to connected through self-worth, safety and hope" and "Opening up dialogue in the midst of becoming connected." Readiness was revealed as shifting from loss of support, loss of hope, lack of self-esteem, loneliness, abuse, and searching for release, to trying to regain hope and self-worth (Lin, Huang, Chen, & Shao, 2009).

| Concept 3: Engaging through caring and being confirmed

Concept 3, "Engaging through caring and being confirmed," emerged from three themes. The first theme, "Ponder ways of being safe and connected," had two subthemes. The first subtheme was "Imaging a positive future through art." The use of art connected persons at risk with their emotions, rekindled their dreams, restored their identity, and helped them regain control while imaging the future (Walsh & Minor-Schork, 1997). The second subtheme was "Reflecting on therapeutic and nontherapeutic ways of feeling safe and being supported in the midst of distress." This subtheme contrasted nontherapeutic (Cardell & Pitula, 1999; Pitula & Cardell, 1996) with therapeutic (Cardell & Pitula, 1999) aspects of constant observation. The second theme in concept 3 was "Contemplating meaningfulness of nurses' relating"; three subthemes formed the basis for the emergence of this theme. The first subtheme, "Considering importance of confirming-lack of confirming care from nurses," contrasted the presence and absence of confirming care.
Confirming care was experienced when basic needs were met and one was seen, given time, conveyed hope, and not judged, while lack of confirming care dealt with unmet needs, not being seen, not being given time, lack of hope, and being judged (Talseth, Lindseth, Jacobsson, & Norberg, 1999). Being confirmed was also sensed as being understood, while noncaring evoked burdensome feelings, fostering risk of suicide (Samuelson, Wiklander, Asberg, & Saveman, 2000). The second subtheme pertaining to concept 3, theme 2, was "Meaningful caring as engagement, openness, trust and respect that re-connects with humanity and fosters learning to live." When psychosocial needs were met, engagement reconnected one with humanity as nurses reflected an image of humanity, guiding one back to humanity while learning to live (Cutcliffe, Stevenson, Jackson, & Smith, 2006). The third subtheme for concept 3, theme 2, was "Sensing being understood through the presence of caring in health personnel who actively listened," which focused on meaning and inspired hope. This subtheme addressed the therapeutic interaction with nurses that reduced, for example, isolation, loss of control, distress, and objectification (Lees, Proctor, & Fassett, 2014). Encounters with healthcare personnel were described as the presence or absence of openness and trust, and being met or not being met by someone who acknowledged the topic of suicide and conveyed mutual respect (Vatne & Nåden, 2014). Caring encounters and caring cultures in an atmosphere of wisdom fostered resuming or assuming self-responsibility and inspired hope (Vatne & Nåden, 2016b). The third theme embedded in concept 3, "Valuing support from nurses, family, health system and others," emerged from two subthemes. The first subtheme, "Desiring support from healthcare system and nurses," involved support of psychosocial needs, being loved and esteemed by nurses, and being in control of life (Carrigan, 1994). The second subtheme was "Support from family and someone who cares, the desire to live and connectedness alleviated suicide risk." Experiencing connectedness and someone who cared, and awareness of one's desire to live (Vatne & Nåden, 2016a), along with family support, alleviated suicide risk (Sharaf, Thompson, & Walsh, 2008).

TABLE 5 (excerpt). Condensed meaning units from key study findings about "Experiences of persons at risk for suicide," with codes, subthemes, and themes.

| Contextual views

The context for most of the sample studies was Europe and North America. According to the World Health Organization (2016), the estimated suicide rates in the Region of the Americas are, in general, lower than in other WHO regions, while the South East Asia Region has the highest estimated global suicide rate and the European Region is above the global average. However, published nursing research-based studies from regions with high as well as low suicide rates are very sparse. Throughout history, the topic of suicide has been taboo in many areas of the world. Currently, the topics of suicide and suicidal persons are multidimensional, with cultural attitudes and contexts having an impact on research. Of importance is that much more research is needed in various contexts and geographical distributions and in a variety of clinical settings. In this CIS, 18 of the 24 studies were reportedly conducted in psychiatric settings.
However, suicide risks also occur in nonpsychiatric settings, including medical-surgical units (Neville & Roan, 2013).

TABLE 5 (continued). Condensed meaning units from key study findings about "Experiences of persons at risk for suicide," with codes, subthemes, and themes.

| Conceptual views

Conceptually, this CIS contributes to a more inclusive understanding of responses of persons at risk for suicide. The three concepts, which are not linear but rather interwoven, can guide and direct nurses' understandings, assisting persons at risk of suicide to survive suicide risk and go on living. "Disengaged fraught with affliction" reflects the desperate struggle of persons at risk, losing touch with self, others, and the world. This is experienced as being alienated, consumed by shame, and trying to grasp a sense of self and self-responsibility while suffering with psychache. Psychache is extreme psychological pain (Sperber, 2011).
Losing touch with the world alienates, experienced as "being rebuffed" by others (Tzeng, 2001), being estranged from nature, others, and self' (Sperber, 2011) and a way of "not being-in the world" (Heidegger, 1972). Alienation encompasses loneliness and despair. It can be understood as being cut-off from one's existence, perplexed with meaning in life. Shame is a mortifying experience involving one's own self-evaluation of one's actions or feelings. Shame can be understood as unworthiness of the whole self (Kalafat & Lester, 2011) accompanied by a desire to flee (Tzeng, 2001;Vatne & Nåden, 2012) and extreme withdraw from the situation. Suicide becomes the ultimate withdrawal (Kalafat & Lester, 2011). When experiencing shame, we lose touch with our existence (Vatne & Naden, 2012); we fear losing the world, others and our self. Self-responsibility includes self-control, being in control of life (Carrigan, 1994). For those at risk of suicide, control is about struggling with self to maintain control or to grasp regaining control (Crocker et al., 2006). Control involves being more or less connected. "Grasping engagement releasing affliction" reveals persons at risk shifting from extreme disconnectedness to connectedness. Afflicted with shame and low self-worth disconnects one from self and others. As self-worth increases and shame decreases, dialogue opens up for connecting through engagement. As engagement evolves, persons at risk begin to see themselves in the light of another person. Engagement, then, can relieve shame and foster attaching value to one's existence (c.f. Valente, 1994;Ku et al., 2009). Feeling safe from suicidal thoughts and impulse safe in encounters with others involves "connection, protection and control," essential to recovery from suicidal crises (Berg, Rørtveit, & Aase, 2017). It is evident that connection is important for safety. Similarly, control is important for safely. While safety includes but is more than a technical, physical intervention, of importance, safety is also about regaining emotional balance (Berglund et al., 2016) as well as engagement (c.f. Berg et al., 2017;Cutcliffe & Baker, 2002). While grasping for engagement, hope wavers. Hope can waver to and fro; it can be very temporary. It needs repeating over and over while grasping connectedness (Berglund et al., 2016;Cutcliffe, 2007). Hope diminishes alienation, affirms self-worth, fosters safety, evokes a sense of engagement, opens up for dialogue, releasing affliction. "Engaging through receiving meaningful care and being confirmed inspires hope" is about meeting needs and being understood, esteemed and supported. These responses confirm one's humanity and inspire a desire to move into the future (Cutcliffe & Baker, 2002). Engaging meaningfully with self and others echo a kind of "being at home-or at homeness" (Zingmark, Norberg, & Sandman, 1995). "Being at home" confirms one's humanity. Persons afflicted with suicide risk are "not at home"; they need to become ready for "being at home." Being at home is about being in relationship, engaging meaningfully, and experiencing being confirmed. Being confirmed is a most significant aspect of life (Cissna & Sieburg, 1981, p. 259), fostering a desire to live and fostering hope. Experiencing hope gives way to strength to manage problems and can bring forth self-control/self-responsibility (Berglund et al., 2016). As persons at risk engage meaningfully with nurses, they can begin to feel "at home" and self-worth can emerge. 
Self-worth emerges from experiencing confirmation. Confirmation means giving the other person the following messages: "To me, you exist! -We are relating! -To me, you are significant! -Your way of experiencing your world is valid" (Cissna & Sieburg, 1981, p. 259). All human beings want to be confirmed for what they are and even for what they can become (Buber, 1957, pp. 102-103). Making the other present means imagining what he or she perceives, feels, and wishes in the moment (Cissna & Sieburg, 1981, p. 258; Buber, 1957, pp. 102-103). The desire to live, to be hopeful, is essential for those at suicide risk. Hope and caring are processes integrally woven together. These processes involve a human-to-human relationship, unconditional acceptance and tolerance, being heard, being understood, and feeling that one's life has value (Cutcliffe & Baker, 2002). Hope is also connected to confirming care experienced through being given time, being acknowledged, and not being judged yet sensing hope (Talseth et al., 1999). Similarly, being confirmed is being understood (Samuelson et al., 2000). During the processes of being cared for and being confirmed, at-risk persons can be guided back to humanity and learn to live (Cutcliffe & Barker, 2002). From these processes, hope can emerge, revealing readiness for consolation (Talseth, Gilje, & Norberg, 2003; Vatne & Nåden, 2012). Readiness for consolation emerges from opening up to move into the future. This is facilitated through examining ways for hopeful recovery. Recovery is a process that involves opening up to others to be consoled (Lakeman & Fitzgerald, 2008).

| Methodological views

The CIS approach (Dixon-Woods et al., 2006), along with qualitative content analysis (Granheim & Lundman, 2004) and systematic review of literature (Sandelowski, 2008), provided orderly ways to sort and arrange data through extraction-condensation processes. The extracted data were formulated into condensed meaning units, codes, subthemes, and themes. Reflection and clarification of the themes led to the formation of three key concepts that address the aim of the CIS. We thoughtfully considered the quality determinants of the data, which varied, yet decided on the sample of 24 studies, realizing that each contributed to the aggregate findings.

| LIMITATIONS

As described above, the sample size was small and most studies were conducted in Europe and North America. Yet, the size was sufficient to address the research question and generate collective understandings of the topic. Of note is that suicide is a complex topic imbued with diverse cultural values and interpretations. Both authors are experienced psychiatric mental health nurse educators and qualitative researchers familiar with responses of suicidal patients. One author is from Norway and the other author is from the United States. While we view our backgrounds as both strengths and limitations to our interpretive lenses, we acknowledge that the interpretations in this study are one of many (Ricoeur, 1976).

| CONCLUSION

This paper presents a CIS of nursing research studies (N = 24) published from 1994 to 2017 on responses of persons at risk for suicide, a very vulnerable population whose safety is paramount. This understanding, based on a small sample of accumulated research-based nursing literature, expands contextual, conceptual, and methodological views of this topic. Contextually, gaps are apparent in international research. Most studies were conducted in Europe and North America, and in psychiatric settings.
Of note is that the context of research influences understandings of this culturally situated, sensitive topic of suicide. Conceptually, the three key concepts (i.e., Disengaged fraught with affliction; Readiness to engage in dialogue; and Engaging through caring and being confirmed) reveals a way of understanding responses of persons at risk for suicide. These concepts can guide nurses in clinical practice as well as research. Methodologically, the systematic review of literature and qualitative content analysis served as reasonable ways to organize data. The results can direct researchers in diverse areas of the world to further investigate responses of persons at risk for suicide. Of importance is that nurses address suicide as a preventable public health problem. | RELE VAN CE TO CLINI C AL PR AC TI CE Many nurses will encounter vulnerable persons at risk for suicide (Lakeman, 2010). Regardless of the setting, nurses should realize that most suicidal persons can survive and go on to live. Hence, understanding ways of engaging through caring and confirming humanity can prevent suicide. Accessing and using evidenced-informed knowledge and evidenced-based nursing knowledge to meet the challenges of encountering these at-risk persons, can guide nurses to facilitate hopeful recovery. Of importance is nurses' reflection on their hopefulness in working with persons at risk (Cutcliffe, 2006). Hopeful recovery emerges from meaningful connections, caring, being confirmed as a human being. Hopeful recovery can build resilience. Nursing research on the complex unfolding processes of resilience related to depression and suicide is emerging. Depression is a known risk factor of suicide. Of note is that from 2005 to 2015, the total estimated number of people living with depression worldwide increased 18.4% (World Health Organization, 2017). Resilience is a known protective factor for depression; it is highly correlated with low depression and anxiety (Edward, 2005;Wagnild & Gantnar, 2011). Lakeman and Fitzgerald (2008) implicitly described resilience when they asserted that persons at risk for suicide can quickly turn their lives around through experiencing gaining or regaining connection with others. Hopeful recovery, a way to not give up, can potentially build resilience (Edward & Warlow, 2005). As nurses' foster resilience in those at risk for suicide, lives can turnaround and risk of suicide can be overcome, thus addressing the important work of suicide prevention in the world.
2018-11-09T20:33:54.201Z
2018-07-10T00:00:00.000
{ "year": 2018, "sha1": "12061d617fc0f0533522177b81d1cfba79ff86c2", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/nop2.169", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "12061d617fc0f0533522177b81d1cfba79ff86c2", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
258688680
pes2o/s2orc
v3-fos-license
Pathophysiological effects of SARS-CoV-2 infection on the cardiovascular system and its clinical manifestations—a mini review Coronavirus disease 2019 (COVID-19) is a viral infection caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). COVID-19 may have a mild presentation, with few symptoms, or progress to a severe condition, characterized by generalized inflammation, systemic microvascular involvement, coagulopathy, and pulmonary and cardiovascular complications. Men present with more severe symptoms than women, especially men who are older and who present with comorbidities such as hypertension, diabetes mellitus, and a history of atherosclerotic diseases. Owing to its association with endothelial dysfunction, inflammation, thrombosis, and microvascular obstruction, SARS-CoV-2 infection can cause lesions in several organs, including the myocardium and the coronary arterial bed, which can result in clinical manifestations involving the cardiovascular system. In this mini review, we summarize the effects of SARS-CoV-2 infection on the cardiovascular system in both children and adults and characterize the various clinical manifestations associated with this disease. Introduction The coronavirus disease 2019 pandemic, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), resulted in extensive extrapulmonary manifestations of the disease. These manifestations are the result of inflammatory processes involving multiple organs, resulting in the release of immune signaling mediators such as cytokines, tumor necrosis factor-alpha (TNF-α), and interleukin (IL)-1 and -6. These immune response mediators affect the cardiovascular system as a whole and can lead to abnormal coagulation and thromboembolic events (1,2). Acute myocardial infarction, acute coronary syndrome, cerebral vascular accidents, peripheral obstructive arterial disease with a risk of limb amputation, venous system involvement with deep vein thrombosis, and pulmonary thromboembolism are some of the cardiovascular events resulting from SARS-CoV-2 infection. Other complications include ventricular cardiac arrhythmias, supraventricular tachycardia [including atrial fibrillation (AF)], atrioventricular blocks, and direct myocardial injury, which can lead to myocarditis, heart failure, and cardiogenic shock (3)(4)(5). Pathophysiology of the cardiovascular manifestations of SARS-CoV-2 Several viral infections cause heart failure because of direct viral invasion and a storm of proinflammatory cytokines, leading to the activation of the sympathetic system and to myocardial failure. Such inflammatory overload, particularly the elevation of IL-1β, IL-6, and monocyte chemoattractant protein-1, leads to fulminant myocarditis. Moreover, endothelial dysfunction associated with multisystem inflammation and decreased nitric oxide bioavailability contribute to heart failure. These symptoms can also result from a combination of preexisting heart disease and virus-related acute hemodynamic and hypoxemic stress. Viruses enter the cytoplasm of host cells, like myocardial cells, after the host and viral membranes fuse, following cleavage of the viral S protein by transmembrane protease, serine 2. SARS-CoV-2 uses the spike protein to bind to angiotensin-converting enzyme 2 (ACE2) receptors on the myocardial cell membrane (6), which triggers the negative regulation of these receptors, angiotensin II accumulation, and subsequent adverse myocardial remodeling through the activity of angiotensin II type 1 receptors (7). 
SARS-CoV-2 can also cause myocardial damage via cell-mediated cytotoxicity. This uses a positive-feedback loop mechanism, in which activated CD8+ T lymphocytes migrate to cardiomyocytes and cause myocardial inflammation. Proinflammatory cytokines released into the bloodstream promote T-cell activation, which results in increased cytokine release (8). SARS-CoV-2 can affect the myocardium through three different mechanisms: (1) direct myocardial injury caused by the entry of the virus coupled to ACE2 receptors, which induces inflammation and cardiomyocyte death; (2) indirect secondary damage caused by a downregulation of ACE2 expression during postviral replication, which results in a hyperactivation of the renin-angiotensin system and stimulation of angiotensin 1 receptors, thereby promoting inflammatory and oxidant activities and arterial vasoconstriction; (3) indirect action mediated by immune B-and T-cell activation, resulting in a systemic inflammatory response with increased oxidative stress and an imbalance between oxygen supply and consumption (9,10). Inflammation is a major characteristic of these different damaging mechanisms induced by SARS-CoV-2 infection in the myocardium. Elevated ILs (including IL-2, -7, and -10), TNF-α, and the above-mentioned cytokine storm cause a negative inotropic effect and contribute to myocardial injury, apoptosis, and fibrosis. Furthermore, macrophage activation results in the release of IL-1 and IL-6, which promote inflammatory cell infiltration, vascular injury, microvascular involvement, endothelial dysfunction, and expression of cell adhesion molecules, including intercellular adhesion molecule 1 and vascular cell adhesion molecule 1 (11)(12)(13)(14)(15). Therefore, SARS-CoV-2 infection is associated with an increased risk of infarction and acute coronary syndrome (16,17). T-cell hyperactivation has been shown to induce the production of large amounts of interferon, TNF-α, and IL-6, which promotes immune system dysregulation and vascular diseases such as atherosclerosis (18). Increases in D-dimer levels and changes in fibrinolytic mechanisms, such as the inhibition of antithrombin of protein C and tissue factor, can lead to coronary thrombosis at the sites of plaque rupture and prothrombotic conditions in SARS-CoV-2-related inflammation (19). Supplementary Material Figure S1 shows the pathophysiology of cardiovascular manifestations resulting from SARS-CoV-2 infection and effects on the cardiovascular system. Some children with SARS-CoV-2 infections experience inflammatory shock, similar to that in Kawasaki disease, whose symptoms include heart failure and coronary artery disease (20)(21)(22)(23)). An interaction has been suggested between the hyperinflammatory state [caused by the cytokine storm (24) in monocytes] and the activated macrophages, characteristic of both Kawasaki disease and SARS-CoV-2 (25). In Kawasaki disease, cell infiltration is initiated by lymphocytes and macrophages in the tunica intima and adventitia (6-8 days after the onset of clinical symptoms). This cell infiltration process then progresses to the rest of the arterial wall (at approximately 10 days after onset) and leads to coronary artery involvement (arteritis). The extracellular matrix, which contains elastin, is required for maintaining the structural integrity of the arterial wall; its degradation by matrix metalloproteinases contributes to vascular inflammation (26). 
The inflammatory process in Kawasaki disease progression involves leukocytes, macrophages, lymphocytes (27), and high levels of TNF-α, a primary factor in this disease. Conversely, SARS-CoV-2 causes endothelial cell infection and inflammation (endotheliitis) (28-30), which promotes microcirculatory dysfunction and endothelial cell apoptosis, and can be a contributing factor to the development of arteritis in Kawasaki disease. The inflammatory response to severe SARS-CoV-2 infection is potentiated by interferon-1 (31,32). An elucidation of the molecular mechanisms associated with the inflammatory response activation could shed light on the pathogenesis of both Kawasaki disease and COVID-19. Cardiovascular manifestations of SARS-CoV-2 in adults 3.1. Acute coronary artery disease and SARS-CoV-2 affect 1.7% (confidence interval, 0%-3.6%) of patients hospitalized for the virus. Despite the majority of these patients having no history of coronary heart disease, many present with ST-segment elevation myocardial infarction, which is caused by the rupture of vulnerable coronary atherosclerotic plaques, coronary spasm, endothelial dysfunction, and thrombosis. Nevertheless, the number of hospitalizations for acute coronary events and percutaneous interventions registered in the United States, Italy, and Spain during the SARS-CoV-2 pandemic was significantly lower than the preceding period (33)(34)(35), possibly because fewer people sought medical care for heart attacks (36). SARS-CoV-2 infection can contribute to an increase in plasma troponin levels above the 99th percentile, which is indicative of myocardial injury. The major type of injury associated with SARS-CoV-2 is acute myocardial infarction, caused by acute myocardial ischemia, which can be of two types. Type 1 infarction typically results from plaque rupture, ulceration, erosion, dissection, and thrombosis; conversely, type 2 infarction is caused by an imbalance between oxygen supply and demand in heart muscles. To treat this condition, it is crucial to determine the type of infarction (37), both of which can occur in patients infected with SARS-CoV-2 (38). The presence of comorbidities, such as diabetes mellitus, arterial hypertension, and obesity, partially explains the high prevalence of coronary events in patients with COVID-19, as well as the higher incidence in severe cases. Takotsubo syndrome and SARS-CoV-2 Similar to myocardial injury, Takotsubo syndrome is associated with increased plasma troponin levels. In this syndrome, the segmental contractility of the left ventricular wall is altered, resulting in hypokinesia and akinesia, which lead to sudden heart failure and, in rare cases, can mimic acute myocardial infarction. Although its etiology is unclear, sympathetic hyperstimulation with microvascular involvement caused by stress-induced catecholamines appears to be an underlying cause (39). Takotsubo syndrome is also associated with high-grade inflammation (40-42). Acute myocardial involvement (myocarditis) and SARS-CoV-2 Acute myocardial involvement is the most common cardiac complication associated with SARS-CoV-2 infection. 
Typical symptoms include acute heart failure (3%-33%), left ventricular dysfunction (10%-41%), right ventricular dysfunction (33%-47%), and biventricular dysfunction (3%-15%) (43,44); presence of electrocardiogram (ECG) abnormalities and increased cardiac enzymes, such as high-sensitivity cardiac troponin T (hs-cTn) and N-terminal B-type natriuretic peptide (NT-proBNP) are indicators of these clinical complications (45). Ventricular wall stress caused by pressure or volume overload is the main stimulus for natriuretic peptide synthesis and release, which act in the kidney, inducing natriuresis and diuresis. Other physiological effects include peripheral vasodilation and inhibition of the renin-angiotensin and sympathetic nervous systems. NT-proBNP has a half-life of 120 min and is primarily eliminated by the kidneys (46). SARS-CoV-2 infection promotes NT-proBNP expression, which is a marker of both cardiac injury and disease severity (47)(48)(49). Severe COVID-19 cases display a mean NT-proBNP level of 791 pg/mL, while milder cases have a mean of 160 pg/ mL (49). Thus, NT-proBNP levels can be used as a biomarker of cardiac involvement and a prognosis indicator (50). The ventricular myocardium is the primary source of BNP. SARS-CoV-2 infection also causes increased hs-cTn and D-dimer levels, the degree of which is strongly correlated with poor prognosis in hospitalized patients (48,51). The magnitude of hs-cTn and D-dimer elevation correlates with the final clinical outcome, ranging from hospital discharge to death. During the SARS-CoV-2 pandemic, hospitalizations for myocarditis increased by 42.3%, compared with the prepandemic period. The risk for myocarditis was 0.146% among patients diagnosed with SARS-CoV-2 during an inpatient or hospitalbased outpatient encounter and 0.009% among patients who did not have a confirmed case of SARS-CoV-2 infection. After adjusting for individual and local care factors, the adjusted risk of myocarditis among SARS-CoV-2 carriers was 15.7 (confidence interval, 14.1-17.2) times higher than that of SARS-CoV-2 negative carriers (52). Endomyocardial biopsy is the gold standard for diagnosing acute and chronic inflammatory cardiomyopathies. Myocardial biopsies are accepted by the European Society of Cardiology (53) as gold standard investigative procedures for patients with myocarditis, using histochemical and viral genome analysis. A major drawback of the SARS-CoV-2 pandemic pertained to the technical difficulties imposed by performing these types of procedures in such conditions. However, magnetic resonance imaging can be a useful diagnostic resource for the identification of patients with cardiac involvement due to viral infection. Magnetic resonance imaging allows the detection of myocardial edema, hyperemia, necrosis, and/or fibrosis (Lake Louise criteria) (54, 55) with perfect correlation with the histological evidence of inflammation observed with endomyocardial biopsy (56-59). Cardiovascular magnetic resonance imaging (CMR) is useful for the characterization of the myocardial tissue in vivo, providing insights into the pattern and degree of cardiac injury. In patients with SARS-CoV-2, the prevalence of myocardial involvement identified using CMR ranges from 26% to 60%; this variability is attributed to differences between populations, severity of illness, and interval between acute infection and CMR evaluation. 
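To make the myocarditis figures above concrete, the short sketch below computes the crude (unadjusted) risk ratio implied by the two reported risks; note that the 15.7-fold figure quoted in the text is an adjusted estimate, so the crude ratio computed here differs slightly.

```python
# Myocarditis risks reported in the text (as percentages)
risk_with_covid = 0.146      # % among patients with a SARS-CoV-2 diagnosis
risk_without_covid = 0.009   # % among patients without a confirmed infection

# Crude (unadjusted) risk ratio
crude_risk_ratio = risk_with_covid / risk_without_covid
print(f"crude risk ratio: {crude_risk_ratio:.1f}")   # about 16.2

# The text reports an adjusted risk ratio of 15.7 (95% CI 14.1-17.2),
# i.e. after accounting for individual and local care factors.
```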
The European Society for Cardiovascular Magnetic Resonance Imaging recommends CMR and provides recommendations for its use and reporting metrics, toward improved standardization, uniform data acquisition, and analytical approaches in patients with SARS-CoV-2 infection (56). Heart failure and cardiogenic shock in SARS-CoV-2 infection Heart failure can occur at different stages of SARS-CoV-2 infection, making it particularly challenging to diagnose and manage. Patients with heart failure alone have a higher chance of contracting SARS-CoV-2 infection because of their weakened immune systems, overall fragility, and reduced hemodynamic tolerance to dangerous infectious processes. Inflammatory cytokine generation, macrophage recruitment, and granulocyte release result in a severe inflammatory storm and increased metabolic demand, leading to acute or chronic decompensation and exacerbation of latent clinical illness. Other contributing factors are as follows: the occurrence of coagulation issues and thrombotic events, which are also associated with renal involvement in 15%-25% of SARS-CoV-2 infections and exacerbate cardiac and renal dysfunction (60); and increased sympathetic activity, which creates an imbalance between energy supply and consumption. The most prevalent cardiovascular phenotype in hospitalized patients with SARS-CoV-2 infections is acute decompensated heart failure, which is characterized by severe congestion, drastically altered hemodynamic state, and increased biomarkers of myocardial injury (61). Up to 25% of hospitalized patients with COVID-19 develop new cases of heart failure. This complication is presumed to be a direct consequence of the virus or due to systemic inflammation. This causes acute myocarditis and, in some cases, results in cardiogenic shock, dysfunction of multiple organs, and death (62). Different mechanisms can lead to cardiogenic shock, including fulminant myocarditis, which causes sudden hemodynamic impairment, global hypokinesia, biventricular dysfunction, hypotension, and multiple organ dysfunction syndrome (63). Cardiogenic shock can also be associated with type 1 infarction, in cases of patients with large infarction extension that progresses to the most severe classification on the Killip scale and with mechanical complications. Isolated right ventricular failure Right ventricular failure results from the advancement of pulmonary illness, cytokine production, and inflammatory interstitial pneumonia, leading to severe pulmonary embolism or microembolization. Patients may develop secondary right ventricular failure caused by mechanical ventilation-induced pulmonary injury and right ventricular systolic dysfunction (due to precapillary pulmonary hypertension resulting from pulmonary hypoxemia vasoconstriction). In patients with SARS-CoV-2 infection, acute myocarditis or a hypertensive emergency may contribute to the occurrence of right ventricular failure. Sudden changes that can contribute to this complication include the infectious process and hydrostatic modifications with increased capillary permeability and accumulation of fluid in the extravascular space, leading to alveolar edema (64). Heart failure with reduced ejection fraction Patients with SARS-CoV-2 may develop heart failure with reduced ejection fraction; however, its prevalence remains unclear. Further studies are needed, specifically those including outpatient follow-up, to help uncover the clinical cause of the infection and lingering cardiac involvement (61). 
The primary causes of respiratory failure can be distinguished using biomarkers of cardiac injury, such as natriuretic peptides, together with diagnostic imaging, which can also help determine the appropriate therapeutic approach.

Diagnostic imaging

To diagnose primary, secondary, or exacerbated cardiovascular problems linked to SARS-CoV-2 infection, conventional transthoracic echocardiography can be used (65, 66). In a recent report, ventricular abnormalities were observed in 39% of patients with SARS-CoV-2 infection, while examinations were normal in 45% (67). Among the abnormal studies, 3% of patients had acute myocardial infarction, 3% had myocarditis, and 2% had Takotsubo disease. Left ventricular function deficits were classified as discrete, moderate, or severe in 17%, 12%, and 9% of patients, respectively. Similarly, 33% of the patients examined had functional alterations of the right ventricle: discrete or moderate impairment was reported in 19% of patients and severe impairment in 6%, with right ventricular dilation in 15% and pulmonary hypertension in 8%. Tamponade and endocarditis were each detected in only 1% of patients. Furthermore, echocardiographic wall abnormalities were associated with well-defined clinical manifestations, such as chest pain with ST-segment elevation in 71% of patients, elevated troponin and natriuretic peptides in 69%, suspected left or right ventricular failure in 60% each, and other alterations in 72% of the patients examined (68). Nuclear magnetic resonance (NMR) is the gold standard imaging modality for the assessment of myocardial structure and function and, simultaneously, of myocardial tissue composition (56). NMR examination detects acute ischemic involvement (type 1 myocardial infarction), non-ischemic myocardial injury (myocarditis), stress cardiomyopathy, acute heart failure, and secondary myocardial injury caused by sepsis or critical illness (69). The most frequently used NMR parameters are T1-weighted images, which represent myocardial anatomy after gadolinium administration, demonstrate the distribution of contrast in the tissue as evidence of chronic lesions, and differentiate myocardial scar fibrosis in patients with SARS-CoV-2 infection. Native T1 mapping without gadolinium allows the detection of an increased interstitial space (e.g., collagen accumulation or amyloid deposits) or increased intracellular and/or extracellular space (tissue water, i.e., myocardial edema) (54). Necrosis or non-ischemic scarring involving the mid-myocardium or epicardium can be detected using late-enhancement images acquired 10-15 min after gadolinium injection; these images show the typical subendocardial involvement of infarct or scar in the territory of an obstructed coronary artery. T2-weighted images with intense signal elevation are characteristic of tissue edema and associated with local inflammation, and T2 mapping with increased relaxation times allows the detection of myocardial edema (70). A non-ischemic pattern of late gadolinium enhancement is usually linked to an abnormal T1 appearance and native T1, indicative of pericarditis or myopericarditis in patients with SARS-CoV-2. In patients with a high pretest probability of acute myocarditis-type myocardial lesions, magnetic resonance imaging can increase diagnostic sensitivity, facilitate treatment, and allow safe follow-up (69).
Electrocardiographic alterations in SARS-CoV-2

ECG is an excellent tool for detecting myocardial ischemia because it is a straightforward, accessible, affordable, and low-risk procedure (71). Plaque rupture, coronary spasm, microthrombosis, endothelial dysfunction, hypoxia, electrolyte changes, and cytokine storms all contribute to ECG changes during SARS-CoV-2 infection (72). ECG alterations occur in 93% of patients hospitalized in the intensive care unit (ICU) with SARS-CoV-2 infection, highlighting how frequently COVID-19 co-occurs with various arrhythmias and ECG abnormalities, as discussed in the following sections.

Supraventricular tachycardia

The most prevalent supraventricular arrhythmia in patients with SARS-CoV-2 infection is sinus tachycardia, which likely results from hypovolemia, hypoperfusion, hypoxia, and high body temperature. Conversely, AF is the most prevalent arrhythmia in patients with SARS-CoV-2-induced inflammatory cardiomyopathy. AF has variable presentations, such as sudden onset, recurrence of pre-existing arrhythmia, and persistent or permanent AF with a rapid ventricular response, all of which are predictors of poor prognosis (79). Other observed arrhythmias include atrial flutter, atrioventricular nodal reentrant tachycardia, and atrioventricular reentrant tachycardia, which are more common in younger individuals.

Malignant ventricular arrhythmias

Viral cardiomyopathy usually manifests as malignant ventricular arrhythmias, such as ventricular tachycardia and ventricular fibrillation, in patients with SARS-CoV-2 infection. These arrhythmias can be caused by metabolic disorders or by the administration of drugs that prolong the rate-corrected QT (QTc) interval on the ECG. Monomorphic ventricular tachycardia has been observed in patients with structural myocardial disease, such as acute coronary syndrome, ST-segment elevation myocardial infarction, and myocarditis. Polymorphic ventricular tachycardia, including torsade de pointes, results from functional heart disease, such as drug toxicity, long QT syndrome, and Brugada syndrome (80).

Bradyarrhythmia and atrioventricular blocks in SARS-CoV-2

Atrioventricular blocks occur less frequently than tachyarrhythmias. When patients with SARS-CoV-2 infection develop this type of block, an artificial cardiac pacemaker may be required. Cardiac arrest in these patients may be preceded by sinus bradycardia, nodal rhythm, or ventricular tachycardia; thus, bradycardia may be an indicator of impending cardiovascular collapse in patients with SARS-CoV-2 (80).

QT interval and other alterations in SARS-CoV-2

Patients with SARS-CoV-2 infection frequently present prolonged QT intervals, which is a cause for concern because prolongation can lead to malignant ventricular arrhythmias and cardiovascular death. At the beginning of the pandemic, marked prolongation of the QT interval was observed in critically ill ICU patients receiving adjuvant therapies, including hydroxychloroquine with or without concomitant azithromycin (81). This ECG change is associated with increased disease severity, serious cardiac injury, and high mortality rates (82-84). However, a variety of factors can contribute to cardiac repolarization changes and affect the QT interval (85, 86), including hereditary and acquired factors such as inflammatory processes, medications, treatments, and electrolyte imbalance.
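The review does not state which rate-correction convention underlies the QTc values it discusses; Bazett's formula (QTc = QT/sqrt(RR)) is one widely used convention and is shown below purely as an illustration, with made-up input values:

```python
import math

def qtc_bazett(qt_ms: float, heart_rate_bpm: float) -> float:
    """Rate-corrected QT interval (Bazett): QTc = QT / sqrt(RR), RR in seconds."""
    rr_s = 60.0 / heart_rate_bpm      # RR interval in seconds
    return qt_ms / math.sqrt(rr_s)

# Illustrative values only (not from the review):
qt = 400.0    # measured QT in ms
hr = 90.0     # heart rate in beats per minute
qtc = qtc_bazett(qt, hr)
print(f"QTc = {qtc:.0f} ms")  # 490 ms; values above ~470-480 ms are commonly treated as prolonged
```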
ECG is a readily accessible tool that identifies cardiac involvement and can be used to help predict the underlying cause of disease (87). QRS and QTc intervals are early markers of SARS-CoV-2 disease progression and mortality (88). The exact mechanism by which SARS-CoV-2 infection induces cardiac conduction abnormalities remains unknown. QT alterations may simply be a marker of the inflammatory state at the myocardial cellular level (83). Prolonged QTc intervals on the ECG may result from immune-mediated phenomena elicited by the viral infection, involving a cytokine storm with elevated circulating IL-6 (89, 90); IL-6 blocks the ether-a-go-go-related (hERG) potassium channel, prolonging ventricular repolarization. QTc prolongation is likely more than just a drug-related side effect, because the administration of drugs that extend the QTc interval does not by itself account for in-hospital mortality in these patients (93). Therefore, SARS-CoV-2 can have a deleterious effect on the cardiac conduction system, leading to significant ECG changes (79, 94).

Molecular mechanisms of cardiac arrhythmias in SARS-CoV-2

In SARS-CoV-2 infection, numerous mechanisms can increase the risk of cardiac arrhythmias. These include various forms of myocardial damage and extracardiac processes that may exacerbate arrhythmias in patients with a pre-existing propensity. Myocarditis can cause arrhythmia in the acute phase through a direct cytopathic effect, resulting in electrical imbalance, ischemia (due to microvascular dysfunction), and gap junction dysfunction resulting from impaired myocardial expression of connexins or ion channels; this is particularly relevant in patients with channelopathies and superimposed inflammation. Viral infection and host-related factors can alter the structural and electrophysiological properties of the myocardium in viral myocarditis, resulting in abnormal calcium handling and downregulation of potassium channels, leading to prolonged repolarization and abnormal conduction. Prolonged repolarization can induce triggered electrical activity in association with abnormal conduction (reduced conduction velocity and decreased refractoriness). Arrhythmias can also occur in the post-inflammatory phase, in which variable degrees of myocardial scarring may exist and promote reentrant arrhythmias (95). The systemic inflammatory response syndrome causes indirect myocardial damage. The intense release of cytokines and chemokines, especially IL-1, IL-6, and TNF-α, is driven by a combination of micro- and macrovascular dysfunction, enhanced thrombogenicity, acidosis, hypoxia, and an imbalance in T-helper 1 and 2 responses. This process is amplified by enhanced catecholaminergic reactions; hyperinflammation due to high IL-6 levels results in blockade of the hERG potassium channel and lengthening of the QT interval, facilitating the formation of unstable arrhythmias (96). Inflammatory cytokines are well-studied triggers of arrhythmia, particularly in patients with long QT syndrome, in whom the cardiac sympathetic nervous system is overstimulated by the hypothalamus-mediated inflammatory reflex and peripherally mediated activation of the stellate ganglion pathway (96). Furthermore, IL-6 inhibits cytochrome P450, which increases the bioavailability of drugs that prolong the QT interval (97).
Hypoxia arising from lung injury or myocardial ischemia can activate anaerobic glycolysis, reducing intracellular pH and thus increasing cytosolic calcium levels; this, in turn, can facilitate early and delayed afterdepolarizations and cause temporal changes in action potential duration. Hypoxia also increases extracellular potassium levels, which lowers the depolarization threshold and accelerates electrical conduction. In addition, hypoxemia can cause reduced electrical coupling and tissue anisotropy owing to the dephosphorylation of connexin 43 at gap junctions (96). In a previously published case series, the effects of electrolyte abnormalities on both pre-existing and new arrhythmias were studied (3); these findings have been attributed to diarrhea associated with SARS-CoV-2 infection or to renal injury (98), and severe electrolyte disorders, such as hypokalemia, hypomagnesemia, and hypophosphatemia, are also linked to atrial arrhythmias (99). Traditional cardiovascular risk factors such as type 2 diabetes mellitus, hypertension, and hypercholesterolemia, as well as comorbidities such as ischemic heart disease and chronic renal failure, also contribute to the development of arrhythmia by altering the cardiac structure. Another potential contributor to arrhythmia in SARS-CoV-2 infection is the common p.Ser1103Tyr variant of the SCN5A-encoded Nav1.5 sodium channel, which results in a lack of "repolarization reserve" (91).

Epidemiological changes in SARS-CoV-2 infection variants

Three further epidemic waves have occurred since the first SARS-CoV-2 wave in March 2020, with the second and third waves dominated by the beta (B.1.351) and delta (B.1.617.2) variants, respectively. The fourth pandemic wave was caused by variant B.1.1.529, which the Network for Genomic Surveillance in South Africa identified as Omicron on 24 November 2021, the fifth variant of concern. This variant demonstrated a 70% lower propensity to cause severe disease and thus lower hospitalization rates than the delta variant (100). In contrast to earlier waves, the early phase of the fourth wave in South Africa showed a different pattern of disease features and outcomes, with younger patients exhibiting fewer comorbidities, hospitalizations, and respiratory diagnoses, as well as a decline in severity and mortality. Despite this reduction in the pathogenicity of the Omicron variant, further research is required to determine the roles of acquired (vaccine-induced) or natural immunity across the pandemic waves (101). Another aspect that should be considered is the efficacy of SARS-CoV-2 vaccines against infection. Efficacy decreased considerably 5-8 months after primary vaccination, although it remained high, particularly among those under 55 years of age. Nevertheless, vaccine boosters were effective in restoring protection against infection and had a good safety profile in the community, which contributed to reducing the severe consequences of SARS-CoV-2 infection (102).

Long-term cardiovascular sequelae of SARS-CoV-2 infection

Most research on cardiovascular outcomes has focused on the acute stage of hospitalization, which represents a minority of patients infected with SARS-CoV-2, and does not adequately address the long-term cardiovascular sequelae of the infection. However, a database analysis (103) of cardiovascular outcomes after a 12-month follow-up revealed that hospital readmission is associated with a high mortality rate, with multiple organ dysfunction being the primary cause (104).
Nevertheless, persistent elevation of myocardial injury biomarkers, such as hs-cTn, NT-proBNP, and D-dimer, indicates ongoing injury or underlying heart disease and an increased risk of myocarditis, pericarditis, and coronary artery disease manifested as acute coronary syndrome, myocardial infarction, angina, or ischemic cardiomyopathy. Other sequelae include heart failure, non-ischemic cardiomyopathy, left ventricular systolic and diastolic dysfunction, deep vein thrombosis, and pulmonary thromboembolism (105). The post-SARS-CoV-2 group also displays an increased incidence of AF, sinus tachycardia and/or bradycardia, ventricular arrhythmias, and atrial flutter. In patients with SARS-CoV-2 infection, there is an increased risk of developing diabetes mellitus 12 months after hospital release, and some prediabetic individuals progress to diabetes after infection (106). One hypothesis involves the expression of ACE2 in pancreatic islet cells, where it could permit direct viral infiltration, resulting in inflammation and loss of pancreatic beta cells and contributing to the development of diabetes mellitus (107). There is also evidence suggesting a higher risk of developing hypertension in individuals after SARS-CoV-2 infection, as well as poorer blood pressure control. Although the mechanisms are not fully understood, ACE2 normally counter-regulates the renin-angiotensin-aldosterone system by converting angiotensin I and angiotensin II into angiotensin 1-9 and angiotensin 1-7, respectively; its downregulation after infection is accompanied by increased bioavailability of angiotensin II and a subsequent increase in blood pressure (108). Inadequate blood pressure control in the postinfection stage may also reflect lifestyle changes: decreased physical activity, unhealthy diets, and increased psychosocial stress have all been observed as a result of the pandemic (109). Finally, a prolonged clinical picture with symptoms lasting from weeks to months after SARS-CoV-2 infection, called "long SARS-CoV-2," has been observed; it includes a broad spectrum of symptoms such as fatigue, exertional dyspnea, chest pain, palpitations, headache, nausea, vomiting, skin rashes, joint pain, anxiety, and depression. Although there is no universal standard criterion for characterizing this condition, the World Health Organization has proposed that long SARS-CoV-2 be defined as clinical manifestations lasting more than 3 months, with symptoms lasting at least 2 months that are not explained by another disease (110, 111). The goal of this mini review was to provide an overview of the literature on the relationship between SARS-CoV-2 infection and the clinical manifestations of cardiovascular system involvement in children and adults, including important disorders affecting cardiac rhythm. We summarized the potential mechanisms that could be involved in the expression of the different clinical manifestations, discussed the mechanisms underlying the arrhythmic complications associated with SARS-CoV-2 infection, and reviewed the main imaging methods that allow appropriate diagnoses to be made.

Conclusions

Since the beginning of the pandemic, the medical and scientific communities have made enormous efforts to detect the early clinical cardiovascular manifestations caused by SARS-CoV-2 infection. Children with cardiovascular system involvement display a clinical profile similar to that of patients with Kawasaki disease, including heart failure and coronary artery involvement.
In adults, the main clinical manifestations are coronary artery disease, stress-induced cardiomyopathy, myocarditis, heart failure, and arrhythmias, some of which are benign, such as transient sinus bradycardia, while others are potentially fatal, such as ventricular tachycardias and torsade de pointes leading to sudden death. AF is the most prevalent arrhythmia in critically ill patients with SARS-CoV-2 infection. Owing to the severity of the infection and the concurrent use of proarrhythmogenic antimicrobial and anti-inflammatory medications, the management of these arrhythmias requires special consideration. Conventional transthoracic echocardiography and ECG are used to assess the cardiological manifestations of SARS-CoV-2 infection and to diagnose primary, secondary, or associated cardiovascular complications. NMR is the most accurate technique for assessing myocardial structure and function. The cardiac enzymes hs-cTn and NT-proBNP can be used to detect cardiac involvement and determine prognosis. Comorbidities such as hypertension, diabetes mellitus, and coronary heart disease are risk factors that can aggravate the clinical course of COVID-19.

Author contributions

All authors participated equally in the preparation and review of the manuscript and agreed to its submission for publication. All authors contributed to the article and approved the submitted version.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Deep Ocean Mineral Supplementation Enhances the Cerebral Hemodynamic Response during Exercise and Decreases Inflammation Postexercise in Men at Two Age Levels

Background: Previous studies have consistently shown that oral supplementation of deep ocean minerals (DOM) improves vascular function in animals and enhances muscle power output in exercising humans. Purpose: To examine the effects of DOM supplementation on the cerebral hemodynamic response during physical exertion in young and middle-aged men. Design: Double-blind placebo-controlled crossover studies were conducted in young (N = 12, aged 21.2 ± 0.4 years) and middle-aged men (N = 9, aged 46.8 ± 1.4 years). The counter-balanced trials of DOM and Placebo were separated by a 2-week washout period. DOM and Placebo were orally supplemented in drinks before, during, and after cycling exercise. DOM comprises desalinated minerals and trace elements from seawater collected ~618 m below the earth's surface. Methods: The cerebral hemodynamic response (tissue hemoglobin) was measured during cycling at 75% VO2max using near-infrared spectroscopy (NIRS). Results: Cycling time to exhaustion at 75% VO2max and the associated plasma lactate response were similar between the Placebo and DOM trials for both age groups. In contrast, DOM significantly elevated cerebral hemoglobin levels in young men and, to a greater extent, in middle-aged men compared with Placebo. An increased neutrophil-to-lymphocyte ratio (NLR) was observed in middle-aged men 2 h after exhaustive cycling but was attenuated by DOM. Conclusion: Our data suggest that minerals and trace elements from deep oceans hold great promise for developing supplements to increase the cerebral hemodynamic response to a physical challenge and during post-exercise recovery in middle-aged men.

INTRODUCTION

A growing body of paleobiological evidence suggests that life on earth may have originated in the deep oceans (Gingerich et al., 2001; Kusky et al., 2001; Keller et al., 2017). If terrestrial organisms evolved from deep oceans, sea-to-land migration could have compromised the nutritive complexity available to all land survivors, including descendants such as humans. In line with this concept, oral ingestion of components from deep oceans might replenish any innate incomplete molecular complexity and increase the physical capacity of humans against entropic physical challenges. A proof-of-concept study previously reported a substantially faster recovery (shortened from 48 to 4 h) of both leg muscle power (on a force plate) and aerobic fitness (maximal aerobic power on a cycle ergometer) in men stressed by an initial bout of exercise at high temperature (cycling at ~30 °C until a 3% weight loss) with DOM supplementation (Hou et al., 2013). Similar results have been shown elsewhere using different sources of mineral water drawn from depths greater than 0.5 km below the earth's surface (Stasiule et al., 2014; Fan et al., 2016; Keen et al., 2016). Another repeatable finding regarding the physiological benefits of DOM ingestion is its protective effect on vascular function in land animals (Miyamura et al., 2004; Radhakrishnan et al., 2009; Li et al., 2014).
In contrast to surface ocean water, which contains a similar profile of major minerals (magnesium, potassium, calcium, sodium, and chloride), DOM demonstrated greater protective benefits against the development of atherosclerosis in rabbits fed a high-cholesterol diet (Miyamura et al., 2004), suggesting that trace elements of the deep ocean water contributed to the attenuated vascular inflammation and improved vascular function. In surface ocean water, where light penetrates (within ~200 m below the surface), photosynthesis by marine organisms may exhaust biogenic components essential for optimal vascular function (Miyamura et al., 2004). The action of DOM on cerebral vascular regulation during exercise has not yet been documented. Cerebral blood supply increases during exercise as a result of increased brain metabolism (Querido and Sheel, 2007). However, vascular function deteriorates with aging (Barac and Panza, 2009), and maximal aerobic power declines from 40 years of age (Fleg et al., 2005). As the brain is the primary determinant of voluntary effort in muscle recruitment during exercise in humans (Kayser, 2003), cerebral hemodynamic function has been considered a limiting factor for high-intensity performance (Subudhi et al., 2007; Rupp and Perrey, 2008). During progressive maximal exercise to exhaustion on a cycle ergometer, cerebral oxygenation increases initially but decreases markedly shortly before exhaustion (Rupp and Perrey, 2008). Cerebral hemoglobin fluctuation, as an indicator of blood volume change in the frontal brain, can be monitored in real time by near-infrared spectroscopy (NIRS) during cycling (Bay Nielsen et al., 2005). Based on the aforementioned reports of the effects of DOM on muscle power output and vascular function, we hypothesized that DOM supplementation can improve cerebral hemodynamic responses during, and attenuate the NLR response after, high-intensity physical exertion in young and middle-aged men.

Participants

This study recruited nine middle-aged men (aged 46.8 ± 1.4 years, body mass 81.4 ± 3.1 kg, height 175 ± 3 cm, BMI 26.5 ± 1.3, VO2max 26.5 ± 1.3 mL/min/kg) and 12 young men (aged 21.2 ± 0.4 years, body mass 64.6 ± 1.6 kg, height 172 ± 1 cm, BMI 21.7 ± 0.5, VO2max 45.2 ± 1.5 mL/min/kg) to determine cerebral hemodynamic and inflammatory responses during high-intensity cycling at 75% VO2max. Individuals with a history of musculoskeletal or orthopedic injury or cardiovascular abnormality were excluded. The study was approved by the University of Taipei Institutional Review Board. All participants were asked not to ingest any alcohol or nutritional supplements (such as caffeine-containing supplements) during the study, including the washout period. Written informed consent was obtained from all participants after a detailed explanation of the study protocol.

Experimental Design

We conducted randomized placebo-controlled crossover trials in a counter-balanced order with a 2-week washout period, using taste-matched DOM and Placebo drinks. No measurements were conducted during the washout period. Only males were recruited to avoid the influence of menstruation or potential acute anemia on brain hemodynamic measurements. DOM or Placebo was orally supplemented 12 h before exercise (600 mL bolus), during the hour before exercise (1.8 mL per kg body mass (BM) every 15 min), during exercise (1.8 mL per kg BM at the 15th min), and during post-exercise recovery (10 mL per kg BM over 2 h).
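The per-kilogram dosing schedule above implies a total supplement volume that scales with body mass. The sketch below tallies it for a hypothetical 70-kg participant, assuming "every 15 min" over the pre-exercise hour means four doses (the exact number of pre-exercise doses is not stated in the text):

```python
body_mass_kg = 70.0  # illustrative participant, not from the study

bolus_12h_before = 600.0                   # mL, fixed bolus 12 h before exercise
pre_exercise = 1.8 * body_mass_kg * 4      # mL; 1.8 mL/kg every 15 min over 1 h (assumed 4 doses)
during_exercise = 1.8 * body_mass_kg       # mL, single dose at the 15th min of cycling
recovery = 10.0 * body_mass_kg             # mL over the 2-h recovery period

total_ml = bolus_12h_before + pre_exercise + during_exercise + recovery
print(f"Total supplement volume: {total_ml:.0f} mL")  # 1930 mL for a 70-kg participant
```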
All participants received the same meal (800-900 kcal) and water (600 mL bolus) the night before experimental trials and a standardized breakfast (200-250 kcal) 1 h before commencing exercise. The same meal was provided during the crossover trial, so participants had a repeatable dietary intake on testing days.

Drinks

The desalinated DOM, taken from the West Pacific Ocean (618 m in depth), was provided by Pacific Deep Ocean Biotech (Taipei, Taiwan). Deep ocean water is defined as ocean water from below 200 m, where sunlight is barely permeable, and DOM comprises its minerals and trace elements. More than 70 minerals and trace elements in ocean water have been documented (Farrington, 2000). DOM was filtered by a microfilter (removal of microorganisms) and an ultra-filter (removal of any macromolecules and/or viruses) before use; molecules larger than 1.5 kDa were removed by this two-step filtration procedure. To mask the taste difference between DOM and Placebo, the same amount of erythritol (3%) was added to each drink. Tap water purified by reverse osmosis was used to make the Placebo. The safety of long-term DOM supplementation has been tested and shows no adverse effect on survival rates in two different animal models (Liu et al., 2013; Liao et al., 2016).

Exercise Protocol

Maximal oxygen consumption (VO2max) and the associated workload (Wmax) were determined on a cycle ergometer (Monark, Sweden) at least 3 d before the start of experimental trials. The protocol for establishing VO2max consisted of a 4-min warm-up before cycling began at 100 W. The workload was increased incrementally by 25 W every 3 min until the participants could not continue to pedal despite constant verbal encouragement. The criteria used to establish VO2max were a plateau of VO2 with increasing exercise intensity, a respiratory exchange ratio (RER) > 1.1, and an RPE score of 19/20. Expired gas was collected using a MetaMax 3B (Cortex Biophysik, Leipzig, Germany). Heart rate was measured by a Polar heart rate monitor (Lake Success, NY, USA). For experimental trials, participants cycled to volitional exhaustion at a constant work rate equivalent to 75% VO2max. The cerebral hemodynamic response was measured continuously during the first 20 min of exercise, and time to exhaustion at 75% VO2max was used as a measure of endurance performance. Trials were carried out at the same time of day (10:00 a.m.) to account for the influence of circadian rhythmic variation on exercise performance.

Cerebral Hemodynamic Assessment

An optical probe of a frequency-domain multi-distance near-infrared spectroscope (NIRS) (ISS OxiplexTS, Champaign, IL, USA) was placed on the frontal brain to measure cerebral hemoglobin changes (detection depth 2-2.5 cm) during the initial 20 min of exercise. Double-sided adhesive tape and an elastic band were used to secure the head probe in place. The NIRS oximeter was calibrated per manufacturer guidelines prior to each test. All NIRS measurements (sampling rate: 1 Hz) were averaged over the last 60 s of each 5-min interval.

Neutrophil-to-Lymphocyte Ratio (NLR)

NLR, a common marker of systemic inflammation, was measured before and 2 h after cycling at 75% VO2max. Venous blood samples were obtained for leukocyte analysis. The total numbers of leukocytes, neutrophils, monocytes, and lymphocytes were differentiated and quantitated using an automated analyzer.
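NLR itself is simply the ratio of the absolute neutrophil and lymphocyte counts from this differential. A minimal sketch with illustrative counts (not study data):

```python
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-to-lymphocyte ratio from absolute counts (cells/uL)."""
    return neutrophils / lymphocytes

# Illustrative pre- and post-exercise differentials (not study data):
pre = nlr(neutrophils=3500.0, lymphocytes=2000.0)    # 1.75
post = nlr(neutrophils=6000.0, lymphocytes=1500.0)   # 4.00
print(f"Delta NLR = {post - pre:.2f}")  # exercise-induced change, as analyzed in Figure 2
```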
Lactate and Glucose

Lactate and glucose were measured before exercise and at the 15th min of cycling at 75% VO2max. The fingerprick blood sample (10 microliters) was placed into a hemolyzing solution and measured on a Biosen C-line glucose and lactate analyzer (EKF Diagnostic, Leipzig, Germany).

Statistical Analysis

Cycling time to exhaustion at 75% VO2max, total hemoglobin levels during exercise at the same time points, and post-exercise NLR were compared between the two crossover trials using a paired t-test. Data were also analyzed using a two-way analysis of variance (ANOVA) with repeated measures (supplementation and time effects) to determine main and/or interactive effects. Fisher's post-hoc test was used for pairwise comparison. A level of P < 0.05 was set for significance for all tests. Unless otherwise stated, values are expressed as means ± SE. Performance data were also analyzed using an effect size with 95% confidence intervals (Watt et al., 2002).
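The effect-size reference (Watt et al., 2002) is not reproduced here, so the sketch below uses a generic standardized mean difference (pooled-SD Cohen's d with a normal-approximation confidence interval, recovering SDs from the reported SEs and treating the two crossover conditions as independent groups, which is an approximation). Under these assumptions it reproduces the endurance effect size reported in the Results below to rounding:

```python
import math

def cohens_d_ci(m1, se1, m2, se2, n, z=1.96):
    """Standardized mean difference with a normal-approximation 95% CI.

    Inputs are per-condition means and standard errors; SDs are recovered
    as SE * sqrt(n). Treats the conditions as independent groups of size n.
    """
    sd1, sd2 = se1 * math.sqrt(n), se2 * math.sqrt(n)
    d = (m1 - m2) / math.sqrt((sd1**2 + sd2**2) / 2.0)
    se_d = math.sqrt(2.0 / n + d**2 / (4.0 * n))
    return d, (d - z * se_d, d + z * se_d)

# Middle-aged men, time to exhaustion (s): DOM 5601 +/- 777 vs. Placebo 5401 +/- 855 (mean +/- SE, n = 9)
d, (lo, hi) = cohens_d_ci(5601, 777, 5401, 855, n=9)
print(f"d = {d:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # ~0.08 (-0.84, 1.01) vs. reported 0.09 (-0.85, 1.00)
```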
RESULTS

The mineral and trace element profile of DOM is shown in Table 1. No difference in cycling time to exhaustion at 75% VO2max was observed between the Placebo and DOM trials in young men (Placebo vs. DOM: 2558 ± 387 s vs. 2504 ± 446 s), but a minimal increase in cycling time to exhaustion with DOM was noted in middle-aged men (Placebo vs. DOM: 5401 ± 855 s vs. 5601 ± 777 s, P = 0.08). This ~4% difference represents a minimal effect size of 0.09 (95% CI: -0.85 to 1.00) in endurance performance.

The temporal changes of tissue hemoglobin levels in the frontal brain (detected by NIRS) reflect the cerebral hemodynamic response (blood redistribution) to a physical challenge. Since the cycling trials at 75% VO2max showed a wide range of performance times among participants, with the shortest cycling time around 21 min, we compared hemodynamic response data between trials only for the first 20 min of cycling. For young men (Figure 1A), DOM supplementation enhanced the cerebral blood distribution by ~75% during the first 10 min of cycling compared with Placebo (paired t-test, P < 0.05). When data for the entire 20 min are included, the main effects of group and time are both significant for young men (Group effect: P < 0.05; Time effect: P < 0.001), with no interactive effect (two-way ANOVA, P = 0.297). For middle-aged men (Figure 1B), no change in cerebral hemodynamic response was observed at the same relative intensity (75% VO2max) during the Placebo trial, whereas significant increases in cerebral hemodynamic response during the DOM trial were observed after 15 min of cycling (paired t-test, P < 0.05). There was a significant group × time interaction (P < 0.01), but the main effects of group and time were not significant when data for the entire 20 min are included (Group effect: P = 0.115; Time effect: P = 0.513).

FIGURE 1 | Cerebral hemodynamic response (tissue total hemoglobin) during cycling at 75% VO2max for young (A; aged 21.2 ± 0.4 years) and middle-aged (B; aged 46.8 ± 1.4 years) men. *Significant difference against Placebo based on paired t-test, P < 0.05. DOM, deep ocean minerals; VO2max, maximal oxygen consumption.

Table 2 shows similar heart rate, blood lactate, and glucose levels between the DOM and Placebo trials after the first 15 min of cycling at 75% VO2max for young (a) and middle-aged (b) men, respectively. An increase in blood neutrophil-to-lymphocyte ratio (NLR) 2 h after cycling was observed only in middle-aged men. While no detectable difference between the DOM and Placebo trials was found in young men (Figure 2A), DOM supplementation attenuated the exercise-induced NLR response in middle-aged men by ~25% compared with Placebo (Figure 2B). Significant differences between the DOM and Placebo trials were noted when data are expressed as changes in NLR (paired t-test, P < 0.05) for middle-aged men, but not for young men.

DISCUSSION

Vascular function is known to deteriorate with age (Barac and Panza, 2009), which may have ramifications for cerebral hemodynamic regulation during a physical challenge. Although the protective effects of DOM on vascular function have been established with high reproducibility in animal studies (Miyamura et al., 2004; Radhakrishnan et al., 2009; Li et al., 2014), whether DOM can enhance cerebral hemodynamic responses during a physical challenge in men at various ages has not been previously documented. In this study, we found that minerals and trace elements from deep oceans can substantially increase the cerebral hemodynamic response during high-intensity cycling. The enhanced hemodynamic response with DOM was somewhat more pronounced in middle-aged men than in young men at the same relative exercise intensity. The improved cerebral hemodynamic response during high-intensity cycling provides mechanistic support to previous studies (Hou et al., 2013; Keen et al., 2016) that showed improved muscle power output in exercising men orally receiving DOM drawn from more than 0.5 km below the surface. As the brain is a critical determinant of muscle power output (Clark et al., 2014), the current findings suggest that components from deep oceans strengthen central command over muscle fiber recruitment by accelerating blood supply to the frontal brain. However, high-intensity endurance performance was not significantly improved, suggesting that DOM had little influence on fuel metabolism in contracting muscle. A limitation of the present study is that mechanistic evidence explaining the role of specific minerals and trace elements from deep oceans in the enhanced cerebral hemodynamic response to exercise is not provided. We speculate that trace elements are the major components underlying the DOM-enhanced cerebral hemodynamic response. DOM contains relatively high amounts of trace elements such as lithium and rubidium. Supplementation with lithium and rubidium is known to directly increase spontaneous motor activity levels in animals (Johnson, 1972), and manipulating lithium and rubidium concentrations affects the nervous system that controls movement in marine animals (Johnson, 1972; Hoffmann and Smith, 1979). Identifying the key components of DOM that modulate the cerebral hemodynamic response during exercise is a promising research area for improving quality of life in middle-aged men.
Another limitation of the study is that the middle-aged participants were also heavier than the young participants; it is therefore difficult to distinguish whether the observed differences between the two age groups are due to age or to body weight. Another novel finding of the present study is the decreased systemic inflammation after exercise with DOM supplementation. The attenuated NLR increase suggests that DOM might either reduce the amount of damage or increase the recovery rate after exercise. NLR is a commonly used marker of systemic inflammation and has been shown to be elevated 2 h after aerobic exercise in an exercise-volume-dependent manner (Gabriel and Kindermann, 1997). The lowered systemic inflammation after exercise suggests that the increased molecular complexity provided by DOM supplementation improves the robustness of cells against an entropic challenge. A preventive effect of DOM on vascular inflammation has been reported previously (Li et al., 2014), suggesting that the lowered NLR found in the present study could be related to improved endothelial function. In rats, immunohistochemical staining has shown that DOM supplementation decreases proteins of the MAP kinase signaling pathway that control the cell proliferation and migration induced by vascular damage. Furthermore, DOM supplementation substantially delays the progression of atherosclerosis in animals (Miyamura et al., 2004; Radhakrishnan et al., 2009). However, we must acknowledge that these studies involved long-term DOM supplementation for 4-12 weeks, in contrast to our acute protocol. Therefore, the underlying mechanism for the observed effects of DOM on cerebral hemodynamic responses warrants further investigation.

CONCLUSIONS

The results of the present study strengthen the hypothesis that minerals and trace elements from deep oceans may increase the nutritive complexity of humans against a physical challenge, supported by an enhanced cerebral hemodynamic response during cycling exercise and reduced systemic inflammation during recovery. Our findings suggest a promising application of DOM in developing supplements for improving cerebral hemodynamic responses during physical challenges in middle-aged men.

AUTHOR CONTRIBUTIONS

C-YW, C-YC, Y-HL, Y-ST, and C-HK conceived and designed the experiments; C-YW and Y-ST performed the experiments; C-YW and C-HK analyzed the data; C-YH, RC, MH, and C-HK wrote the paper.
U-Pb dating and geochemical dataset of fracture-filling calcite veins from the Bóixols-Sant Corneli anticline (Southern Pyrenees)

U-Pb dating and geochemical analyses (δ18O, δ13C, Δ47, 87Sr/86Sr, and elemental composition) have been applied to fracture-filling calcite veins and host carbonates from the Bóixols-Sant Corneli anticline, which developed along the front of the Bóixols thrust sheet in the Southern Pyrenees. This robust dataset is used to determine: (i) the absolute timing of fracturing and mineralization from fluid flow; (ii) the age and duration of fold evolution; and (iii) the variations and implications of fluid behavior across the anticline, as described in the article "Spatio-temporal variation of fluid flow behavior along a fold: The Bóixols-Sant Corneli anticline (Southern Pyrenees) from U-Pb dating and structural, petrographic, and geochemical constraints", Marine and Petroleum Geology (2022) (Muñoz-López et al., 2022). In this new contribution, we present the raw data that have been analyzed and discussed in the related research article and, in addition, the whole elemental and REE composition of calcite veins and host carbonates, which has not been published before. These data may be used to unravel the age and origin of veins, to understand their sequential evolution in orogenic belts, and to compare our results with those obtained in similar settings worldwide.

Specifications Table

Subject: Earth Sciences
Specific subject area: Geochemistry and Petrology
Type of data: Tables
How the data were acquired: U-Pb dating of calcite veins using a laser ablation-inductively coupled plasma mass spectrometer (LA-ICP-MS). Carbon and oxygen isotopes of calcite veins and host carbonates with a Thermo Electron MAT-252 mass spectrometer (Thermo Fisher Scientific). Δ47 measurements of calcite veins with an automated acid digestion and gas purification device coupled to a dual-inlet Thermo MAT253 mass spectrometer. 87Sr/86Sr ratios of calcite veins and host carbonates analyzed on a TIMS-Phoenix mass spectrometer (Isotopx). Elemental composition of calcite veins and host carbonates measured with a magnetic sector field Element XR (HR-ICP-MS, high-resolution inductively coupled plasma-mass spectrometer, Thermo Fisher Scientific).
Data format: Raw and analyzed
Description of data collection: The description of the data collection is presented in the Experimental Design, Materials and Methods section.
Data source location: Samples of calcite veins and related host rocks were collected at the Bóixols-Sant Corneli anticline (Southern Pyrenees).
See the coordinates of each sample in Table 1.

Value of the Data

• We present geochronological and geochemical data from fracture-filling calcite veins and carbonate host rocks from the outstandingly exposed Bóixols-Sant Corneli anticline, along the front of the Bóixols thrust sheet (Southern Pyrenees). This robust dataset has been used to constrain the sequence of deformation, the age and duration of fold evolution, and the fluid flow behavior across the anticline.
• We include the raw data that have been analyzed and discussed in the related research article. We also include the whole elemental and REE composition of calcite veins and host carbonates, which has not been published before.
• These data are useful for geoscientists working on carbonate geochemistry and geochronology applying novel techniques such as U-Pb dating and clumped isotope thermometry.
• These data can be further used to determine the timing and thermal conditions of vein development during deformation and to compare our results with those obtained in similar settings worldwide.

Data Description

Geochronological and geochemical data of fracture-filling calcite cements and carbonate host rocks exposed in the Bóixols-Sant Corneli anticline (Southern Pyrenees) are presented here. Samples were collected in ten representative localities that cover the different fracture networks as well as the sedimentary successions involved in the formation of the anticline. The location and description of samples are shown in Table 1, the geochronological results in Table 2 and Fig. 1, and the geochemical dataset in Tables 3 and 4 and Fig. 2. The complete elemental and isotopic composition of vein cements and host rocks is found in the Repository. The main features of the fractures and the detailed petrographic characteristics of vein cements and host rocks are found in [1] and elsewhere in [2,3]. Twenty-three new U-Pb ages were obtained by applying LA-ICP-MS to 447 spot analyses of different sets of fracture-filling calcite cements (Table 2 and Fig. 1). The obtained ages range from Late Cretaceous (79.8 ± 1.2 Ma) to late Miocene (9.0 ± 4.6 Ma). Concordia plots, which are presented in the Repository, show well-defined regression lines for most samples, with mean square weighted deviations (MSWD) of < 2. An exception is sample Bx47a, which has an MSWD of 10.6; as this value is higher than 2, it could indicate an open system, a mixing of ages, or an incomplete equilibration of lead isotopes [4]. The raw data of the U-Pb results are presented in the supplementary material of [1] and in the Repository.
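MSWD is the chi-square statistic of the regression divided by its degrees of freedom: values near 1 indicate scatter consistent with the assigned analytical uncertainties, while values well above ~2 (as for Bx47a) flag excess scatter. A generic sketch of the quantity (not the Isoplot implementation used for this dataset):

```python
import numpy as np

def mswd(residuals: np.ndarray, sigmas: np.ndarray, n_fit_params: int = 2) -> float:
    """Mean square of weighted deviates for a fit with n_fit_params parameters.

    residuals: observed-minus-fitted values; sigmas: 1-sigma uncertainties.
    """
    chi2 = np.sum((residuals / sigmas) ** 2)
    dof = residuals.size - n_fit_params
    return chi2 / dof

# Illustrative numbers only:
res = np.array([0.8, -1.1, 0.4, -0.3, 1.6, -0.9])
sig = np.array([1.0, 1.0, 0.8, 0.9, 1.2, 1.0])
print(f"MSWD = {mswd(res, sig):.2f}")  # ~1.2: scatter consistent with the assigned errors
```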
The geochemical results, including δ18O, δ13C, 87Sr/86Sr, Δ47, and the elemental composition of the different fracture-filling calcite cements and host rocks, are shown in Tables 3 and 4 and Fig. 2. In order to summarize this robust dataset and to describe the geochemical data, the analyzed vein cements have been assembled into three calcite groups (Group 1 to Group 3) according to three observed geochemical trends.

The first geochemical trend (Group 1 calcites) has been observed in all cements from fractures and faults present in the hinge of the anticline (Cal Mestre locality) and at the base of the syn-orogenic deposits in the footwall of the Bóixols thrust sheet (Sant Antoni locality). The geochemistry of these calcites reflects the composition of their host rocks, either the Lower Cretaceous marls of the Lluçà Formation or the Upper Cretaceous marls of the Vallcarga Formation, respectively. Group 1 calcites yield δ13C values between +1.3 and +2.4‰ VPDB and 87Sr/86Sr between 0.707285 and 0.707669, similar to the values of their adjacent host carbonates. They also display δ18O values that are lower than -3‰ VPDB and up to 5‰ VPDB lower than the values of their adjacent rocks (Table 3). Regarding the elemental composition, these calcites have Mn contents lower than 200 ppm, Sr contents higher than 1100 ppm, and Y/Ho ratios higher than 50. In addition, nine representative samples of Group 1 cements were analyzed for clumped isotope measurements (Table 4).

The second geochemical trend (Group 2 calcites) has been observed in all calcite cements from large-scale faults, including large thrusts, strike-slip and normal faults, and related fractures cutting the Bóixols-Sant Corneli anticline. These cements yield the lowest δ18O values, between -14 and -8‰ VPDB, which are up to 10‰ VPDB lower than the values of their host carbonates. Additionally, Group 2 calcites yield variable enrichments in δ13C values and 87Sr/86Sr ratios, from -12 to +2‰ VPDB and from 0.7074 to 0.7080, respectively (Tables 3 and 4).

The third geochemical trend (Group 3 calcites) has been observed in cements that precipitated in centimetric- to metric-scale fractures (i.e., veins) in both limbs of the Bóixols-Sant Corneli anticline. These cements exhibit a narrow range of δ18O values, from -8 to -6‰ VPDB, and a tendency towards δ13C-depleted values, from -10 to +2‰ VPDB. The 87Sr/86Sr ratios of Group 3 calcites, ranging from 0.7073 to 0.7077, are also lower than the values of their host carbonates (the Collada Gassó and Congost Formations and the Garumnian facies) (Table 3). Finally, regarding the elemental composition, these cements have the lowest Sr contents and Y/Ho ratios, less than 500 ppm and less than 60, respectively. A representative sample of Group 3 cements was analyzed for clumped isotope measurements; the obtained Δ47 value of 0.494 ± 0.016 translates into a precipitation temperature of 66 ± 5 °C and a δ18Ofluid of +1.9‰ SMOW (Table 4).
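The three trends can be summarized as simple screening rules on the measured values; the thresholds below are taken directly from the text, but the rule-based function is only an illustration, not the procedure used in the related article:

```python
def classify_calcite(d18o_vpdb: float, sr_ppm: float, y_ho: float, mn_ppm: float) -> str:
    """Screen a vein cement against the three geochemical trends described above.

    Thresholds are those quoted in the text; the δ18O ranges of the groups
    touch at -8 permil, so this is a rough screen, not a unique assignment.
    """
    if d18o_vpdb <= -8.0:
        return "Group 2 (large-scale faults: strongly 18O-depleted)"
    if -8.0 < d18o_vpdb <= -6.0 and sr_ppm < 500 and y_ho < 60:
        return "Group 3 (limb veins)"
    if mn_ppm < 200 and sr_ppm > 1100 and y_ho > 50:
        return "Group 1 (host-rock-buffered)"
    return "unclassified"

# Illustrative values only:
print(classify_calcite(d18o_vpdb=-11.0, sr_ppm=800, y_ho=45, mn_ppm=350))  # Group 2
```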
Experimental Design, Materials and Methods

Petrographic analysis of around 135 polished thin sections of host rocks and vein cements was made using a Zeiss Axiophot microscope and a cold cathodoluminescence (CL) microscope operating at 15-18 kV and 350 μA. U-Pb ages were obtained with a laser ablation-inductively coupled plasma mass spectrometer (LA-ICP-MS) at FIERCE (Frankfurt Isotope and Element Research Center, Goethe University), following the modified method of [7]. A Thermo Scientific Element XR sector field ICP-MS was coupled to a RESOlution 193 nm ArF excimer laser (COMPexPro 102) equipped with a two-volume ablation cell (Laurin Technic S155). Samples were first ablated in a helium atmosphere (300 mL/min), and the aerosol was then mixed in the ablation funnel with 1100 mL/min argon and 5 mL/min nitrogen. Signal strength at the ICP-MS was tuned for maximum sensitivity while keeping oxide formation (monitored as 248ThO/232Th) below 0.2% and fractionation of the Th/U ratio low. Static ablation used a spot size of 193 μm and a fluence of about 2 J/cm2 at 12 Hz. Data were obtained in fully automated mode overnight in two sequences of 598 analyses each. Each analysis comprised 18 s of background acquisition, 18 s of sample ablation, and 25 s of washout. During the 36 s of data acquisition, the signals of 206Pb, 207Pb, 208Pb, 232Th, and 238U were detected by peak jumping in pulse-counting and analogue mode with a total integration time of ~0.1 s, resulting in 360 mass scans. Each spot was pre-ablated with 8 laser pulses to remove surface contamination before analysis. Soda-lime glass NIST SRM-612 was used as primary reference material (spot size of 50 μm, 8 Hz) together with four carbonate reference materials, which were bracketed between the analyses of samples. Raw data were corrected offline with an in-house VBA spreadsheet program [7]. Following background correction, outliers (±2σ) were rejected based on the time-resolved 207Pb/206Pb, 208Pb/206Pb, 206Pb/238U, and 232Th/238U ratios. These ratios were corrected for mass bias and drift over time using NIST SRM-612. An additional matrix-related offset was applied to the 206Pb/238U ratios (sequence 1: 21.5%; sequence 2: 19.6%), determined using the WC-1 carbonate reference material [8]. The 206Pb/238U downhole fractionation was estimated to be 3%, based on the common-Pb-corrected WC-1 analyses, and was applied to all carbonate analyses. Uncertainties for each isotopic ratio are the quadratic addition of the within-run precision, counting statistics uncertainties, the excess of scatter (calculated from NIST SRM-612), and the excess of variance (calculated from WC-1) after drift correction [9]. The systematic uncertainties considered are the decay constant uncertainties and the long-term reproducibility of the method (1.5%, 2σ, calculated from repeated measurements (n = 7) of ASH-15D between 2017 and 2019). Carbonate reference materials were measured for quality control: reference material B6 (41.86 ± 0.53 Ma and 42.12 ± 0.88 Ma) [10] was measured in sequences 1 and 2, whereas reference material ASH-15D (2.907 ± 0.210 Ma) [11] was measured in sequence 1. Results on the secondary reference materials indicate an accuracy and repeatability of the method of about 1.5-2%. Data were displayed in Tera-Wasserburg plots, and ages were calculated as lower concordia-curve intercepts using the same algorithms as Isoplot 4.14 [12]. All uncertainties are reported at the 2σ level. Analytical results, Concordia graphs, and a summary of the U-Pb results are reported in [1].
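The lower-intercept construction used for these ages can be illustrated with a simplified, unweighted version of the Tera-Wasserburg geometry. The actual ages used Isoplot's error-weighted algorithms [12]; the decay constants below are the standard values, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import brentq

L238, L235 = 1.55125e-10, 9.8485e-10   # U decay constants (1/yr)
U238_U235 = 137.818                    # present-day natural 238U/235U

def tw_concordia(t):
    """Tera-Wasserburg concordia point (x = 238U/206Pb*, y = 207Pb*/206Pb*) at age t (yr)."""
    x = 1.0 / np.expm1(L238 * t)
    y = np.expm1(L235 * t) / np.expm1(L238 * t) / U238_U235
    return x, y

def lower_intercept_age(x_data, y_data):
    """Fit an unweighted line through TW data and return the age where it
    intersects the concordia (no error weighting or uncertainty propagation)."""
    slope, intercept = np.polyfit(x_data, y_data, 1)
    miss = lambda t: tw_concordia(t)[1] - (slope * tw_concordia(t)[0] + intercept)
    return brentq(miss, 1e5, 4.0e9)  # search between 0.1 Ma and 4 Ga

# Synthetic example: mixing between common Pb (207Pb/206Pb ~ 0.84) and a 60 Ma
# radiogenic end-member; mixing plots as a straight line in TW coordinates.
xc, yc = tw_concordia(60e6)
f = np.array([0.1, 0.3, 0.5, 0.7, 0.9])          # radiogenic fraction of 206Pb
x, y = f * xc, 0.84 + f * (yc - 0.84)
print(f"Lower intercept age: {lower_intercept_age(x, y)/1e6:.1f} Ma")  # 60.0 Ma
```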
For carbon and oxygen isotopes of vein cements and carbonate host rocks, 50-100 μg of sample powder was extracted with a microdrill. Each powdered sample was reacted for four minutes with 100% phosphoric acid at 70 °C. The resulting CO2 was analyzed following the method of [13], using an automated Kiel Carbonate Device attached to a Thermo Electron MAT-252 mass spectrometer (Thermo Fisher Scientific). For calibration, international carbonate standards were used.

Δ47 measurements were performed at the California Institute of Technology (USA) in three different analytical sessions (May to July 2019) with an automated acid digestion and gas purification device coupled to a dual-inlet Thermo MAT253 [14]. Samples were weighed into silver capsules (~8 mg) and reacted in a common phosphoric acid bath (~103%) for 20 min at 90 °C under static vacuum. The evolved CO2 was passed through an ethanol/dry ice U-trap (~-80 °C) before being collected in a liquid-nitrogen-temperature (-196 °C) U-trap. Following the 20 min reaction period, the collected CO2 was thawed, entrained in helium, and carried through a Porapak Q 120/80 mesh gas column held at -20 °C using He as the carrier gas. The purified CO2 was analyzed using a Thermo Scientific MAT 253 mass spectrometer set to collect masses 44-49; mass 48 was monitored only to detect any hydrocarbon contaminants. δ18O and δ13C data were also acquired as part of each Δ47 analysis and were reported relative to the PDB reference frame, based on the calibrated composition of the laboratory working gas and the correction scheme and constants from [15]. To account for the temperature dependence of oxygen isotope fractionation between CO2 gas and carbonate resulting from the reaction with phosphoric acid at 90 °C, a fractionation factor of 1.00811 was used for calcite [16]. The raw Δ47 data were corrected for instrument non-linearity and scale compression [17] using several heated (at 1000 °C) and equilibrated (at 25 °C) gases of various bulk isotopic compositions that were run during each session. These gases were used to convert measurements into the interlaboratory absolute reference frame [17]. To guarantee the accuracy of the Δ47 data, we routinely analyzed two carbonate reference materials (Carrara marble and TV04); one of these standards was typically analyzed once for every five analyses of the unknown samples to check procedural analytical stability and accuracy and to determine the long-term external reproducibility of our measurements. The Δ47 values obtained for these carbonates over the course of this study are Δ47-CDES25 = 0.409 ± 0.016‰ (1SD, n = 10) for Carrara and Δ47-CDES25 = 0.666 ± 0.011‰ (1SD, n = 8) for TV04, i.e., within the accepted Δ47 values for Carrara (Δ47-CDES25 = 0.405‰) and TV04 (Δ47-CDES25 = 0.655‰). Finally, the corrected Δ47 values were converted into temperatures using the composite Δ47-T calibration of [5], which has been shown to be appropriate for calcite and dolomite between 0 and 300 °C. The oxygen isotopic composition of the water (δ18Owater) from which the carbonates precipitated was calculated for each estimated T(Δ47) using the bulk δ18Ocarb values and the calcite-water fractionation equation from [6].

For 87Sr/86Sr ratios, powdered samples of calcite cements and host rocks were dissolved in 5 mL of 10% acetic acid and then centrifuged. The supernatant was dried and dissolved in 1 mL of 1M HNO3. The solid residue obtained after evaporation was diluted in 3 mL of 3M HNO3 and then loaded into chromatographic columns to separate the Rb-free Sr fraction, using SrResin™ (crown ether (4,4'(5')-di-t-butylcyclohexano-18-crown-6)) and 0.05M HNO3 as eluent. After evaporation, samples were loaded onto a Re filament with 2 μL of Ta2O5 and 1 μL of 1M phosphoric acid. Isotopic ratios were measured on a TIMS-Phoenix mass spectrometer (Isotopx) using a dynamic multicollection method, over 10 blocks of 16 cycles each, maintaining a 88Sr beam intensity of 3 V. The obtained ratios were corrected for 87Rb interference and normalized to a reference value of 86Sr/88Sr = 0.1194 to correct for possible mass fractionation during sample loading and analysis. The isotopic standard NBS-987 was analyzed 6 times, yielding an average value of 0.710243 ± 0.000009 (standard deviation, 2σ); the NBS-987 data were used to correct the sample ratios for standard drift from the certified value. The analytical error in the 87Sr/86Sr ratio was 0.01% (two standard deviations), and the internal precision is 0.000003. Sr procedural blanks were below 0.5 ng.
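Since the calibration of [5] and the fractionation equation of [6] are not reproduced in the text, the sketch below uses published stand-ins, the Anderson et al. (2021) composite Δ47-T calibration and the Kim and O'Neil (1997) calcite-water fractionation, to illustrate the two conversion steps; with these choices and an assumed calcite δ18O of -7.8‰ VPDB (within the Group 3 range), it reproduces the 66 °C and +1.9‰ SMOW values reported above, but the exact equations used in the dataset may differ:

```python
import math

# Stand-in constants, assumed for illustration (not quoted from the source):
# D47 = A / T^2 + B with T in kelvin (Anderson et al., 2021 composite calibration).
A, B = 0.0391e6, 0.154

def t_from_d47(d47: float) -> float:
    """Carbonate growth temperature (degC) from a corrected D47 value."""
    return math.sqrt(A / (d47 - B)) - 273.15

def d18o_water(d18o_calcite_vpdb: float, t_celsius: float) -> float:
    """Fluid d18O (permil VSMOW) from calcite d18O (VPDB) and temperature,
    using the Kim & O'Neil (1997) calcite-water fractionation as a stand-in."""
    d18o_vsmow = 1.03091 * d18o_calcite_vpdb + 30.91       # VPDB -> VSMOW conversion
    t_k = t_celsius + 273.15
    ln_alpha = (18.03 * 1000.0 / t_k - 32.42) / 1000.0     # 1000 ln(alpha) / 1000
    return (1000.0 + d18o_vsmow) / math.exp(ln_alpha) - 1000.0

t = t_from_d47(0.494)          # Group 3 sample from Table 4
w = d18o_water(-7.8, t)        # assumed calcite d18O within the Group 3 range
print(f"T = {t:.0f} degC, d18O_water = {w:+.1f} permil VSMOW")  # ~66 degC, ~+1.9
```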
For the elemental composition, powdered samples of vein cements and host rocks were analyzed with a magnetic sector field Element XR (HR-ICP-MS, high-resolution inductively coupled plasma-mass spectrometer, Thermo Fisher Scientific); only the low-resolution (LR) and medium-resolution (MR) modes were used. 100 mg of each powdered sample was first dried at 40 °C for 24 h and then acid digested in closed polytetrafluoroethylene (PTFE) vessels with a combination of HNO3 + HF + HClO4 (2.5 mL : 5 mL : 2.5 mL v/v). Samples were evaporated and, for a double evaporation, 1 mL of HNO3 was added. Samples were then re-dissolved and diluted with Milli-Q water (18.2 MΩ·cm) and 1 mL of HNO3 in a 100 mL volumetric flask. A tuning solution of 1 μg L-1 Li, B, Na, K, Sc, Fe, Co, Cu, Ga, Y, Rh, In, Ba, Tl, and U was employed to optimize the sensitivity of the ICP-MS, and 20 mg L-1 of a monoelemental solution of 115In was used as internal standard. The reference materials are the BCS-CRM no. 393 (ECRM 752-1) limestone, JA-2 andesite, and JB-3 basalt. Precision is expressed as two standard deviations of a set of eight measurements of reference material JA-2. Accuracy (%) was calculated as the absolute value of the difference between the measured values obtained during the analysis and the certified values over a set of eight reference material analyses (reference material BCS-CRM no. 393 for major oxides and JA-2 for trace elements). The detection limit (DL) was calculated as three times the standard deviation of the average of ten blanks.

Ethics Statement

Nothing to declare.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Rotating membranes on G_2 manifolds, logarithmic anomalous dimensions and N=1 duality

We show that the $E-S \sim \log S$ behaviour found for long strings rotating on $AdS_5\times S^5$ may be reproduced by membranes rotating on $AdS_4\times S^7$ and on a warped $AdS_5$ M-theory solution. We go on to obtain rotating membrane configurations with the same $E-K \sim \log K$ relation on $G_2$ holonomy backgrounds that are dual to ${\mathcal{N}}=1$ gauge theories in four dimensions. We study membrane configurations on $G_2$ holonomy backgrounds systematically, finding various other Energy-Charge relations. We end with some comments about strings rotating on warped backgrounds.

1 Background and Motivation

A brief history of rotating strings and membranes

A recent advance in our understanding of the AdS/CFT duality was the proposal [1] that gauge theory operators with large spin are dual to semiclassical rotating strings in the AdS background. This original work was inspired by comments [2] concerning 'long' gauge theory operators with high bare dimension, and by the success in matching the anomalous dimensions of large R-charge operators with the spectrum of string theory on the Penrose limit of AdS_5 × S^5 [3]. String configurations naturally have energies on the 1/α′ ∼ λ^{1/2} scale, where λ is the 't Hooft coupling, and are therefore dual to operators with large dimensions. Rotating strings [1] were shown to reproduce the known results for large R-charge operators [3], as well as giving results for a new class of 'long' twist two operators. The principal factor that made the identification of these new operators possible was that a rotating string configuration in AdS space was obtained [1] with E − S ∼ f(λ) ln S, where E and S are the energy and spin of the configuration. These must then be dual to gauge theory operators with an anomalous dimension that depends logarithmically on the spin. Such operators were known from the operator approach to Deep Inelastic Scattering (D.I.S.) in QCD, where they appear in the OPE of electromagnetic currents. The twist two operators typically have the form

O_{μ_1 ··· μ_S} = Φ D_{(μ_1} · · · D_{μ_S)} Φ , (1)

where Φ(x) is a field in the theory, such as a field strength or a quark. The anomalous dimension of these twist two operators is responsible for violations of Bjorken scaling in D.I.S. at finite coupling [4]. It is perhaps surprising that the logarithmic dependence of the anomalous dimension on spin survives from the perturbative to the strong coupling regime in the 't Hooft coupling, and that no ln^k S or other corrections appear. Some corrections were shown to vanish in an important one loop string calculation [5]. These results [5] further clarified the connection with the large R-charge operators. In [5,6], solutions interpolating smoothly between the various configurations considered in [1] were found, and a natural proposal for some of the dual operators was given. In [11], non-stationary, pulsating configurations were considered and, using a WKB approximation, the corrected energies were found in the limit of large quantum numbers. These configurations were associated with generalisations of the operators with impurities studied in [3]. In [9], orbifolded geometries were considered using a collective coordinates approach, and some connections between gauge theories and integrable systems were pointed out. The paper [7] considers strings orbiting around AdS_5 black holes.
Their proposal is to understand the finite temperature dual system as a glueball melting into the gluon plasma, due to a transfer of angular momentum from the 'planetoid' solution [17] to the black hole. In [8], an interesting extension was proposed. Firstly, they analyse the behaviour expected when one considers string configurations in 'confining geometries', using only the general features of this type of geometry. They propose that the functional form has to change, when varying the size of the string soliton, from Regge-like (E ∼ S^{1/2}) to D.I.S.-like (E − S ∼ ln S). A key observation of theirs is that in this case, unlike the case of AdS_5 × S^5, the Regge-like behaviour will not simply be an effect of the finite volume in which the gauge theory dual is defined. Further, they study string solitons in Witten's model for QCD [18] and find, for a model constructed with near extremal Dp-branes, a very curious relation of the form E − S ∼ 1 − S^{(p−5)/(9−p)}. One result of our work will be to exhibit a Regge to logarithmic transition without any finite size effects. In [15], the study of strings in Witten's model was extended by considering pulsating, non-stationary configurations similar to those of [11]. Further, [15] studies pulsating membrane configurations in AdS_7. In [13], rotating membranes were studied in AdS_7 spaces, but no logarithmic behaviour for the anomalous dimensions was found; instead, power-like behaviours were displayed. In a very nice paper dealing with higher spin gauge theories [16], the authors also discuss membrane configurations and find similar results. Another result of the present paper is to obtain a logarithmic anomalous dimension for membranes in AdS_4 (or AdS_7). More recently, the results of [1] were reproduced using Wilson loops with a cusp anomaly [14]. For the relation of Wilson loops and large R-charge operators, see for example [10]. The paper [12] has also recently studied the anomalous dimensions on the field theory side. In this paper, we would like to build on this success by applying the methods of [1] to gauge-gravity dualities that are much less understood than the canonical AdS_5 × S^5 string theory with N = 4 super Yang-Mills (SYM) [19,20,21,22] and its immediate derivatives. One would like to understand dual descriptions of N = 1 gauge theories in four dimensions, which have more in common with observed particle physics. M-theory on a non-compact G_2 holonomy manifold is one way of realising such a gravity dual, as we shall now summarise.

Background to the G_2 holonomy duality

Progress in this direction originated from the duality between Chern-Simons gauge theory on S^3 at large N and topological string theory on a blown-up Calabi-Yau conifold [23]. This duality was embedded in string theory as a duality between the IIA string theory of N D6-branes wrapping the blown-up S^3 of the deformed conifold and IIA string theory on the small resolution of the conifold, with N units of two-form Ramond-Ramond flux through the blown-up S^2 and no branes [24]. The D6-brane side of the duality involves an N = 1 gauge theory in four dimensions living on the non-compact directions of the branes, at energies that do not probe the wrapped S^3. Before lifting this duality to M-theory, let us make a few further statements regarding the relation of the D6 branes to the field theory.
In order for the wrapped branes to preserve some supersymmetry, one has to embed the spin connection of the wrapped cycle into the gauge connection, which is known as twisting the theory. On the wrapped part of the brane, the gauge theory is topological [25]. Whilst the twisting allows the configuration to preserve supersymmetry, some of the supercharges will not have massless modes. Therefore the theory living on the flat part of the brane will preserve a lower fraction of supersymmetry than the unwrapped flat brane configuration. Apart from these fields, there will be massive modes, whose mass scale is set by the size of the curved cycle. When we probe the system with low enough energies, we find only the spectrum of N = 1 SYM. In the following, when we consider 'high energies', it is to be understood that these energies are still not high enough to probe the massive modes of the theory. The situation is not quite as straightforward as outlined above. This is because, for D6 branes in flat space, the 'decoupling' limit does not completely decouple the gauge theory modes from bulk modes [27]. In our case, we expect a good gauge theory description only when the size of the wrapped three-cycle is large, which implies that we have to probe the system with very low energies to get 3+1 dimensional SYM [28]. In this case, the size of the two-cycle in the flopped geometry is very near to zero, so a good gravity description is not expected. In short, we must keep in mind that the field theory we will be dealing with has more degrees of freedom than pure N = 1 SYM. It was discovered that the duality described above is naturally understood by considering M-theory on a G_2 holonomy metric [28]. In eleven dimensions, G_2 holonomy realises N = 1 supersymmetry purely gravitationally. One starts with a singular G_2 manifold that on dimensional reduction to IIA string theory corresponds to N D6 branes wrapping the S^3 of the deformed conifold. There is an SU(N) gauge theory at the singular locus/D6 brane. This configuration describes the UV of the gauge theory. As the coupling runs to the IR, a blown-up S^3 in the G_2 manifold shrinks and another grows. This flop is smooth in M-theory physics. The metrics will be discussed in more detail in the following sections. In the IR regime, the G_2 manifold is non-singular, and dimensional reduction to IIA gives precisely the aforementioned small resolution of the conifold with no branes and RR flux. Confinement emerges nicely in this picture, because the gauge degrees of freedom have disappeared in the IR along with the branes. The smooth M-theory physics of this process was systematised in [29], where it was shown that the transitions are in fact between three possible geometries, corresponding to the deformed conifold and two small resolutions of the conifold. This should be understood as a quantum mechanically broken triality symmetry. See also [30]. The M-theory lift of the IIA duality of [24] was arrived at independently in [31], from the perspective of studying the M-theory geometry describing four dimensional gauge theory localised at ADE singularities [32]. The moral of these discoveries would seem to be that special holonomy in eleven dimensions is a natural way to formulate the dual geometry of gauge theories living on wrapped D-branes. This approach was further pursued in, for example, [33,34,35]. The G_2 metrics describe the near horizons of branes, as opposed to the full brane supergravity solution, because they are not asymptotically flat.
We cannot generically take a further near horizon limit of the metric, r → 0 typically, because this would spoil the special holonomy and therefore the matching of supersymmetries. Working within this paradigm, we shall consider rotating membranes on eleven dimensional backgrounds R^{1,3} × X_7, where X_7 is a cohomogeneity-one non-compact G_2 manifold.

Motivation and contents of this work

To get going, we will first study rotating membrane configurations on AdS_4 × S^7 and then go on to study rotating membranes on G_2 holonomy manifolds. The first step is something of a warm-up, to show that one obtains sensible results by considering rotating membranes in a configuration that is fairly well studied. It is, however, severely limited by the fact that comparatively little is known about the dual theory. The second step, on the other hand, is particularly interesting as the duality takes us to pure N = 1 SYM theories. This is a theory that is understood and not so different from the gauge theories of nature. However, what is very poorly understood indeed is the precise nature of the duality with M-theory on G_2 holonomy spacetimes. The anomalous dimensions of operators with large quantum numbers exhibit very characteristic behaviours that seem to be captured by fairly simple string/M-theory configurations. It thus provides a window into the duality. Some rotating membrane configurations on AdS_7 × S^4 were discussed in [13,16]. We will show how a simple modification of their configurations gives logarithms in the energy-spin relation. This modification will later provide the inspiration for finding logarithms in the G_2 holonomy cases. Another previous use of membrane configurations in AdS_7 × S^4 was in providing dual descriptions of Wilson loops in [36]. Also, the presently known matchings of N = 1 SYM with G_2 holonomy M-theory come from considering membrane instantons as gauge theory instantons that generate the superpotential [32], membranes wrapped on one-cycles in the IR geometry that are super QCD strings in the gauge theory [31,37], and fivebranes wrapped on three-cycles that give domain walls in the gauge theory [31,38]. These matchings are essentially topological and do not use the explicit form of the G_2 metrics. In this sense our results, which do use the explicit form of various metrics, are of a different nature from previous studies of the duality. In section 2 we recall the basic formulae for supermembranes and fix notation. In section 3 we study rotating membranes on AdS spaces that are dual to gauge theories in three and four dimensions with varying amounts of supersymmetry. In particular we obtain various configurations with logarithmic anomalous dimensions. In section 4 we recall the existence of Asymptotically Locally Conical (ALC) G_2 metrics and their role in the N = 1 duality. We go on to study membranes rotating in these backgrounds. Again we obtain logarithmic anomalous dimensions, as well as a variety of other behaviours. In obtaining the logarithms we consider energy and charge densities of a non-compact brane. Section 6 contains a summary and discussion, a few comments regarding the dual operators to the membrane configurations, and open questions. The first appendix is independent of the rest of this work and sets up a formalism for studying strings moving on warped backgrounds. The second appendix explicitly checks the lack of supersymmetry of the G_2 holonomy configurations.
2 Membrane formulae

In this section we briefly summarise the action, equations of motion and gauge fixing constraints for membranes. The bosonic sector of the supermembrane action [39] may be written in Polyakov form (with the tension set to one),

S = ∫ d³σ [ (1/2) √−γ ( γ^{ij} ∂_i X^μ ∂_j X^ν G_{μν} − 1 ) + (1/6) ε^{ijk} ∂_i X^μ ∂_j X^ν ∂_k X^ρ C_{μνρ} ] , (2)

where i, j, k = 0 . . . 2 and μ, ν, ρ = 0 . . . 10. The worldsheet metric is γ_{ij}, the embedding fields are X^μ, and the eleven dimensional background is described by the spacetime metric G_{μν} and three-form field C_{μνρ}. The corresponding field strength is H = dC. The equation of motion for γ_{ij} sets it equal to the induced metric,

γ_{ij} = ∂_i X^μ ∂_j X^ν G_{μν} , (3)

and the action (2) is then equivalent on shell to the Dirac-Nambu-Goto action. The three diffeomorphism symmetries of the action may be gauge fixed by imposing constraints (4) on γ_{0α} and γ_{00}, where α, β = 1 . . . 2 are the spatial worldsheet indices and L² is an arbitrary constant to be fixed later. Using the equation of motion for γ_{ij} (3) and the gauge fixing conditions (4), one obtains the gauge-fixed action (5). Note that for backgrounds where the C field does not couple to the membrane, the second constraint in (4) is just the constancy of the Hamiltonian of the action (5). For the simple configurations we consider below, which have additional conserved charges, the equations of motion will almost follow from imposing the constant Hamiltonian constraint. However, in making ansätze for solutions one needs to be quite careful about consistency. Warped terms are particularly dangerous. The equations of motion for the gauge fixed action (5) contain terms like ∂_0 X^μ ∂_0 X^ν ∂_ρ G_{μν}(X), and these generally have to vanish in order for the equations of motion to be solved. This must be checked for each ansatz adopted.

3 Membranes rotating in AdS geometries

This section considers membranes rotating in various AdS backgrounds. These configurations are very straightforward generalisations of previous work, and we consider this section to be a warm-up for the G_2 cases to be considered below. We modify previous configurations slightly to obtain logarithmic terms in energy-spin relations. We call these new configurations type I, and the previously studied, non-logarithmic, membrane configurations type II. We emphasise that this distinction, and the existence of logarithms, is independent of the precise AdS geometry, so long as the internal manifold has a U(1) isometry.

Membranes rotating in AdS_4 × M_7

We start by studying membranes moving in AdS_4 × M_7. We will first take the maximally supersymmetric case, with M_7 = S^7, and then move on to more interesting geometries preserving N = 1, 2, 3 supersymmetries in the dual 2+1 dimensional theory. The dual field theories will be conformal and are, in some aspects, very well known. We will study two different types of configuration. The first type, type I, is similar to the original string configurations [1], and will give logarithmic anomalous dimensions. Type II configurations are essentially the membrane configurations that have already been studied [13]. The metric and three-form potential are given in (6); here B is the relative radius of AdS_4 with respect to the seven-manifold, whilst k is a number that can easily be determined from the equations of motion. Let us first study the case in which M_7 = S^7. In this case we find it convenient to write the metric as

ds²_7 = 4dξ² + cos²ξ (dθ² + dφ² + dψ² + 2 cos θ dφ dψ) + sin²ξ (dθ̃² + dφ̃² + dψ̃² + 2 cos θ̃ dφ̃ dψ̃) . (8)

We could equally well consider the case of the squashed seven-sphere, S̃^7; the supergravity system would then be dual to a conformal gauge theory with N = 1 supersymmetry.
In this case, the metric will read as in (9), with A^i being the SU(2) one-instanton on S^4 and ω_i the left-invariant one-forms of SU(2) (for details see for example [40]). We will see that we obtain the same results in both cases. The two types of solution mentioned above differ in the dependence of the AdS coordinates on the worldvolume of the membrane. The membrane is moving forward trivially in time, one direction is stretched along the radial direction of AdS, and the membrane is rotating either in the AdS space (spin) or in the internal M_7 space (R-charge). There is one extra direction left, with worldvolume coordinate δ, that distinguishes the membrane from a string. We must wrap this direction along a U(1) isometry. This can either be in the internal space (type I configuration) or in the AdS space (type II configuration). Thus, for the type I solutions the wrapped δ-direction of the membrane remains finite at infinity, and the long membrane limit is string-like. For the type II solutions, the wrapped direction is not stabilised asymptotically. This kind of distinction will play an important role below when we discuss membranes on G_2 manifolds. In constructing these solutions, it is important to check that the ansätze are in fact consistent. In practice, this constrains the values that one may give to the constant angular coordinates.

Solutions of type I: logarithms

As this is our first configuration, let us describe it clearly. We want to embed the membrane into spacetime such that it is moving trivially forward in time and is extended along the radial direction of AdS. We would then like to have the membrane rotating in the AdS space; for consistency of the ansatz it turns out that we cannot have the membrane rotating in the internal sphere at the same time, so the configuration will not have any R-charge. Finally, we wrap the membrane along a U(1) in the sphere,

ψ̃ = λδ , θ̃ = π/2 , φ̃ = 0 . (13)

Note that in (6), the size of the M_7 space does not change with the AdS radial direction ρ, and therefore the wrapped direction remains stabilised at infinity. This will be the main difference with the type II solutions below. We can check that two of the constraints (4) are satisfied, whilst the remaining constraint gives a relation between the parameters upon choosing L = 1/λ. We may now compute the action by substituting into the formulae of section 2; here P = 16π|B|/(2π)² is a normalization factor and ρ_0 is the turning point. Note that the term in the action associated with the three-form vanishes. There is a factor of four in the normalisation because of the periodicity of the integrand and the fact that the membrane doubles back on itself. The integrals defining the conserved energy and spin are written down by differentiating the Lagrangian. Now one needs to do the integrals. Fortunately, these integrals are exactly the ones considered for rotating strings [1], and therefore we may just read off the results from those papers. What one is interested in is the relationship between the spin and energy for large and small energy. In particular, for long membranes we will get the result E − S ∼ ln S. To do the integrals one uses the endpoint constraint (18), and one sometimes also needs to use the normalisation condition. It is perhaps not surprising that a membrane wrapped on a cycle of constant size has the same behaviour as a string.
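Since the type I integrals coincide with the rotating-string integrals of [1], the long-membrane logarithm can also be checked numerically. Below is a minimal sketch (an assumption-laden illustration, not the paper's own computation): κ = 1, the common prefactor √λ/(2π) is stripped, and the expected long-string behaviour is then (e − s)/ln s → 2, i.e. E − S ∼ (√λ/π) ln S.

```python
import numpy as np
from scipy.integrate import quad

# GKP-type rotating string in AdS_3: rho'^2 = cosh^2(rho) - w^2 sinh^2(rho),
# with the turning point at tanh(rho0) = 1/w.  Energy and spin integrals are
# written with kappa = 1 and the prefactor sqrt(lambda)/(2 pi) stripped.
def e_and_s(w):
    rho0 = np.arctanh(1.0 / w)
    h = lambda r: np.cosh(r)**2 - (w * np.sinh(r))**2   # rho'^2, vanishes at rho0
    def integral(g):
        # substitute rho = rho0 - u^2 to tame the 1/sqrt endpoint singularity
        f = lambda u: 2.0 * u * g(rho0 - u*u) / np.sqrt(max(h(rho0 - u*u), 1e-300))
        return quad(f, 0.0, np.sqrt(rho0), limit=200)[0]
    e = 4.0 * integral(lambda r: np.cosh(r)**2)
    s = 4.0 * w * integral(lambda r: np.sinh(r)**2)
    return e, s

# As w -> 1+ (long strings/membranes) the printed ratio tends slowly to 2.
for w in (1.01, 1.001, 1.0001):
    e, s = e_and_s(w)
    print(f"w = {w}:  (e - s)/ln(s) = {(e - s)/np.log(s):.3f}")
```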
Solutions of type II

We consider now a configuration that is similar to that of the previous subsection, but with the difference that the wrapped direction is in the AdS space and not the sphere; compare this with (11) and (13). The rotation must now be in the sphere only, as there are no directions left in the AdS space, and the remaining directions are held fixed. The first two constraints are satisfied as before, whilst (15) gives a modified relation. The action takes the same form as before; the limits of integration are zero and ρ_0, where ρ_0 is the solution of the endpoint constraint. The normalization constant is P = 8π|B|/(2π)², and the contribution of the C^{(3)} field vanishes as before. One can now write down the integrals defining the energy and the R-charge angular momentum; there is no room for spin in this case, because all the AdS directions have been used up. Again, we recognise these integrals from previous work, this time from rotating membranes [13]. Thus we may again just read off the energy-R-charge relations. For long membranes, these are of the form E = J + · · ·. A type II configuration for membranes in AdS_7 × S^4 has been discussed in [13]. Clearly, one can also write down a type I configuration in AdS_7 × S^4 and obtain a logarithmic E − S in that background.

The case of Q^{1,1,1}

We consider now the case where the internal manifold M_7 of equation (6) is Q^{1,1,1}. The interest of this configuration is that it provides an M-theory dual to a three dimensional N = 2 conformal field theory. This is an interesting field theory that can be thought of as describing low energy excitations living on M2 branes placed at the tip of an eight dimensional cone with special holonomy. The theory is described in terms of fields A_i, B_i, C_i, with i = 1, 2, with given transformation properties under the colour and flavour groups. Gauge invariant operators are of the form X = ABC and can be put in correspondence with supergravity modes in AdS_4. In addition, baryonic operators can be constructed. This theory was well studied in various papers [41], [42], [40]. The eleven dimensional configuration takes the same form as before; here again, k is a constant determined by the equations of motion. We can again consider two types of solution. We will be brief in this case, since the results are not very different from those of the previous subsections. Indeed, the main point here is that the existence of two types of energy-spin relation, one with logarithms and one without, is independent of the internal manifold, so long as it has a U(1) isometry around which we can wrap the membrane.

• Type I solutions. In this case the configuration is the analogue of the type I configuration above. The constraint again yields a turning point where dρ/dσ = 0, and the action takes the same form, now with normalization factor P = 16π/(2π)². As in the previous sections, the contribution of the C^{(3)} field vanishes. We can compute the energy, spin and R-charge angular momentum, and the results are essentially identical to those coming from equations (19)-(22). In particular, there is a logarithmic E − S relation.

• Type II solutions. In this case the solution is the analogue of the type II configuration above. The constraint gives a turning point ρ_0, where dρ/dσ = 0, and the normalization factor is P = 8π/(2π)². This time, the results are essentially the same as those coming from the previous type II configuration, of equations (28)-(29). It is not difficult to see from (9) that one may obtain the same results using the squashed seven-sphere, as we have the same cycles on which to wrap the membrane and rotate.
Thus we obtain membrane configurations dual to operators of an N = 1 theory in three dimensions. We can also consider the case of three dimensional N = 3 conformal field theories. These theories are dual to geometries of the form AdS_4 × N^{0,1,0}, where the manifold N^{0,1,0} has a metric with A_{1,(2)} = cos ξ σ_{1,(2)}, 2A_3 = (1 + cos²ξ)σ_3, and ω_i, σ_i left-invariant forms on the different SU(2)s. This type of field theory is interesting because it has the same field content as N = 4 theories, but there are fermionic interactions that preserve only N = 3. Following the steps above, one can find type I and type II solutions for these metrics. Everything will work as before, with different numerical coefficients. It should be clear by now that all that is needed to obtain a logarithmic configuration is a stabilised circle on which to wrap the membrane. As we will see below in the section on G_2 manifolds, this does seem to work in more general situations than AdS product spacetimes.

Membranes moving in warped AdS_5 × M_6 spaces

We now consider membranes moving in a geometry that is dual to an N = 2 supersymmetric conformal field theory in four dimensions, as opposed to the three dimensions of the previous cases. The eleven dimensional configuration was written in [43] and represents M5 branes wrapping a hyperbolic two-manifold. The geometry has the form of a warped product of five dimensional AdS space times a six dimensional manifold. This should be thought of as M5 branes wrapping some compact (hyperbolic) cycle inside a Calabi-Yau two-fold. The metric is, schematically, a warped product (for a detailed discussion see [44]), where AdS_5 is written in the coordinates (ρ, t, ξ_1, ξ_2, φ) as usual, ∆(θ) = 1 + cos²θ, and the R_i are constants. The C^{(3)} field has a corresponding schematic form. We consider a solution in which the remaining angles take values of 0 or π/2, as in previous configurations. Note that θ = π/2 is necessary to solve the equations of motion. There is no R-charge in this configuration. Warped solutions are discussed in more detail in Appendix A below. The key point about this configuration is that the warping factor is unimportant, because we fix a value of θ so that it just becomes an overall number. The configuration is of type I, because the wrapped direction, ψ = λδ, is in the M_6; therefore we expect to get relations of the form E − S ∼ ln S, and indeed this is what one finds upon doing the calculations. The integrals that emerge are, up to numerical coefficients, the same as for the type I configurations we studied above.

Membranes moving near an AdS black hole

For completeness, we now briefly consider the case of membranes orbiting in an eleven dimensional geometry given by an AdS black hole times the seven-manifold. With the appropriate function in the metric and B = R, the four dimensional part is a black hole in AdS_4. A very nice physical description of the AdS/CFT correspondence for strings orbiting around black holes was given in [7]. We shall limit ourselves to discussing the membrane configurations and energy-spin relations. As previously, we will consider two types of configuration, type I and type II. One can construct the expression for the membrane constraint in each case. Upon requiring dr/dσ to vanish at the endpoints, we obtain two different values r_min, r_max, that is, the integration limits in the action when we change variables from σ to the radial coordinate r.
Physically, this is the statement that the membrane is entirely outside the event horizon and is therefore orbiting rather than rotating. The expressions for the action, energy and spin in the type I case, and for the action, energy and R-symmetry angular momentum in the type II case, follow as in the previous subsections. Let us study the explicit expressions for the energy and the spin of the type I configurations. It is convenient to make the choice of parameters M = R = κ = 1; the results will remain true at least for a small interval of values of M around M = 1. We can see that for values of the parameter ω close to one, corresponding to long membranes, the functions inside the square roots are positive on the interval (r_h, r_+), where r_h is the root of r³ + r − 1, and the roots of −(ω² − 1)r³ + r − 1 are r_− and r_+, both positive and larger than r_h, together with a third, negative root r_*. We now want to study the approximate expressions for these integrals in the case of long membranes, that is, membranes extended over the interval (r_−, r_+). Evaluating the approximate expressions for the integrals in the case of long membranes, we get a relation of the form E − kS ∼ S³.

4 Rotating membranes on G_2 manifolds

The duality with ALC G_2 metrics

Partially motivated by the developments described in the introduction, there has been significant recent progress in constructing new cohomogeneity-one manifolds with G_2 holonomy [45,30,46,47,48,49,50,51], generalising the manifolds that have been known for some time [52,53]. When the M-theory flop was discussed in [28], the only known G_2 metric with the necessary symmetries to describe wrapped D6 branes in type IIA was asymptotically a cone over S^3 × S^3 [52,53]. The essential point is that one S^3 collapses at the origin whilst another does not. Thus, depending on which S^3 the M-theory cycle is contained in, one gets either a IIA reduction that is singular at the origin (branes) or a non-singular reduction (no branes). However, in these metrics the dilaton diverges at infinity after reduction, so they are unsatisfactory IIA backgrounds. The authors of [28] thus postulated the existence of two new types of G_2 holonomy metric to fix this problem. These metrics should not be Asymptotically Conical (AC) but Asymptotically Locally Conical (ALC), that is to say that at infinity there should be a circle with a stabilised radius. This circle will be the M-theory circle, and the corresponding IIA dilaton will be well-behaved. The two metrics would correspond to the stabilised U(1) being contained within either the collapsing S^3 or the non-collapsing S^3, corresponding to good D6-brane or good non-D6-brane solutions, respectively. This picture was essentially realised with the discovery of explicit ALC G_2 metrics. G_2 metrics reducing to D6-branes wrapping the deformed conifold were discussed in [45,48,46]; these are called the B_7 family of metrics. Metrics reducing to the small resolution of the conifold with fluxes were discussed in [49,51,50]; these are called the D_7 family. Transformations of these metrics under the broken triality symmetry were discussed in [51]; this does not change the radial behaviour or the symmetries. The situation is not quite as anticipated by [28]: among the known G_2 metrics, no single geometry both describes the D6-brane side in the UV (B_7 family) and admits a good IIA reduction in the IR (D_7 family). But the desired flow should exist, which is enough to establish the IIA duality from M-theory with a well-behaved dilaton.
This assumes that the quantum smoothing of the process continues to occur as it did in the AC case [29]. We will consider membranes rotating on all of the geometries discussed in this subsection. The D_7 family are, strictly speaking, the gravity duals that describe the field theory in the IR. The precise role of the B_7 metrics in the duality is unclear, although it could well be related to the lack of brane-bulk decoupling discussed in the introduction. The B_7 metrics (57) are written in terms of left-invariant one-forms σ_i and Σ_i on two SU(2)s; the definitions for the Σ_i are analogous to those for the σ_i, but with (θ_1, φ_1, ψ_1) → (θ_2, φ_2, ψ_2). These metrics are locally asymptotic to cones over S^3 × S^3. There is a finite size S^3 bolt at the origin. There is a two parameter family of such G_2 metrics, called B_7 in the classification of [49,50]. The radial functions satisfy the first order equations (59) of [45,46]. Two exact solutions are known. One is the asymptotically conical (AC) solution of [52,53], which has SU(2)^3 × Z_2 symmetry. The other is only Asymptotically Locally Conical (ALC), with a stabilised U(1) at infinity [45,48]; it has SU(2)² × U(1) × Z_2 symmetry. The remaining metrics in this family are only known numerically. Fortunately, we only require the asymptotics at the origin and at infinity, which are easily calculated from (59). As r → 0 we have the expansion (60), where q_0 and R_0 are constants. Note that b(r) and c(r) collapse, whilst a(r) and d(r) do not. As r → ∞ we have the expansion (61), where q_1 and R_1 are constants that will be functions of q_0 and R_0. Note that c(r) is stabilised whilst the others diverge linearly. The expressions are needed to second order because we will be interested in the subleading terms of various integrals.

Commuting U(1) isometries and membrane configurations

The metrics (57) have three commuting U(1) isometries. Using the Euler coordinates (58), these can canonically be taken to be generated by shifts of the Euler angles. The existence of three commuting U(1) isometries is very useful for considering rotating membranes. By placing the directions of rotation and wrapping along these U(1)s, most of the equations of motion are trivially satisfied as a statement of conserved charges. The remaining equation of motion, for the radial direction, then follows from a first order gauge fixing constraint, as discussed above. However, the canonical U(1)s are not the most sensible for our purposes. Consider instead redefined angles ψ_3, ψ_4, φ_3, φ_4; note that ψ_3, ψ_4 then have a range of 8π whilst φ_3, φ_4 have a range of 4π. Three commuting isometries are now ∂_{φ_3}, ∂_{φ_4}, ∂_{ψ_3}. As we shall see shortly, the first two U(1)s are contained in S^3s that do collapse and do not collapse, respectively, at the origin. In the IIA brane picture, the S^3 that does collapse surrounds the brane, whilst the S^3 that does not collapse lies inside the brane. In order for the dual field theory to be four dimensional, one must consider energies such that the finite S^3 is not probed. The charge generated by rotations along ∂_{φ_3}, inside the brane, will be denoted K_1, whilst the charge generated by rotations along ∂_{φ_4}, outside the brane, will be denoted K_2. In the B_7 family, the U(1) generated by ∂_{ψ_3}, the circle that is stabilised at infinity, is contained within the collapsing S^3 at the origin; call this charge K_3. Note that the isometries transverse to the membrane do not have the interpretation of R-charge, because the theory is N = 1. We cannot have all three charges at once, as we need to use one of the isometries to wrap the membrane. This last point is necessary for the wrapping direction to drop out of the action integral.
One can also extend the membrane in the xyz plane, and indeed such a configuration will be considered in a later section. There are then three possible configurations for the nontrivial directions, shown in Table 1. If the stabilised circle generated by ∂_{ψ_3} is considered as the M-theory circle, then configurations II_B and III_B reduce to rotating D2-branes or a D0-D2 state, depending on whether there is a rotation along the M-theory circle or not, whilst the I_B configuration reduces to a rotating fundamental string. At this point, we need to take into account the Z_N quotient of the G_2 manifold that was mentioned above. The effect of this quotient is to reduce the period of the stabilised angle by a factor of N. The target space metric that is seen by the membrane follows from (57). It is easily checked that the γ_{0α} constraints in (4) are satisfied. The remaining constraint, choosing the free constant L = 1/λ, then implies the relation (67). A further constraint must be imposed: the condition that the membrane doubles back on itself at some radius r_0, where dr/dσ|_{r_0} = 0. Other configurations are possible, in which the rotating directions or the wrapped direction is some linear combination of the U(1)s. However, these configurations will not generically satisfy the constraints, because the induced metric will have cross terms. Another possibility is to take different U(1) subgroups of the original SU(2)s. The present choices would appear to be the most natural, and we will not consider other subgroups here. Before moving on, one must check the consistency of the ansätze described here. Checking the equations of motion, we find that all are indeed consistent.

Energy and other conserved charges

The conserved charges naturally associated with the configuration are obtained by differentiating the action I of (2) with respect to the angular velocities; the κ derivative is taken at fixed (r_0, ω, ν_2, ν_3), and similarly for the other derivatives. Let us do this in the three cases. Note that, in passing from an integral over σ to an integral over r, we multiply by four because of the periodicity of the integrand and the fact that the membrane doubles back on itself. We use the constraint (67) to eliminate κ. The different numerical factors in the different cases are due to the different ranges of the angle about which the membrane is wrapped. These integrals may be expanded for large and small r_0 using the expansions (60) and (61). In the various integrals there are usually two constants, such as ω and ν_2 in the I_B case. These are nontrivially related through the normalisation constraint (68). Here we will only consider the cases where one of the constants is zero, corresponding to a rotation in only one direction. In these cases we see that the remaining constant drops out of the integral, and the normalisation constraint does not need to be evaluated. For short membranes, small r_0, one may use the Taylor expansions about the origin to evaluate the integrals. For long membranes, large r_0, one may only use the expansions about infinity to evaluate an integral if the integral diverges with r_0, because in this case the integral is dominated by the contributions at infinity. One then needs to check that there is no contribution from the interior of the integration range. Naively, the integrals for large r_0 are done as follows:

∫_Λ^{r_0} f(r, r_0) dr = r_0 ∫_{Λ/r_0}^{1} f(u r_0, r_0) du ≈ ∫_0^1 [ F_m(u) r_0^m + F_{m−1}(u) r_0^{m−1} + · · · ] du , (85)

where Λ is some cutoff and we ignore contributions from this end of the integral. The final expression represents an expansion of f(u r_0, r_0) about r_0 = ∞. In the final result of this calculation, we may trust any terms that diverge as r_0 → ∞.
One thing that may go wrong is that the leading order coefficient, F_m(u), in the final equation of (85) integrates to zero, meaning that there is no r_0^m power term. In this case one should do the full integral numerically, to check whether the vanishing is a result of power expanding inside the integral and to see what the leading order coefficient is. Alternatively, one can try to do the integral exactly without expanding the integrand fully. Doing this is crucial to obtain the logarithmic term in the next subsection. Given the resulting expressions for E and the Ks, one then eliminates r_0 to obtain the results of Table 2. In this table, k is used to denote positive numerical factors. Dependence on R_0, R_1, N is kept explicit. It turns out that there is no dependence on q_0, q_1 to the order considered in the table. The results in Table 2 have a physical interpretation. Note that there are four types of leading order behaviour. Use K to denote a generic charge and R to denote either R_0 or R_1.

• E = kR^{1/2}K^{1/2}: This is the well known Regge relation for strings in flat space. It arises when the δ-direction of the membrane is wrapped on a stabilised U(1) and when the direction of rotation is a U(1) that is not stabilised (i.e. collapsing if we are at the origin, or expanding if we are going to infinity).

• E = kK^{2/3}: This is the result for membranes rotating in flat space. It arises when neither the δ-direction nor the direction of rotation is stabilised.

• E − K/R = kRK^{1/3}: This result arises for long membranes when the δ-direction is not stabilised but the direction of rotation is stabilised. Interestingly, this relation was also observed in a different configuration [13] in AdS_7 × S^4, suggesting perhaps that it is quite generic. This result also arises for short membranes when the δ-direction collapses but the direction of rotation does not collapse.

The behaviour of the energy-charge relationship would thus seem to depend on whether the wrapped circle and the circle of rotation are stabilised. In the above configurations one case is missing: there is no configuration in which both the δ-circle and the circle of rotation fail to collapse. For short membranes, we will find such a configuration in the D_7 metrics below, because more circles are non-shrinking at the origin. However, within the set of configurations we have considered thus far, we cannot find a configuration in which two circles are stabilised at infinity, because the G_2 metrics only have one stabilised circle. We might expect such a configuration to give logarithms, by analogy with the previous section, where we considered membranes rotating on AdS_4 × S^7: to move from a relationship of the form E − K ∼ K^{1/3} to a relationship E − K ∼ ln K, we changed the wrapped circle to make it stabilised. To achieve this in the present case, we need to use the non-compact directions. However, due to the tension of the membrane, one cannot simply have a closed membrane in flat space. The resolution is to consider an infinite membrane in the non-compact directions and to study the energy density of the configuration. Insofar as the equations are concerned, this is effectively the same as wrapping the membrane.

Using the non-compact directions: logarithms

Writing the eleven dimensional metric as (1/l²_{11}) ds²_{11} = −dt² + dx² + dy² + dz² + ds²_7, the following configuration, which we denote IV_B, has the desired feature of having both a wrapped and a rotating direction asymptotically stabilised. The nontrivial coordinates are t = κτ, ψ_3 = ν_3 τ, r = r(σ), x = λδ.
The remaining coordinates are trivial. One might also want to consider having the rotation in the non-compact directions, but this seems to cause difficulties with the implementation of the endpoint constraint (67). The target space metric seen by the membrane simplifies accordingly, and the action, energy and charge per unit length along the non-compact x direction are easily worked out. In the large membrane limit, r_0 → ∞, these integrals are dominated by their behaviour at r_0, thus we may expand the integrand and evaluate only at r_0. We substitute the expansion of c(r) to second order into the integrand and evaluate the integral. Expanding the integrand fully before integrating will not give the correct answer, as no logs will appear. The integrals give, to leading and subleading order, expressions involving K(x) and E(x), the complete elliptic integrals of the first and second kind. The constants k appearing at this stage are equal, but below we use k to denote any constant, with dependence on r_0 and R_1 kept explicit. In order to evaluate these integrals we need the asymptotics of the elliptic integrals as x → ∞; these formulae are easily derived by expressing the complete elliptic integrals as hypergeometric functions and then using the Pfaff and Gauss theorems for hypergeometric functions [54] (see also the numerical check below). The leading behaviour of the energy and the difference of the energy and the charge then combine to give a new kind of behaviour for long membranes, of the form E − K ∼ ln K. This behaviour is different from the behaviours of the previous section, because both the direction of wrapping and the direction of rotation are stabilised as we go to infinity. For short membranes with this configuration, we get E = kN^{1/2}K_3^{1/2} + · · ·, as expected for a membrane where the δ-direction is stabilised but the rotation direction collapses at the origin. These solutions thus realise a transition from Regge behaviour for short membranes to logarithmic behaviour for long membranes, without finite size effects [8]. Another way of getting around the fact that a closed tubular membrane in flat space cannot exist as a static solution, due to the membrane tension, is to consider a pulsating membrane solution, analogous to the well-known pulsating closed string solution. Such a solution will not be particularly straightforward to construct in the present context. It may, however, have a tunable amplitude, in which case one could make the energy of the pulsations negligible compared with the energy of the rotation, and the calculations of this section would then go through as the dominant effect.

Metric formulae

The D_7 metrics can be written in the form (99), where Σ_i, σ_i are left-invariant one-forms on the SU(2)s, as previously. The six radial functions are not all independent:

g(r) = −a(r)f(r) / (2b(r)c(r)) , g_3(r) = −1 + 2g(r)² . (100)

None of the radial functions are known explicitly, although the asymptotics at the origin and at infinity are known. The asymptotics are found by finding Taylor series solutions to the first order equations for the radial functions, which are given in [50]. As r → 0 one has the expansion (101), where q_0 and R_0 are constants. Note that a(r) and c(r) collapse and the other two functions do not. As r → ∞ we have the expansion (102), with constants R_1, q_1, h_1. Note that f(r) stabilises. Three constants appear to this order, whilst there were only two constants in the expansion about the origin. This just means that, for some values of these constants, the corresponding solution will diverge before it reaches the origin. In any case, we find no h_1 dependence in the results below.
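The elliptic-integral asymptotics invoked above for the IV_B configuration can be checked numerically. A minimal sketch follows, with the caveat that scipy's convention takes the parameter m = k² as argument, so the logarithmic point sits at m → 1 rather than at the x → ∞ of the convention used above:

```python
import numpy as np
from scipy.special import ellipk, ellipe

# Near m = 1 one has K(m) ~ ln(4 / sqrt(1 - m)) while E(m) -> 1; it is this
# logarithm in K that feeds the E - K ~ ln K relation for long membranes.
for m in (0.9, 0.99, 0.999, 0.9999):
    log_approx = np.log(4.0 / np.sqrt(1.0 - m))
    print(f"m = {m}:  K = {ellipk(m):.6f} ~ {log_approx:.6f},  E = {ellipe(m):.6f}")
```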
Membrane configurations

The situation is essentially the same as for the B_7 family of metrics. Again one has three commuting U(1) isometries, generated by ∂_{φ_1} ⊂ SU(2)_{σ,L}, ∂_{φ_2} ⊂ SU(2)_{Σ,L} and ∂_{ψ_1} + ∂_{ψ_2} ⊂ SU(2)_{D,R}. One difference, however, is that now ∂_{φ_1} generates a U(1) that does not collapse and ∂_{φ_2} generates a circle that does collapse, so there is no need to change variables to φ_3 and φ_4 as previously. In fact, such a change would not give a valid solution. We do, however, need to define the angles ψ_3 and ψ_4. Note that now ψ_3, ψ_4 have ranges of 8π, whilst φ_1, φ_2 have ranges of 2π. Three commuting U(1) isometries are then ∂_{φ_1}, ∂_{φ_2} and ∂_{ψ_3}. There are no branes in the reduction of these configurations to well-defined IIA solutions. The circles generated by ∂_{φ_1} and ∂_{ψ_3} do not collapse in the interior, and thus rotation in these directions corresponds to charges K_1 and K_2, respectively. The ∂_{φ_2} circle does collapse, and rotation about this circle will give a charge denoted K_3. Similarly to before, we fix the remaining angles; note that the value of ψ_4 is different. This value is needed to diagonalise the metric seen by the membrane and hence to satisfy the constraints. As in the previous subsection, there are three possible configurations for the nontrivial directions, shown in Table 3. The target space metric that is seen by the membrane, after doing the Z_N quotient, is written in terms of functions A(r), B(r), C(r), introduced in order to make the following formulae less ugly. The asymptotics for these functions follow from the limits (101) and (102) and the algebraic equations in (100), both as r → 0 and as r → ∞. The only nontrivial constraint from (4) now implies the analogue of the relation (67). The issue of consistency is more subtle in these cases than in the B_7 cases, because there are more cross terms in the metric. It turns out that, in order for the ansätze to be consistent, one can only have one of the angular momenta nonzero in any given solution. Thus, for example, in the type I_D solution one must have either ω_1 = 0 or ν = 0 in order for the configuration to solve the equations of motion. These are the configurations we shall consider below.

Energy and other conserved charges

Again, the differing numerical factors in the expressions below are due to differences in the ranges of the angles. The resulting integrals, (113)-(118), are performed in the small and large membrane limits in the same way as for the B_7 metrics. The results are presented in Table 4. Again, k denotes positive numerical constants, with dependence on R_0, R_1, q_0, q_1, N kept explicit. It is not surprising that a dependence on q_0 now emerges because, unlike in the B_7 cases, the principal interpretation of this parameter is as measuring the squashing of the bolt at the origin [50]. The behaviours observed are the same as for the B_7 metrics, except that there is one new possibility for short membranes. This arises when the δ-direction does not collapse at the origin, so the rotation is string-like, and the direction of rotation also does not collapse. There will be a dependence on q.

Using the non-compact directions again: logarithms

Writing the eleven dimensional metric as (1/l²_{11}) ds²_{11} = −dt² + dx² + dy² + dz² + ds²_7, we may do exactly the same calculations as we did before for the B_7 metrics.
The configuration that follows will be denoted IV_D: t = κτ, ψ_3 = ω_2 τ, φ_1 = φ_2 = 0, r = r(σ), x = λδ. The target space metric seen by the membrane simplifies as before, and the action, energy and charge per unit length along the x direction are easily worked out. Obviously, this is exactly the same as the B_7 case, but with c(r) → C(r). The asymptotic expansions to second order are the same for c(r) and C(r) if we let R_1 → R_1/2. Thus we get the same logarithmic result. For short membranes with this configuration, we get E − 2N K_3/(R_1 q_1) = k R_1/q + · · ·. As commented before, we might also expect to find a pulsating membrane solution with these energy-charge relations.

6 Summary and discussion

To start with, we considered an AdS_4 × M_7 spacetime. Depending on the holonomy of M_7, these are dual to 2+1 dimensional conformal field theories with a varying number of preserved supersymmetries. In all of these manifolds, we found that rotating membrane configurations may develop relations for the energy E, spin S, and R-symmetry angular momentum J of the form E − S ∼ ln S and E − J − S ∼ (1/J) ln²(S/J), as had previously been found for strings on various backgrounds. The same logarithmic results were found for membranes moving in a warped AdS_5 × M_6 geometry that is dual to a four dimensional N = 2 conformal field theory. We also recovered previous non-logarithmic results for membranes, and explained the difference between the logarithmic and non-logarithmic cases in terms of whether the direction wrapped by the membrane was stabilised at infinity or not. According to the correspondence between high angular momentum strings/membranes and 'long' operators [1], these rotating membranes should be dual to certain twist two operators in the corresponding conformal field theory, with anomalous dimensions given by the relation between energy (or conformal dimension), spin and J-charge calculated on the gravity side of the duality. These results point to the fact that, for geometries of the form AdS_p × M_q, it will be possible to find membrane/string configurations dual to 'long' twist two operators. Given these results, we were led to a very natural extension: geometries that are not of the form AdS_p × M_q. Section four of this work presents a very detailed study of membranes rotating in M-theory backgrounds of the form R^{1,3} × M_7, where M_7 is now a non-compact G_2 holonomy manifold. These backgrounds are thought to be dual to N = 1 SYM, which is a confining 'QCD-like' theory. The results of section four can be summarised as follows. We have found rotating membrane configurations that should be dual to operators with certain energy-angular momentum relations, using K to denote the angular momentum/dual charge; when continued to the large quantum number regime these may become E ∼ K^{1/2}, E ∼ K^{2/3}, E − K ∼ K^{1/3}, E − K ∼ K³ or E − K ∼ ln K. For the logarithmic cases, we considered energy and charge densities of a non-compact membrane. Some of these configurations seem to realise the proposal of [8] for rotating solutions in a confining geometry to exhibit a transition from Regge-like to D.I.S.-like behaviour without finite size effects. Several comments are in order. First of all, we consider these results to be interesting. Not many dynamical or quantitative tests of the duality between M-theory on G_2 manifolds and N = 1 SYM theory seem to exist. We hope that our results are a step towards an understanding of the duality that involves both a dynamical and a quantitative statement.
Indeed, the fact that we obtained results that look very much like they should correspond to anomalous dimensions of operators suggests that the energy of gravity states corresponds to the dimension of gauge theory operators. This is not at all obvious otherwise, given the lack of conformal symmetry and the lack of a holographic formulation of the duality that explicitly links bulk states with boundary operators. As well as the known behaviours, like E ∼ K^{1/2} and E ∼ K^{2/3}, which are Regge-like behaviours for short membranes, and E − K ∼ ln K, which is the D.I.S./twist two-like behaviour, we obtained other relations, such as those of the form E − K ∼ K^{1/3} and E − K ∼ K³. The first type of relation appeared previously for strings moving in Witten's QCD confining model [8]. The second type does not seem to have been previously studied. It is not clear which 'QCD-like' operator will be dual to these last two configurations. We must keep in mind that, in the gravity approximation, M-theory on a G_2 manifold is not dual to pure N = 1 SYM. Indeed, Kaluza-Klein and bulk modes are not decoupled from the 3+1 gauge theory, a feature that seems to afflict any study involving D6 branes. One might speculate that the logarithmic configurations we found on the G_2 spacetimes could be related to large Lorentzian spin operators in field theory via Wilson lines. Wilson lines are closely related to the twist two operators of the form (1); see for example [14] and references therein. The membrane configurations in question, called IV_B and IV_D above, form an infinite line in the non-compact directions. Some related comments were made in [16]. Finally, in the following appendices, we set up a formalism to study certain string configurations on warped AdS backgrounds, which are rather general confining backgrounds, and we end by explicitly checking the non-supersymmetry of the membrane configurations we considered on G_2 backgrounds.

A Strings on warped backgrounds

One can check that the derivative of the constraint can be split up to give the second order equations of motion. We can consider special cases of the previous configurations. Consider first the case where the warping angle ξ is taken to be a constant, ξ_* = π/2; note that this specific value is needed to solve the equations of motion. There is no R-charge in this configuration. The computations are very similar to the cases analysed in [1], and we get the same result, with operators satisfying E − S ∼ ln S. In addition, one can consider the case in which the coordinate ρ = ρ_* = 0 is constant, again with a value specified by the equations of motion. In this case, the equations of motion reduce to

ξ′′ = ν² cos ξ sin ξ , ξ′² + ν² cos²ξ = g_4^4 κ² ,

from which the turning point follows, and we can compute the energy and angular momentum for this configuration. As we have done in previous sections, we expand the integrals above to find the relation between energy, spin and angular momentum for long and short strings. After doing the expansion, we notice that the energy and angular momentum do not diverge for long strings. This is a new type of behaviour. Even though this geometry is very similar to the one described in section 4.1 of the paper [1], we have here a warping factor. We find that the relation for long strings involves a numerical constant k. One can also consider rather general warped backgrounds in which the '...' directions can be whatever one wants; the string will not be moving in these directions.
Backgrounds of this form are interesting since they have the general form of gravity duals to gauge theories in 3+1 dimensions with a low number of supersymmetries, which may exhibit confinement. Consider a string configuration that could be interpreted as a spinning string in the 3+1 dimensional part of the geometry: r = r(σ), t = κτ, φ = ωτ, ρ = ρ(σ). Integrating the equations of motion, we will obtain a relation between the variables that we can substitute into the original action, and we can then follow the procedure of the previous sections of the paper. This seems like an interesting direction to investigate in the future. It seems possible that one might obtain logarithms in these types of configuration.

B Conditions for supersymmetry of rotating membranes

We do not expect our configurations to be supersymmetric, given the time dependence and the minimally supersymmetric background. However, for completeness, we check this explicitly. Let ε generate a supersymmetry of the background spacetime metric, i.e. let it be a Killing spinor. This supersymmetry will be preserved by the worldsheet if a projection condition involving a matrix Γ_{M2} holds [56,39,57], where Γ_{μνσ} is the standard antisymmetric combination of eleven dimensional Dirac matrices appearing in Γ_{M2}, and γ_{ij} is the induced metric on the worldsheet. From here one uses the membrane configurations of Table 1 to calculate Γ_{M2}. For example, for the I_B configuration one obtains the matrix (153), which is easily seen not to commute or anticommute with the projectors of equation (151); therefore no supersymmetries are preserved. The same is the case for the II_B, III_B and IV_B configurations. The D_7 cases are a little more complicated, because the parallel spinor [49] does not have constant coefficients. However, it will be sufficient for us to know that the parallel spinor satisfies Γ_{2536} ε = ε, where we are using tangent space indices and the vielbeins are

e⁰ = dr , e¹ = a(Σ_1 + gσ_1) , e² = a(Σ_2 + gσ_2) , e³ = c(Σ_3 + g_3 σ_3) , e⁴ = bσ_1 , e⁵ = bσ_2 , e⁶ = fσ_3 . (156)

One then calculates Γ_{M2} using the configurations of Table 3. For the I_D configuration, for example, one obtains an expression (157) for Γ_{I_D}; one can now see that Γ_{I_D} and Γ_{2536} do not commute or anticommute, and therefore no supersymmetry is preserved. It is easy to check that the same occurs for the other configurations II_D, III_D and IV_D. Thus, as expected, none of our configurations are supersymmetric.
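For reference, the projection condition used above is the standard one for a bosonic membrane embedding. In common conventions, which may differ from those of [56,39,57] by signs and normalisation, it reads:

```latex
% Standard kappa-symmetry projection for a bosonic membrane worldvolume
% (a sketch in common conventions; signs/normalisation may differ from
% refs. [56,39,57]):
\Gamma_{M2}\,\epsilon = \epsilon ,
\qquad
\Gamma_{M2} \equiv \frac{1}{3!\,\sqrt{-\det\gamma}}\,
\epsilon^{ijk}\,\partial_i X^{\mu}\,\partial_j X^{\nu}\,\partial_k X^{\rho}\,
\Gamma_{\mu\nu\rho} ,
```

with γ_{ij} the induced metric; Γ_{M2} then squares to the identity precisely when γ_{ij} is the induced metric, so that (1 ± Γ_{M2})/2 are projectors.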
2014-10-01T00:00:00.000Z
2002-10-22T00:00:00.000
{ "year": 2002, "sha1": "772bd59b1ac2386e15bec0a8e0a9c71a4019dbef", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/0210218", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "18ccf7673e84b6e48d588fa10587984809b878a8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
210262185
pes2o/s2orc
v3-fos-license
Complex for experimental research of elastic wave interactions with ice layer An adjustable pneumatic generator of acoustic signals with shock excitation was developed. Measuring and computing complex was also created to investigate elastic acoustic wave propagation along ice surface. Experiments on low-frequency acoustic signal propagation from the pneumatic generator were carried out in «water-ice-air» system. The possibility to apply the developed measuring and computing complex for physical modeling of acoustic wave propagation from earthquake sources along ice cover was confirmed. Introduction The paper [1] experimentally proves that deformation-acoustic activity which appears before earthquakes may be considered as a complex precursor of a seismic event. One of the methods to record earthquake precursors is the method of acoustic elastic wave recording in the range from 0 to 10 kHz in small water pools [2][3][4][5]. In Arctic conditions, there is a necessity to record earthquake sources located under the ice far from coastline. Such a question statement requires recording of hydroacoustic waves taking into account their interactions with ice surface. The papers [6,7] present mathematical models for acoustic signal propagation taking into account reflections at the boundary of two mediums. However, applicability of such models in practice requires additional investigation with consideration to real conditions. The aim of the paper is the development of a measuring and computing complex to investigate the interactions of elastic waves with ice layer in real conditions. In order to do that, we need to solve the following tasks: to develop instrumentation imitating earthquakes, to create a measuring and computing complex for investigation of acoustic wave propagation along an ice layer, to make experimental research of elastic wave propagation at the boundary of two mediums, «water-ice». Elastic wave propagation at the boundary of several mediums is a complex mathematical problem. At the present time, mathematical models for elastic wave propagation along ice cover of water mediums are being developed. One of the promising investigation directions is based on the application of directed Green's function method [6,7]. However, such models lack the knowledge of initial conditions and real natural medium properties (ice) for practical applications. Thus, to obtain adequate results and to verify mathematical models, we need natural experimental researches. Elastic wave generator To generate an elastic wave in an ice layer, pulse powers of several kW are required. Generators of GP-24 type are capable of generating pulse shock waves of such power [8]. A significant disadvantage of such generators is large dimensions, weight and energy consumption. For example, application of GP-24 generator requires an energy source of 5 kW and the generator weight is 120 kg. An air pneumatic generator can be used as a source of powerful shock wave [9]. The generator physical configuration is shown in Fig. 1. The generator operates from compressed air of several atmospheres in auto vibrating mode. When pressure reaches the critical value, air bursts from the generator interior for 100-200 ms with vibration frequency of 30-70 Hz. Such an air generator does not need an external power supply. In order to investigate elastic wave propagation, we suggest providing the air generator with a system for interpulse period control. The complex for acoustic elastic wave generation operates as follows. 
The parameters defining the pulse duration and interpulse period at the pneumatic generator 3 (PG) output are uploaded to the microcontroller 4 via Bluetooth connection 6 with the help of a PC 9 and a serial input interface. The microcontroller 4 forms control pulses. While a pulse is active, the electromagnetic valve 2 is opened by the executive unit 5 and air from the bottle enters the pneumatic generator 3. The pulse duration is chosen so that the pressure in the operating PG reaches its critical value by the end of the pulse. When the critical pressure level is reached, part of the air is sharply expelled from the PG interior into the surrounding medium. The air burst lasts for a short time of about 100-200 ms in an attenuating oscillation mode with a frequency from 30 to 70 Hz. When the control pulse ends, the electromagnetic valve is no longer actuated; thus, air delivery to the PG stops and acoustic pulse generation does not resume. When the next control pulse arrives, the electromagnetic valve opens and the process of acoustic pulse formation is repeated. By adjusting the duration of the pause between control pulses, we can regulate the repetition rate of the radiated acoustic signal. Measuring and computing complex A measuring and computing complex (MCC) was created for an experimental check of the pulsed acoustic generator performance. Its functional scheme is illustrated in Fig. 3. The MCC operates as follows. The parameters defining the pulse duration and the interpulse period are uploaded to the microcontroller 2 via a Bluetooth connection with the help of a PC 1 and a serial input-output interface. The microcontroller 2 sets the acoustic pulse repetition rate at the PG 4 output through the electromagnetic valve control device 3. The PG is lowered to an appropriate depth through a hole made in the ice. A hydrophone 5 is lowered through another hole at a defined distance from the PG. A hydroacoustic pulse is received by the hydrophone 5, amplified by the hydrophone amplifier 6, converted into a digital code by device 7 and sent to the PC 1 for processing and data display. Results Investigations were carried out in winter under the following conditions: air temperature minus 9 °C; wind velocity not more than 5 m/s; ice thickness 0.5 m; water temperature 1 °C; radiator depth 1 m; hydrophone depth 1 m; distance between the radiator and the hydrophone 30 m; depth at the experimental area 8 m; bottom formed by sand, stones and aquatic plants. The pneumatic generator illustrated in Fig. 1 was used in the experiment. Pressure in the compressed-air bottle was 100 Pa. Pneumatic generator operation pressure was 10 Pa. Signals were received by an omnidirectional hydrophone. The hydrophone is a spherical piezoceramic unit, Ø 50 mm. Its sensitivity in the range 20-2000 Hz is 180 µV/Pa ± 20%. The hydrophone capacitance with a 6 m cable is 34 nF ± 20%. The hydrophone physical configuration is shown in Fig. 4. As a hydrophone amplifier we used a voltage amplifier with a gain factor Gf = 200, an input resistance R_in = 10 mΩ and a pass band Δf = 2-4000 Hz at the -3 dB level. A multifunctional unit myDAQ (National Instruments) connected to a laptop was used as a recording unit for the signals received by the hydrophone. The signal sampling frequency was 10 kHz, and the ADC resolution was 16 bits.
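The control cycle just described (a fill pulse that opens the electromagnetic valve until the PG reaches its critical pressure, followed by an adjustable pause that sets the repetition rate) can be summarized with a small timing sketch. This is only an illustration of the logic: the pulse and pause durations and the valve interface are hypothetical placeholders, not values or code from the study.

import time

# Illustrative timing loop for the pneumatic generator (PG) control logic
# described above. PULSE_DURATION_S is the time the electromagnetic valve
# stays open so that the PG can reach its critical pressure; PAUSE_S is the
# pause between control pulses, which sets the repetition rate of the
# radiated acoustic pulses (e.g. a 5 s pause gives a 0.2 Hz repetition rate).
PULSE_DURATION_S = 1.0   # hypothetical value
PAUSE_S = 5.0            # hypothetical value (0.2 Hz repetition rate)

def fire_once(valve):
    """Run one generation cycle: fill the PG, let it discharge, then wait."""
    valve.open()                  # air flows from the bottle into the PG
    time.sleep(PULSE_DURATION_S)  # pressure rises to the critical level
    valve.close()                 # air delivery stops; the PG emits its burst
    time.sleep(PAUSE_S)           # the pause defines the repetition rate

def run(valve, n_pulses=10):
    for _ in range(n_pulses):
        fire_once(valve)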
In order to shield them from the effect of air temperature changes, the hydrophone amplifier, the multifunctional unit myDAQ and the laptop were placed inside a building, where the temperature was within 15 ± 2 °C (Fig. 5). Specialized software in the LabVIEW environment was used for signal recording and further analysis. Clipping of signal components at the level of 60 Pa is clearly visible; it is caused by exceeding the dynamic range of the hydrophone amplifier. Fig. 6 shows a signal received from the pneumatic generator at a distance of 30 m. The signal is a pulse sequence of relaxation oscillations with shock excitation and frequency filling. In this case, the pulse group repetition frequency is T = 0.2 Hz. The pneumatic generator control scheme allows us to obtain shock acoustic waves with repetition frequencies from 2-3 Hz down to single pulses. Fig. 7 presents a single signal (a fragment of the signal in Fig. 6). Attenuation of a pulse group to the level of 0.1 of the maximum takes about 100 ms. Total attenuation of a pulse group lasts for 300 ms. The period of the attenuating oscillations in a pulse group is from 14 to 23 ms. The average frequency of the attenuating oscillations in a pulse is f = 50 Hz. Discussion It was shown in the paper [1] that during earthquakes, sound and seismic signals of deformation nature have a pulse repetition frequency of T = 0.1-0.5 Hz in the background. The spectrum of such signals lies within f = 0-22 kHz. The highest amplitude of the acoustic signal energy spectrum is observed at frequencies of about f = 10-200 Hz. Comparison of the pneumatic generator sound signal parameters (T = 0-3 Hz, f = 50 Hz) with real earthquake sound signals (T = 0.1-0.5 Hz, f = 10-200 Hz) allows us to conclude that the suggested pneumatic generator can be applied for modeling sound wave propagation processes in water media covered with ice. Conclusions Experimental investigations of the pneumatic generator showed that it can be applied for physical modeling of the propagation of relaxation-oscillation acoustic pulses with shock excitation and frequency filling in water media covered with ice. The MCC based on the pneumatic generator will allow us to automate experiments on the investigation of sound signal propagation from earthquakes in polar ice conditions.
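A minimal sketch of how the pulse parameters reported above (a dominant frequency near 50 Hz and decay to 0.1 of the maximum within roughly 100 ms) could be estimated from a digitized hydrophone record is given below. The 10 kHz sampling rate matches the acquisition settings described earlier, but the function and the synthetic test pulse are illustrative assumptions, not part of the LabVIEW software used in the study.

import numpy as np

FS = 10_000  # sampling frequency of the recorded signal, Hz

def pulse_parameters(signal: np.ndarray):
    """Estimate the dominant frequency and the 0.1-of-maximum decay time."""
    sig = signal - np.mean(signal)                     # remove DC offset
    spectrum = np.abs(np.fft.rfft(sig))                # amplitude spectrum
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / FS)
    f_dominant = freqs[np.argmax(spectrum[1:]) + 1]    # skip the zero-frequency bin
    envelope = np.abs(sig)
    peak_index = np.argmax(envelope)
    above = np.nonzero(envelope >= 0.1 * envelope.max())[0]
    decay_time_s = (above[-1] - peak_index) / FS       # time to fall to 0.1 of max
    return f_dominant, decay_time_s

# Synthetic damped 50 Hz pulse, roughly resembling the measured signals.
t = np.arange(0.0, 0.3, 1.0 / FS)
test_pulse = np.exp(-t / 0.045) * np.sin(2 * np.pi * 50.0 * t)
print(pulse_parameters(test_pulse))  # approximately 50 Hz and 0.1 s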
2019-11-07T14:16:14.248Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "4f8cf6524e1a41a188626bc0dd02fb3340fd83e4", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2019/53/e3sconf_strpep2019_02006.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "c4412ac1613f5e994d7885d56cdf17af531f8d0a", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
222112922
pes2o/s2orc
v3-fos-license
Thrombus Histology of Basilar Artery Occlusions Background For patients with acute vessel occlusions of the anterior circulation histopathology of retrieved cerebral thrombi has been reported to be associated to stroke etiology. Due to the relatively small incidence of posterior circulation stroke, exclusive histopathologic analyses are missing for this subgroup. The aim of the study was to investigate thrombus histology for patients with basilar artery occlusions and uncover differences to anterior circulation clots with respect to underlying etiology. Methods A total of 59 basilar thrombi were collected during intracranial mechanical recanalization and quantitatively analyzed in terms of their relative fractions of the main constituents, e.g. fibrin/platelets (F/P), red (RBC) and white blood cells (WBC). Data were compared to histopathological analyses of 122 thrombi of the anterior circulation with respect to underlying pathogenesis. Results The composition of basilar thrombi differed significantly to thrombi of the anterior circulation with an overall higher RBC amount (median fraction in % (interquartile range):0.48 (0.37–0.69) vs. 0.37 (0.28–0.50), p < 0.001) and lower F/P count (0.45 (0.21–0.58) vs. 0.57 (0.44–0.66), p < 0.001). Basilar thrombi composition did not differ between the different etiological stroke subgroups. Conclusion The results depict a differing thrombus composition of basilar thrombi in comparison to anterior circulation clots with an overall higher amount of RBC. This may reflect different pathophysiologic processes between anterior and posterior circulation thrombogenesis, e.g. a larger proportion of appositional thrombus growth in the posterior circulation. Introduction Basilar artery occlusions (BAO) account for about 1% of all strokes. Importantly, they are associated with high mortality and morbidity rates without treatment [1][2][3][4]. As these rates can be reduced dramatically by mechanical thrombectomy (MT), MT is now standard care in most stroke centers [5][6][7][8] despite a lack of evidence by large randomized trials. Outcome rates seem to be comparable to those of large vessel occlusions of the anterior circulation [4,9]. MT allows collection of thrombus material that can subsequently be used for histopathologic analysis. Due to their prevalent occurrence this has been applied in recent years predominantly for thrombi of the anterior circulation. With overall sample sizes of 20 to almost 200 thrombi, several single center studies included only up to 15 thrombi of the posterior circulation [10][11][12][13][14][15]. This small sample size may explain why no significant differences in thrombus composition were found between anterior and posterior circulation so far [10]. From a pathophysiological point of view it seems mandatory to study thrombi from patients with basilar artery occlusion (basilar thrombi) and those of the anterior circulation separately, as underlying pathogenesis with higher numbers of in situ thrombosis as well as flow conditions are different [16]. It seems plausible that these different conditions also influence thrombus evolution. Thrombus evolution directly affects thrombus composition, which in turn is influenced by local anatomical and flow conditions. Known associations between thrombus histology and underlying pathology or stroke etiology as well as angiographic and clinical outcome [10,17] are probably not easy to apply to thrombi of the posterior circulation. 
Therefore, a dedicated histopathological analysis of vertebrobasilar thrombi is warranted. Aim of the present study was to analyze a collective of basilar thrombi and compare their histopathologic composition to that of anterior circulation thrombi. Differences were further examined with respect to underlying etiology of stroke. Material and Methods As primary end point, basilar thrombi were collected and analyzed in terms of their histopathologic composition. Existing data of a large collective of anterior circulation thrombi were used to compare their composition to basilar thrombi under consideration of the underlying stroke etiology. Clinical and angiographic data of patients with BAO were analyzed based on a prospectively collected database, and data were put into relation to thrombus composition. The local ethics committee gave the project a positive vote under number 5518/12. If possible, informed consent of the patients was obtained. If patients were unable to decide concerning the informed consent, a waiver of consent was granted by the ethics committee. Study Population At our single comprehensive stroke center, we screened for patients with acute BAO, who were consecutively treated with second generation thrombectomy devices between 2008 and 2017 (n = 134). Institutional eligibility criteria for mechanical thrombectomy in BAO as well as technical details of recanalization procedure can be found in [18]. In parts, clinical and neuroradiological parameters of this population were already described in [18]. Of this collective, 59 thrombi (44%) could be gathered and were available for further histological analysis. The prospectively collected clinical and imaging data were retrospectively analyzed. Basic demographic, clinical, and interventional data of patients were gathered. The National Institutes of Health Stroke Scale (NIHSS) score was assessed by NIHSS-certified neurologists at time of admission and at time of discharge. The modified Rankin Scale (mRS) was used to assess disability at discharge. The modified thrombolysis in cerebral infarction (mTICI) score [19] was determined by two experienced neurointerventionalists in consensus. Stroke pathogeneses were determined according to the international TOAST (Trial of ORG 10172 in Acute Stroke Treatment) classification [20] on the basis of diagnostic and clinical information available for each patient, including cerebral computed tomography (CT), CT angiography and magnetic resonance imaging, transcranial and extracranial duplex sonography, coagulation tests, long-term electrocardiography recording, and transthoracic or transesophageal echocardiography [21]. Concerning anterior circulation stroke, histological, etiological, clinical and angiographic data were taken from an existing database of 122 large vessel occlusions of the anterior circulation. In this study at the same single comprehensive stroke center in total 137 thrombi were analyzed, including 122 of the anterior circulation [10]. Histological Analysis of Thrombus Material All thrombi were processed as previously described [21]: thrombus material was immediately fixed in phosphatebuffered 10% formalin or 3.8% formaldehyde, transferred to 70% ethanol, and then embedded in paraffin. The formalin-fixed and paraffin-embedded thrombus material was cut into 2-μm slices using a Microm HM 335 E microtome (Microm International GmbH, Walldorf, Germany), followed by hematoxylin-eosin staining of slices. 
The slides were digitized at high resolution (0.252 µm per pixel, apparent magnification equivalent to a 40× objective) with a Leica AT2 scanning system (Leica, Wetzlar, Germany) and saved as TIFF files with Lempel-Ziv-Welch (LZW) compression. Histological analysis of thrombi was performed blinded to clinical and interventional data. The relative quantitative fractions of the different clot components, fibrin/platelets (F/P), red blood cells (RBCs), and white blood cells (WBCs), were evaluated on the scanned slides of the complete retrieved thrombus material using custom-made quantification software (CAMPThrombus 1.0, not commercially available), as reported before [22] (see Fig. 1). In the presence of multiple fragments, all fragments were included in the relative quantitative fraction analysis to ensure that the entire clot was represented. Because the number of WBCs inside the clots was low compared with the main components F/P and RBC [10], the respective amounts of these two components are approximately inversely proportional. It therefore seems reasonable to take the ratio RBC/(F/P) (named the composition ratio in the following) as an indicator of the overall clot composition. (Fig. 1 Histopathologic morphology of two hematoxylin-eosin (HE) stained cerebral thrombi. a Thrombus retrieved from the middle cerebral artery. b Thrombus from the basilar artery. The etiology of stroke in each case was cardioembolic. Comparison of clot fractions depicts a higher red blood cell count in the thrombus from the posterior circulation. Both thrombi are HE-stained, depicting red blood cells (red), white blood cell aggregations (dark blue) and fibrin/platelet areas (purple). Black bar: 150 μm in the overview image and 50 µm in the small box.) Statistical Analysis Quantitative histological data of thrombi were compared between the groups of anterior and posterior circulation as well as between etiological subgroups by means of nonparametric tests (Wilcoxon rank-sum tests). P-values less than 5% were considered statistically significant. All statistical analyses were performed using IBM SPSS Statistics (version 25, IBM Corp, Armonk, NY, USA). Patient Characteristics In total, thrombi from 59 patients with BAO were included in the study. Demographic, clinical and interventional data of patients are presented in Table 1. Histological composition of the basilar thrombi was compared to that of 122 large-vessel occlusions of the anterior circulation (anterior thrombi). A detailed description of this study cohort can be found in [10]. In patients with LAA stroke (TOAST 1), thrombus composition did not differ statistically between the anterior and posterior circulation (Table 2). For all other stroke subtypes, basilar thrombi showed significantly higher RBC amounts and composition ratios and lower F/P proportions than anterior thrombi (for statistical details see Table 2). Comparison of Thrombus Composition Between Stroke Subtypes Within Anterior and Basilar Thrombi In patients with basilar artery occlusion, thrombus compositions showed similar RBC and F/P proportions for cardioembolic (TOAST 2) and cryptogenic (TOAST 5) stroke etiologies. Accordingly, values of the composition ratio did not differ significantly between these thrombi (p = 0.64). Cardioembolic thrombi (TOAST 2) of the posterior circulation could not be differentiated from the thrombi caused by LAA (TOAST 1) in their composition ratio (p = 0.94).
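As an illustration of the quantification and the nonparametric comparison described above, the sketch below derives relative component fractions and the RBC/(F/P) composition ratio from segmented pixel counts and compares two groups with a Wilcoxon rank-sum test. It uses SciPy rather than the SPSS software employed in the study, and all counts and ratio values are made-up placeholders, not data from the analysis.

import numpy as np
from scipy.stats import ranksums

def clot_fractions(rbc_px: int, fp_px: int, wbc_px: int):
    """Relative fractions of RBC, fibrin/platelets (F/P) and WBC, plus RBC/(F/P)."""
    total = rbc_px + fp_px + wbc_px
    rbc, fp, wbc = rbc_px / total, fp_px / total, wbc_px / total
    return rbc, fp, wbc, rbc / fp   # the last value is the composition ratio

# Hypothetical composition ratios for two groups (illustrative values only).
basilar_ratios = np.array([1.1, 0.9, 1.4, 0.8, 1.6, 1.2])
anterior_ratios = np.array([0.6, 0.7, 0.5, 0.9, 0.8, 0.4])

statistic, p_value = ranksums(basilar_ratios, anterior_ratios)  # Wilcoxon rank-sum test
print(f"rank-sum statistic = {statistic:.2f}, p = {p_value:.3f}")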
In anterior thrombi there was no statistical difference between TOAST 2 and 5 etiologies (p = 0.61), but values of the composition ratio for cardioembolic thrombi (TOAST 2) compared to LAA thrombi (TOAST 1) were significantly lower (p = 0.04). Discussion In the present study, cerebral thrombi of patients with BAO were analyzed for thrombus composition. Clear differences were shown compared to thrombus composition of the anterior circulation: (A) basilar thrombi contained an overall higher fraction of RBCs. (B) Basilar thrombi did not have a specific pattern of thrombus composition for each stroke subtype (opposed to anterior thrombi). (C) All basilar thrombi had a similar thrombus composition to LAA thrombi (TOAST 1) of the anterior circulation. These findings are possibly based on different thrombus evolution processes in the posterior circulation with a higher proportion of appositional thrombus growth. For the first time, to our knowledge, a collective of clots of the posterior circulation was analyzed concerning their histopathological thrombus composition. In the present study, 59 thrombi were gathered between 2008 and 2017. Although this number appears only moderate, this sample size exceeds all previously published histological analyses of occlusions within the posterior circulation by far, which is due to the lower frequency of BAO compared to occlusions in the anterior circulation [10][11][12][13][14][15]. Previous analyses with very low numbers of BAO occlusions found no significant differences between anterior and posterior circulation thrombi [10]. In our comparison of basilar and anterior thrombi, differences in histopathologi-cal composition could be detected. Overall, basilar thrombi showed a higher RBC content. At first sight, it seems plausible that an overall higher RBC amount is due to a higher etiological proportion of LAA in the posterior compared to the anterior circulation, based on a higher number of in situ thromboses [4] or embolism from stenosis of the vertebral artery; however, the higher RBC amount was observed in all other etiological subgroups. Thus, the overall higher RBC amount in the BAO thrombi seems not to be driven by a relatively higher number of patients with LAA stroke but may have pathophysiological reasons. In patients with LAA stroke, RBC proportions are similar between basilar and anterior thrombi. It seems reasonable that thrombi caused by LAA do not differ between the anterior and posterior circulation, as pathogenesis is similar. Thrombus formation is based on either embolism (e.g. due to a stenosis with ruptured plaque) or local thrombosis. This kind of thrombus evolution is characterized by an acute formation of thrombus, containing platelet aggregations and relatively high amounts of interspersed RBCs. Most studies on anterior thrombi showed that RBC proportion is higher in thrombi caused by LAA, than in cardioembolic clots [10,11]. This is different in basilar thrombi with a higher proportion of RBC for all underlying stroke causes, including cardioembolic strokes. This might be attributed to a different thrombus evolution process, as the posterior circulation is characterized by different flow K conditions compared to the anterior circulation [16,18], probably causing a relevant amount of fresh appositional local thrombus with subsequently higher RBC amounts of the thrombi extracted from the posterior circulation. Importantly, as it is known from experimental models, embolus trajectory may depend on thrombus size and density [23,24]. 
The differing diameter of posterior circulation vessels could explain the overall difference of thrombus composition in these thrombi and should be considered in future analysis. It is assumed that thrombus composition influences the efficacy of different thrombectomy techniques, improving the results of endovascular treatment [25]. The interventionalists could adapt their technique (e.g. utilization of aspiration) also dependent on the expected composition (besides different flow conditions [16]), as RBC thrombi tend to be softer and might be easier to extract with aspiration. The higher RBC count of basilar thrombi would reinforce the primary application of this technique in the vertebrobasilar system. Furthermore, future studies on larger numbers of clots from the posterior circulation could confirm previous studies on anterior circulation occlusions that showed that thrombus composition can be assessed by imaging parameters and could also give valuable information about pathogenesis [26,27]. This would be valuable in the planning of the endovascular treatment. Analysis of cerebral thrombi may support decision making concerning secondary prophylaxis after stroke in future [28]; however, as histological differentiation between stroke etiology does not work in basilar thrombi of our study population, our findings clearly show that results of studies focusing on anterior circulation stroke may be transferred to BAO stroke only carefully. Our study has certain limitations. As thrombi could be gathered for about half of the entire BAO population only, a selection bias cannot be excluded, which affects all studies investigating retrieved thrombi. To evaluate this possible bias, we additionally performed a comparison between the group of patients with evaluable thrombi and the screened patients without analyzable clots regarding demographical, clinical and interventional variables (see supplementary material). This analysis showed no relevant differences in demographic and clinical parameters. There were no differences in the devices used for mechanical recanalization (stand-alone aspiration, stent-retriever only or mixed) and in reperfusion success (measured by mTICI), making a systematic bias unlikely. The longer recanalization time and higher total number of maneuvers within the group of patients without analyzable clots is caused by the outliers of hard clots or clots impossible to remove. A further limitation is the method of histological analysis. We studied relative quantitative fractions of the different clot components only; however, the clot structure itself was not investigated. To differentiate the parts of appositional thrombus growth with an assumed dominant RBC amount this approach would be helpful and should be applied in further studies. Finally, although TOAST classification was originally designed independently of vascular territory [29], we could not exclude the possibility that the posterior vascular territory affected diagnosis and work-up of stroke etiology in our patients. This, in turn would affect the comparison between subgroups. Conclusion Evidence for a differing thrombus composite was shown between anterior and posterior circulation with an overall higher RBC amount in basilar thrombi. This is possibly based on a different thrombus evolution process in the posterior circulation with a higher proportion of appositional thrombus growth. 
Results of studies with anterior thrombi, especially regarding the evaluation of secondary prophylaxis strategies, may be transferred to BAO stroke only carefully. Author Contribution Final approval of the version to be published: ALL. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved: ALL. Funding This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors. Open Access funding enabled and organized by Projekt DEAL. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2020-10-03T13:56:20.376Z
2020-10-02T00:00:00.000
{ "year": 2020, "sha1": "68ab073bf81515cdaa530b1abfe1c5a7ef1931ac", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00062-020-00964-5.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "68ab073bf81515cdaa530b1abfe1c5a7ef1931ac", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
248097873
pes2o/s2orc
v3-fos-license
Dysbiosis in Inflammatory Bowel Disease: Pathogenic Role and Potential Therapeutic Targets Microbe–host communication is essential to maintain vital functions of a healthy host, and its disruption has been associated with several diseases, including Crohn’s disease and ulcerative colitis, the two major forms of inflammatory bowel disease (IBD). Although individual members of the intestinal microbiota have been associated with experimental IBD, identifying microorganisms that affect disease susceptibility and phenotypes in humans remains a considerable challenge. Currently, the lack of a definition between what is healthy and what is a dysbiotic gut microbiome limits research. Nevertheless, although clear proof-of-concept of causality is still lacking, there is an increasingly evident need to understand the microbial basis of IBD at the microbial strain, genomic, epigenomic, and functional levels and in specific clinical contexts. Recent information on the role of diet and novel environmental risk factors affecting the gut microbiome has direct implications for the immune response that impacts the development of IBD. The complexity of IBD pathogenesis, involving multiple distinct elements, suggests the need for an integrative approach, likely utilizing computational modeling of molecular datasets to identify more specific therapeutic targets. Introduction In the last two decades, the gut microbiota has become a focus of major interest in the study of inflammatory bowel disease (IBD) pathogenesis. Technological advancements have allowed the characterization of several gut microbiome abnormalities in patients with Crohn's disease (CD) and ulcerative colitis (UC), the two major forms of IBD. Abnormal immune reactivity against commensal microorganisms [1,2] and defects in innate and adaptive immunity have long been described in studies on IBD [3][4][5]. Nonetheless, the most consistent association between IBD and bacteria has been derived from animal models. For example, germ-free mice do not develop colitis, and inflammatory changes can be induced after colonization with commensal bacteria [6]. In a study using interleukin (IL)-10-deficient mice, which are genetically susceptible to colitis development, antibiotic administration early in life increased the risk of colitis [7]. Regarding genetic predisposition, several studies have identified altered regulators of the complex network underlying IBD in patients, many of which control the immune response to microbes. Nucleotide-binding oligomerization domain 2 (NOD2) polymorphisms, which encode an intracellular pattern recognition receptor and regulate the production of defensins by Paneth cells, have been associated with the risk of CD [8]. Several other gene variants related to bacterial clearance or protection against epithelial invasion have also been associated with IBD [9]. Currently, healthy immune homeostasis is linked to a state of tolerance towards resident microbiota, and disequilibrium of normal homeostatic Figure 1. The role of gut dysbiosis in the pathogenesis of inflammatory bowel disease. Gut microbiota reflect an interaction of host genetics with dynamic exposure to innumerable stimuli from the exposome. Crosstalk amongst these factors results in long-standing consequences to the gut microbiota and epigenetic modifications in a multidirectional fashion, potentially affecting susceptibility to diseases. 
The prevalence of either regulatory (eubiosis) or inflammatory (dysbiosis) species within the gut microbial community determines the respective predominant immune response. Treg, regulatory T-cell; Breg, regulatory B-cell; ILC, innate lymphoid cell; IgA, immunoglobulin A; MØ, macrophage; TSLP, thymic stromal lymphopoietin. Intestinal Microbial Dysbiosis The gut microbiota is an important physical, chemical, and immunological interface between the environment and host; thus, any dysregulation or breakdown of this barrier can contribute to disease states. For example, altered physical epithelial barrier function, a thinner mucus layer, and altered responses to endoplasmic reticulum stress (via mutations in MUC19, ITLN1, FUT2, and XBP1) have all been identified as risk factors for IBD [13][14][15]. Currently, the pathogenesis of human IBD is believed to involve inappropriate activation of the immune system when genetically susceptible individuals are exposed to gut antigens, such as microbiome components [16]. Although alterations in the gut microbiome have been proposed to be critical in IBD pathogenesis, it is not yet clear how this process occurs and whether dysbiosis is a central cause or a common consequence of the disease [17]. In healthy individuals, 99% of gut bacterial phyla are Firmicutes, Bacteroidetes, Proteobacteria, and Actinobacteria. Firmicutes and Bacteroidetes account for approximately 90% of the total microbiome composition. These phyla are critically important in maintaining gut homeostasis and produce short-chain fatty acids (SCFAs), especially butyrate and propionate, from the fermentation of dietary components such as indigestible fibers. SCFAs are important energy sources for colonic mucosa cells but have also been shown to play key roles in regulating immune homeostasis [18]. Dysbiosis is defined as an alteration in gut microbiota composition and diversity and a shift in the balance between commensal and potentially pathogenic microorganisms [19]. Several pieces of evidence support the role of the microbiome and dysbiosis in IBD development. For example, experimental mice subjected to germ-free conditions develop attenuated colitis [20]. In studies using mouse models, the transfer of bacterial strains associated with IBD induces intestinal inflammation in genetically susceptible mice [21].
Similarly, fecal transplantation from human IBD donors to germ-free mice stimulates proinflammatory responses, with increased Th17 cell infiltration and proinflammatory mediators compared with transplants from healthy human donors [22]. Britton et al. colonized groups of adult wild-type or Rag1-deficient mice in germ-free conditions with human microbiota and assessed the mucosal immune response. Microbiota from healthy human donors induced, on average, higher frequencies of RORγt+ Foxp3+ Treg cells in the intestinal lamina propria and prevented disease exacerbation. In contrast, microbiota from IBD donors resulted in enhanced RORγt+ Th17 effector cell frequencies and enhanced disease severity in colitis-susceptible mice [23]. Determining the groups of microbes that are related to the development of intestinal inflammation has been a focus of extensive research. Patients with IBD tend to present several changes, not only in composition, but also in the diversity of their microbiome populations when compared to healthy individuals (Table 1). Evidence shows that alterations in microbiome components can also be involved in different IBD phenotypes [24]. The IBD microbiota has been characterized by an increase in the abundance of Bacteroidetes and Proteobacteria and a decrease in Firmicutes compared to control individuals. Specifically, levels of Faecalibacterium prausnitzii, a highly metabolically active commensal bacterium, are reduced in individuals with IBD [25]. Patients with IBD have reduced microbiome diversity (mostly a decrease in the relative abundance of Firmicutes) and an increase in the presence of Proteobacteria, such as Enterobacteriaceae and Bilophila, and certain members of Bacteroidetes [26]. Dysbiosis can potentially lead to a reduction in key functions necessary for maintaining intestinal barrier integrity and gut homeostasis. Therefore, alterations in the immune response and proinflammatory activity could be due to a dysbiotic microenvironment.
Table 1. Bacteria with altered abundance in IBD, their possible mechanisms, and the main reported findings.
Faecalibacterium prausnitzii. Possible mechanism: a highly metabolically active commensal bacterium involved in the production of butyrate; this metabolite plays a major role in gut physiology and has beneficial effects, including protection against pathogen invasion, modulation of the immune system, and promotion of anti-inflammatory activity. Findings in IBD: presence of F. prausnitzii may serve as a biomarker of intestinal health in adults; low levels could be predictive for CD; its deficiency was shown in colonic CD; low F. prausnitzii levels in patients with IBD undergoing surgery are associated with a higher risk of post-operative recurrence [25,27].
Eubacterium spp. Possible mechanism: involved in the production of SCFAs, especially butyrate; important in inflammation modulation and the promotion of epithelial barrier integrity. Findings in IBD: found deficient in samples from patients with CD and UC [24,28].
Ruminococcus albus. Possible mechanism: possibly involved in SCFA metabolism and its protective and anti-inflammatory roles. Findings in IBD: found decreased in samples from patients with CD and UC [24,29].
Ruminococcus gnavus. Possible mechanism: involved in bile and amino acid biosynthesis pathways, including amino acid, energy, carbohydrate, and nucleotide metabolism. Findings in IBD: found increased in samples from patients with CD compared to controls [24,30].
Mycobacterium avium. Possible mechanism: associated with increased production of proinflammatory cytokines; mutations in NOD2/CARD15 receptors may allow intracellular survival of the bacteria. Findings in IBD: the abundance of this bacterium, especially the subspecies paratuberculosis, is higher in patients with IBD than in controls [39,40].
Escherichia coli. Possible mechanism: epithelium-associated invasive E. coli has frequently been isolated from ileal and colonic mucosa of patients with CD and can infect and damage intestinal epithelial cell monolayers and synthesize α-hemolysin. Findings in IBD: increased numbers of E. coli strains with virulence properties have been isolated from samples of patients with IBD; several studies indicate a link between the prevalence of E. coli and IBD relapses [41,42].
Haemophilus parainfluenzae. Possible mechanism: involved in glycerol-phospholipid and lipopolysaccharide metabolism, thereby promoting inflammation. Findings in IBD: found increased in stool samples from patients with treatment-naïve new-onset CD [30].
Campylobacter spp. Possible mechanism: invasive strains in patients with IBD can survive in intracellular and anaerobic conditions. Findings in IBD: increased in patients with IBD compared to controls [43].
Clooney et al. showed that among the microbial species found to be significantly increased in CD compared to controls, there was an increased presence of Ruminococcus gnavus and Fusobacterium nucleatum. Conversely, the presence of Ruminococcus albus, Eubacterium rectale, and Faecalibacterium prausnitzii was decreased in CD. Eubacterium and Roseburia were among the most important species in classifying either CD or UC compared with controls [24,47]. Furthermore, some species are particularly associated with certain subgroups of patients. For instance, Bacteroides vulgatus, Akkermansia muciniphila, and Escherichia/Shigella were increased in patients with a history of prior surgical resection [44].
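Group comparisons like those summarized above rest on relative taxon abundances and diversity measures computed from sequencing counts. A minimal sketch of two such quantities (relative abundance and Shannon alpha diversity) is shown below; the taxon names and counts are purely illustrative and are not data from the cited studies.

import numpy as np

def relative_abundance(counts: dict) -> dict:
    """Convert raw taxon counts for one sample into relative abundances."""
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

def shannon_diversity(counts: dict) -> float:
    """Shannon alpha diversity H = -sum(p_i * ln p_i) over taxa with p_i > 0."""
    p = np.array(list(counts.values()), dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

# Illustrative phylum-level counts for a single stool sample (made-up numbers).
sample_counts = {"Firmicutes": 5200, "Bacteroidetes": 3100,
                 "Proteobacteria": 1200, "Actinobacteria": 500}
print(relative_abundance(sample_counts))
print(f"Shannon diversity: {shannon_diversity(sample_counts):.2f}")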
In the largest pediatric Crohn's cohort to date, including >400 patients and 200 controls, microbiomes from new-onset CD cases in multiple gastrointestinal locations were analyzed by Gevers et al. An axis defined by an increased abundance of bacteria such as Enterobacteriaceae, Pasteurellacaea, and Fusobacteriaceae and decreased abundance of Erysipelotrichales, Bacteroidales, and Clostridiales was strongly correlated with disease status. Moreover, microbiome comparison between patients with CD with and without antibiotic exposure indicated that antibiotic use amplified the microbial dysbiosis associated with CD in a new-onset pediatric cohort. The microbial dysbiosis index, which is characterized by the differential relative abundance of specific taxa, was associated with disease severity. Additionally, the rectal mucosa-associated microbiome, but not the fecal microbiome, has been shown to be a robust disease predictor [30]. Similarly, another study of a pediatric IBD cohort showed significant correlation between microbiota composition and disease severity, with resolution of dysbiosis in patients responding to anti-tumor necrosis factor (TNF) therapy [48]. Although changes in the gut microbiota profile in new-onset and treatment-naive pediatric patients with IBD were further corroborated in a recent systematic review, no clear conclusion can be drawn at the moment due to inconsistent results and heterogeneous methodologies [49]. In one of the largest longitudinal analyses of the IBD microbiome, Halfvarson et al. dissected the long-term dynamic behavior of the gut microbiome by comparing patients with IBD with healthy controls. Phylogenetic analysis of fecal samples showed that gut microbiomes of different IBD subtypes displayed different species distributions relative to controls. They identified potential microbial indicators of IBD subtypes, including genera such as Lachnospira, Clostridium, Oscillospira, and many unidentified Ruminococcaceae. Furthermore, they found that the microbiomes of patients with IBD fluctuate more than those of healthy individuals, based on deviation from a newly defined healthy plane. In addition, patients with ileal CD deviated the most from the healthy plane, especially those who underwent surgical resection. Interestingly, the microbiomes of some patients with IBD periodically visited the healthy plane and then deviated away from it [17]. In a study of 132 participants with IBD and controls, Lloyd-Price et al. assessed host and microbial data from colon biopsies, blood, and stool, for one year each. Principal coordinate analysis based on species-level Bray-Curtis dissimilarity showed that most variation was driven by a trade-off between the phyla Bacteroidetes and Firmicutes. Samples from individuals with IBD (particularly CD) had lower alpha diversity. Moreover, in patients with CD, taxonomic perturbations during dysbiosis were observed, such as a depletion of obligate anaerobes, including F. prausnitzii and Roseburia hominis, and an enrichment of facultative anaerobes such as Escherichia coli. In the metabolome, SCFAs were generally reduced during IBD dysbiosis. Overall, metabolite pools were less diverse in individuals with IBD, paralleling the observations of microbial diversity [50]. The investigation of whether the response to treatment with biologic agents could be associated with alterations in the composition of the intestinal microbiota of patients with CD was performed in a prospective study. 
The therapeutic intervention based on adalimumab was associated with the restoration of a eubiotic environment after six months of treatment. Particularly, in the cohort of patients with CD, those receiving adalimumab displayed a reduction in Proteobacteria and an increase in Lachnospiraceae. These results characteristically predominated among those patients who achieved therapeutic success, suggesting that dysbiosis could be directly involved with the response to treatment [51]. The role of pathogenic components in IBD has also been studied. For example, adherent-invasive E. coli (AIEC) is more prevalent in the mucosa of patients with IBD than in healthy individuals. Overall, the prevalence of AIEC in the mucosa of adult patients with CD ranges from 21-63%, and it is more associated with ileal CD than with colonic disease [41]. Mycobacterium avium, especially the subspecies paratuberculosis, has also been implicated in IBD pathogenesis. The abundance of these bacteria is higher in patients with IBD than in controls, and they are associated with increased production of proinflammatory cytokines [31]. Similarly, Listeria monocytogenes, which induces a Th1-type immune response [52], has also been shown to be increased in patients with IBD [33]. Archaea members have also been identified as part of the human microbiome, especially methane-producing species, such as Methanobrevibacter smithii, Methanosphaera stadtmanae, and Methanomassiliicoccus luminyensis in human stools, and Methanobrevibacter oralis in the oral mucosa [53,54]. Dysbiosis appears to affect the relative abundance of archaea members and, particularly in patients with IBD, M. smithii had a reduced abundance, whereas the immunogenic M. stadtmanae was remarkably increased compared to healthy controls [55,56]. Viruses and fungi are widely present in the gut and may play important roles in homeostasis. Changes in the enteric group of viruses in the gastrointestinal tract can have consequences on the bacterial microbiome and its diversity, as viruses are drivers of bacterial resistance. For instance, viruses can be responsible for the horizontal transfer of genetic material among bacterial communities, which implies a change in the balance of different ecosystems. An increase in the abundance of Caudovirales bacteriophages has been observed in patients with CD [57,58]. In addition, results from another study demonstrated an increased abundance of phages infecting Clostridiales, Alteromonadales, and Clostridium acetobutylicum, as well as viruses from the Retroviridae family, in patients with IBD [59,60]. Similarly, fungal components regulate and trigger immune responses. However, the gut mycobiome is less stable than the bacterial microbiome; therefore, a mycobiome signature has not yet been described [61]. Candida is the most prominent component of the fungal microbiota in humans. Its cell wall constituents, such as beta-glucans, chitin, and mannoses, can activate components of the innate immune system, such as Toll-like receptors (TLRs) 2 and 4, dectin-1, CD5, CD36, and SCARF1, and complement system components. The activation of these molecules leads to immune signaling and proinflammatory responses. Some studies have proposed a correlation between the gut mycobiome and gut bacterial components. Mason et al. showed that the colonization of C. albicans strain CHN1 in the stomach and cecum of C57BL6 mice prompted an overgrowth of Lactobacillus spp. and Enterococcus spp. during antibiotic treatment [62]. However, Bernardes et al. 
demonstrated that bacteria may influence fungal colonization. They showed that the colonization of bacteria together with fungi increased the relative abundance of C. parapsilosis and Issatchenkia orientalis, and a lack of co-colonization with bacteria or elimination of bacteria by antibiotics led to an overgrowth of C. albicans [63]. Influence of the Exposome The first organisms on earth were microbes, and they have evolved and adapted to live in extreme environments all over the planet. All biological entities appearing later on Earth, including mammals, have evolved in a microbially dominant world. Therefore, humans have coevolved with microbes immersed within two complex ecological communities: the external and internal microbial environments. In this dualistic world of microbes, the exposome and gut microbiome impact each other as well as all other "-omes" in a reciprocal manner [64,65]. Geosocial Factors Environmental factors have been associated with complex health conditions, including chronic immune-mediated inflammatory diseases (IMIDs), which have been increasing in incidence during the last century. Features are shared among the different IMIDs, including IBD, such as an inflammatory basis, multifactorial nature, and yet-unknown causes. In addition, epidemiological data revealing the co-occurrence of IMIDs and geographic expansion reinforce a common pathophysiological background that has evolved over the past several decades to reach a worldwide distribution [66]. The emergence of IMIDs and IBD has been linked to societal transformations, most commonly socioeconomic development or industrialization [67,68]. Such changes have been almost invariably accompanied by increasing urbanization [69], which, in turn, has been associated with distinct gut microbiota compositions different to those found in rural areas that are supposedly protective against IBD development [70,71]. Socioeconomic development and social behavior are crucial elements fueling the emergence of IBD [72]. Such changes have been associated with improvements in sanitation, quality of water supply in distribution systems, and a resulting decrease in infectious diseases, which constitute the basis of the hygiene hypothesis [73]. Nonetheless, these changes bring about several other simultaneous environmental modifications that need to be considered. It is important to highlight, for example, changes in homes, family structures, workplaces, dietary habits, the widespread use and production of chemicals, and the use of medications, including antibiotics. Growing urbanization has led to a continuous increase in population density in cities that have become progressively more polluted, competitive, and stressful, causing dramatic changes in peoples' lifestyles. Moreover, the attraction between manpower and industry, or other economic activities, resulted in more human agglomeration, whether that be in households or factories; it also gathered people from different backgrounds, be they genetic, geographic, or cultural. These observations are in accordance with previous studies showing that individuals who migrate from low to high prevalence IBD areas, that is, from less developed to more developed areas, are more susceptible to developing IBD, predominantly affecting the first-and second-generation offspring of these immigrants [74][75][76][77]. In addition, among the potentially relevant stimuli from the exposome, cohabitation has been shown to strongly affect immune responses [78]. 
Transmission of microbial strains, predominantly detected among first-degree relatives sharing a household, has recently been demonstrated [79], helping to explain the link between the exposome and the immune response. This also reinforces a microbial basis for IMIDs. While several human diseases have been associated with abnormalities in hostassociated microbial communities, and the human body is seen as an ecosystem [80], defining a healthy microbiome continues to represent a complex challenge due to the formidable variability shown in population-based studies [81,82]. This is also true for IBD, as a large study confirmed a reduction in microbial diversity in patients with CD and UC but did not explain the increased variability compared to controls [24]. Whether such a variance is stochastic or due to environmental factors has not yet been established [83]. Nevertheless, the microbiome reflects a complex combination of endogenous and exogenous elements, particularly environmental and lifestyle factors. Previous studies have shown that the gut microbiome of Western populations is characteristically less diverse [84][85][86]. As the intestine represents the largest surface of contact with the external environment, IBD could be facilitated by a combination of both a cleaner external milieu (as in the hygiene hypothesis) and an impoverished biome influencing the internal milieu to become less diverse, resulting in inappropriate immune system education and responses. The hypothesis that loss of biodiversity is an important environmental factor has been supported by data showing that reduced contact of people with the natural environment may negatively impact the commensal microbiota and its immunomodulatory properties [87,88]. In addition to its worldwide distribution and progressive increase as a result of diverse human activities, loss of biodiversity has been regarded as a critical factor in the rise of allergic diseases [89], among which asthma, in particular, has been intimately associated with IBD [90]. Loss of biodiversity has recently been proposed as a novel factor in the pathogenesis and prevention of IBD, based on the non-uniform disease distribution in large developing countries, showing pronounced regional dissimilarities and disease hotspots associated with specific geosocial and ecosystem factors [91]. Antibiotics Exposure to antibiotics has been associated with increased risk for developing IBD, especially CD [92,93]. Evidence from different studies has shown that patients diagnosed with IBD during childhood were more likely to have been exposed to antibiotics early in life [94]. In a pediatric prospective study, the strongest association between antibiotic use and future development of IBD was in the first 3 months following the use of antibiotics and among children who had more courses of antibiotics [95]. Contrarily, a recent study found that exposure to antibiotics during pregnancy, but not in infancy, is associated with an increased risk of early onset IBD [96]. Although current evidence does not confirm a consistent causal link with IBD, early exposure to antibiotics has been suggested to affect the development of tolerance to the gut microbiota, consequently raising inappropriate immune reactivity that underlies chronic intestinal inflammation [97]. 
Additionally, recent evidence from a study investigating the microbiome of humans, domestic animals, and their environment, in relation to antibiotic use, suggested the exchange of antimicrobial-resistant strains between reservoirs [98]. Together, these data appear to support the idea that the risk of developing IBD associated with intestinal dysbiosis may occur at both the individual and community levels. This also includes crosstalk with nonhuman components, reinforcing the existence of dynamic interactions between the environment and host regarding the exchange and sharing of microorganisms. Dietary Factors Several studies have investigated diet, arguably the most ubiquitous environmental factor, and its potential to shape the gut microbiota. For instance, evidence has shown that a high-calorie diet, consisting of fat-and carbohydrate-based foods, determines a preferential expansion of the genera Bacteroides and Prevotella and the Bacteroidetes phylum in adults, with shifts occurring in a relatively rapid fashion [99]. In another study, strictly animalbased food increased the relative abundance of bile-tolerant microorganisms, reducing the presence of microorganisms capable of metabolizing dietary plant polysaccharides. These results showed shifts between carbohydrate and protein fermentation, confirming that the microbiota can rapidly adjust to changes in dietary patterns. Moreover, changes in microbial composition were followed by changes in the molecular output of the microbiome with dietary interventions. SCFAs, products of bacterial digestion of fibers with critical homeostatic functions in the mucosa and anti-inflammatory properties, were shown to increase with plant-based diets. This may explain why a reduction in SCFAs in a typical Western-style diet (animal-based, high-calorie, high-fat, and low-dietary fiber) has been associated with the risk of IBD [100]. In fact, a Western-style diet, rich in sugar and fat, has been the predominant profile associated with a higher risk of developing IBD. While individuals who consume higher proportions of red meat and fats have a higher risk of IBD, others who predominantly consume fibers from vegetables and fruits have a lower risk [101,102]. Regarding dietary fat content, particularly polyunsaturated fatty acids, recent data indicate that a high omega-6 to omega-3 ratio, typical of Western-style diets, is associated with proinflammatory effects [103]. Furthermore, polyunsaturated fatty acids have been shown to exert not only effects on the immune response, directly acting on immune cells, but also influence the composition of the gut microbiota, thereby affecting host-microbiome interactions at different levels [104]. The consumption of processed foods, usually low in omega-3 fatty acids and micronutrients such as zinc and vitamins D and E, another common feature of Westernized diets, has also been associated with the development of chronic inflammatory diseases [105][106][107][108]. Globally, major shifts in dietary patterns towards progressively more Westernized diets, together with socioeconomic and demographic changes, represent a global transition that may explain the widespread increase in the rates of several metabolic and IMIDs [109], potentially involving changes in the gut microbiome and its interaction with the host. Genetic Susceptibility A clear connection with genetic predisposition has long been demonstrated in IBD, more so in CD than in UC [110]. 
There are over 200 genetic loci associated with IBD susceptibility, most of which regulate host-microbe interactions and immune-related pathways [110,111]. Some of the more studied genes include those involved in IL-23 receptors and Janus-activated kinase signaling, and those in innate mucosal defense, cytokine production, lymphocyte activation, epithelial barrier integrity, and multiple proteins involved in autophagy [112]. Genome-wide association studies have highlighted higher IBD genetic risk in individuals with NOD2 receptors, autophagy-related protein 16-like 1 (ATG16L1), immunity-related GTPase family, M (IRGM), IL-23 receptor gene, protein tyrosine phosphatase, non-receptor type 2 (PTPN2), X-box binding protein 1 (XBP1), and leucine-rich repeat kinase 2 (LRRK2) variants [111,113]. Genetic risk variants are also associated with changes in microbiota composition; for example, Roseburia spp., an acetate-to-butyrate converter, was less abundant in patients with IBD with these high-risk mutations [114]. Mutations in autophagy-related genes alter anti-bacterial, fungal, and viral responses and impair the clearance of various invading pathogens such as Mycobacterium tuberculosis, Group A Streptococcus, L. monocytogenes, and E. coli [115][116][117]. An ATG16L1 single nucleotide polymorphism (SNP) confers susceptibility to CD and is a common genetic variation present in 40-50% of the population, although most individuals with this SNP do not develop IBD [118,119]. The role of autophagy variants in Salmonella clearance is not well established. Messer et al. observed that ATG16L1 deficiency promoted cell resistance to Salmonella, while Conway et al. observed autophagy induction after Salmonella infection with the participation of ATG16L1 in intestines [120,121]. These variants also affect antimicrobial peptide production by Paneth cells, cytokine production, antigen presentation, and response to endoplasmic reticulum stress [122]. Atg16L1-deficient mice exhibited elevated inflammasome activation and IL-1β production when stimulated with lipopolysaccharides (LPS) and abnormalities in ileal Paneth cells, such as the escape of antimicrobial peptides into the cytoplasm [123]. There is crosstalk between NOD2 and ATG16L1, as NOD2 activation triggers autophagy in dendritic cells with the participation of ATG16L1, and deficiency in ATG16L1 heightens cytokine production via NOD [124,125]. Patients with CD with risk variants of ATG16L1 or NOD2 present with abnormal Paneth cell morphology [126]. In mouse models, decreased expression levels of Atg5, Atg7, or Atg4B generated abnormal Paneth cell functions, and in CD-like ileitis, deficiency of Atg16L1 also altered Paneth cell morphology [127]. In addition, norovirus infection in Atg16L1-deficient animals increased their susceptibility to dextran sodium sulfate (DSS) in a TNF-dependent phenotype resembling aspects of IBD [58]. Complete knockout of Atg3, Atg5, Atg7, or Atg16L1 is lethal in mice, and impairment of either Atg7 or Atg16L1 results in severe CD-like transmural ileitis [128]. Autophagic defects also worsen goblet cell function, production of mucus membrane defenses, and absorptive functions of the microvilli [129]. NOD2 recognizes bacterial peptidoglycan (muramyl dipeptide) in the cell walls of Gram-negative and Gram-positive bacteria and triggers the production of intestinal antimicrobial peptides to protect cells and immune responses in the gut [130]. 
NOD2 activation leads to NF-κB activation and production of IL-1b, TNF-α, IL-6, IL-8, and αdefensins [130,131]. NOD2 interacts with autophagy-related proteins to help destroy intracellular pathogens, and mutations in NOD2, also known as the caspase recruitment domain family, member 15 gene (CARD15), disrupt Paneth cells' ability to recognize and eliminate invading pathogens [132,133]. In IBD, NOD2 mutations are associated with decreased release of defensins [134]. NOD2-mediated autophagy is important for the generation of major histocompatibility complex (MHC)-II-restricted CD4+ T cell responses in dendritic cells, and patients with CD with high-risk NOD2 or ATG16L1 variants exhibit impaired MHC II antigen presentation [124]. IL-23 signaling affects both the innate and adaptive immune systems in mice and is required for colitis development in several models [135][136][137]. The dominant IL23R SNP protects against IBD and generates a soluble receptor antagonist of IL-23 [138]. Variants in the autophagy-associated IRGM gene interfere with Paneth cell morphology and function, and are associated with abnormal secretory granule development, decreased antimicrobial peptide production, and higher susceptibility to colitis in a DSS-induced model [139]. Mutations in PTPN2 lead to defective autophagosome formation and bacterial elimination and promote T cell differentiation into Th1 and Th17 types [140][141][142]. Patients with IBD with PTPN2 variants demonstrate increased levels of interferon (IFN)-γ, IL-17, and IL-22 in the serum and intestinal mucosa [143]. LRRK2 is involved in the activation of dendritic cells (DCs) and production of IL-2 and TNF-α in CD [144]. Epigenetic Modifications Recent data have provided the basis for the hypothesis that epigenetic modifications, resulting from interactions between the host and exposome, determine the phenotypic expression of IBD. For instance, the relatively high discordance rate among monozygotic twins [145] and an increased risk of developing the disease among people migrating from low-to high-incidence regions of IBD [146] constitute important epidemiological information to support the pathogenic role of epigenetic changes. Consequently, epigenetic factors have been suggested to mediate critical interactions between the exposome and genome, offering new insights into the pathogenesis of several diseases, including IBD [147]. Epigenetic changes related to the gut microbiome include modifications to DNA or histones, as well as the regulation of non-coding RNAs [148]. For example, recent studies have shown that microorganisms can bind to lysine on histones and regulate host chromatin by modifying histone proteins [149]. In turn, post-translational modifications of histones induced by microorganisms lead to changes in transcriptional gene activity [150,151]. Other studies investigating microRNAs (miRNAs) have suggested their participation in the immune response to microorganisms, resulting in the regulation of inflammatory mediators [152]. For example, miR-10a has been shown to suppress CD4+ T cell production of IL-10, favoring the induction of more severe colitis in genetically predisposed Rag1 −/− mice [153]. In addition, miR-155 has been shown to promote Th17 differentiation and upregulate Th17-related cytokines [154]. Moreover, the induction of miR-155 and miR-146 family members has been implicated in the regulation of inflammatory responses triggered by microorganisms [155,156]. 
Dietary components also promote epigenetic modifications either directly or through the action of the gut microbiome, as some metabolites may modulate gene expression, chromatin remodeling, and DNA methylation. For example, polyphenols in green tea or soybeans, such as epigallocatechin-3-gallate and genistein, have been shown to inhibit DNA methyltransferase activity. Additionally, the gut microbiome generates a variety of SCFAs, such as acetate, butyrate, and propionate, which are essential for epithelial cell homeostasis but can also epigenetically regulate the immune response [157]. Bacteria from Clostridium, Eubacterium, and Butyrivibrio genera can synthesize butyrate, which inhibits histone deacetylases, from non-digestible fibers in the gut lumen [158]. In addition to being a nutrient for epithelial cells, SCFAs can also induce intracellular signaling pathways through the activation of G-protein-coupled receptors, regulating cell metabolism, inflammation, and oxidative stress [159,160]. Furthermore, the gut microbiome also contributes to the absorption and secretion of minerals, such as iodine, zinc, selenium, cobalt, and other cofactors that participate in epigenetic processes. Additionally, other key metabolites of the gut microbiota, including S-adenosylmethionine, acetyl-coenzyme A, nicotinamide adenine dinucleotide, α-ketoglutarate, and adenosine triphosphate, serve as essential cofactors for epigenetic enzymes that regulate DNA methylation and histone modifications [161,162]. Inappropriate Immune Response The epithelium and its specialized cell types act as a barrier, separating the microbiota in the lumen from the immune cells in the lamina propria. In this reciprocal relationship, the microbiota also produces the metabolites necessary for epithelial cells, such as SCFAs and bacteriocins [163]. Immune cells in the lamina propria constitute the mucosa-associated lymphoid tissue that responds to microbiota stimuli, along with the epithelium. Some innate immune cells such as DCs, macrophages, natural killer cells, and innate lymphoid cells (ILCs) sample lumen antigens and induce a tolerogenic immune response. Under homeostatic conditions, this immune surveillance does not initiate a proinflammatory response; in contrast, it induces Treg cells. These immune cells capture antigens and migrate to lymphoid tissues to activate lymphocytes and link innate and adaptive responses [163,164]. The interaction between the immune system and microbiota, especially segmented filamentous bacteria, symbiotically aids the maturation of the immune system [164,165]. The dysregulated immune response observed in IBD is thought to result from crosstalk among genetic susceptibility, environmental factors, and gut microbiota [166,167]. Dysbiosis changes the composition of the gut microbiota, resulting in the loss of commensal bacteria and growth of pathogenic microorganisms [17,168]. In IBD, dysbiosis is characterized by a decrease in alpha diversity, with a decrease in abundance of Bacteroidetes and Firmicutes and an increase in that of Gamma-proteobacteria, especially AIEC [114,169]. Moreover, dysbiosis in IBD leads to a shift towards a proinflammatory environment with activated immune cells. In this context, cells increase the expression levels of pattern recognition receptors (PRRs) and the production of proinflammatory mediators in response to pathogens [170]. 
PRRs recognize microbe-, pathogen-, and/or danger-associated molecular patterns (MAMPs, PAMPs, and DAMPs, respectively, e.g., ATP and high mobility group box 1 protein (HMGB1)) released by cells during inflammation [171]. Examples of MAMPs include LPS (a TLR-4 ligand) and flagellin (a TLR-5 ligand) from bacteria, β-1,3-glucans (a dectin-1, C-type lectin receptor (CLR) ligand) from fungi, and viral nucleic acid molecules [172]. TLRs and CLRs are distributed on the surface of immune cells, epithelial cells, and other cell types, whereas NOD-like receptors (NLRs) and RIG-I-like receptors are present in the cytoplasm [173,174]. Tissue-resident macrophages comprise a large population of macrophages in the gut and control either tolerance or defense against microorganisms. Therefore, macrophages exhibit specialized gene expression related to their localization within the intestinal mucosa [175,176]. ILCs represent a group of innate immune cells, mostly localized at mucosal sites, with important participation in immune-mediated diseases such as IBD. In addition to increasing in density in the inflamed mucosa, ILC-1-producing IFN-γ cells characteristically accumulate in CD [177]. Because of their proximity to the gut microbiome, mucosal ILCs are thought to participate in a dichotomous regulatory mechanism, in which ILCs interfere with the microbial composition of the gut, and the gut resident microbes shape the plasticity and physiological functions of ILCs [178]. Inflammasomes have recently been regarded as central and specifically attractive in IBD immunopathogenesis because of their participation in complex crosstalk between the host mucosal immune system and environment, particularly the microbiota. Inflammasomes are multiprotein platforms formed in the cytoplasm that cleave and activate caspase-1, leading to the production of inflammatory cytokines, including IL-1b and IL-18. Inflammasomes comprise intracellular sensors formed by NLR proteins NLRP1, NLRP3, NLRC4, NLRP6, and NAIP5, or by the DNA-sensing complex AIM2, and can be activated by extracellular and intracellular pathogens in the presence of DAMPs [179]. Inflammasomes participate in responses against several bacteria such as L. monocytogenes, M. tuberculosis, and Fusobacterium [180]. In patients with CD, increased NLRP3 and AIM2 activity has been reported, and an SNP in NLRP3 is associated with CD susceptibility [181,182]. In contrast, NLRP3 inflammasome activation and production of IL-1b and IL-18 appear to be protective in experimental IBD [183]. Together, through these mechanisms, innate immune cells sense PAMPs and DAMPs, induce an inflammatory response, and shape adaptive immunity, promoting lymphoid tissue expansion and T-(Th1, Th2, Th9, Th17, and Treg cells) and B-cell responses [184]. Additional direct and indirect participation of gut microbiota through the metabolism of dietary vitamins and SCFAs, such as butyrate, also influences the immune response, promoting Treg differentiation and tolerance [185,186]. For example, appropriate Th17 and Th1 responses are important for the clearance of Citrobacter rodentium and non-typhoidal Salmonella enterica infections [187,188]. Although, under normal conditions, the gut constitutes a microenvironment controlled by balanced T-cell responses, prolonged dysbiosis favors an inappropriate persistent proinflammatory response [189]. 
Microbial-Based Therapies Antibiotics have long been considered in the treatment of IBD because of the prevalence of microbial abnormalities and the presence of known pathogens. Although the use of antibiotics has been clearly supported for the treatment of infectious complications related to IBD, several pieces of evidence have failed to find consistent beneficial associations between antibiotic treatment and IBD remission [190]. Selby et al. did not find beneficial outcomes for CD with treatment with clarithromycin, rifabutin, and clofazimine aimed at eradicating M. avium subspecies paratuberculosis in a two-year randomized clinical trial [191]. Nevertheless, some data support the clinical application of antibiotics such as ciprofloxacin, with or without metronidazole, for treating active fistulizing perianal CD [21,192]. Other methods of bacterial manipulation have provided additional evidence supporting the role of the microbiota in the pathogenesis of IBD. One method of microbiota manipulation in IBD is the introduction of dietary probiotics to control the growth of pathological components and/or switch the global composition towards a healthier one. E. coli Nissle 1917, a nonpathogenic strain clinically used as a probiotic, has been shown to be effective in inducing the remission of patients with UC. In addition, E. coli Nissle 1917 has been associated with maintenance of remission in patients with UC for at least one year [193,194]. Similarly, the probiotic VSL#3, a set of eight bacterial strains (Bifidobacterium breve, B. longum, B. infantis, Lactobacillus acidophilus, L. plantarum, L. paracasei, L. bulgaricus, and Streptococcus thermophilus) has significantly reduced scores of disease severity and induced remission in patients with UC compared to a placebo [37,195,196]. Other probiotics, such as Lactobacillus GG, have been shown to be effective when associated with IBD oral therapy, such as mesalamine [38,197]. Nevertheless, so far, data regarding the effectiveness of probiotics for treating patients with CD have failed to reach substantial association with the induction of remission. Although currently available probiotics potentially modulate dysbiosis in IBD, their effects are transient and limited in most IBD subsets. In fact, most existing probiotics encounter colonization resistance in the host intestine and are present only for a limited period, even after long-term administration. Therefore, new alternatives have been investigated, including the use of genetically modified organisms with recombinant bacteria as vectors to deliver therapeutic molecules at target sites in the gut. For example, modified strains of Lactobacillus casei BL23 [198] and Streptococcus thermophilus CRL 807 [199] were engineered to produce superoxide dismutase, which has anti-inflammatory properties. Lactococcus lactis-secreting IL-10 [200], elafin (a human protease inhibitor) [201], and IL-27 (an immunosuppressive cytokine) [202] reportedly have anti-inflammatory effects in colitis models and may therefore represent potential candidates for future clinical trials. Another method of microbiota manipulation of increasing interest for potential therapeutic applications in various diseases is fecal microbiota transplantation (FMT). Recently, randomized clinical trials have assessed the benefits of the use of FMT in the treatment of patients with IBD. Moayyedi et al., for example, found that patients with recently diagnosed UC could be induced to remission after treatment with FMT [203]. 
Data from another study showed that for patients in remission, treatment with FMT was able to maintain clinical remission in 87.1% of patients compared to 66.7% receiving a placebo. These results indicate that the long-term beneficial effect of FMT in patients with UC in clinical remission could help sustain endoscopic, histological, and clinical remission [204]. A recent meta-analysis showed that FMT was effective in promoting clinical remission (OR = 3.47, 95% CI = 1.93-6.25) and clinical response (OR = 2.48, 95% CI = 1.18-5.21) to patients with active UC when compared to placebo [205]. Studies on the effectiveness of therapeutic microbiota manipulation in IBD are still in progress, and the results are expected to further understanding and guide the potential application in clinical practice. Complex Genetic and Molecular Network As previously mentioned, by using modern sequencing methodologies, several studies have compared and described the microbiota composition of healthy individuals versus patients with IBD [17,50,168]. For example, recent data revealed a loss of microbial diversity in patients with IBD, with a clear separation between CD and healthy patients, and a more heterogeneous profile in patients with UC [168]. Antibiotic resistance gene levels were increased in IBD, and their abundance was positively correlated with Escherichia and Bacteroides bacteria [206]. Furthermore, higher than normal levels of hydrogen sulfide generated by gut microbiota have been strongly associated with IBD pathogenesis and indicate increased prevalence of sulfate-reducing bacteria, such as Deltaproteobacteria, Desulfotomaculum, Desulfosporosinus, Thermodesulfobacterium, and Thermodesulfovibrio genera [207]. In CD, metagenomic and metaproteomic studies have characterized a decrease in levels of butanoate and propanoate metabolism genes, butyrate, and other SCFAs, in agreement with the decrease in abundance of SCFA-producing Firmicutes bacteria seen in taxonomic profiling studies [208,209]. Several functional changes in the microbiome of IBD have been identified, including an increase in the activities of pathobionts, alterations in the synthesis of amino acids, neurotransmitters, and vitamins, regulation of mineral absorption, degradation of complex carbohydrates, and effects on pathways related to SCFAs, cysteine, and L-arginine synthesis [168,209,210]. In a metagenomic study with a pediatric CD cohort undergoing anti-TNF-α therapy, greater microbiota changes were correlated with higher levels of fungal and human DNA and variations in microbial genes. Examples of these variations include a decrease in selenocompound metabolic pathway activity and an increase in levels of microbial genes encoding glycerophospholipid metabolism, aminobenzoate degradation, sulfur relay systems, and glutathione metabolism [211]. Together, these data indicate that further metabolomic studies could help differentiate, diagnose, and better characterize disease activity [212]. The measurement of RNA transcripts in tissues from patients with IBD may predict the pathways that are activated and involved in the disease. Using deep RNA sequencing, studies of the transcripts have identified molecular subtypes of CD [213][214][215]. A remarkable report on this topic is the Pediatric RISK Stratification Study, which showed that these molecular signatures may predict disease behavior [216]. However, as RNA does not necessarily represent the proteins produced in cells, there are limitations to this approach. 
For example, a study on the feasibility of metatranscriptomics for fecal samples observed that transcriptional profiles differed more between individuals than metagenomic functional profiles [168]. Metatranscriptomic data also revealed some species-specific biases in the transcriptional activity of gut bacteria, especially with IBD-specific microbial populations, such as F. prausnitzii [210]. Metabolomics research, including analysis of plasma, serum, urine, stool, and intestinal biopsies, has provided data allowing for differentiation between healthy controls and patients with IBD [217]. In the stool of patients with IBD, a loss of metabolites has been observed in concordance with the loss of microbial diversity [168,218]. There were lower levels of secondary bile acids, sphingolipids, short-and medium-chain fatty acids, and vitamins, whereas primary bile acids, amino acids, polyamines, arachidonate, and acylcarnitines were present in higher levels compared to the controls [219]. In IBD, farnesoid X receptor (FXR) activation, triggered by bile salts, led to the downregulation of proinflammatory cytokines, and in CD, intestinal biopsies showed lower expression levels of FXR [220,221]. Another compound commonly associated with IBD is tryptophan, an essential aromatic amino acid obtained from the diet and a precursor of numerous molecules, such as serotonin, melatonin, nicotinamide, and vitamin B3, as well as other intermediates. Common sources of tryptophan are dairy foods, poultry, fish, and oats, and tryptophan is metabolized by both the host and gut microbiota. Microbiota metabolism leads to indole metabolites that can activate aryl hydrocarbon receptors (AhR) and participate in the onset of IBD [222,223]. Few bacteria produce AhR agonists, such as Peptostreptococcus russellii and members of Lactobacillus, whereas indole-propionic acid (IPA) production has been best characterized in Clostridum sporogenes [224]. Indole induces the release of glucagon-like peptide-1 and its derivatives, indoleacetic acid, indole-3-acetaldehyde, indole-3-aldehyde, indoleacrylic acid, and indole-3-propionic acid. IPA via AhR affects T-cell immunity and exerts anti-inflammatory effects in the gut [225]. AhR expression levels are reduced in patients with CD, whereas tryptophan deficiency promotes more severe colitis in mice [226,227]. Proteomic studies have also been conducted to explore innate and adaptive immune mechanisms in IBD. For example, compared to those in the controls, in patients with UC, 46 proteins, excluding neutrophils and their extracellular trap proteins, were more abundant in the colon tissue [228,229]. In intestinal biopsies of patients with CD, the proteomes of human Th1 and Th1/Th17 clones were studied, and 334 proteins were found to be differentially expressed. Cytotoxic proteins, such as granzyme B and perforin, were more abundant in Th1 cells than in Th17 cells, but only in a subgroup of Th1 cell clones from patients with CD [230]. Regarding regulatory T cells (CD4+ Foxp3+), a proteomics study identified a novel protein, THEMIS, which is important for the suppressive function of Treg cells [231]. In agreement with proteomic studies, the lipidome and immune responses have also been investigated in IBD. For instance, the inflamed mucosa of patients with UC showed increased levels of seven eicosanoids (prostaglandin (PG) E2, PGD2, thrombox-ane B2, 5-hydroxyeicosatetraenoic acid (HETE), 11-HETE, 12-HETE, and 15-HETE) [232]. 
Macrophages from patients with CD challenged with heat-inactivated E. coli presented lower levels of newly synthesized phosphatidylinositol [233]. Lipidomic analysis of the phosphatidylcholine lipidome profile of rectal mucus obtained from patients with UC showed lower levels of phosphatidylcholine compared to patients with CD and controls. Interestingly, supplementation with delayed-release phosphatidylcholine was clinically effective [234]. Although knowledge of the mechanisms underlying IBD continues to expand, novel data stemming from individual pathogenic constituents are usually not integrated, leading to only limited data being utilized for achieving relevant progress in the field [64]. Regarding the microbiome, complexity becomes even more evident as modern evolving technologies provide an exponential increase in novel information with an overwhelming accumulation of data. Hence, it is currently believed that a better understanding of the pathogenesis of complex diseases such as IBD will depend on the comprehensive integration of knowledge from different "-omes," including the microbiome, exposome, and genome. Conclusions In the last two decades, it has become increasingly evident that the microbiome, immune system, genome, and exposome are comprised of highly complex, dynamic, and mutually interactive systems. Nevertheless, the traditional approach for evaluating the individual components that presumably participate in the pathogenesis of IBD, including the microbiota, has not been sufficient to determine the interconnecting pathways underlying the multiple biological systems involved in the disease development. Even using the best of the currently available methods, including clinical, laboratory, endoscopic, histological, and imaging parameters, we still only have a narrow understanding of the intricate mechanisms responsible for chronic inflammation and the peculiar dynamics and specificities affecting each patient with IBD. Consequently, current treatments continue to be mostly empirical and have limited efficacy. Regarding the microbial component, although causality remains to be clearly established, evidence indicating an association with IBD pathogenesis is rapidly accumulating. However, a better understanding of the probable microbial basis of IBD depends on more complete, deep, and unbiased investigations at multiple and simultaneous levels, including microbial strains and genomic and functional features, ideally allowing the construction of full transcriptomic and metabolomic profiles. High-throughput technologies capable of analyzing innumerable parameters of the microbiome in conjunction with other system variables have been developed in recent years. Hopefully, more integrative analysis will enable data assembly in a comprehensive fashion to build an IBD network and translate information into useful biological insights with direct influence on specific therapeutic targets, clinical decisions, and disease outcomes, which will preferably be individualized.
2022-04-13T05:19:26.105Z
2022-03-23T00:00:00.000
{ "year": 2022, "sha1": "faec039d06679b63f7dc7f350e9722ba00c1524c", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "faec039d06679b63f7dc7f350e9722ba00c1524c", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
246866213
pes2o/s2orc
v3-fos-license
Acute odynophagia: A new symptom of COVID‐19 during the SARS‐CoV‐2 Omicron variant wave in Sweden Abstract Objective The objective of this study is to present a novel clinical manifestation of infection with the Omicron variant of the SARS‐CoV‐2 virus affecting mainly young, vaccinated, and healthy adults. We describe a new group of COVID‐19 patients seeking emergency care with symptoms similar to the life‐threatening condition epiglottitis. Here, we present a case series and discuss management. Methods We performed a retrospective single‐center case study of patients diagnosed with COVID‐19 who were referred to the Ear, Nose, and Throat Emergency Department (ENT ED) between January 1 and January 23, 2022 with clinical symptoms such as acute odynophagia, severe sore throat, and fever. Ethical approval was obtained from the Swedish Ethical Review Authority (2020‐02579). Informed consent was obtained from all patients included in the study. Results Twenty patients meeting inclusion criteria were identified. Fifteen patients were fully vaccinated against COVID‐19. Four patients needed a short hospitalization for their symptoms. The most common diagnoses were COVID‐19‐associated acute viral laryngotracheitis and/or viral pharyngitis. Six patients presented with signs of secondary bacterial infection and were put on antibiotics. Conclusion Previous variants of SARS‐CoV‐2 infection affected predominantly the lower respiratory tract and were associated with loss of smell and taste in many patients. The Omicron variant seems to affect predominantly the upper airways and cause acute laryngitis without olfactory dysfunction. In some patients, the clinical manifestation is similar to the symptoms of epiglottitis. In such a case, a prompt examination of the larynx is the gold standard to exclude inflammatory edema in the upper airways. None of the patients described in this study developed epiglottitis. In this study, we discuss the management of acute odynophagia in COVID‐19 patients. Introduction Since the beginning of the COVID-19 pandemic, an estimated 360 million cases and 5.6 million deaths have been reported due to SARS-CoV-2 infection [1]. While the vaccine immunity against the SARS-CoV-2 virus has been increasing worldwide, on November 11, 2021, the first case of Omicronthe novel variant of concern-was reported in Botswana and several days later in Hong Kong [2]. Since then, Omicron has rapidly spread across the globe, replacing the Delta variant and causing new infection waves in multiple countries. Omicron has been proven to be a highly transmissible variant with doubling time of approximately 2-3 days [3,4]. The exact clinical features and complications are still unknown. The preliminary data from the ZOE COVID Study run by ZOE and King's College of London indicate that Omicron symptoms come predominantly from the upper respiratory tract and include runny nose, sore throat, headache, fatigue, and sneezing [5]. The loss of smell and taste-one of the previously common symptoms of COVID-19 infection-is reported to be rare during Omicron infection. Early data also suggest that Omicron infection is associated with a lower risk of hospitalization, serious disease, and death [6,7]. In the last week of 2021, Omicron was the dominant variant in the region of Stockholm, Sweden [8]. 
At the same time as the Omicron variant became dominant, we experienced a high volume of young adults referred to our Ear, Nose, and Throat Emergency Department (ENT ED) with acute odynophagia, severe sore throat, and fever. Most of the patients were vaccinated against COVID-19 and did not have any comorbidities. The most common diagnoses in those patients were COVID-19-associated acute viral laryngotracheitis and/or viral pharyngitis. Those patients did not present with symptoms from the lower respiratory tract and did not experience smell/taste loss. Most notably, this is a new group of patients seeking emergency care with symptoms similar to the life-threatening condition epiglottitis. With an increasing number of Omicron cases, it is anticipated that the volume of patients with these symptoms could become overwhelming for EDs. We present a case series of 20 adults with COVID-19 with acute odynophagia as the main symptom of Omicron infection. We also briefly describe the appropriate management of patients presenting with this clinical manifestation of COVID-19. Design We conducted a retrospective single-center case study, enrolling consecutive patients with a confirmed COVID-19 diagnosis who were referred to the ENT ED at Karolinska University Hospital, Stockholm, due to acute odynophagia, severe sore throat, and fever between January 1 and January 23, 2022. Included motivations for referral to the ENT ED were as follows: severe sore throat, peritonsillitis, acute odynophagia, and epiglottitis. Case confirmation To be included in the study, patients were required to have a positive reverse transcription-polymerase chain reaction (RT-PCR) test detecting SARS-CoV-2 RNA in a nasal swab or a positive rapid antigen test performed by a healthcare professional and registered in the electronic medical documentation within 3 days before or following the visit to the ENT ED. Patients who did not meet these criteria were excluded from further analysis. Five patients enrolled in the study were tested for COVID-19 between January 3 and January 9 (week 1). Five patients were tested for COVID-19 between January 10 and January 16 (week 2). Ten patients were tested for COVID-19 between January 17 and January 23 (week 3). According to official statistics of the Public Health Agency of Sweden, among sequenced PCR samples, during week 1, 91% of cases were the Omicron variant. During week 2, 96.3% of cases were confirmed as Omicron. During week 3, 98.6% of cases were confirmed as Omicron [9]. Thus, with high probability, close to certainty, patients included in the study were infected with the Omicron variant of SARS-CoV-2 (see the illustrative calculation below). Data collection Medical records were reviewed and patients' demographic and medical histories were collected. Laboratory information, vaccination status, and details of ENT assessment were available in the medical records. The following epidemiological variables were collected: gender, age, comorbidities, the status of vaccination against COVID-19, and history of past COVID-19 infection. The referral and notes following the ENT assessment were reviewed to extract symptoms of COVID-19 infection in enrolled individuals. We searched for general symptoms such as fever, asthenia, cough, myalgia, headache, diarrhea, dyspnea, and ENT symptoms such as odynophagia, sore throat, otalgia, nasal obstruction, hoarseness of voice, self-reported olfactory disorder, and self-reported taste disorder. We used descriptive statistics to characterize the described cohort.
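To make the variant-attribution argument above concrete, the short Python sketch below estimates how many of the 20 enrolled patients would, on average, be expected to carry a non-Omicron variant. It assumes, purely for illustration, that each patient's probability of Omicron infection equals the weekly proportion of Omicron among sequenced samples quoted above and that patients are independent; this calculation is not part of the original study analysis.

```python
# Illustrative check of the variant attribution: expected number of non-Omicron
# cases and the chance that every enrolled patient carried Omicron.
# Assumptions: per-patient Omicron probability equals that week's proportion of
# Omicron among sequenced samples, and patients are independent.

weeks = [
    (5, 0.910),   # week 1 (Jan 3-9): 5 patients, 91% of sequenced samples Omicron
    (5, 0.963),   # week 2 (Jan 10-16)
    (10, 0.986),  # week 3 (Jan 17-23)
]

expected_non_omicron = sum(n * (1 - p) for n, p in weeks)

prob_all_omicron = 1.0
for n, p in weeks:
    prob_all_omicron *= p ** n

print(f"Expected non-Omicron cases among the 20 patients: {expected_non_omicron:.2f}")
print(f"Probability that all 20 patients carried Omicron: {prob_all_omicron:.2f}")
# Under these assumptions, fewer than one non-Omicron case (~0.8) is expected in
# the cohort, and each individual patient had at least a 91% chance of Omicron.
```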
Follow-up was performed by the main investigator (K.P.) in the form of telephone calls to enrolled patients. Information about the resolution of the infection and management of symptoms was collected. Demographic and epidemiologic characteristics of COVID-19 patients Twenty patients meeting inclusion criteria were identified. The mean age of patients was 32 ± 10 years. There were nine (45%) female and 11 (55%) male patients. Nineteen patients had the COVID-19 diagnosis confirmed by a positive RT-PCR test and one patient had the diagnosis confirmed by a positive antigen test performed by a healthcare professional. The majority of patients had no comorbidities (85%). The demographic data are summarized and presented in Table 1. Vaccination status of COVID-19 patients The summary of data on vaccination status is summarized in Table 1. Fifteen patients (75%) were fully vaccinated against COVID-19. One fully vaccinated patient had previous COVID-19 infection confirmed with a positive RT-PCR test in 2020. There were four patients (20%) who were unvaccinated against COVID-19 and did not have COVID-19 previously. One patient was vaccinated with just one dose. Clinical picture of COVID-19-associated acute odynophagia All patients included in this case series had acute odynophagia and severe throat pain as a major complaint. Seven patients (35%) presented with hoarseness of voice as a part of clinical manifestation. Fifteen patients had fever at the time they presented at the ENT ED. In six patients, signs of secondary bacterial infection were identified, and thus treatment with antibiotics was immediately started. Four patients (20%) needed hospitalization for management of their symptoms, including one patient who presented with edema of the arytenoid region. None of the patients developed a clinical picture of epiglottitis and needed airway management. None of the patients presented with loss of smell or taste. All the patients recovered from their COVID-19 infections. The laryngeal findings in the majority of patients included general redness in the hypopharynx and larynx. As stated earlier, only one patient had a finding of edema in the arytenoid region that quickly resolved after administration of intravenous corticosteroids and epinephrine inhalation. The summary of laryngeal findings of the enrolled patients is presented in Table 2. Figure 1 shows representative laryngofiberscopic findings in one of the patients with the erythematous epiglottis and bilateral erythematous arytenoids with saliva pooling in the pyriform sinus. Management Treatments of odynophagia caused by viral or bacterial agents include treating the underlying infection. As there is no widely used specific treatment for COVID-19, the first treatment choice is pain management including paracetamol and nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen or diclofenac. In addition, affected patients may use a local anesthetic containing lidocaine in the form of a spray or oral solution to numb the throat. Antibiotics are not recommended if there is no suspicion of secondary bacterial infection. Antibiotics have been shown to have a modest beneficial effect over placebo in reducing the symptom of sore throat [10]. In cases where antibiotics are indicated, penicillin V taken two or three times daily for 10 days is the treatment of choice. Patients with severe symptoms may be given one dose of epinephrine in an inhaled form using a nebulizer to reduce airway inflammation. This can be repeated frequently. 
Epinephrine inhalation gives a quick effect and may relieve pain by reducing edema; however, its effect wears off quickly. Another alternative is a single dose of oral or intramuscular corticosteroids. Dexamethasone is the recommended corticosteroid due to its long-lasting effects. Corticosteroids were shown to moderately reduce throat pain in several clinical trials [11]. Discussion We report a case series of COVID-19-positive patients seeking the ENT ED due to acute odynophagia, severe sore throat, and fever. During the first weeks of the Omicron wave in Sweden, we experienced a high volume of young patients presenting with this clinical manifestation of infection with SARS-CoV-2. All patients developed COVID-19-associated acute laryngitis and/or pharyngitis. The clinical triad of symptoms these patients presented with raised the suspicion of the life-threatening condition epiglottitis. Thus, a prompt clinical examination including laryngoscopy is necessary to make a diagnosis and guide management. None of the patients described in this study presented with a swollen/edematous epiglottis or required airway management. Only one patient presented with swelling of an arytenoid region. However, in the literature search we identified seven cases of acute epiglottitis with concomitant COVID-19 infection [12][13][14][15][16][17][18], so this diagnosis should be considered and promptly excluded, especially if a patient presents with the typical clinical triad. The clinical picture of SARS-CoV-2 infection with acute odynophagia, severe sore throat, and fever has become a common COVID-19 manifestation of the Omicron variant. During the previous waves, symptoms such as cough, fever, and loss of taste or smell were more prominent. Those symptoms are currently rarely reported by Omicron-infected patients [5]. The preliminary evidence suggests that Omicron gives milder symptoms compared to Delta, but it is still unclear whether the reduced severity is caused by the variant's characteristics or by the increasing vaccine immunity worldwide. Still, even though the preliminary evidence suggests that Omicron is milder, a significant percentage of patients need to be hospitalized to manage their symptoms. In the described cohort, 20% needed hospitalization for management of their symptoms. All those patients were fully vaccinated with at least two doses against COVID-19. In the described cohort, once the suspicion of epiglottitis was ruled out, patients were assigned diagnoses such as acute viral laryngitis or acute viral pharyngitis based on the picture of the larynx and hypopharynx in laryngoscopy. In a few patients with signs of secondary bacterial superinfection, oral antibiotics were administered. The traditional treatment of acute laryngitis includes voice rest, analgesic therapy, and humidification [19]. According to the findings of a Cochrane Review investigating the benefits of antibiotic use for acute laryngitis in adults, antibiotics appear to have no benefit in treating this entity [20]. Conclusions In conclusion, we present a case series of 20 patients with COVID-19-associated laryngotracheitis and pharyngitis who were admitted to our ENT ED during the SARS-CoV-2 Omicron wave. In those patients, the dominant symptoms were acute odynophagia, severe sore throat, and fever. The majority of patients were young, healthy, and vaccinated against COVID-19. This clinical manifestation of COVID-19 was uncommon during previous waves.
Because a similar clinical triad of symptoms is associated with the life-threatening condition epiglottitis, a prompt examination of the larynx is recommended to exclude inflammatory edema in the upper airways. None of the patients with COVID-19-associated odynophagia presented with edema in the larynx or epiglottis causing airway obstruction. Still, 20% of the patients, despite being vaccinated against COVID-19, needed to be hospitalized for their symptom management. The management of acute odynophagia in the absence of edema in the airways consists of high-dose analgesics, NSAIDs, and a local anesthetic to numb the lining of the mouth and throat. In severe cases, epinephrine inhalation and oral/intravenous corticosteroids may be needed to relieve the symptoms. Conflict of interest The authors declare that they have no conflict of interest.
2022-02-17T06:17:23.563Z
2022-02-15T00:00:00.000
{ "year": 2022, "sha1": "960d6fe75c0856bf461d63d446f460356a6b3b32", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/joim.13470", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "1a664e7fdb0f0fed4b0de554114080adc2b34591", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250681495
pes2o/s2orc
v3-fos-license
Self-assembling nano-diameter needlelike pinning centers in YBCO, utilizing a foreign element dopant Although pinning centers created by irradiation presently produce the highest Jc, it is probable that ultimately these will be emulated by chemical pinning centers. The best pinning centers produced by irradiation nevertheless provide guidelines for desirable morphology of chemical pinning structures. The highest Jc produced earlier in textured HTS was obtained using isotropic high-energy ions produced by fission of 235U. This so-called U/n process produces pinning centers of diameter ⩽ 4.5 nm, with an effective length of ∼2.7 µm. Maximum Jc occurs for a pinning center density of ∼10^10 cm^−3. We use this as a model for desired chemical pinning centers. Our approach to introducing chemical pinning centers has been to produce precipitates within the HTS containing elements not native to the HTS, and to seek needlelike (columnar) deposits of small diameter. We report here on the formation of needlelike or columnar deposits in textured Y123 containing a dopant foreign to Y123. It serves as a demonstration that self-assembling nanometer-diameter columns utilizing a dopant foreign to the HTS system are a feasible goal. These deposits, however, do not fully meet the ultimate requirements of pinning centers because the desired deposits should be smaller. The self-assembling columns formed contain titanium, are ∼500 nm in diameter, and up to 10 µm long. The size and morphology of the deposits vary with the mass of admixed Ti dopant. Jc is decreased for small dopant mass. At larger dopant masses needlelike precipitates form, and Jc increases again. A small range of mass of admixed Ti exists in which Jc is enhanced by pinning. In the range of admixed Ti mass studied in these experiments there is a negligible effect on Tc. Magnetization studies of Jc are also reported. Introduction Critical current density (J c ) is one of the most important properties of high temperature superconductors (HTS). The achievement of high J c or trapped magnetic field (B trap ) makes it practical for HTS to be used in devices such as motors and generators, and power transmission. In textured HTS, J c or B trap can be greatly enhanced by the introduction of pinning centers. Pinning centers are regions in the HTS that can trap magnetic fluxoids. In HTS, pinning centers can be created by as-grown and artificial methods. Presently the highest J c in textured HTS is produced by irradiation techniques. One such method is the U/n process [1]. In this process 235U is admixed into the HTS powders, textured, and then irradiated with thermal neutrons. Ions from the fission process then move within, and damage, the HTS. Fission fragment defects are composed of short columns, broken aligned columns and "string of beads". These defects are ≤ 4.5 nm in diameter with an effective length for pinning of ~ 2.7 μm, and are isotropically distributed within the HTS. The fission fragment defects act as pinning centers and greatly improve J c or B trap [1]. Maximum J c or B trap occurs for a pinning center density of ~ 10^10 cm^−3. There are, however, negative aspects to U/n processing. In the U/n process, samples must be irradiated with neutrons; consequently, this extra step in processing incurs additional costs. Also, U/n processed samples have a residual radioactivity, which may make the process unattractive to implement by industry.
Nevertheless, irradiation pinning centers provide guidelines for desirable morphologies of chemical pinning structures. It is probable then that pinning centers with residual radioactivity will ultimately be emulated by purely chemical methods. Our approach to producing chemical pinning centers has been to admix elements not found in the HTS, which may then form precipitates and act as pinning centers [2]. We report here our first successful attempt to emulate columnar irradiation pinning centers by a chemical method. It demonstrates that self-assembled nanometer-diameter columnar deposits can be formed, and that these deposits act as pinning centers to increase J c or B trap in textured YBa 2 Cu 3 O 7-δ (Y123). Sample Processing and Testing In this experiment we used titanium (IV) oxide (TiO 2 ) as the dopant. The TiO 2 was quoted by the manufacturer to be spherical in shape with a diameter of 34 nm. Several batches of textured Y123 samples were produced. Each batch contained Y123 + 30 mol% Y 2 BaCuO 5 (Y211) + 0.5 wt% platinum + a fixed wt% of TiO 2 . TiO 2 was admixed in doping levels from 0.0 wt% (undoped control samples) to ~ 0.6 wt%. The samples were Y123 cylinders of diameter 1.5 cm and 6.5 mm thick. B trap was measured after samples were cooled in liquid nitrogen while in the field of a 1 T electromagnet. Microstructure studies were done with a JEOL JXA-8600 electron microprobe with wavelength dispersive (WDS) and energy dispersive (EDS) x-ray spectrometers attachments. J c , as a function of applied field (B applied ), was measured at 77 K with a 1.5 T vibrating sample magnetometer (VSM). The VSM was also used to measure critical temperature (T c ). J c and T c measurements were done using rectangular tiles, 3 mm x 3 mm x 1 mm, cut from the larger cylinders. Results and Discussion The leading discovery of this experiment is the formation of deposits with a columnar or needlelike morphology. Analysis of microstructure data indicates that they are ~ 500 nm in diameter and typically ~ 5-10 μm long. The needles grow shorter for increasing wt% of Ti. As the needles grow shorter, the formation of spherical deposits is also observed. Predominantly spherical shaped deposits are seen in the higher wt% Ti-doped samples. Figure 1 shows a sampling of the microstructure of samples from this experiment. All needles and spherical deposits contain Ti as confirmed by the analysis of EDS and WDS data. For samples containing < 0.15 wt% TiO 2 , there are no observed Tirich deposits. At 0.15 wt% TiO 2 , needles were formed. As the TiO 2 concentration is increased beyond 0.15 wt%, the needles grow shorter; spheroids are also formed. Most of the Ti-rich deposits were spherical for > 0.4 wt% doping. Ti deposits do not appear to be uniformly distributed within the Y123 grain. For all samples doped with Ti, there is substitution of Ti into the Y123 crystal matrix [3][4]. The amount of admixed TiO 2 appears to affect the morphology and composition of Ti-rich deposits. By comparing Ti-doped samples to undoped samples, it appears that Ti did not interfere with the ability of Pt to refine Y211 particles. In all samples containing Ti, there were areas in the Y123 that were free of Y211. These Y211 segregated areas were not seen in control samples. Also, the growth of single grains along the c-axis decreases as the wt% of admixed TiO 2 is increased. Reproduction of exact morphology and pinning center density is difficult. This indicates that we are operating some parameter close to a critical value. 
This could be a processing parameter such as melt temperature, or a chemical parameter such as wt% of admixed Pt. Recall that for all samples containing Ti, there is substitution into the Y123, and the growth of single grains diminishes with increasing wt% TiO 2 ; these effects generally deteriorate J c and B trap . Ti-rich deposits of a columnar or needlelike morphology are observed at 0.15-0.2 wt%. The needles act as pinning centers and tend to increase J c , thereby partially negating the effects of Ti substitution and decreased grain growth. As the Ti-rich deposits become a mixture of shorter needles and spheroids (TiO 2 > 0.2 wt%), the number of pinning centers also increases and hence J c increases. At 0.4 wt%, where most of the Ti-rich deposits are spherical, B trap recovers to the value of undoped samples, which were typically 2700 Gauss (J c ~ 10 kA cm^−2) for 1.5 cm diameter × 6.5 mm thick cylinders. B trap decreases again for > 0.4 wt% doping. This is probably because the high wt% Ti inhibits the growth of single grains [4]. The appearance of a peak in B trap (and therefore J c ) at 0.4 wt% suggests that an optimum level of Ti doping for pinning is 0.15 wt% < TiO 2 < 0.4 wt%. To separate the behavior of J c from the problem of grain growth along the c-axis caused by Ti doping, smaller samples (3 mm × 3 mm × 1 mm) were cut from the larger cylindrical samples and analyzed by VSM. By comparing similar grain sizes, a more accurate pinning effect of Ti doping on J c can be obtained as a function of B applied . J c was measured at 77 K with B applied parallel to the c-axis of the sample. Figure 3 shows the results. For B applied ~ 0.1 T, J c of undoped samples is ~ 40 kA cm^−2. J c decreased to ~ 30 kA cm^−2 in Ti-doped samples that did not contain distinguishable Ti deposits, i.e., at 0.1 wt%. For samples containing mostly Ti-rich needles, i.e., 0.15-0.2 wt%, J c no longer decreased. Instead, J c increased to ~ 35 kA cm^−2, indicating that the needles act as pinning centers. J c continued to increase as more deposits were formed, with an assortment of morphologies from a mixture of short needles and spheroids to mostly spheroids. At B applied ~ 0.6 T, for 0.2 wt%, where only needles are formed, J c is ~ 1.1 times the J c of undoped samples. For a mixture of shorter needles and spheroids at 0.3 wt% there is also an increase in J c of ~ 10%. This indicates that, although there are a greater number of deposits with different morphologies (a mixture of needles and spheroids) at 0.3 wt%, the columns or needles at 0.2 wt% not only act as pinning centers but also appear to be more effective pinning centers than the spherical deposits. T c measurements are shown in figure 4. For all samples, including the undoped ones, the midpoint of the temperature transition was 91-92 K. A decrease in T c of ~ 0.5 K was measured in samples doped with ≥ 0.3 wt%. A broadening of the T c transition was observed for samples doped with > 0.2 wt% TiO 2 . T c for undoped samples was ~ 91.8 K. Summary Our exploratory experiments on nanometer-sized TiO 2 doping of Y123 showed that self-assembled, nano-sized columnar chemical pinning centers are achievable. Processing conditions still need to be optimized to reduce grain growth problems and better control morphology. The formation of chemical columns appears to be very sensitive to the doping level of TiO 2 , and also to the processing conditions. We speculate that the formation of needles depends critically on the dopant's particle size.
It was previously shown that softer and smaller-sized dopants produce smaller deposits [5]. Needlelike pinning centers are typically ~ 5-10 μm long and ~ 500 nm in diameter, and are formed with 0.15-0.2 wt% TiO 2 doping. This range also appears to be optimum for Ti doping, as it has a minimal effect on T c . Although the effect on T c is measured to be small, it is desirable to use a dopant that does not substitute into the crystal matrix. Such a dopant may not inhibit grain growth as much as Ti doping does. Additionally, the diameter of the needles is still too large to emulate the best irradiation pinning centers. If the diameter of these Ti needles can be decreased, e.g., by a factor of 2, then the resulting increase in the number of needlelike chemical pinning centers should raise J c in textured Y123 to > 100 kAcm -2 .
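As a rough, order-of-magnitude consistency check (not part of the original analysis), the relation between J c and B trap quoted above can be reproduced with the Bean critical-state model. The sketch below assumes a uniform J c circulating azimuthally in a fully magnetized cylinder of the stated 1.5 cm × 6.5 mm geometry and numerically sums the on-axis Biot-Savart contributions; for J c ~ 10 kAcm -2 it returns a surface field of a few thousand gauss, the same order as the ~2700 gauss trapped fields reported for the undoped samples.

```python
import numpy as np

def bean_trapped_field(jc, radius, thickness, n=400):
    """On-axis field at the centre of the top face of a cylinder carrying a
    uniform azimuthal current density jc (Bean critical-state assumption).
    Units: A/m^2 and metres; returns teslas."""
    mu0 = 4e-7 * np.pi
    r = np.linspace(1e-6, radius, n)        # radii of the circulating current loops
    z = np.linspace(1e-6, thickness, n)     # depths of the loops below the top face
    rr, zz = np.meshgrid(r, z)
    integrand = rr**2 / (rr**2 + zz**2) ** 1.5   # on-axis field of each loop
    return 0.5 * mu0 * jc * integrand.sum() * (r[1] - r[0]) * (z[1] - z[0])

# 1.5 cm diameter x 6.5 mm thick cylinder, Jc ~ 10 kA/cm^2 = 1e8 A/m^2
print(bean_trapped_field(1e8, 0.0075, 0.0065))   # ~0.3-0.4 T, i.e. a few thousand gauss
```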
2022-06-28T03:51:08.847Z
2006-01-01T00:00:00.000
{ "year": 2006, "sha1": "b2f4d284793c9bbbc2c4fced6a13a8a085eedd47", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/43/1/060", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "b2f4d284793c9bbbc2c4fced6a13a8a085eedd47", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
36683529
pes2o/s2orc
v3-fos-license
On the Distribution of Plasmoids In High-Lundquist-Number Magnetic Reconnection The distribution function f (ψ) of magnetic flux ψ in plasmoids formed in high-Lundquist-number current sheets is studied by means of an analytic phenomenological model and direct numerical simulations. The distribution function is shown to follow a power law f (ψ) ∼ ψ −1 , which differs from other recent theoretical predictions. Physical explanations are given for the discrepant predictions of other theoretical models. In recent years, significant advances have been made in understanding the role of plasmoids (or secondary islands) in magnetic reconnection, which is believed to be the underlying mechanism of energy release for phenomena such as solar flares, magnetospheric substorms, and sawtooth crashes in fusion plasmas [1]. Plasmoids often form spontaneously in resistive magnetohydrodynamics (MHD) [2][3][4][5], Hall MHD [6,7], and kinetic particle-in-cell (PIC) [8][9][10] simulations of large-scale reconnection. Evidence of plasmoids has also been found in the magnetotail and the solar atmosphere [11][12][13], where they are demonstrated to play a significant role in particle acceleration [14]. In the framework of resistive MHD, magnetic reconnection is governed by the Lundquist number S ≡ V A L/η, where V A is the upstream Alfvén speed, L is the reconnection layer length, and η is the resistivity. The classical Sweet-Parker theory [15,16] assumes the existence of a stable, elongated current sheet and yields the reconnection rate ∼ BV A / √ S, where B is the upstream magnetic field. However, it has been shown recently that when S is above a critical value S c ∼ 10 4 , the Sweet-Parker current sheet becomes unstable to the plasmoid instability, with a growth rate that increases with S [4,17]. The reconnection layer changes to a chain of plasmoids connected by secondary current sheets that, in turn, may become unstable again. Eventually the reconnection layer will tend to a statistical steady state characterized by a hierarchical structure of plasmoids [18]. Scaling laws of the number of plasmoids n p , the widths δ and lengths l of secondary current sheets have been deduced from numerical simulations. These scaling laws can be understood by noting that the process of break-up of the secondary current sheet will stop when the local Lundquist number of a secondary current sheet drops below S c . Assuming that all secondary current sheets are close to marginal stability, it can be deduced that l ∼ LS c /S, δ ∼ l/ √ S c ∼ L √ S c /S, and n p ∼ L/l ∼ S/S c . The reconnection rate may be estimated as ηJ ∼ ηB/δ ∼ BV A / √ S c , independent of S [19]. The discovery of the surprising scaling properties of the plasmoid instability in the linear as well as nonlinear regimes, and the ubiquity of the instability in collisional as well as collisionless regimes, have raised interest in seeking a statistical description of the plasmoid dynamics in recent literature [20][21][22][23]. However, existing theoretical models give conflicting predictions. Using a heuristic argument based on self-similarity, Uzdensky et al.
suggested that the distribution function f (ψ) of plasmoids in terms of their magnetic fluxes ψ follows a f (ψ) ∼ ψ −2 power law [21]. On the other hand, the kinetic model of Fermo et al. [20] predicts a distribution function that decays exponentially in the tail. In this Letter, we employ both kinetic models and direct numerical simulations (DNS) of resistive MHD equations to study the distribution of plasmoids. We first recast the heuristic argument of Uzdensky et al. in the form of a kinetic model, and show that its steady-state solutions exhibit both a f (ψ) ∼ ψ −2 power-law regime and an exponential tail. This approach not only gives a formal derivation of the f (ψ) ∼ ψ −2 power law, but also elucidates when the power-law regime makes a transition to the exponential tail. However, the results of DNS show a power law closer to f (ψ) ∼ ψ −1 than to f (ψ) ∼ ψ −2 . By careful analysis, we identify the physical causes for this deviation, and propose a modified kinetic equation that yields solutions consistent with the results of DNS. To fix ideas, we begin with a new model kinetic equation for the plasmoid distribution function f (ψ) as a function of the flux ψ that yields the power-law solution obtained heuristically in [21]. The distribution function f (ψ) of the magnetic flux ψ evolves in time due to the following four effects: (1) the fluxes of plasmoids increase due to reconnection in secondary current sheets; (2) new plasmoids are generated when secondary current sheets become unstable; plasmoids are lost by (3) coalescence and (4) by advection out of the reconnection layer. These effects can be encapsulated in the equation ∂f /∂t + α ∂f /∂ψ = ζδ(ψ) − f N/τ A − f /τ A . (1) Here N (ψ) ≡ ∫ ∞ ψ f (ψ ′ )dψ ′ is the cumulative distribution function, i.e. the number of plasmoids with fluxes larger than ψ. In Eq. (1), the following assumptions have been made: (1) All secondary current sheets are close to marginal stability, therefore on average all plasmoids grow at a constant rate α ∼ BV A / √ S c . (2) When new plasmoids are created, they contain zero flux (represented by the source term ζδ(ψ), where δ(ψ) is the Dirac δ−function). (3) Plasmoids disappear upon encountering larger plasmoids. This is represented by the loss term −f N/τ A , where the characteristic time scale to encounter a larger plasmoid is estimated as ∼ τ A /N ≡ L/N V A , assuming the characteristic relative velocity between plasmoids is of the order of V A . The process of coalescence is assumed to be instantaneous. Note that when two plasmoids coalesce, the flux of the merged plasmoid is equal to the larger of the two original fluxes [20]. Therefore, coalescence does not affect the value of f at the larger of the two fluxes. (4) Lastly, plasmoid loss due to advection is represented by the term −f /τ A , where the time scale τ A is based on the outflow speed ∼ V A . Under steady-state conditions, Eq. (1) admits the analytic solution f (ψ) = 2C e −ψ/ατ A /[ατ A (C − e −ψ/ατ A ) 2 ], (2) where the constant C = 1 + 2/n p , with the total number of plasmoids n p = ∫ ∞ 0 f (ψ)dψ. The source term ζδ(ψ) sets the boundary condition f (0) = ζ/α, which gives the relation ζτ A = n p 2 /2 + n p . The source term magnitude ζ may be estimated by the relation n p ∼ S/S c . In the regime ψ/ατ A ≪ 1, the solution follows the ψ −2 power law f (ψ) ≃ 2ατ A /ψ 2 when ψ/ατ A ≫ 2/n p , and flattens to f ≃ n p 2 /2ατ A when ψ/ατ A ≪ 2/n p . Therefore, the solution admits both an exponential tail and a power-law regime. It can be shown that the dominant loss mechanism in the former regime is advection, whereas coalescence dominates in the power-law regime. Figure 1 shows the distribution function (2) for S = 10 6 , 10 8 , and 10 10 .
Here, to fix ideas, we have taken S c = 10 4 , V A = 1, B = 1, and L = 1, and all scaling relations, such as α ∼ BV A / √ S c , are replaced by equalities. Note that the range where the f ∼ ψ −2 power law holds is more extended for higher S. To test the f (ψ) ∼ ψ −2 power law by DNS, we use the same simulation setup of two coalescing magnetic islands as in a previous study [19]. The 2D simulation box is the domain (x, z) ∈ [−1/2, 1/2] × [−1/2, 1/2]. In normalized units, the initial magnetic field is given by B 0 = ∇ψ 0 ×ŷ, where ψ 0 = tanh (z/h) cos (πx) sin (2πz) /2π. The parameter h, which is set to 0.01 for all simulations, determines the initial current layer width. The initial plasma density ρ is approximately 1, and the plasma temperature T is 3. The density profile has a weak nonuniformity such that the initial condition is approximately force-balanced. The initial peak magnetic field and Alfvén speed are both approximately unity. The plasma beta β ≡ p/B 2 = 2ρT /B 2 is greater than 6 everywhere. Perfectly conducting and free-slipping boundary conditions are imposed along both x and z directions. Only the upper half of the domain (z ≥ 0) is simulated, and solutions in the lower half are inferred by symmetries. We use a uniform grid along the x direction and a nonuniform grid along the z direction that packs high resolution around z = 0. For cases with S = 10 6 and 3 × 10 6 , the mesh size is 12726 × 1600, and the smallest grid size along z is 5.7 × 10 −6 . For the S = 10 7 case, the mesh size is 37800 × 2880, and the smallest grid size along z is 1.9 × 10 −6 . No explicit viscosity is employed in these simulations. A fourth-order numerical dissipation is added to damp small fluctuations at grid scale [24]. The initial velocity is seeded with a random noise of amplitude 10 −6 to trigger the plasmoid instability. The early period when the reconnected flux is less than 0.01 is excluded from the analysis to allow the reconnection layer to reach a statistical steady state. We take data during the period when the reconnected flux is between 0.01 and 0.05, corresponding to 25% of the initial flux in each of the merging islands. This period roughly spans 6τ A , insensitive to S. Snapshots are taken at intervals of 0.01τ A . We identify plasmoids within the range x ∈ [−0.25, 0.25] with a computer program for each snapshot, which provides the dataset for further statistical analysis. Figure 2 shows the probability distribution functions f (ψ) for S = 10 6 , 3 × 10 6 [two runs, labeled as (a) and (b)], and S = 10 7 . Distribution functions are normalized such that ∫ ∞ 0 f (ψ)dψ is equal to the average number of plasmoids in each time slice. These numerical results appear to be robust and reproducible, as exemplified by the two S = 3 × 10 6 runs that yield nearly identical distribution functions. Qualitative similarities between Fig. 1 and Fig. 2, especially the existence of three distinct regimes, are evident. However, the distribution function in the power-law regime is closer to f (ψ) ∼ ψ −1 than to f (ψ) ∼ ψ −2 .
A key assumption underlying the loss term −f N/τ A is that the relative speeds of a plasmoid with respect to neighboring plasmoids larger than itself are of the order of V A and are uncorrelated with the flux of the plasmoid. To examine this assumption with numerical data, we measure the relative velocity ∆v of each plasmoid at any given time with respect to the first larger plasmoid it will encounter, by extrapolating the trajectories of the plasmoids with their velocities at that time. Note that ∆v is undefined for the largest plasmoid, or when all larger plasmoids are moving away from a given plasmoid. The plasmoids with ∆v undefined are disregarded in the analyses. Figure 3 shows the distribution g(ψ, ∆v) of plasmoids with respect to ψ and ∆v from the run S = 10 7 . Here we normalize g(ψ, ∆v) such that ∫ ∞ −∞ g(ψ, ∆v)d(∆v) = 1 for better visualization. We can clearly see that the distribution is not uniform across different values of ψ. The distribution covers a broader range of ∆v at smaller ψ, and it becomes more concentrated around ∆v = 0 at larger ψ. Similar results are also observed in other runs. Therefore, it appears that the reconnection layer organizes itself spontaneously into a state such that large plasmoids tend to avoid coalescing with each other. How do we interpret this phenomenon? As discussed earlier, the flux of a plasmoid is approximately proportional to its age because all plasmoids grow approximately at the same rate α. Consequently, a plasmoid can become large only if it has not encountered plasmoids larger than itself for an extended period of time. Presumably, plasmoids moving rapidly relative to their neighbors will encounter larger plasmoids and disappear easily, whereas those with small relative speeds are more likely to survive for a long time and become large. This observation motivates us to consider a distribution function F (ψ, v), where v can be interpreted as the plasmoid velocity relative to the mean flow (which has a profile along the outflow direction). The governing equation for F (ψ, v) is written as ∂F /∂t + α ∂F /∂ψ = ζδ(ψ)h(v) − F H/τ A − F /τ A , (3) where the function H is defined as H(ψ, v) ≡ (1/V A ) ∫ ∞ −∞ dv ′ ∫ ∞ ψ dψ ′ |v − v ′ | F (ψ ′ , v ′ ), (4) and h(v) is an arbitrary distribution function in velocity space when new plasmoids are generated. The distribution function f (ψ) can be obtained by integrating F (ψ, v) over the velocity space. Eq. (3) differs from Eq. (1) in the plasmoid loss term due to coalescence, where the relative speed |v − v ′ | between two plasmoids is taken into account in the integral operator of Eq. (4). If we replace |v − v ′ | in Eq. (4) by V A , then Eq. (3) reduces to Eq. (1). Steady-state solutions of Eq. (3) can be obtained numerically. To fix ideas, we assume a Gaussian profile for the arbitrary source function. Fig. 4 shows the resulting f (ψ) for ζτ A = 10 6 , 10 7 , and 10 8 . Assuming n p ≃ S/S c and S c ≃ 10 4 , these solutions approximately correspond to S = 3 × 10 7 , 10 8 , and 3 × 10 8 , respectively. These solutions also show three distinct regimes, and the power-law part of the distribution is close to f (ψ) ∼ ψ −1 , consistent with the DNS results.
An important question is: do we expect to see a power law in the large-ψ regime or the smaller-ψ regime? Our analytic theory reveals that the transition from a power-law distribution to an exponential tail is due to a change in the dominant loss mechanism from coalescence to advection, which occurs approximately when N ∼ O(1). In our simulation data, the cumulative distribution function N (ψ) drops below unity at ψ ∼ 10 −3 , which is also approximately where the distribution function deviates from f (ψ) ∼ ψ −1 to a more rapid, presumably exponential, falloff. Therefore, this rapidly falling tail is not where a power law should arise. However, simulation data in the large-ψ regime is sufficiently uncertain that it may be difficult to make a clear distinction between a ψ −2 and an exponential falloff. Note that the exponential falloff at large ψ is consistent with the prediction of the kinetic model of Fermo et al. [20] and a subsequent analysis of the flux transfer events (FTEs) in the magnetopause from Cluster [22]. Fermo et al. did not explicitly address the distribu-tion of smaller plasmoids. Because the coalescence term in their model is based on very different considerations and assumptions, it is not clear whether the distribution of smaller plasmoids will follow a power law. Although Eq. (3) is a significant improvement on Eq. (1), it does not include some important physical effects. Most notably, coalescence between islands is assumed to occur instantaneously, whereas in reality larger plasmoids take longer to merge, and there can be bouncing (or sloshing) between them [25,26]. These effects may also contribute to the distribution shown in Fig. 3. Furthermore, the velocity v relative to the mean flow is assumed to remain constant throughout the lifetime of a plasmoid, whereas in reality some variation is expected due to the complex dynamics between plasmoids. Finally, in high-S regime the current sheet between two coalescing plasmoids can also be the source of more plasmoids. [27]. It should be borne in mind that our considerations are valid for collisional plasmas obeying the resistive MHD equations. In weakly collisional systems the plasmoid instability inevitably drives reconnection towards the collisionless regime [6,7,10]. The question of the plasmoid distribution in the collisionless regime remains largely open. However, some of the key ideas in this work, such as the tendency of large plasmoids to avoid coalescence, may still be relevant. The present study is limited to highly idealized 2D problems where more concrete conclusions can be drawn. In 3D geometry oblique tearing modes have been shown to play an important role [28,29], and a statistical description of such systems remains a great challenge.
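As a quick numerical illustration of the phenomenological model discussed above, the steady-state solution (2) can be evaluated directly. The short sketch below uses the same normalisation as Fig. 1 (S c = 10 4 , V A = B = L = 1, with the scaling relations treated as equalities) and prints the local logarithmic slope of f (ψ); it exhibits the flat part at the smallest fluxes, an approximately ψ −2 power law at intermediate fluxes, and the steepening towards the exponential tail, with the power-law range widening as S increases.

```python
import numpy as np

def steady_state_f(psi, S, S_c=1e4, V_A=1.0, B=1.0, L=1.0):
    """Steady-state plasmoid flux distribution of the single-velocity model, Eq. (2)."""
    tau_A = L / V_A
    alpha = B * V_A / np.sqrt(S_c)      # plasmoid growth rate
    n_p = S / S_c                       # total number of plasmoids
    C = 1.0 + 2.0 / n_p
    x = np.exp(-psi / (alpha * tau_A))
    return 2.0 * C * x / (alpha * tau_A * (C - x) ** 2)

psi = np.logspace(-6, 0, 400)
for S in (1e6, 1e8, 1e10):
    f = steady_state_f(psi, S)
    slope = np.gradient(np.log(f), np.log(psi))   # ~0 (flat), ~-2 (power law), steeper (tail)
    i = np.argmin(np.abs(psi - 1e-4))
    print(f"S = {S:.0e}: local slope at psi = 1e-4 is {slope[i]:+.2f}")
```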
2012-11-28T19:11:43.000Z
2012-11-28T00:00:00.000
{ "year": 2012, "sha1": "ce566a24d0d5e3d4bfd27a97ee9a43c910fa91dd", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://link.aps.org/accepted/10.1103/PhysRevLett.109.265002", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "ce566a24d0d5e3d4bfd27a97ee9a43c910fa91dd", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
238227646
pes2o/s2orc
v3-fos-license
Norovirus Detection in Environmental Samples from Norovirus Outbreaks in Schools and Kindergartens — Beijing Municipality, China, October–December 2020 What is already known on this topic? The norovirus has often caused outbreaks in schools and kindergartens, but minimal research has been performed on environmental contamination during norovirus outbreaks in schools and kindergartens. What is added by this report? This report conveys the norovirus detection rates and viral loads in different environmental sites in 45 norovirus outbreaks in Beijing Municipality from October to December 2020. What are the implications for public health practice? The evidence presented here can instruct professionals and the public to sample and disinfect key locations of the environment purposefully when responding to norovirus outbreaks. Norovirus is the main pathogen responsible for sporadic cases and outbreaks of acute gastroenteritis worldwide (1), and it often causes outbreaks in schools and kindergartens (2-3). Contact with a contaminated environment is the main transmission mode of this virus, but relatively few studies focus on environmental contamination during norovirus outbreaks across a large number of schools and kindergartens (4)(5)(6). This study included 45 norovirus outbreaks in schools and kindergartens with environmental positive specimen in Beijing from October to December 2020, among which 44 outbreaks were caused by person-to-person transmission. The environmental detection rate of norovirus was 17.54% in these 44 outbreaks, in the residence of key patients (including initial patients and index patients), the highest norovirus detection rate was found from housewares (e.g., toys, TV remotes, and desk lamps) at 31.25%, and the lowest cycle threshold (Ct) value was found in a door handle sample at 19.59. In the school buildings, the highest norovirus detection rate was found in the lavatory and flush button samples (26.79%), and the lowest Ct value (18.65) was found in a stair handrail sample. These locations are most likely to be contaminated by the virus during norovirus outbreaks, and it is necessary to clean and disinfect these key locations purposefully. The epidemiological investigation of norovirus outbreaks in schools and kindergartens was conducted in Beijing from October to December 2020. A swab dipped in the solution from a virus sampling tube was smeared on the environmental surface. The sampling locations included the residence of key patients, school buildings, and school canteens. The sampling sites included lavatory and flush buttons, door handles, housewares, cleaning tools, and related items. Noroviruses were detected and genotyped using the real time reverse transcription polymerase chain reaction detection kit (Bioperfectus Ltd., Taizhou, China) according to the manufacturer's protocol. Statistical analyses were performed using SPSS software (version 19.0, IBM, Chicago, IL, USA). Chi-squared tests were used to compare the norovirus detection rates from different sites, and the Kruskal-Wallis H-test was used to compare the median Ct values from different sites; P-values of <0.05 were considered to indicate statistical significance. Environmental samples were collected in all norovirus outbreaks in schools and kindergartens, among which these 45 outbreaks had positive environmental samples. The median number of individuals affected per outbreak was 9 [interquartile range (IQR): 5-16]. 
The most common transmission mode was person-to-person (97.78%, 44/45), and food-borne transmission occurred in 1 outbreak. Only 1 outbreak was caused by genogroup I noroviruses, and the other 44 outbreaks were caused by genogroup Ⅱ noroviruses. A total of 707 environmental samples were collected from the 44 norovirus outbreaks with person-to-person transmission. For these, the total norovirus detection rate was 17.54% (124/707), with a 22.41% (65/290) detection rate for samples from the residence of key patients and a 17.88% (59/330) detection rate for samples from the school buildings; norovirus was not detected in samples from the school canteens. A statistical difference was found among these detection rates (χ 2 =238.095, P<0.001), and the pairwise comparison showed that there was no significant difference between the norovirus detection rates for samples from the school buildings and from the residence of key patients (χ 2 =1.984, P=0.159). In the residence of key patients, the 3 sites with the highest norovirus detection rates were housewares (31.25%), lavatory and flush buttons (30.00%), and desktops and floors (28.57%). In the school buildings, these sites were lavatory and flush buttons (26.79%), washing facilities (19.61%), and desktops and floors (19.57%) (Table 1). In the food-borne transmission outbreak, a total of 83 environmental samples were collected from the school canteen and staff dormitory. DISCUSSION Our results showed that a high level of environmental contamination occurred during norovirus outbreaks in schools and kindergartens in Beijing from October to December 2020. The highest norovirus detection rate in the residence of key patients was found for housewares, and the highest detection rate in the school buildings was found for the lavatory and surrounding environments. The samples from a door handle and a stair handrail had the lowest Ct values in the residence of key patients and in the school buildings, respectively. Compared with the school canteens, environmental contamination of the residences of key patients and the school buildings was more severe. The findings were similar to reports from norovirus outbreaks in houseboats in Arizona described by Jones et al. (7). This might be explained by the cleaning procedures of kitchen staff, which help to reduce environmental contamination in school canteens. In the residence of key patients, the highest norovirus detection rate was found for housewares, and the second highest detection rate was found for lavatory and flush buttons. This may indicate that, in the residence of key patients, people do not pay as much attention to hand hygiene as they do outside, and their hands often touch the surfaces of objects, increasing the range of contamination; at the same time, the cleaning and disinfection of houseware surfaces are often ignored. Therefore, it is important not only to disinfect the lavatory and surrounding environments, but also to clean and disinfect houseware surfaces at home. When disinfecting different objects (such as textiles, hard surfaces, and toilets), different methods are suggested, and chlorine disinfectants are preferred when applicable. In school buildings, the highest norovirus detection rate was found for the lavatory and surrounding environments (e.g., flush button, sink faucet, and hand sanitizer button), which was consistent with the norovirus contamination results from two norovirus outbreaks in kindergartens in Jingmen City of Hubei Province (4).
Norovirus is usually excreted in feces during norovirus outbreaks, which may lead to contamination of the lavatory environment in schools and kindergartens. The Ct value can reflect the severity of contamination, with lower Ct values representing higher viral loads and correspondingly more serious levels of environmental contamination. Among the samples collected from the residence of key patients and the school buildings, those from door handles and stair handrails had the lowest Ct values, suggesting that both may be important contributors to norovirus transmission. This finding was consistent with the results of Rico et al. (8), who evaluated the environmental contamination in norovirus outbreaks in the Barcelona region between January 2017 and March 2019 and found that noroviruses were most frequently detected on toilet handles and handrail bars. Therefore, locations that hands often touch, such as door handles and stair handrails, should be disinfected regularly during norovirus outbreaks. This study was subject to some limitations. First, some factors could affect detection rates and Ct values, such as the level of virus shedding of the patients, the frequency with which patients touched the sites where the environmental samples were collected, and the sample collection process, which could not cover all contaminated sites. Second, some PCR inhibitors existed in the environments, which could have affected the test results. Third, Ct values of some positive sites were not reported, so there may be some bias. Further research, such as conducting gene sequencing of the noroviruses in environmental samples, estimating the duration of contamination, and evaluating the disinfection effect, is needed. This article provided evidence describing the locations and environmental sites that were most likely to be contaminated by the virus during norovirus outbreaks. These results could instruct professionals and the public to sample and disinfect the environment properly, and they are helpful for formulating a sampling, cleaning, and disinfection workflow for the disposal of norovirus outbreaks.
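For illustration only, the pairwise comparison reported above (school buildings versus residences of key patients) can be reconstructed from the published counts, assuming a standard Pearson chi-squared test on the 2 × 2 table of positive/negative samples without continuity correction; the original analysis was performed in SPSS, and scipy is used here purely as a stand-in.

```python
from scipy.stats import chi2_contingency

# positive / negative environmental samples from the two settings
residences = [65, 290 - 65]        # residence of key patients
buildings = [59, 330 - 59]         # school buildings

chi2, p, dof, _ = chi2_contingency([residences, buildings], correction=False)
print(f"chi2 = {chi2:.3f}, dof = {dof}, P = {p:.3f}")   # chi2 = 1.984, P = 0.159
```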
2021-08-27T16:46:51.961Z
2021-08-13T00:00:00.000
{ "year": 2021, "sha1": "38b170df1cac7d8f078ce9a48ba6271967053bd7", "oa_license": "CCBYNCSA", "oa_url": "http://weekly.chinacdc.cn/en/article/pdf/preview/10.46234/ccdcw2021.165", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ac554e795ab2172faebf415176d7aad48ab9062d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
271542116
pes2o/s2orc
v3-fos-license
Predicting graft and patient outcomes following kidney transplantation using interpretable machine learning models The decision to accept a deceased donor organ offer for transplant, or wait for something potentially better in the future, can be challenging. Clinical decision support tools predicting transplant outcomes are lacking. This project uses interpretable methods to predict both graft failure and patient death using data from previously accepted kidney transplant offers. Using more than 25 years of transplant outcome data, we train and compare several survival analysis models in single risk settings. In addition, we use post hoc interpretability techniques to clinically validate these models. Neural networks show comparable performance to the Cox proportional hazard model, with concordance of 0.63 and 0.79 for prediction of graft failure and patient death, respectively. Donor and recipient ages, the number of mismatches at DR locus, dialysis type, and primary renal disease appear to be important features for transplant outcome prediction. Owing to their good predictive performance and the clinical relevance of their post hoc interpretation, neural networks represent a promising core component in the construction of future decision support systems for transplant offering. Around 2500 deceased donor kidney transplants are performed in the UK each year.At any time, there are around 5000 patients on the kidney transplant waiting list with an average wait of 2-3 years.The shortage of organs available for transplant means that some patients become unfit for surgery or die whilst waiting.Because of this, clinicians often consider organ offers from less-than-optimal donors with existing comorbidities or older age.Decisions around organ offers are made by clinicians based upon the information available at the time of offer, including donor and recipient demographic and medical details.Clinicians use their clinical experience, but do not have reliable tools available to help them predict what would happen if they choose to accept or decline an offer and wait for the next available one.This uncertainty leads to considerable variability in organ decline rates and waiting times between clinicians and centres.A clinical decision support (CDS) system that accurately predicts transplant outcomes, both in terms of graft failure and patient death, as well as indicating what would happen if the organ offer was declined (in terms of future offers and likely waiting time), may help to support clinicians in making these difficult decisions.As decisions must remain under the responsibility and control of the clinician, any CDS tool must be easy to use, and predictions must be interpretable from a clinician's perspective.Interpretability and usability are also important to patients, allowing better explanations of likely outcomes during the informed consent process. The aim of this study is to predict transplant outcomes in the scenario of an accepted kidney offer.We utilise more than 20 years of registry data, containing over 36,000 accepted kidney transplant offers, with graft and patient survival information.These data have been provided from the National Health Service Blood and Transplant (NHSBT) UK Transplant Registry with ethical approval.Using these data, we have trained and compared several survival analysis models.In addition, we use post hoc interpretability techniques to clinically validate these models. 
Predicting the time of occurrence of an event (such as patient death or graft failure) from censored data has been extensively studied under the name of survival analysis.This has many applications in health informatics such as predicting strokes 1 , oral cancer 2 , or graft outcome prediction.Censored data are common in such contexts, resulting from loss of follow-up, competing events, or the end of the study.In the context of graft outcome prediction, previous studies use the Cox proportional hazard (PH) model to predict kidney graft or recipient survival [3][4][5] .The Cox PH model is a classic time-to-event approach that models the hazard function, as in the failure rate of a system according to time 6 .This approach is not only robust and reliable, but also simple to use and well understood by clinicians.Several generalisations of this model have been proposed.For instance, DeepSurv 7 aims at increasing the modelling power of the Cox model by replacing the linear contribution of the covariates with a neural network.DeepHit 8 directly models the cumulative incidence function with a single neural network.Originally proposed for handling competing risks, this network is structured according to a multi-task architecture, composed of a shared sub-network and several cause-specific sub-networks.The loss has been adapted to maximise the concordance index, a classic survival analysis metric based on the idea that the earlier an event is observed, the higher the associated risk should be. In contrast to the Cox PH model, this approach does not rely on the proportional hazard assumption.Neural networks are not the only machine learning models that have been adapted to survival analysis.For instance, random survival forests 9 is an adaptation of random forests to right-censored survival data.Alternatively, classification machine learning methods can be considered to predict the status of the subject of interest at specific time points.For example, predicting transplant outcomes after 1, 5, and 10 years is generally sufficient for the clinicians.Thus, existing risk communication tools such as 10 identify survival functions obtained from the Cox PH model at these time points.Many previous publications follow this classification approach 3,11,12 .However, this approach requires to train a model for each time point.These independent models may induce inconsistent results when considered all together.Whilst many of these previous studies demonstrate acceptable predictive performance, none challenged their models' validity through the lens of clinical interpretability. 
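To make the modelling vocabulary above concrete, the sketch below fits a Cox proportional hazards model on a small synthetic right-censored dataset with the lifelines library. The library choice, the covariate names, and the data-generating process are illustrative assumptions only; they are not the registry pipeline used in this study.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "donor_age": rng.integers(20, 80, n),
    "recipient_age": rng.integers(20, 80, n),
    "dr_mismatches": rng.integers(0, 3, n),
})
# synthetic graft survival: hazard increases with donor age (illustrative only)
event_time = rng.exponential(scale=15.0, size=n) * np.exp(-0.02 * (df["donor_age"] - 50))
censor_time = rng.exponential(scale=12.0, size=n)          # loss of follow-up / end of study
df["duration"] = np.minimum(event_time, censor_time)
df["event"] = (event_time <= censor_time).astype(int)      # 1 = graft failure observed

cph = CoxPHFitter(penalizer=1e-4)       # small ridge penalty against collinear covariates
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()                     # per-covariate log-hazard coefficients

c = concordance_index(df["duration"], -cph.predict_partial_hazard(df), df["event"])
print(f"concordance = {c:.2f}")
```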
Interpretability is another important criterion in the construction of a CDS tool for predicting graft outcomes.Interpretability is the extent to which the prediction of a model can be understood by a human 13 .This way, users can build trust regarding the model's results and remain in control of the associated outcomes.A good model should always be intrinsically interpretable to a certain degree.Indeed, interpretable models have been shown to be more robust to adversarial attacks 14 .Although some of the approaches mentioned above, like the Cox PH model, are inherently interpretable, some other models, like neural networks, are designed in a way that makes interpretation difficult.Nonetheless, it is possible to interpret a posteriori a black box neural network model with the help of post hoc interpretability methods.One can provide a local explanation of a given prediction.For instance, LIME 15 locally samples data points around the input and returns a linear explanation of the predictions made by the black-box model from these data points.Unfortunately, this solution is unstable; explanations depend highly on the sampled data points, harming the trustworthiness of the explanations.Similarly to LIME, SHAP 16 is a post hoc interpretability method relying on additive feature attribution models, i.e. linear functions as local explanation models.It provides explanations via game theory: each prediction is seen as a game where the features are players contributing to that game.Feature contributions are computed by considering all possible coalitions of features and the marginal contribution of each feature within these coalitions.Hence, SHAP can be considered as a gold standard in terms of post hoc interpretability methods. Methods All methods were carried out in accordance with relevant guidelines and regulations.This study, referenced under IRAS project ID 304542, has received approval from the Health Research Authority and Health and Care Research Wales (UK research ethics committee).All UK transplant recipients provide consent to the use of their data in the mandatory national registry at the time of addition to the transplant waiting list.This project uses anonymised data from the national registry, so individual patient consent was not required. 
Data Our work is based on the analysis of a data set from the UK Transplant Registry, provided by NHSBT.It describes 36,653 accepted kidney transplants, performed between the years 2000 and 2020, across 24 UK transplant centres.All transplants are from deceased donors.The total follow-up duration is around 22 years.Each transplant is originally described with 3 identifiers, 12 immunosuppression follow-up indicators, 143 donor, recipient and transplant characteristics, and 7 entries describing targeted outcomes.Considering transplants as independent, we exclude the transplant, donor, and recipient identifiers.Information regarding post-transplant immunosuppression is discarded as this is not available at the time of the offer decision.The donor, recipient and transplant characteristics serve as input features for modelling.Among them, 24 describe the recipient, 109 represent the donor, and 10 refer to the overall transplant.Both recipient and donor characteristics contain generic information such as gender, ethnicity, age, blood group, height, weight, or body mass index (BMI).More specific information is also available, such as the transplant centre, number of previous transplants, waiting time, ease of matching, and the dialysis status.Donor data include the cause of death, past medical history and results of blood tests including kidney function (estimated glomerular filtration rate, eGFR).Transplant data include the donor-recipient immunological match. Duplicate rows are removed, and values outside of a plausible clinical range are removed.Categorical values are checked by clinicians and simplified (or removed) if needed.BMI is recomputed based on weight and height.Both weights and heights are discarded to limit redundant information.Blood measurements are harmonised across the data set by selecting the first measurement available (generally at donor registration) and the maximum value during the donation process.Since the calculation of eGFR varies across hospitals, this metric is recomputed over the whole data set using a consistent definition (see appendix, section A.1). Recipient dialysis status is also simplified into a dialysis duration and dialysis modality at time of transplant (predialysis, haemodialysis or peritoneal dialysis).Notably, the time on dialysis for predialysis recipients is set by default to 0. Transplant offers not meeting the inclusion criteria, such as dual and multi-organ transplants, are discarded. Outcomes present in the dataset include information about graft failure, patient death, and transplant failure.Graft failure excludes death with a functioning graft, whilst transplant failure denotes either graft failure or death.In this work, we focus on predicting graft failure and patient death.Each outcome is represented as a pair containing an event time and a right-censoring indicator.Right-censoring is a common type of censoring in survival analysis that describes the loss of follow-up on the event of interest.It can occur for various reasons, such as the end of the study, competing events, etc.Thus, right-censored information provides some partial information about the survival time, where it is only known to be greater than the censoring time.Transplant outcomes are recomputed for the sake of consistency. After removing the features presenting more than 50% missing values across the whole data set, the data is described through 50 input variables.At this stage, the data contains 8% missingness.A summary of this datacleaning process is given in Fig. 
1.Additionally, an exhaustive list of the features and targets considered at the latest stage of this process is given in the appendix (section A.2). Model training and validation In this article, we compare the Cox PH model, DeepHit, and random survival forests in a single risk setting.The different models are interpreted a posteriori, and their performances are discussed. The following methodology is applied.First, the data is split in a stratified manner with regards to censoring indicators, where 80% of the data is reserved for training and the remaining 20% is left for testing.Due to matching policy changes and follow-up time differences between old and recent offers, we do not split the data according to transplant dates.After this first step, numerical values are standardised and categorical ones are one-hot-encoded.Mean and variance are computed over training data only.Standardisation has appeared to be more relevant than normalisation due to the presence of outliers in the data.Then, we impute missing features with the help of MissForest, an iterative imputation technique relying on random forests 17 .MissForest is first trained on the training data and then applied to all the data.This solution has been selected among several imputation techniques including MIDAS, a variational autoencoder-based imputation technique 18 ; MICE, an iterative method for multi-column imputation 19 ; MissForest itself, which is a variant of MICE; and a naive imputer simply returning average values.These methods have been compared on a sample of the data, where missingness was introduced by randomly masking known values.In order to simplify the end-to-end data processing pipeline and alleviate any burden on data requirements, we use the same training data set for both pre-processing and model training.Prior tests have shown no particular difference with more partitioned data management.Thus, after the imputation step, survival analysis models are trained through 5-cross validation.This process is performed a first time for feature selection.This is achieved by inputting Gaussian noise as a feature: we select any feature whose importance is higher than the importance attached to noise.Based on this subset of features, 5-cross validation is then repeated for final model training.Model calibration is then performed: predictions are adjusted a posteriori to match observed outcome ratios by training a logistic regression model.Model evaluation is undertaken by computing concordance and AUROC scores over 100 bootstraps of the testing data.The survival models are clinically interpreted using SHAP.To do so, we fix a particular time point (1, 5, or 10 years) and consider how models predict event occurence up to that point.The coefficients of the Cox PH model are also provided.The choice of considering Cox model's coefficients for interpretability rather than using SHAP is motivated by the fact that Cox model's inherent interpretability is a key factor in model selection.Notably, this is how this model is usually interpreted.The Fig. 1 illustrates the overall methodology and the code used for experiments can be found at https:// github.com/ Achil leSal aun/ Xamel ot. 
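A compact sketch of the pre-processing and feature-selection steps described above is given below. MissForest is approximated here with scikit-learn's IterativeImputer wrapped around random forests, and the Gaussian-noise trick is illustrated with random-forest importances; the thresholds, estimators, and function signatures are assumptions of this sketch rather than the exact configuration used for the registry data.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.preprocessing import StandardScaler

def preprocess(train: pd.DataFrame, test: pd.DataFrame):
    """Standardise then impute missing values, fitting on the training split only."""
    scaler = StandardScaler().fit(train)
    imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50),
                               max_iter=5, random_state=0).fit(scaler.transform(train))
    run = lambda d: imputer.transform(scaler.transform(d))
    return run(train), run(test)

def select_features(X, y, names, seed=0):
    """Keep features whose importance exceeds that of an injected Gaussian-noise column."""
    rng = np.random.default_rng(seed)
    X_noisy = np.column_stack([X, rng.normal(size=len(X))])
    forest = RandomForestClassifier(n_estimators=300, random_state=seed).fit(X_noisy, y)
    importances = forest.feature_importances_
    return [n for n, imp in zip(names, importances[:-1]) if imp > importances[-1]]
```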
Following this processing pipeline, we compare the Cox PH model, random survival forests, and neural networks. Since both DeepHit and survival forests require the time to be discretised, we restrict transplant outcome prediction to 1, 5, and 10 years. This step follows the discretisation process described in 8 . We rely on grid search to tune hyperparameters. As a result, Breslow's estimator is used to derive the Cox PH model's baseline 20 . In addition, a regularisation parameter is introduced and set to 1e −4 to deal with collinearities in the data. The survival random forest is given 300 trees. Finally, we train DeepHit in a single-risk fashion. For graft failure prediction, the model is instantiated with two hidden layers of 100 neurons, with 10% dropout. The neural network used to predict patient death has one hidden layer of 200 neurons followed by two layers of 100 neurons. For both graft failure and patient death predictions, the training is done through 50 epochs, with batches of size 64, and a learning rate equal to 1e −2 . Single outcome prediction In total, 35 features are selected to predict graft failure and patient death. Donor and recipient ages, the number of mismatches at the DR locus, type of dialysis, and primary renal disease are important features for prediction of both outcomes. Table 1 displays the concordance scores reached by each model for graft failure and patient death prediction. Tables 2 and 3 provide the AUROC reached by each model for graft failure and patient death prediction, respectively, for observation years 1, 5, and 10. Performances before and after feature selection are presented. One can observe that overall performances increase with the observation time, being maximal at year 10. This may be explained by the fact that the features present in the original dataset are more relevant to long-term predictions than short-term ones. Also, event rates increase over time. Considering both concordance and AUROC, the neural network shows similar performances to those of the random forest and the Cox PH model, slightly outperforming them on the graft failure prediction task. From an interpretability viewpoint, the neural network, when combined with SHAP, provides a richer clinical depiction of the data than Cox. The features that are important to clinicians are also considered important to the neural network. For example, among predominant features for graft failure prediction (cf. Fig. 2b), recipient and donor age, donor type, donor past hypertension, or eGFRs are also features commonly used by regression models from the transplant literature 4,21,22 . The direction of effect of feature values on predictions also matches clinical knowledge. For instance, patients with diabetes are likely to have inferior survival. This is reflected through the higher SHAP values regarding graft failure when prd#Diabetes is equal to one. The effect of covariates on survivability can be non-linear, as illustrated by the recipient age (rage; see Fig. 2c). Indeed, it is commonly recognised that younger patients can be less adherent to medication, hence increasing the risk of graft failure. This phenomenon vanishes with older patients, and age then becomes a penalising feature for survivability. In contrast, explanations obtained from the Cox PH model do not highlight such behaviours (see Fig.
2d), being limited to less expressive covariate effects. By design, the explanation can be summarised as a linear function in the case of Cox. Moreover, Cox coefficients do not seem to reflect clinical expertise in terms of feature importance. For example, the Cox model attaches strong importance to the hospital centre, while both neural networks and random forests agree on the predominance of donor age. Finally, survival random forests share interpretations similar to those of DeepHit (see Fig. 2a). Discussion Neural networks have shown comparable performance to tools generally used by clinicians when predicting kidney transplant outcomes. In particular, they perform well when predicting long-term outcomes, which is a useful property when making a decision as to whether to accept an organ offer. The Cox PH model remains a robust solution in terms of performance, with little to no hyperparameter tuning. It is simple to use, leads to reliable predictions, and is easy for clinicians to understand. However, whilst Cox PH models can be interpreted at a model level by inspecting regression coefficients, interpretability at an individual prediction level is not as easy. First, feature effects are linked indirectly to the survival function through the linear component in the hazard function. Since the Cox PH model's inherent interpretability comes from this linear component, variants aiming at modelling non-linear covariates (e.g. by use of splines) might hamper interpretability. Furthermore, using splines would require an a priori understanding of the covariate effects. Finally, interpretations do not depend on the prediction time due to the proportional hazard assumption. Considering predictions at years 1, 5, and 10 all together while performing calibration prevents the introduction of inconsistencies in predictions (e.g. predicting a higher probability of graft failure at year 5 than at year 10).
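The a posteriori calibration mentioned here (and in the methods) amounts to Platt-style scaling of the raw risk scores. A minimal sketch on toy data is shown below; the arrays stand in for pooled 1-, 5- and 10-year predictions and observed outcomes, and because a single monotone mapping is fitted to all horizons at once, any ordering already present in the raw scores is preserved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_calibrator(raw_scores, observed):
    """Platt-style recalibration: map raw risk scores onto observed event ratios."""
    return LogisticRegression().fit(np.asarray(raw_scores).reshape(-1, 1), observed)

rng = np.random.default_rng(0)
raw = rng.random(3000)                                   # over-confident raw scores in [0, 1]
observed = (rng.random(3000) < 0.5 * raw).astype(int)    # true event rate is half the raw score

calibrator = fit_calibrator(raw, observed)
calibrated = calibrator.predict_proba(raw.reshape(-1, 1))[:, 1]
print(raw[:3].round(2), calibrated[:3].round(2))         # calibrated risks move towards observed rates
```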
Predicting transplant outcomes is only one aspect of the construction of a CDS tool for kidney offering. However, predicting what could be the consequences of refusing an organ offer in terms of future transplant opportunities, death, or removal from the waiting list is another key step. Having a good understanding of the outcomes in both scenarios is indeed necessary to predict individualised treatment effects. In parallel, measures to safeguard the use of the CDS tool are necessary. While interpretability contributes to building trust in the tool's predictions, uncertainty quantification is another critical feature in the construction of a CDS tool for organ offering. This can be achieved either through post hoc error prediction using meta-modelling, or with a Bayesian version of our neural network models (or frailty models as an extension of Cox models). The presence of biases derived from the data is also something to inspect more carefully. As our models have been compared on an equal footing, possible biases should not affect our conclusions. Preliminary tests show optimistic results regarding the limitation of biases related to sensitive characteristics like age, gender, or ethnicity. One can also note that we do not use SHAP to explore inter-covariate dependencies: although this could be relevant for clinical model validation, such interactions quickly become difficult to explain and present to clinicians in practice. Finally, we have yet to address the maintainability of our tool. This will require recurring validation on recent data as it becomes available, with retraining of the models if performance decreases. The concordance would be a good candidate for performance monitoring. Indeed, it provides a synthetic viewpoint on a survival analysis model's performance while handling censored events. Moreover, this metric takes an important place in the design of the loss function for training DeepHit. The optimum solution for model maintenance would be continuous learning, but this may be challenging in the context of healthcare data due to limitations in access to datasets. To conclude, we have trained several models to predict transplant outcomes from kidney offers, based on 20 years of registry data. Neural networks provide comparable results to classic survival analysis models. By using SHAP, we provide clinically validated interpretations of these models. This level of interpretability is especially relevant to enable validation from clinicians and to involve patients in the decision-making process. Therefore, neural networks represent a promising core component in the construction of future CDS systems for transplant offering. As future work, we want to extend our analysis to the prediction of patient outcomes in the case of a declined offer. We also plan to add uncertainty quantification to our CDS tool. Figure 1. End-to-end data processing pipeline, from raw data to model testing. Data cleaning is detailed on the left. Cross-validation is performed before and after feature selection. Figure 2.
Interpreting graft failure prediction at 10 years. (a) and (b) provide covariate effects with regard to SHAP for random forests and neural networks, respectively. Each point represents a Shapley value for a particular feature in a particular offer. The values of the features are represented through colour: blue and pink indicate features whose values are low and high, respectively. When focusing on a single feature, this information can be directly reported onto the y-axis (cf. b). Figure 3. Calibration of the neural network predicting graft failure. Each point represents a cohort of transplant offers that share similar predicted scores: the average score for each group is reported on the x-axis and the true class ratio within each group is reported on the y-axis. Predictions at 1, 5, and 10 years are considered all together while training the calibration model. Figure 4. Waterfall plot for an example prediction of graft failure at 10 years. Positive SHAP values indicate a positive impact on graft failure, and vice versa. The plot demonstrates the impact of individual features in moving the predicted survival away from the population average (0.499) to an individualised prediction (0.669). Table 1. Concordance scores for graft failure and patient death prediction on test data. Best results are highlighted in bold. Table 2. AUROC with respect to graft failure prediction on test data. Best scores are highlighted in bold. Table 3. AUROC with respect to patient death prediction on test data. Best scores are highlighted in bold.
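The beeswarm and waterfall views described in the captions above are produced with the shap package. A self-contained sketch of the mechanics is given below; the gradient-boosted classifier, the toy covariates, and the fixed 10-year-horizon framing are stand-ins for the trained survival network and the registry features, used only to show how a model-agnostic kernel explainer yields per-offer Shapley values.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "donor_age": rng.integers(20, 80, 1000),
    "recipient_age": rng.integers(20, 80, 1000),
    "dr_mismatches": rng.integers(0, 3, 1000),
})
# toy stand-in for "graft failure within 10 years"
y = (rng.random(1000) < 0.2 + 0.004 * (X["donor_age"] - 50)).astype(int)
model = GradientBoostingClassifier().fit(X, y)

predict_risk = lambda data: model.predict_proba(data)[:, 1]
explainer = shap.KernelExplainer(predict_risk, shap.sample(X, 100))  # model-agnostic explainer
shap_values = explainer.shap_values(X.iloc[:200])

shap.summary_plot(shap_values, X.iloc[:200])   # beeswarm of covariate effects, cf. Fig. 2
```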
2024-07-31T06:17:39.856Z
2024-07-29T00:00:00.000
{ "year": 2024, "sha1": "0e541e130b9661720dca3f61d13d228c53252640", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "572de7a84daa31e3c3fed1dc190fdaf3d68b8e9d", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
125911312
pes2o/s2orc
v3-fos-license
Development of an Algorithm for Calculation and Application of Conformal Mapping Methods on the Calculation of Hydrodynamic Coefficients The multipole method was first developed by Ursell [1]. His method consists of the superposition of potential functions that satisfy the Laplace equation, the free-surface boundary condition, and the condition at infinity. The potential functions represent a source and horizontal dipole at the origin, which give the radiated waves at infinity, and a series of multipoles that die off rapidly as one moves away from the origin. The strengths of the source, dipole and multipoles are all determined so that the body boundary condition is met. Ursell used this method to solve the problem of a heaving circular cylinder. For sections that are not circular in shape, conformal mapping is used. In the multipole method, the mapping function that transforms the ship section into a semi-circle is found. The mapping function can then be used in conjunction with Ursell’s known solution for a circular cylinder to find the solution for the actual ship section. The difficulty in the technique is to determine the proper mapping function for each cross section. Various methods have been proposed to find this mapping function. The most common mapping uses the so-called Lewis-forms [2], [3] and [4]. It is recognized that this method gives smooth solutions over all frequency range (no irregular frequencies). On the other hand, sharp corners are not well represented, and sections with very low sectional area coefficient may not be well represented as well. Even though, for first rough estimates in initial stages of ship design, this method may give results that agree reasonably well with the other more computational demanding methods, in terms of order of magnitude and trend. Considering the difficulty to find good charts of the Lewis forms data – to the best of our knowledge, the best known are those published by Bhattacharyya [5] - and even more difficult to find the data in digital form. The main purpose of this work is to present a computational method of computing the data given through the Lewis forms and apply it to naval ship sections in order to find rough estimates of the hydrodynamic coefficients in heave. Introduction One of the areas of Ocean Engineering and Naval Architecture of extreme importance is seakeeping, as well as its ability to maneuver in the aquatic environment, allowing us to obtain a forecast of the ship's hull behavior in waves in its coupled six degrees of freedom. As a consequence of this, the study of ship dynamics has been traditionally separated into two main areas [6]:  Manoeuvring or controllability in calm water and  Seakeeping or vessel motion in a seaway. With regard to seakeeping, there are a huge number of methods that can be applied, ranging from simple two-dimensional linear numerical methods in frequency or time domain, such as Strip Theory, to advanced nonlinear methods such as Computational Fluid Dynamics (CFD). Both methods have advantages and disadvantages. Although the Strip Theory is simpler and considers certain assumptions (e.g., the flow is 2D and the fluid is inviscid), it continues to have results with good precision enough for initial stage design and in which the computational effort is not so demanding, being possible to obtain them with computers with less effort and faster [7]. 
The 2D methods using a Strip Theory to find the ship's response in seaway need as one of the inputs the hydrodynamic coefficients to solve the equations of motion. From the several methods available today, the most used is the panel method 2D/3D. However, other less computational, time a cost demanding methods are still possible to use, at least for first rough estimates. The multipole method was first developed by Ursell [1]. His method consists of the superposition of potential functions that satisfy the Laplace equation, the free-surface boundary condition, and the boundary condition at infinity, and the body boundary condition. The potential functions represent a source and horizontal dipole at the origin, which give the radiated waves at infinity, and a series of multipoles that die off rapidly as one moves away from the origin. The strengths of the source, dipole and multipoles are all determined so that the body boundary condition is met. Ursell used this method to solve the problem of finding the hydrodynamic coefficients for a heaving circular cylinder. For sections that are not circular, conformal mapping has been used. In Ursell's multipole method, a mapping function that transforms the ship section into a semi-circle is found, which can then be used in conjunction with Ursell's known solution for a circular cylinder to find the solution for the actual ship section. The difficulty in the technique is to determine the proper mapping function for each cross section. Various methods have been proposed to find this mapping function. The most common mapping uses the so-called Lewis forms [2], [3] and [4]. It is recognized that this method gives smooth behaviour of the solutions over all frequency range. On the other hand, sharp corners are not well represented, and sections with very low sectional area coefficient may not be well represented as well. Even though, for first rough estimates in initial stages of ship design, this method may give results that agree reasonably well with the other more computational demanding methods, in terms of order of magnitude and trend. Nowadays it is difficult to find good published charts with Lewis form's data. The best known are those published by Bhattacharyya [5], which are difficult to read and convert reliably. Ship Dynamics Regarding the dynamic analysis of seakeeping, certain simplifications need to be accounted: the ship will be considered a rigid body with small amplitudes motions. Motions and Reference Frame It is necessary to predict the vessel´s translational (surge, sway and heave) and rotational (roll, pitch and yaw) motions. These motions are considered as being six degrees of freedom, as is shown in Figure 1. Unfortunately, there is no universal coordinate system accepted in the literature of the behavior of the ship at sea. Thus, taking into account the main linear numerical method in the frequency domain applied to linear waves (Strip Theory), two coordinate systems are normally used [8]:  The ship-fixed system (non-inertial system) , , , with axis pointing from amidships forwards, to starboard and towards the keel. In this system, the center of gravity of the ship is independent of the time , , ;  The Earth-fixed system (inertial system) , , , which follows the constant movement of the vessel with velocity = √ 2 + 2 , dependent on the quasi-velocities in surge -, and sway -. It should be noted that, in seakeeping, the coordinate system used is usually the inertial system. 
Linear Equations of Ship Dynamics in Regular Waves The prediction of a ship's response in a seaway, the seakeeping, is a complex process involving the interactions between the ship's own dynamics and the surrounding hydrodynamic forces. Knowing the ship's responses in regular waves at different frequencies, we can predict its behaviour for several sea states. The general form of the linearized equations of ship dynamics in the six degrees of freedom, or in other words the Euler equations of ship motion used in the literature devoted to seakeeping, written in the axes fixed on the ship, can be described as follows [9]: Σ_k M_jk·η̈_k(t) = F_j(t), j = 1, …, 6 (1), where: M_jk are the inertia matrix components of the ship, such as mass and moments of inertia; η̈_k are the accelerations in mode k; F_j is the sum of the forces and moments acting on the body in direction j; and the F_j are harmonic functions of time. Linearizing equation (1), certain terms may be considered zero, as shown by [10], and for a ship with lateral symmetry [9] this equation can be reduced to six equations relating to the six degrees of freedom, equations (2), where: F_j(t), j = 1, 2, 3, is the sum of the forces in the x, y and z directions, respectively; F_j(t), j = 4, 5, 6, is the sum of the moments acting about the x, y and z axes, with the positive moment following the right-hand rule; m is the total mass of the ship; I_jj, j = 4, 5, 6, are the moments of inertia about the x, y and z axes, respectively; I_46 = I_64 is the product of inertia between the roll and yaw degrees of freedom; (x_G, 0, z_G) are the coordinates of the centre of gravity of the ship in the non-inertial system x, y, z; and η̈_k(t) is the acceleration in the degree of freedom k, in the system with the axes fixed on the ship, with k = 1, 2, 3, 4, 5, 6 referring to surge, sway, heave, roll, pitch and yaw, respectively. Comparing equations (1) and (2) it is possible to write the inertia matrix (3), whose non-zero elements are the mass m, the moments of inertia I_44, I_55 and I_66, the product of inertia I_46 = I_64, and the static moments m·x_G and m·z_G that arise because the centre of gravity does not coincide with the origin of the axes. Taking into account the following assumptions [9]: only the gravitational and fluid forces acting on the ship are considered; under linear theory the ship's responses are directly proportional to the wave amplitude and occur at the frequency at which the ship encounters the incident waves; and, considering only the ship's response in sinusoidal waves, the time-dependent responses of the vessel η_k(t) are sinusoidal at the encounter frequency ω_e, η_k(t) = η_k0·cos(ω_e·t + ε_k), the equations of motion can be written as Σ_k [(M_jk + A_jk)·η̈_k + B_jk·η̇_k + C_jk·η_k] = F_jc·cos(ω_e·t) + F_js·sin(ω_e·t), j = 1, …, 6, where: A_jk are the added mass coefficients in dof j due to motion in dof k; B_jk are the damping coefficients in dof j due to motion in dof k; C_jk are the hydrostatic restoring force coefficients in dof j due to motion in dof k; and F_jc and F_js are the two components of the amplitude of the excitation forces acting on the ship. Hydrodynamic Loads Applying Strip Theory, and to simplify the concepts discussed so far, the three-dimensional (3D) problem is reduced to a two-dimensional (2D) problem by dividing the hull into several two-dimensional vertical sections along the length of the ship, each strip having a constant cross-section [7] and a flow that does not interfere longitudinally with the flow on the adjacent strips. Subsequently, some restrictions that need to be considered in applying the Strip Theory will be presented. A common approach to the calculation of the hydrodynamic loads is to divide the hydrodynamic problem into two sub-problems. Sub-problem A: the movement of the ship when exposed to incoming waves is predicted; in this sub-problem, the Froude-Krylov and diffraction forces and moments of the wave excitation are computed. Sub-problem B: the incoming waves are not considered.
In this sub-problem, we postulate that the ship is moving in its six degrees of freedom at the frequency matching the wave frequency of sub-problem A. Here, the added mass coefficients A_jk, the damping coefficients B_jk, and the hydrostatic restoring force coefficients C_jk are calculated. The decoupled equation of the ship's motion in mode j can be given by (M_jj + A_jj)·η̈_j + B_jj·η̇_j + C_jj·η_j = F_j(t). This work addresses the radiation problem, sub-problem B, that is, the calculation of the hydrodynamic coefficients A_jk and B_jk. Conformal Mapping As stated above, the main principle in Strip Theory involves dividing the submerged part of the ship into a finite number of strips. Hence, 2D hydrodynamic coefficients for added mass a and damping b can be computed for each strip and then summed over the length of the body to yield the 3D coefficients [11]. The 2D dynamic coefficients can be calculated from boundary element methods or via conformal mapping [11]. Conformal mapping is used to transform the section into a circle, for which the form of the multipole potential is known. This representation is then transformed back into the physical plane using the derived mapping function [12]. The conformal mapping problem now becomes one of determining the parameters in the transformation which maps the arbitrary section onto a unit circle. The earliest two-parameter mapping technique was due to [2]. This method produces reasonable representations of conventional sections [12]. However, nowadays most of the available computing tools make use of more complex panel methods. Computation of the Hydrodynamic Coefficients using a two-parameter conformal method The algorithm developed computes the local heave added mass a33 and heave damping coefficient b33 on each section of the ship, the global ship added mass and damping coefficients in heave using Strip Theory being A33 = ∫ a33 dx and B33 = ∫ b33 dx, respectively. Our work was based on the formulation developed by [18], which gives the expressions for the added mass and damping coefficients of conventional hull cross-sections related to the Lewis-form approximations. Lewis Transformation Method Because ships' hulls do not have semi-circular cross-sections, the Lewis Transformation Method is used to extend the results for the semi-circle into solutions for more realistic hull shapes. In this approach, small motion amplitudes are assumed [17]. In this technique, the circle and the flow around it (stream and potential functions) are calculated in the complex ζ-plane, where ζ = ξ + i·η. Then, these results are mapped into the flow around a hull section in the complex z-plane (the hull cross-section plane), defined as z = x + i·y. These two complex planes can be related by the conformal transformation z = a0·(ζ + a1·ζ⁻¹ + a3·ζ⁻³). It is important to understand that, for each size and shape of the ship section in the z-plane, the coefficients of the transformation must be determined individually. The transformation maps any point on a semicircle of given radius in the ζ-plane into a corresponding point on a given shape in the z-plane, provided appropriate values of the coefficients a1 and a3 are chosen, where: A is the underwater sectional area; B is the underwater sectional beam; and T is the sectional draft.
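As an illustration of this two-parameter mapping step, the sketch below computes the Lewis coefficients for a single section from its beam, draft and underwater area, and reconstructs the mapped contour. The closed-form expressions for a1 and a3 are the standard Lewis-form relations found in the strip-theory literature (for example in Journée's compilations) rather than formulas quoted from this paper; the authors' own implementation was written in Matlab®, so this Python version, its function and variable names, and the example section dimensions are illustrative assumptions only.

import math

def lewis_coefficients(B, T, A):
    # B: sectional beam [m], T: sectional draft [m], A: underwater sectional area [m^2]
    H0 = B / (2.0 * T)                     # half beam-to-draft ratio
    sigma = A / (B * T)                    # sectional area coefficient
    k = (H0 - 1.0) / (H0 + 1.0)
    c1 = 3.0 + 4.0 * sigma / math.pi + (1.0 - 4.0 * sigma / math.pi) * k ** 2
    a3 = (-c1 + 3.0 + math.sqrt(9.0 - 2.0 * c1)) / c1
    a1 = k * (1.0 + a3)
    a0 = B / (2.0 * (1.0 + a1 + a3))       # scale factor: half beam over (1 + a1 + a3)
    return a0, a1, a3

def lewis_contour(a0, a1, a3, n=50):
    # Parametric Lewis-form contour from theta = pi/2 (waterline) to theta = 0 (keel)
    pts = []
    for i in range(n + 1):
        th = 0.5 * math.pi * (1.0 - i / n)
        x = a0 * ((1.0 + a1) * math.sin(th) - a3 * math.sin(3.0 * th))
        y = a0 * ((1.0 - a1) * math.cos(th) + a3 * math.cos(3.0 * th))
        pts.append((x, y))
    return pts

# Hypothetical midship-like section: beam 14 m, draft 5 m, area coefficient 0.9
B, T = 14.0, 5.0
A = 0.9 * B * T
a0, a1, a3 = lewis_coefficients(B, T, A)
# Consistency checks: the mapping must reproduce the half beam, the draft and the area
half_beam = a0 * (1.0 + a1 + a3)
draft = a0 * (1.0 - a1 + a3)
area = 0.5 * math.pi * a0 ** 2 * (1.0 - a1 ** 2 - 3.0 * a3 ** 2)
print(a1, a3, half_beam, draft, area)

Once the sectional a33 and b33 values are available for each strip, the global coefficients follow by numerical integration of those values over the ship length, in line with the relations A33 = ∫ a33 dx and B33 = ∫ b33 dx given above.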
Taking into account that the section of the ship in the ζ-plane is a semicircle of unit radius, substituting the plane definitions (ζ = ξ + i·η and z = x + i·y) into the transformation and separating real and imaginary parts, we obtain a pair of parametric equations in θ (from θ = π/2 to θ = 0) describing the shape of the Lewis form in the z-plane: x = a0·[(1 + a1)·sin θ − a3·sin 3θ] and y = a0·[(1 − a1)·cos θ + a3·cos 3θ]. The coefficients a1 and a3 are obtained from the sectional area coefficient σ and the half beam-to-draft ratio H0 through a1 = (1 + a3)·(H0 − 1)/(H0 + 1) and a3 = (−c1 + 3 + √(9 − 2·c1))/c1, with c1 = 3 + 4σ/π + (1 − 4σ/π)·[(H0 − 1)/(H0 + 1)]². It should be noted that a0 is a scale factor governing the overall size of the Lewis form [12], a0 = b0/(1 + a1 + a3), b0 being the half underwater sectional beam. Some examples of Lewis forms are presented in Figure 4. The sectional area coefficient and the beam/draft ratio are obtained as σ = A/(B·T) and H0 = B/(2·T). The Lewis form of the studied frigate cross section is presented in Figure 5. Considering the studies made by [18], the expressions for the added mass and damping coefficients in heave for each section of the ship were used; an exhaustive description of both equations can be found in the Appendix of [17]. Comparing the non-dimensional curves of the hydrodynamic coefficients in heave (added mass and damping) presented on page 120 of [17] with the curves obtained using the algorithm written in Matlab®, it is possible to see that the results are similar, with only a small approximate error. The respective curves are presented in Figures 6 and 7 (Figure 6: hydrodynamic added mass coefficient in heave, results from [17] and from the Matlab® code; Figure 7: the corresponding damping comparison). This comparison serves the purpose of validation before making the calculations for the real cross section of the naval frigate. Then the cross-sectional non-dimensional added mass and damping hydrodynamic coefficients in heave for the cross section of the Portuguese Navy frigate were calculated, and the results can be seen in Figures 8 and 9. While the damping curve follows the expected trend, the added mass presents oscillations at high frequencies that were not expected and need further study and validation. Conclusions This work addressed the radiation problem with respect to the calculation of the hydrodynamic coefficients in heave, a33 and b33. A computing code was developed to estimate the local heave added mass a33 and heave damping coefficient b33 using a two-parameter conformal mapping method usually known as Lewis forms. The code was based on the equations for the radiation problem presented by Lloyd [17]. The results, in the form of non-dimensional heave added mass and heave damping coefficients, were compared with those presented by de Jong [18] for several sectional forms and seem to agree fairly well, providing a first validation of the code. The code was applied to the cross section of a navy frigate and the results obtained for the Lewis form seem correct. The results obtained for a33 and b33 show that: for the added mass there are oscillations in the higher-frequency region that need further study, since they are not expected with this method, which is known to be stable over the whole frequency range; the damping follows the expected trend, but there are calculation instabilities to be solved in the very low frequency range (ω·(B/2g)^0.5 < 0.25); and, for both hydrodynamic coefficients, a validation needs to be made through comparison with another method for the same section.
Nanodiamonds in Oil Emulsions as Effective Vaccine Adjuvants and Antitumor Therapeutic Agents Background Vaccination is an effective tool to elicit immunological responses that mediate the protection from infection or disease. Composed of mineral oil and mycobacteria pathogens, complete Freund's adjuvant (CFA) is one of the most commonly employed adjuvants for antibody production and vaccination due to its high efficiency. However, the dead mycobacteria in CFA can cause many allergic reactions. To avoid these adverse effects, we propose here a new formulation based on the use of nanodiamonds (NDs) as biocompatible non-allergic additives in incomplete Freund's adjuvant (IFA) instead. Results Introduction Vaccination is an effective public health tool to prevent the spread of infectious disease worldwide [1]. It is playing an ever-increasing role in preventing and controlling epidemics (such as COVID-19) today [2]. Vaccines are used to elicit immunological responses that mediate the protection from infection or disease. The majority of vaccine antigens currently available and under development are subunits of pathogens or their recombinant molecules with little or no immunostimulatory activity. Therefore, the development of safe and potent immunologic adjuvants that can direct and enhance vaccine-specific immunity is absolutely needed. However, safety has always been a concern for their applications [3,4]. Despite numerous efforts made in the past, only a handful of adjuvants have been included in licensed human vaccines and few are in clinical trials. Among these vaccine adjuvants, alum (or aluminum salt) is the most widely used, although aluminum is a known neurotoxin and our understanding of its toxicology and pharmacokinetics in the human body is still limited [5,6]. While new types of adjuvants such as those composed of water-in-oil emulsions, squalene, liposomes, and other compounds have been developed, they may have higher local reactogenicity and systemic toxicity than alum alone [7,8]. How to achieve potent adjuvant effects and yet avoid human reactogenicity or toxicity remains a major challenge in the field of vaccine development to date. Complete Freund's adjuvant (CFA) is a water-in-oil emulsion containing heat-killed mycobacteria for immunization. It is one of the strongest adjuvants known [9], because the inactivated mycobacteria in CFA effectively attract macrophages and other immune cells to the injection sites, whereas the oil acts as an insoluble depot of antigens to achieve long-term immunostimulation. These two characteristics together greatly enhance the immune responses [10,11]. However, the high reactogenicity and toxicity of CFA have precluded its applicability in human vaccination and, as a result, CFA is most commonly used for antibody production in experimental animals. A way to overcome this limitation is to employ incomplete Freund's adjuvant (IFA), which lacks allergic additives (e.g., inactivated mycobacteria), and mix it with biocompatible, non-allergic, and non-toxic nanoparticles to reduce undesirable side effects. An ideal balance of efficacy and safety may be reached by mixing IFA with synthetic nanoparticles that have been developed over the past few decades for vaccination applications [12,13]. The approach is appealing because there have been several completed clinical trials using IFA as a vaccine adjuvant to treat diseases like human immunodeficiency virus (HIV) infection [14].
The nanoparticles that have been applied for vaccination can be roughly classified into two types [12,13]: (i) organic nanoparticles, including liposomes and polymers; and (ii) inorganic nanoparticles, including aluminum hydroxides, mesoporous silica, magnetic nanoparticles, gold nanoparticles, and nanodiamonds (NDs). Of particular interest are aluminum hydroxides and mesoporous silica, which have been experimentally demonstrated to be useful as antigen carriers as well as self-adjuvants for vaccine delivery [15][16][17][18]. However, the toxic levels of these nanoparticles in the human body remain unclear. NDs, on the other hand, are chemically inert and have excellent biocompatibility and exceptionally low cytotoxicity. They have found practical applications in biology and nanomedicine due to their high surface-area-to-volume ratios, tunable surface chemistry, and the capability of emitting near-infrared fluorescence from color centers [19,20]. Additionally, NDs have been demonstrated to be able to improve the efficacy of many chemotherapeutic agents by increasing their dispersibility in water, enhancing sustained release, shielding the drug from inactivation, and bypassing the mechanisms of chemoresistance [21][22][23]. These significant improvements have inspired further research on the sustained release of other therapeutic molecules such as growth factors, peptides, and genes both in vitro and in vivo. A recent study has shown that NDs can serve as an efficient delivery system for immunostimulatory cytosine-phosphate-guanine oligonucleotides with great potential for cancer immunotherapy applications [24]. Another study demonstrated that NDs can be readily taken up by immune cells (including natural killer cells and monocytes) without compromising cell viability and immune cell activation, and are thereby useful for targeted anti-tumor immunotherapy [25]. Furthermore, fluorescent nanodiamond (FND) particles surface-conjugated with immunomodulatory molecules are promising candidate agents with trackable and traceable capabilities to stimulate and manipulate the immune system [26,27]. Chicken egg ovalbumin (OVA) was chosen as the model antigen for this study because the protein is a well-characterized target antigen for CD8+ T cells (e.g., cytotoxic T lymphocytes), which specifically recognize the OVA 257-264 peptides and thus offer an excellent opportunity to study antigen-specific T cell immunity [28]. The biocompatible, non-allergic, and non-toxic nanoparticles used in this work were monocrystalline NDs of 100 nm in diameter. Their surfaces were first oxidized in air and subsequently carboxylated by acid treatment to facilitate their conjugation with OVA through electrostatic attraction, hydrogen bonding, van der Waals forces, and hydrophobic interactions. The conjugation is expected to help increase the uptake of antigens by antigen-presenting cells (APCs) through endocytosis of NDs and thus enhance the antibody production. To achieve this goal, we first noncovalently conjugated OVA with NDs in phosphate-buffered saline (PBS) to form stable complexes that enable sustained release of the surface-bound immunogens either in vitro or in vivo. We then mixed the antigen-containing buffers with IFA to yield emulsions, followed by subcutaneous injection of the emulsions into healthy mice and monitoring of the animals' immune responses.
Finally, to demonstrate the immunotherapeutic effects of the vaccine, we inoculated tumor-free mice with E.G7-OVA, which was derived from the mouse lymphoma cell line EL4 and contains a single copy of an inserted gene for constitutive synthesis and secretion of chicken egg OVA in the cells. How the vaccination inhibited the tumor growth was then examined closely over a time period of more than 1 month. A significant enhancement in antibody production and an effective suppression of tumor growth were discovered by use of this new ND-IFA-based adjuvant system. Results And Discussion Characterization of OVA-ND complexes. NDs before and after mixing with OVA were analyzed for their size distributions and zeta potentials. Transmission electron microscopy (TEM) of bare NDs first revealed that the particles were irregular in shape and varied considerably in size (inset in Fig. 1). Dynamic light scattering measurements showed that bare NDs have a mean hydrodynamic diameter of ~ 100 nm and a zeta potential of − 45 mV (Fig. 1). The average diameter of the particles increased by about 20 nm after mixing with OVA in water, indicating that these NDs have been successfully coated with OVA by physical adsorption. As the isoelectric point of OVA is 4.5 [29], meaning that the protein molecules are negatively charged in PBS buffer (pH 7.4), the change of the zeta potential from − 45 mV of ND to − 23 mV of OVA-ND implied that forces (such as hydrophobic forces) other than electrostatic interaction play important roles in the protein adsorption process. As a member of the nanocarbon family, the surface of NDs can be conveniently modified with a variety of oxygen-containing groups such as -COOH, -COH, -COOC-, etc. by extensive washes in strong oxidative acids. Uniquely, the acid-washed NDs exhibit an exceptionally high affinity for a wide range of protein molecules including bovine serum albumin (BSA), myoglobin, cytochrome c, lysozyme, and luciferase [30][31][32]. Moreover, the structural integrity of these proteins is retained, as demonstrated by the catalytic activities of lysozyme and luciferase after adsorption on NDs [31,32]. Chicken OVA is a phosphorylated glycoprotein consisting of 385 amino acid residues with a molecular weight of 42.7 kDa, or a total molecular weight of 45 kDa including the carbohydrate and phosphate portions [29]. To evaluate the amount of OVA that could be loaded on the acid-washed NDs, we measured the changes in optical absorbance of unbound OVA at 280 nm before and after mixing with the nanoparticles (Supplementary Figure S1). For OVA adsorbed on 100-nm NDs, we determined a protein loading capacity of OVA:ND = 1:8 (weight ratio) at saturation. Assuming a spherical shape for the adsorbent, this high loading capacity suggests that each 100-nm ND (weight of ~ 1.8 fg/particle) can accommodate more than 3000 OVA molecules on its surface. Immune responses. The in vivo experiments were started by mixing 5 µg OVA with 60 µg NDs and dispersing the OVA-conjugated NDs in PBS, CFA, or IFA prior to subcutaneous injection of the mixtures into BALB/C mice. The corresponding control experiments consisted of 5 µg OVA in PBS, CFA, or IFA, respectively (Figs. 2 and 3). Figure 2A shows the timeline of immunization and blood collection in this experiment. The water-in-oil emulsions formed small nodules and appeared as soft capsules at the injection sites upon immunization.
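Returning briefly to the OVA-loading estimate quoted above, the arithmetic can be reproduced with the numbers given in the text (100-nm particles, a 1:8 OVA:ND weight ratio at saturation, and an OVA molecular weight of about 45 kDa); the diamond density of roughly 3.5 g/cm³ and Avogadro's number are standard constants assumed here for the check, not values stated by the authors.

import math

N_A = 6.022e23                 # Avogadro's number [1/mol], assumed standard constant
rho_diamond = 3.5              # assumed diamond density [g/cm^3]
d = 100e-7                     # particle diameter [cm] (100 nm, from the text)
mw_ova = 45000.0               # OVA molecular weight [g/mol], from the text

nd_mass = rho_diamond * (math.pi / 6.0) * d ** 3   # mass per spherical particle [g]
ova_mass_per_particle = nd_mass / 8.0              # 1:8 OVA:ND weight ratio at saturation
n_ova = ova_mass_per_particle / (mw_ova / N_A)     # OVA molecules per particle
print(nd_mass * 1e15, n_ova)                       # ~1.8 fg and ~3000 molecules

The result, about 1.8 fg per particle and roughly 3 × 10³ OVA molecules, matches the estimate given in the text.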
We evaluated OVA-specific IgG antibody responses in the sera of the immunized mice with enzyme-linked immunosorbent assays (ELISA) after the second and third immunizations with OVA and OVA/ND in CFA. As shown in Fig. 2B and 2C, the OVA/ND/CFA treatments induced a significantly higher amount of OVA-specific IgG antibodies in the mouse sera than OVA/CFA alone, by 3.5- and 1.6-fold, respectively, after the second and third immunizations. It is demonstrated that the addition of NDs in CFA is able to elicit highly efficient and protective immune responses against OVA in the mouse body, in line with a previous report that NDs can enhance immune responses against recombinant HA/H7N9 in mice [33]. Next, we investigated the dose dependence of the immune response by employing OVA/CFA and OVA/ND/CFA containing 25 µg OVA. The amount of NDs used accordingly increased to 300 µg. Indeed, a 2-fold increase of the OVA-specific IgG antibody production was found in the OVA/CFA treatment (Fig. 2A). However, the response did not exceed that of the 5-µg treatment with OVA/ND/CFA. Notably, further increase of the OVA dose failed to boost the immune response in the OVA/ND/CFA treatment. The result suggested a saturation effect, where no higher levels of anti-OVA could be reached irrespective of the doses of OVA applied. An important implication of this finding is that the use of NDs as additives in CFA can help reduce the consumption of antigens in producing the antibodies of interest, which is a valuable feature for industrial production of antibodies. We explored further whether the same level of immune response by OVA/ND/CFA could be maintained without the need of allergic components such as inactivated mycobacteria in CFA. The dosage groups of 5 µg and 25 µg OVA were tested in parallel in this experiment. As shown in Fig. 3, we did not find significant differences in the results between OVA/ND/IFA and OVA/ND/CFA treatments in these two groups, indicating that the substitution of dead mycobacteria by NDs as additives in the mineral oil not only can improve the safety but also can maintain the efficacy of the vaccine adjuvant. This new combination of substances is expected to work well also as an immune drug delivery vehicle to promote directed antitumor activities with minimal systemic toxicity [27]. Antitumor therapeutics. The new formulation of NDs in oil emulsions is applicable as an antitumor therapeutic agent as well. To demonstrate the application, we employed the mouse lymphoma cell lines EL4 and E.G7-OVA. The E.G7-OVA cells are able to express OVA and have been widely used in cancer immunotherapy studies. Depicted in Fig. 4A is the timeline for the injection of OVA/ND/IFA, followed by the inoculation of EL4 and E.G7-OVA cells in C57BL/6 mice. By referring to the unvaccinated groups, we found that the treatment with OVA/ND/IFA in the EL4 model was unable to delay the tumor growth (Fig. 4B). In contrast, the OVA/ND/IFA treatment could effectively inhibit the tumor progression in the E.G7 model over 3 weeks post inoculation of the cells (Fig. 4C). Notably, half of the mice (4 out of 7 mice) in the E.G7 model remained tumor-free for more than 15 days after cell inoculation (Fig. 4D) and survived up to 35 days post tumor cell challenge (Fig. 4E). In Fig. 4F, we show photographs of the tumors isolated on day 24 from vaccinated and non-vaccinated mice. The difference in tumor size between these two groups (in triplicate) of mice is substantial, about 10 times in total volume.
Taken together, these results indicate that the presently developed nanovaccines with ND/IFA as adjuvants are promising agents for cancer immunotherapy. To further assess the therapeutic potential of OVA/ND/IFA, we investigated the in vivo immunostimulatory activity of the agent with just one dose in each mouse. Single-dose therapy has several advantages over multiple-dose therapy, including greater patient compliance, less risk of side effects, and lower costs [34]. In particular, knowing the effectiveness of single-dose vaccines composed of either whole viruses, protein subunits, viral vectors, or nucleic acids (RNA and DNA) is critically important in the prevention and control of COVID-19 infections today [35]. Additionally, in protecting livestock (such as cattle, sheep, pigs, and goats) from infectious diseases, single-dose veterinary vaccines make it easier for suppliers to streamline the production process and distribution of the agents to rural areas [36]. In this single-shot experiment, mice were first administered with OVA/ND/IFA via subcutaneous injection and then examined by measuring the production of anti-OVA IgG in the mouse sera on a weekly basis. We found that the OVA/ND/IFA treatment could dramatically induce the production of OVA-specific IgG antibodies on day 28 and day 35 after the administration (Fig. 5). Compared with the OVA/ND and OVA/IFA groups using the same amount of antigens, the OVA/ND/IFA treatment boosted the levels of anti-OVA IgG by 432 and 6 times on day 28, respectively. The enhancement factor further increased to 1717 and 19 times on day 35. It is demonstrated that the addition of NDs can greatly improve the effectiveness of IFA as a single-dose vaccine adjuvant, which is capable of sustaining its immunostimulatory activities over an extended period of time. Finally, we explored whether or not the addition of NDs in IFA altered the mechanism of the immune response elicited by IFA alone, which is known to proceed predominantly through the Th2 pathway (i.e. humoral immune response) [17,37]. We addressed the question by performing ELISA assays for cytokines in the sera of C57BL/6 mice after injection with OVA/ND/IFA. As shown in Fig. 6, only a small difference in the interleukin 2 (IL-2) level was found between the control and treatment groups, whereas a marked elevation of the interleukin 4 (IL-4) concentration in the vaccinated group was detected. Furthermore, by replacing NDs with FNDs in the adjuvants, we were able to clearly identify the presence of FNDs in mouse spleens through background-free detection of far-red fluorescence at ~ 700 nm in the tissue digests (Supplementary Figure S2 and ref. [38] for details). All the results led us to a possible predominant mechanism for the initiation of the immune response by the ND/IFA-based vaccine as follows: (i) formation of nodules with a loose structure in mouse tissues after subcutaneous injection of the antigen-loaded ND/IFA emulsion, in which the adjuvants act as a depot; (ii) sustained release of the antigens from NDs in the water phase of the emulsions; (iii) active and continuous recruitment of immature immune cells to the depot; (iv) uptake of the antigen-loaded NDs by the immune cells through endocytosis; and (v) promotion of the Th2 response, where helper T cells bind with the antigen-presenting cells and activate the development of B cells into antibody-producing plasma cells in spleens. The proposed mechanism is depicted in Fig. 7.
Conclusions We have demonstrated that the new formulation consisting of ND (diameter of ~ 100 nm) mixed in IFA is capable of generating effective and durable immune responses in mice. Without the need of allergic additives (i.e. dead mycobacteria), the addition of highly biocompatible NDs in the oil emulsion can not only retain the adjuvanticity of IFA but also significantly reduce the level of side effects. Compared with existing products, the ND-in-oil adjuvant has three major advantages: high safety, low side effects, and low demand in antigen. By applying OVA as the model antigen in small animals like mice, our studies clearly show that ND/IFA can serve well as an active vaccine platform to induce sustained and potent immune responses. Additionally, the adjuvants are useful as antitumor therapeutic agents, as proven by the OVA/ND/IFA treatment which effectively inhibits the tumor progression of OVA-expressing E.G7 cells inoculated into mice. Further research, development, and optimization of the ND-based new formulation into single-dose vaccines may find real-world applications of this technology in diverse areas, particularly in the care and protection of domesticated animals. Materials And Methods Chemicals and reagents. OVA, CFA, IFA, PBS, and all other chemicals were from MilliporeSigma and used without further purification. Mouse OVA-specific IgG antibodies were obtained from Abcam. NDs. Monocrystalline synthetic diamond powders with a nominal size of 100 nm were obtained from Element Six. To remove metallic impurities and graphitic carbon atoms on the surface, the diamond powders were first oxidized in air at 490°C for 2 h, followed by microwave cleaning in concentrated H2SO4-HNO3 (3:1, v/v) solution at 100°C for 3 h to functionalize the ND surface with -COOH groups [31]. OVA-conjugated NDs. OVA-conjugated NDs were synthesized via simple mixing of 5 µL of antigen solution (1 or 5 mg/mL) with 30 µL of ND suspension (2 or 10 mg/mL) in a shaker for 1 h at room temperature. Excess amounts of OVA were removed by centrifugal separation and water wash. Particle characterization. Hydrodynamic sizes and zeta potentials of air-oxidized, acid-washed NDs and OVA-conjugated NDs were measured with a particle size and zeta potential analyzer (DelsaNano C, Beckman Coulter). The morphologies of NDs on copper grids were imaged with a transmission electron microscope (H-7650, Hitachi) operating at an acceleration voltage of 75 kV. Cell cultures. EL4 and E.G7-OVA cells were obtained from Bioresource Collection and Research Center, Taiwan, and used together with the mouse models. The E.G7-OVA cells with the OVA expression were derived from the C57BL/6 mouse lymphoma cell line, EL4, transfected with pAc-neo-OVA plasmids [39]. The EL4 cells were grown in DMEM culture medium (Thermo Fisher Scientific) supplemented with 10% horse serum and Antibiotic-Antimycotic. The E.G7-OVA cells were maintained in RPMI-1640 medium (Thermo Fisher Scientific) supplemented with 10% fetal bovine serum, 10 mM HEPES, 1.0 mM sodium pyruvate, 0.05 mM 2-mercaptoethanol and 0.4 mg/mL G418 at 37°C in a humidified atmosphere of 5% CO2. Immunization. BALB/C mice and C57BL/6 mice (female, 6-8 weeks) obtained from BioLASCO, Taiwan, were immunized by subcutaneous injection of 100 µL solutions or emulsions containing either (i) OVA in PBS, (ii) OVA in CFA, (iii) OVA in IFA, (iv) OVA/ND in PBS, (v) OVA/ND in CFA, or (vi) OVA/ND in IFA on days 1, 14, and 28.
The water-in-oil emulsions were prepared by mixing 35 µL of OVA/ND suspension in PBS with 65 µL of CFA or IFA. The immunogens and the adjuvants were thoroughly emulsified by pipetting up and down before injection. All mice were maintained under pathogen-free conditions and treated benevolently to eliminate or reduce suffering. Antitumor therapeutics. C57BL/6 mice were subcutaneously immunized with the vaccine formulation as solutions or emulsions three times at 2-week intervals and then challenged with E.G7-OVA lymphoma cells (5 × 10^5 cells) or OVA-negative EL4 tumor cells (5 × 10^5 cells) on day 7 post the final immunization. The tumor growth was monitored starting from day 10 post tumor cell inoculation. For the single-dose therapeutic treatment, anti-OVA IgG was examined on a weekly basis after the immunization. IgG antibody assays. Mouse blood was collected from the submandibular veins of vaccinated or nonvaccinated mice on various days after immunization. OVA-specific IgG antibody responses of the immunized mice were evaluated by ELISA with the collected mouse sera measured in a microplate reader (GloMax, Promega). Declarations Ethics approval and consent to participate All the procedures related to animal experiments were approved by the Institutional Animal Care and Use Committee of the National Taiwan University College of Medicine (NTUCM). Consent for publication Not applicable. Availability of data and materials All the data are available on request from the corresponding author (H.C.C.).
Hexamerin genes in Apis mellifera: a search for alternative functions during development Martins JR. Hexamerin genes in Apis mellifera: alternative functions during development. 2012. 189p. PhD thesis – Genetics Department, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, 2012. Background: Insect hexamerins are storage proteins synthesized by the larval fat body and secreted into the hemolymph, where they accumulate. The canonical function of hexamerins is to provide amino acids and energy for the reconstruction of tissues and organs during pupal-to-adult development. The aim of the current study was to search for evidence of alternative roles for the hexamerins in the life cycle of the honey bee, A. mellifera. Results: The canonical role of insect hexamerins received support from our data on the temporal expression profiles of the four honey bee hexamerin subunits (HEX 70a, HEX 70b, HEX 70c and HEX 110), as verified by SDS-PAGE and western blot using hemolymph and fat body samples. Consistent with the canonical function, the four hexamerins were localized in the cytoplasm of fat body cells during metamorphosis, using specific antibodies and confocal laser-scanning microscopy. However, additional functions could be inferred from the following findings: (1) the four hexamerins were also localized in the nuclei of some fat body cells, thus tentatively suggesting an anti-apoptotic role during metamorphosis; (2) furthermore, HEX 70a and HEX 110 were localized in the cytoplasm and nucleus of ovarian and testicular cells, pointing to a role in gonad development and maturation, and co-labeling of the thymidine analog EdU and HEX 70a in the ovariole cell nuclei strongly suggested a role in cell proliferation, while HEX 70a depletion via injection of the specific antibody in queen pupae impaired ovariole growth, thus strengthening our hypothesis of a role in cell proliferation; (3) HEX 70a depletion also impaired cuticle sclerotization, indicating a function in exoskeleton formation; and (4) it led to a precocious adult ecdysis, perhaps in response to the lack (or decrease) of hexamerin-derived amino acids. We also investigated aspects of the regulation of hexamerin genes. The experimental manipulation of diet consumption and juvenile hormone (JH) titer clearly interfered with the expression of hexamerin genes. Regulation by JH was also supported by a previous bioinformatics analysis of the 5' UTR of each hexamerin gene (Martins et al., 2010), which revealed a potential binding site for Ultraspiracle (Usp), a member of the JH receptor complex, in the DNA. Experiments are in progress for in vitro expression and purification of the four hexamerins, aiming to further characterize their structures and interactions. Conclusion: Taken together, these results imply novel roles for hexamerins in the life cycle of A. mellifera in addition to their well-established role as amino acid sources for metamorphosis. Results: The hexamerin genes of the honey bee (hex 70a, hex 70b, hex 70c and hex 110) diverge considerably in structure, so that the overall amino acid identity shared among their deduced protein subunits varies from 30 to 42%. A bioinformatics search for motifs in the respective upstream control regions (UCRs) revealed six overrepresented motifs, including a potential binding site for Ultraspiracle (Usp), a target of juvenile hormone (JH). The expression of these genes was induced by topical application of JH on worker larvae.
The four genes are highly transcribed by the larval fat body, although with significant differences in transcript levels, but only hex 110 and hex 70a are re-induced in the adult fat body in a caste- and sex-specific fashion, workers showing the highest expression. Transcripts for hex 110, hex 70a and hex 70b were detected in developing ovaries and testes, and hex 110 was highly transcribed in the ovaries of egg-laying queens. A phylogenetic analysis revealed that HEX 110 is located at the most basal position among the holometabolan hexamerins, and like HEX 70a and HEX 70c, it shares a potential orthology relationship with hexamerins from other hymenopteran species. Conclusions: Striking differences were found in the structure and developmental expression of the four hexamerin genes in the honey bee. The presence of a potential binding site for Usp in the respective 5' UCRs, and the results of experiments on JH level manipulation in vivo, support the hypothesis of regulation by JH. Transcript levels and patterns in the fat body and gonads suggest that, in addition to their primary role in supplying amino acids for metamorphosis, hexamerins serve as storage proteins for gonad development, egg production, and to support foraging activity. A phylogenetic analysis including the four deduced hexamerins and related proteins revealed a complex pattern of evolution, with independent radiation in insect orders. Background Hexamerins essentially participate in the dynamics of amino acid storage and exploitation that occurs during insect development. These six-subunit proteins are primarily synthesized by the larval fat body and are massively stored in hemolymph as an amino acid source for development toward the adult stage [1]. They may also function as JH-binding proteins [2,3], and in addition, there is circumstantial evidence supporting the hypothesis that larval hexamerins are targeted for egg production [4][5][6][7][8]. While hexamerins have been the focus of numerous studies in solitary insects [9,10], the characterization of these proteins in social insects has received much less attention, in spite of the potential for discovering unique physiological functions linked to aspects of the social way of life. Workers of an ant species may use hexamerins as an amino acid source for brood nourishment, and there is circumstantial evidence that, by acting as a JH-binding protein, hexamerins regulate JH titer and caste differentiation in a termite species [11][12][13][14][15]. The highly eusocial honey bee hatches as a larva after a 72 h embryonic stage, and develops through a series of molts that define the five larval instars. This is a period of feeding, and the larva gains weight while it is continuously fed by worker bees. During the larval stage, queens, workers and drones have distinct nutritional requirements. Depending on the quality and quantity of nutrition, a diploid female larva develops as a queen or as a worker. A queen-destined larva is fed with secretions produced by worker hypopharyngeal and mandibular glands, the royal jelly, in a much higher proportion than a worker-destined larva. As a supplement to its nutritional regime, the worker larva also receives pollen, nectar and honey. Drone larval nourishment is composed of these same nutrients, but drones are fed a larger quantity of food, and their diet also differs in quality when compared to that given to workers [16].
Female and male larvae grow enormously because of these nutrients, and accumulate proteins, lipids and glycogen for use as structural materials and energy during the subsequent non-feeding pupal and pharate-adult stages. Duration of development from egg to adult eclosion differs considerably among queens, workers and drones, spanning 16, 21 and 24 days, respectively [17] with some differences among A. mellifera subspecies. The single adult queen in the hive is adapted to egg production. When fertilized, the eggs will give rise to workers and occasionally to a new queen, while non-fertilized eggs become drones. The functionally sterile workers perform a series of flexible but age-correlated tasks, a phenomenon known as age polyethism. The younger worker bees usually stay inside the hive and are engaged in brood rearing, queen tending, nest building, nest cleaning, and food processing. Older workers take over the duties of foraging for pollen and nectar that are used to provision and maintain the hive. Drones do not have any known function other than mating with the queen [18]. Our goal was to determine whether these morphotypes (queen, worker, drone), which are so divergent in their developmental rate, size, morphology and other essential characteristics, and which perform very distinct functions as adults in the hive, also show hexamerin gene expression profiles that are correlated with their unique developmental trajectories. The current study was undertaken to deepen our knowledge of the four hexamerin genes found in the honey bee genome [19][20][21] by exploring their structures, expression patterns, and putative functions using a comparative approach. To this end we determined (1) the features of the full-length cDNA coding sequences and their conceptual translation products; (2) the potential regulatory sequences present in the respective 5' UCRs; (3) the expression patterns in the fat body and gonads of developing and adult queens, workers and drones; (4) the effect of JH on the expression in larval fat body; (5) the relative quantities of hexamerin transcripts in females and drones during the metamorphic molt and adult stage, and (6) the evolutionary relationships among the honey bee hexamerins and related members of the hemocyanin superfamily in other insect species. Consistent with the hypothesis that hexamerins have multiple functions in the honey bee, our findings disclosed striking structural differences among the hexamerin gene sequences and tissue-, caste-and sex-specific expression patterns. Additionally, the recognition of potential JH-target sites in 5' UCR of all hexamerin genes together with the observed JH-effect on the levels of hexamerin transcripts indicate regulation by this hormone. Structural characteristics of the hexamerin CDSs and respective translation products The entire CDSs of hex 70b and hex 70a, as well as a portion of their respective 5' and 3' untranslated regions (UTRs), were previously sequenced by our research group [19,21]. Part of the hex 110 CDS (a cDNA fragment of 180 bp) also was previously cloned and sequenced in our laboratory [20]. In the current work, the sequencing of hex 110 was extended to the entire CDS and part of the 5' and 3' UTRs. In addition, we cloned and sequenced the hex 70c CDS as well as segments of its UTRs. Sequence analyses using the Artemis platform [22] allowed comparisons of the structural characteristics of hex CDSs ( Figure 1A, Additional files 1, 2, 3 and 4). 
Each of these sequences is present as a single copy in the Honeybee Genome Assembly (version 4.0) as confirmed by BLAST searches. The hex 70a, hex 70b and hex 70c sequences are tandemly arrayed in GroupUn.53, whereas hex 110 is separately positioned in Group 11.32. The translation products contain the N-terminal sequences determined by Danty et al [23] using automated Edman degradation. The conserved N, M and C hemocyanin domains ( Figure 1B) were identified in all hexamerin subunits (HEX 70a, HEX 70b, HEX 70c and HEX 110), but as previously observed [20], the hemocyanin C domain of HEX 110 is interrupted by a 291 amino acid insertion. This insertion is very rich in glutamine and glutamic acid (Glx) and contributes significantly to the total Glx content (20.9%) of HEX 110. Some features of the honey bee hexamerin genes and of the respective deduced subunits are compiled in Table 1. The four hexamerin subunits are also characterized by the presence of glycosylation sites, a conserved histidine and motifs typically found in other insect hexamerins (Additional files 5, 6, 7 and 8). We used the software http://phobius.sbc.su.se for prediction of signal peptides and transmembrane topology from the amino acid sequences of each of the hexamerin subunits in the honey bee, HEX110, HEX70a, HEX 70b and HEX 70c. Hydropathy profiles were produced for each of them and included in the Additional file 9. The subunits are predicted to contain each a signal peptide (Table 1, hydrophobic amino acids specified in Additional file 9) that directs transport of the protein through the secretory pathway. As expected, none of the subunits contain transmembrane helices. With respect to amino acid composition, HEX 70a and HEX 70c contain a relatively high quantity of phenylalanine, tryptophan and tyrosine (18.2% and 16.9%, respectively), and thus belong to the class of aromatic amino acid-rich hexamerins (or arylphorins) [24]. With their relatively high methionine content, HEX 70b (4.4%) and HEX 70c (6.4%) can be included in the class of methionine-rich hexamerins, which are composed of 4 to 11% methionine [10]. The overall amino acid identity shared among the deduced honey bee hexamerins varies from 30% to 42%. A multiple alignment using ClustalW 1.83 (Additional file 10) revealed that HEX 70a, HEX 70b and HEX 70c are more similar to each other (39 to 42% identity) than they are to HEX 110 (30 to 32% identity). Overrepresented motifs in upstream control regions (UCRs) Motif analyses of the UCRs were carried out with two goals in mind: to search for potential JH response elements, and to search for hexamerin-specific conserved regions. Table 2 shows six DNA motifs, here named site1 to site6, that are overrepresented in the UCRs of the four hexamerin genes. All of the six motifs were mapped on an extension of the UCR corresponding to 1.5 kb from the translation start codon (Figure 2A). Site1 is 80% identical to the D. melanogaster Ultraspiracle (Usp) binding site, also known as chorion factor-1 (CF1, [Flybase ID: FBgn0003964]) [25], and is located very close to the 5' end (Figure 2A and 2D). Two site1 motifs enrich the hex 110 UCR, whereas only one was found in each of the UCRs of the three hex 70 genes. None of the other five motifs (site2 to site6) are similar to any binding site described to date in the TRANSFAC database, and may be specific to hexamerin gene UCRs. The hex 70b UCR showed the greatest complexity because it is the only one containing all six motifs. 
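The composition-based class assignments made above follow simple thresholds (arylphorins: more than 15% aromatic residues; methionine-rich: 4 to 11% methionine; leucine-rich: more than 10% leucine; high-Glx: roughly 10% or more glutamine plus glutamic acid). A minimal sketch of how a deduced subunit sequence could be screened against these criteria is given below; the thresholds mirror those quoted in the text, while the function name and the toy sequence are hypothetical illustrations rather than data from this study.

def classify_hexamerin(seq):
    # Fractional amino acid composition of a one-letter-code protein sequence
    seq = seq.upper()
    frac = lambda letters: sum(seq.count(aa) for aa in letters) / len(seq)
    aromatic = frac("FWY")          # phenylalanine + tryptophan + tyrosine
    met = frac("M")
    leu = frac("L")
    glx = frac("QE")                # glutamine + glutamic acid
    labels = []
    if aromatic > 0.15:
        labels.append("arylphorin (aromatic-rich)")
    if 0.04 <= met <= 0.11:
        labels.append("methionine-rich")
    if leu > 0.10:
        labels.append("leucine-rich")
    if glx >= 0.10:
        labels.append("high-Glx")
    return labels or ["unclassified"]

# Toy sequence for illustration only (not a real hexamerin)
print(classify_hexamerin("MFWYYLLQEQEFYWMLKNDATSFYW" * 10))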
Few of these motifs were found in the hex 70a UCR, possibly because it is still a gapped DNA region (Figure 2A). Based on Figure 2A, we graphically represented the types (Figure 2B) and quantities (Figure 2C) of the potential regulatory sites shared by the four honey bee hexamerin UCRs. Site1 is shared by all four hexamerin genes; site2 is shared by hex 70a and hex 70b; site3 is present in hex 70b, hex 70c and hex 110; site4 is specific to the hex 70 genes (it is absent from the hex 110 UCR); and site5 and site6 were both detected in hex 70b, hex 70c and hex 110 (Figure 2B). Therefore, the hex 70b and hex 70c UCRs share a maximum of five of these motifs, whereas the hex 70a and hex 110 UCRs share only one motif (Figure 2C). This relationship suggests that at least some of the hexamerin genes are co-regulated. (Figure 2 legend: (A) mapping of the motifs listed in Table 2 on the UCRs; the N series in the hex 70a UCR indicates that this DNA region is still undefined in the honey bee genome. (B) Putative co-regulatory network showing the relationship between the six overrepresented motifs and the hexamerin genes (orange circles); the site1 motif (red square) is similar to the Usp binding element (also named CF1) in D. melanogaster, while sites 2 to 6 (grey squares) are not similar to any of the D. melanogaster binding site sequences described in the TRANSFAC database and may be specific to hexamerin genes. (C) Based on the co-regulatory network, a scheme constructed using the number of sites shared among the hexamerin gene UCRs; the thickness of the bars linking the hexamerin genes (orange circles) is directly proportional to the quantity of putative co-regulatory sites (the number of sites is indicated) shared by the four genes. (D) Alignment of the target sites similar to CF1-Usp found in the 5' UCRs of the honey bee hexamerin genes; the graph shows the sequence conservation at each position, while the height of the symbols indicates the relative frequency of each nucleotide at that position.) Effect of JH on the expression of hexamerin genes JH treatment was performed at the feeding phase of the 5th larval instar. During this developmental phase, larvae have a high titer of JH in hemolymph [26]. The treatment with exogenous hormone aimed to maintain the JH titer at a high level for a prolonged period of time, thus circumventing the decay that normally occurs at the transition from the feeding to the spinning phase [26]. Figure 3 shows that expression of the genes hex 70b and hex 70c is higher in JH-treated larvae than in the controls, although the JH effect on the expression of hex 70a and hex 110 is more modest. (Figure 3 legend: Effect of juvenile hormone on the expression of the four hexamerin genes. The hormone (diluted in acetone) was topically applied on the dorsum of 5th-instar larvae (feeding phase, L5F); controls were treated with acetone only. Gene expression was analyzed 24 h after treatment. Hexamerin transcript abundance was analyzed by RT-PCR followed by electrophoresis of the amplified cDNA on ethidium bromide-stained agarose gels; the A. mellifera rp49 gene was used as a loading control.) Together, these results support a function of JH in inducing the expression of hexamerin genes in honey bee larvae. Evolutionary relationship among the honey bee hexamerins and related proteins We investigated the evolutionary relationships among 45 hexamerin amino acid sequences from six insect orders (Hymenoptera, Diptera, Lepidoptera, Coleoptera, Isoptera, and Orthoptera), and hemocyanins from 8 insect species and from a crustacean. The tree structure mainly reflects the molecular relationships at the level of insect order (Figure 4). Within the hymenopteran cluster, the honey bee HEX 110 (named AmeHEX 110 in the tree) occupies the most basal position. Most of the hymenopteran hexamerins (seven of twelve) contain more than 15% aromatic amino acids, and are therefore considered arylphorins. Five of twelve meet the criterion for inclusion in the methionine-rich class. A wasp hexamerin (NviHEX79) and two of the honey bee hexamerins (HEX 70a, HEX 70b) contain more than 10% leucine and are here defined as leucine-rich. Hexamerins from the coleopterans Tribolium castaneum and Tenebrio molitor form a well-defined group (Figure 4). Without exception, they are all arylphorins. Aside from TcaHEX5, they all have a high Glx content (at 9.8% Glx, TcaHEX5 fails to meet our criterion for inclusion among the high-Glx hexamerins by a small margin). Among lepidopterans (Figure 4), the arylphorin BmoSP2 is positioned separately from the methionine-rich hexamerins HceHEX1, HzeHEX, TniJHSP2, CfuDAP2, BmoSP1 and HceHEX2, which are organized in two branches, one of them also including HviHEX. As only part of the HviHEX sequence is available in the databank, we could not classify it as an arylphorin or a methionine-rich hexamerin. All lepidopteran hexamerins contain intermediary or low Glx (< 7%) content, and only three of them (HceHEX1, HzeHEX and TniJHSP) are rich in leucine. Interestingly, these leucine-rich hexamerins were grouped in a single branch. A branch of dipteran hexamerins (Figure 4) included the very high Glx (20%)/high molecular mass DmeFBP1 and two other hexamerins from D.
melanogaster, Dme7320 and Dme8100, both containing a high Glx content (13.7% and 10.8%, respectively), but a typical molecular mass of ~70 kDa. All hexamerins in this branch are rich in leucine. The other three main branches consist of hexamerins in the range of 82-99 kDa. One of them clustered the DmeLSP1 isoforms (α, β and γ). The other two branches grouped some Anopheles gambiae hexamerins with OatHEX 1.2 from the mosquito Ochlerotatus atropalpus, and some A. gambiae hexamerins with DmeLSP2. Except for some incomplete sequences (Aga29840, Aga16795, and Aga31208) for which we could not determine the exact amino acid composition, all the hexamerins forming these three branches are arylphorins, and OatHEX1.2, DmeLSP1α and DmeLSP1β are also rich in methionine. Several of them have a high Glx content. None is rich in leucine. The only orthopteran hexamerin used in tree construction, LmiJHBS (Figure 4), is distinguished by a high proportion of leucine. The basal position of this hexamerin is evident. Two isopteran hexamerins, RflHEX1 and RflHEX2, clustered in a single branch. Both are arylphorins, but RflHEX1 is also rich in methionine. RflHEX2, but not RflHEX1, has a high Glx content (10.4%). As expected, the insect hemocyanins clustered separately from the hexamerins (Figure 4), and formed a well-supported monophyletic clade (1.0 Bayesian posterior probability), with ScuHC1 at the most basal position. The relative expression of the hexamerin genes in the fat body of workers, queens and drones (see Additional file 11) is shown from the 5th larval instar throughout the pupal and adult stages; because the expression data on earlier larval phases (2nd to 4th larval instars) were mostly obtained using workers instead of queens and drones, they are not shown here. Expression of hexamerin genes in the fat body of developing and adult workers, queens and drones In the fat body of workers, hex 110 transcripts were abundant from the 5th larval instar throughout the pupal stage, with a decrease in the amount of transcripts near the time of adult eclosion, but expression increased again in adult workers. In contrast, in the fat body of drones and queens, hex 110 expression was found basically in the 5th larval instar, extending up to the early pupal stage in queens. The expression of hex 70a differs from hex 110 mainly in adults, which showed sex- and caste-specific patterns of hex 70a transcription. Workers and drones showed high levels of hex 70a transcripts up to the ages of 30 and 5 days, respectively. In 3- to 7-day-old virgin queens, hex 70a expression was reduced in comparison
to the newly emerged ones, but increased again in older, egg-laying queens. Comparatively, the expression of hex 70a is lower in adult queens than in adult workers. The expression of the other two hexamerin genes, hex 70b and hex 70c, was detectable only in the 5th larval instar of females and males. Together, the data summarized in Figure 5 highlight that: (1) all the honey bee hexamerin genes are highly expressed in the larval fat body of workers, queens and drones, and (2) hex 110 and hex 70a were transcribed in a caste- and sex-specific fashion in the pupal and adult fat body. (Figure 5 legend: Patterns of hexamerin gene expression in the fat body of immature and adult stages of workers, queens and drones. The simplified diagrams represent expression patterns based on hexamerin transcript abundance as detected by RT-PCR followed by electrophoresis of the amplified cDNA on ethidium bromide-stained agarose gels, using A. mellifera actin as a loading control (see Additional file 11). The thick, thin and dashed lines in the diagrams represent high, intermediary and low transcript levels, respectively. L5: 5th larval instar. L5F and L5S: feeding and spinning phases of the 5th larval instar. NE: newly emerged queens. EL: egg-laying queens. 3d, 6d and 7d: adult age in days. Data previously published by our laboratory are indicated by the respective bibliographic references.) These expression patterns suggest that, in addition to their primary role as storage proteins that supply amino acids during non-feeding pupal development, hexamerins have different functions in the adult stage. Using real-time RT-PCR we quantified the levels of the four hexamerin transcripts in the fat body at two periods of the honey bee life cycle: during the larval-pupal transition and in adults. Figure 6A shows that the transcriptional profiles are similarly modulated, with an abrupt increase in the quantity of transcripts in the 5th larval instar, followed by a marked decrease in newly ecdysed pupae. However, at definite points of this developmental period, we found interesting differences among the honey bee morphotypes. In workers and drones, the four hexamerin transcripts reached maximal levels during the feeding phase of the 5th larval instar (L5F). In queens, maximal transcript levels were detected at the subsequent spinning phase (L5S), except for hex 70b transcripts, which reached maximal levels in L5F. Moreover, the maximal expression in workers and drones was significantly higher than the maximal expression in queens (except for hex 70c). Figure 6A also shows that in workers and drones at the L5F phase, when expression of the four hexamerin genes reached a maximum, the levels of hex 110/hex 70b transcripts were much higher than the levels of hex 70a/hex 70c transcripts. Pearson's correlation coefficient (R) was used to evaluate the relationship among these expression profiles. Figure 6B graphically represents the hexamerin genes (orange circles) linked by bars; the greater the thickness of the bar, the greater the R value. In queens, the expression profiles of hex 110 and hex 70c were the only ones positively correlated. In drones, by contrast, the expression profiles of all the hexamerin genes except hex 70b/hex 110 were positively correlated. Similarly, in workers the expression profiles of all but hex 70b/hex 110 and hex 70b/hex 70c were positively correlated. Therefore, workers and drones share similar transcriptional profiles during the larval-pupal transition that are distinct from those exhibited by queens. In other words, workers and drones differ from queens in the patterns of co-expression of the four hexamerin genes. Fat body from adult females was also used to quantitatively compare the levels of hex 70a and hex 110 transcripts (the only ones found at this stage). We compared age-matched workers and queens (3-day-old), but to investigate the effect of mating on transcript levels, we also included egg-laying queens in this analysis.
Figure 7 shows that 3-day-old workers have a significantly higher quantity of both transcripts than queens, independent of their reproductive status.

Expression of hexamerin genes in the gonads of developing and adult workers, queens and drones

The expression of hexamerin genes was also investigated in developing and adult female and male gonads using RT-PCR (Figure 8; see Additional file 12). The only hexamerin gene apparently inactive in ovaries and testes is hex 70c, as presumed from the complete absence of its transcript in these organs. The levels of hex 70b transcripts were abundant, but only in the larval gonads of queens and drones. A high level of hex 110 mRNA was found in the larval gonads of workers, queens, and drones. Expression then decreases during the pupal stage, to be resumed exclusively in the ovaries of egg-laying queens. Similarly, a relatively high level of hex 70a transcripts was found in the gonads of workers and drones at the larval/pupal stages, and in the ovaries of queens at the pupal/early adult stages. This is followed by transcript depletion in the gonads of workers and drones, but not in the ovaries of virgin and egg-laying queens, where hex 70a expression is maintained, although at a low level, throughout the adult stage. In summary, the presence of hexamerin transcripts in the larval and pupal gonads of workers, drones and queens suggests roles in ovary and testis development, and in spermatogenesis, which occurs during the pupal stage. The higher expression of hex 110 in the ovaries of egg-laying than of virgin queens is remarkable, and suggests a function in reproduction.

(Figure 3 legend: Effect of juvenile hormone on the expression of the four hexamerin genes. The hormone, diluted in acetone, was topically applied on the dorsum of 5th instar larvae at the feeding phase (L5F); controls were treated with acetone only, and gene expression was analyzed 24 h after treatment. Hexamerin transcript abundance was analyzed by RT-PCR followed by electrophoresis of the amplified cDNA on ethidium bromide-stained agarose gels, with the A. mellifera rp49 gene used as a loading control.)

Hexamerin genes and deduced proteins revealed striking structural differences

The tandem organization of the hex 70a, 70b and 70c genes in the honey bee genome supports the hypothesis of origin by gene duplication, a common phenomenon among insect hexamerins [27]. The separately located hex 110 gene exhibits unusual features throughout its sequence. It encodes a subunit that is longer than those usually found, and which carries a very high proportion of Glx (20.9%). Because these features are also displayed by hexamerins found in some species of ants [28] and wasps [29], it had previously been thought that they were restricted to hymenopterans. In fact, two hexamerins of the wasp N. vitripennis (NviHEX102 and NviHEX109) exhibit such characteristics (see Figure 4). However, a receptor in Drosophila, DmFBP1 (which is closely related to its own ligand, DmLSP1) [30], also has a very high Glx content (20%) and a high molecular mass (116 kDa). We also identified hexamerins with slightly lower Glx percentages (between 10% and 15%) among the dipterans, the coleopterans and the isopteran included in the tree (see Figure 4). The physiological significance of such a high proportion of glutamine and glutamic acid in hexamerins remains to be elucidated. A conserved histidine residue was identified in each of the four amino acid sequences.
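The Glx-rich, arylphorin and methionine-rich labels used throughout this section amount to simple percentage cut-offs over a deduced amino acid sequence. The short Python sketch below illustrates that bookkeeping; the thresholds and the toy sequence are assumptions chosen only for illustration, not values reported in this study.

```python
# Sketch: classify a hexamerin subunit by amino acid composition, in the spirit of
# the arylphorin / methionine-rich / Glx-rich labels used above. The cut-offs are
# illustrative assumptions, not values taken from the paper.
from collections import Counter

def composition(seq: str) -> dict:
    """Return the percentage of selected residue classes in an amino acid sequence."""
    seq = seq.upper()
    counts = Counter(seq)
    n = len(seq)
    pct = lambda residues: 100.0 * sum(counts[r] for r in residues) / n
    return {
        "aromatic (F+Y+W)": pct("FYW"),
        "Glx (Q+E)": pct("QE"),
        "Met": pct("M"),
        "Leu": pct("L"),
    }

def classify(seq: str, aromatic_cut=15.0, met_cut=4.0, glx_cut=10.0) -> list:
    """Label a subunit using assumed thresholds (percent of residues)."""
    comp = composition(seq)
    labels = []
    if comp["aromatic (F+Y+W)"] >= aromatic_cut:
        labels.append("arylphorin")
    if comp["Met"] >= met_cut:
        labels.append("methionine-rich")
    if comp["Glx (Q+E)"] >= glx_cut:
        labels.append("Glx-rich")
    return labels or ["no enrichment above the chosen cut-offs"]

if __name__ == "__main__":
    # Toy usage with a made-up fragment (not a real hexamerin sequence):
    fragment = "MFYWQEQEQLLKFYWYFME" * 10
    print(composition(fragment))
    print(classify(fragment))
```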
In the ancestral hemocyanins, six copper-liganding histidines confer the ability to bind and transport oxygen. The insect hexamerins lost most or all of these histidine residues and, thus, the oxygen-binding function [31]. The ~70 kDa hexamerins of the honey bee were classified as arylphorins and/or methionine-rich, according to their particular amino acid composition. Given the importance of aromatic amino acids in sclerotization, the HEX 70a and HEX 70c arylphorins may contribute to exoskeleton hardening and differentiation during pharate-adult development. HEX 70c is also rich in methionine and, like HEX 70b (also a methionine-rich hexamerin), it may act as a sulfur reserve for development toward the adult stage. The search for cis-acting elements in the UCR of each hexamerin gene using bioinformatic analyses revealed a total of six overrepresented DNA motifs. Interestingly, all four hexamerin genes exhibit potential binding sites for the protein Usp, suggesting regulation by JH and/or ecdysteroids. Usp has been primarily studied as part of the 20-hydroxyecdysone (20E)-binding nuclear receptor complex [32,33]. However, it was recently also identified as a potential target for compounds based on a methyl farnesoid structure, like JH [34]. The potential Usp binding motif is located near the start codon of each hexamerin gene (see Figure 2A), a conserved pattern that increases the likelihood of its functionality.

(Figure 5 legend: Patterns of hexamerin gene expression in the fat body of immature and adult stages of workers, queens and drones. The simplified diagrams represent expression patterns based on hexamerin transcript abundance as detected by RT-PCR followed by electrophoresis of the amplified cDNA on ethidium bromide-stained agarose gels, using A. mellifera actin as a loading control (see Additional file 11). The thick, thin and dashed lines represent high, intermediary and low transcript levels, respectively. L5: 5th larval instar. L5F and L5S: feeding and spinning phases of the 5th larval instar. NE: newly emerged queens. EL: egg-laying queens. 3d, 6d, and 7d: adult age in days. Data previously published by our laboratory are indicated by the respective bibliographic references.)

Using a hormone manipulation experiment, we recently demonstrated that hex 70b is induced by JH and repressed by 20E [19]. In the current study we expanded this experiment to investigate the action of JH on the expression of the other honey bee hexamerin genes. To make this approach comparative, we re-tested the expression of hex 70b in the same fat body samples used for the analysis of the other genes. The results showed a strong, positive influence of JH on the expression of hex 70b and hex 70c, and a weaker effect on the expression of hex 70a and hex 110. This differential effect of JH on expression levels suggests that the JH titer that induces maximal expression of one hexamerin gene may differ from the threshold needed for maximal expression of its homologues. Further studies using a series of JH doses to treat age-synchronized honey bee larvae may confirm (or refute) this hypothesis. These results are in accord with the proposed functionality of the Usp regulatory element in the UCRs. In addition to the potential Usp binding motif, five additional motifs (sites 2 to 6) were overrepresented in the UCRs of all four hexamerin genes, suggesting co-regulation.
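Figure 2C summarizes this putative co-regulation as a network in which the weight of each edge is the number of overrepresented motifs shared by two UCRs. A minimal sketch of that bookkeeping is given below; the motif-to-gene assignments are placeholders, not the actual counts determined for the honey bee UCRs.

```python
# Sketch: build a shared-motif co-regulatory network among hexamerin gene UCRs.
# Edge weight = number of overrepresented motifs found in both UCRs (cf. Figure 2C).
# The motif assignments below are placeholders for illustration, not real data.
from itertools import combinations

ucr_motifs = {
    "hex70a": {"site1", "site2", "site3", "site5"},
    "hex70b": {"site1", "site2", "site4"},
    "hex70c": {"site1", "site3", "site4", "site6"},
    "hex110": {"site1", "site2", "site5", "site6"},
}

def shared_motif_edges(motif_sets: dict) -> dict:
    """Return {(geneA, geneB): number of motifs present in both UCRs}."""
    return {
        (a, b): len(motif_sets[a] & motif_sets[b])
        for a, b in combinations(sorted(motif_sets), 2)
    }

if __name__ == "__main__":
    for (a, b), weight in shared_motif_edges(ucr_motifs).items():
        # In a drawing, the bar linking a and b would be proportional to `weight`.
        print(f"{a} -- {b}: {weight} shared site(s)")
```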
In the context of metamorphosis, or the larval-pupal transition, such co-regulation would assure the massive and synchronized production of hexamerins in late larvae for use during pupal and pharate-adult development. In fact, all four hexamerin genes reached their highest expression level during the very same stage, the 5th larval instar (see Figure 6A).

(Figure 6 legend: Relative quantification of hexamerin transcripts in the larval and pupal fat body. (A) Levels of hex 70a, hex 70b, hex 70c and hex 110 mRNAs measured by real-time RT-PCR in workers, queens and drones during the 4th larval instar (L4), the feeding (L5F) and spinning (L5S) phases of the 5th larval instar, and in newly ecdysed pupae (Pw). All points in the curves represent means and standard errors of three biological samples, each prepared twice (experimental replicates); for better visualization, the Y-axis scale was extended in the graphs representing hex 70a and hex 70c expression. Letters indicate significant differences: (ab) workers ≠ queens/drones; (ac) queens ≠ workers/drones; (bc) drones ≠ workers/queens; (abc) workers ≠ queens ≠ drones. Statistical analysis was carried out with Jandel SigmaStat 3.1 software (Jandel Corporation, San Rafael, CA, USA), using two-way ANOVA with post-hoc comparisons by the Holm-Sidak multiple comparison test; * p = 0.04, ** p ≤ 0.001. (B) Correlation among the transcriptional profiles of the four honey bee hexamerin genes in queens, workers and drones during the larval-pupal transition; the greater the thickness of the bar linking the encircled hexamerin genes, the higher the Pearson's correlation coefficient between their expression profiles.)

Lending support to the idea of co-regulation, we found that the transcriptional profiles of the hexamerin genes in each caste and sex during the larval-pupal transition are, in general, positively correlated. Regulatory factors could interact with the common sequence motifs, thus influencing the correlated expression. However, the transcriptional profiles of hex 70b/hex 110 in workers and drones, and of hex 70b/hex 70c in workers, did not show a positive correlation (see Figure 6B). Furthermore, in metamorphosing queens, only the hex 70c and hex 110 transcriptional profiles showed a significant Pearson's correlation coefficient. Such differences might be due to structural features in the architecture of each hexamerin gene UCR, such as the type, number and spatial distribution of the overrepresented motifs, which raises the possibility of nuanced differences in the co-regulatory mechanism.

Diversification of hexamerins associated with independent radiation within insect orders

Based on amino acid sequence similarities and hexameric structure, it has been proposed that arthropod hemocyanins gave rise to the insect hexamerins [31,35], which lost the ability to bind Cu2+ ions. Therefore, in contrast to the ancestral molecule, hexamerins do not bind and deliver hemolymph oxygen, but mainly have a role as storage proteins. The molecular phylogeny in Figure 4 reveals a complex pattern of hexamerin evolution, with independent radiation in each of the selected insect orders. In general, our analysis resulted in a tree topology similar to those published before [36], with the hemimetabolous (orthopteran and isopteran) hexamerins in a basal position, followed by the hymenopteran and coleopteran hexamerins, which are basal to the lepidopteran and dipteran ones.
As previously noted, these evolutionary relationships are in good agreement with the phylogeny of the insect orders [37]. Most of the information on hexamerin evolution derives from studies on dipteran and lepidopteran species, with a few studies on other holometabolous and hemimetabolous orders, although data from an evolutionarily less-derived order (Plecoptera) have recently been published [38]. The main contribution of the current phylogenetic analysis was to explore the evolutionary relationship among the honey bee hexamerins and their homologue sequences in other insect orders. It is evident in our phylogenetic tree that the high molecular mass AmeHEX110 is located at the most basal position among the holometabolous hexamerins. AmeHEX110 shares this position with two probable orthologues, NviHEX102 and NviHEX109. Similarly, the gene encoding AmeHEX70a apparently has an orthologue, NviHEX81, in the genome of N. vitripennis. Other poten- The molecular phylogeny also revealed a potential orthology relationship outside the hymenopteran clade, between the hexamerins from T. molitor, TmoHEX2, and T. castaneum, TcaHEX2, but a definitive conclusion would only be possible if the T. molitor genome were sequenced. For the lepidopteran hexamerin sequences it is more complicated to infer orthology relationships, at least until more genome sequences become available in public databases. Orthology may be a reflection of functional equivalence, although it is important to consider that, in view of gene loss and other events in the evolution of a group of organisms, genes from different species sharing high sequence similarity may not be orthologues at all [39]. Functional studies are therefore decisive to elucidate this complex relationship among hexamerin genes. A relevant observation taken from our molecular phylogeny is the well-supported split between the dipteran hexamerins and the group formed by the hexamerin receptor DmeFBP1 and Dme8100 plus Dme7320 (0.82 posterior probability; Figure 4). It has already been suggested that dipteran hexamerin receptors, which also belong to the hemocyanin-hexamerin superfamily, form a separate group in phylogenetic trees [37]. Whether these two DmeFBP1 paralogues (Dme8100 and Dme7320) have a similar function is still unknown. It is also notable that most of the lepidopteran hexamerins used in the tree reconstruction are methionine-rich proteins, suggesting that they derived from an ancestral molecule containing a high proportion of this amino acid. We also observed that, except for AmeHEX110, every basal hexamerin in each insect order is an arylphorin, suggesting that insect hexamerins evolved from an ancestral arylphorin. Our phylogenetic analysis also supports the hypothesis of gene duplication events taking place mainly after the split of the insect orders.

Expression of hexamerin genes in larval fat body: the well-known role in metamorphosis and a putative role in binding JH and regulating caste differentiation

The four hexamerin genes were highly transcribed in the larval fat body of workers, queens and drones, and are probably involved in hexamerin synthesis for amino acid storage to support metamorphosis and development toward the adult stage. However, these genes were differentially co-expressed in the honey bee morphotypes, as revealed by transcript quantification during the critical period of the larval-pupal transition.
The most evident difference occurred during the feeding phase of the 5th larval instar (L5F), where we observed a higher level of hex 110, hex 70a and hex 70b transcripts in workers and drones than in queens. This is a very interesting finding if contrasted to the caste and sex-specific JH titer at this stage, which is higher in queens than in workers and drones [26,40]. It is known that JH plays a central role in phenotypic caste differentiation in A. mellifera. This hormone triggers specific physiological responses in the bipotent female larvae, a high titer inducing development of a queen and a low titer specifying the worker phenotype [41,42]. The inverse relationship between levels of hexamerin transcripts and JH titers leads us to speculate that, as was proposed for the termite Reticulitermis flavipes [14,15,43,44], the honey bee hexamerins may function as JH-binding proteins. By binding and controlling JH levels, hexamerins were implicated in the process of JH-dependent caste differentiation in this termite. By applying this model to the honey bee, we may hypothesize that if JH titer exceeded the binding capacity of hexamerins, it would interact with target receptors and therefore cause the bipotent female larva to develop as a queen. If not, the larva would develop as a worker. Interestingly, all the four hexamerin genes showed a significantly higher expression in drone feeding larvae (low JH titer) than in queens at the same stage (high JH titer), thus reinforcing our hypothesis on a role for the honey bee hexamerins in binding and controlling JH action. But this needs further validation. To our knowledge, hexameric JH-binding proteins have been characterized only in orthopterans [2,3,45]. The cDNA and predicted amino acid sequences of the high affinity JH-binding hexamerin from L. migratoria (Lmi-JHBS) show peculiar structural features. It is clear that LmiJHBS is not closely related to any other hexamerin, including the potential JH-binding hexamerin from R. flavipes (RfHEX1) and the honey bee hexamerins (see Figure 4). The lack of homology clearly indicates that these hexamerins have evolved independently, and it is unlikely that comparisons among their primary sequences will bring to light amino acid regions or domains that could be involved in JH binding. Such regions or domains could not be identified in the LmJHBS sequence [3]. Thus, functional binding assays and determination of dissociation constants are required to assess the potential role of hexamerins in binding JH and, by extension, to verify the role of the honey bee hexamerins in caste determination. An interesting finding is that insect hexamerins may not directly bind JH but may be part of a multiprotein complex engaged in JH sequestration and transport. The first demonstration of a physical interaction among JHbinding proteins (both free and in a complex with JH), two hexamerins (one of them is an arylphorin), and an apolipophorin was recently provided [46]. This interaction implies an important participation of hexamerins in regulating JH levels, and action, even if they do not directly bind to JH. Re-induction of hexamerin gene expression in the adult fat body: differential roles in workers and queens In agreement with previous reports on hemolymph proteins in the honey bee [23,47], lower levels of hexamerin gene expression were found in adult workers in comparison to larvae. Of the four hexamerin genes, hex 70a and hex 110 were the only ones with detectable expression in the adult fat body. 
Their transcriptional profiles differed conspicuously among adult queens, drones and workers. Levels of hex 70a transcripts were very low in queens and abundant in drones and workers. However, in drones, the abundance is limited to the five first post-emergence days, whereas, in workers, it is extended up to the 30th day of adult life. These transcriptional profiles in workers and drones match the respective HEX 70a profiles in hemolymph [21], and are closely connected with adult life duration. Workers live longer than drones [48,49], and their respective patterns of hex 70a expression reflects fat body metabolic activity and lifespan. But the higher expression of hex 70a in the fat body of adult workers in comparison to queens (see Figures 5 and 7) suggests an important role in worker physiology. This is discussed below. The expression of hex 110 is somewhat different when compared to that of hex 70a. Like hex 70a, the hex 110 gene is considerably active in the fat body of adult workers, but it is practically silent in queens (see Figures 5 and 7) and drones (see Figure 5). Yet, in spite of the evident presence of hex 110 transcripts in adult workers, a negligible amount of HEX 110 subunits were detected in hemolymph [20], indicating that this hexamerin does not function as a storage protein at this stage. But a potential function in the fat body of adult workers cannot be ruled out, given its specific and abundant expression in this tissue. Our results are consistent with re-induction of hex 70a and hex 110 expression in adult bees, although only the product of hex 70a accumulates in hemolymph at this stage. The presence of hexamerins in adult insects may occur either because they were carried over from the larval stage, or due to a specific induction in adults. As examples, some lepidopterans that do not feed on protein as adults may use the larval store of hexamerins for egg production [5]; mRNAs for an adult-specific hexamerin appears in Musca domestica females only after induction by a rich protein meal [50]. Differently, the presence of HEX 70a in the adult honey bee results from gene reinduction after a drastic reduction in transcript levels during adult ecdysis (see Figure 5). The honey bee queen continuously receives, via trophallaxis, a proteinaceous glandular secretion, the royal jelly, which is produced by nurse workers [51]. She may not need to allocate amino acids from larval hexamerins for egg production, since such compounds are continuously derived from her protein-enriched diet. The structural nutrients and energy contained in royal jelly administered to queens are not used for the synthesis and storage of hexamerins (levels of hexamerin transcripts are very low, or undetectable, in the fat body of adult queens; see Figures 5 and 7) but must be mainly directed to vitellogenesis and egg production. In contrast to the queen, workers normally do not produce eggs, although they also have access to a rich source of dietary proteins. They actively consume pollen when they are younger, i.e., during the first two weeks of adult life [52]. Pollen consumption increases the expression of hex 70a and hex 110 in the fat body, and increases the abundance of HEX 70a in hemolymph [20,21]. This is consistent with dietary protein consumption causing reinduction of both hexamerin genes in adult workers, and thereby enabling the storage of HEX 70a. Whether hex 110 mRNAs are translated and, if so, why their subunits do not accumulate in hemolymph are questions that remain unanswered. 
Pollen consumption also causes the accumulation of vitellogenin, the yolk protein precursor, in the hemolymph of young workers [53]. Vitellogenin is continuously produced and stored in hemolymph during the first two weeks of adulthood to be subsequently depleted as worker bees get older and become foragers [54][55][56][57]. HEX 70a follows a similar pattern. Since workers normally do not reproduce and in general have a short life, the question that remains unanswered is: why do they store proteins? We suggest that the consumption of pollen by young workers exceeds demand, and the excess is hoarded in the form of storage proteins to be consumed later, when they become foragers. Foragers rather eat nectar [58], which is composed primarily by carbohydrates [59]. By this means, stored proteins could provide amino acids for sustaining worker basal metabolism during foraging. The well documented observations that HEX 70a [21], vitellogenin [55][56][57], and total hemolymph protein titers [60] decrease gradually in foragers is consistent with this hypothesis. The destination of proteins stored in worker hemolymph would not be fixed, but dependent on the social context. In case there is queen loss, workers may activate their ovaries for drone production. Their protein reserves would then be directed to meet reproduction demands. Interestingly, workers accumulate storage proteins when they are younger and more prone to activate their ovaries if separated from the queen. Also in the ant Camponotus festinatus, hexamerins are possibly involved in important facets of sociality. It was demonstrated that in the presence of larvae, adult workers do not store, but apparently make use of hexamerins to nourish larvae. Conversely, hexamerins accumulate in hemolymph of workers with no larvae to feed. It was also observed that hexamerins exist in great amounts in virgin queens and are depleted when they seal themselves in a chamber to lay eggs to found a new colony, thus indicating utilization for egg production and rearing the first brood [11,12]. The high levels of hexamerins stored by certain species of termites were also found to be related to colony founding and to the production of initial broods [13]. In adult honey bee workers, vitellogenin and HEX 70a may be part of the above mentioned multiprotein complex engaged in JH binding and sequestration [46], since the presence of both in hemolymph coincides with a low JH titer (in younger workers), and their depletion occurs in synergy with JH titer increase (in foragers). If so, as proposed for vitellogenin [61], HEX 70a also may be a player in the physiological process of JH-regulated transition to forager. Yet, HEX 70a may have developed another role in adult workers. A protein fragment found in the honey bee venom [62] matched HEX 70a N-terminal sequence. Its function in the venom, and whether it is synthesized by the venom gland or sequestered from hemolymph, was not yet determined. Hexamerin gene expression in gonads: a proposed role in ovary and testis development and activity Among solitary insects there is circumstantial evidence supporting the hypothesis that hexamerins are also used for reproduction. In lepidopteran species, for example, it was established a correlation between egg production and depletion of the larval reserve of hexamerins [4,5]. Autogenous mosquitoes that produce their first batch of eggs without a feeding may use larval storage proteins, mainly hexamerins, as amino acid source for this purpose [6,7]. 
It has also been suggested that amino acids held in storage proteins are used for provisioning eggs of Schistocerca americana [8]. To shed light on whether the "adult" honey bee hexamerins are important or not for reproduction, we checked for the presence of transcripts in the gonads. Except for hex 70c, hexamerin transcripts were abundant in the gonads of larvae and pupae, suggesting roles in ovary differentiation, and also in spermatogenesis, which in drones occurs during pupal stage and is finalized before adult emergence. Two of the hexamerin genes, hex 70a and hex 110, were also expressed in adult gonads, and this was exclusively in queen ovaries. Interestingly, the expression of hex 110 is higher in the ovaries of mated, egg-laying queens than in the young, virgin ones. It is known that mating triggers changes in gene expression in Drosophila females [63]. Mating also elicits physiological and behavioral changes in honey bee queens, and although the molecular mechanisms underlying such responses are largely unknown, it was verified that they involve differential gene expression into the ovaries. Based on gene ontology annotation, a function in oogenesis and reproduction was attributed to these differentially expressed genes [64]. The high expression of hex 110 in the ovaries of egg-laying queens suggests a role linked to ovary activity and reproduction. Ovaries of egg-laying queens have a lower level of hex 70a transcripts than ovaries of newly emerged ones. But HEX 70a subunits exist in equivalent amounts in the ovaries of both, newly emerged and egg-laying queens, as confirmed by Western blots using a specific antibody [21]. Therefore, ovarian HEX 70a molecules seem to have a dual origin. It is produced by the fat body of egg-laying queens (see Figure 5) and secreted in hemolymph [21,23], implying that it could be incorporated into the ovaries in addition to being synthesized by them. The function of HEX 70a in the ovaries of virgin and egg-laying queens is to be determined. Conclusions Our study revealed dramatic differences in structure, organization and expression of the four hexamerin genes of the honey bee, where these differences might have arisen concurrently with their functional diversification. The amino acid composition, motifs and conserved regions were identified in the deduced protein subunits, which were also used in a phylogenetic analysis to explore their evolutionary relationship with homologue sequences of other insect species. Analyses of the UCR of each hexamerin gene revealed a total of six overrepresented DNA motifs, indicating co-regulation. One of these motifs is a potential binding site for the protein Usp, and suggested gene regulation by JH. This hypothesis was reinforced by manipulating JH-levels in experiments in vivo, which resulted in JH-induction of the expression of hexamerin genes in larval fat body. Apparently under a high dietary protein input as occurring during larval stage, JH induces hexamerins for a high expression. 
The detailed expression studies using fat body, ovaries and testes revealed that: (1) the four hexamerin genes are highly transcribed in the larval fat body, and are likely involved in hexamerin synthesis for amino acid storage and use during pupal stage; (2) in young adult workers, the expression of hex 70a in the fat body is in accordance with the idea of amino acid storage for a later support of foraging; (3) the expression of hexamerin genes in larval and pupal gonads suggests a role in ovaries and testes differentiation, and in spermatogenesis; (4) the expression of hex 110 in the ovaries of egg-laying queens, was associated with ovary activity for egg production; (5) at definite points of the honey bee development, the inverse relationship between the fat body levels of hexamerin transcripts and JH titer suggests that hexamerins regulate JH availability and, consequently, may be involved in the processes of caste-differentiation and worker transition to foraging. Together, the findings of the present study are significant in that they highlighted the potential participation of hexamerins in important aspects of the life cycle of a social insect, in addition to their primordial role in metamorphosis. Honey bees Africanized honey bees were collected from hives maintained at the apiary of the University of São Paulo in Ribeirão Preto, Brazil. Developing queens (reared by standard apicultural procedures), workers and drones were staged according to Rembold et al [65], Michelette and Soares [66], and Tozetto et al [67], respectively. Adult workers and drones of known ages were obtained by paint marking the newly emerged ones and returning them to their hives to be collected a specified number of days later. Adult virgin queens were used at the third day after emergence, and egg-laying queens of unknown ages were collected from colonies kept in our apiary. Fat body and gonad samples used in hexamerin expression studies Transcripts for the four hexamerin genes were assessed in the fat body of workers, queens and drones collected at different ontogenetic stages (larval, pupal and adult). Larvae were collected at the 4 th and 5 th instars, and individually sampled for total RNA extraction. As at the 4 th instar the fat body removed by dissection frequently resulted in small quantities of total RNA, we opted to use the whole larvae as fat body source. To optimize comparisons within larval stage, the same was done for the 5 th instar larvae. The abdominal carcass (dorsal integument and subjacent fat body) from pupae, pharate-adults and adults was sufficient to obtain individual RNA amounts, and then it was used as fat body source. Expression was also investigated in the ovaries of queens and workers, and in the testes of drones, during pre-imaginal and adult stages. Ovaries and testes were carefully dissected and exhaustively washed in Ringer saline to eliminate contaminant fat body before being used for RNA extractions. Data from our laboratory concerning hex 110 [20], hex 70b [19] and hex 70a [21] gene expression in the worker fat body, and concerning that of hex 70a [21] in female and male gonads, were used for comparisons. Testing the effect of JH on the expression of hexamerin genes To test the effect of JH on the expression of hexamerin genes, a commercial JH III (Fluka) was diluted in acetone to make 10 μg per μl, and 1 μl was topically applied on 5 th instar worker larvae at the feeding phase (L5F). Control larvae at the same age were topically treated with 1 μl acetone. 
To obtain age-controlled worker larvae, the queen was caged on a comb and left to lay eggs for 6 h. When age-synchronized larvae reached the L5F stage, the comb was retrieved from the hive and transported to the laboratory for hormone treatment. The hormone (or acetone only) was carefully deposited on each larva in its comb cell using a micropipette. In this occasion, the comb was mapped for further identification of treated and control larvae, and thereafter they were returned to the hive. Treated and control larvae were collected after 24 h for RT-PCR analyses. Characterization of hexamerin coding sequences (CDSs) The previously described N-terminal sequences of two of the honey bee hexamerins, HEX 70c (AYYAGRHTAD-MFFLH) and HEX 110 (APNVKQRAADQDLLNKQQD-VIQLLQKISQPIPNQELQNLG) [23], were individually aligned against the Official Gene Set database [68,69]. Matching predicted amino acid sequences (GB13613-PA and GB14361-PA) were identified, and the corresponding nucleotide sequences (GB13613-RA and GB14361-RA) were annotated in the Artemis 7.0 platform [22] and used to design primers for experimental determination of the complete hex 70c and hex 110 CDSs (a short hex 110 cDNA fragment of 180 bp had been previously cloned and sequenced by our research group, [20]). The primers [hex 70c (PIR and 3F; 2R and PIF), and hex 110 (5R and 2F; 2R and 0F)] (see Additional file 13) were combined for PCR amplification from first-strand cDNAs obtained by reverse transcription (see RT-PCR analysis below) of total RNA from 5th instar worker larvae. Amplicons were purified and subcloned using the TOPO TA-cloning kit (Invitrogen). Insert-containing plasmids were subjected to sequencing reactions using the primers described in Additional file 1 and M13-forward and reverse universal primers. Dideoxy sequencing was performed in an automatic sequencer (ABI Prism 310, Applied Biosystems) using BigDye Terminator v3.0 Cycle Sequencing Reaction (Applied Biosystems). Sequences were analyzed using Sequencher (version 4.7, Gene Codes Corporation), Artemis software and BLAST algorithms. For purposes of data comparison, we used the hex 70b CDS, and the hex 70a CDS plus part of its 5' and 3' UTRs previously cloned and sequenced by our research group ( [19,21]; see Additional file 14 for the accession numbers). Characterization of potential regulatory sequences in UCRs A pipeline for motif discovery was designed based on reliable strategies previously proposed by MacIsaac et al [70], and adapted to analyze the honey bee genome [71,72]. This pipeline integrates three motif-detection programs: AlignAce [73], MEME [74] and MDscan [75]. Honey bee intergenic databases were constructed for 1.5, 3 and 6 kb sequence sizes that were trimmed whenever another open reading frame (ORF) was found to be flanking these regions. These databases were exploited for score calculations using group specificity scores (Church scores) [76], ROC-AUC scores [77] and Enrichment scores [78]. Two additional specific score metrics, the MAP score from AlignAce and MDscan and the E-value from MEME, were also used as a first filter for selecting the most significant motifs (MAP > 5 and E-value ≤ 1e-05). The second filter was set up to decrease the amount of spurious hits among the identified DNA motifs (Church ≤ 1e-04, ROC-AUC ≥ 0.7 and P-value for enrichment ≤ 1e-04). 
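The two filtering steps just described reduce to straightforward threshold checks on each candidate motif's scores. A minimal Python sketch is given below; the score field names and the example records are assumptions for illustration, and only the numeric thresholds follow the values stated in the text.

```python
# Sketch: apply the two filtering steps described above to candidate motifs.
# Each candidate is a dict of scores; the example records are invented, only the
# thresholds follow the values stated in the text.
def passes_first_filter(m: dict) -> bool:
    """AlignAce/MDscan candidates need MAP > 5; MEME candidates need E-value <= 1e-05."""
    if m["program"] in ("AlignAce", "MDscan"):
        return m["MAP"] > 5
    return m["evalue"] <= 1e-05  # MEME

def passes_second_filter(m: dict) -> bool:
    """Keep motifs with Church <= 1e-04, ROC-AUC >= 0.7 and enrichment P <= 1e-04."""
    return (m["church"] <= 1e-04
            and m["roc_auc"] >= 0.7
            and m["enrichment_p"] <= 1e-04)

candidates = [  # invented example records
    {"name": "site1", "program": "MEME", "evalue": 1e-07,
     "church": 2e-05, "roc_auc": 0.86, "enrichment_p": 3e-05},
    {"name": "spurious", "program": "AlignAce", "MAP": 3.2,
     "church": 5e-03, "roc_auc": 0.55, "enrichment_p": 2e-02},
]

selected = [m for m in candidates
            if passes_first_filter(m) and passes_second_filter(m)]
print([m["name"] for m in selected])  # -> ['site1']
```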
The main criterion for identifying known regulatory sites among the six overrepresented motifs was the alignment of the PSSM (Position-Specific Scoring Matrix) for each hexamerin motif with the D. melanogaster sites as described in the TRANSFAC database, version 2008.2 [79]. Only the alignments passing a threshold of 80% identity for each PSSM were considered as significant matches. The correlation among hexamerin transcription profiles in developing queens, workers and drones, and the occurrence of DNA motifs in hexamerin gene UCRs were represented as networks based on concepts from graph theory [80] and complex networks [81,82]. Molecular phylogenetic analysis Hexamerins and hemocyanins were searched for in public databases of protein sequences (Additional file 14) by using HMMER [83] for identifying the hemocyanin C (PF03723.5), N (PF03722.5) and M (PF00372.10) domains as described in the Pfam database [84]. A multiple alignment was performed using Muscle [85] with default parameters (Additional file 15). The phylogenetic tree was reconstructed by bayesian inference (MrBayes v3.1.2) using the Blosum model and gamma distribution of substitution rates. Metropolis-coupled Markov chain Monte Carlo sampling was performed (with one cold and three heated chains) setting the number of generations to 300,000 and trees were sampled every 100 th generations. The average standard deviation of split frequency was 0.005 after 300,000 generations. The posterior probabilities were estimated by discarding the first 30% samples. RT-PCR analysis The expression of hexamerin genes was evaluated in the fat body and gonads of developing queens, workers and drones by semi-quantitative RT-PCR using specific primers (hex 110: 2F and 2R; hex 70b: RT-PCR-F and RT-PCR-R; hex 70c: JUF and JUR, see Additional file 13) to generate 659, 456 and 171 bp cDNA fragments, respectively. Total RNA was extracted from fat body, ovaries and testes using Trizol reagent (Invitrogen). The RNA concentration of each extracted sample was measured using a GeneQuant spectrophotometer (Pharmacia). Purity was determined by the 260/280 nm ratio considering values between 1.8 and 2. RNA integrity was verified using denaturing agarose gel (1.2%) electrophoresis and ethidium bromide staining. RNA samples were incubated at 37°C in the presence of 3 units of RNase-free DNase (Promega) for 40 min to eliminate contaminant DNA, followed by 15 min at 70°C to inactivate the enzyme. Firststrand cDNA was synthesized by reverse transcription using 2.5 μg of total RNA, SuperScript II reverse transcriptase and an oligo dT(12-18) primer (Invitrogen). Negative control reactions without the enzyme were also prepared in parallel. After establishing the adequate number of cycles to avoid saturation, aliquots of cDNAs diluted 1:5 (v/v) in water were subjected to PCR (27 cycles of 30 s at 94°C, 1 min at 58°C and 1 min at 72°C). The amplified products were analyzed by electrophoresis in 1.2% agarose gels containing ethidium bromide. An A. mellifera actin gene (GenBank accession number AB023025), which is constitutively expressed during development [86], was used to control for cDNA loading. The primers used for actin gene amplification were ACT -F and ACT -R (Additional file 13), and the thermal cycling program was the same as described above. Primer pairs used in RT-PCR analysis (as well as those specified in Additional file 13) were designed to span at least one intron. 
Therefore, possible contamination by genomic DNA in RT-PCRs could be easily identified by the detection of a distinct, larger band following electrophoresis in ethidium bromide-stained agarose gels.

Real-time RT-PCR analysis

In order to quantitatively compare the levels of hexamerin transcripts in queens, workers and drones during the larval-pupal transition and in adults, we used the ΔΔCT method, in which the relative amount of transcripts is given by 2^(-ΔΔCT) (Applied Biosystems User Bulletin #2; [87]). We previously performed validation experiments to verify the amplification efficiencies of the targets and the endogenous reference. The rp49 gene [GenBank:AF441189], which is expressed at similar levels during honey bee development [86], was used as the endogenous reference. Amplification of rp49 was done with the primers R and F (Additional file 13). Specific primers were used to amplify the hexamerin genes (hex 70a: RTR and RTF; hex 70c: 2F and 1R; hex 110: 3R and 4F; hex 70b: RTR and RTF; Additional file 13). Using serial cDNA dilutions, the efficiency (E) of the reactions was calculated as E = 10^(-1/slope) for each gene and shown to be approximately equal. Amplifications were conducted in a 7500 Real Time PCR System (Applied Biosystems) using 20 μL reaction volumes containing 10 μL SYBR Green Master Mix 2× (Applied Biosystems), 1 μL first-strand cDNA (0.25 μg/μL, prepared from total RNA extracted from the fat body as described above for RT-PCR analysis), 7.4 μL water and 1 pmol of each specific primer. Reactions lacking the SuperScript II reverse transcriptase (Invitrogen) or the cDNA template were prepared as negative controls. We used an initial cycle of 50°C for 2 min and a denaturation step of 95°C for 10 min, followed by a two-step cycling condition (40 cycles of 95°C for 15 s and 60°C for 1 min). Each run was followed by a melting curve analysis to confirm the specificity of amplification and the absence of primer dimers. To check reproducibility, each SYBR Green assay was done in duplicate and repeated with three independent samples. Baseline and threshold were correctly set to obtain accurate CT values, which were exported into an MS Excel spreadsheet (Microsoft Inc.) for 2^(-ΔΔCT) calculations. Pearson's correlation coefficient R was used to verify a possible association among the expression profiles of the four honey bee hexamerin genes in queens, workers and drones during the larval-pupal transition. The statistical significance of the R values was evaluated using a t-test, with R > 0.81.

(Additional file legend: HEX 110 deduced amino acid sequence [23]; asterisks indicate glycosylation sites; the conserved histidine is double underlined.)

Authors' contributions

JRM conducted the bioinformatic analyses, directed laboratory activities, undertook statistical analyses and interpretation of results, and assisted with the preparation of this manuscript. FMFN assisted with primer design, bioinformatic analyses, and manuscript edits. ASC conducted the bioinformatic and phylogenetic analyses and authored the corresponding Method and Result sections. ZLPS assisted with project design and development. MMGB conducted project design with considerable input into direction of research, interpretation of results and preparation of this manuscript. All authors read and approved the final manuscript.

Introduction

The larvae of holometabolous insects accumulate a large quantity of proteins, carbohydrates and lipids which serve as energy and structural compounds for sustaining metamorphosis up to the adult stage [1].
The most abundant proteins in larval hemolymph are the hexamerins, also known as larval serum proteins, or simply, as storage proteins. Hexamerins are high molecular mass molecules composed, by definition, of six subunits, which can be either homo-or heteromers. Evolutionarily they are derived from hemocyanins, but in contrast to the ancestral molecule, they have lost the capacity of binding copper ions for oxygen transport, and mainly have a role as storage proteins [2]. Hexamerins are massively synthesized by the larval fat body and secreted in hemolymph. Following cessation of larval feeding in preparation to the larval-to-pupal molt, these proteins are sequestered from hemolymph by the fat body cells, via endocytosis mediated by membrane receptors [3], and stored in the cytoplasm in the form of granules [4]. As such, they can be processed and used as amino acid source for development completion. In line with the idea that the sole function of most hexamerins is to act as amino acid reserves when feeding is no longer occurring, as during the pupal and pharate-adult stages, Roberts and Brock (1981) [5] considered that hexamerins are the essential proteins for metamorphosis, as vitellogenins are to embryogenesis. The importance of hexamerins as amino acid storage proteins during metamorphosis was initially demonstrated by injecting larvae of the dipteran Calliphora vicina with [ 14 C]-phenylalanine that was metabolically incorporated into hexamerin molecules (then called calliphorins), and following the fate of the radioactive carbon isotope. Using this strategy, [6] verified that most of the soluble proteins from practically all tissues of the developing pharate-adults became labeled. In a similar experiment, labeled proteins were recorded not only in adult somatic tissues (integument, thoracic muscle), but also in the egg (chorion, yolk) of Actias luna, a moth that produces its eggs during pharate adult development [7]. A correlation between egg production and depletion of the larval reserve of hexamerins was established in adult lepidopterans unable to eat (without mouth parts) or that feed basically on nectar, a poor protein diet [7][8][9][10] despite containing amino acids of supplemental nutritional value [11]. There is also circumstantial evidence that amino acids held in hexamerins are used for provisioning eggs of non-lepidopteran species, such as, the mosquito Aedes atropalpus, which produces the first batch of eggs without a feeding [12,13], the cockroach Blaberus discoidalis [14], the house fly Musca domestica [15], and the grasshopper Schistocerca americana [16]. The high level of hexamerins stored by Camponotus festinatus queen ants and by certain species of termites was also related to the production of the first batch of brood without access to food during colony founding [17][18][19]. Together, these results indicate that hexamerin residues are recycled to make other proteins needed for tissues reconstruction during metamorphosis and, in some insect species, for egg production. Thus, after hexamerin breakdown in the fat body, the released amino acid residues are reutilized and incorporated into new proteins, although there is also evidence of incorporation of hexamerins into tissues after partial degradation [20] or even without degradation [4,21]. In general, hexamerins disappear from hemolymph within a few days after adult eclosion. Nevertheless, in some insect species they may persist in hemolymph up to the adult stage [14,22]. 
There is also evidence of synthesis reinduction and even de novo synthesis in adults, although at a lower rate [13,23]. A special class of hexamerins, the arylphorins, has received special attention in view of their high content of aromatic amino acids. In fact, arylphorins have long been presumed to be a source of aromatic amino acids for exoskeleton sclerotization in lepidopterans [7,[24][25][26][27]. Hexamerins from Locusta migratoria [28,29] and Melanoplus sanguinipes [30] also play a role as hemolymph juvenile hormone transporters, and the Larval Hemolymph Protein-1 of Calliphora vicina has been confirmed as a low affinity carrier protein for ecdysteroids [4]. Recently, [31] demonstrated that hexamerins interact with other proteins (juvenile hormone binding protein and apolipophorin) in a multiprotein complex engaged in sequestration and transport of juvenile hormone, thus inferring the involvement of hexamerins in regulating juvenile hormone levels and action, even when they do not directly bind to the hormone. Based on the purported ability of binding and controlling juvenile hormone levels, hexamerins have been linked to important facets of social insect life histories. In the termite Reticulitermes flavipes, the role of hexamerins has been associated to the regulation of the juvenile hormone-dependent soldier caste phenotype [32][33][34][35]. Also in honey bee larval development, the inverse relationship between the levels of hexamerin transcripts in the fat body and the juvenile hormone titer suggests that hexamerins may act as players in the juvenile hormone-dependent differentiation of the bipotent female larva towards a queen or a worker phenotype [36]. In the social wasp Polistes metricus, one hexamerin may be involved in caste-specific behaviors and in the regulation of diapause, which is also conditional on a low titer of juvenile hormone [37]. Except for the termite R. flavipes, most of the above mentioned considerations on the roles of hexamerins in social insect life histories are based on correlational or other circumstantial evidence, still requiring experimental confirmation and in-depth analysis at the cellular level. In the highly eusocial honey bee, [38] were the first to characterize a hexamerin subunit in the range of 75-80 kDa. Later, four hexamerin subunits (including the one previously described by Ryan [38]) were distinguished in honey bee hemolymph samples by SDS-PAGE and N-terminal sequencing [39]. Since three of these subunits presented molecular mass in the 70 kDa range, they were named HEX 70a, HEX 70b, and HEX 70c. The other subunit migrated at a rate consistent with a higher molecular mass and was named HEX 110. Studies undertaken in our laboratory led to the characterization of the full-length cDNAs encoding the four honey bee hexamerin subunits. These studies enabled the characterization of the structure of these genes and the prospection of overrepresented sequence motifs indicative of mutual co-regulation in the respective upstream control regions. It was also investigated the evolutionary relationship between the honey bee hexamerins and homologous proteins from other insect species. Furthermore, we characterized the expression patterns of the four hexamerin genes in the fat body and gonads of developing and adult workers, queens and drones, as well as the hormonal-and nutritionaldependent expression of these genes [23,36,40,41]. A honey bee arylphorin, HEX 70a, is the focus of the current work. 
Through RT-PCR (semiquantitative and quantitative) and western blot analyses using a specific antibody we had previously demonstrated that, besides being strongly expressed in the larval fat body, the HEX 70a transcript and protein subunit were also present in the male and female gonads [23]. In the search for a role of this hexamerin in ovaries and testes we designed experiments for its immunofluorescence detection by confocal laser-scanning microscopy. In parallel, a nucleoside analog of thymidine coupled to a dye was used for prospection of dividing cells in developing ovaries. To highlight structural aspects of the gonads at the developmental stages here approached we used rhodamine-phalloidin labeling for F-actin and DAPI-labeling for cell nuclei, in addition to conventional histology. HEX 70a detection in ovarian cell nuclei in pharate-adult workers Ovary sections of a pharate-adult worker show the basic structure of an ovariole stained with methylene blue and basic fuchsin ( Figure 1A), the actin array visualized through rhodamine/ phalloidin staining ( Figures 1B, C), and foci of HEX 70a immunodetected with anti-HEX70a/Cy3 (Figures E, F). DAPI was used to highlight ovarian cell nuclei and to make ovariole visualization easier ( Figures 1B, C, D, F). At this initial stage of pharate-adult development (,1 day after pupal ecdysis), each ovariole consists of a distal terminal filament (not shown) and a proximal germarium. In the germarium the germline cells, or cystocytes, are beginning to be arranged in rosette-like structures (circle in Figure 1A). Each rosette is a cystocyte clone derived from a single cystoblast (oogonium) and will give rise to a single oocyte and the accompanying trophocytes, or nurse cells. To better visualize the structure of the ovariole at this developmental stage (early pharate-adult) we used rhodamine/phalloidin for detection of F-actin, and DAPI to stain the ovarian cell nuclei. In the upper region of the germarium (upper part of Figure 1B) we could visualize the dense actin complex typical of the polyfusomal region in the center of each cystocyte rosette (arrowheads in Figure 1B). In the lower region of the germarium (lower part of Figure 1B and Figure 1C) the polyfusomes were converted into ring canals (arrows in Figures 1B, C) that allow communication among the germline cells, i.e., among the cell destined to be the oocyte and its associated nurse cells. Ovarioles characterized by such structural arrangements, as detailed in Figures 1A-C, were prepared for HEX 70a detection with anti-HEX 70a/Cy3. Figure 1D shows the upper region of the germarium of an ovariole stained with DAPI. Figure 1E illustrates the same ovariole region where foci of HEX 70a can be seen (merged image is shown in Figure 1F). The insert in Figure 1F represents a control ovariole incubated with preimmune serum and stained with DAPI and Cy3. Comparison among Figures 1D-F revels that HEX 70a is localized in the nuclei of the germline cells (cystocytes), in close association with chromatin (arrowheads in Figures 1D-F). Presumptive follicle cells (somatic cells) are not clearly evident at this stage, but were tentatively indicated by arrows in Figures 1D-F. Like the germline cell nuclei, the somatic cell nuclei show HEX 70a foci. 
Colocalization of EdU and HEX 70a in the ovarian cell nuclei of pharate-adult workers EdU is a nucleoside analog of thymidine that incorporates into DNA during the S-phase of the cell cycle, thus allowing the detection of DNA replication for cell division when coupled to a dye (Alexa Fluor 594). EdU was injected in early pharate adults (,1 day after pupal ecdysis). The ovaries were dissected after 24 h and prepared for confocal microscopy. Figures 2A-D show confocal images of one of these ovaries. In Figure 2A the DAPIstaining highlighted the cell nuclei in the base of the ovary and in its constituent ovarioles. Only the germarium region is shown in each ovariole. Figure 2B revealed intranuclear HEX 70a/Cy3 foci spread throughout the ovary. By comparing Figures 2A and 2B we identified regions of DAPI-stained nuclei in the ovarioles (germarium) without HEX 70a/Cy3 foci. Therefore, HEX 70a is not present in every ovarian nuclei. Figure 2C revealed EdU incorporation in S-phase nuclei. In a comparative analysis, the Figures 2B, C and the merged image seen in Figure 2D revealed that the nuclei labeled with EdU/Alexa Fluor also show HEX 70a/Cy3 labels, suggesting that HEX 70a may be somehow involved in the S-phase events leading to cell proliferation in ovarioles. However, HEX 70a has a nuclear localization even in cells outside the S-phase, since the overlap between HEX 70a/ Cy3 and EdU/Alexa Fluor labels is not complete: for example, the nuclei showing HEX 70a immunofluorescence at the right margin of the ovary in Figure 2B do not show EdU fluorescence ( Figures 2C, D). Expression of HEX 70a in ovarioles of egg laying queens HEX 70a foci were also detected in ovarioles dissected from adult queens. Figure 3A shows a schematic representation of an ovariole of an egg-laying queen. The ovariole consists of a narrow distal region, the terminal filament, an intermediate region, or germarium, and a proximal region, the vitellarium. The terminal filament contains typical coin-shaped somatic cells and putative germline stem cells [42]. Cystocyte clusters are observed in the upper region of the germarium, and in the lower region there are growing oocytes associated with the polyploid nurse cells. In the upper region of the vitellarium ( Figure 3A), nurse cell and oocyte chambers forming the pre-vitellogenic follicles are visible. The lower region of vitellarium is the largest region of the ovariole (shown in Figure 3B) and consists of a sequence of growing oocytes involved by a layer of follicle cells (arrowheads) interspersed with nurse cell chambers (arrows). In this region, the oocyte reaches its maximum size, the nurse cells collapse, the chorion is formed and the egg is finally released into the oviduct. Figures 3C-E shows the lower region of the terminal filament. In this region, HEX 70a is strongly associated with cell nuclei, but foci of HEX 70a in the cytoplasm of filament cells were also noticed ( Figure 3D, E, arrows). HEX 70a was also localized in the nuclei of the nurse cells ( Figures 3F-H), as well as in the nuclei of the somatic follicle cells ( Figures 3I-K), which cover the oocyte. In both cell types, HEX 70a has exclusively an intranuclear localization, but with a very distinct pattern of foci size and distribution. HEX 70a foci are small and scattered all over the nuclei of the polyploid nurse cells and are larger and concentrated in defined nuclear areas in the proliferating follicle cells. 
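The incomplete overlap between the EdU and HEX 70a labels reported above is, in essence, a statement about two sets of nuclei: every EdU-positive nucleus also showed HEX 70a foci, while some HEX 70a-positive nuclei lacked the EdU label. The toy Python sketch below, with invented per-nucleus identifiers, makes the two directions of that comparison explicit; it does not reproduce an analysis performed in this study.

```python
# Toy sketch: compare EdU and HEX 70a labels over the same set of nuclei.
# The per-nucleus labels below are invented; they only illustrate the kind of
# asymmetric overlap described in the text (EdU+ nuclei form a subset of HEX 70a+ nuclei).
edu_positive = {"n01", "n02", "n03"}
hex70a_positive = {"n01", "n02", "n03", "n07", "n09"}

both = edu_positive & hex70a_positive
print(f"EdU+ nuclei that are also HEX 70a+: {len(both)}/{len(edu_positive)}")
print(f"HEX 70a+ nuclei that are also EdU+: {len(both)}/{len(hex70a_positive)}")
print("EdU+ is a subset of HEX 70a+:", edu_positive <= hex70a_positive)
```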
Effect of anti-HEX 70a injection on ovariole width and cuticle sclerotization To strengthen the hypothesis that HEX 70a is involved in ovariole cell proliferation we injected 24 h-queen pupae with anti-HEX 70a (diluted in 0.9% NaCl) and measured the width of the ovarioles soon after the adult ecdysis, under the expectation that the specific antibody would reduce HEX 70a activity and, thus, result in smaller ovarioles. Figure 4A shows that the antibody injection significantly hampered ovariole growth (p = 0.002) in comparison with control queens injected with the vehicle only. In parallel, 24 h-worker pupae were also injected with anti-HEX 70a and the effect of this antibody on the hemolymph HEX 70a levels was examined. Western blots revealed a reduction of 54% (estimated by densitometric assessment in arbitrary units obtained from HEX 70a bands normalized to the ,200 kDa lipophorin loading control) in the levels of HEX 70a 4 h after injection of the antibody, followed by recovery to normal levels within 72 h ( Figure 4B). Given that HEX 70a is an arylphorin, and as such, it may represent a source of aromatic amino acids for cuticle formation, we also checked the progress of pigmentation and sclerotization in anti-HEX 70a-injected workers, comparing them to two control groups, injected with mouse IgG or only with the antibody vehicle. Anti-HEX 70a injection produced a drastic effect on cuticle formation. This effect was more evident in the cuticle of the hind legs that were not fully pigmented and sclerotized. In anti-HEX 70a-treated bees, the hind leg cuticle is clearer and softer than in the control groups ( Figure 4C). Taken together, the data shown in Figure 4 are consistent with the proposed participation of HEX 70a in ovariole cell proliferation, confirmed that the antibody is effective in reducing HEX 70a levels, and furthermore, confirmed that HEX 70a is a genuine arylphorin with a role in cuticle formation (in addition to being a nuclear protein in the gonads). Expression of HEX 70a in the testes HEX 70a was also detected in the germ and somatic cells of developing testes. Figure 5A shows a cross section of the upper portion of a testiole dissected from a drone pupa (1 day after pupal ecdysis). In this region we could observe cysts, i.e., groups of germ cells (cystocytes or spermatogonia: arrows in Figure 5A) housed within a somatic cell envelope (somatic cell nuclei pointed by arrowheads in Figure 5A). Confocal microscopy on rhodamine/ phalloidin-labeled F-actin (green) and DAPI-labeled cell nuclei (blue) ( Figure 5B) highlighted the structure of this region of the testiole. F-actin is an abundant component of the somatic cell cytoplasm, and is also present in the ring canals (asterisks in Figure 5B) that enable mutual communication for the germ cells. Comparison of Figures 5C-E revealed foci of HEX 70a mainly in the nuclei of the germ cells (thick arrows in Figure 5E) and somatic cells (arrowheads in Figure 5E), but also dispersed in the cytoplasm of the germ cells (thin arrows in Figure 5E). The small volume of cytoplasm in the somatic cells impairs the accuracy in identifying possible cytoplasmic HEX 70a foci in the confocal images. Sections of the lower region of testioles dissected from drones at an intermediate phase of the pharate adult development (,6 days after pupal ecdysis) showed syncytial clusters of elongating spermatids ( Figures 6A, C, arrows). Actin cones were seen assembled around the tip of the spermatid nuclei ( Figures 6B, D, arrows). 
Figure 6E shows DAPI-stained nuclei of spermatids in syncytial clusters (arrows) and of somatic cells (arrowheads). In Figure 6F, which is a preparation stained with anti-HEX 70a/ Cy3, and in the merged image ( Figure 6G) we could verify that HEX 70a was strongly localized to the posterior extremity of the spermatid nuclei ( Figures 6F, G inserts), as well as in the nuclei of individualized spermatozoa (arrows in Figures 6F, G) and somatic cells (arrowheads in Figures 6F, G). Discussion HEX 70a in oogenesis and spermatogenesis of the honey bee Herein we show that the honey bee HEX 70a is localized in the nuclei of ovarian and testis cells, thus implying in a yet undescribed role for this hexamerin. In its native structure, HEX 70a is an oligomer (data not shown). Similar to other proteins, HEX 70a may be acting in the nucleus in the monomeric form, as recently reported for royalactin, a 57 kDa monomer that functions as a caste determining factor in the honey bee. Royalactin forms the oligomere MRJP1, a member of the Major Royal Jelly Protein family, which is present not only in royal jelly secreted by the worker hypopharingeal glands, but also in hemolymph and other tissues of the honey bee [43]. HEX 70a fulfills all the criteria established for classification as a storage hexamerin. It has the three canonical hemocyanin domains (N: PF03722.5, M: PF00372.10 and C: PF03723.5 -Pfam database, [44]), which are typical of all hexamerins. It is massively synthesized by the fat body during the larval feeding stage and abundantly stored into larval hemolymph, remains in high quantity in pupal and early pharate-adult hemolymph, and subsequently becomes less abundant [23,39]. This feature is in conformity with the role in providing amino acids for pupal and pharate adult development, just like the other hexamerins. Furthermore, it contains a high proportion (18.2%) of aromatic amino acids, which makes it a member of a subclass of hexamerins, the arylphorins. HEX 70a is likely used for adult cuticle construction. As demonstrated herein, the inactivation of HEX 70a in vivo by injecting anti-HEX 70a into worker pupae visibly hampered the process of adult cuticle formation. Interestingly, the experimental decrease in HEX 70a in hemolymph provoked through antibody-injection was sufficient to affect cuticle formation, despite the presence of another arylphorin, HEX 70b, in hemolymph at this stage [40]. This indicates that HEX 70a, or the amino acids derived from its hydrolysis, have essential participation in cuticle formation. Previous experimental evidence in our laboratory had already indicated that HEX 70a is a multifunctional protein. By means of semiquantitative and quantitative RT-PCR and Western blot analysis using anti-HEX 70a, we could show that the fat body is not the only site of HEX 70a production, as the transcript and the corresponding protein subunit were also detected in developing gonads of workers, queens and drones, suggesting roles in ovary differentiation and testes maturation. HEX 70a transcripts and protein subunits were also detected in the ovaries of adult queens (but not in the worker bee hypopharyngeal glands) [23]. Following up on this question, the immunodetection of HEX 70a in the gonads now evidenced an association of this protein with nuclei of germline and somatic cells. Such localization was completely unexpected for a storage protein, implying regulatory or structural roles in the nuclei. 
The nuclear colocalization of HEX 70a with the S-phase marker EdU furthermore indicated that HEX 70a may play a role in DNA replication for cell proliferation or polyploidization. However, there are also ovariole cell nuclei showing HEX 70a immunofluorescence, but not EdU fluorescence (the reverse was not observed). This does not exclude a possible HEX 70a role in cell proliferation, but may indicate that HEX 70a does not have an exclusive role in the S-phase of the cell cycle, or that the stability of the protein within the nuclei is not restricted to the S-phase. The hypothesis that HEX 70a is involved in cell proliferation received support from experiments where anti-HEX 70a antibody was injected into queen pupae, revealing negative effects on ovariole enlargement, which likely occurs via cell proliferation. Consistent with this hypothesis, HEX 70a was localized in the nuclei of the cystocytes in the ovaries of early pharate-adult workers. Cystocytes are mitotically active, as shown here by EdU labeling, and through BrdU (5-bromo-29deoxy-uridine) labeling [42]. Each cystocyte proliferates to form a clone of about 48 or more cells [45,46] which is arranged as a rosette and contains a germline-specific organelle, the polyfusome [47]. Actin was shown to be a prominent fusome marker in the center of the rosettes [48,49]. Only later in development will one cystocyte in each rosette enter meiosis and begin to grow and then become morphologically distinguishable from the nurse cell-destined cystocytes. As the oocyte differentiates, the rosettes are gradually transformed into initial follicles, with the fusomes being converted into the ring canals that connect the developing oocyte with the nurse cells, and the nurse cells with each other. Each growing oocyte/nurse cell cluster becomes surrounded by somatic follicle cells and will be partitioned into an egg chamber, where oogenesis and vitellogenesis proceed, and a trophic chamber (or nurse cell chamber) [42,46,50]. Whilst this is the common pattern in queens, progressive oogenesis in workers it will only take place if they are released from the repressor effect of queen pheromone [51]. Different from the oocyte, which enter meiosis and remains transcriptionally silent, nurse cells undergo a series of endomitotic cycles [46,52]. This characteristic, typical of the meroistic ovary, is an evolutionary strategy to increase the synthesis of material and organelles at a high rate during oogenesis, and export them to the growing oocyte through the ring canals [53,54]. During oogenesis of the honey bee, the somatic follicle cells become a thick epithelium around the growing oocyte and a flattened cell layer around the joined nurse cells [55]. To account for the intense oocyte growth during oogenesis and vitellogenesis, the follicle cells that surround the oocyte must undergo several rounds of mitotic divisions. Unpublished data from our laboratory (Macedo LMF, personal communication) documented the significant increase in follicle cell number in the growing follicles of the honey bee. Consistent with a role in DNA replication, HEX 70a was localized in the polyploid nuclei of nurse cells and in the proliferating follicle cells covering the growing follicle in queen ovarioles. The pattern of HEX 70a foci in the nucleus, however, is distinct for nurse and follicle cells, perchance reflecting their respective physiological status. 
HEX 70a was also localized in the terminal filament cells where mitotically active BrdU-labeled nuclei, probably stem germline cell nuclei, were demonstrated by Tanaka and Hartfelder (2004) [42]. Interestingly, only in this ovariole region were we able to distinguish HEX 70a foci in the cytoplasm in addition to the nuclear focal spots. We were unable to localize HEX 70a in the nuclei of meiotic oocytes. Intranuclear foci of HEX 70a were also detected in the germ and somatic cells of the male gonad during its early and late development. Unambiguous cytoplasmic foci of HEX 70a were observed only in the earlier stages of testis development and in the terminal filament of the ovarioles. As spermatogenesis and oogenesis progress, the foci of HEX 70a become exclusively intranuclear. In newly-ecdysed drone pupae, the clusters of dividing secondary spermatogonia, also termed cystocytes, become enveloped by actin-rich somatic cells, and in the interior of these cyst capsules they develop into spermatocytes, which then initiate the meiotic division [52,56]. Within the cyst, the germ cells remain connected by cytoplasmic bridges, the ring canals, similar to what is seen in the ovarioles. Thus, the presence of HEX 70a in the

room temperature. Ovaries and testioles were then incubated in DAPI (4′,6-diamidino-2-phenylindole), 1:8000 v/v (Sigma), in 0.1% TPBS for 5 min and then rinsed five times in 0.1% TPBS. Slides were mounted in 80% glycerol (Merck) and examined under a Leica TCS-SP5 confocal microscope (Leica Microsystems). EdU and HEX 70a colocalization. Newly ecdysed worker pupae (Pw phase) collected from hives and kept in an incubator at 34°C and 80% relative humidity for 24 h were injected with 1 μl of a 40 mM 5-ethynyl-2′-deoxyuridine (EdU, Click-iT™ EdU Imaging Kits, Invitrogen) solution in Ringer saline. The injection was administered into the abdominal hemocoel. After 24 h the injected bees were dissected for extraction of the ovaries. The ovaries were fixed in 3.7% formaldehyde in PBS 1 for 30 min and subsequently transferred to the Click-iT™ EdU Imaging Kits reaction mixture (43 μl 10× reaction buffer; 38 μl distilled water; 20 μl copper sulphate; 1.2 μl Alexa Fluor 594; 50 μl reaction buffer additive), where they remained for 30 min. The permeabilization and HEX 70a localization were performed as described above. Effect of anti-HEX 70a on hemolymph levels of HEX 70a and on cuticle sclerotization Treatment of workers and queens with anti-HEX 70a. Newly ecdysed queen and worker pupae (Pw phase) were collected from hives and maintained in an incubator at 34°C and 80% relative humidity for 24 h before receiving an injection of 1 μl (1 μg) of the anti-HEX 70a antibody, diluted in 0.9% NaCl, into the abdominal hemocoel. Controls received 1 μl of 0.9% NaCl or 1 μl (1 μg) of mouse IgG (ECL™ Western Blotting Analysis System, Amersham Biosciences) in 0.9% NaCl. The injected queens and workers, and their respective control groups, were maintained in the incubator until the adult ecdysis. Since HEX 70a is an arylphorin, and as such it may be implicated in cuticle formation, the progress of pigmentation and sclerotization was followed daily until adult ecdysis. Following adult ecdysis of the control worker bees, the hemolymph was collected from both worker groups (control and experimental) for Western blot analysis to assess the levels of free HEX 70a. Queens had their ovarioles dissected soon after adult ecdysis and stained with DAPI for measurement of width. Western blot.
The hemolymph samples from the newly ecdysed adult workers injected with anti-HEX 70a or with saline vehicle only were centrifuged at 2,000×g for 1 min at 4°C. Total protein was quantified [69] in the supernatants, and samples containing 5 μg of total protein were used for electrophoresis under denaturing conditions [70], carried out at 15 mA and 4°C using 7.5% polyacrylamide gels (100×120×0.9 mm). Following electrophoresis, the proteins were transferred to nitrocellulose membranes (Immun-Blot™ PVDF Membrane). The membranes were stained with Coomassie Brilliant Blue (CBB) to check migration of hemolymph proteins and molecular mass markers (205, 116, 97.4, 66, 45 and 29 kDa, Sigma). Non-specific binding sites were blocked by incubating the membranes for 16 h with 10% non-fat dried milk in PBS 2 (50 mM Tris, 80 mM NaCl, 2 mM CaCl2, pH 8.5). HEX 70a subunits were detected by incubating the membranes for 1 h, at room temperature, with anti-HEX 70a antibody diluted 1:5,000 in 10% non-fat dried milk in PBS 2. The membranes were washed thoroughly in 0.05% Tween 20 in PBS 1 (0.05% TwPBS) and subsequently incubated for 1 h with a horseradish peroxidase-labeled anti-rabbit IgG secondary antibody (Amersham Biosciences), diluted 1:12,000 in 0.05% TwPBS. After washing in 0.05% TwPBS, detection was carried out using the ECL System (ECL™ Western Blotting Analysis System, Amersham Biosciences). The constitutively expressed ~200 kDa hemolymph lipophorin identified in the CBB-stained nitrocellulose membranes was used as a loading control. Measurements of ovary width. Ovaries from HEX 70a antibody-injected queens and from 0.9% NaCl-injected controls were fixed in 3.7% formaldehyde in PBS 1 for 30 min and incubated in DAPI (1:400 dilution) in 0.1% TPBS for 5 min. After rinsing five times in 0.1% TPBS, the ovarioles were mounted in 80% glycerol for analysis in a Leica TCS-SP5 confocal microscope system. Ovariole width was measured using the software LAS AF Lite 2.4.1 (Leica Microsystems).
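As a rough illustration of the loading-control normalization described above, in which each HEX 70a band is divided by the ~200 kDa lipophorin band from the same lane before antibody-injected and vehicle-injected bees are compared, the following Python sketch shows the arithmetic; the band intensities are hypothetical densitometric readings in arbitrary units, not values from the study.

# Minimal sketch of the densitometric normalization used for the Western blots.
# All band intensities below are hypothetical arbitrary units.

def normalized_hex70a(hex70a_band: float, lipophorin_band: float) -> float:
    # Normalize a HEX 70a band to the ~200 kDa lipophorin loading control.
    return hex70a_band / lipophorin_band

# Hypothetical readings: a vehicle-injected control lane and a lane sampled
# 4 h after anti-HEX 70a injection.
control = normalized_hex70a(hex70a_band=1200.0, lipophorin_band=1000.0)
treated = normalized_hex70a(hex70a_band=550.0, lipophorin_band=990.0)

percent_reduction = 100.0 * (1.0 - treated / control)
print(f"HEX 70a reduction relative to control: {percent_reduction:.0f}%")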
Magnetoencephalographic Signals Identify Stages in Real-Life Decision Processes We used magnetoencephalography (MEG) to study the dynamics of neural responses in eight subjects engaged in shopping for day-to-day items from supermarket shelves. This behavior not only has personal and economic importance but also provides an example of an experience that is both personal and shared between individuals. The shopping experience enables the exploration of neural mechanisms underlying choice based on complex memories. Choosing among different brands of closely related products activated a robust sequence of signals within the first second after the presentation of the choice images. This sequence engaged first the visual cortex (80-100 ms) and then, as the images were analyzed, predominantly the left temporal regions (310-340 ms). At longer latency, characteristic neural activation was found in motor speech areas (500-520 ms) for images requiring low salience choices with respect to previous (brand) memory, and in right parietal cortex for high salience choices (850-920 ms). We argue that the neural processes associated with the particular brand-choice stimulus can be separated into identifiable stages through observation of MEG responses and knowledge of functional anatomy. INTRODUCTION Traditionally, laboratory research into cognitive processes has attempted to simplify the context. Thus, decisions to be made, or items to be recognized, learned, or remembered, are separated from the normal complex experiential web within which perception takes place and memories are made and acted upon in day-to-day life. This tradition, which dates back to Ebbinghaus, has been followed in most of the more recent approaches to imaging the neural processes engaged in decision-making, such as in simple lexical decision tasks (e.g. Embick et al., 2001), or go/no-go decisions in movement tasks (e.g. Filipovic et al., 2000). The reasons are straightforward: decisions based on individual autobiographic experience are, by definition, specific. Prior personal experiences, which help shape real-life decision-making, will nearly always involve episodic, semantic, and even procedural elements, and are therefore by their nature idiosyncratic. Thus, it is difficult to devise an experimental procedure to investigate the neural correlates of such real-life situations that can be applied equivalently across many subjects. Nonetheless, to have such paradigms is highly desirable as it becomes more and more clear that the mechanisms underlying decision-making, in which diverse factors such as social context or differences in personality may be important, should be studied in contexts related to everyday life situations (e.g. Eysenck & Keane, 2000, 486-487; Oaksford, 1997). Some interesting studies into decision-making related to real-life situations have been published. One imaging study (Maguire et al., 1997) investigated the neural mechanisms engaged when London taxi drivers plan/decide which route to take for a given destination. Emotional decisions have been studied in relation to the subject's state of anxiety using unpleasant linguistic stimuli (Tabert et al., 2001). The approach pursued here draws on the observation that in industrialized societies, common experiences of urban dwellers can be exploited to provide stereotyped experiments with a broader 'real-life' context.
Thus, most adults have some experience of supermarket shopping and choosing specific products and items from an array of competing brands. A variety of individual factors, such as age, gender, financial constraints, familiarity with the displayed items, advertising, and previous experience and/or preference for certain brands/products, influence their behavior. Although an earlier MEG investigation of the neural correlates of advertising stimuli has been reported (Ioannides et al., 2000), the present paper is, to our knowledge, the first attempt to study the neural systems associated with the very moment that a consumer choice is being initiated and/or made. This moment is defined here as the onset of a stimulus requiring subjects to make a consumer choice. The overall aim is to explore whether the sequence of MEG responses can reveal the recruitment of the generic systems needed to effect the choice. The link with choice behavior is made through correlation between the signals and a behavioral measure, salience, calculated on the basis of prior behavioral responses. The responses were obtained from a questionnaire (as commonly used in advertising/marketing, e.g. Ambler & Burne, 1999) about brand familiarity and brand preference and are presumed to reflect the combined effect of all these potential factors affecting choice. Subjects Eight right-handed, healthy, native-English-speaking adults (4 females and 4 males), aged between 30 and 63 years, participated in this study. Subjects had normal or corrected-to-normal vision and signed an informed consent form before the experiment (Helsinki Declaration). The procedures were derived from a pilot study on a larger number of volunteers, during which no brain signals were recorded. All experiments were carried out in the Low Temperature Laboratory of Helsinki University of Technology. The tasks were delivered in the form of video clips projected on a screen within a magnetically shielded room. Participants were seated under the helmet-shaped detector at a distance of about 80 cm from the presentation screen. The subjects used their right hands to press keys on a small keypad according to task condition. Standard verbal instructions were given before each task. There was no emphasis on speed, but participants were asked to press the key as soon as they had reached a decision. Choice task Participants were presented with footage of the interior of a familiar supermarket in England where all subjects shopped, at least occasionally. The footage comprised 18 scenes of walking along the aisles and shelves. Each scene showed a selection of common consumer items belonging to a certain category (e.g. dairy products, soft drinks) as placed on the shelves on the day of filming. The scenes cued subjects on the category of products that would be shown in five static images after each video scene (Fig. 1). These images, constituting the actual stimuli, were shown for 5 sec each, followed by a 3 sec inter-stimulus-interval. Each image showed three products of the relevant category arranged in a row of items on a neutral background (e.g. three types of butter). A total of 90 (18 × 5) one-out-of-three choices were to be made (90 × 3 = 270 items). The video lasted about 18 min. Participants were asked to indicate, after each image, which of the three items they would purchase if given the choice. (Fragment of the Fig. 1 caption: c) Example stimuli used in the color control task (colored shapes were used in the experiment). All images had the same size on screen.)
Subjects expressed their choice by pressing with their index, middle, or ring finger, corresponding to the left, middle, or right item shown in the image. They were instructed to press with their thumb when they felt they could not make a choice. Subjects were asked to ignore price differentials and were informed that they would be given a shopping voucher (GBP 50), which could be used to purchase products selected during the MEG experiment. Questionnaire At the end of the experiments, participants filled out a questionnaire on which they used a 5-point scale to indicate their familiarity with or usage of each of the 270 consumer items. With this ordering, the questionnaire could not have an impact on the MEG recording. For each subject, these questionnaire returns were used to calculate a measure S of the salience of a chosen item within the context of a given image according to S = Vc - (V1 + V2)/2, where Vc represents the questionnaire score of the item chosen and V1 and V2 represent the scores of the non-chosen items. S ranges from -4 to 4. The maximum is achieved if the item chosen (Vc) scores 5, whereas the two non-chosen items (V1 and V2) each score 1. The minimum signifies the reverse situation. Example: a subject chooses the still mineral water [middle] in the left image of Fig. 1A. Her questionnaire gives a score of 4 for this item and a score of 2 for the right item. For subsequent analysis, data epochs from the choice task were median split into two groups according to decreasing values of S, the groups being denoted high and low, respectively. In these data, the group of high salience trials is mainly comprised of images for which the score of the item chosen, Vc, is higher than the scores corresponding to the non-chosen items V1 and V2 (90%). It also contains trials for which Vc = V1 and Vc > (V2 + 1). The low group contains all images with equal scores, or nearly equal scores (Vc = V1 and Vc = V2 + 1; in total 70%), and images in which at least one item has a higher score than the one chosen. Height control task In this task, participants were presented with a random sequence of 60 (5 sec presentation, 3 sec inter-stimulus-interval) images drawn from those used in the choice task, this time without interleaving scenes of the supermarket. The task was to indicate which of the three items (left, middle, or right) was the shortest, again by pressing one of the respective keys (pressing with the thumb was possible if subjects felt they could not discriminate between the heights of the items). The images were selected such that the shortest item could be discriminated easily, and there was an equal probability of the shortest item being in any one of the three positions. The presentation lasted about 8 min. The use of 60 images was based on the pilot study, which had suggested a higher frequency of no-choice responses than actually occurred. Color control task In this task, participants were presented with a random sequence of 60 images (5 sec presentation, 3 sec inter-stimulus-interval) showing three simple geometrical objects arranged in a row. The task was to indicate which of the three objects (left, middle, or right) was red, again by pressing one of the respective keys. A small proportion of images did not feature an object in red, where subjects had to press with their thumbs. The experiment lasted about 8 min.
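To make the salience computation and the subsequent median split concrete, the following Python sketch implements the measure S defined in the Questionnaire section above; the questionnaire scores in the example are hypothetical 5-point ratings, and the rule assigning trials that tie the median to the low group is an assumption, since the tie-handling is not stated.

# Sketch of the salience measure S = Vc - (V1 + V2)/2 and the median split
# into high- and low-salience trials. Example scores are hypothetical.
import statistics

def salience(v_chosen: float, v_other1: float, v_other2: float) -> float:
    # Salience of the chosen item relative to the two non-chosen items.
    return v_chosen - 0.5 * (v_other1 + v_other2)

# One (Vc, V1, V2) triple of questionnaire scores per trial (hypothetical).
trials = [(5, 1, 1), (4, 2, 2), (3, 3, 3), (2, 4, 5), (5, 2, 1), (3, 3, 4)]
scores = [salience(*t) for t in trials]

median_s = statistics.median(scores)
# Trials at or below the median go to the low group (assumed tie rule).
high = [t for t, s in zip(trials, scores) if s > median_s]
low = [t for t, s in zip(trials, scores) if s <= median_s]
print(f"median S = {median_s}; high-salience trials: {high}; low-salience trials: {low}")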
Data acquisition Neuromagnetic responses following image presentation were recorded using a VectorView™ MEG system (Hämäläinen, 1997), which is based around a helmet-shaped array of 102 pairs of first-order gradiometer sensors. The outputs of each pair of detectors are most sensitive to the tangential current flow in the region directly below the detectors. The local root-mean-square (rms) signal summed over the two readings is a measure of the strength of that current. Electronic markers on the video tape were frame-locked to the appearance of each image and fed into the data acquisition system for synchronization. The data were sampled at 600 Hz (0.01 to 200 Hz anti-alias filter). We controlled for artifacts by recording the electro-oculogram and the electrocardiogram. We identified movement of the head by measuring the head's position before and after each experiment. Time-series analysis For each subject, all epochs were averaged according to task and high-low salience conditions within the interval -200 to 1000 ms (t = 0 denotes stimulus onset). This gives a total of five types of average evoked response for this study (choice, height, color, choice-low-salience, and choice-high-salience). Before the analysis, average signals were further filtered (0.2 to 30 Hz) and normalized to the signal variance within a baseline interval (0 to 200 ms before stimulus arrival). The variations in the baseline signal variance were small across tasks and conditions. Significant differences between pairs of evoked responses were sought using a time-dependent measure P(t). This measure identifies latencies where, across the subject group as a whole, significant differences between the responses from two types of evoked response are found. For each latency for which a significant difference was identified, spatial maps of root-mean-square signals were calculated as measures of the corresponding neuronal activity. The measure has been used successfully by Braeutigam et al. (2001a). In its definition, N = 204 denotes the number of channels, prob is the significance level of the quantity in brackets (Batschelet, 1981), and wi is the level of significance of a paired Wilcoxon test (Conover, 1980) of the pairs of evoked responses from all subjects in the ith channel. Source analysis Anatomical MRI scans were available for four male subjects. We calculated source estimates only for these four individuals. Both equivalent current dipole calculations and current density estimation were performed using Curry™ software (Curry, 1999). We obtained all density estimates using a minimum norm algorithm with L-curve regularization, restricting currents to (reconstructed) cortical surfaces within a best-fit spherical volume conductor. Before source estimation, the data were preprocessed using the same baselines as those used for the time-series analyses and subjected to a principal component analysis. The cortical source components obtained in this way best describe the observed differences between signals. Based on the data available, an assessment of significance in source-space was not possible. In addition, identification of possibly deep sources could not be consistently achieved. Behavioral responses All eight subjects completed the three tasks successfully with a negligible number of missed key presses. Subjects maintained a constant head position within the accuracy of the positioning system (<5 mm). A small number of epochs (<1%) had to be rejected because of heart and/or eyeblink artifacts.
In total, participants positively chose an item in 74% of all stimuli, pressing the appropriate key on average 2620 ms after stimulus onset. Overall, the results of the questionnaire mirrored the actual choices made during the experimental run in that items chosen scored significantly higher than nonchosen items (2.9 and 1.9 respectively, p <0.01). The distinction between low and high salience stimuli was reflected in a significant difference of 570ms in average response times (p <0.01). Subjects responded faster to high than to low salience stimuli. Note that the number of non-choice key presses (thumb) significantly anti-correlated with questionnaire scores across product categories ( -0.51, p <0.05). Despite this finding, the occurrence of non-choices was too varied across subjects to allow further behavioral or magnetoencephalographic analysis of these trials. In both control tasks, decision-making was significantly faster than in the choice condition (p<0.01). Subjects responded by pressing the appropriate key 860 ms and 660 ms after stimulus onset in the height and color tasks, respectively. The occurrence of 'cannot discriminate' presses was negligible in both control tasks (< 0.5%). Overview: Evoked responses Based on this initial observation, statistical analysis using the measure P(t) identified four characteristic differential effects, for which neural activity is modulated by either task condition or salience. All effects are robust in signal-space, supported in at least six out of eight subjects (all four effects were present in five subjects), and statistically significant across the group of subjects. In the choice task, the first appreciable evoked responses were observed over occipital primaryvisual cortices at about 100 ms after presentation of the images. Consistent evoked responses were observed at latencies up to about 200 ms over extra-striate and parietal regions for a given subject, but the responses varied between subjects. At longer latency, where inter-subject variability was greater, evoked activity is seen in rough sequence over anterior temporal, pre-frontal, frontal, occipital, and parietal areas during the period from 300 ms to about 1000 ms after stimulus onset ( Fig. 2A). At about 1100 ms evoked responses begin to decrease rapidly. The consistency of neural activity with respect to the inducing stimulus is lost, and the evoked responses become too weak to be analyzed (Fig. 2B). In both control tasks, evoked responses were broadly similar to the choice tasks for latencies up to about 300 ms; starting over occipital primaryvisual cortices and extending to extra-striate, parietal, and left anterior-temporal regions. Subsequently, at around 400 ms, evoked activity decreased rapidly, consistent with a variety of studies requiring the discrimination of (complex) visual stimuli (e.g. Swithenby et al., 1998). At longer latency (>500 ms), evoked responses were associated with the motor activity required by the ensuing key presses, which had been made much earlier (860 and 660 ms, respectively) in the control tasks than in the choice condition. Such responses were highly varied across subjects and without any (stimulus-locked) pattern. 3.3 Differential responses: Choice versus control tasks The measure P(t) was used to identify intervals in time when the evoked responses elicited by the choice tasks differed significantly (P(t) <0.01) from the two control tasks across subjects. The analysis was restricted to 0 to 450 ms after stimulus onset. 
At longer latency, comparisons are impossible to interpret because of the onset of the motor activity found in the control tasks. Two intervals, V and T, were identified where the responses were significantly modulated by the task conditions (see Fig. 3 and Fig. 4; V and T respectively; these labels correspond to the location of the main effects in each interval, i.e. primary visual and temporal cortices). V: Neuronal activity observed over primary visual cortices at around 90 ms after stimulus onset. This was consistent with a localized (dipolar) source in primary visual cortex, within the limitations of the source analysis. Signal amplitudes were highest in the choice task, second highest in the height control task, and weakest in the color control task. The differences between the evoked amplitudes elicited by the two control tasks are also significant. T: Neuronal activity over left temporal cortices at around 325 ms after stimulus onset. The bulk of this activity is generated in left anterior temporal cortices, extending, to a variable degree, to ventral and medial temporal areas. Some generators were also found in left superior/middle frontal gyri, orbital gyri, and right extra-striate cortex. Within this latency range, signal amplitudes following presentation of the images were higher when choosing an item as opposed to either determining the shortest item or the red item. When comparing the two control tasks, we found no such effect. Fig. 2 caption: A) Local rms-signal at latency 100 to 900 ms after the choice stimulus (the rms-signal has been summed over an interval of 10 ms). The sequence shown is based on data from two subjects (first subject 100-400 ms; second subject 500-900 ms) to illustrate the significant stages of neural activity. Due to inter-subject variability, not all regions mentioned in the text are visible. For the presentation of data, the helmet-shaped array of detectors has been projected into two dimensions (left ear on the left, front at the top). Inset: the helmet-shaped array of detectors; each plate symbolizes two orthogonal, first-order gradiometers most sensitive to directly underlying neural currents. B) Global rms-signal (arbitrary units) after the choice stimulus, summed over all subjects and channels. Differential responses: High versus low salience stimuli (choice task) The measure P(t) was used to identify intervals in time when the evoked responses elicited in the choice tasks differed significantly (P(t) < 0.01) between the high and low salience images measured across subjects. The analysis was restricted to 0 to 1000 ms after stimulus onset, the range of latencies when most of the stimulus-locked evoked activity is seen. Two intervals, F and P, were identified when the responses were significantly modulated by item salience conditions (see Fig. 3 and Fig. 4; F and P respectively; these labels correspond to the location of the main effects in each interval, i.e. frontal and parietal cortices). Fig. 3 caption: Summary of differential effects across tasks and conditions. The spatial distribution of significance is shown in the top row, where gray-scale coding symbolizes the number of channels and time points (within a given interval) for which wi(t) < 0.01 holds. The number of subjects (out of 8) is given in which each differential effect is observable. V: the same effects are observed in both the choice versus color and height versus color comparisons. T: the same effect is observed in the choice versus color comparison.
F: Neuronal activity observed over left inferior frontal cortices at around 510 ms after stimulus onset. Within this latency range, signal amplitudes following low-salience stimuli were higher than those following high-salience stimuli. The results of the source analysis suggest that this activity is generated predominantly in cortical regions homologous to Broca's speech area, together with some evidence for activation of secondary visual cortices. A possible contribution from the cerebellum or brainstem, as suggested by the significance plot, could not be resolved. P: Neuronal activity observed over right parietal cortices at around 885 ms after stimulus onset. Within this latency range, signal amplitudes following high-salience stimuli were higher than those following low-salience stimuli. Source estimates suggest a strong contribution from right posterior parietal cortices. At this latency, non-differential generators in secondary visual, extra-striate, orbital, and cerebellar regions are also involved to varying degrees across subjects. No salience-related effects were found at latencies before F, i.e. the choice evoked responses associated with differential effects V and T above are insensitive to the high-low salience distinction. Within the limitations of the source analysis, there is some overlap between the frontal generators associated with T and F, but the frontal sources for F are clearly more lateralized than those identified for T. Fig. 4 caption: Source estimates. The images have been selected to illustrate those main source locations which are likely to give rise to the differential effects, and which have been consistently identified across subjects, i.e. occipital (V; dipolar sources), left temporal (T), left lateral frontal (F), and right posterior parietal (P). These source areas are consistent in at least three out of the four subjects for which MRIs were available. At long latency, other areas may become strongly active, but there is considerable inter-subject variability. The putative functional significance has been indicated. It is noted that, due to variability, not all generators mentioned in the text are visible. For the purpose of presentation, each view has been rotated by a few degrees independently. DISCUSSION Eight subjects completed an experiment designed to study the neural correlates of a type of common real-life behavior: making purchasing decisions for common consumer items during supermarket shopping. Such decisions are individual and the outcome of a variety of factors, including prior experience (learning and memory), the effects of advertising and financial constraints, as well, perhaps, as more transient processes such as mood, season, or time of day. Age, gender, class, and other broad variables are also known to affect such decision-making. The behavioral responses of subjects in our simulated shopping environment suggest that they did indeed engage in higher-order cognitive processes that reasonably might be linked to making purchasing decisions. Thus, the response times were much longer than would be expected for purely geometric discrimination tasks, and the questionnaire responses correlated strongly with the choices made during the MEG recording sessions. Also, the pattern found for non-choices (that is, when confronted with products in the three-item choice situation, none of which were of interest) points to a plausible behavior in that the subjects did not choose when familiarity/preference was low.
The implications of the salience measure are relevantit provides a comparative rating of the three consumer items in each image. Salience is only high when one item is strongly preferred to or is much more familiar than the other two. If all three items are (nearly) equal in familiarity or desirability, even if the familiarity/desirability is strong, then the salience is low. Salience is also low if the chosen item has competed successfully with two items with higher scores (presumably because of greater familiarity) on the questionnaire. Thus, in each case, low salience stimuli are those in which there may be some form of (perceived) ambiguity or perplexity in making a choice. The longer response times for low salience stimuli seem to reflect this putatively harder choice. The MEG results averaged across all subjects reveal a robust temporal sequence of neural responses, which follow the presentation of those images requiring the expression of subject choice. This sequence emerges from two separate comparisons of choice versus control and high versus low salience. The initial, primary visual cortex response V, at around 90 ms following stimulus onset, was compatible with the timing found in a variety of studies involving visual responses (Halgren et al., 1994;Yoneda et al., 1995). This primary visual response was stronger in the choice than in the control conditions. One interpretation would be that a complex stimulus must be strongly represented in striate cortex for subsequent higher analysis. This view would be in accordance with recent findings that a high working memory load in a task requiring visual selective attention is associated with increased activity in occipital cortices (de Fockert et al., 2001). Nevertheless, in the first control task, where product images are discriminated between on the basis of height, the signal amplitudes were also higher than those in the color discrimination task based around simple objects. Presumably, therefore, the strength of the cortical representation is related to both the complexity of the images and the demands of the task. The later response T, at 325 ms in left temporal cortices, is also stronger in the consumer choice task. The effect is clearly induced by this task in that the responses following the height task do not differ from the responses obtained from the color task. These regions are known from a variety of intra-cranial, MEG, and functional imaging studies to be engaged in semantic processing and the memory-based interpretation of visually presented material (McCarthy et al., 1995;Braeutigam et al., 200 b: Damasio et al., 1996;Nyberg et al., 1996). The finding is thus compatible with the hypothesis that at this time, the images are being recognized and compared with data recalled from long-term memories of the relevant brands and products. Such memories must be complex with episodic and, in many cases, affective and cognitive elements. The memories may involve actual experience of using, purchasing, or seeing advertisements for the specific brands. Activity in the right extra-striate cortex may further aid object recognition as part of this process (Allison et al., 1994). Any processes of comparison occurring at this latency, however, seem to be of a rather general character as there is no dependence on the salience measure. Working memory is likely to be involved because some of the generators locating in left frontal regions match recent observations of visual selective attention (de Fockert et al., 2001). 
The differences in the responses to high and low salience images was reflected in the MEG data from around 510 ms and maps initially onto Broca's area, F. There is prior evidence of silent vocalization occurring in interpreting such visual presentations . The stronger signal from the low salience stimuli, where the subjects may face difficulty in making a decision, may indicate an increased tendency to vocalize as a strategy aiding decision-making in the absence of easily retrieved preference. Post-hoc scrutiny of images did not suggest that this putative vocalization is linked to obvious features provided in the images, such as color, shape, or linguistic (text) information. Finally, a characteristic response is found at 885 ms in the right parietal cortex P, in high salience conditions, and thus where the subject has a strong familiarity with or preference for one of the three brands/products. Whilst this strongly lateralized parietal signal cannot be conclusively explained here, a number of insights from other sources bear on this finding. The parietal cortex receives converging input from many sources, making it available for second-order mapping, e.g. it is engaged in relating spatial to other representations (Anderson & Zipser, 1990), notably during memory retrieval. Lesions of the right parietal affect a person's capacity to produce speech with normal prosody and emotion (Heilman et al., 1975;Ross & Mesulam, 1979). Damasio has broadened these observations into a specific 'somatic marker hypothesis' according to which damage to the right parietal cortex (Damasio, 2000;Charlton, 2000) results in anosognosia; where intentionality is profoundly damaged. High salience stimuli might relate to decisions in which the outcome is strongly consistent with some form of intention based on previous experience. In this context, it may be relevant that, at this latency, left lateral prefrontal activity was observed in three of the eight subjects, which might be related to mechanisms of reward expectancy (Watanabe, 1996). For this integrative and representational view of the responses associated with stimuli of high salience, it may be relevant that long lasting parietal waves associated with recall linguistic stimuli have been reported recently (Kane et al., 2000). A further, but not necessarily alternative, explanation draws on the role of the right parietal in selective and sustained attention processes (Cabeza & Nyberg, 1997;Vallar, 1997), as well as higher levels of motor control (e.g. Krakauer & Ghez, 2000). Accordingly, right parietal activity may signify a (final) attentional focus on the item already chosen to visually 'hold' it during the ensuing or already initiated motor control that is necessary for the key-press. Currently, it is unclear whether this implies that such right parietal activity could, in principle, follow low salience stimuli as well. Clearly, the neural mechanisms underlying such shopping choices are complex. They may draw on the specificity of an individual's past experience and engage many interacting psychological and social processes not explored here (notably gender) with, doubtless, appropriate brain correlates. Yet, this study has provided evidence that relevant behavioral measures (salience) that are associated with choosing consumer items may translate into differential neural activity at specific stages following stimulus presentation. 
In this context, it might be interesting to explore in future work the neural responses that are possibly locked to key presses, i.e. the moment a choice is being translated into action. This experimental design may be a step toward examining brain mechanisms engaged in closer approximations to real life situations. Indeed, as an increasing number of people actually shop on the Internet, real life and this study situation may come ever closer.
Peripheral odontogenic myxoma in a 12-year-old girl: a rare entity

Peripheral odontogenic myxoma is a rare odontogenic tumor representing an extra osseous counterpart of central odontogenic myxoma. It is commonly seen in the gingiva between the 3rd and 4th decades of life and appears predominantly in females. Compared to central odontogenic myxoma, it is a less aggressive, slow-growing lesion with a low recurrence rate. However, close postoperative follow-up is required because of the unlimited growth potential of incompletely removed lesions. It shares many features with other soft tissue myxoid proliferations occurring in the oral cavity and hence needs to be differentiated from them. Very few cases of peripheral odontogenic myxomas have been reported and, to the best of our knowledge, no case has been reported in a pediatric patient. We present an unusual case of peripheral odontogenic myxoma occurring in a 12-year-old girl, located in the anterior mandibular gingiva, with an emphasis on differential diagnosis.

I. Introduction Odontogenic myxomas are relatively rare benign odontogenic tumors that arise from the ectomesenchyme of the tooth-forming apparatus and are composed of spindle-shaped/rounded/angular cells embedded in abundant mucoid stroma. Odontogenic myxomas can be categorized into central and peripheral variants [1][2][3][4] . Very few case reports of peripheral odontogenic myxomas (POMs) are available in the literature. Clinically and histologically, POMs resemble many other soft tissue lesions. Hence, recognizing and diagnosing POMs is necessary for the careful planning of conservative treatment and follow-up to rule out intraosseous extension 5 . This article presents a rare case of POM in a pediatric patient with a special emphasis on differential diagnosis.

II. Case Report A 12-year-old girl presented with a growth on the mandibular gingiva between tooth #31 and #32 (Fig. 1A) that appeared three months prior. It was 1×1.5 cm in size, firm in consistency, and adherent to the mandibular gingiva but not fixed. The overlying mucosa was normal in color and texture. (Fig. 1A) Radiologically, the intraoral periapical view showed drifting of tooth #31 and #32 without any erosion of alveolar bone. (Fig. 1B) Based upon the clinical and radiographic findings, the lesion was provisionally diagnosed as a pyogenic granuloma. The lesion was completely excised and curetted under local anesthesia. Gross examination of the excised tissue revealed a soft-to-firm grayish white pedunculated mass. Microscopically, an H&E stained section showed well-circumscribed lesional tissue separated from the overlying stratified squamous parakeratinized epithelium by fibrous tissue. The lesional tissue consisted of relatively acellular loose myxoid stroma with scattered spindle-to-stellate-shaped cells and many delicate proliferating capillaries. A minimal amount of collagen fibers was seen. (Fig. 2A) The presence of numerous mast cells (MCs) was confirmed by toluidine blue staining. (Fig. 2B) The lesional tissue was strongly positive for reticulin staining and showed alcinophilia. (Fig. 2C, 2D, respectively) Lesional tissue showed vimentin positivity and S-100 negativity. Based on these findings, a final diagnosis of POM was established. After excision of the lesion, the migrated teeth reverted to their normal position. The two-year follow-up period was uneventful.

III. Discussion Odontogenic myxoma is a mesenchymal lesion of uncertain histogenesis that microscopically mimics dental pulp or follicular connective tissue 1,2 . Odontogenic myxomas are classified as central/intraosseous and peripheral/extra osseous variants 2,3 . POM is a very rare lesion with a reported incidence less than that of other peripheral odontogenic tumors 4 ; data on POM clinicopathologic features remain scarce 4 . Relevant literature suggests that peripheral myxomas of the intraoral tissues should be named POMs because soft tissue myxomas are usually seen extrafacially in skeletal muscles, dermal and subcutaneous tissues and do not occur in the oral cavity 5 . Several theories have been put forth regarding the pathogenesis of POM. One hypothesis states that altered primitive fibroblasts/myofibroblasts produce excess mucopolysaccharides, and most of these cells are incapable of forming mature collagen. Other authors have suggested an origin derived from mesenchymal cells, such as dental papilla, dental follicle, or periodontal ligament 5,6 . POMs most commonly present clinically as pedunculated or sessile, painless, exophytic masses located in the gingiva 4,5 . Most of the reported cases of POMs occur in the 4th to 6th decades of life 4,5,7-10 . In contrast, our case was found in a 12-year-old girl. POMs show a predilection for females, and most reported cases have occurred in the maxilla 2,4,5,7-9 , with only a few cases, including the present case, reported in the mandible. The size of the lesions ranges from one centimeter to several centimeters, with two reported cases being very large 4 . Radiologically, some of the reported cases of POMs showed displacement of the associated teeth without root resorption. Localized erosion of alveolar bone was also observed in some cases 4,5,8 . In our case, the lesion caused tooth displacement without any bony erosion. POMs are poorly circumscribed myxoid proliferations outside the bone. They show little encapsulation, and their rapid growth may be due to an accumulation of mucoid ground substance mimicking an aggressive neoplasm. The neoplasm is composed of haphazardly arranged stellate, spindle-shaped and round cells in a loose myxoid stroma. Typically, a delicate vascular network and stellate fibroblasts are diagnostic of POM 4 . Odontogenic epithelial rests may not be obvious in most lesions and are not necessary for establishing a final diagnosis. Interestingly, in our case of POM, we found a scattered distribution of MCs. MCs may also play an important role in the growth and expansion of odontogenic tumors, and their presence is associated with poor prognosis 12,13 . It has been suggested that the MCs are associated with remodeling of the extracellular matrix in neoplastic alterations, as they produce and release proteolytic enzymes favoring the migration of both endothelial and tumor cells as well as the release of angiogenic factors stored within the stromal tissue, leading to a higher degree of aggressiveness of odontogenic myxoma 12,14 . However, the presence of MCs has not been reported previously in POMs. Histologically, the differential diagnosis of POM should include myxoid neurofibroma, myxoid chondrosarcoma, myxoid liposarcoma, chondromyxoid fibroma, a myxoid change in fibrosarcoma, botryoid-type embryonal rhabdomyosarcoma and pleomorphic adenoma 4 . Awareness of the potential diagnostic pitfalls as well as careful evaluation of the clinical, radiological and characteristic histopathologic findings can narrow down the differential diagnosis 7 . Nerve sheath myxoma typically exhibits lobulated mucoid tissue containing stellate and spindle-shaped cells, and condensed connective tissue representing perineurium surrounding the lesion. MCs are characteristically present in this lesion 11 . Oral focal mucinosis is clinically indistinguishable from other similar lesions; however, the connective tissue is alcinophilic and lacks reticulin fibres 15 . Our case showed strong positivity for reticulin staining, thus ruling out oral focal mucinosis. In our case, we confirmed the results of other studies with respect to S-100 negativity and vimentin positivity. The diagnostic value of immunohistochemistry (IHC) in odontogenic myxomas is limited, as the neoplastic cells share antigenic characteristics with many non-odontogenic myxoid proliferations and a specific marker for cells of dental ectomesenchymal origin is lacking 4 . However, IHC findings help to differentiate these lesions from other myxoid lesions. If left untreated, POMs have unlimited growth potential. POMs without bone destruction are treated by simple excision, while those with bone destruction require excision and marginal curettage. POM has a much lower recurrence rate (3%-8%) than central odontogenic myxoma (10%-33%). Therefore, a carefully planned conservative enucleation or semi-radical approach is justified 2,4,5,16 . Close follow-up of these lesions is necessary to rule out intraosseous extension and recurrence.

We conclude that special stains and IHC are valuable tools for the differential diagnosis of these lesions. Overtreatment of POMs should be avoided through the use of definitive diagnosis, especially in pediatric patients, as it may affect the alignment and eruption of teeth. The role of MCs in POMs needs to be further evaluated, since POMs with MCs have not been reported previously.
Couplings and Scales in Strongly Coupled Heterotic String Theory If nature is described by string theory, and if the compactification radius is large (as suggested by the unification of couplings), then the theory is in a regime best described by the low energy limit of $M$-theory. We discuss some phenomenological aspects of this view. The scale at which conventional quantum field theory breaks down is of order the unification scale and consequently (approximate) discrete symmetries are essential to prevent proton decay. There are one or more light axions, one of which solves the strong CP problem. Modular cosmology is still problematic but much more complex than in perturbative string vacua. We also consider a range of more theoretical issues, focusing particularly on the question of stabilizing the moduli. We give a simple, weak coupling derivation of Witten's expression for the dependence of the coupling constants on the eleven dimensional radius. We discuss the criteria for the validity of the long wavelength analysis and find that the"real world"seems to sit just where this analysis is breaking down. On the other hand, residual constraints from N=2 supersymmetry make it difficult to see how the moduli can be stabilized while at the same time yielding a large hierarchy. Introduction and Summary The only vacuum independent quantitative predictions of weakly coupled heterotic string phenomenology are a relation between the Planck mass, the four dimensional gauge coupling and the string tension, and a relation between the unification scale and the scale of compactification. This last prediction is rather troubling. For if we suppose that the successful supersymmetric unification of couplings is not an accident, then one predicts that the scale of compactification is a factor of 20 or so below the string scale. This, in turn, implies that the dimensionless coupling of string theory is of order 10 7 , so that a weak coupling description surely does not make sense 1 . On the other hand, it has been argued for many years that string theory cannot be weakly coupled if it describes nature. In the weak coupling region, the dilaton potential almost certainly cannot be stabilized [2]. So perhaps we should simply accept the facts as they appear, and suppose that the compactification scale, R, is large, in heterotic string tension units, and the theory is strongly coupled. One might worry that, by duality, such a strong coupling region would be mapped into a weakly coupled region of some other string theory (or of M theory) and that this region would suffer from some version of the dilaton runaway problem. However, in ref. [3], it was pointed out that all of the known dualities map the region of large radius, strong coupling, and fixed four dimensional coupling to other strongly coupled theories (or at least theories in which the couplings are not arbitrarily small). Witten has recently taken this viewpoint to its logical conclusion [4]. At strong coupling, the heterotic theory is described, at low energies, by 11-dimensional supergravity. More generally, the strong coupling limit of the theory has been called M -theory [5]. Witten has argued that M-theory might well provide a better description of nature than weakly coupled strings. The M -theory description is valid, as we will see shortly, when a certain parameter, which we will call ǫ, is small. 
This parameter seems to be of order one in the real world, so the M theory description is likely to be at least qualitatively much better than the weak coupling string description. Compactification of the $E_8 \times E_8$ heterotic theory on a Calabi-Yau space, X, is dual to M theory compactified on $X \times S^1/Z_2$. Using formulas presented in [4], one finds the connections given in eq. (1.1) between the 11 dimensional Planck mass $M_{11}$ (defined in terms of the coefficient of the Einstein lagrangian in 11 dimensional supergravity, as $M_{11} = \kappa_{11}^{-2/9}$), the 11-dimensional radius $R_{11}$, and the compactification radius $R = V^{1/6}$, where $V$ is the volume of the Calabi-Yau space on the boundary with unbroken $E_6$ gauge group and $G_N$ is the four dimensional Newton's constant. Substituting reasonable phenomenological values, one finds that the eleven dimensional Planck length is roughly half the compactification radius, while the eleven dimensional radius is about ten times the compactification scale! So one might hope that eleven (or five)-dimensional supergravity provides at least a crude approximation to the real world. Moreover, if this viewpoint is correct, dramatic new physics occurs long before one encounters the four dimensional Planck scale. The universe first looks five dimensional, then eleven dimensional. The four dimensional Planck scale, $M_4$, is simply a parameter of low energy physics; there is no interesting new dynamics at this scale! Quantum gravitational (more properly, quantum M theoretical) effects become important at the unification scale. This has possible implications for many questions, including issues of early universe cosmology. These are among the issues we will explore in this paper. The qualitative physics of this M theory regime is quite different from that of weakly coupled heterotic strings, which are no longer the lowest energy excitations. The fact that the compactification scale is large in string tension units is a consequence of the fact that heterotic strings are membranes stretched between the two walls of the eleven dimensional world. The fundamental energy scale in this regime is the eleven dimensional Planck mass $M_{11}$. The membrane tension is one in these units, but the heterotic membrane is large because the eleventh dimension is an order of magnitude larger than $l_{11} = M_{11}^{-1}$. The heterotic string tension is $T_h = M_{11}^3 R_{11}$. The compactification radius is of order one in $l_{11}$ units, and this is what determines the unification scale. While the M-theory description should be qualitatively much better than the weak coupling string description, the universe is probably not in a regime where one can simply compute in the classical, low energy eleven dimensional supergravity theory. In the classical supergravity theory, the expansion parameter is the quantity ε introduced above, and this number is of order one. So we might expect unknown quantum M-theory corrections to be of order one. (This should be compared with the situation in the weakly coupled string theory description, where the "small parameter" is of order $10^7$.) As we will discuss, this is just as well, since in the weak coupling limit one could not understand the stabilization of the moduli.
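As a rough numerical check of these statements, the sketch below evaluates the tree-level relations quoted by Witten, taken here as assumptions: $\alpha_{GUT} = (4\pi\kappa^2)^{2/3}/(2V)$ and $G_N = \kappa^2/(16\pi^2 V\rho)$, with $V = L^6$, $\rho$ the orbifold interval ($R_{11} = \pi\rho$), and the identification $M_{GUT} = 1/L$. O(1) factors depend on circle-versus-orbifold conventions, so only the orders of magnitude should be trusted.

```python
import math

# Minimal numerical sketch of the scales quoted above (all inputs assumed):
# alpha_GUT = (4 pi kappa^2)^(2/3) / (2 V),  G_N = kappa^2 / (16 pi^2 V rho),
# with V = L^6 the Calabi-Yau volume and R_11 = pi * rho.
alpha_gut = 1.0 / 25.0               # unified fine structure constant (assumed)
L = 1.0 / 3.0e16                     # GeV^-1, from M_GUT ~ 3e16 GeV (assumed)
M_planck = 1.22e19                   # GeV, so that G_N = 1 / M_planck^2

V = L**6
kappa_sq = (2.0 * alpha_gut * V)**1.5 / (4.0 * math.pi)
l11 = kappa_sq**(1.0 / 9.0)          # eleven dimensional Planck length
R11 = math.pi * kappa_sq * M_planck**2 / (16.0 * math.pi**2 * V)

print(f"l_11 / L    = {l11 / L:.2f}")     # ~0.5: Planck length ~ half of L
print(f"R_11 / L    = {R11 / L:.1f}")     # order 10 (here ~6; conventions vary)
print(f"R_11 / l_11 = {R11 / l11:.1f}")   # the eleventh dimension is about an
                                          # order of magnitude larger than l_11
```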
Even before exploring any detailed dynamics, the view that the universe is approximately five dimensional has interesting consequences. Consider, for example, the strong CP problem. It is well known that in four dimensional string models, there is always a "model-independent" axion, the partner of the usual dilaton, with the potential to solve the strong CP problem. The superpartners of the Kahler (1,1) moduli of X provide other axion candidates. In weakly coupled string theory, world-sheet instanton effects break the Peccei-Quinn (PQ) symmetries of these Kahler axions by amounts of order $e^{-R^2}$. Usually, since R is assumed to be a number of order one in $T_h$ units, these breaking effects are also taken to be of order one. However, if R is large, this factor can be extremely small. Indeed, in the five dimensional picture, these axions lie in vector multiplets and the associated PQ symmetries are "would-be five dimensional gauge symmetries" that are broken only by boundary effects and by membrane instantons. The latter are highly suppressed because of the large size of the membranes (actually, because it occurs in a holomorphic superpotential this effect can be calculated by extrapolating weak coupling formulae, as we will see below). We will argue that in a class of M theory vacua the dominant boundary effect is Quantum Chromodynamics, and a linear combination of the Kahler axions is a QCD axion. A second phenomenological issue arises from the fact that the unification scale is so close to $l_{11}$. The most sensitive probe of such large scales is proton decay. Exact or approximate symmetries will be essential in understanding why the proton is so stable. A careful examination of the four dimensional low energy effective theory gives rise to other interesting observations. The Kahler axion multiplet naturally gives rise to a no scale model with broken SUSY and vanishing cosmological constant as the leading term in a systematic computation of the effective potential. A natural explanation of squark mass universality can also be obtained in this model. The no scale structure and squark degeneracy are only valid in leading order in $(R_{11} M_{11})^{-1} \sim 0.1$. It is unclear how far we can rely on these results as explanations of phenomena in the real world. One of the most fundamental issues in string theory is the question of how the moduli are stabilized. In the weak coupling limit of string theory, moduli are either exact or unstable. If the theory describes nature, one must hope that the moduli are stabilized at a point in moduli space where semiclassical reasoning is not valid. This raises the worry that one will not be able to predict anything from string theory. There will be no small parameter to explain a small scale of supersymmetry breaking, for example, and the smallness of the gauge couplings and their apparent unification must be accidents. In ref. [1], a solution to this problem was suggested, exploiting the holomorphy of the superpotential and gauge coupling functions, and certain discrete symmetries. It was assumed that the compactification radii are of order one in string units, and that string perturbation theory breaks down even for small values of the dimensionless string coupling. More precisely, the model-independent dilaton S was supposed large, while the other moduli were of order one. Stringy non-perturbative effects in the gauge couplings and the superpotential were shown to behave as powers of $e^{-S}$.
As a result, one can understand why supersymmetry breaking is small and the gauge couplings are unified, and one predicts only tiny corrections to the lowest order superpotential for matter fields. Because one is in a region where the dilaton superpotential is monotonically decreasing, stabilization of the moduli must arise through large corrections to the Kahler potential. The view that the compactification scale is large and that the string coupling is very strong requires a reassessment of this picture. In the limit that the low energy, 11-dimensional supergravity description is valid, the theory suffers from instabilities similar to those at weak coupling, as we will see in some detail. On the other hand, as we have said, taking the 11 dimensional parameters from the "observed" four dimensional ones, nature would seem to reside in precisely the regime where the long wavelength description breaks down. So it would seem reasonable to hope that quantum M-theory effects are responsible for the stabilization of the moduli. One of the goals of the present work is to explore this possibility, and to ask what weak-coupling predictions, if any, survive into the strong coupling regime. In order to do this, it is necessary to understand as well as possible the structure of the low energy theory in the small ε regime. One of the main results of ref. [4] is a computation of the gauge couplings from an eleven dimensional perspective. Studying the classical field equations, Witten finds that the Calabi-Yau volume on the $E_8$ side decreases linearly with $R_{11}$, so that, for fixed $E_6$ coupling, the $E_8$ coupling blows up at a finite value of $R_{11}$. In section 2.2, we point out that, exploiting the holomorphy of the gauge coupling function, these functions can be computed by weak coupling methods. The imaginary parts of the chiral fields T and S (the usual moduli whose real parts describe the internal radii and the four dimensional gauge coupling, respectively) are the axions we have spoken of above. They can be normalized so that the theory is invariant under 2π shifts of these fields. As a result, up to exponentially small corrections, the gauge coupling functions are necessarily linear in S and T, with coefficients drawn from sets of integers m, n, r and s. Perturbative heterotic string physics is valid when $S \gg 1$, $S \gg T$ and $S/T^3 \gg 1$. M theory is well approximated by classical supergravity (SUGRA) when S and T are both large, with $S/|T|^3$ small. However, we must avoid regions where S and |T| are comparable and certain linear combinations of them are small. In these regions physics on one of the boundaries of the eleven dimensional world is strongly coupled. By holomorphy we can just as well calculate the linear terms in the gauge kinetic functions at weak coupling. Such weak coupling calculations have been performed in the past for a variety of theories [6] such as orbifold models, and the difference of the $E_6$ and $E_8$ couplings has been calculated for Calabi-Yau spaces. In section 2.2, we point out that the couplings themselves can be computed directly for Calabi-Yau spaces, by dimensionally reducing the Green-Schwarz counterterms introduced in ten dimensions to cancel anomalies. (Footnote 2: The weak coupling calculation which we will perform here, and thus, in some sense, Witten's eleven dimensional calculation, has actually been performed some time ago by L. Ibanez and P. Nilles [7].) This gives the couplings of the axions in the T multiplets to $F\tilde F$ and is easily supersymmetrized to give the coupling of the full multiplet. We find, as in ref. [4], that the sign of these couplings is such that, for fixed S, the $E_8$ coupling blows up at a finite value of the radius. This fact is already quite striking.
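The mechanics of this blow-up can be seen in a toy version of the argument. The normalizations below ($f_6 = S + T$, $f_8 = S - T$, $\alpha_a^{-1} = \mathrm{Re}\, f_a$) are illustrative assumptions, not the precise coefficients derived in section 2.2:

```python
# Toy illustration (normalizations assumed): with gauge kinetic functions
# f_6 = S + T and f_8 = S - T and 1/alpha_a = Re f_a, hold the observable
# E6 coupling fixed at alpha_6 = 1/25 while Re T (which grows with the
# eleventh dimensional radius) increases.  The E8 coupling diverges at the
# finite value Re T = Re S, i.e. Re T = 12.5 here.
alpha6_inv = 25.0
for ReT in [0.0, 5.0, 10.0, 12.0, 12.4]:
    ReS = alpha6_inv - ReT           # fixed E6 coupling: Re S + Re T = 25
    Re_f8 = ReS - ReT                # = 1/alpha_8
    alpha8 = 1.0 / Re_f8 if Re_f8 > 0 else float("inf")
    print(f"Re T = {ReT:5.1f}  ->  alpha_8 = {alpha8:.3f}")
```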
It was probably ignored in the past because it corresponds to a point of strong string coupling. As stressed in [4], the blowing up of the coupling may have something to do with the stabilization of the moduli. As we will discuss in some detail, in the weak coupling regime, one can determine the potential for the moduli completely. Gluino condensation gives rise to a superpotential which behaves as a power of $e^{-S + \vec\alpha\cdot\vec T}$. In the semiclassical SUGRA regime, we will fully determine the Kahler potential for the moduli and matter fields. Gluino condensation then leads to a potential which grows with radius for fixed coupling of the standard model gauge fields, i.e. at weak coupling the dynamics tends to shrink the eleventh dimension. This is rather surprising, since one might have expected that for widely separated walls, the eleven-dimensional dynamics would become free. In the regime where M theoretical dynamics reduces to supergravity, there is no way to prevent the shrinkage. Thus, quantum M theory is crucial to the stabilization of the radius of the eleventh dimension. Similar remarks can be made about the size of X. Phenomenology indicates that it is quite close to $l_{11}$. Consequently the stabilization of this modulus probably also requires the intervention of quantum M theory. Given that the parameter ε is of order 1, it is not unreasonable to expect that quantum M-theory dynamics stabilize the moduli at their observed values. Note that unlike the situation analyzed in [1], there is no mystery here about the weakness of the standard model gauge couplings. The latter play no role in the stabilization of the moduli. Nor do we need to invoke a premature breakdown of perturbation theory. Weak coupling arises from geometrical factors of order one, primarily a factor of 2 (which gets raised to the sixth power) between the linear size of the Calabi Yau manifold on the $E_6$ boundary and $l_{11}$. We will find, however, that residual constraints from N = 2 supersymmetry (the no-scale structure we alluded to earlier) raise puzzles about how some of the moduli are stabilized. As in ref. [1] we can attempt to use holomorphy of the superpotential and gauge coupling functions, together with exact discrete shift symmetries for the moduli, to argue that certain semiclassical predictions receive only exponentially small corrections even at strong coupling. Now, however, the situation is more complicated. We have noted above that in the M theory regime certain linear combinations of the moduli can be small, and exponentials of these are no longer suppressed. By studying physics deep in the semiclassical regime, where the Calabi Yau volumes are everywhere large, we will argue below that these unsuppressed exponentials do not infect the predictions of gauge coupling unification and ratios of Yukawa couplings. A crucial ingredient of this argument is the use of holomorphy to extrapolate semiclassical results into the regime of phenomenological relevance. The rest of this paper is organized as follows. In the next section we review the effective theory in five dimensions which results from compactification of M theory on a Calabi-Yau space. We pay particular attention to certain approximate symmetries which will survive in four dimensions as Peccei-Quinn symmetries. We then reduce the theory to four dimensions.
We give a weak coupling string calculation of the dependence of the coupling on the 11-dimensional radius. We discuss the form of the resulting Kahler potentials, including restrictions inherited from the approximate five-dimensional supersymmetry. We point out that there are several approximate Peccei-Quinn symmetries which hold to an extremely high degree of accuracy. The axion associated with one of these symmetries solves the strong CP problem, but will violate the conventional cosmological bounds. In section 3, we discuss the problem of stabilizing the moduli, exhibiting the intriguing yet rather problematic no-scale structure. We offer some speculations about how stabilization might occur and about the possible origin of a hierarchy. Finally, in section 4, we discuss some phenomenological and cosmological implications of these observations.

Effective Field Theory in Five Dimensions

To begin, let us be more precise about the numerical values of various parameters. We do this not because of any illusion that the tree level calculation of these parameters is immune to corrections, but in order to orient ourselves. The tree level fit to the fine structure constant and the unification scale gives the values quoted in eq. (2.1). Here, L is the sixth root of the volume of X, $R_{11}$ is the length of the eleventh dimension (πρ in Witten's notation), $M_{11} = l_{11}^{-1}$, and $l_{11}$ is the ninth root of $\kappa^2$, the coefficient of the eleven dimensional Einstein action. The fit of M theory to the real world suggests that six of the dimensions are very small, one is one order of magnitude larger, and the rest are at least as large as our horizon volume. We can also write a formula for the heterotic string tension in terms of eleven dimensional quantities. For large $R_{11}$ we have an approximate five dimensional SUSY, and this is a BPS formula which receives no corrections. Boundary effects and other breaking of SUSY down to four dimensional N = 1 will give corrections to this formula of order $(R_{11} M_{11})^{-1}$, which we will neglect. The string tension formula can be obtained by the following reasoning. In ten dimensions, one has expressions for the gauge and gravitational couplings in terms of the tension [8], where λ is the dimensionless string coupling. Comparing with the eleven dimensional expressions for these quantities yields the tension formula. Alternatively, we can use Polchinski's formula for the Dirichlet two brane tension in Type IIA string theory [9], and the fact that the heterotic string is just a two brane stretched between the walls of the world. We also need the Kaluza Klein relation between the ten and eleven dimensional gravitational constants. This calculation gives the same result as above, if one is careful about factors of 2 coming from the relation between compactifications of M theory on a circle and an orbifold. Given the relatively large size of $R_{11}$, it is appropriate to consider an effective five dimensional action for physics at length scales larger than $l_{11}$ but smaller than $R_{11}$. We will then reduce this to a four dimensional effective action for scales longer than $R_{11}$. In the bulk, the five dimensional theory has full five dimensional SUSY, and its lagrangian has been worked out by Antoniadis et al. [10], following [11]. The volume of X is in a hypermultiplet along with some of the purely internal and the dual of the purely external components of the three form gauge potential. The complex structure moduli also pair up into hypermultiplets, with internal components of the three form.
The quaternionic metric on this space of hypermultiplets is not determined by general considerations. For large volume it can be computed by Kaluza Klein technology. However, equation (2.1) tells us that the volume is not large, so we expect M theory to give significant corrections to this metric. On the other hand, the volume preserving Kahler moduli are in vector multiplets, along with the integrals of the three form over nontrivial (1,1) cycles. The bosonic part of the lagrangian for these multiplets is given in [10]. Here we choose the gauge function so that $\partial_{11} d^a_{ij} = b^a_{ij}$, with $b^a$ one of the harmonic (1,1) forms on the Calabi-Yau fiber at $x^{11}$, and so that $d^a_{ij}$ vanishes on the boundary. $c^a$ is the integral of $d^a$ over the a-th (1,1) cycle on the manifold (and we have renamed the eleventh dimension the fifth). This transformation is not a symmetry of the system. However, it is broken only by nonperturbative physics which involves the $E_6$ boundary. Loosely speaking, nonperturbative effects on the $E_6$ boundary arise from membranes stretched between the two boundaries, and Euclidean 5-branes wrapped around the Calabi-Yau manifold on this boundary. These approximate symmetries become Peccei-Quinn symmetries of the effective four dimensional theory. We will estimate the dominant symmetry breaking effects below.

Four Dimensional Effective Field Theory

We now want to reduce our resolving power and obtain a description of the world on length scales longer than $R_{11}$. This will be an N = 1 locally supersymmetric four dimensional field theory. (Footnote 4: Micha Berkooz has pointed out to us that the nontrivial background fields calculated by Witten break d = 5 SUSY. Thus, there may be corrections to this lagrangian. However, when $R_{11}$ is much larger than $l_{11}$ the unknown dynamics of short distance M theory should not be affected by this soft breaking of d = 5 SUSY. The corrections should be calculable in low energy supergravity. That is, integrating out the unknown massive degrees of freedom of quantum M theory should give us a Lagrangian which is d = 5 supersymmetric to leading order in $R_{11}^{-1}$. The fields which Witten calculates to have N = 2 SUSY breaking VEVs are all in hypermultiplets and they do not affect the vector multiplets to leading order in the long distance expansion.) We first address the question of the gauge couplings in this theory. Witten has given us an eleven dimensional calculation of the blowup of the $E_8$ coupling when $R_{11}$ reaches a critical value. It is a remarkable example of the power of holomorphy [12] that this calculation can be exactly reproduced by extrapolation of results for the weakly coupled heterotic string. Witten determines the dependence of the $E_6$ and $E_8$ gauge couplings on the volume of the Calabi-Yau space and the radius of the eleventh dimension. However, if the four dimensional effective coupling is small, while the Calabi-Yau radius is large, it should be possible to obtain this dependence from a weak coupling computation. The point is that there is a regime of large radius ("T") and small coupling (large "S"), such that the dimensionless string coupling is small, and these couplings can be computed in perturbation theory. The gauge coupling functions are holomorphic functions of S and T. They must also be invariant under discrete shifts of S and T, which with the normalizations we will use are shifts by $2\pi i$. As a result, up to terms which are exponentially small for large S, the gauge coupling functions must be given by $f_a = m_a S + n_a T$, where $m_a$ and $n_a$ are integers.
The $m_a$'s are determined by the central terms, $k_a$, in the Kac-Moody algebras. The $n_a$'s can be obtained from a one loop computation. These couplings have been evaluated in the literature for many special cases [13]. For large radius Calabi-Yau compactifications, a formula has been presented for the difference of the $E_6$ and $E_8$ couplings [13]. However, for large radius, the separate couplings are well defined and it is actually a simple matter to determine them. The point is that for large radius, these couplings can be obtained by reduction of the ten-dimensional effective action. In particular, in terms of component fields, these couplings imply couplings of certain "axion-like" fields to $F\tilde F$. These axions correspond to particular excitations of the antisymmetric tensor field, $B_{MN}$, with indices in the internal space. Such couplings are necessarily linear in B and involve products of $F_{\mu\nu}$, i.e. from a ten-dimensional perspective they are precisely the terms which appear in the Green-Schwarz counterterms. So it is only necessary to reduce the Green-Schwarz counterterms to four dimensions. Before examining the Green-Schwarz counterterms themselves, a few preliminaries are necessary. First, we must determine the excitations of the B field corresponding to the various axions, and how they fit into chiral multiplets. The necessary expressions appear in ref. [14]. The axions are in one to one correspondence with harmonic (1,1) forms, $b_{i\bar i}$, conventionally normalized with respect to a basis $\Sigma_a$ of nontrivial closed two-dimensional sub-manifolds. In terms of these, and adopting units with $2\alpha' = 1$, the action contains the corresponding axion couplings; by virtue of the normalization of the $b^{(a)}$'s, the coefficients of the $\theta_a$'s are quantized, and $\theta_a$ has period 2π. As we will now show, $\theta_a$ is the imaginary part of the chiral field whose real part is $r_a$. Note that $2\pi r_a$ is what one would call the radius-squared of the internal space. In order to determine the structure of the four dimensional chiral fields, it is necessary to adopt some conventions. We take the ten-dimensional fields to satisfy $\Gamma_{11} = 1$. In making the reduction to four dimensions, we introduce three complex coordinates, $x^i$ and $\bar x^{\bar i}$ (this was implicit in the discussion above), and a corresponding set of γ matrices. In particular, defining the corresponding six dimensional γ matrices, $d^i$ and $d^{\bar i}$, we can define "states" on which they act, and the chiralities of the various states follow immediately. In particular, the states $|i\rangle$ have chirality one both internally and in four dimensions, so the associated vertex operators are vertex operators for 27's with positive chirality. Note, however, that when trying to identify these operators with fields, it must be remembered that the vertex operators are like creation operators, i.e. they are like complex conjugates of fields. Similarly, we can read off the operators for the moduli from eqn. (2.11). In particular, the chirality plus field is the one which multiplies $DX^i$, but complex conjugated as described above, i.e. $r_n + i\theta_n$. Now we can turn to the Green-Schwarz term. This term has been evaluated in various places; we choose to take the result from ref. [15], and we can dimensionally reduce it immediately. Break up F into parts with indices in four dimensions and indices in the internal six dimensions. Replace B by $2\pi\theta_a b^{(a)}$. Recall that $\mathrm{Tr}(F^4) = \frac{1}{100}(\mathrm{Tr}F^2)^2$, and $\mathrm{Tr}F^2 = 30\,\mathrm{tr}F^2$.
One then obtains the $E_8$ coupling to the axion; for the $E_6$ coupling, one obtains the same result but with the opposite sign. In order to finally determine the sign of the coupling of the modulus to the gauge fields, one notes that the sign of the coupling of the imaginary part to $F\tilde F$ is opposite to that of the coupling of the chiral field to $W_\alpha^2$ [16]. So we see that the $E_8$ fields couple to $S - T$ (through the $\frac{b\wedge F\wedge F}{8\pi^2}$ term), while the $E_6$ fields couple to the same combination but with the opposite sign for the T term. Finally, we can compare this with Witten's result: using the formula for α′, eqn. (2.5), and Witten's expression for the difference of the $E_8$ and $E_6$ couplings, we find agreement in units with 2α′ = 1. However, only one chiral field from each hypermultiplet survives the breaking of SUSY that accompanies the reduction to four dimensions. We will denote the superfield whose real part is proportional to the volume of X on the $E_8$ boundary by S. The normalization is fixed so that shifts of the imaginary part of S by 2π are exact symmetries of the theory. The complex structure moduli on the $E_8$ boundary are denoted by $C_\alpha$. Note that although all of these fields are defined in terms of boundary conditions, they are what we will later describe as bulk moduli. The boundary conditions determine the behavior of the classical vacuum configuration throughout the fifth dimension, and the action for making a small spacetime dependent deformation of these boundary conditions will be proportional to the volume of the intervening bulk. With these definitions we can write our weak coupling results for the gauge kinetic functions in terms of the fields S and $Y^a$ in the M theory regime, as linear combinations up to possible correction terms. In the low energy gauge theory, an accidental U(1) symmetry prevents the occurrence of such terms, but in M theory we do not expect to have such a symmetry. Thus, although we know that the coupling becomes strong, we do not know that it becomes infinitely strong. In the strong coupling region we do not have a reliable calculation either of the $E_8$ coupling itself, or of the superpotential for the moduli which is generated by the strongly coupled dynamics. One may worry that similar incalculable effects will infect the computation of the coupling functions on the $E_6$ boundary. This could completely ruin predictions of coupling unification. We know of no symmetry argument which rules this out, but we believe that the following physical argument is plausible. Let us examine the region where the Calabi Yau volume is much larger than $l_{11}^6$. In this regime, the $E_8$ coupling becomes strong only at very large $R_{11}$. Thus, there is a regime in which $R_{11}$ is large and the $E_8$ coupling is still small enough that nonperturbative dynamics is well approximated by a very dilute gas of small instantons. In addition, since the instanton density is exponential in the coupling, the average instanton spacing can be taken much larger than $R_{11}$. In this limit, the dominant effect on the lagrangian of the $E_6$ boundary will come from the local influence of single instantons. $E_8$ instantons are 5 branes in eleven dimensional space. Their effect on the $E_6$ boundary must fall like $R_{11}^{-3}$ as $R_{11}$ is increased. Thus they cannot give rise to effects on the $E_6$ coupling functions which grow exponentially with $R_{11}$. Indeed, Green's functions made up of fields which live purely on the $E_6$ boundary cannot soak up the $E_8$ instanton zero modes, and get no contribution from these nonperturbative configurations.
We have made this argument for very large V and $R_{11}$, but holomorphy tells us that if the growing exponentials are not present in this regime, they are not present at all. Coupling unification is a prediction in the M theory region of moduli space. One advantage of our weak coupling calculation of vacuum polarization functions is that we can easily extend it to the case where $E_8$ is broken by Wilson lines. In fact, it is not difficult to see that the result is unchanged in the presence of Wilson lines. At large radius, on the torus, one must compute an expectation value of the form $\langle V_B V_A V_A V_A V_A \rangle$, where $V_B$ is a vertex operator for the antisymmetric tensor, and $V_A$ is a vertex operator for a gauge field. One can take, say, $V_B$ in the −1 superconformal ghost number picture, and the $V_A$'s in the zero ghost picture. As in the flat space calculation, the term with an ε tensor arises from the sector with (P, P) boundary conditions for the right movers. In the large R limit, there is an (approximate) zero mode for each of the $\psi^I$'s. This is just the correct number of zero modes to be soaked up by the five vertex operators in eqn. (2.19). The momentum factors in the four gauge boson vertex operators then give $F^4$. Because the fundamental group of the non simply connected Calabi Yau manifold acts freely on its covering space, at large radius one just has an ordinary momentum integral to do, up to terms which are down by powers of 1/R. Such terms have the wrong R dependence to correct the modulus-dependence of the gauge couplings. The rest of the calculation is as in ten dimensions. The right moving boson and fermionic contributions cancel. Level matching then implies that only states with $\bar L_0 = 0$ contribute on the left. This is identical to the situation without the Wilson line.

Kahler Potentials

The dynamics of SUSY breaking in the M theory regime is, as usual in string theory, governed by the moduli, whose couplings are set by the reduced Planck mass, $m_4^2 \equiv M_4^2/8\pi$. When we rescale these fields, $B_i$, to give them their proper dimension, their lagrangian will be a function of $B_i/m_4$. Note that it is the reduced Planck mass $m_4$ that we choose in this formula rather than the Planck mass itself. Historically, it has been natural to regard the mass associated with Newton's constant as the fundamental mass scale of quantum gravity. However, in the M theory regime at least, it is a low energy artifact. $m_4$ is the parameter which appears in all formulae in the M theory regime. In writing a supersymmetric four dimensional lagrangian, it is convenient to choose a conformal frame in which the Einstein term does not depend on the chiral superfields. This is the frame in which textbook expressions for the supergravity potential are written. In this frame, the coefficient of the Einstein term and of the Kahler potential for dimensionless bulk moduli fields is $M_{11}^2$. We will refer to this as the canonical frame. Note that this is different from the Einstein frame, where the coefficient of the four dimensional Einstein lagrangian is $m_4^2$. This is a consequence of the fact that $M_{11}$ is the fundamental scale, while $m_4$ is a function of the moduli. Among the bulk moduli will be those that descend from components of vector multiplets in 5 dimensions. $h^{1,1}$ of these can be associated with Kahler deformations of the Calabi Yau manifold. The way in which these emerge from the 5 dimensional lagrangian has been described in [11].
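Numerically, the gap between the two "Planck masses" is about two orders of magnitude, which is what makes $m_4$ a low energy artifact here. A one-line check (input values are the rough assumptions used throughout):

```python
import math

# The two scales in play (inputs assumed): the reduced four dimensional
# Planck mass m_4 = M_4 / sqrt(8 pi), which only normalizes the couplings
# of moduli to matter, versus the fundamental scale M_11 at which quantum
# gravitational dynamics actually sets in.
M4 = 1.22e19                         # GeV
m4 = M4 / math.sqrt(8.0 * math.pi)   # reduced Planck mass, ~2.4e18 GeV
M11 = 2.0e16                         # GeV, rough value from the fit (assumed)
print(f"m_4 = {m4:.1e} GeV,  m_4 / M_11 = {m4 / M11:.0f}")   # ~120
```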
Remember that the five dimensional theory contained $h^{1,1} - 1$ vector multiplets, whose scalar components live on a manifold with coordinates $X^a$ satisfying the constraint $N(X) = 1$. The dimensionally reduced theory is conveniently described in terms of the complex fields $Y^a = R_{11} X^a + i A_5^a$, which are the scalar components of chiral superfields. These fields are unconstrained and have (in canonical conformal frame) the Kahler potential $-\ln N(\mathrm{Re}\, Y^a)$. In this approximation, the theory is invariant under continuous shifts of the imaginary parts of the $Y^a$. In the quantum theory, we expect this to be broken to a discrete shift symmetry. However, we have argued above that the symmetry breaking is entirely due to stretched membranes and to fivebranes embedded in the $E_6$ boundary. We can estimate the size of the stretched membrane contribution in two ways. First, a naive eleven dimensional calculation suggests a PQ symmetry breaking term of order $e^{-c M_{11}^3 R^2 R_{11}}$. This is the same form as the PQ breaking term which arises from a single worldsheet instanton in the weak coupling theory. We can, in fact, reproduce this result by analytically continuing the world sheet instanton contribution of weakly coupled string theory. This has the form $e^{-c R^2}$ in units with 2α′ = 1. Inserting the formula for the string tension in terms of eleven dimensional quantities we get $e^{-c\, 4^{2/3} \pi^{5/3} M_{11}^3 R^2 R_{11}}$. The latter derivation allows us to compute the precise coefficient, c, in the exponent for specific Calabi-Yau manifolds. The low energy spectrum also includes fields which originate as modes on the boundary of the five dimensional world. Apart from the gauge fields, there are the moduli of the $E_8$ gauge bundle on the $E_6$ boundary, and quark, lepton and Higgs superfields, as well as possible exotic matter. We denote the generic chiral multiplet originating on the boundary as an edge field, $E_I$. The Kahler potential for these fields is of order $M_{11}^2$, so that when they are made dimensionful, their lagrangian will depend on $E/M_{11}$. In general, it will depend on the bulk moduli as well, and will be a correction to the Kahler potential of these fields. When $R_{11}$ and R are large, but ε is small, it is a simple matter to determine the Kahler potential for these edge states. It is, in fact, precisely the same as on the weakly coupled string side. To see this, one simply has to consider the lagrangian for the edge states; reducing this lagrangian is similar to reducing the usual ten-dimensional supergravity lagrangian on a Calabi-Yau space. The factors of $R_{11}$ work out correctly. In particular, if one first reduces to ten dimensions, it is necessary to rescale the ten-dimensional metric to obtain the conventional form of the ten-dimensional action, with the standard dilaton factor in front of the gauge term. It is curious that the Kahler potentials for all of the fields have the same form at both extremely weak and extremely strong string couplings. It is not clear that this is enforced by any symmetry. Moreover, we have seen that the identification of the fields S and T is different in the two regimes. Nevertheless, perhaps it holds some deeper meaning. Finally, let us note that the boundary moduli may provide us with another candidate for the invisible axion. Indeed, in [14], it was shown that many (2,0) moduli might receive masses of order $e^{-T_h R^2}$. This is a superpotential calculation, and may be analytically extrapolated into the M theory regime.
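To get a feeling for how small the stretched-membrane breaking is, here is an order-of-magnitude evaluation of the exponent. The inputs are assumptions: the rough tree-level values $R \sim 2\, l_{11}$ and $R_{11} \sim 10\, l_{11}$ from section 2, and the cycle-dependent coefficient c simply set to one purely for illustration:

```python
import math

# Order-of-magnitude size of the PQ breaking term e^{-c M11^3 R^2 R11}.
# Assumptions: R ~ 2 l11 and R11 ~ 10 l11 (tree-level fit), and c, an O(1)
# geometric coefficient depending on the (1,1) cycle, set to 1.
c = 1.0
R_over_l11 = 2.0
R11_over_l11 = 10.0
exponent = c * R_over_l11**2 * R11_over_l11   # M11^3 R^2 R11 in l11 units
print(f"exponent = {exponent:.0f},  breaking ~ {math.exp(-exponent):.1e}")
# -> exponent = 40, breaking ~ 4e-18
```

Even allowing c to vary by a factor of a few, this illustrates the claim above that the Kahler-axion PQ symmetries hold to an extremely high degree of accuracy.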
Since it refers to fields which live on the $E_6$ boundary, it will not be affected by strong coupling dynamics on "the other side of the world". If these (2,0) moduli affect the $E_6$ gauge couplings at one loop in heterotic perturbation theory, as is almost certainly the case, then they will provide another contribution to the QCD axion. The true axion will be a linear combination of these and the $h^{1,1}$ moduli discussed above. However, because the boundary moduli have decay constants of order $M_{11}$ rather than $m_4$, the dominant component will be a boundary modulus. This will ameliorate the cosmological axion problem.

Mechanisms for Stabilizing the Moduli

In the limit that the classical eleven dimensional description is good, we expect to find the usual problem of runaway in the various moduli. If we simply consider compactification with gauge group $E_6 \times E_8$, we can compute the potential due to gluino condensation on the "far side." We do not need to think carefully about the interactions between the two walls to do this, since we have already determined the four dimensional Kahler potential, and the superpotential due to gluino condensation follows, as in ref. [17], from symmetry considerations. One obtains, then, a potential identical to that at weak coupling. It tends to zero as the moduli run away: this potential favors large Calabi-Yau volume on the $E_8$ boundary and large $R_{11}$. This is a region where the supergravity analysis should be completely valid, so we have encountered the eleven dimensional version of the stability problem. Perhaps, however, the fact that, for fixed $E_6$ gauge coupling, the potential forces $R_{11}$ to zero is a hopeful sign. This follows from the fact that for fixed $E_6$ coupling, the $E_8$ coupling (and thus the strength of the gaugino condensate) decreases with $R_{11}$. We turn, then, to a discussion of what sorts of physics might stabilize the moduli. We begin by discussing the dynamics of the strongly coupled gauge theory on the $E_8$ boundary. The proximity of the phenomenologically determined value of $R_{11}$ to the strong coupling point motivates us to search for a mechanism involving the strong gauge dynamics which freezes some of the fields. As we argued in the previous section, the fields associated with five dimensional vector multiplets do not participate in the strong dynamics. The superpotential generated by $E_8$ and other quantum effects in M theory will be a function of S and perhaps of the complex structure moduli, but will not depend on the fields $Y^a$. Label the fields on which it does depend $Z^A$. Then the potential takes the form given in eq. (3.2), where $G \equiv -\ln N$ and $G_a$, $G^{ab}$, etc. refer to derivatives with respect to $\mathrm{Re}\, Y^a$ ($G^{ab}$ is the inverse metric). This expression is the first term in an asymptotic expansion of the potential for large $Y^a$. The equations $F_A = 0$ have a solution at $S = \infty$, the weak coupling region referred to above. Generically, we may expect them to have a solution for finite values of S as well. When S is small, the theory is strongly coupled and the Calabi Yau volumes everywhere small (at finite $R_{11}$), and we can calculate neither the superpotential nor the Kahler potential. It is reasonable to postulate the existence of a discrete set of solutions to these k equations for k complex unknowns. Furthermore, generically, W will not vanish at these points. In regions where S is relatively large it may be a good approximation to neglect higher order terms in the superpotential, while retaining the complicated Kahler potential.
The leading term in the superpotential has the form $e^{-S/b_0}$, where $b_0$ is the first coefficient in the renormalization group beta function. The corrections are powers of $e^{-S}$ multiplied by the leading term or by 1. We now note the remarkable property [18] of the Kahler potentials for the axion multiplets, which has been widely exploited in no scale models: the term in square brackets in (3.2) vanishes identically for any W and any value of $Y^a$. As a consequence, the submanifold with $F_A = 0$ of the full moduli space is, in the current approximation, a stationary manifold of the potential, with broken supersymmetry and vanishing cosmological constant. Moreover, the scale of SUSY breaking is as yet undetermined, since it depends on the values of the $Y^a$. The $Y^a$ will be determined by terms higher order in the Y expansion of the Kahler potential. At order $1/|Y|$ we also encounter terms in the Kahler potential that involve the boundary fields. These include quarks, $Q_i$, and moduli of the gauge bundle that breaks $E_8$ to $E_6$. To this order, the Kahler potential has the form given in eq. (3.3), where h and $h_{ij}$ are homogeneous of degree minus one in $Y^a$. They can also depend on the gauge bundle moduli, $E_I$. We will assume that there is a solution of the equations $\partial h(Y, E)/\partial E_I = 0$ (3.4), which fixes the value of the gauge bundle moduli. With this assumption, there is only one term of order $Y^{-1}$ and quadratic in quarks which appears to depend on a matrix other than $h_{ij}$; it involves $L \equiv \ln N$. N is a homogeneous polynomial, so $L_a = -L_{ab} Y^b$, and the dangerous term is proportional to $Y^a h_{ij,a}$. To leading order in $Y^{-1}$, this is proportional to $h_{ij}$ itself. Thus the squark mass matrix is proportional to the matrix in the quark kinetic term and we have universality. Corrections to this will be of relative order $1/|Y| \sim 10^{-1}$ (here we use the phenomenological fit to the value of $|Y| \sim R_{11}$, since we are not yet able to calculate it theoretically). The value of the $Y^a$ will be determined by minimizing the potential with $Q_i = 0$ and $E_I$ determined by equation (3.4). This procedure will have the usual philosophical problem discussed in [2]. Minimization is achieved only by balancing terms of different orders in Y, even though Y is large. There are several differences from the analogous problem in weakly coupled string theory. There one is forced to contemplate cancellations between different exponentials of a large number. Here we have a Laurent series in $Y^a$, and |Y| must be of order 10 in order to explain the ratio between the unification scale and the Planck scale. A second contrast with the weakly coupled problem is that we seem to have solved at least one of the stability problems of the weakly coupled theory. S is presumed to be fixed in the strong coupling region by the equation $F_S = 0$. Note that this is completely compatible with the fact that the gauge theory on the $E_6$ boundary is weakly coupled at the unification scale. Unfortunately, this argument leaves us with a puzzle about the scale of SUSY breaking. In the true strong coupling regime, the superpotential generated by nonperturbative dynamics on the $E_8$ boundary is of order $M_{11}^3$. The gravitino mass is then fixed to be of order $|W/M_{11}^3|\, |Y|^{-1} M_{11} \sim 10^{15}$ GeV. In order to get the right scale of SUSY breaking, we must assume that the superpotential generated by the strong $E_8$ dynamics is of order $10^{-12}$ in eleven dimensional units.
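The no-scale cancellation invoked here is easy to check symbolically: for any Kahler potential of the form $K = -\ln N$ with N a homogeneous cubic, the Euler identities give $K^{ab} K_a K_b = 3$, which is exactly what makes the bracket in (3.2) vanish. A minimal verification for one arbitrarily chosen cubic (the identity holds for any homogeneous degree-3 N):

```python
import sympy as sp

# Verify the no-scale identity K^{ab} K_a K_b = 3 for K = -ln N, with N a
# homogeneous cubic in x^a = Re Y^a.  The specific N below is an arbitrary
# example chosen for illustration.
x1, x2, x3 = sp.symbols("x1 x2 x3", positive=True)
xs = [x1, x2, x3]
N = x1 * x2 * x3 + x1**3 + x2**2 * x3

K = -sp.log(N)
Ka = sp.Matrix([sp.diff(K, xi) for xi in xs])   # K_a
Kab = sp.hessian(K, xs)                         # K_{ab}
invariant = sp.simplify((Ka.T * Kab.inv() * Ka)[0, 0])
print(invariant)   # -> 3: the -3|W|^2 term in the supergravity potential
                   #       cancels for any W, independent of the Y^a
```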
This suggests that the coupling is not terribly strong, and very probably that the gauge group is smaller than $E_8$. For a gauge group G with k instanton zero modes in the adjoint representation, the implied G fine structure constant at the unification scale is $0.45/k$. Another problem, which loses none of its severity through familiarity, is that we do not have an explanation of the value of the cosmological constant. The no scale cancellation actually works through order $|Y|^{-1}$ but fails at higher order. The fact that the vacuum energy density will be smaller by a factor of 100 than in a typical hidden sector model with the same value of the gravitino mass is perhaps suggestive, but hardly represents a solution of the cosmological constant problem. To conclude the discussion of this scenario, we briefly note the properties of the moduli. The bulk moduli coming from S and the complex structure of X will have masses of order the gravitino mass. Their kinetic terms are of the same order as the Einstein term in the action, so their couplings to ordinary matter will be suppressed by powers of $m_4$. In Einstein frame (the frame in which the coefficient in front of the Einstein lagrangian is $m_4^2$) they will have potential energies of order $m_{3/2}^2 m_4^2$. The bulk moduli associated with the real parts of the axion multiplets have a potential which is suppressed by two powers of |Y| relative to the other bulk moduli. Thus, their mass is of order $10^{-1} m_{3/2}$, or 100 GeV. Their couplings to matter are nonrenormalizable and scale with $m_4$. The QCD axion has a mass of order $10^{-10}$ eV and decay constant of order $m_4$. Its coherent couplings to matter are further suppressed by the same factors that suppress any low energy CP violation. If $h^{1,1} > 1$ there will be more of these multiplets. Now, however, the axions will be extremely light, as noted above. If there are also boundary contributions to the QCD axion then the true axion decay constant will be $M_{11}$. We will also have a definite prediction of a very light axion which would contribute to long range spin dependent forces and very weak (compared to gravity) long range coherent forces.

Low Energy Constraints on Planck Scale Physics

The replacement of the Planck scale $M_4$ by $M_{11}$ as the threshold for as yet incalculable quantum gravitational effects sharpens the constraints on physics at ordinary scales from possible higher dimension operators. The most important such effect is the lowering of the scale of dimension five baryon number violating operators by two or three orders of magnitude. Discrete symmetries which eliminate or suppress dimension five operators become absolutely imperative. The constraint from gravitational physics is now of the same order as that conventionally quoted for grand unified models. In a similar manner, we find a new estimate for gravitational contributions to neutrino masses. We also find a stronger constraint on models which invoke pseudogoldstone bosons of accidental continuous symmetries. Previously, one argued that a renormalizable theory at scale f might spontaneously break an accidental continuous symmetry, producing a Goldstone boson with decay constant f. If gravitational physics breaks all global symmetries (this is certainly the case in string theory) we expect a Goldstone boson mass to be generated. It will be of order f times powers of $f/M_G$, with the power set by the dimension d of the leading symmetry violating operator. (Footnote 7: Usually the gauge symmetries of the renormalizable model allow operators of dimension 5 or 6, but discrete gauge symmetries can be invoked to push d to larger values. In the M theory region of moduli space, $M_G \sim 3 \times 10^{16}$ GeV.)
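The scales quoted above follow from one another by elementary arithmetic; a back-of-envelope check (all inputs are rough assumptions, accurate only at the order-of-magnitude level):

```python
# Order-of-magnitude check of the quoted scales.  Inputs are assumptions:
# M11 ~ 2e16 GeV, |Y| ~ 10, and the ~1e-12 superpotential suppression that
# the text argues is needed for an acceptable SUSY breaking scale.
M11 = 2.0e16                 # GeV
Y = 10.0                     # |Y| ~ R11 in l11 units
W = 1.0e-12                  # |W| in units of M11^3

m32 = W * M11 / Y            # gravitino mass ~ |W/M11^3| |Y|^{-1} M11
print(f"m_3/2          ~ {m32:.0e} GeV")         # ~2e3 GeV (TeV scale)
print(f"axion partners ~ {0.1 * m32:.0e} GeV")   # ~2e2 GeV, the '100 GeV'

m4 = 2.4e18                  # GeV, reduced Planck mass
lam_qcd = 0.2                # GeV
m_axion_eV = lam_qcd**2 / m4 * 1.0e9   # crude m_a ~ Lambda_QCD^2 / f_a, in eV
print(f"QCD axion mass ~ {m_axion_eV:.0e} eV")   # ~2e-11 eV, within an order
                                                 # of magnitude of the quoted
                                                 # 1e-10 eV estimate
```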
Cosmology

Here we will make only the briefest remarks about cosmology in the M theory region of moduli space. The first thing to note is that the large vacuum energy densities typical of many inflation models are uncomfortably close to the eleven dimensional Planck scale. This raises the disturbing (or perhaps exciting) possibility that the inflationary era can only be studied with the unknown machinery of quantum M theory. Indeed, in the scenario we have presented in this paper for nonperturbative physics and SUSY breaking, the natural scales of energy density in the low energy four dimensional theory are all much lower than $M_{11}$. The vacuum energy density is of course moduli dependent, so we can always imagine that inflation takes place in a region of moduli space where the energy density is close to $M_{11}^4$. We will then have to deal with the "cosmic overshoot" problem described by Brustein and Steinhardt [20]. The initial energy density of the system is much larger than the barriers that separate the inflationary region of moduli space from the extreme weak coupling region where string theory contradicts observation. In [21] it was suggested that this problem might be less severe than it had first appeared. In a region of steeply falling potential, the moduli lose energy exponentially in the distance covered by the trajectory on moduli space. It requires a detailed knowledge of the lagrangian on moduli space to determine whether the system really crosses the barrier into the weak coupling region. We also note that the natural candidates for inflatons in the M theory regime are the bulk moduli. They have self couplings which scale with powers of $m_4$, so that the natural size of the forces restoring these moduli to their equilibrium values is of the same order as gravitational friction. Assuming that we can construct a satisfactory inflationary model, we will certainly have to face a cosmological moduli problem. Many of the bulk moduli have masses of the same order of magnitude as squarks in strongly coupled heterotic string theory. Despite the replacement of $M_4$ by $M_{11}$ as the fundamental gravitational scale, $M_4$ (or perhaps $m_4$) is the parameter which determines the couplings of the moduli to ordinary matter. We will have to borrow one of the existing mechanisms for solving this problem [22][21] or come up with a new one. Note that we also have a QCD axion with Planck scale decay constant. Most mechanisms (with the notable exception of [23]) for solving the cosmological moduli problem will not help with the axion. However, the very existence of the moduli will change the nature of the axion problem. The very early universe will be cold and matter dominated, so the usual analysis of axion history above the QCD phase transition may not be relevant. It should be clear, furthermore, that the cosmology of strongly coupled heterotic string theory is considerably more complicated than models that have been considered in the literature. In addition to more or less conventional bulk moduli and QCD axion fields, the model also has boundary moduli. These have mass of order the gravitino mass, but couplings to matter suppressed only by powers of $M_{11}^{-1}$. Their reheat temperature is about 1 MeV.
We also have scalar partners for the axions, which will be a form of late decaying dark matter, and probably have to have very small density at nucleosynthesis if they are not to ruin classical cosmology. The distributions of energy among the various scalar fields may lead to a rich and complicated cosmological scenario. We will also have to sort out the question of whether the QCD axion is dominantly a boundary modulus in generic regions of moduli space in order to embark on a detailed study of the cosmology of M theory.

Conclusion

Strongly coupled heterotic string theory retains most of the attractive features of the weakly coupled region but provides a better fit to the parameters of the real world. There is no longer a discrepancy between string theory and supersymmetric coupling unification. In the strongly coupled region there is always a QCD axion and the strong CP problem is resolved. The axion decay constant violates cosmological bounds, but we view this as a challenge rather than a definitive failure of the theory. Indeed, the most serious phenomenological problem of string theory, in any region of moduli space, is the cosmological moduli problem. Several solutions to this have been proposed, but Linde's seems to be the only one which could resolve the axion problem. The axion is of course also an attractive dark matter candidate. If $h^{1,1} > 1$, the theory predicts a number of essentially stable axionlike particles with Compton wavelengths of astrophysical magnitude. If boundary moduli contribute to the QCD axion then we will certainly have at least one of these particles. Observations measuring the number of such light axions would be of the utmost interest. They would amount to measurements of the topology of the six compactified dimensions. Alternatively, if no such particles are found, and if there are boundary contributions to the QCD axion, the entire M theory region of moduli space would be ruled out. We have also proposed a scenario for SUSY breaking in the strong coupling region. The fundamental reason for the discrepancy between the Planck scale and the unification scale is the existence of a fifth "large" dimension an order of magnitude larger than the unification scale. As a consequence, certain fields of the theory exhibit an approximate 5 dimensional supersymmetry which is broken by terms of order inverse powers of the radius of the fifth dimension. There are $h^{1,1}$ chiral superfields in the low energy four dimensional theory which descend from vector multiplets in five dimensions. The axions are the imaginary parts of these fields. Approximate N = 2 SUSY produces an approximate no scale scenario for SUSY breaking, in which the F terms of the axion multiplets are the order parameters. The R symmetry breaking which triggers SUSY breaking comes from nonperturbative physics on the strongly coupled boundary. We argue that in this scenario squark degeneracy naturally arises to leading order in the inverse radius of the fifth dimension. The scenario also leads to $h^{1,1}$ scalar axion superpartners, with masses of order 100 GeV and Planck scale couplings to matter. These are a form of late decaying dark matter and are constrained by classical cosmology. The scenario is unacceptable as it stands. If we make the natural assumption that strongly coupled physics does not introduce any small parameters into the superpotential, then we predict the SUSY breaking scale to be much too large.
Otherwise, we must resort to the sort of Kahler stabilization of some of the moduli that we advocated in [1] for the regime of weakly coupled string theory. Apart from this, we must also invoke higher order terms in the expansion in the inverse radius of the fifth dimension to explain the stabilization of the radius. This is precisely the sort of procedure that was criticized in [2]. Here, however, the expansion parameter is only of order 0.1. It is plausible then that the expansion breaks down for the values of the moduli at which the minimum is achieved. It is also reasonable to use the expansion as evidence for the existence of a SUSY breaking minimum (though not of course to understand why the cosmological constant is zero). However, we do not see how to save the prediction of squark mass universality which follows from the no scale structure at large $R_{11}$. It is fairly clear from this discussion that we do not yet understand the mechanism of SUSY breaking in the M theory regime. We suspect that this may be closely connected with another phenomenological issue that has not yet been explored, the quark mass matrix. Most successful theories of the quark mass matrix are based on horizontal symmetries. In string theory, an attractive origin for horizontal symmetries has been suggested by a number of authors [24]. They originate as U(1) gauge symmetries which have Fayet-Iliopoulos D-terms. We feel certain that the dynamics of cancellation of the D-term will influence the breaking of supersymmetry and the stabilization of the moduli. Perhaps it will help to resolve some of the puzzles we have uncovered. In the long term, if the M theory region of moduli space has anything to do with the real world, the most striking feature of its phenomenology will be the low scale at which interesting gravitational phenomena become accessible. At energies of order $10^{15}$ GeV, "experiments" will reveal an extra bosonic dimension of spacetime, and discover that some of the degrees of freedom live on "the other wall of the world". At energies one or two orders of magnitude higher we will encounter true quantum mechanical manifestations of gravity and find out what M is.
2014-10-01T00:00:00.000Z
1996-05-20T00:00:00.000
{ "year": 1996, "sha1": "dbac91d918a3f92f62c301e3581675dc59f4ba55", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/9605136", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "dbac91d918a3f92f62c301e3581675dc59f4ba55", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
116992122
pes2o/s2orc
v3-fos-license
Double Transverse-Spin Asymmetries for Drell-Yan Process in pp and p\bar{p} Collisions: Role of Nonleading QCD Corrections at Small Transverse Momentum

We discuss the double-spin asymmetries in transversely polarized Drell-Yan process, calculating all-order gluon resummation corrections up to the next-to-leading logarithmic accuracy. This resummation is relevant when the transverse-momentum Q_T of the produced lepton pair is small, and reproduces the (fixed-order) next-to-leading QCD corrections upon integrating over Q_T. The resummation corrections behave differently between pp- and p\bar{p}-collision cases and are small for the latter case at the kinematics in the proposed GSI experiments. This fact allows us to predict large values of the double-spin asymmetries at GSI, using the recent empirical information on the transversity.

The double-spin asymmetry in Drell-Yan process with transversely-polarized protons, $p^\uparrow p^\uparrow \to l^+ l^- X$, for azimuthal angle φ of a lepton measured in the rest frame of the dilepton $l^+ l^-$ with invariant mass Q and rapidity y, is given by ($d\omega \equiv dQ^2\, dy\, d\phi$, $q = u, \bar u, d, \bar d, \ldots$)

$$A_{TT} = \frac{d\sigma^{\uparrow\uparrow}/d\omega - d\sigma^{\uparrow\downarrow}/d\omega}{d\sigma^{\uparrow\uparrow}/d\omega + d\sigma^{\uparrow\downarrow}/d\omega} \equiv \frac{\Delta_T d\sigma/d\omega}{d\sigma/d\omega} = \frac{\cos(2\phi)}{2}\, \frac{\sum_q e_q^2\, \delta q(x_1, Q^2)\, \delta\bar q(x_2, Q^2) + \cdots}{\sum_q e_q^2\, q(x_1, Q^2)\, \bar q(x_2, Q^2) + \cdots}, \qquad (1)$$

as the ratio of products of the relevant quark and antiquark distributions, the transversity $\delta q(x, Q^2)$ and the unpolarized $q(x, Q^2)$, and the ellipses stand for the corrections of next-to-leading order (NLO, $O(\alpha_s)$) and higher in QCD perturbation theory. The scaling variables $x_{1,2}$ represent the momentum fractions associated with the partons annihilating via the Drell-Yan mechanism, such that $Q^2 = (x_1 P_1 + x_2 P_2)^2 = x_1 x_2 S$ and $y = (1/2)\ln(x_1/x_2)$, where $S = (P_1 + P_2)^2$ is the CM energy squared of the colliding protons. Thus the transversely polarized Drell-Yan (tDY) data for (1) can provide direct access to the transversity, and it is important to clarify the role of QCD corrections in the double-spin asymmetries. It has been shown that the NLO QCD corrections for (1) are not so significant and the resulting $A_{TT}$ is less than a few percent at RHIC, similarly to the LO estimates (see [2]). This reflects that the sea-quark region is probed at RHIC for $Q^2 \sim 10\ \mathrm{GeV}^2$, where the denominator in (1) is enhanced with small $x_{1,2}$. Now, when the transverse momentum $Q_T$ of the final dilepton is also observed in tDY, we obtain the double-spin asymmetry at a measured $Q_T$, as the ratio of the $Q_T$-differential cross sections, $A_{TT}(Q_T) = (\Delta_T d\sigma/d\omega dQ_T)/(d\sigma/d\omega dQ_T)$. In principle, the relevant parton distributions in this asymmetry may be controlled by the new scale $\sim Q_T$, in contrast to Q in (1). The small-$Q_T$ case is important because the bulk of events is produced for $Q_T \ll Q$. In this case, the cross sections $(\Delta_T)\, d\sigma/d\omega dQ_T$ receive the large perturbative corrections with logarithms $\ln(Q^2/Q_T^2)$ multiplying $\alpha_s$ at each order by the recoil from gluon radiations, which have to be resummed to all orders [2].
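The kinematic content of these definitions is simple to make explicit. A small helper (the beam energies below are illustrative choices, not official machine parameters) shows why RHIC probes the sea region while the GSI proposals discussed later probe valence momentum fractions:

```python
import math

# Momentum fractions of the annihilating partons, from Q^2 = x1 x2 S and
# y = (1/2) ln(x1/x2):  x_{1,2} = (Q / sqrt(S)) * exp(+-y).
def x12(Q, y, sqrt_S):
    tau = Q / sqrt_S
    return tau * math.exp(y), tau * math.exp(-y)

print(x12(4.0, 0.0, 14.5))    # GSI-type:  (0.28, 0.28), valence region
print(x12(5.0, 0.5, 200.0))   # RHIC-type: (0.041, 0.015), sea-quark region
```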
It has been shown that the NLO QCD corrections for (1) are not so significant and the resulting A_TT is less than a few percent at RHIC, similarly to the LO estimates (see [2]). This reflects that the sea-quark region is probed at RHIC for Q² ≲ 10 GeV², where the denominator in (1) is enhanced at small x_{1,2}. Now, when the transverse momentum Q_T of the final dilepton is also observed in tDY, we obtain the double-spin asymmetry at a measured Q_T as the ratio of the Q_T-differential cross sections, A_TT(Q_T) = (Δ_T dσ/dω dQ_T)/(dσ/dω dQ_T). In principle, the relevant parton distributions in this asymmetry may be controlled by the new scale ∼ Q_T, in contrast to Q in (1). The small-Q_T case is important because the bulk of events is produced for Q_T ≪ Q. In this case, the cross sections (Δ_T)dσ/dω dQ_T receive large perturbative corrections with logarithms ln(Q²/Q_T²) multiplying α_s at each order, from the recoil against gluon radiation, which have to be resummed to all orders [2].

As a result, we obtain the resummed expression (2) (b₀ ≡ 2e^{−γ_E}, with γ_E the Euler constant), in which the numerator and denominator are, respectively, reorganized in the impact-parameter b space in terms of the Sudakov factor e^{S(b,Q)} resumming soft and flavor-conserving collinear radiation, while the ellipses involve the remaining contributions of the O(α_s) collinear radiation, which can be absorbed into the exhibited terms as δq → Δ_T C_{qq} ⊗ δq, q → C_{qq} ⊗ q + C_{qg} ⊗ g, using the corresponding coefficient functions (Δ_T)C_{ij}; no gluon distribution appears in the numerator of (2), similarly as in (1), because of the chiral-odd nature. Using the universal Sudakov exponent S(b,Q) with the first nonleading anomalous dimensions in (2), the first three towers of large logarithmic contributions to the cross sections, α_s^n ln^{2n−1}(Q²/Q_T²)/Q_T², α_s^n ln^{2n−2}(Q²/Q_T²)/Q_T², and α_s^n ln^{2n−3}(Q²/Q_T²)/Q_T², are resummed to all orders in α_s, yielding the next-to-leading logarithmic (NLL) resummation. In addition to these NLL resummed components relevant at small Q_T, the ellipses in (2) also involve the other terms of fixed order α_s, which treat the LO processes in the large-Q_T region, so that (2) is the ratio of the NLL+LO polarized and unpolarized cross sections. We include a Gaussian smearing, as usual, as S(b,Q) → S(b,Q) − g_NP b², corresponding to intrinsic transverse momentum. The integration of these NLL+LO cross sections Δ_T dσ/dω dQ_T, dσ/dω dQ_T over Q_T coincides [2] with the NLO cross sections Δ_T dσ/dω, dσ/dω, respectively, associated with A_TT of (1); thus the NLO parton distributions have to be substituted into (2) as well as (1). The resummation indeed makes 1/b ∼ Q_T the relevant scale.

The numerical evaluation of (2) at NLL+LO with RHIC and J-PARC kinematics has revealed [2] that, in the small and moderate Q_T region, A_TT(Q_T) is governed by the NLL resummed component and is almost constant as a function of Q_T, reflecting the universality of the large Sudakov effects. The results show A_TT(Q_T) > A_TT, because the denominator of (2) is not enhanced for Q_T ≪ Q compared with that of the corresponding NLO A_TT of (1), and also show the tendency that A_TT(Q_T) with resummation at higher level yields the larger value. Using the NLO transversities that saturate the Soffer bound, 2|δq(x, µ²)| ≤ q(x, µ²) + Δq(x, µ²), at a low scale µ, with Δq the helicity distribution, the NLO value of (1) at φ = 0 is ∼4% and ∼13% for typical kinematics at RHIC and J-PARC, respectively, and the NLL+LO A_TT(Q_T) at small Q_T using the same transversity is larger than the NLO A_TT by about 20-30% [2]. It is also worth noting that, for Q_T ≈ 0, the b integral of (2) is controlled by a saddle point b = b_SP, which has the same value between the numerator and denominator in (2) at NLL accuracy [2]; combined with the almost constant behavior of A_TT(Q_T) mentioned above, this yields, for the small-Q_T region,

$$A_{TT}(Q_T) \simeq \frac{\cos(2\phi)}{2}\,\frac{\sum_q e_q^2\,\delta q(x_1, b_0^2/b_{SP}^2)\,\delta\bar{q}(x_2, b_0^2/b_{SP}^2)}{\sum_q e_q^2\,q(x_1, b_0^2/b_{SP}^2)\,\bar{q}(x_2, b_0^2/b_{SP}^2)}, \qquad (3)$$

omitting the small corrections from the LO components involved in the ellipses in (2). The saddle-point evaluation does not lose the NLL accuracy of (2); in particular, the O(α_s) contributions from the coefficients (Δ_T)C_{ij}, e.g. those with the gluon distribution in the denominator, completely decouple as Q_T → 0. Remarkably [2], b₀/b_SP ≃ 1 GeV, irrespective of the values of Q and g_NP. The formula (3) allows quantitative evaluation of (2) to good accuracy, and embodies the above features of A_TT(Q_T) in a compact form.
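For orientation, the b-space structure that (2) refers to can be sketched as follows. This is a schematic reconstruction assuming the standard CSS-type resummation form, not the paper's exact expression; the overall normalization, the fixed-order large-Q_T terms, and the O(α_s) coefficient-function corrections are suppressed:

$$\frac{\Delta_T\, d\sigma}{d\omega\, dQ_T^2} \;\propto\; \cos(2\phi) \sum_q e_q^2 \int_0^\infty \frac{db\, b}{2}\, J_0(b\, Q_T)\, e^{S(b,Q)}\, \delta q\!\left(x_1, \frac{b_0^2}{b^2}\right) \delta\bar{q}\!\left(x_2, \frac{b_0^2}{b^2}\right) + \cdots,$$

with the analogous unpolarized combination in the denominator of (2).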
Next we discuss the pp̄-collision case, p^↑ p̄^↑ → l⁺l⁻X; here and below, the formal interchange δq̄(x₂) ↔ δq(x₂), q̄(x₂) ↔ q(x₂), for the distributions associated with the variable x₂, should be understood in the relevant formulae (1)-(3) for the asymmetries. Thus this case allows us to probe the product of two quark transversities, in particular the valence-quark transversities in the region 0.2 ≲ x_{1,2} ≲ 0.7 in the proposed polarization experiments at GSI (see e.g. [3]). When the transverse momentum Q_T is unobserved, one obtains A_TT of (1): for this asymmetry at GSI, the NLO (O(α_s)) corrections, as well as the higher order corrections beyond them in the framework of threshold resummation, are shown to be rather small, so that the LO value of A_TT, which turns out to be large, is rather robust [3]. We now consider the QCD corrections at a measured Q_T, calculating A_TT(Q_T) of (2), (3) at GSI kinematics. The numerical evaluation of (2), using the transversity distributions corresponding to the Soffer bound, which are the same as in the pp-collision case discussed above, shows [4] that the NLL resummed component dominates A_TT(Q_T) in the small and moderate Q_T region, such that A_TT(Q_T) is almost constant, with even flatter behavior than in the pp case. It is also demonstrated that A_TT(Q_T) at NLL+LO has almost the same value as that at LL; i.e., in contrast to the pp case, resummation at higher level does not enhance the asymmetry. We note here that A_TT(Q_T) at LL is given by (2) omitting all nonleading corrections, i.e., omitting the ellipses, replacing S(b,Q) by that at the LL level, and replacing the scale of the parton distributions as b₀²/b² → Q², so that the result coincides with A_TT of (1) at LO. Combined with the above-mentioned property of A_TT, we obtain, for Q_T ≪ Q at GSI,

$$A_{TT}(Q_T) \simeq A_{TT}, \qquad (4)$$

with the large value of the asymmetry, which is quite stable when including the QCD (resummation and fixed-order) corrections.

To clarify the reason behind this remarkable difference between the pp- and pp̄-collision cases, the saddle-point formula (3) is useful. The simple form of (3) is reminiscent of A_TT of (1) at LO, but differs from the latter only in the unconventional scale b₀²/b_SP². In fact, this scale, b₀²/b_SP² ≃ 1 GeV² (≪ Q²) at all GSI kinematics as determined by the saddle point, completely absorbs the nonuniversal effects associated with nonleading (NLL) level resummation, because A_TT^LL(Q_T) = A_TT^LO as noted above. In the valence region 0.2 ≲ x_{1,2} ≲ 0.7 relevant for GSI kinematics, the u-quark contribution dominates in (3) and (2), so that these asymmetries are controlled by the ratio of the u-quark distributions, δu(x_{1,2}, µ²)/u(x_{1,2}, µ²), with µ² = b₀²/b_SP² and Q², respectively. It is straightforward to see that the scale dependence in this ratio almost cancels between the numerator and denominator in the valence region (see Fig. 3 in [4]), implying (4) at GSI; this is not the case for pp collisions at RHIC and J-PARC, because of the very different behavior of the sea-quark components under the evolution of transversity and unpolarized distributions [2]. A similar logic applied to (2) also explains why A_TT(Q_T) in pp̄ collisions at GSI is flatter than in pp collisions, as mentioned above. Another consequence of the similar logic is that δu(x, 1 GeV²)/u(x, 1 GeV²) as a function of x directly determines the Q- as well as S-dependence of the value of (4) at GSI, with x_{1,2} = (Q/√S) e^{±y}.
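As an illustration of this u-quark-dominance approximation, the following sketch evaluates (3) with the sum restricted to the u quark at the saddle-point scale. The parton distributions `delta_u` and `u` below are hypothetical toy parameterizations, not the NLO sets used in the paper:

```python
# Hedged sketch of the u-quark-dominance estimate: at small Q_T the asymmetry
# is approximately controlled by delta_u/u at the saddle-point scale
# b0^2/b_SP^2 ~ 1 GeV^2. The distributions below are toy stand-ins.
import math

def delta_u(x, mu2=1.0):   # hypothetical u-quark transversity
    return 0.5 * x**0.7 * (1 - x)**3

def u(x, mu2=1.0):         # hypothetical unpolarized u-quark distribution
    return x**0.5 * (1 - x)**3

def att_small_qt(x1, x2, phi=0.0, mu2=1.0):
    """u-quark-dominance approximation to A_TT(Q_T) at small Q_T."""
    ratio = (delta_u(x1, mu2) * delta_u(x2, mu2)) / (u(x1, mu2) * u(x2, mu2))
    return 0.5 * math.cos(2 * phi) * ratio

print(att_small_qt(0.4, 0.4))  # ~0.09 with these toy inputs
```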
In Fig. 1, using the NLO transversity distributions corresponding to the Soffer bound, the symbols "△" plot A_TT(Q_T) of (2) at NLL+LO as a function of Q, with y = φ = 0 and Q_T ≃ 1 GeV, in the fixed-target (S = 30 GeV²) and collider (S = 210 GeV²) modes at GSI [4]. The dashed curve draws the result using (3); this simple formula indeed works well. Also plotted, by the two-dot-dashed curve, is A_TT of (1) at LO with the transversities corresponding to the Soffer bound at LO level, to demonstrate (4). The Q- and S-dependence of these results reflects that the ratio δu(x, 1 GeV²)/u(x, 1 GeV²) is an increasing function of x for the present choice. These results using the Soffer bound show the "maximally possible" asymmetry, i.e., an optimistic estimate. A more realistic estimate of (2) and (3) is shown [4] in Fig. 1 by the symbols "▽" and the dot-dashed curve, respectively, with the NLO transversity distributions assuming δq(x, µ²) = Δq(x, µ²) at a low scale µ, as suggested by various nucleon models and favored by the results of the empirical fit for transversity [5]. The new estimate gives smaller asymmetries compared with the Soffer-bound results, because the u-quark transversity is considerably smaller, but still yields rather large asymmetries [4]. Based on (4), these results also give an estimate of A_TT of (1). At present, empirical information on transversity is based on the LO global fit, using the semi-inclusive deep inelastic scattering data and assuming that the antiquark transversities in the proton vanish, δq̄(x) = 0, so that the corresponding LO parameterization is available only for u and d quarks [5]. Fortunately, however, the dominance of the u-quark contribution at GSI kinematics allows quantitative evaluation of A_TT at LO using only this empirical information [4]: the upper limit of the one-sigma error bounds for the u- and d-quark transversities obtained by the global fit [5] yields the "upper bound" of A_TT shown by the dotted curve in Fig. 1. Using (4), this result would also represent an estimate of A_TT(Q_T). In the small-Q region, our full NLL+LO result for A_TT(Q_T), shown by "▽", can be consistent with the estimate using the empirical LO transversity, but these results have rather different behavior with increasing Q, because the u-quark transversity for the former lies, for x ≳ 0.3, slightly outside the one-sigma error bounds of the global fit [4]. Thus, the asymmetries to be observed at GSI, in particular the behavior of A_TT(Q_T) as well as A_TT as functions of Q, will allow us to determine the detailed shape of transversity distributions. Other interesting DY spin asymmetries at GSI are the longitudinal-transverse asymmetry A_LT [6] and the single transverse-spin asymmetry [7], which are sensitive to twist-3 effects inside the proton. This work was supported by the Grant-in-Aid for Scientific Research No. B-19340063.
GABAB Encephalitis: A Fifty-Two-Year-Old Man with Seizures, Dysautonomia, and Acute Heart Failure

Autoantibodies to the γ-aminobutyric acid receptor, subtype B (GABAB), are a known cause of limbic encephalitis. The spectrum of clinical manifestations attributable to this antibody is not well defined at the present time. Here we present a case of GABAB encephalitis presenting with encephalopathy, status epilepticus, dysautonomia, and acute heart failure. To our knowledge, heart failure and dysautonomia have not yet been reported with this syndrome.

Introduction
GABAB encephalitis refers to limbic encephalitis caused by autoantibodies to the γ-aminobutyric acid receptor, subtype B (GABAB). Its clinical presentation is similar to other limbic encephalitides and Morvan syndrome, with psychiatric symptoms and seizures predominating. However, the range of clinical manifestations attributable to this antibody has not yet been fully described, owing primarily to its recent discovery and its rarity. Here, we present a case of GABAB encephalitis with additional components of acute heart failure and dysautonomia.

Case Presentation
A previously healthy 52-year-old Caucasian man was admitted to our hospital with a subacute, progressive syndrome of refractory seizures, psychosis, dysautonomia, and encephalopathy. He initially presented to an outside facility with new-onset seizures, but after multiple hospitalizations, and despite two antiseizure medications, the patient continued to have breakthrough seizures. Two weeks later, he gradually developed amnesia, cognitive difficulties, visual hallucinations, paranoia, and anxiety, requiring a readmission to evaluate and treat a presumed primary psychiatric condition. In spite of one month of antiepileptic drug adjustments he continued to have breakthrough seizures, prompting transfer to our institution. On exam he was somnolent with poor attention. He was oriented to self, location, and year but was unable to perform basic arithmetic; the remainder of his neurologic exam was nonfocal. An infectious etiology was investigated, which included blood, urine, tracheal aspirate, and CSF cultures, but was negative. His vital signs were persistently abnormal during the first ten days after his transfer: temperature up to 38.3°C, respiratory rate up to 32 breaths per minute, and sustained heart rates up to 122 beats per minute. The patient's hospital course was further complicated by heart failure and hypotension, necessitating critical care monitoring and an epinephrine infusion. On presentation to the intensive care unit his troponin I was 0.26 ng/mL, which downtrended to 0.15 ng/mL and was undetectable within 24 hours (the lower limit of detection on our assay is 0.03 ng/mL). Electrocardiograms revealed a supraventricular tachycardia; there were intermittent episodes of atrial flutter with 2:1 atrioventricular nodal conduction block and atrial fibrillation with rapid ventricular response (Figure 1). Furthermore, a transthoracic echocardiogram demonstrated severe mitral regurgitation, depressed left ventricular function, and an ejection fraction of 26%. Amiodarone and metoprolol were consequently started, with return to normal sinus rhythm. A serum paraneoplastic autoantibody panel was sent; this panel utilizes indirect immunofluorescence on animal brain slices to screen for antibodies reactive to brain antigens. Positive results are further characterized, and reflex tests for other autoreactive antibodies are performed based on the staining pattern.
Reflex autoantibody tests include those against the NMDA receptor, AMPA receptor, and GAD-65, which were not detected; therefore direct testing for these autoantibodies did not occur. Other relevant antibodies with this presentation are anti-LGI1 and anti-GABAA; however, these were not screened or tested. Negative antibodies on this panel were ANNA-1, ANNA-2, ANNA-3, anti-glial nuclear antibody, anti-Purkinje cell cytoplasmic antibody types 1, 2, and Tr, anti-amphiphysin, and anti-CRMP-5. An autoimmune workup was negative for ENA and ANCA, but with a mildly positive ANA (1:160). Anti-thyroid peroxidase and thyroglobulin antibodies were elevated at 2910 units/mL and 4.8 ng/mL, respectively. These latter two antibodies are increasingly being appreciated as nonspecific markers of autoimmune processes in what is often called "steroid responsive encephalopathy." Thyroid stimulating hormone was elevated at 5.78 mIU/L, but free T4 was normal at 1.53 ng/dL. Whole-body CT and PET scans showed no evidence of malignancy but did reveal markedly increased FDG uptake within the medial left temporal lobe (Figure 2). The patient was initially treated with high-dose IV methylprednisolone at 1 gram per day for six days, in addition to plasma exchange. Shortly after treatment there was decreased seizure frequency and continued maintenance of normal sinus rhythm. He was given a dose of rituximab and started on a twelve-week prednisone taper. His encephalopathy and psychosis were slower to resolve, requiring intermittent symptomatic treatment. At the time of discharge he had no electrographic evidence of epileptic activity on a regimen of lacosamide, levetiracetam, carbamazepine, and scheduled lorazepam. A repeat transthoracic echocardiogram demonstrated resolution of systolic heart failure with a normal ejection fraction of 67%. One month after discharge, a repeat brain MRI revealed a decrease in the left temporal FLAIR signal; two months after discharge, MRI revealed complete resolution.

Discussion
GABAB encephalitis occurs relatively infrequently; however, the characterization of the antibody and clinical phenotype has occurred recently. Interestingly, it is associated with small cell lung cancer in 50-80% of patients from recent case series [1]. This is the most common neoplasm described with this syndrome and implies that surveillance should continue for several years after the onset of the encephalitis. By contrast, dysautonomia occurs more frequently in other limbic encephalitides, such as that associated with anti-NMDA autoantibodies [2]. Höftberger et al. have previously reported autonomic dysfunction in one of twenty patients with GABAB syndrome [3]. We are not aware of acute heart failure in the context of GABAB encephalitis, though there is at least one report of Takotsubo cardiomyopathy in a patient with limbic encephalitis [4]. This was presumed to be a paraneoplastic process associated with B cell lymphoma, but unfortunately an autoantibody was not identified. Cardiogenic shock and heart failure have also been seen in rhombencephalitis caused by enterovirus 71 [5]. In that report there was no PCR evidence of enterovirus 71 in seven hearts examined for pathology. Similarly, there was no significant cardiac inflammatory infiltrate, all suggesting that heart failure was due to a neurogenic mechanism rather than myocarditis. The cardiac dysfunction seen in these cases globally affected the left ventricle, similar to the patient described in the present report.
Although the known etiologies of supraventricular tachycardia and acute heart failure are numerous and varied, the clinical circumstances in this case strongly suggest that they were a consequence of neurologic injury. Three points, when taken together, support this: (1) GABAB receptors are transcribed in the fetal but not the adult heart and are thought to be found primarily in the nervous system, though they have recently been described in smooth muscle of the human aorta [6,7]; (2) the cardiomyopathy persisted until the patient underwent immunosuppressive treatment and then fully resolved with treatment; (3) few etiologies of his cardiomyopathy were identified which would be expected to resolve with immunosuppression or spontaneously, though autoimmune myocarditis and tachycardia-induced cardiomyopathy are among them. In our patient there was hypermetabolism in the left medial temporal lobe, determined by FDG PET imaging, which is suggestive of inflammation. This is consistent with limbic encephalitides that have been reported in the literature. While common, it is not a uniform finding. Indeed, hypermetabolism has been reported in the bilateral medial temporal lobes, occipital lobes, frontal lobes, and cerebellum. There are also cases of hypometabolism, though this seems to occur in older patients and may be associated with concurrent neurodegenerative processes [8]. The specific brain structures that are inflamed are expected to relate to the clinical manifestations of the limbic encephalitis. For example, our patient had prominent difficulty with cognitive function, seizures, and hallucinations, all of which could be attributed to the temporal lobe or its connections. Similar presentations of GABAB encephalitis have been described in a recent large case series [3]. The connections between the medial temporal lobe and the insular cortex provide one pathway by which limbic encephalitis can lead to dysautonomia [4,9]. The insular cortex is known to modulate autonomic pathways from the brain to the heart. For example, cardiac complications such as arrhythmias, myocardial infarction, and heart failure are reported after left insular stroke as well as intracerebral and subarachnoid hemorrhages [10,11]. Additionally, other stroke locations, such as hemispheric and basal ganglia, as well as epilepsy, have also been associated with heart failure, suggesting that there are many connections involved in neural regulation of the heart [12,13]. Further support for a neurally mediated cardiac injury in this patient comes from animal studies, in which baclofen injection into the nucleus of the solitary tract produced hypertension and tachycardia and inhibited the depressor baroreflex response [14]. Others have similarly found that vagal inhibition by the solitary tract is mediated by both GABAA and GABAB receptors [15]. One mechanism of heart failure and dysautonomia in this case may be blockade of GABAB receptors in the brainstem, ultimately leading to failure of vagal inhibition of cardiac function. Another possibility is that this was tachycardia-associated cardiomyopathy, which in turn could be due to dysautonomia caused by the limbic encephalitis. Alternative explanations for our patient's heart failure include direct antibody-mediated effects on the myocardium. However, there are no reports of GABA receptors in the adult myocardium. Also, anti-thyroid peroxidase antibodies are not associated with heart failure [16,17].
Nonetheless, we can only speculate about the precise pathogenesis of the heart failure. A cardiac MRI or myocardial biopsy would have been helpful in identifying an autoimmune myocarditis, due either to anti-GABAB antibodies binding to sites other than the GABAB receptor or to additional antibodies that we did not detect. Fortunately, the patient's cardiac function was improving with treatment and we did not feel that further diagnostic testing of the heart would change management; therefore it was not pursued. Takotsubo cardiomyopathy, the cardiomyopathy most widely recognized to be associated with neurologic injury, is thought to be mediated by increased levels of circulating catecholamines and, in turn, increases in intracellular calcium and in systemic vascular resistance and afterload [12,18]. Takotsubo cardiomyopathy is primarily characterized by apical ballooning, a feature that was absent in this case [19]. As such, this entity is unlikely in our patient, who instead had global systolic dysfunction. In this report we highlight two uncommon complications of GABAB encephalitis: acute heart failure and dysautonomia. To our knowledge, cardiomyopathy has not previously been recognized as a manifestation of this condition, and dysautonomia is uncommon [20]. GABAB encephalitis is associated with a varied clinical phenotype. We believe that this report further expands the clinical manifestations of this relatively uncommon syndrome.
DEVELOPMENT OF NEW AIRBORNE LASER SCANNING METHOD BY MEANDERING FLIGHT

Japan has many meandering rivers in its mountainous areas, and many hazards have occurred in their surroundings. Airborne laser scanning (ALS) is one of the measures for disaster prevention in the surroundings of a meandering river. In Japan, ordinary ALS using both fixed wing and rotary wing airplanes adopts flying along straight lines over a target area. Although ALS along straight lines is effective when a target area is planar, ALS along straight lines for a meandering river in a mountainous area increases the number of flying courses and the flight time. On the other hand, although ALS by a meandering flight along a target meandering river would be efficient in data acquisition, it depends on the skill of the pilot and brings difficulty in data processing to secure measuring accuracy. We therefore decided to develop a new efficient ALS method by a meandering flight. It systematizes flight planning, GCP allocation, and data processing, especially course adjustment, to secure the required measuring accuracy. After conducting preliminary experiments in the test area, measuring accuracy was verified following the operation guidelines for Japanese public surveying established by the Ministry of Land, Infrastructure, Transport and Tourism of Japan. The results indicated that the accuracy of a meandering flight would be almost the same as that of a straight-line flight and would meet the operation guidelines for Japanese public surveying.

INTRODUCTION
In the last decade, Japan has experienced several large-scale disasters. On March 11, 2011, an extremely large earthquake, later named the Great East Japan Earthquake, occurred. Since March 2011, there have been three more major earthquakes: two in Kumamoto in April 2016 and one in Hokkaido in September 2018. In recent years, storms and floods such as river flooding due to typhoons and torrential downpours have been occurring frequently and becoming even more severe. Figure 1 shows river flooding due to torrential downpours in Fukuoka in June 2017. For pre-disaster prevention and quick recovery from large-scale disasters, the Japanese Government established the Basic Act for National Resilience Contributing to Preventing and Mitigating Disasters for Developing Resilience in the Lives of the Citizenry in 2013. Moreover, the Government has formulated the Three-Year Emergency Measures for Increasing the Resilience of the National Territory (2019-2021) as a measure against intensifying disasters. The national measures adopt the most advanced surveying technologies as an effective measure, and these are utilized corresponding to the stages: (1) prior disaster prevention, (2) emergency measures in the event of a disaster, (3) restoration and reconstruction measures, and so on. The Ministry of Land, Infrastructure, Transport and Tourism of Japan is now working on a number of measures for pre-disaster prevention and quick recovery. Airborne laser scanning (ALS) is one of these measures. ALS is effective not only for pre-disaster prevention but also for grasping the situation after a disaster. Japan is a small and mountainous country. Channel extensions of most rivers in Japan are short and their longitudinal bed slopes are steep. The upper reaches of many rivers in Japan are located in steep mountainous areas, and most of the rivers meander among mountains. Disasters such as river flooding and landslides have occurred in the surroundings of meandering rivers.
ALS has been utilized for river management (Yoshida et al., 2017) and slope failure surveys (Hiramatsu et al., 2017) in Japan. Ordinary ALS is conducted with both fixed wing and rotary wing airplanes flying along straight lines over a target area. ALS along straight lines is effective when the target area is planar, since the target area and survey area match. However, for ALS along straight lines over a meandering area (e.g. rivers, roads, etc.), the number of flying courses and the flight time increase compared to flying the same area as a planar measurement, leading to inefficiency. Conducting ALS along a meandering feature with a meandering flight would be efficient for data acquisition. Nevertheless, since issues exist regarding pilot skills and data measurement accuracy, there have been few reports focusing on ALS by a meandering flight. Accordingly, we decided to develop a new efficient ALS analysis method for meandering flights. It should systematize flight planning, GCP allocation, and data processing for course adjustment to secure the required measuring accuracy. We conducted two experiments: a preliminary experiment to investigate the possibility of ALS by a meandering flight, and a practical experiment to investigate the feasibility of adopting ALS by a meandering flight in Japanese public surveying. This paper reports the experiments conducted for the development of a new efficient ALS method by a meandering flight.

Outline of the preliminary experiment
We conducted a preliminary experiment to investigate the possibility of ALS by a meandering flight at the upper reaches of the Tama River, one of Japan's Class A rivers, in Ome City, Tokyo Metropolis. Figure 2 shows the target area, which is surrounded by 400-meter-high mountains on both sides of the river. We used a Leica Chiroptera II in the experiment; Table 1 shows its specifications. The Chiroptera II has two observation modes: a topographic mode using an infrared laser and a bathymetric mode using a green laser. We executed data acquisition using the topographic mode. Since a meandering flight by a fixed wing airplane is difficult, we adopted a helicopter (Aerospatiale AS350) as the platform in the preliminary experiment. We conducted two sets of observations on June 25, 2018. One was conducted by a flight along straight lines and the other by a flight along the meandering river, as Figure 3 shows. Table 2 shows the LiDAR surveying specifications of the flights.

Experiment results
Experiment results were evaluated following the general standard of operation specifications for Japanese public surveying (hereinafter referred to as the Japanese general standard) established by the Ministry of Land, Infrastructure, Transport and Tourism of Japan.

Point density: Point density acquired by ALS is usually evaluated by a data-missing rate in Japan. The data-missing rate is calculated as the proportion of meshes whose point density does not satisfy the required point density, and it is evaluated over every 2 km x 1.5 km rectangle. A minimal sketch of this calculation is given below.
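The following is an illustrative sketch of the data-missing-rate calculation just described (an assumed implementation, not the authors' software; the function and parameter names are hypothetical):

```python
# Sketch of the data-missing rate: the fraction of grid meshes whose point
# count falls below the required density, over a verification rectangle.
import numpy as np

def data_missing_rate(points_xy, x0, y0, width, height, mesh, required_pts):
    """points_xy: (N, 2) array of point coordinates in metres."""
    nx, ny = int(np.ceil(width / mesh)), int(np.ceil(height / mesh))
    ix = ((points_xy[:, 0] - x0) // mesh).astype(int)
    iy = ((points_xy[:, 1] - y0) // mesh).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    counts = np.zeros((nx, ny), dtype=int)
    np.add.at(counts, (ix[ok], iy[ok]), 1)        # points per mesh
    return float(np.mean(counts < required_pts))  # proportion of failing meshes

# Example: a synthetic cloud at ~10 points/m^2 over a 200 m x 150 m block,
# checked against the 4 points/m^2 requirement on a 1 m mesh.
rng = np.random.default_rng(0)
pts = rng.uniform([0, 0], [200, 150], size=(300_000, 2))
print(data_missing_rate(pts, 0, 0, 200, 150, mesh=1.0, required_pts=4))
# ~0.01, i.e. well under the 15% threshold cited above
```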
We verified with 1 point/0.25 m² (mesh: 0.5 m x 0.5 m), 4 points/m² (mesh: 1 m x 1 m), and 10 points/m² (mesh: 1 m x 1 m), since these are the most common required specifications in Japan. According to the Japanese general standard, if the mesh is smaller than 1 m x 1 m, the data-missing rate should be lower than 15 percent. Table 3 (data-missing rate in the preliminary experiment) shows the results of the verification of point density. It is clear that the results for all three patterns of mesh size satisfied the required value.

Elevation verification accuracy of check points: We used six ground control points (GCPs) for adjustment of the obtained point clouds. Four of them were located at the four corners of the target area, while two were located around the centre of the target area. In addition, we set up 14 check points for evaluation of the quality of the obtained point clouds. Figure 4 shows the locations of the GCPs and check points. To evaluate observed elevation, we extracted observed points within a 0.5 m radius of each check point and calculated the mean of the selected points as the observed elevation of the check point. We compared the elevation of each check point obtained by ALS with that obtained by GNSS surveying. Table 4 shows statistics of the elevation differences of the 14 check points between ALS and GNSS surveying, and indicates that the elevation accuracy of check points in ALS by a meandering flight would be nearly equal to that in ALS by a straight-line flight. The Japanese general standard requires that the absolute value of the mean of the elevation differences of check points be smaller than 0.25 m, or that the RMSE be smaller than 0.25 m; Table 4 indicates that both ALS by a meandering flight and ALS by a straight-line flight satisfied this requirement.

Elevation verification accuracy between flying courses: We selected 10 examination points in 10 flat areas where adjacent flying courses overlapped, in order to evaluate elevation differences between flying courses. We extracted observed points within a 0.5 m radius of each examination point and calculated the mean of the selected points as the elevation of the examination point. Table 5 (elevation accuracy between adjacent flying courses in the preliminary experiment) shows statistics of the elevation differences of the 10 examination points between adjacent flying courses. According to the Japanese general standard, the absolute value of the mean of the elevation differences of examination points between adjacent flying courses should be smaller than 0.30 m. Table 5 indicates that the results for all flying courses satisfied this requirement for both ALS by a straight-line flight and ALS by a meandering flight. A sketch of this radius-based elevation evaluation is given below.
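The radius-based evaluation used for both check points and examination points can be sketched as follows (an assumed implementation, not the authors' workflow; names are hypothetical):

```python
# Sketch: mean elevation of cloud points within 0.5 m of each check point,
# compared against the GNSS-surveyed elevation via mean error and RMSE.
import numpy as np

def checkpoint_errors(cloud_xyz, checkpoints_xyz, radius=0.5):
    """cloud_xyz: (N, 3) point cloud; checkpoints_xyz: (M, 3) GNSS points."""
    errors = []
    for cx, cy, cz in checkpoints_xyz:
        d2 = (cloud_xyz[:, 0] - cx) ** 2 + (cloud_xyz[:, 1] - cy) ** 2
        nearby = cloud_xyz[d2 <= radius ** 2]
        if len(nearby):  # mean observed elevation minus GNSS elevation
            errors.append(nearby[:, 2].mean() - cz)
    errors = np.asarray(errors)
    return float(errors.mean()), float(np.sqrt((errors ** 2).mean()))

# The Japanese general standard cited above requires |mean| < 0.25 m for
# check points (or RMSE < 0.25 m), and |mean| < 0.30 m between courses.
```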
Discussion
From the results of the preliminary experiment, we found that the following points should be considered for ALS by a meandering flight. If those conditions are satisfied, an improvement in the data accuracy of ALS by a meandering flight can be expected. Figure 5 shows that the deviation of the actual trajectory from the planned flying course tends to be large around a sharp curve in the meandering flight. In proportion to the deviation of the trajectory, the observed point density in ALS by the meandering flight tends to be uneven, as Figure 6 shows. For ALS by a straight-line flight, since the flight was planned so that the overlapping ratio of adjacent flying courses should be 50%, the observed point density was uniform. For ALS by a meandering flight, on the other hand, flight attitude control is difficult due to the change in wind direction when the aircraft enters a sharp curve of nearly a U-turn, and this may have affected the observed data.

Flight planning: Moreover, the PDOP (position dilution of precision) value, which indicates degradation of the position accuracy of GNSS, becomes worse for a meandering flight than for a straight-line flight. The maximum value of PDOP was 2.0 in the straight-line flight, while it was 2.9 in the meandering flight. The cause of the degradation would be the slower flight speed around sharp curves in the meandering flight. The degradation of the position accuracy of GNSS worsens the accuracy of the observed data. Therefore, it is preferable to divide a flying course into two or more flying courses so that all curves are smaller than 90 degrees. Dividing a flying course makes the point density uniform and secures GNSS and IMU accuracy.

Adjustment between flying courses: In the case of ALS by a straight-line flight, adjustment is performed for each flying course in order to remove relative differences between adjacent flying courses. In the case of ALS by a meandering flight, on the other hand, as mentioned above, the position of the aircraft varies within a flying course, and the accuracy of the measured position depends on the location within the course. Differences between adjacent flying courses in a meandering flight cannot be completely eliminated with the same method as for a straight-line flight, as Figure 7a shows. Therefore, we decided that a flying course should be divided further. Our preprocessing software creates a point cloud file for every software-specified data volume. Accordingly, we tried performing adjustment between point cloud files. In the preliminary experiment, a flying course was divided into approximately 100 files. Owing to this additional process, the courses no longer show large differences, as Figure 7b shows. Unfortunately, the additional processing requires two to three times the work compared with course adjustment in ALS by a straight-line flight.

GCP allocation: In ordinary ALS by a straight-line flight, we allocate GCPs at the four corners of a block or at overlapped areas. However, according to the findings mentioned in Subsections 2.3.1 and 2.3.2, the difference between flying courses tends to be large around a sharp curve in a meandering flight. Therefore, it would be effective to allocate check points at both ends of the meandering flying courses and around sharp curves, in addition to the ordinary GCPs. This helps to detect areas that may have lower accuracy. If we find an area with lower accuracy, we can adjust the courses by using the check points as additional GCPs. However, this means that more GCPs will be required in a more strongly meandering area.

Surveying cost: In the preliminary experiment, the ALS by a straight-line flight had 10 flying courses with a total length of 90 km, while the ALS by a meandering flight had 3 meandering flying courses with a total flight length of 50 km. Since the flying courses became shorter, the data acquisition time was also reduced. The straight-line flight took 52 min for data acquisition, whereas the meandering flight took 34 min, a reduction to approximately two thirds of the straight-line time. The reduction in the number of flying courses also reduces the number of turns before entering a flying course, which is effective in shortening the data acquisition time.
Even though the flying speed of the meandering flight (30 kt) was slower than that of the straight-line flight (45 kt), the data acquisition time became shorter. Therefore, meandering-flight data collection is efficient and can be useful in cases with restricted data acquisition time. Figure 8 shows the actual cost breakdown for each workflow of ordinary ALS surveying. Since data acquisition accounts for 69% of the total cost, the reduction of data acquisition cost by a meandering flight would be expected to reduce the total cost.

Outline of the practical experiment
We conducted a practical experiment to investigate the feasibility of adopting ALS by a meandering flight for Japanese public surveying. The practical experiment was conducted at the middle reaches of the Ashida River, a Japanese Class A river, in Fuchu City, Hiroshima Prefecture. Figure 9 shows the target area. The Ashida River is surrounded by mountains in a gentle mountainous area. We used a Leica Chiroptera II as the sensor and an Aerospatiale AS350 as the platform, in the same way as in the preliminary experiment. Figure 10 shows the flight plans of both flights, and Table 6 shows the LiDAR surveying specifications in the practical experiment. Based on the considerations from the results of the preliminary experiment mentioned in Section 2.3, the flight was divided into three flying courses at the places where sharp curves occur, and measurement was conducted along them.

Results and discussion
We set up four GCPs for adjustment of the obtained point cloud at both ends of each flying course, in the same way as in the preliminary experiment. Table 7 shows the elevation accuracy of the adjustment and indicates that the accuracy of ALS by a meandering flight would be nearly equal to that by a straight-line flight. Furthermore, Table 7 indicates that ALS by a meandering flight would be suitable for adoption in Japanese public surveying. Moreover, although deviation of the actual trajectory from the planned flying course and uneven observed point density were found in the preliminary experiment, these faults were improved by dividing the flying courses, as Figure 11 and Figure 12 show.

CONCLUSION
We conducted the preliminary experiment at the upper reaches of the Tama River and the practical experiment at the middle reaches of the Ashida River in order to develop a new efficient ALS method by a meandering flight. The former was intended to investigate the possibility of ALS by a meandering flight, and the latter the feasibility of adopting ALS by a meandering flight in Japanese public surveying. Based on the results of the preliminary experiment, we conclude that a meandering flight would be able to be adopted for Japanese public surveying by using the methods we developed. As for measurement quality, the experiment results indicate that there would be some differences between flying courses that cannot be completely removed by adjustment in ALS by a meandering flight. However, the experimental results show that the accuracy of ALS by a meandering flight would be nearly equal to that by a straight-line flight, and would satisfy the requirements of Japanese public surveying.
As for measuring cost, ALS by a meandering flight requires more work in GNSS surveying of GCPs and in adjustment between flying courses than ALS by a straight-line flight. On the other hand, ALS by a meandering flight requires significantly less work in flight planning and data acquisition. From the point of view of total cost, we consider that ALS by a meandering flight would be more efficient overall than ALS by a straight-line flight. We summarized the characteristics of ALS by a meandering flight based on the experiment results; Table 8 shows this summary in comparison with ALS by a straight-line flight. Japan has many rivers meandering in narrow, steep-walled valleys. ALS by a straight-line flight would be unable to obtain sufficient data for disaster prevention in the surroundings of such meandering rivers. We expect that ALS by a meandering flight is a promising measure for disaster prevention in Japan. We are going to establish a more efficient and more accurate ALS by a meandering flight by adopting new methods dedicated to this approach.
The perceived impact of location privacy: a web-based survey of public health perspectives and requirements in the UK and Canada

Background
The "place-consciousness" of public health professionals is on the rise as spatial analyses and Geographic Information Systems (GIS) are rapidly becoming key components of their toolbox. However, "place" is most useful at its most precise, granular scale – which increases identification risks, thereby clashing with privacy issues. This paper describes the views and requirements of public health professionals in Canada and the UK on privacy issues and spatial data, as collected through a web-based survey.

Methods
Perceptions of the impact of privacy were collected through a web-based survey administered between November 2006 and January 2007. The survey targeted government, non-government and academic GIS labs and research groups involved in public health, as well as public health units (Canada), ministries, and observatories (UK). Potential participants were invited to participate through personally addressed, standardised emails.

Results
Of 112 invitees in Canada and 75 in the UK, 66 and 28 participated in the survey, respectively. The completion proportion for Canada was 91%, and 86% for the UK. No response differences were observed between the two countries. Ninety-three percent of participants indicated a requirement for personally identifiable data (PID) in their public health activities, including geographic information. Privacy was identified as an obstacle to public health practice by 71% of respondents. The overall self-rated median score for knowledge of privacy legislation and policies was 7 out of 10. Those who rated their knowledge of privacy as high (at the median or above) also rated it significantly more severe as an obstacle to research (P < 0.001). The most critical cause cited by participants in both countries was bureaucracy.

Conclusion
The clash between PID requirements – including granular geography – and limitations imposed by privacy and its associated bureaucracy requires immediate attention and solutions, particularly given the increasing utilisation of GIS in public health. Solutions include harmonisation of privacy legislation with public health requirements, bureaucratic simplification, increased multidisciplinary discourse, education, and development of toolsets, algorithms and guidelines for using and reporting on disaggregate data.

Background
Although "place" has been coined one of the three pillars of epidemiological data, only relatively recently has it garnered significant attention in the public health field, as Geographic Information Systems (GIS) have increasingly become more affordable, accessible, and intuitive. Indeed, the public health community's "place-consciousness" is on the rise as spatial analyses and GIS, now defined as part of the medical and health literature [1][2][3], are rapidly becoming key components of the public health professional's toolbox [4]. Privacy, an evolving "principle as old as the common law" [5], has been cited as an issue in a variety of public health events, reports, and media releases [6][7][8][9][10][11]. So much so, in fact, that one sometimes cannot help but wonder whether privacy is, indeed, the enemy of public health [12], and whether they could ever peacefully co-exist [13]. A distinction should here be made between the related concepts of privacy, confidentiality, and security within the context of the current discussion.
Privacy is attributable to the individual to whom identifiable information pertains, and refers to that individual's right to control such information, thereby freeing the individual from uninvited intrusion and identification. Confidentiality obligates others who have been entrusted with such information to respect the individual's privacy, and is therefore attributable to third parties; a breach of confidentiality violates the privacy of the individual because the individual has had no control over the release of the data. Finally, security refers to tools and methods used to safeguard confidentiality and privacy [14,15]. This research deals specifically with privacy issues as regulated and defined by legislation and ethical guidelines surrounding consent. From within this context, an individual's privacy is not deemed to have been violated if data shared in the absence of consent cannot be used to identify the individual. Exception clauses generally exist in legislation, allowing authorities to release personally identifiable data under various circumstances – such as where it is deemed to be in the best interest of society or where it is impractical to obtain consent. Examples include Section 60 of the UK's Health and Social Care Act 2001 [16], and Sections 8 and 7 of Canada's Privacy Act [17] and Personal Information Protection and Electronic Documents Act [18], respectively. While an analysis of privacy legislation as it pertains to health data and the concept of "place" is beyond the scope of this paper, suffice it to say that such clauses are often ambiguous and subjective, particularly when combined with vague definitions of "sensitive personal information" and of the scale at which geographic data becomes "identifiable". The concept of place, for example, is not explicitly specified as "sensitive personal data" in the UK's Data Protection Act 1998 [19], nor in the generic EU Data Protection Directive of 1995 [20] (though it is explicitly mentioned in various telecommunications directives), but postcodes are specifically mentioned in a 2005 NHS data protection and medical research POSTnote [21]. In Canada's Privacy Act [17], "address" is specifically listed as "personal information", while in the Personal Information Protection and Electronic Documents Act [18], it is not (though implied). Such ambiguities deter the sharing of data, causing organisations and authorities to err on the side of caution and not release identifying information [22], including spatial data. It is no surprise, therefore, that the increasing popularity of "place" in public health has further exacerbated the public health research-privacy debate. Traditional health-data anonymisation techniques, such as pseudonymisation and aggregation, cannot be applied to spatial data without significantly altering or destroying the spatial relationships under investigation [23][24][25][26], and hence the very reason for which they are to be used in the first place. The problem with "place" is that it is most useful at its most precise, granular scale [15,23]. Yet with increasing spatial precision and accuracy comes a corresponding increase in the risk of identification, and therefore a breach of privacy [15]. This becomes particularly troublesome when the spatial data is linked to health, social or demographic data.
The development of methods by which to mitigate these risks continues to be an active area of research, but thus far, proposed solutions have limitations, risks and trade-offs, and lack guidelines on their appropriate use. Consequently, the acquisition of geographic data tends to be either limited, or at a sub-optimal or unusable scale. Privacy issues affect not only data acquisition and use for analysis, but also the visualisation and dissemination of results. Researchers have been able to "reverse engineer" maps, for example, to successfully re-identify individuals [27][28][29]. While the debate between the fields of privacy and public health has raged on for decades [5] despite their interdependence [14], tension continues to rise in concert with the rampant growth of information technology and e-Health. From a health research perspective, both Canada and the UK place strong emphasis on evidence-based public health policies and services [6], yet in both countries this seems to be hampered by privacy issues. While some argue that this debate is the product of a lack of understanding of the legislation and regulations by the public health community [14,30,31], there is little in the way of formal collection and synthesis of the corresponding views and perspectives of those directly involved in public health activities. This paper describes the views and requirements of public health professionals in Canada and the UK on privacy issues and spatial data, as collected through a web-based survey. Given that Canada's health care and public health systems were both largely modeled after those of the UK [6,32,33], that each continues to be studied by the other for improvements and lessons learned [6,34], and that privacy issues for public health have been cited in both, it is expected that survey responses in the two countries will also be similar.

Development & Content
The survey was first developed on paper in the summer of 2006, and piloted with select public health individuals in Canada and the UK. It was then submitted for privacy assessment by the Access to Information and Privacy Branch of Health Canada, and for ethics review and approval by the Health Canada Research Ethics Board and the Southwest Multicentre Research Ethics Committee in the UK. Throughout the process it was clear that the survey would be developed as a closed web-based survey, running between November 2006 and January 2007. The final paper versions of the survey are provided (see Additional files 1, 2, 3) and can also be found on the research website [35]. The paper survey was then converted to a web version by the ALPHA Project [36] team at the Public Health Agency of Canada (PHAC), and piloted by the author and several colleagues within the PHAC. The survey launch was delayed by two weeks, with only some of the concerns identified during the pilot being implemented, due to limitations of the ALPHA architecture. Issues and limitations with the design of the web-based survey are addressed in a later section. Three versions of the survey were developed and launched: Canada-English, Canada-French and UK-English. A summary of the survey's structure and contents is given in Table 1.

Target
The survey targeted government, non-government and academic GIS labs and research groups involved in public health, as well as public health units (Canada), ministries, and observatories (UK).
Potential participants were identified through web searches of public health sites, mailing databases, personal contacts, referrals/word of mouth, and postings on the research website [35], a PHAC Public Health Portal website [37], and the NHS Public Health Informatics Community website [38].

Participation
Potential participants were invited to participate through a standardised but personally addressed email outlining the reason for the invitation, the mechanisms by which their contact information was retrieved, a brief summary of the research and survey, a description of the data handling methods, an estimate of the time it would take to complete the survey (approximately 20 minutes), a unique user ID and password, the URL of the survey site, the URL of the research website, and the principal investigator's contact information. The survey website had no other content. In order to participate, invitees were required to (1) successfully log in, and (2) consent to participation. Only the most recent responses for any given user ID were collected, ensuring only one survey was completed per participant. The consent screen outlined the voluntary and anonymous nature of the survey, indicated the approximate time it would take to complete the survey, the risks and benefits to the participants, the intellectual property and ownership of all data collected, and the protection of any personal data provided under Canadian and UK law. Failure to successfully complete either of these two requirements resulted in termination of the survey. After consenting, participants were given the option to select their country and language of choice, and the relevant survey then commenced. All questions included a "Skip" option. Progress through the survey required the selection of a response for each question, and participants could terminate the survey at any time or complete it over multiple sessions, at their convenience. Questions were not randomised or alternated, but adaptive questioning was utilised. Question types varied, and included single-choice, multiple-choice, scale, and free-form response questions, thereby collecting both quantitative and qualitative responses. There was typically only one question per screen, with multiple potential responses, the maximum number of which was 17. Depending on the responses of the participants, the survey was distributed over approximately 40 screens. Key questions addressed by the survey included the following:

- Is there a requirement for personally identifiable data, including spatial data?
- What spatial resolution is ideal for public health research?
- Is privacy perceived to be a significant obstacle to public health practice?
- How knowledgeable do public health professionals consider themselves on privacy?
- What is the most critical obstacle to the access and use of personally identifiable data?
- What are the views of the public health community on public awareness and perceptions?
- Which is preferred: raw, case-level data, or aggregated, anonymised data?

Collected responses were analysed using basic descriptive statistics and non-parametric methods in SAS 9.2. The Checklist for Reporting Results of Internet E-Surveys (CHERRIES) [39] was used as a guideline in the reporting of the web-based survey methodology. A sketch of the main non-parametric tests is given below.
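For illustration, the non-parametric tests reported in the Results can be reproduced in outline as follows (a sketch on synthetic data, not the survey responses, and in Python rather than SAS; the variable names are hypothetical):

```python
# Sketch of the reported analyses: a one-sided Wilcoxon rank-sum test
# (equivalently, Mann-Whitney U) comparing obstacle-severity ratings between
# "high" and "low" self-rated-knowledge groups (median split at 7), and a
# Spearman rank correlation between severity and knowledge scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
knowledge = rng.integers(1, 11, size=83)                       # self-rated, 1-10
severity = np.clip(knowledge // 2 + rng.integers(1, 7, size=83), 1, 10)

high = severity[knowledge >= 7]   # "high" knowledge group
low = severity[knowledge < 7]     # "low" knowledge group
print(stats.mannwhitneyu(high, low, alternative="greater"))    # one-sided test
print(stats.spearmanr(severity, knowledge))                    # rank correlation
```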
Results
Of 112 invitees in Canada and 75 in the UK, 66 (59%) and 28 (37%) participated in the survey, respectively. The completion proportion for Canada was 91%, and 86% for the UK. Of the Canadian participants, three responded to the French version. There were no differences in the distribution of roles reported by participants in the two countries, with most participants (49% in Canada; 64% in the UK) identifying their main role as falling within the research and analysis domain (Table 2; one UK participant who identified a main role in research and analysis declined a response to the question on scope). Participant expertise varied, and included aboriginal health (Canada only), chronic diseases, paediatric public health, infectious diseases, dental public health, emergency preparedness and response, environmental public health, ethics and public health law, food and nutrition, health services, injuries and disabilities, mental health and substance misuse, social determinants of health, surveillance, and education. No response differences were observed between the two countries on each of the key questions, and the overall, combined results are therefore reported. A summary of the findings is given in Table 3.

Is there a requirement for personally identifiable data, including spatial data?
Almost all participants identified a need for personally identifiable data (PID) in their roles; only one Canadian participant indicated no need for PID. Five Canadian participants and one UK participant chose not to answer the question. In total, 93% of participants indicated a requirement for PID in their public health activities.

What spatial resolution is ideal for public health research?
All participants identified geographic location of health data as a requirement for their roles or organisation. When asked "...what level of geography would you ideally like to visualise your data and/or conduct spatial analyses," 69% of respondents identified "latitude and longitude, exact street address, or exact household."

Is privacy perceived to be a significant obstacle to public health practice? AND How knowledgeable do public health professionals consider themselves on privacy?
When asked "Are you or have you been restricted in your use of GIS for any public health activity because of privacy concerns (i.e. map or data might identify an individual or community)?", 79% of respondents marked "YES". Of 83 participants who responded to the question "In your opinion, do current restrictions to PID pose an obstacle to any aspects of public health practice?", 59 (71%) agreed, rating the obstacle severity at 6 or higher. Of these 59, 36 (61%) rated their knowledge of privacy and confidentiality issues/legislation at 6 out of 10 or higher, with a mean score of 7.5 (std = 1.0) and a median score of 7. Using the median, respondents with a self-rated knowledge score lower than 7 were classified as "low" on knowledge (47%), while those at or above the median score were classified as "high" (53%). Those classified as high were more likely to rate privacy as an obstacle (one-sided Wilcoxon exact P < 0.001). A trend was evident for the overall correlation between restriction score and self-rated privacy knowledge score (Spearman r = 0.22, P = 0.057).

What is the most critical obstacle to the access and use of personally identifiable data?
The most common obstacles were reported as bureaucracy and legislation, by 33% and 25% of the participants, respectively. Other responses included public disapproval/paranoia (15%), practitioner paranoia (7%), lack of knowledge (6%), a combination of these factors (4%), other (2%), and none (skipped question, 7%).

What are the views of the public health community on public awareness and perceptions?
Fifty seven percent of participants felt that under 10% of the public population is aware of the impact of restricted access to PID on public health practice; 74% felt it to be under 20%, and 84% felt the proportion to be less than 30% (cumulative frequencies). Most identified education and awareness (through media, reports, case studies, scenarios, etc) as the best methods to increase this proportion. When then asked what proportion of the public they felt would allow the use of their PID if they were educated on the usefulness of such data to public health practice, 67% said 50% or higher.

Table 3. Summary of findings†
1. Is there a requirement for personally identifiable data, including spatial data? Yes (93%)
2. What spatial resolution is ideal for public health research? Lat/Long or address (69%)
3. Is privacy perceived to be a significant obstacle to public health practice? Yes (71%)
4. How knowledgeable do public health professionals consider themselves on privacy? High knowledge* (53%)
5. What is the most critical obstacle to the access and use of personally identifiable data? Bureaucracy (33%); legislation (25%)
6. What are the views of the public health community on public awareness and perceptions? Less than 30% of the public is aware (84%)
7. Which is preferred: raw, case-level data, or aggregated, anonymised data? Raw, case-level data (66%)
† Numbers in parentheses are the percent of participants who responded as described.
* Participants rating their knowledge as high were also more likely to rate privacy as a more severe obstacle (P < 0.001).
(Footnote to Table 2: One UK participant who identified a main role in research and analysis declined a response to the question on scope.)

Which is preferred: raw, case-level data, or aggregated, anonymised data?
More respondents identified a preference for having access to granular-level rather than aggregate data (53 vs. 27; 66% of those responding to this question).

Discussion
This survey and user-needs assessment on privacy and public health shows a definite requirement by public health professionals -in various fields and positions in both Canada and the UK -for personally identifiable data, including spatial data. The requirement for this spatial data is at its most granular level -latitude and longitude, or exact street address -which necessarily compromises patient privacy. It is not surprising, therefore, that public health professionals perceive privacy to be a significant obstacle to public health practice. There are those who would argue that this perception is the product of a lack of understanding of the legislation and regulations by the public health community. The results of this research, however, indicate the contrary. Not only did public health professionals in both countries generally rate themselves high on knowledge of privacy legislation and related issues, but those with the highest self-rated scores also tended to rate privacy as more of an obstacle. That these self-ratings of knowledge are not representative of actual knowledge remains possible. Participants perceived the most critical obstacles to sharing or acquisition of health data with PID to be bureaucracy, followed by legislation. Bureaucracy surrounding health research in both Canada and the UK generally revolves around data ownership, academic competitiveness, ethics review boards or committees, and in particular, requirements for informed consent, even if they compromise public health, or are not in the best interests of the patients involved [40][41][42]. Since seeking subject consent with every new hypothesis to be tested or model to be developed is an impossible task, some have suggested that thought be given to "blanket" consent.
At the Canadian Institutes for Health Research (CIHI) 2003 workshop on the legal and ethical issues facing the Canadian Lifelong Health Initiative [43], participants spent some time discussing such issues, only to emphasise the importance of the establishment of ethical governance and structure; essentially, more necessary bureaucracy. Interestingly, while the debate continues, a relatively recent survey found that most of the British public did not consider the use of their National Cancer Registry PID for public health research and surveillance to be an invasion of their privacy [30]. While the ethics of blanket consent are not discussed in this study, it is nonetheless offered as a potential solution in light of the requirements of the public health community. This does not, however, address other issues of data ownership and control that contribute to the bureaucratic debate. While many individuals recognised the importance of privacy legislation, participants generally indicated a concern and, in some cases, first-hand frustration that legislation unduly restricts public health activities, compromising surveillance and research. Many phrases were used by respondents to describe the implications of privacy legislation on public health, including, among others: "increasingly restrictive;" "serious;" "incomplete;" "fuzzy;" "does more harm than good;" "two-edged sword;" "causes challenges;" "delays and restricts access [to data];" " [is a] hindrance to the improvement and efficiency of public health;" "disappointing;" "frustrating;" "difficult to interpret;" "very worrisome;" "disadvantages the public interest;" "not properly understood;" "overprotective;" "limiting;" "hinders knowledge;" and "used as an excuse not to share data." A large proportion of the public health community represented in this sample clearly expressed major concerns with the impact of privacy legislation on their work -both in Canada, and in the UK -in spite of having a good understanding and acceptance of its purpose and necessity. It is also important for legislation to be written in an unambiguous manner that is clearly understood by both public health professionals and the general public [4]. Public health professionals are largely of the opinion that the general public's level of awareness of the impact of restricted access to PID on public health practice is extremely low. Surveys by the Office of the Privacy Commissioner in Canada [44] repeatedly show that the majority of Canadians surveyed (up to 80%) place an extremely high level of importance on strong laws to protect personal information, particularly health information, and that they feel that the level of protection of their personal information has declined over the past ten years. Yet interestingly, only 20% are clearly aware of existing laws, and even fewer (12%) are aware of their rights around the collection, use and disclosure of this information. The "need to raise Canadians' awareness about the current laws in place and what their rights are" [44] must therefore be coupled with the corresponding need to address this from within the context of public health requirements. Educating the public, therefore, as well as practitioners, data users, policy makers and politicians, was not surprisingly identified by participants as a potential solution. Participants put emphasis on the utilisation of the media to educate and increase awareness, as well as demonstrating the impact of a lack of data, and the benefits of its use when available. 
Demonstration of the benefits to the individual (e.g. streamlining of the system, not being asked for personal information with every visit to a new clinician, improved dissemination of public health information and intelligence directly to the public) was also offered as a solution, and summed up by one participant in the phrase "seeing is believing". It is worth noting, however, that a number of participants displayed a certain level of pessimism that until a crisis or extreme event occurs, no amount of education or awareness-increasing activities would make a difference. Public health professionals generally prefer disaggregate, case-level data, but access to this data is an issue. The limitations imposed by privacy on public health have resulted in the development of a variety of techniques for data anonymisation [15,23,45]. However, all unavoidably have their issues, risks and limitations, and there is currently no framework to guide public health professionals in their appropriate use and interpretation. Generalisability Although the findings of this paper may be generalisable to public health professionals in Canada and the UK, issues of privacy and public health are not unique to these countries. Privacy is defined as a fundamental human right in the legislation of many countries, and the concept is enshrined in Article 12 of the United Nations' Universal Declaration of Human Rights [46] and Article 8 of the European Convention on Human Rights [47]. Similarly, public health is an international discipline; both diseases and information are ubiquitous, and neither is constrained by political boundaries and oceans. The increasing requirement for spatial data and its inherent clash with privacy legislation therefore extend beyond the UK and Canadian contexts, and the results, requirements and conclusions drawn from this research can be generalised to wherever such a clash exists. The implementation of solutions by national governments may be further exacerbated by issues of social political trust. General public distrust in government initiatives and motives, such as in most countries of the European Union, Canada, and the United States [48,49], complicates changes that may be perceived by the public to be intrusions of privacy. Such issues may currently be less of a concern in countries such as Finland, Sweden, Denmark, and the Netherlands, where social political trust, though declining, has traditionally tended to be much higher [50][51][52][53]. However, even in such nations where privacy and health have traditionally not clashed, increased international data sharing requirements and spatial data implications may pose unanticipated and challenging obstacles. Limitations No comprehensive lists of public health and health GIS professionals were found in either country, so it was not possible to invite a random sample. In addition, the response rate in the UK was relatively low, and it is therefore uncertain that the sample is representative of all public health professionals in the two countries. However, responses between the two countries were consistent, with no significant differences. Since knowledge of privacy legislation and policies was based on self-rated scores, a thorough review and assessment of privacy legislation as it pertains to public health practice is required in both Canada and the UK to validate the findings of this survey. A number of limitations and issues pertaining to the websurvey were identified. 
Most notable of these was the presence of a scroll bar in sections II and III which most participants missed, thereby eliminating the ability to capture items in reference to "place", such as usefulness. However, these items were also captured more broadly in other sections of the survey. Other issues involved the inability of the architecture to support various designs and types of questions that would have facilitated the completion of the survey, and shortened the length of time required. Participants also noted frustration with the navigation and structure of the survey pages. A document outlining these issues and others was submitted to the ALPHA team after the initial pilot for future enhancements to the architecture. Conclusion It is clear that privacy is perceived to be a major obstacle and issue for public health -the literature illustrates it, and the current study provides both quantitative and qualitative evidence. Together, these provide a more holistic portrayal of public health community viewpoints, and can be used to educate the public, and as evidence for decision makers to implement changes in policies and legislation. The clash between a requirement for personally identifiable data -including exact, individual location -by public health professionals, and the limitations imposed by privacy and its associated bureaucracy, must be addressed and appropriate solutions developed, particularly given the increasing utilisation of geographic information systems in public health and the imminent completion of comprehensive electronic health systems. Privacy legislation is critical for the protection of this fundamental human right, and to prevent the abuse of personal information, particularly in the health field. However, the legislation must be harmonised with the requirements of public health practice if the health of societies and populations is to be maintained and improved. Since health is not limited by political boundaries, this must be pursued at an international level, and solutions must address these perceptions in the public health community, simplify the bureaucratic process, promote multidisciplinary discussions between legislators, bureaucrats and the public health community, educate communities, and develop and provide public health professionals with toolsets, algorithms and guidelines for using and reporting on disaggregate data. While the results of this study should inform and justify the development of techniques that better anonymise health data with minimal impact on its integrity and frameworks for implementing them, it seems fitting to echo the warning of Curtis et al: "...health and spatial scientists should be proactive and suggest a series of point level spatial confidentiality guidelines before governmental decisions are made which may be reactionary toward the threat of revealing confidential information, thereby imposing draconian limits on research using a GIS [27]."
Low-Complexity Geometry-Based MIMO Channel Simulation

The simulation of electromagnetic wave propagation in time-variant wideband multiple-input multiple-output mobile radio channels using a geometry-based channel model (GCM) is computationally expensive. Due to multipath propagation, a large number of complex exponentials must be evaluated and summed up. We present a low-complexity algorithm for the implementation of a GCM on a hardware channel simulator. Our algorithm takes advantage of the limited numerical precision of the channel simulator by using a truncated subspace representation of the channel transfer function based on multidimensional discrete prolate spheroidal (DPS) sequences. The DPS subspace representation offers two advantages. Firstly, only a small subspace dimension is required to achieve the numerical accuracy of the hardware channel simulator. Secondly, the computational complexity of the subspace representation is independent of the number of multipath components (MPCs). Moreover, we present an algorithm for the projection of each MPC onto the DPS subspace in O(1) operations. Thus the computational complexity of the DPS subspace algorithm compared to a conventional implementation is reduced by more than one order of magnitude on a hardware channel simulator with 14-bit precision.

INTRODUCTION
In mobile radio channels, electromagnetic waves propagate from the transmitter to the receiver via multiple paths. A geometry-based channel model (GCM) assumes that every multipath component (MPC) can be modeled as a plane wave, mathematically represented by a complex exponential function. The computer simulation of time-variant wideband multiple-input multiple-output (MIMO) channels based on a GCM is computationally expensive, since a large number of complex exponential functions must be evaluated and summed up.
This paper presents a novel low-complexity algorithm for the computation of a GCM on hardware channel simulators. Hardware channel simulators [1][2][3][4][5] allow one to simulate mobile radio channels in real time. They consist of a powerful baseband signal processing unit and radio frequency frontends for input and output. In the baseband processing unit, two basic operations are performed. Firstly, the channel impulse response is calculated according to the GCM. Secondly, the transmit signal is convolved with the channel impulse response. The processing power of the baseband unit limits the number of MPCs that can be calculated and hence the model accuracy. We note that the accuracy of the channel simulator is limited by the arithmetic precision of the baseband unit as well as the resolution of the analog/digital converters. On the ARC SmartSim channel simulator [2], for example, the baseband processing hardware uses 16-bit fixed-point processors and an analog/digital converter with 14-bit precision. This corresponds to a maximum achievable accuracy of E_max = 2^−13.
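To make the 14-bit figure concrete, the following small NumPy sketch quantizes values in [−1, 1) with a step of 2^−13 and checks the resulting error level. The grid and the round-to-nearest convention are illustrative assumptions, not taken from the SmartSim documentation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100_000)        # arbitrary baseband samples in [-1, 1)

step = 2.0 ** -13                      # LSB of a 14-bit grid over [-1, 1)
x_q = np.round(x / step) * step        # round-to-nearest quantization

print(np.abs(x - x_q).max())           # worst-case error, about step/2 = 2**-14
print(np.mean((x - x_q) ** 2))         # mean squared error, about step**2 / 12
# any simulation error well below this quantization floor is invisible on the
# hardware, which is why E_max = 2**-13 serves as the target accuracy below
```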
The new simulation algorithm presented in this paper takes advantage of the limited numerical accuracy of hardware channel simulators by using a truncated basis expansion of the channel transfer function.The basis expansion is based on the fact that wireless fading channels are highly oversampled.Index-limited snapshots of the sampled fading process span a subspace of small dimension.The same subspace is also spanned by index-limited discrete prolate spheroidal (DPS) sequences [6].In this paper, we show that the projection of the channel transfer function onto the DPS subspace can be calculated approximately but very efficiently in O(1) operations from the MPC parameters given by the model.Furthermore, the subspace representation is independent of the number of MPCs.Thus, in the hardware simulation of wireless communication channels, the number of paths can be increased and more realistic models can be computed.By adjusting the dimension of the subspace, the approximation error can be made smaller than the numerical precision given by the hardware, allowing one to trade accuracy for efficiency.Using multidimensional DPS sequences, the DPS subspace representation can also be extended to simulate time-variant wideband MIMO channel models. One particular application of the new algorithm is the simulation of Rayleigh fading processes using Clarke's [7] channel model.Clarke's model for time-variant frequencyflat single-input single-output (SISO) channels assumes that the angles of arrival (AoAs) of the MPCs are uniformly distributed.Jakes [8] proposed a simplified version of this model by assuming that the number of MPCs is a multiple of four and that the AoAs are spaced equidistantly.Jakes' model reduces the computational complexity of Clarke's model by a factor of four by exploiting the symmetry of the AoA distribution.However, the second-order statistics of Jakes' simplification do not match the ones of Clarke's model [9] and Jakes' model is not wide-sense stationary [10].Attempts to improve the second-order statistics while keeping the reduced complexity of Jakes' model are reported in [6,[9][10][11][12][13][14].However, due to the equidistant spacing of the AoAs, none of these models achieves all the desirable statistical properties of Clarke's reference model [15].Our new approach presented in this paper allows us to reduce the complexity of Clarke's original model by more than an order of magnitude without imposing any restrictions on the AoAs. Contributions of the paper (i) We apply the DPS subspace representation to derive a low-complexity algorithm for the computation of the GCM.(ii) We introduce approximate DPS wave functions to calculate the projection onto the subspace in O(1) operations.(iii) We provide a detailed error and complexity analysis that allows us to trade efficiency for accuracy.(iv) We extend the DPS subspace projection to multiple dimensions and describe a novel way to calculate multidimensional DPS sequences using the Kronecker product formalism. Notation.Let Z, R, and C denote the set of integers, real and complex numbers, respectively.Vectors are denoted by v and matrices by V. Their elements are denoted by v i and V i,l , respectively.Transposition of a vector or a matrix is indicated by • T and conjugate transposition by [16]. 
For an N-dimensional, finite index set I ⊂ Z N , the elements of the sequence v m , m ∈ I, may be collected in a vector v.For a parameterizable function f , { f } denotes the family of functions over the whole parameter space.The absolute value, the phase, the real part, and the imaginary part of a complex variable a are denoted by |a|, Φ(a), a, and a, respectively.E {•} denotes the expectation operator. Organization of the paper In Section 2, a subspace representation of time-variant frequency-flat SISO channels based on one-dimensional DPS sequences is derived.The main result of the paper, that is, the low-complexity calculation of the basis coefficients of the DPS subspace representation, is given in Section 3. Section 4 extends the DPS subspace representation to higher dimensions, enabling the computer simulation of wideband MIMO channels.A summary and conclusions are given in Section 5. Appendix A proposes a novel way to calculate the multidimensional DPS sequences utilizing the Kronecker product.Appendix B gives a detailed proof of a central theorem.A list of symbols is defined in Appendix C. Time-variant frequency-flat SISO geometry-based channel model We start deriving the DPS subspace representation for the generic GCM for time-variant frequency-flat SISO channels depicted in Figure 1.The GCM assumes that the channel transfer function h(t) can be written as a superposition of P MPCs: where each MPC is characterized by its complex weight η p , which embodies the gain and the phase shift, as well as its Florian Kaltenberger et al.Doppler shift ω p .With 1/T S denoting the sampling rate of the system, the sampled channel transfer function can be written as where ν p = ω p T S is the normalized Doppler shift of the pth MPC.We refer to (2) as the sum of complex exponentials (SoCE) algorithm for computing the channel transfer function h m .We assume that the normalized Doppler shifts ν p are bounded by the maximum (one-sided) normalized Doppler bandwidth ν Dmax , which is given by the maximum speed v max of the transmitter, the carrier frequency f C , the speed of light c, and the sampling rate 1/T S , In typical wireless communication systems, the maximum normalized Doppler bandwidth 2ν Dmax is much smaller than the available normalized channel bandwidth (see Figure 2): Thus, the channel transfer function ( 1) is highly oversampled. Clarke's model [17] is a special case of (2) and assumes that the AoAs ψ p of the impinging MPCs are distributed uniformly on the interval [−π, π) and that E {|η p | 2 } = 1/P.The normalized Doppler shift ν p of the pth MPC is related to the AoA ψ p by ν p = ν Dmax cos(ψ p ). Jakes' model [8] and its variants [9][10][11][12][13][14] assume that the AoAs ψ p are spaced equidistantly with some (random) offset ϑ: If P is a multiple of four, symmetries can be utilized and only P/4 sinusoids have to be evaluated [8].However, the second-order statistics of such models do not match the ones of Clarke's original model [9].In this paper, a truncated subspace representation is used to reduce the complexity of the GCM (2).The subspace representation does not require special assumptions on the AoAs ψ p .It is based on DPS sequences, which are introduced in the following section. 
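As a reference point for what follows, a minimal NumPy sketch of the SoCE evaluation of h_m under Clarke's assumptions is given below. The block length and Doppler bandwidth are the illustrative values used later in the paper's time-domain example; the random draws are only placeholders.

```python
import numpy as np

def soce_clarke(P, M, nu_dmax, seed=0):
    """Sum-of-complex-exponentials (SoCE) evaluation of h_m, m = 0, ..., M-1,
    for Clarke's model: uniform AoAs, nu_p = nu_dmax*cos(psi_p), E{|eta_p|^2} = 1/P."""
    rng = np.random.default_rng(seed)
    psi = rng.uniform(-np.pi, np.pi, P)                  # angles of arrival
    nu = nu_dmax * np.cos(psi)                           # normalized Doppler shifts
    eta = (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2 * P)
    m = np.arange(M)
    # h_m = sum_p eta_p * exp(2*pi*j*nu_p*m): M*P complex exponentials and multiplications
    return np.exp(2j * np.pi * np.outer(m, nu)) @ eta

h = soce_clarke(P=30, M=2560, nu_dmax=4.82e-5)
```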
DPS sequences In this section, one-dimensional DPS sequences are reviewed.They were introduced in 1978 by Slepian [17].Their applications include spectrum estimation [18], approximation, and prediction of band-limited signals [15,17] as well as channel estimation in wireless communication systems [6].DPS sequences can be generalized to multiple dimensions [19].Multidimensional DPS sequences are reviewed in Section 4.2, where they are used for wideband MIMO channel simulation. Definition 1.The one-dimensional discrete prolate spheroidal (DPS) sequences v (d) m (W, I) with band-limit W = [−ν Dmax , ν Dmax ] and concentration region I = {M 0 , . . ., M 0 + M − 1} are defined as the real solutions of They are sorted such that their eigenvalues λ d (W, I) are in descending order: To ease notation, we drop the explicit dependence of v (d) m (W, I) on W and I when it is clear from the context.Further, we define the DPS vector v (d) (W, I) ∈ C M as the DPS sequence v (d) m (W, I) index-limited to I. The DPS vectors v (d) (W, I) are also eigenvectors of the M × M matrix K with elements K m,n = sin(2πν Dmax (m − n))/ π(n − m).The eigenvalues of this matrix decay exponentially and thus render numerical calculation difficult.Fortunately, there exists a tridiagonal matrix commuting with K, which enables fast and numerically stable calculation of DPS sequences [17,20].Figures 3 and 4 illustrate one-dimensional DPS sequences and their eigenvalues, respectively.Some properties of DPS sequences are summarized in the following theorem. Theorem 1. (1) The sequences v (d) m (W, I) are band-limited to W. (2) The eigenvalue λ d (W, I) of the DPS sequence v (d) m (W, I) denotes the energy concentration of the sequence within I: (5) Every band-limited sequence h m can be decomposed uniquely as h m = h m + g m , where h m is a linear combination of DPS sequences v (d) m (W, I) for some I and g m = 0 for all m ∈ I. m , v (1) m , and v (2) m for M 0 = 0, M = 256, and Mν Dmax = 2. Proof.See Slepian [17]. DPS subspace representation The time-variant fading process {h m } given by the model in (2) obtained by index limiting h m to I can be represented as a linear combination of the DPS vectors Properties ( 2) and (3) of Theorem 1 show that the first D = 2ν Dmax M + 1 DPS sequences contain almost all of their energy in the index-set I. Therefore, the vectors {h} span a subspace with essential dimension [6] Due to (4), the time-variant fading process is highly oversampled.Thus the maximum number of subspace dimensions M is reduced by 2ν Dmax 1.In typical wireless communication systems, the essential subspace dimension D is in the order of two to five only.This fact is exploited in the following definition.Definition 2. Let h be a vector obtained by index limiting a band-limited process with band-limit W to the index set I. Further, collect the first D DPS vectors v (d) (W, I) in the matrix The DPS subspace representation of h with dimension D is defined as where α is the projection of the vector h onto the columns of V: For the purpose of channel simulation, it is possible to use D > D DPS vectors in order to increase the numerical accuracy of the subspace representation.The subspace dimension D has to be chosen such that the bias of the subspace representation is small compared to the machine precision of the underlying simulation hardware.This is illustrated in Section 3.2 by numerical examples. 
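The definitions above translate directly into a small numerical experiment. The sketch below builds the DPS vectors of Definition 1 by eigendecomposition of the sinc kernel (fine for moderate M; the tridiagonal commuting matrix cited above is the numerically stable route for long blocks, and scipy.signal.windows.dpss offers an equivalent off-the-shelf implementation), then forms the subspace representation of Definition 2 for a realization produced by the soce_clarke helper from the previous sketch.

```python
import numpy as np
from scipy.linalg import eigh

def dps_vectors(M, nu_dmax, D):
    """First D DPS vectors/eigenvalues for band-limit [-nu_dmax, nu_dmax], I = {0, ..., M-1}."""
    k = np.arange(M)[:, None] - np.arange(M)[None, :]
    # kernel K[m, n] = sin(2*pi*nu_dmax*(m-n)) / (pi*(m-n)); the m = n limit is 2*nu_dmax
    K = 2 * nu_dmax * np.sinc(2 * nu_dmax * k)
    lam, V = eigh(K)                                   # ascending eigenvalues
    return lam[::-1][:D], V[:, ::-1][:, :D]            # re-sorted in descending order

M, nu_dmax = 2560, 4.82e-5
h = soce_clarke(P=30, M=M, nu_dmax=nu_dmax)            # reference realization of (2)
lam, V = dps_vectors(M, nu_dmax, D=8)

for D in (1, 2, 4, 6, 8):
    alpha = V[:, :D].T @ h                             # projection (14); V is real, so V^H = V^T
    h_trunc = V[:, :D] @ alpha                         # truncated representation (13)
    print(D, np.mean(np.abs(h - h_trunc) ** 2))
# the error drops rapidly once D exceeds 2*nu_dmax*M + 1 (about 1.25 here); the
# paper's example uses D = 4 to reach the 14-bit target E_max^2 of roughly 1.5e-8
```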
In terms of complexity, the problem of computing the series (2) was reformulated into the problem of computing the basis coefficients α of the subspace representation (13).If they were computed directly using (14), the complexity of the problem would not be reduced.In the following section, we derive a novel low-complexity method to calculate the basis coefficients α approximately. Approximate calculation of the basis coefficients In this section, an approximate method to calculate the basis coefficients α in ( 13) with low complexity is presented.Until now we have only considered the time domain of the channel and assumed that the band limiting region W is symmetric around the origin.To make the methods in this section also applicable to the frequency domain and the spatial domains (cf.Section 4), we make the more general assumption that The projection of a single complex exponential vector e p = [e 2π jνpM0 , . . ., e 2π jνp(M0+M−1) ] T onto the basis functions v (d) (W, I) can be written as a function of the Doppler shift ν p , the band-limit region W, and the index set I, Since h can be written as the basis coefficients α (14) can be calculated by where γ p = [γ 0 (ν p ; W, I), . . ., γ D−1 (ν p ; W, I)] T denote the basis coefficients for a single MPC. To calculate the basis coefficients γ d (ν p ; W, I), we take advantage of the DPS wave functions U d ( f ; W, I).For the special case W 0 = 0 and M 0 = 0 the DPS wave functions are defined in [17].For the more general case, the DPS wave functions are defined as the eigenfunctions of They are normalized such that The DPS wave functions are closely related to the DPS sequences.It can be shown that the amplitude spectrum of a DPS sequence limited to m ∈ I is a scaled version of the associated DPS wave function (cf.[17, equation (26)]) Comparing ( 16) with (21) shows that the basis coefficients can be calculated according to The following definition and theorem show that U d (ν p ; W, I) can be approximately calculated from v (d) m (W, I) by a simple scaling and shifting operation [21]. m (W, I) be the DPS sequences with bandlimit region W = [W 0 − W max , W 0 + W max ] and index set I = {M 0 , . . ., M 0 + M − 1}.Further denote by λ d (W, I) the corresponding eigenvalues.For ν p ∈ W define the index m p by Approximate DPS wave functions are defined as where the sign is taken such that the following normalization holds: Theorem 2. Let ψ d (c, f ) be the prolate spheroidal wave functions [22].Let c > 0 be given and set In other words, both the approximate DPS wave functions as well as the DPS wave functions themselves converge to the prolate spheroidal wave functions. Proof.For W 0 = 0 and The general case follows by using the two identities Theorem 2 suggests that the approximate DPS wave functions can be used as an approximation to the DPS wave functions.Therefore, the basis coefficients ( 22) can be calculated approximately by The theorem does not indicate the quality of the approximation.It can only be deduced that the approximation improves as the bandwidth W max decreases, while the number of samples M = c/πW max increases.This fact is exploited in the following definition.Definition 4. 
Let h be a vector obtained by index limiting a band-limited process of the form (2) with band For a positive integer r-the resolution factor-define The approximate DPS subspace representation with dimension D and resolution factor r is given by whose approximate basis coefficients are Note that the DPS sequences are required in a higher resolution only for the calculation of the approximate basis coefficients.The resulting h D,r has the same sample rate for any choice of r. Bias of the subspace representation In this subsection, the square bias of the subspace representation bias 2 and the square bias of the approximate subspace representation are analyzed. For ease of notation, we assume again that W = [−ν Dmax , ν Dmax ], that is, we set W 0 = 0 and W max = ν Dmax .However, the results also hold for the general case (15).If the Doppler shifts ν p , p = 0, . . ., P − 1, are distributed independently and uniformly on W, the DPS subspace representation h coincides with the Karhunen-Loève transform of h [23] and it can be shown that bias 2 If the Doppler shifts ν p , p = 0, . . ., P − 1, are not distributed uniformly, (35) can still be used as an approximation for the square bias [21]. For the square bias of the approximate DPS subspace representation h D,r , no analytical results are available.However, for the minimum achievable square bias, we conjecture that bias 2 min,r = min This conjecture is substantiated by numerical Monte-Carlo simulations using the parameters from Table 1.The Doppler shifts ν p , p = 0, . . ., P − 1, are distributed independently and uniformly on W. The results are illustrated in Figure 5.It can be seen that the square bias of the subspace representation bias 2 h D decays with the subspace dimension.For D ≥ 2Mν Dmax + 1 = 2 this decay is even exponential.These two properties can also be seen directly from (35) and the exponential decay of the eigenvalues λ d (W, I).The square bias bias 2 h D,r of the approximate subspace representation is similar to bias 2 h D up to a certain subspace dimension.Thereafter, the square bias of the approximate subspace representation levels out at bias 2 min,r ≈ (2ν Dmax /r) 2 .Increasing the resolution factor pushes the levels further down. Let the maximal allowable square error of the simulation be denoted by E 2 max .Then, the approximate subspace representation can be used without loss of accuracy if D and r are chosen such that bias 2 Good approximations for D and r can be found by The first expression can be computed using (35).Using conjecture (36), the latter evaluates to Using a 14-bit fixed-point processor, the maximum achievable accuracy is E 2 max = (2 −13 ) 2 ≈ 1.5 × 10 −8 .For the example of Figure 5, where the maximum Doppler shift ν Dmax = 4.82 × 10 −5 and the number of samples M = 2560, the choice D = 4 and r = 2 makes the simulation as accurate as possible on this hardware.Depending on the application, a lower accuracy might also be sufficient. Complexity and memory requirements In this subsection, the computational complexity of the approximate subspace representation (31) is compared to the SoCE algorithm (2).The complexity is expressed in number of complex multiplications (CM) and evaluations of the complex exponential (CE).Additionally, we compare the number of memory access (MA) operations, which gives a better complexity comparison than the actual memory requirements. 
We assume that all complex numbers are represented using their real and imaginary part.A CM thus requires four multiplication and two addition operations.As a reference for a CE we use a table look-up implementation with linear interpolation for values between table elements [2].This implementation needs six addition, four multiplication, and two memory access operations. Let the number of operations that are needed to evaluate h and h be denoted by C h and C h , respectively.Using the SoCE algorithm, for every m ∈ I = {M 0 , . . ., M 0 +M−1} and every p = 0, . . ., P − 1, a CE and a CM have to be evaluated, that is, For the approximate DPS subspace representation with dimension D, first the approximate basis coefficients α have to be evaluated, requiring operations where the first term accounts for ( 29) and the second term for (32).In total, for the evaluation of the approximate subspace representation (31), operations are required.For large P, the approximate DPS subspace representation reduces the number of arithmetic operations compared to the SoCE algorithm by The memory requirements of the DPS subspace representation are determined by the block length M, the subspace dimension D and the resolution factor r. If the DPS sequences are stored with 16-bit precision, are needed. In Figure 6, C h and C h are plotted over the number of paths P for the parameters given in Table 1.Multiplications and additions are counted as one operation.Memory access operations are counted separately.The subspace dimension is chosen to be D = 4 according to the observations of the last subsection.The memory requirements for the DPS subspace representation are Mem h = 80 kbyte. It can be seen that the complexity of the approximate DPS subspace representation in terms of number of arithmetic operations as well as memory access operations increases with slope D, while the complexity of the SoCE algorithm increases with slope M. Since in the given example D M, the approximate DPS subspace representation already enables a complexity reduction by more than one order of magnitude compared to the SoCE algorithm for P = 30 paths.Asymptotically, the number of arithmetic operations can be reduced by a factor of C h /C h → 465. The wideband MIMO geometry-based channel model The time-variant GCM described in Section 2.1 can be extended to describe time-variant wideband MIMO channels. For simplicity we assume uniform linear arrays (ULA) with omnidirectional antennas.Then the channel can be described by the time-variant wideband MIMO channel transfer function h(t, f , x, y), where t denotes time, f denotes frequency, x the position of the transmit antenna on the ULA, y the position of the receive antenna on the ULA [25]. The GCM assumes that h(t, f , x, y) can be written as a superposition of P MPCs, η p e 2π jωpt e −2π jτp f e 2π j/λ sin ϕpx e −2π j/λ sin ψp y , (45) where every MPC is characterized by its complex weight η p , its Doppler shift ω p , its delay τ p , its angle of departure (AoD) ϕ p , and its AoA ψ p (see Figure 7) and λ is the wavelength.More sophisticated models may also include parameters such as elevation angle, antenna patterns, and polarization. 
There exist many models for how to obtain the parameters of the MPCs.They can be categorized as deterministic, geometry-based stochastic, and nongeometrical stochastic models [26].The number of MPCs required depends on the scenario modeled, the system bandwidth, and the number of antennas used.In this paper, we choose the number of MPCs such that the channel is Rayleigh fading, except for the lineof-sight component. For narrowband frequency-flat systems, approximately P 0 = 40 MPCs are needed to achieve a Rayleigh fading statis-tics [13].If the channel bandwidth is increased, the number of resolvable MPCs increases also.The ITU channel models [27], which are used for bandwidths up to 5 MHz in UMTS systems, specify a power delay profile with up to six delay bins.The I-METRA channel models for the IEEE 802.11n wireless LAN standard [28] are valid for up to 40 MHz and specify a power delay profile with up to 18 delay bins.This requires a total number of MPCs of up to P 1 = 18P 0 = 720.Diffuse scattering can also be modeled using a GCM by increasing the number of MPCs.In theory, diffuse scattering results from the superposition of an infinite number of MPCs [29].However, good approximations can be achieved by using a large but finite number of MPCs [30,31].In MIMO channels, the number of MPCs multiplies by N Tx N Rx , since every antenna sees every scatterer from a different AoA and AoD, respectively.For a 4 × 4 system, the total number of MPCs can thus reach up to We now show that the sampled time-variant wideband MIMO channel transfer function is band-limited in time, frequency, and space.Let F S denote the width of a frequency bin and D S the distance between antennas.The sampled channel transfer function can be described as a fourdimensional sequence h m,q,r,s = h(mT S , qF S , rD S , sD S ), where m denotes discrete time, q denotes discrete frequency, s denotes the index of the transmit antenna, and r denotes the index of the receive antenna. 1 Further, let ν p = ω p T S denote the normalized Doppler shift, θ p = τ p F S the normalized delay, ζ p = sin(ϕ p )D S /λ and ξ p = sin(ψ p )D S /λ the normalized angles of departure and arrival, respectively.If all these indices are collected in the vectors m = [m, q, s, r] T , h m can be written as that is, the multidimensional form of (2).The band-limitation of h m in time, frequency, and space is defined by the following physical parameters of the channel. (1) The maximum normalized Doppler shift of the channel ν Dmax defines the band-limitation in the time domain.It is determined by the maximum speed of the user v max , the carrier frequency f C , the speed of light c, and the sampling rate 1/T S , that is, (2) The maximum normalized delay of the scenario θ max defines the band-limitation in the frequency domain. It is determined by the maximum delay τ max and the sample rate 1/F S in frequency (3) The minimum and maximum normalized AoA, ξ min and ξ max define the band-limitation in the spatial domain at the receiver.They are given by the minimum and maximum AoA, ψ min and ψ max , the spatial sampling distance D S and the wavelength λ: The band-limitation at the transmitter is given similarly by the normalized minimum and maximum normalized AoD, ζ min and ζ max . 
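Before introducing the multidimensional subspace machinery, it helps to see the direct (SoCE-style) evaluation of the four-dimensional transfer function spelled out in code. The sketch below uses the normalized parameters just defined; the sign conventions follow (45) with the transmit position mapped to index s and the receive position to index r, and all parameter values are placeholders for a single-cluster toy scenario.

```python
import numpy as np

def soce_mimo(eta, nu, theta, zeta, xi, M, Q, N_tx, N_rx):
    """Direct evaluation of h[m, q, s, r] = sum_p eta_p *
    exp(2*pi*j*(nu_p*m - theta_p*q + zeta_p*s - xi_p*r))."""
    m = np.arange(M)[:, None, None, None]
    q = np.arange(Q)[None, :, None, None]
    s = np.arange(N_tx)[None, None, :, None]
    r = np.arange(N_rx)[None, None, None, :]
    h = np.zeros((M, Q, N_tx, N_rx), dtype=complex)
    for p in range(len(eta)):
        h += eta[p] * np.exp(2j * np.pi * (nu[p] * m - theta[p] * q
                                           + zeta[p] * s - xi[p] * r))
    return h

rng = np.random.default_rng(1)
P = 100
eta = (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2 * P)
aod = np.deg2rad(rng.uniform(-5, 5, P))       # single far cluster, narrow angular spread
aoa = np.deg2rad(rng.uniform(-5, 5, P))
h = soce_mimo(eta,
              nu=4.82e-5 * np.cos(rng.uniform(-np.pi, np.pi, P)),
              theta=rng.uniform(0, 0.056, P),
              zeta=0.5 * np.sin(aod),          # D_S = lambda/2, so zeta = sin(phi)/2
              xi=0.5 * np.sin(aoa),
              M=64, Q=32, N_tx=4, N_rx=4)
```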
In summary it can be seen that h m is band-limited to Thus the discrete time Fourier transform (DTFT) vanishes outside the region W, that is, Multidimensional DPS sequences The fact that h m is band-limited allows one to extend the concepts of the DPS subspace representation also to time-variant wideband MIMO channels.Therefore, a generalization of the one-dimensional DPS sequences to multiple dimensions is required. where They are sorted such that their eigenvalues λ d (W, I) are in descending order To ease notation, we drop the explicit dependence of v (d) m (W, I) on W and I when it is clear from the context.Further, we define the multidimensional DPS vector v (d) (W, I) ∈ C L as the multidimensional DPS sequence v (d) m (W, I) index-limited to I. In particular, if every element m ∈ I is indexed lexicographically, such that I = {m l , l = 0, 1, . . ., L − 1}, then All the properties of Theorem 1 also apply to multidimensional DPS sequences [19].The only difference is that m has to be replaced with m and Z with Z N . Example 1.In the two-dimensional case N = 2 with bandlimiting region W and index set I given by Equation ( 54) reduces to Note that due to the nonsymmetric band-limiting region W, the solutions of (59) can take complex values.Examples of two-dimensional DPS sequences and their eigenvalues are given in Figures 8 and 9, respectively.They have been calculated using the methods described in Appendix A. Multidimensional DPS subspace representation We assume that for hardware implementation, h m is calculated blockwise for M samples in time, Q bins in frequency, N Tx transmit antennas, and N Rx receive antennas.Accordingly, the index set is defined by The DPS subspace representation can easily be extended to multiple dimensions.Let h be the vector obtained by index limiting the sequence h m (47) to the index set I (60) and sorting the elements lexicographically.In analogy to the one-dimensional case, the subspace spanned by {h} is also spanned by the multidimensional DPS vectors v (d) (W, I) defined in Section 4.2.Due to the common notation of oneand multidimensional sequences and vectors, the multidimensional DPS subspace representation of h can be defined similarly to Definition 2. Definition 6.Let h be a vector obtained by index limiting a multidimensional band-limited process of the form (47) with band-limit W to the index set I. Let v (d) (W, I) be the multidimensional DPS vectors for the multidimensional band-limit region W and the multidimensional index set I. Further, collect the first D DPS vectors v (d) (W, I) in the matrix EURASIP Journal on Advances in Signal Processing The multidimensional DPS subspace representation of h with subspace dimension D is defined as where α is the projection of the vector h onto the columns of V: The subspace dimension D has to be chosen such that the bias of the subspace representation is small compared to the machine precision of the underlying simulation hardware.The following theorem shows how the multidimensional projection (63) can be reduced to a series of onedimensional projections. Theorem 3. Let h D be the N-dimensional DPS subspace representation of h with subspace dimension D, band-limiting region W, and index set I. If W and I can be written as Cartesian products where W i = [W 0,i − W max,i , W 0,i + W max,i ], and I i = {M 0,i , . . ., M 0,i + M i − 1}, then for every d = 0, . . ., D − 1, there exist d 0 , . . 
., d N−1 such that the N-dimensional DPS basis vectors v (d) (W, I) can be written as Further, the basis coefficients of the approximate DPS subspace representation are given by where γ (i) p,d = γ di ( f p,i , W i , I i ) are the one-dimensional approximate basis coefficients defined in (29).Additionally, resolution factors r i can be used to improve the approximation. Proof. See Appendix B The band-limiting region W (51) and the index set I (60) of the channel model (47) fulfill the prerequisites of Theorem 3 with Thus, Theorem 3 allows us to use the methods of Section 3.1 to calculate the basis coefficients of the multidimensional DPS subspace representation approximately with low complexity.The resolution factors r i , i = 0, . . ., N − 1, have to be chosen such that the bias of the subspace representation is small compared to the machine precision E max of the underlying simulation hardware.A necessary but not sufficient condition for this is to use the methods of Section 3.2 for each dimension independently, that is, to choose r i = 2W max,i /E max .However, it has to be verified numerically that the multidimensional DPS subspace representation achieves the required numerical accuracy. Complexity and memory requirements In this subsection, we evaluate the complexity and memory requirements of the N-dimensional SoCE algorithm and the N-dimensional approximate DPS subspace representation, given by Theorem 3.These results are a generalization of the results of Section 3.3.We assume that the one-dimensional DPS sequences v (di) (W i , I i ), i = 0, . . ., N − 1, have been precalculated.Further, we assume that Let the number of operations that are needed to evaluate h (47) and h D (67) be denoted by C h and C h D , respectively.For the SoCE algorithm, For the approximate DPS subspace representation with dimension D, firstly the N-dimensional DPS basis vectors need to be calculated from the one-dimensional DPS vectors (cf.(66)), requiring Secondly, the approximate basis coefficients α have to be evaluated according to (68), requiring In total, for the evaluation of the approximate subspace representation (67), operations are required.Asymptotically for P → ∞, the N-dimensional DPS subspace representation reduces the number of arithmetic operations compared to the SoCE algorithm by the factor The memory requirements of the DPS subspace representation are determined by the size of the index set I, the number of DPS vectors D i , and the resolution factors r i .If the DPS sequences are stored with 16-bit precision, are needed. 
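A compact sketch of Theorem 3 and the Kronecker construction of Appendix A for the two-dimensional (time-frequency) case is given below. It reuses the dps_vectors helper from the earlier one-dimensional sketch; shifting the one-sided delay band to baseband by a phase ramp is a convenience of this sketch, and the dimension choices are the illustrative ones from the paper's numerical example.

```python
import numpy as np

M, Q = 2560, 256
nu_dmax, theta_max = 4.82e-5, 0.056

# one-dimensional DPS bases per dimension
lam0, V0 = dps_vectors(M, nu_dmax, D=4)                 # time
lam1, V1 = dps_vectors(Q, theta_max / 2, D=23)          # frequency, half-width theta_max/2
# the delay band is one-sided ([-theta_max, 0] for the DTFT over q), so modulate the
# baseband vectors to re-centre them; the eigenvalues are unchanged by the modulation
V1 = V1.astype(complex) * np.exp(-2j * np.pi * (theta_max / 2) * np.arange(Q))[:, None]

# a 2-D basis vector is a Kronecker product of 1-D vectors, and eigenvalues multiply
v_2d = np.kron(V0[:, 1], V1[:, 3])
lam_2d = lam0[1] * lam1[3]

# separable projection of a single MPC with normalized Doppler nu_p and delay theta_p
nu_p, theta_p = 2.1e-5, 0.01
g0 = V0.conj().T @ np.exp(2j * np.pi * nu_p * np.arange(M))       # time coefficients
g1 = V1.conj().T @ np.exp(-2j * np.pi * theta_p * np.arange(Q))   # frequency coefficients
alpha_p = np.outer(g0, g1)      # D0 x D1 coefficients of this path; for the full channel,
                                # sum eta_p * alpha_p over all paths (cf. Theorem 3)
# reconstruction of that path on the M x Q grid: V0 @ alpha_p @ V1.T
```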
Numerical examples Section 3 demonstrated that an application of the approximate DPS subspace representation to the time-domain of wireless channels may save more than an order of magnitude in complexity.In this subsection, the multidimensional approximate DPS subspace representation is applied to an example of a time-variant frequency-selective channel as well as an example of a time-variant frequency-selective MIMO channel.A comparison of the arithmetic complexity is given.We assume a 14-bit fixed-point hardware architecture, that is, a maximum allowable square error of 1.We assume a typical urban environment with a maximum delay spread of τ max = 3.7 milliseconds given by the ITU Pedestrian B channel model [27].By omitting the spatial domains x and y in (47), we obtain a time-variant frequency-selective GCM where m = [m, q] T and f p = [ν p , θ p ] T .Since ( 76) is bandlimited to and we wish to calculate (76) in the index set we can apply a two-dimensional DPS subspace representation (Definition 6) to (76).Further, we can use Theorem 3 to calculate the basis coefficients α of the subspace representation. For a given maximum allowable square bias E 2 max = (2 −13 ) 2 , the estimated values of the resolution factors in the time and frequency domain are r 0 = 2ν Dmax /E max ≈ 2 and r 1 = θ max /E max ≈ 512 (rounded to the next power of two).The square bias bias 2 of the two-dimensional exact and the approximate DPS subspace representation is plotted in Figure 10 against the subspace dimension D. It can be seen that bias 2 h D ≈ E 2 max at a subspace dimension of approximately D = 80.The maximum number of one-dimensional DPS vectors is D 0 = 4 and D 1 = 23. Time, frequency, and spatial domain Table 3 contains the simulation parameters of the numerical experiments in the spatial domain.The remaining parameters are chosen according to Tables 1 and 2. We assume uniform linear arrays at the transmitter and the receiver with spacing D S = λ/2 and N Tx = N Rx = 8 antennas each.Further we assume that there is only one cluster of scatterers in the scenario which is not in the vicinity of the transmitter or receiver (see Figure 11) and we assume no line-of-sight component.The AoD and AoA are assumed to be limited by [ϕ min , ϕ max ] = [ψ min , ψ max ] = [−5 • , 5 • ], which has been observed in measurements [32]. A four-dimensional DPS subspace representation is applied to the channel transfer function (47) with W and I defined in (51) and (60).Following the same procedure as in the previous subsection, for a numerical accuracy of 14 bits the estimated values of the resolution factors and the number of one-dimensional DPS vectors in the spatial domains are r 2 = (ζ max − ζ min )/E max ≈ 512, r 3 = (ξ max − ξ min )/E max ≈ 512 (rounded to the next power of 2), and D 2 = D 3 = 5. Hybrid DPS subspace representation Last but not least, we propose a hybrid DPS subspace representation that applies a DPS subspace representation in time where the band-limit region W and the index set I are the same as in the two-dimensional case (cf.( 77) and ( 78)).Then, the two-dimensional DPS subspace representation can be applied to each h s,r m , s = 0, . . ., N Tx − 1, r = 0, . . ., N Rx − 1, independently. 
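One way to read the hybrid scheme as code: compute per-path time and frequency coefficients once, then loop over antenna pairs and fold the spatial exponentials into the path weights. This is a sketch of the idea, with shapes and helpers as in the previous examples, not the authors' implementation.

```python
import numpy as np

def hybrid_coefficients(eta, nu, theta, zeta, xi, V0, V1, N_tx, N_rx):
    """2-D (time/frequency) DPS coefficients for every antenna pair (s, r);
    spatial exponentials are evaluated directly, as in the hybrid representation."""
    M, Q = V0.shape[0], V1.shape[0]
    G0 = V0.conj().T @ np.exp(2j * np.pi * np.outer(np.arange(M), nu))      # D0 x P
    G1 = V1.conj().T @ np.exp(-2j * np.pi * np.outer(np.arange(Q), theta))  # D1 x P
    alpha = np.zeros((N_tx, N_rx, V0.shape[1], V1.shape[1]), dtype=complex)
    for s in range(N_tx):
        for r in range(N_rx):
            w = eta * np.exp(2j * np.pi * (zeta * s - xi * r))   # per-path spatial factor
            alpha[s, r] = (G0 * w) @ G1.T                        # sum_p w_p g0_p g1_p^T
    return alpha
# reconstruct the block for antenna pair (s, r) as V0 @ alpha[s, r] @ V1.T
```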
Results and discussion A complexity comparison of the SoCE algorithm and the approximate DPS subspace representation for one, two, and four dimensions is given in Figure 12.It was evaluated using (70) and (73).Also shown is the complexity of the four-dimensional hybrid DPS subspace representation.It can be seen that for time-variant frequency-flat SISO channels, the one-dimensional DPS subspace representation requires fewer arithmetic operations for P > 2 MPCs.The more MPCs are used in the GCM, the more complexity is saved.Asymptotically, the number of arithmetic operations is reduced by C h /C h → 465. For time-variant frequency-selective SISO channels, the two-dimensional DPS subspace representation requires fewer arithmetic operations for P > 30 MPCs.However, as noted in Section 4.1, channel models for systems with the given parameters require P = 400 paths or more.For such a scenario, the DPS subspace representation saves two orders of magnitude in complexity.Asymptotically, the number of arithmetic operations is reduced by a factor of C h /C h → 6.8 × 10 3 (cf.( 74)).The memory requirements are Mem h = 5.83 Mbyte (cf.(75)). For time-variant frequency-selective MIMO channels, the four-dimensional DPS subspace representation requires fewer arithmetic operations for P > 2 × 10 3 MPCs.Since MIMO channels require the simulation of up to 10 4 MPCs (cf.Section 4.1), complexity savings are still possible.The asymptotic complexity savings are C h /C h → 1.9 × 10 4 .However, in the region P < 2 × 10 3 MPCs, the four-dimensional DPS subspace representation requires more complex operations than the corresponding SoCE algorithm.Thus, even though we choose a "best case" scenario with only one cluster, a small angular spread and a low numerical accuracy, there is hardly any additional complexity reduction if the DPS subspace representation is applied in the spatial domain. The hybrid DPS subspace representation on the other hand exploits the savings of the DPS subspace representation in the time and frequency domain only.From Figure 12 it can be seen that it has fewer arithmetic operations than the four-dimensional DPS subspace representation and the fourdimensional SoCE algorithm for 60 < P < 2 × 10 3 MPCs.Thus the hybrid method is preferable for channel simulations in this region.Further, this method also allows for an efficient parallelization on hardware channel simulators [33]. CONCLUSIONS We have presented a low-complexity algorithm for the computer simulation of geometry-based MIMO channel models.The algorithm exploits the low-dimensional subspace spanned by multidimensional DPS sequences.By adjusting the dimension of the subspace, it is possible to trade computational complexity for accuracy.Thus the algorithm is ideally suited for fixed-point hardware architectures with limited precision. We demonstrated that the complexity reduction depends mainly on the normalized bandwidth of the underlying fading process in time, frequency, and space.If the bandwidth is very small compared to the sampling rate, the essential subspace dimension of the process is small and the complexity can be reduced substantially.In the time domain, the maximum Doppler bandwidth of the fading process is much smaller than the system sampling rate.Compared with the SoCE algorithm, our new algorithm reduces the complexity by more than one order of magnitude on 14-bit hardware. 
The bandwidth of a frequency-selective fading process is given by the maximum delay in the channel, which is a factor of five to ten smaller than the sampling rate in frequency.Therefore, the DPS subspace representation also reduces the computational complexity when applied in the frequency domain.To achieve a satisfactory numerical accuracy, the resolution factor in the approximation of the basis coefficients needs to be large, resulting in high memory requirements.On the other hand, it was shown that the number of memory access operations is small.Since this figure has more influence on the run-time of the algorithm, the approximate DPS subspace representation is preferable over the SoCE algorithm for a frequency-selective fading-process. The bandwidth of the fading process in the spatial domain is determined by the angular spread of the channel, which is almost as large as the spatial sampling rate for most scenarios in wireless communications.Therefore, applying the DPS subspace representation in the spatial domain does not achieve any additional complexity reduction for the scenarios of interest.As a consequence, for the purpose of wideband MIMO channel simulation, we propose to use a hybrid method which computes the complex exponentials in the spatial domain directly and applies the subspace representation to the time and frequency domain only.This method also allows for an efficient parallelization on hardware channel simulators. APPENDICES A. CALCULATION OF MULTIDIMENSIONAL DPS SEQUENCES In the one-dimensional case (N = 1), where W = [W 0 − W max , W 0 + W max ] and I = {M 0 , . . ., M 0 + M − 1}, the DPS sequences can be calculated efficiently [17,20].The efficient and numerically stable calculation of multidimensional DPS sequences with arbitrary W and I is not trivial and has not been treated satisfactorily in the literature.In this section a new way of calculating multidimensional DPS sequences is derived if their passband region can be written as a Cartesian product of one-dimensional intervals.Indexing every element m ∈ I lexicographically, such that I = {m l , l = 0, 1, . . ., L − 1}, we define the matrix K (W) by where the kernel K (W) is given by (55).Let v (d) (W, I) and λ d (W, I), d = 0, . . ., L−1, denote the eigenvectors and eigenvalues of K (W) : where It can be shown that the eigenvectors v (d) (W, I) and the eigenvalues λ d (W, I) are exactly the multidimensional DPS vectors defined in (57) and their corresponding eigenvalues. If the DPS sequences are required for m / ∈ I, they can be extended using (54). The multidimensional DPS vectors can theoretically be calculated for an arbitrary passband region W directly from the eigenproblem (A.2).However, since the matrix K (W) has an exponentially decaying eigenvalue distribution, this method is numerically unstable. If W can be written as a Cartesian product of onedimensional intervals (i.e., W is a hyper-cube), where , and the index-set I is written as where I i = {M 0,i , . . ., M 0,i + M i − 1}, the defining kernel K (W) for the multidimensional DPS vectors evaluates to where u = [u 0 , . . ., u N−1 ] T ∈ I.This means that the kernel K (W) is separable and thus the matrix K (W) can be written as a Kronecker product where K (Wi) , i = 0, . . ., N − 1, are the kernel matrices corresponding to the one-dimensional DPS vectors.Now let λ di (W i , I i ) and v (di) (W i , I i ), d i = 0, . . ., M i − 1, denote the eigenvalues and the eigenvectors of K (Wi) , i = 0, . . 
., N − 1, respectively.Then the eigenvalues of K (W) are given by [34,Chapter 9] B. PROOF OF THEOREM 3 For I given by (65), h can be written as where e (i) p = [e 2π j fp,iM0,i , . . ., e 2π j fp,i(M0,i+Mi−1) ] T .Further, since W is given by (64), the results of Appendix A can be used and V can be written as where every M i × D i matrix V i contains the one-dimensional DPS vectors v d (W i , I i ) in its columns.Using the identity Figure 2 : Figure 2: Doppler spectrum H(ν) of the sampled time-variant channel transfer function h m .The maximum normalized Doppler bandwidth 2ν Dmax is much smaller than the available normalized channel bandwidth. ) ( 3 ) The eigenvalues λ d (W, I) satisfy 1 < λ i (W, I) < 0. They are clustered around 1 for d ≤ D − 1, and decay exponentially for d ≥ D , where D = |W ||I| + 1. (4) The DPS sequences v (d) m (W, I) are orthogonal on the index set I and on Z. Figure 6 : Figure6: Complexity in terms of number of arithmetic operations (left abscissa) and memory access operations (right abscissa) versus the number of MPCs P. We show results for the sum of complex exponentials algorithm (denoted by "SoCE") and the approximate subspace representation (denoted by "DPSS") using M = 2560, ν Dmax = 4.82 × 10 −5 , and D = 4. ψ 1 ψ 2 Figure 7 : Figure 7: Multipath propagation model for a time-variant wideband MIMO radio channel.The signals sent from the transmitter, moving at speed v, arrive at the receiver.Each path p has complex weight η p , time delay τ p , Doppler shift ω p , angle of departure ϕ p , and angle of arrival ψ p . Definition 5 . Let I ⊂ Z N be an N-dimensional finite index set with L = |I| elements, and W ⊂ (−1/2, 1/2) N an Ndimensional band-limiting region.Multidimensional discrete prolate spheroidal (DPS) sequences v(d) m (W, I) are defined as the solutions of the eigenvalue problem Figure 10 : Figure 10: bias 2 h D for the subspace representation in the time and frequency domain with ν Dmax = 4.82 × 10 −5 , M = 2560, θ max = 0.056, and Q = 256.The resolution factors are fixed to r 0 = 2 and r 1 = 512.The thin horizontal line denotes the numerical accuracy of a fixed-point 14-bit processor. Figure 11 : Figure 11: Scenario of a mobile radio channel with one cluster of scatterers.The AoD and the AoA are limited within the intervals Φ = [ϕ min , ϕ max ] and Ψ = [ψ min , ψ max ], respectively. Figure 12 : Figure12: Complexity in terms of number of arithmetic operations versus the number of MPCs P. We show results for the SoCE algorithm (denoted by "SoCE") and the approximate DPS subspace representation (denoted by "DPSS") for one, two, and four dimensions.Also shown is the complexity of the four-dimensional hybrid DPS subspace representation (denoted by "Hybrid"). 3 ) 1 e the basis coefficients α can be calculated byα = V H h = (0) p ⊗ • • • ⊗ e (N−1) p ⊗ • • • ⊗ V H N−1 e (N−1) Figure 1: GCM for a time-variant frequency-flat SISO channel.Signals sent from the transmitter, moving at speed v, arrive at the receiver via different paths.Each MPC p has complex weight η p and Doppler shift ω p • H .The Euclidean ( 2 ) norm of the vector a is denoted by a .The Kronecker product and the Khatri-Rao product (columnwise Kronecker product) are denoted by ⊗ and , respectively.The inner product of two vectors of length N is defined asx, y = N−1 i=0 x i y * i , where • * denotes complex conjugation.If X is a discrete index set, |X| denotes the number of el- is band-limited to the region W = [−ν Dmax , ν Dmax ].Let I = {M 0 , . . 
M_0 + M - 1} denote a finite index set on which we want to calculate h_m. Due to property (5) of Theorem 1, h_m can be decomposed into h_m = h'_m + g_m, where h'_m is a linear combination of the DPS sequences v_m^(d)(W, I) and h'_m = h_m for all m in I.

Table 1: Simulation parameters for the numerical experiments in the time domain. The carrier frequency and the sample rate resemble those of a UMTS system [24]. The block length is chosen to be as long as a UMTS frame.

Table 2: Simulation parameters for the numerical experiments in the frequency domain. Table 2 contains the simulation parameters of the numerical experiments in the frequency domain; the parameters in the time domain are chosen according to Table 1.

Table 3: Simulation parameters for the numerical experiments in the spatial domains.

List of symbols:
omega_p, nu_p: Doppler shift and normalized Doppler shift of the pth MPC
omega_Dmax, nu_Dmax: Maximum Doppler shift, maximum normalized Doppler shift
tau_p, theta_p: Delay and normalized delay of the pth MPC
tau_max, theta_max: Maximum delay, maximum normalized delay
U_d(nu), U'_d(nu): DPS wave function and approximate DPS wave function
alpha_d, alpha'_d: dth basis coefficient and approximate basis coefficient of the DPS subspace representation of h
gamma_{p,d}, gamma'_{p,d}: dth basis coefficient and approximate basis coefficient of the DPS subspace representation of the pth MPC
r_i, D_i: Resolution factor and maximum number of one-dimensional DPS vectors in time (i = 0), frequency (i = 1), space at the transmitter (i = 2), and space at the receiver (i = 3)
2014-10-01T00:00:00.000Z
2007-07-01T00:00:00.000
{ "year": 2007, "sha1": "8cabd87ff764d6e1455f8f2f4da70160f9643514", "oa_license": "CCBY", "oa_url": "https://asp-eurasipjournals.springeropen.com/counter/pdf/10.1155/2007/95281", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "8cabd87ff764d6e1455f8f2f4da70160f9643514", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [] }
263767573
pes2o/s2orc
v3-fos-license
Comparing rates of adverse events detected in incident reporting and the Global Trigger Tool: a systematic review

Abstract
Many hospitals continue to use incident reporting systems (IRSs) as their primary patient safety data source. The information IRSs collect on the frequency of harm to patients [adverse events (AEs)] is generally of poor quality, and some incident types (e.g. diagnostic errors) are under-reported. Other methods of collecting patient safety information using medical record review, such as the Global Trigger Tool (GTT), have been developed. The aim of this study was to undertake a systematic review to empirically quantify the gap between the percentage of AEs detected using the GTT and the percentage also detected via IRSs. The review was conducted in adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Studies published in English, which collected AE data using the GTT and IRSs, were included. In total, 14 studies met the inclusion criteria. All studies were undertaken in hospitals and were published between 2006 and 2022. The studies were conducted in six countries, mainly in the USA (nine studies). Studies reviewed 22 589 medical records using the GTT across 107 institutions, finding 7166 AEs. The percentage of AEs detected using the GTT that were also detected in corresponding IRSs ranged from 0% to 37.4%, with an average of 7.0% (SD 9.1; median 3.9 and IQR 5.2). Twelve of the fourteen studies found <10% of the AEs detected using the GTT were also found in corresponding IRSs. The >10-fold gap between the detection rates of the GTT and IRSs is strong evidence that the rate of AEs collected in IRSs in hospitals should not be used to measure, or as a proxy for, the level of safety of a hospital. IRSs should be recognized for their strengths, which are to detect rare, serious, and new incident types and to enable analysis of contributing and contextual factors to develop preventive and corrective strategies. Health systems should use multiple patient safety data sources to prioritize interventions and promote a cycle of action and improvement based on data rather than merely collecting and analysing information.

Introduction
Hospitals require real- or near real-time information to understand whether they are delivering safe care to patients and to inform interventions to reduce adverse events (AEs) (harm to patients) [1]. The lack of adequate detection and monitoring of AEs is a major factor in their persistence [1,2]. Measurement of types and frequencies of AEs informs patient safety priorities for corrective strategies and tracking progress over time and against peers [1]. The challenges for patient safety measurement in healthcare systems have been outlined, with solutions for adoption by member states and their healthcare systems, in the WHO Patient Safety Action Plan (2021-2030), which calls on governments to 'strengthen synergies and data-sharing channels between sources of patient safety information for timely action and intervention ...' [3].

One reason why hospitals do not adequately detect AEs and monitor their prevalence may be their use of incident reporting systems (IRSs) as their primary patient safety information data source [4,5]. IRSs tend to collect poor quality information on the frequency of harm, and certain incident types, such as diagnostic errors, are consistently under-reported [4,6]. Over-reliance on IRSs can thereby compromise a hospital's quantitative understanding of AEs [4].
One frequently used method of collecting patient safety prevalence information is the Global Trigger Tool (GTT) [7][8][9].The GTT was designed to provide hospitals with 'an easy-to-use method for accurately identifying AEs and measuring the rate of AEs over time' [8].The GTT involves the screening of medical records for the presence of triggers, followed by a more in-depth manual review for the presence of an AE.After AEs have been detected with the GTT, their rates may be calculated and displayed graphically over time [8].Originally developed for adult inpatients in 2003, the GTT has since been modified for hospital specialties [10][11][12][13] and primary care [13][14][15] with a second edition ('the GTT Protocol') published in 2009 [8].Medical record review, using structured tools like the GTT, is considered to be one of the patient safety data sources most amenable to measuring rates of AEs, whilst IRSs are not suitable for reliable measurement purposes largely owing to reporting biases [16].Healthcare services may use the GTT as an adjunct to IRSs to detect and measure AEs [8].Tchijevitch et al. [17] found that an IRS alone was insufficient as a single method for quantifying the occurrence of serious or fatal adverse drug events (ADEs) and that the GTT could be beneficial as one other data source.In a secondary finding of our 2016 systematic review of the GTT, IRSs detected only an average of 4% (range 2-8%) of AEs detected using the GTT across eight studies [18].However, there are no syntheses of direct comparisons of AE data collected by IRS and the GTT.Given many hospitals' widespread use of and arguably over-dependence on IRSs [4,5], the poor quality of information that IRSs provide on the prevalence of harm and that the GTT is designed as a more reliable tool to be used by hospitals to measure AEs, we sought to compare the two methods.The aim of this study was to undertake a systematic review to empirically quantify the gap between the percentage of AEs detected using the GTT to those that are also detected via IRSs. Methods A systematic review and narrative synthesis was conducted in adherence to the PRISMA statement [19].We searched MEDLINE, EMBASE, and CINAHL for articles for all time up to 27 March 2023 using the search term 'Global Trigger Tool'.We also hand-searched the key journals including 'BMJ Quality and Safety', the International 'Journal for Quality in Health Care', 'Health Services Research', 'BMJ Open', 'Pediatrics', 'Journal of Evaluation Clinical Practice', 'Joint Commission Journal', 'Journal of General Internal Medicine', 'Journal of Patient Safety Risk Management', 'Journal of Patient Safety', 'American Journal of Medical Quality', and included all eight studies previously identified [18].Snowball searching of included articles was also undertaken.Variants of the GTT were included.Studies were limited to those published in the peer-reviewed literature; doctoral theses were excluded.Fig. 1 depicts the search process. Study selection Two authors (C.J.M. and P.H.) independently screened all titles, abstracts, and potentially relevant articles for full-text review.Any disagreements about the eligibility of studies were resolved through discussion until consensus was reached.Studies published in English that compared AE rates using a variant of the GTT with AEs detected by IRS were included.In total, 14 studies met the inclusion criteria. Data extraction Two authors (C.J.M. and P.H.) 
extracted and compiled data from each paper. The publication data and study demographics (authors, year of publication, and country), speciality (healthcare type and speciality), GTT methodology (number of institutions, sample size, AE definitions, number of reviewers, use of inter-rater reliability (IRR), and patient safety classifications), and results data (AE rate measured by GTT and IRS) were all extracted. The GTT methodology was abstracted due to the considerable heterogeneity and deviations from the GTT protocol that we previously found within studies [18].

Results
The literature search identified 404 potentially relevant, non-duplicate articles. After reviewing the titles and abstracts, we excluded 248 articles and 156 were read in full-text form (Fig. 1). Fourteen articles met our inclusion criteria. Of these, 11 (79%) studies cited measuring the AE rate as a reason for undertaking the research (Table 1). Characterising AEs (for example, using incident types, preventability, or severity) was the second most frequently cited reason (7/14, 50%), followed by comparing the GTT with other AE data sources (7/14, 50%).

Demographics and methodology - sampling. All 14 included studies were undertaken in hospitals in six countries, with nine (64%) in the USA (Table 2, Table A.1). The studies were published between 2006 and 2022. ADEs only were collected in two studies [17,29]. Over one-third of the studies were undertaken in a single institution (6/14, 43%) (Table 2), and a total of 22 589 medical records were reviewed across 107 institutions. A total of 7166 AEs were found.

Definition of AE. While the GTT method was stated in all studies, only four of 14 (29%) explicitly used the GTT protocol's [8] AE definition ('unintended physical injury resulting from or contributed to by medical care that requires additional monitoring, treatment or hospitalization, or that results in death'). Five (36%) studies used the following definition (or a modification of it): 'an injury, large or small, caused by the use (including non-use) of a drug, test or medical treatment'; four studies (29%) reported no definition; and one (7%) study used an Institute of Medicine [33] definition ('an event leading to patient harm and caused by medical management rather than the underlying condition of the patient').

Number of reviewers. The GTT protocol recommends assignment of two primary reviewers and one authenticating physician [8]. Just under half of the studies (6/14, 43%) (Table A.2) used this method. The most frequently used other method was one primary and one secondary reviewer (3/14, 21%). (Table 2 notes: a One study [27] was conducted in two countries, so the total is greater than the number of papers; b Nilsson applied the GTT to patients who had died in ICU.)

Inter-rater reliability. Only two studies measured IRR and only one of those reported the results (kappa = 0.58 between two primary reviewers and kappa = 0.89 between primary and secondary reviewers) [25].

Use of severity of harm scale. The National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) scale [34] was used as the scale of harm to report GTT AEs in 93% (13/14) of the studies (Table A.3).

Comparison of adverse event reporting rates using the GTT and IRSs. The percentage of AEs detected using the GTT that were also detected in corresponding IRSs ranged from 0% to 37.4% (Fig.
2, Table 3).There was an average of 7.0% of AEs detected with the GTT also detected by IRSs (SD 9.1; median 3.9 and IQR 5.2).Twelve of the fourteen studies found <10% of the AEs detected using the GTT were also found in corresponding IRSs.Two studies included in this review compared rates of serious AEs collected by the GTT and IRS.Tchijevitch et al. [17] detected 10 serious or fatal AEs from 141 medical records using the GTT with none of these (0%) being detected by IRS.In another study related to inpatient psychiatry, the IRS detected just over half (53%) of moderate to severe harm AEs that were detected by GTT [31]. Statement of principal findings We found that most AEs occurring in hospitals and identified by the GTT are unlikely to be detected by IRSs.In 12 of the 14 studies reviewed, the rate of AE detection by IRSs was <10% of those detected by the GTT.The average of AEs detected with the GTT that were also detected by IRSs was 7%. Strengths and limitations This review is the first to undertake a standalone systematic review to answer a recurring question in the quality and safety field: to compare rates of AEs using the GTT with AEs detected via IRS.Strengths include a systematic search strategy involving multiple data sources and reporting according to PRISMA guidelines, and included studies were critically appraised for quality.The quality of included studies as measured by the mean QATSDD score was higher (26.7 versus 19.7 [35] and 21.8 [36]) compared with two other systematic reviews assessing patient safety data sources. The rate of AEs detected by the GTT is the comparator in this study.The GTT AE rates varied widely across studies: 7-203 AEs per 100 admissions across the 14 studies and 7-28 AEs per 100 admissions for the five general medical studies [23][24][25][26]30] (Table 3).This wide variation is likely to have three explanations.Firstly, differences in setting: the four highest AE rates (74-203 per 100 admissions [21,22,27,32]) (Table 3) occurred in paediatric or neonatal intensive care units or inpatient oncology, where higher rates are more likely.Secondly, heterogeneity of GTT methods outlined in the results and thirdly, reviewers' judgements, for example, AEs involving minor harm are generally less easily identified and the GTT reviewers need to apply considerable discretion which may result in variation of perceptions [18].The heterogeneity in methods, notably definitions of AE, and reviewer judgement, may reduce the utility of the GTT as a comparator to IRSs. As to limitations, one is the small number of included studies.It is possible that some relevant studies were not captured by the search strategy.There remains a possibility of bias because non-English publications were omitted.It is also possible that publication bias affected the results of this study.The information within the included studies is reliant on the information captured in the medical records and, as our previous systematic review on the GTT's use found, there is heterogeneity in how this is captured and recorded in medical records between and within health services and studies [18].The GTT was designed to be used in general inpatients; however, 9/14 papers included in this review applied the GTT in other specialties which tend to have different triggers for AEs which may impact on the rate of AEs detected.Research using the GTT methodology that yielded low levels of AEs may be less likely to be published than studies with higher AE levels. 
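The headline summary statistics quoted above (mean, SD, median, and IQR of the per-study detection percentages) can be reproduced with a few lines of code. The sketch below is purely illustrative: the per-study values are placeholders of roughly the right order, not the figures extracted from the 14 included studies.

```python
import numpy as np

# Illustrative per-study detection percentages (placeholders, not the extracted study values).
pct = np.array([0.0, 1.5, 2.0, 2.8, 3.2, 3.6, 3.9, 4.1, 4.8, 5.5, 6.4, 8.0, 14.2, 37.4])

mean = pct.mean()
sd = pct.std(ddof=1)                                    # sample standard deviation
median = np.median(pct)
iqr = np.percentile(pct, 75) - np.percentile(pct, 25)   # interquartile range

print(f"mean={mean:.1f}%, SD={sd:.1f}, median={median:.1f}, IQR={iqr:.1f}")
```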
Interpretation within the context of the wider literature This study compared the rates of incidents from two data sources, IRSs and the GTT.However, there are other data sources available to health services to allow them to measure and characterize their safety profile.These include patient complaints, medico-legal claims, executive walk-arounds, investigations, observation of patient care, and administrative data analysis [2,[37][38][39].Each have strengths and weaknesses; for example, medical record reviews, such as the GTT may allow health services to compare rates of incidents over time, but they are time-consuming and resource intensive often requiring experienced clinicians to identify and judge AEs [2]. Not only do different data sources have particular methodological strengths and weaknesses, they also tend to collect different incident types.Levtzion-Korach et al. in 2010 compared five data sources and found there was little overlap in incident types between sources [38].For example, IRSs tend to collect incidents related to patient identification, falls, and medication; medicolegal claims collect incidents related to clinical judgement related to diagnosis and treatment, communication, and problems with medical records; whilst patient complaints tend to collect incidents on communication and administrative issues, such as admission and discharge [16].The implication of patient safety data sources' methodological strengths and weaknesses together with the heterogeneous incident types collected means that best practice is for health services to collect and strategically analyse multiple types of data sources [16,39]. The utility of patient safety data collection and analysis methods are evolving.One is the use of trigger tools, such as the GTT 'prospectively' or in 'real-time' [40].This involves an integrated clinician working within a medical department reviewing medical records within 48 h of patient admission (and who still may be an in-patient); this is followed by a multi-disciplinary discussion to elicit staff perspectives of what happened and why, to determine contributing factors and change ideas to inform possible interventions to improve the safety of future care.This method was designed to overcome a weakness of retrospective medical reviews that can be poor at understanding contributing factors from a review of medical notes alone.The method also fits with Macrae's notion that learning from incidents is socially participative, not merely a formal data collection and analysis exercise [4]. Another innovation relates to using the IRS in a different way by focussing reporting on incidents that a health service may require more information about [41].Marshall et al. focussed reporting on a clinical topic that is not well covered by IRS, paediatric diagnostic incidents, with the added aim of increasing reporting from doctors, who generally did not use the IRS at their institution [41].Using small-scale iterative interventions, 44 paediatric diagnostic incidents were reported in 6 months from a baseline of 0. This was sufficient to characterize the main contributing factors to these incidents to allow interventions to be designed [41]. 
The manual and time-consuming nature of case note review and on-going digitalization of medical records has sparked continued research interest in collecting triggers electronically, thereby introducing considerable efficiencies [42].Within this realm, querying of large electronic data repositories can be undertaken to detect incident types which are infrequent and difficult to collect, such as diagnostic incidents [43].Artificial intelligence approaches, such as Natural Language Processes can also be incorporated into these data repository querying approaches to refine and make searches more specific [44].These methods may only be applicable in health services with large volumes of digitized medical record information and the requisite data analyst capabilities [39,43]. Only two studies included in this review collected information on serious AEs-with 0% and 53% of serious AEs detected using the GTT that were also detected in corresponding IRSs.Both of these studies collected data in specialty areas (medication [17] and inpatient psychiatry [31]) with the latter an outlier in the general results in this review (37%).The relatively high proportion of moderate to severe harm AEs detected by IRS in the inpatient psychiatry study is unlikely to be generalizable [31].Another study compared GTT and IRS AE rates (but was not included in this systematic review due to GTT and IRR AEs being reported independently) and found in a sample of 795 medical records in three US hospitals, 26 serious AEs detected by the GTT with only two or 8% detected by an IRS [7].This limited evidence indicates that serious AEs may not be detected reliably by IRS, further emphasising the need for health services to routinely use multiple patient safety data sources. Implications for policy, practice, and research Our findings of a >10-fold gap between the AE detection rates of the GTT and IRSs is strong evidence that the rate of AEs collected in IRSs in hospitals should not be used to measure or serve as a proxy for the level of safety of a hospital.However, recent prominent editorials and reviews of international patient safety expert opinions note that such a practice remains common in healthcare and that IRSs are the most widely employed patient safety practice [5,45,46].The primary implication is for health services to incorporate multiple methods to collect patient safety information in a way that is most efficient for them depending, for example, on whether their records are digitized and to use frameworks of decision making and prioritization setting for action.This should explicitly delineate the purpose of all patient safety data sources in policy, practice, education, and measuring results from interventions designed to reduce harm.For example, IRSs, as emphasized by our results, are poor at assessing incident rates.However, from the perspective of patient safety improvement, they can detect rare and new incident types and contribute to analysis of contributing and contextual factors to develop preventive and corrective strategies.If a health service is lacking information to understand particular incident types or clinical specialties, they may design bespoke IRS data collections to fill this gap and to design interventions, as Marshall et al. achieved in relation to paediatric diagnostic incidents [41]. 
The results of our study may provide a temptation for health services to run campaigns with clinicians for more incidents to be reported in the IRS. However, we would caution against this. Even though IRSs detect a small percentage of AEs, large numbers of incidents are being collected; for example, over 2.4 million in one year (July 2021-June 2022) in the former National Reporting and Learning System in England and Wales [47]. The qualitative data contained in the IRS incident narratives are a highly valuable source of information to understand contributing and contextual factors, and to inform improvement and research [48], notwithstanding their known limitations [4]. There are already too many resources devoted to the collection of IRS data, and not enough dedicated to the strategic prioritization, interpretation, and analysis of all patient safety data sources and the implementation of corrective strategies [4].

Conclusions
This systematic review found that, in 14 studies across a range of specialties, IRSs detected <10% of the AEs that were detected by the GTT. This study provides clear evidence that hospitals should not use IRSs to estimate the prevalence of harm to patients. Health systems should incorporate multiple patient safety data sources to prioritize interventions and promote a cycle of action and improvement based on data rather than merely collecting and analysing information.

Figure 1: Systematic review flow diagram detailing the numbers of articles found, abstracts screened, and full texts reviewed.
Figure 2: Percentage of adverse events detected using the Global Trigger Tool that were also found in the corresponding incident reporting system, by research study.
Table 1: Reasons for undertaking the study.
Table 2: GTT studies by country, speciality, and number of hospitals.
Table 3 footnotes: a AE rates reported per patient days in the original papers (recalculated per 1000 patient days); b calculated by the authors of this paper; c fatal and life-threatening ADEs only.
Quality appraisal results are shown in Table 3 and in more detail in Table A.5; studies largely performed best on criteria related to fit between research question and method, statement of aims/objectives, and clear description of the research.
2023-07-15T06:17:33.378Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "87af2a4ee8f408f7b4989b9d0c41f72585b74943", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/intqhc/advance-article-pdf/doi/10.1093/intqhc/mzad056/50878724/mzad056.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "33ac7178e119816ede33f8d643d75f4e272301c7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
597241
pes2o/s2orc
v3-fos-license
Histocompatibility antigens and genetic control of the immune response in guinea pigs. IV. Specific inhibition of lymphocyte proliferation by auto-anti-idiotypic antibodies.

The in vitro T-cell proliferation induced by penicilloylated bovine IgG (BPO-BGG) in sensitized strain 2 guinea pigs could be specifically blocked by the serum of guinea pig (*305) which had been repeatedly immunized with BPO-BGG over a period of 9 mo. The antibodies which appeared in the serum of this animal (*305) were functionally similar to the strain 2 anti-idiotypic antibodies (alpha strain 2 BPO-BGG) raised by immunizing strain 2 guinea pigs with immunoadsorbent column-purified BPO-BGG. Animal *305 had no detectable antibodies to BPO-BGG and failed to give a delayed hypersensitivity skin response when challenged with BPO-BGG.

The reaction site was measured 24 h after testing with a caliper (Schnelltaster; Kroeplin, Schluechtern, Germany). Delayed skin reactivity to BPO-BGG was assessed by the intradermal injection into the shaved flank of 5 and 50 µg BPO-BGG in 100 µl of PBS (0.01 M, pH 7.4).

Production of Antisera. Strain 13 anti-strain 2 serum was prepared by immunizing strain 13 guinea pigs with a homogenate of strain 2 lymph node and spleen cells as previously described (3). Antisera against strain 2 and strain 13 anti-BPO-BGG were prepared as previously described (2). The immune anti-BPO-BGG antibodies were then used as immunogens for the sensitization of strain 2 and strain 13 guinea pigs (1). Serum *305 was obtained from a strain 2 guinea pig (*305) which had been immunized with BPO-BGG according to the immunization schedule outlined in Table I. All animals (a total of seven) were bled (6-8 ml blood) at about 2 wk intervals and their serum used for the preparation of anti-BPO-BGG antibody (2). At the end of day 264, one strain 2 animal out of the group of seven no longer gave a delayed hypersensitivity skin response when challenged with 50 µg BPO-BGG. The sera of all animals were tested for their ability to inhibit BPO-BGG-induced T-cell proliferation in vitro.

Absorption of Antisera. Antisera were mixed with an equal volume of packed lymph node and spleen cells (approximately 10 ~ cells/ml of serum) for 4 h at 4°C. Each antiserum was absorbed at least three times. The absorbed antisera were centrifuged at 10,000 rpm for 30 min and sterilized by Millipore filtration.

Antibody Determinations. Antibodies directed against the penicilloyl (BPO) group were determined by the neutralization of penicillin-coated T4 bacteriophages as previously described (4). The passive hemagglutination technique was used to measure antibodies to bovine Ig (BGG) (5). BGG was bound with carbodiimide to sheep erythrocytes as described by Golub et al. (5).

Cell Preparation. PERITONEAL EXUDATE LYMPHOCYTES (PEL). Guinea pig PEL were purified by passage through nylon wool columns (2). For the assay of antigen-induced proliferation, lymphocyte suspensions (20 x 10^6/ml) were incubated with either 100 µg of DNPT-GL and GT or 400 µg aspiryl ovalbumin (ASP-OVA), BPO-BGG, and (T,G)-A--L per ml of Medium 199 (M-199) containing 10% heat-inactivated fetal calf serum. A portion of the cells was also incubated with 10 µg phytohemagglutinin (PHA)/ml of M-199. The cells were pulsed with antigen for 30 min at 37°C, washed three times with M-199, and then cultured at a cell concentration of 1.25 x 10^6/ml.
0.2-ml aliquots of the antigen-pulsed PEL (0.25 x 10^6 cells) were cultured in round-bottom microtiter plates (Cooke Microtiter system; Sterilin Ltd, Richmond, Surrey, England) in medium RPMI-1640 (Grand Island Biological Co., Grand Island, N. Y.) supplemented with penicillin (100 U/ml) and streptomycin (100 µg/ml), adding either 1% normal guinea pig serum or 1% guinea pig antiserum supplemented with 9% heat-inactivated fetal calf serum, for 72 h at 37°C. 18 h before the termination of the cultures, 0.5 µCi of tritiated thymidine (sp act, 5 Ci/mmol; The Radiochemical Centre, Amersham, Great Britain) was added to each well. The cells were harvested with a multiple cell culture harvester (Skatron, Lierbyen, Norway), and the radioactivity was measured in a Beckman liquid scintillation counter (Beckman Instruments, Inc., Fullerton, Calif.). The results are expressed as total counts per minute per culture well.

Results
Repeated immunization with BPO-BGG (Table I) over a period of about 9 mo resulted in one strain 2 guinea pig (out of a group of seven) becoming unresponsive when challenged with BPO-BGG in vivo. Whereas the other members of the group gave intense 4-h Arthus and 24-h delayed skin reactions to 50 µg of intradermally injected BPO-BGG, animal *305 failed to respond. The skin test was repeated 10 days later with the same results. The serum of animal *305 contained no detectable antibodies to either the BPO group or to the BGG carrier, whereas the sera of the other strain 2 guinea pigs in the group had high antibody titers to both the BPO and the BGG determinants (results not shown). The sera of all animals were then tested for their capacity to inhibit BPO-BGG-induced T-cell proliferation in vitro. As shown in Table II, serum *305 specifically inhibited the BPO-BGG-induced proliferation (Table IV). Furthermore, the inhibitory activity of serum *305 can only be absorbed by immune cells from strain 2 (Table III). These data suggest a functional similarity between serum *305 and strain-specific alpha strain 2 BPO-BGG.

The results in this paper indicate that a serum with specific in vitro inhibitory activity may arise in the course of repeated antigenic stimulation. These data are reminiscent of the production of anti-idiotypic antibody in rats repeatedly immunized with alloantigens (9). A striking feature of serum *305, however, is that its activity appears to be directed against strain-specific idiotypes characteristic of strain 2 guinea pigs, and in this respect serum *305 is functionally similar to alpha strain 2 BPO-BGG produced by repeated immunization with syngeneic antibodies. Both serum *305 and alpha strain 2 BPO-BGG are exquisitely specific in vitro in that they only inhibit BPO-BGG-induced in vitro T-cell proliferation but do not interfere with the stimulation induced by the other antigens (Table III; reference 1). A number of recent studies have reported the production of autologous anti-idiotypic antibodies (6-9), and it has been suggested (9) that the in vivo appearance of anti-idiotypic antibodies serves a regulatory function, possibly by controlling the synthesis of a particular antibody elaborated in response to an antigenic stimulus. The phenomenon of idiotype-anti-idiotype interactions, as suggested by Jerne (10) and recently modified by Hoffmann (11), is pictured as a network of interacting variable domains of immunoglobulin molecules.
A similar theory based on the interaction of complementary idiotypes has been put forward by Koehler et al. (12).

The significance of the production of "auto-anti-idiotypic antibodies", specifically reactive with a particular antigen (in one case out of seven), is at present not clear. One possibility is that the production of anti-BPO-BGG may have been suppressed by an excess of complementary (12) anti-idiotypic antibodies, with the result that the immune balance in the "suppressed" animal may have been temporarily shifted in favor of auto-anti-idiotypic antibody production. Further studies with strain 2 guinea pigs using different immunization schedules and monitoring the appearance and disappearance of antibodies and anti-idiotypic antibodies may clarify this point. In support of the suggestion that the auto-anti-idiotypic antibodies (8,9), which sometimes appear in the serum of immunologically unresponsive animals, may serve an immunoregulatory role is the finding that the in vivo administration of alpha strain 2 BPO-BGG into sensitized strain 2 guinea pigs results in a dramatic but short-lived (about 4 wk) fall in the level of anti-BPO antibodies. 1 It remains to be seen whether further immunization with BPO-BGG can induce unresponsiveness in the remaining responders of the group, with the concomitant appearance of "auto-antibody" activity, and whether recovery from unresponsiveness results in a decline in the level of BPO-BGG-specific inhibitory activity in the serum of animal *305.

Summary
The in vitro T-cell proliferation induced by penicilloylated bovine IgG (BPO-BGG) in sensitized strain 2 guinea pigs could be specifically blocked by the serum of guinea pig (*305) which had been repeatedly immunized with BPO-BGG over a period of 9 mo. The antibodies which appeared in the serum of this animal (*305) were functionally similar to the strain 2 anti-idiotypic antibodies (alpha strain 2 BPO-BGG) raised by immunizing strain 2 guinea pigs with immunoadsorbent column-purified BPO-BGG. Animal *305 had no detectable antibodies to BPO-BGG and failed to give a delayed hypersensitivity skin response when challenged with BPO-BGG.
2014-10-01T00:00:00.000Z
1977-04-01T00:00:00.000
{ "year": 1977, "sha1": "347e52e5c79ade9ff4d35332492dd3828b7fdd48", "oa_license": "CCBYNCSA", "oa_url": "http://jem.rupress.org/content/145/4/1093.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "347e52e5c79ade9ff4d35332492dd3828b7fdd48", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
126301987
pes2o/s2orc
v3-fos-license
The Extension of Hypothesis in Propositional Logic

The belief non-revision method can achieve a consistent hypothesis extension by limiting the process of reasoning. In propositional logic, several valuable properties of the hypothesis extension have been proved. The hypothesis extension is an infinite set of clauses. To improve the efficiency of the non-revision method, this paper proposes an algorithm for hypothesis extension in propositional logic. According to this method, the hypothesis extension is a finite set of clauses, and it is logically equivalent to the existing hypothesis extension. The experimental results are consistent with the theoretical results.

Introduction
Common sense reasoning is reasoning with limited information. Non-monotonic logic [1] is an effective method to deal with reasoning by default; it is therefore the most important logical tool in common sense reasoning.

At present, there are two main approaches to the problems of non-monotonic logic in common sense reasoning: default logic and belief revision. There are many important research results in the field of common sense reasoning. For the first approach, McCarthy proposed the circumscription (restrictive reasoning) method [2][3]. In addition, Sandewall [4] proposed a method that introduced the operator UNLESS into the first approach. Later, research on non-monotonicity became increasingly important in many fields and produced many results. For example, Reiter proposed default logic [5]; McDermott and Doyle proposed non-monotonic logic [6] and a related theory based on modal logic [7]. A few years later, the equivalence relation between default logic and auto-epistemic logic [8] was proved by Konolige. In the mid-1980s, Alchourron, Gardenfors and Makinson proposed the famous theory of belief revision [9], commonly known as AGM [10].

The belief revision method has been widely studied in common sense reasoning. However, some useful information may be lost, or the result may be unexpected, during the process of revision. Therefore, the belief non-revision method [11] has been proposed: it achieves a consistent set by limiting the process of reasoning, and it does not change the original database. This paper proposes a hypothesis extension algorithm based on the belief non-revision method. It provides the definition of hypothesis extension in propositional logic, and the method of primitive implication is adopted in the algorithm. We will show that each hypothesis has a unique extension. The experimental results demonstrate that the process of reasoning is correct.

The remainder of the paper is structured as follows. In Section 2, the theoretical basis is reviewed. Section 3 introduces the extension of hypothesis in propositional logic in detail and gives the results of the experiment. Section 4 concludes the paper with a short discussion of future work.

Theoretical basis

2.1 The definitions of propositional logic
In this paper, the hypothesis is limited to propositional logic. The hypothesis extension is defined with the general resolution principle.

Definition 1. A proposition is a statement that is determinately either true or false. Propositions are divided into atomic propositions and complex propositions. An atomic proposition is indecomposable and does not contain any logical connective.
  , , , , , , 1 The logical connector is shown as the follows: , , , , The complex proposition consist of some atomic propositions by connecting with the logical connector, e.g.,   Definition 2 If A is a formula, then A must satisfies one of the following criteria: (1)'0' and '1' are formulas. (2) A atomic proposition is a formula. (3) If , P Q are formulas, then The symbol string could be any limited combination of (1), (2) The resolution principle The resolution principle is inference rule.It means that a clause can be derived by two clauses. Definition 5 For two clauses 1 S and 2 S , there is a literal 1 l in the clause 1 S , there is another literal 2 l in the , l l and connect the remaining part with the logical connector.The resolution of 1 2 , Example 3 If , A B are two clauses, and C is the The primitive implication In the 1950s, Quine [12] proposed a mechanized procedures.It can achieve the minimalist equivalent type of a formula.Actually, this is the original definition of the primitive implication.This problem is related to many fields of research.For example, the minimization of Boolean function [13], the update problem of logical database [14], finding the minimum support set in the truth maintenance system and so on. .If a disjunctive form C is the primitive implication of a theory T , then it must satisfies the following properties: (1) C is not a tautology. ( ( The definition of hypothesis extension This paper gives an algorithm of hypothesis extension in propositional logic.In this section, the relative definitions are introduced. Definition 10 Let are clauses.Γ is a hypothesis and Γ may be inconsistent. Definition 11 Let Γ be a hypothesis.C is a clause but it is not empty.There is satisfies the following properties: (1)Let satisfies the following conditions:  There are clauses , , According to the definition, the attribute set is a set of all results of reasoning.Definition 12 Let Γ be a hypothesis.If a clause is empty, then should satisfies the following properties: (1) Theorem 1 If Γ is a hypothesis, then the set of reconstruction   R Γ is consistent.(the details can be seen in [16]) ( (3) There is not exist a formula j C and According to the above definitions,   PI Γ is the primitive implication of Γ .Definition 14 If Γ is a hypothesis, then the , , Hypothesis extension algorithm PI Γ is empty according to the definition of the primitive implication.However, Γ may be inconsistent in this paper.Therefore, the process of hypothesis extension is divided into three steps.First, the attribute set   Con Γ can be achieved by using the resolution principle.Second, the set of reconstruction   Γ can be achieved by deleting the conflicts in   Con Γ .Finally, due to   R Γ is consistent, we can achieve the hypothesis extension But in the process of the resolution, we found that some operations are redundant because the results of different resolutions may be the same. Therefore, the method Tison [17] have been adopted.This method is initially used for structure the primitive implication .It can optimizes the process of the resolution.And it also can improves the efficiency of the resolution. The method Tison can be understood through the following example: The ordinary resolution: The method Tision puts forwards a point.If the process of the resolution is executed according to the order of the literals, then the resolution can avoids redundancy (let the order be A, B, D ) . 
Ordered in this way, the method of Tison clearly requires fewer steps than the ordinary resolution.

Algorithm
The set of beliefs is denoted by Gamma. From Gamma the algorithm computes the literal set, the attribute set Con(Gamma), and finally the hypothesis extension PI(R(Gamma)), following the three steps described above.

Experimental case
The experiments cover the following four situations: (1) the hypothesis is consistent and contains no inclusion (subsumption) relation; (2) the hypothesis is inconsistent and contains no inclusion relation; (3) the hypothesis is consistent and contains an inclusion relation; (4) the hypothesis is inconsistent and contains an inclusion relation. The experiments show that the experimental results are consistent with the theoretical results, and the algorithm achieves a good effect: no important information is lost. During the experiments we found that the running time increases as the amount of data increases; however, this is inevitable. In general, the algorithm is accurate and comprehensive.

Conclusions
The hypothesis extension has many good mathematical properties based on the non-revision method. The process of cognition described in this paper is convergent, which shows that the non-revision method conforms to the process of cognition and is an effective solution to non-monotonic reasoning in practice. In order to improve the application value of the non-revision method, this paper redefined the hypothesis extension based on the primitive implication. In the future, the research could be extended to And/Or clauses and to first-order logic.

Definition 6. Let C be a clause in propositional logic. The set of all literals of C is denoted by literal(C), and |literal(C)| is the cardinality of C.
Definition 7. Let Ci and Cj be two non-empty clauses. If literal(Cj) is a subset of literal(Ci), then the clause Cj subsumes the clause Ci.
Definition 8. The set of primitive implications of a theory T is denoted by PI(T); for every C in PI(T) there is no clause C' such that C' is a logical consequence of T and C' is a proper part of C.
Definition 9. If Gamma is a consistent set of clauses, then Gamma and PI(Gamma) are logically equivalent (the process of proof can be seen in [15]).
2018-11-30T16:38:21.979Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "31a91e106f7f6731cf40ff923b08b6e56f74c2cf", "oa_license": "CCBY", "oa_url": "https://www.itm-conferences.org/articles/itmconf/pdf/2016/02/itmconf_ita2016_06001.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "31a91e106f7f6731cf40ff923b08b6e56f74c2cf", "s2fieldsofstudy": [ "Philosophy" ], "extfieldsofstudy": [ "Mathematics" ] }
3613364
pes2o/s2orc
v3-fos-license
A Randomized Study of the Relative Pharmacokinetics, Pharmacodynamics, and Safety of Alirocumab, a Fully Human Monoclonal Antibody to PCSK9, After Single Subcutaneous Administration at Three Different Injection Sites in Healthy Subjects Aims We investigated the relative pharmacokinetics, pharmacodynamics, and safety of the proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitor alirocumab following injection at three different sites. Methods Sixty healthy subjects (39 male, 21 female; age 20–45 years) were randomized to receive a single subcutaneous injection of alirocumab 75 mg via 1-mL prefilled pen into the abdomen, upper arm, or thigh (NCT01785329). Subjects were followed for 85 days ± 2 days following study drug administration. Pharmacokinetic (PK) parameters for the systemic exposure of alirocumab were calculated, and levels of free PCSK9 were assessed. Percentage changes from baseline in LDL-C were compared between injection site groups using linear mixed-effects models. Results Alirocumab concentration–time profiles were similar, and free PCSK9 levels were reduced to approximately zero between Day 3 and Day 4 postinjection in all groups. LDL-C levels reached nadir on Day 15 postinjection in all groups with mean percentage reductions of 48.4% (abdomen), 39.5% (upper arm), and 45.6% (thigh) at this time point. A similar effect on LDL-C levels was seen across the entire time course of the study at all three injection sites. Treatment-emergent adverse events were experienced by 8/20 (abdomen), 11/20 (upper arm), and 13/20 (thigh) subjects. There were 2 mild/transient injection site reactions. There were no serious adverse events. Discussion A single subcutaneous administration of alirocumab 75 mg via prefilled pen was well tolerated with similar pharmacokinetics and pharmacodynamics when injected into the abdomen, upper arm, or thigh. Conclusion These results suggest that alirocumab can be interchangeably injected in the abdomen, upper arm, or thigh. Introduction Proprotein convertase subtilisin/kexin type 9 (PCSK9) is a protease that mediates degradation of low-density lipoprotein (LDL) receptors [1]. By its effect of increasing the numbers of LDL receptors, inhibition of PCSK9 is being investigated as a means of reducing levels of LDL cholesterol (LDL-C). Alirocumab is a fully human monoclonal antibody that specifically binds to and inhibits PCSK9. In Phase 2 studies, alirocumab administered every 2 weeks at a dose of 150 mg reduced LDL-C by up to 72% when combined with statins AE ezetimibe, with the most common treatment-emergent adverse event (TEAE) being transient injection site reactions of mild intensity and short duration [2][3][4]. In these studies, all patients received alirocumab injections in the abdomen; however, patients may prefer to use different injection sites. Here, we report the relative pharmacokinetics (PK), pharmacodynamics (PD), and safety of alirocumab after single subcutaneous (SC) administration of 75 mg into the abdomen, upper arm, and thigh of healthy subjects. Methods Study Design and Population (2.46 mmol/L) not receiving background lipid-lowering therapy. The study was conducted at the Hammersmith Medicines Research Clinical Research Unit in London, UK (NCT01785329). The protocol was approved by the Scotland A Research Ethics Committee, Edinburgh, Scotland, and written informed consent was obtained from all participants. 
Subjects were randomized to one of the three parallel groups and received a single 75 mg dose of alirocumab SC via 1-mL prefilled pen at one of the three distinct sites (abdomen, upper arm, and thigh) in the morning on Day 1. Samples for PK and PD analyses (including free PCSK9 and LDL-C assessments) were collected following a 10-h fast predose on Day 1, and at various time points up to Day 85 (AE2 days, end of the study). The primary objective was to compare the relative PK of a single SC dose of alirocumab 75 mg administered at three different injection sites in healthy subjects. Additional objectives included assessments of the effect of a single SC dose of alirocumab on serum LDL-C, other lipid parameters, free PCSK9 levels, and safety. Alirocumab and free PCSK9 serum concentrations were determined using validated enzyme-linked immunosorbent assays with lower limits of quantification (LLOQ) of 78 and 31.2 ng/mL, respectively. PK parameters for the systemic exposure of alirocumab, calculated using noncompartmental methods, included maximum serum concentration (C max ), area under the serum concentration versus time curve (AUC), and AUC from time zero to time of last concentration above LLOQ (AUC last ). LDL-C was calculated using the Friedewald formula [5]. Total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), triglycerides (TGs), apolipoprotein (apo) B, and apoA1 were measured directly. Safety assessments included TEAEs, especially local tolerability (injection site reactions). TEAEs were defined as any AE occurring from the time of alirocumab administration up to the end of the study visit. Statistical Analyses A sample size of 20 subjects per group was calculated to be sufficient to obtain an estimate for the ratio of PK parameters between groups with a maximum imprecision of 19.7% and 90% assurance in terms of the 90% confidence interval (CI), and assuming a maximum standard deviation (SD) of 0.35 for log-transformed PK parameters based on previous experience with alirocumab. PK parameters were log-transformed prior to statistical analysis with PKDMS (PKDMS version 2.0 incorporating WinNonlin Professional, version 5.2.1; Pharsight [now Certara, St. Louis, MO, USA]) and SAS â (version 9.2 on Windows platform; SAS Institute, Cary, NC, USA). Relative PK of systemic exposure between injection sites was assessed using a linear fixed-effects model with terms for injection site, gender, and weight as covariate. Ratios of geometric means for C max , AUC last , and AUC were obtained by computing estimates and 90% CIs for the differences between injection sites means within the linear mixed-effects model framework, then converting to ratios by antilog transformation. Percentage changes from baseline for each PD parameter were compared between each injection site group at each time point using a linear mixed-effects model (SAS Proc Mixed â ; SAS Institute) to obtain P-values for the interaction effect between injection site and PD parameter. Safety data were analyzed using descriptive statistics. Subjects In total, 60 subjects were randomized (20 per group), and all completed the study. Baseline characteristics, including mean LDL-C and free PCSK9 levels, were similar across the three groups (Table 1). Relative Pharmacokinetics Alirocumab serum concentration-time profiles were similar among the three injection sites, with C max of 8.18, 6.77, and 7.13 mg/L and AUC of 129, 130, and 115 mg day/L for the abdomen, upper arm, and thigh groups, respectively ( Figure 1A). 
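The noncompartmental parameters named in the Methods (Cmax, AUC, AUClast) were derived in the study itself with WinNonlin; purely as an illustration of what such a calculation involves, the following minimal sketch computes the same kind of quantities with NumPy from a placeholder concentration-time profile. The values below are invented for the example and are not the study data.

```python
import numpy as np

# Placeholder concentration-time profile (days, mg/L); not the study data.
t = np.array([0, 1, 3, 7, 14, 21, 28, 42, 56], dtype=float)
c = np.array([0.0, 3.1, 7.0, 6.2, 3.9, 2.1, 1.1, 0.3, 0.1])

cmax = c.max()
tmax = t[c.argmax()]
auc_last = np.trapz(c, t)                  # linear trapezoidal rule up to the last sample

# Terminal rate constant from a log-linear fit of the last few nonzero points
tail = slice(-4, None)
lam_z = -np.polyfit(t[tail], np.log(c[tail]), 1)[0]
t_half = np.log(2) / lam_z
auc_inf = auc_last + c[-1] / lam_z         # extrapolation to infinity

print(f"Cmax={cmax:.1f} mg/L, Tmax={tmax:.0f} d, AUC_last={auc_last:.0f} mg*day/L, t1/2={t_half:.1f} d")
```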
The ratios of point estimates between upper arm versus abdomen injection site groups showed a decrease of 21% for C max and 8% for AUC and AUC last ( Table 2). Comparing thigh versus abdomen, a difference of 12% was observed for C max and 16% for AUC and AUC last . For upper arm versus thigh, a 10% difference was observed for C max (90% CI: 0.76-1.06), whereas a 9% greater difference was observed for AUC and AUC last . Median time to reach C max (t max ) was 2.96, 6.95, and 3.06 days in the abdomen, upper arm, and thigh, respectively (Table 3). Despite the higher median t max value for the upper arm, with high variability being observed for the distribution of t max between the treatment groups, the time-course curves for upper arm and thigh were very similar ( Figure 1A). Elimination of alirocumab resulted in mean residence time of 11.6-13.5 days and mean half-life of 5.77-6.66 days. Pharmacodynamics Maximal reduction of mean free PCSK9 was observed between Day 3 and Day 4 in all groups, with mean values at zero or close to zero ( Figure 1B). After this suppression, serum concentrations of free PCSK9 started to gradually increase and returned within the baseline range by the end of the study. The PD effects of alirocumab on LDL-C ( Figure 1C) were similar for all three injection site groups. LDL-C declined reaching a nadir on Day 15 in all groups. At this time point, the percentage decrease in LDL-C was 48.4% in the abdomen group, 39.5% in the upper arm group, and 45.6% in the thigh group ( Figure 1C). The effect on LDL-C levels was similar across the entire time course at all three injection sites (P-value of the effect of injection site groups = 0.403). There was a trend for a slightly smaller PD effect in the upper arm group observed at nadir values for LDL-C, but not for the overall time course. The PD effects of alirocumab on apoB and non-HDL-C were consistent with LDL-C (Supporting information Figure S1). Safety TEAEs were experienced in 8 (40%), 11 (55%), and 13 (65%) subjects in the abdomen, upper arm, and thigh groups, respectively. The most common TEAEs in all groups were nasopharyngitis, which occurred in two (10%), one (5%), and six (30%) subjects, and headache in two (10%), four (20%), and five (25%) subjects in the abdomen, upper arm, and thigh groups, respectively (Table 4). Only two local injection site reactions were reported (pain and discoloration); both occurred in the thigh group and were transient and of mild intensity. There were no serious AEs (SAEs) or TEAEs of severe intensity. Discussion The concentration-time profiles for alirocumab after a single SC injection of 75 mg into the abdomen, upper arm, and thigh were similar in this population of healthy subjects. There was a slight trend for lower exposure in the upper arm and thigh compared with the abdomen. The observed mean half-life of 5.8-6.7 days was consistent with previous estimates of 5.6-8.8 days with single ascending SC doses of alirocumab [6]. Subcutaneously administered alirocumab rapidly bound to and reduced circulating free PCSK9, reaching a nadir close to zero between Day 3 and Day 4 in all injection site groups. This was followed by a decrease in LDL-C with maximal reduction on Day 15 in all groups. 
The dynamics between a single alirocumab 75 mg dose, free PCSK9, and LDL-C observed in this study at each injection site are in agreement with the findings of a single ascending dose study in healthy subjects in which a SC injection of alirocumab into the abdomen resulted in reductions in free PCSK9 levels within 3 days of dosing and peak reductions in LDL-C 8-15 days after dosing [7,8]. Additionally, in a Phase 3 monotherapy study, alirocumab 75 mg every 2 weeks produced sustained LDL-C reductions over 12 weeks of treatment (least square mean reduction of 53.2% from baseline at Week 12) [9,10]. During the study, no SAEs or TEAEs of severe intensity were reported, as expected based on the data observed in Phase 2 and Phase 3 studies to date [2][3][4]10]. A prefilled pen was used to deliver the single alirocumab 75 mg dose as a 1-mL SC injection. Injection site reactions were infrequent, with only two reports of mild and transient events in the group of subjects receiving the injection in the thigh. Overall, a single administration of alirocumab 75 mg by SC route delivered via prefilled pen into the abdomen, upper arm, or thigh was well tolerated and presented similar PK and PD profiles regardless of injection site. Our findings suggest that alirocumab could be interchangeably injected in the abdomen, upper arm, or thigh offering patients' flexibility in choice of injection site.
2018-04-03T01:12:54.915Z
2014-11-24T00:00:00.000
{ "year": 2014, "sha1": "c4d77df475a00764ecd04ff96192c3eba9f0044f", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1755-5922.12093", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "c4d77df475a00764ecd04ff96192c3eba9f0044f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
125358217
pes2o/s2orc
v3-fos-license
EXPERIMENTAL STUDY TO COMPARE THE EFFECT OF SUBOCCIPITAL CRANIAL BASE RELEASE AND SUBOCCIPITAL FASCIAL SCRAPING ON THE EXTENSIBILITY OF OTHER SEGMENTS OF POSTERIOR KINETIC CHAIN

*1 Post Graduation Student of Physiotherapy, Sardar Bhagwan Singh Post Graduate Institute Of Biomedical Sciences And Research, Balawala, Dehradun, India. 2 Head of Department, Department of Physiotherapy, Sardar Bhagwan Singh Postgraduate Institute Of Biomedical Sciences And Research, Balawala, Dehradun, India. 3 Assistant Professor, Department of Physiotherapy, Sardar Bhagwan Singh Postgraduate Institute Of Biomedical Sciences And Research, Balawala, Dehradun, India.

INTRODUCTION
The human posture is determined by muscular chains, fascia, ligaments and bony structures, which are interconnected. Any dysfunction in any of these structures can lead to a postural disequilibrium, and some initial tension can cause a sequence of combined tension [1]. The kinetic chain is often thought of as each joint in the body being like a link in a chain; it is this chain of systems, linked together, that creates human movement. Thus, muscle and fascia are functionally linked [2]. Myers and Stecco describe models of myofascial trains crossing the entire body, linking head to toes and centre to periphery [3]. These are lines of pull based on standard western anatomy, which transmit strain and movement through the body's myofascia around the skeleton [4]. As stated by Myers, muscles never attach directly to bone: their movement pulls on fascia, the fascia is attached to the periosteum, and the periosteum pulls on bone. Therefore, if one of the structures within a meridian develops tension, that tension will be distributed along the entire myofascial continuum. The superficial back line is a continuity of fascial fabric; it connects and protects the posterior surface of the body like a carapace, from the bottom of the toes around the heel and up the back of the body, crossing over the head to its terminus at the frontal ridge at the eyebrows [4].

The continuity of the neural system theoretically links the dura mater, which anatomically inserts into the suboccipital muscles (particularly the rectus capitis posterior minor muscle), and the hamstring musculature. It has been reported that limited flexibility of the hamstring muscles provokes reduced pelvic mobility, disturbing the distribution of pressures in the spine, altering the lumbar curve, causing compensatory movement patterns of the lumbar spine, and subsequently increasing stress on the spinal soft tissues [5]. Given the connection of the suboccipital muscles with the dura mater and the presence of myofascial chains that link the connective tissue fascia and muscles along specific lines in the body, it is important to study the effect of treatment both locally, in the region where treatment takes place, and globally, in distant regions [6].
Myofascial release is a hands-on soft tissue technique that facilitates stretch of the restricted fascia [7]. Fascial scraping is a fascial release technique performed with a scraper tool; through stimulation of specific areas it produces local therapeutic effects, restores organic function, and breaks fascial adhesions. Hence, myofascial release techniques can be used to release restricted or pathological fascial structures [8]. If a specific structural element of a myofascial chain is affected, the other structures in that chain are also affected and need to be treated to return to full activity. Thus the anatomy trains map out sets of "sausage links" within the body; the idea is that, especially with postural habits and long-term sequelae of injury, strain communicates along these longitudinal lines from one muscle to another [9].
MATERIALS AND METHODS
30 subjects were selected on the basis of inclusion and exclusion criteria. Subjects with dysfunction in the posterior kinetic chain, as evidenced by tightness and decreased exertion on the forward bending toe touch test, were recruited from the accessible population. Prior to the study, the procedure was explained to the subjects and informed consent was obtained from each subject. The subjects were randomly divided into two groups. Readings and measurements were taken on the first day before the intervention and after one week of intervention. Suboccipital muscle length was measured using an inch tape, lumbar ROM was measured using Schober's method, pelvic tilt was measured with an inclinometer, and hamstring length was measured by the straight leg raise test. Group A (n=15) received myofascial release through cranial base release and Group B (n=15) received myofascial release through fascial scraping. Cranial base release was given with the stretch maintained for 90-120 seconds, for a total of 3-5 minutes. Fascial scraping was given by scraping the fascia along the suboccipital muscles with the fascial scraping tool for 30-40 seconds, with a nodding movement, before any skin colour changes were seen.
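The statistical comparison behind the results reported below is not spelled out in the article; the short Python sketch that follows shows one plausible analysis for this two-group pre/post design (paired t-tests within each group and an independent t-test on the pre-to-post changes between groups). The variable names are hypothetical and the choice of tests is an assumption, not the authors' documented procedure.

# One plausible analysis for the pre/post design; test choice and names are assumed.
from scipy import stats

def within_group(pre, post):
    # Paired t-test of post- vs. pre-intervention values within one group
    t, p = stats.ttest_rel(post, pre)
    mean_change = sum(post) / len(post) - sum(pre) / len(pre)
    return {"mean_change": mean_change, "t": t, "p": p}

def between_groups(pre_a, post_a, pre_b, post_b):
    # Independent t-test comparing the pre-to-post changes of Group A and Group B
    delta_a = [b - a for a, b in zip(pre_a, post_a)]
    delta_b = [b - a for a, b in zip(pre_b, post_b)]
    t, p = stats.ttest_ind(delta_a, delta_b)
    return {"t": t, "p": p}

# Usage with hypothetical per-subject measurements (e.g. straight leg raise in degrees):
# within_group(slr_pre_group_a, slr_post_group_a)
# between_groups(slr_pre_group_a, slr_post_group_a, slr_pre_group_b, slr_post_group_b)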
DISCUSSION
The analysis revealed that both techniques of myofascial release (cranial base release and fascial scraping) are equally effective. At present, limited research has been undertaken to determine the effectiveness of fascial scraping. The fascial scraping tool helps in breaking up scar tissue between muscle layers; this increases the rate and amount of blood flow in and around the area, which initiates and promotes the healing process of the affected soft tissues. Various authors have carried out numerous studies on the fascial links. Herting and Kessler described the myofascial connection by explaining that the connective tissue is a continuous substance throughout the body [10]. Benjamin (2009) stated that the innervation of deep fascia should be considered in relation to its association with muscle [11]. Gary Fryer stated that cervical isometric contract-relax treatment produced a significant effect on the extensibility of the hamstrings [12]. Pollard and Ward compared two techniques, a suboccipital muscle contraction-relaxation technique and contraction-relaxation of the hamstring muscles; using the SLR test as an outcome measure, they showed a significant improvement in the cervical-zone intervention group through an increase in hip flexion range of motion. Mohamed Eldesoky and Enas Abutaleb (2015) stated that there is an anatomical relationship between the pelvis and the lumbar spine: lumbar spine posture depends on pelvic alignment, changes in the inclination of the pelvis affect the degree of lumbar lordosis and lumbar range of motion, and thus foot posture alteration can produce an effect in the pelvis and spine [13]. The pelvis, an important segment situated in the center of the body, connects the upper body to the lower limbs, and pelvic position has been found to correlate highly with lumbar position [14]. Rockey and Marie (2008) demonstrated a significant relationship between pelvic inclination, hamstring extensibility and hamstring muscle strength. McPartland noted the presence of the myodural bridge connecting the rectus capitis posterior minor muscles to the dura mater; myofascial chains link the connective tissue fascia and muscles along specific lines in the body [6]. The findings showed that treating the suboccipital muscles for the dysfunction of other components of the myokinetic chain was effective. Jan Wilke (2016) concluded that the muscles of the human body are regarded as part of a tensegrity-like, body-wide network, with fascial structures acting as linking components [15]. Any restrictions in the connective tissues will affect the ability of the musculoskeletal system to function efficiently. Muscles are thoroughly intertwined with and surrounded by fascia, which explains why fascia influences muscle length and function when dysfunction occurs in the fascia [7]. Yucesoy, Koopman, Grootenboer, and Huijing found that the transmission of force along the myofascia plays an important role in muscle functioning, muscle length and the amount of force the muscles can generate [16]. When fascia in one area is stretched, it can cause tightness, restriction and pain in another part of the body [17]. A muscle-fascial chain is a group of muscles that are connected through the fascia and are longitudinally positioned in the human body; they run in the same direction and overlap in a continuous chain, which efficiently conducts tension, so all of the muscles in the chain are mutually dependent and behave as if they were a single muscle [18]. Thomas Myers referred to the muscular chains as anatomy trains because of their continuity: in a myofascial meridian one muscle is attached to the next. This fascial train starts at the plantar fascia of the feet, continues along the posterior side of the body and finishes at the brow line; the bony stations are the plantar surface of the toe phalanges, calcaneus, condyles of the femur, ischial tuberosity, sacrum, occipital ridge and frontal bone. The Superficial Back Line (SBL) connects and protects the entire posterior surface of the body from the bottom of the foot to the top of the head in two pieces, toes to knees and knees to brow. When the knees are extended, as in standing, the SBL functions as one continuous line of integrated myofascia [4]. If fascia becomes tight or restricted at one point in the superficial back line, then as a result of the continuity of fascia through the body this initial disruption will influence all structures the fascia is attached to and continuous with. In our study, suboccipital muscle length, lumbar range of motion, pelvic tilt and hamstring length were measured to check how tension in one part of the chain can affect other components of the meridian. Myers found that localized injury to any component would transmit tension along this line [4].
In our present study, extensibility of the suboccipitals, lumbar range of motion, pelvic tilt and hamstring length were the outcome measures: myofascial release was applied to the suboccipital muscles and its effect was observed on suboccipital length, lumbar range of motion, pelvic tilt and hamstring length. When myofascial release was given to one segment, i.e., the tight suboccipital muscles, there was an increase in extensibility of the other components. This can be explained on the basis of myofascial continuity: since fascia is continuous, removing a restriction releases tension throughout the fascial system. It may also be explained by the principle of tensegrity, which states that an increase in tonus of one element of a structure causes an analogous increase in tension of the other elements remaining in mutual structural contact [13]. The present study suggests a new approach to the treatment of dysfunction of more than one component via a suboccipital muscle inhibition technique and encourages further investigation of the remote effects of cervical treatment, supporting authors who concluded that manual therapy of the neck may have a role to play in the treatment of extra-spinal lower-limb musculoskeletal conditions. Thus this study suggests that myofascial chains can provide a biomechanical explanation for the effectiveness of myofascial treatments in musculoskeletal dysfunctions. They can serve as a guide for interpreting pain distribution and also as a topographical map for choosing specific, key areas for effective treatment. A characteristic of this method is that it evaluates and treats points at a distance from the region where subjects experience their pain.
In this experimental study, 30 individuals with tightness or dysfunction in the posterior kinetic chain were selected. 15 individuals received a myofascial release technique by cranial base release and 15 individuals received a myofascial release technique by fascial scraping of the suboccipital muscles. The study reveals that myofascial release (cranial base release and fascial scraping) given to one segment helped the extensibility of other segments of the kinetic chain.
Table 1: Mean and Standard Deviation of pre and post values within Group A (CBR).
Table 2: Mean and Standard Deviation of pre and post values within Group B (Fascial Scraping).
Graph 1: Mean and Standard Deviation of pre and post values of suboccipital length within Group A (CBR).
Graph 2: Mean and Standard Deviation of pre and post values of lumbar ROM within Group A (CBR).
Graph 3: Mean and Standard Deviation of pre and post values of standing pelvic tilt angle within Group A (CBR).
Graph 4: Mean and Standard Deviation of pre and post values of straight leg raise within Group A (CBR).
Graph 5: Mean and Standard Deviation of pre and post values of suboccipital length within Group B (Fascial Scraping).
Graph 6: Mean and Standard Deviation of pre and post values of lumbar ROM within Group B (Fascial Scraping).
Graph 7: Mean and Standard Deviation of pre and post values of standing pelvic tilt angle within Group B (Fascial Scraping).
Psychological Status and Associated Factors During the Lockdown Period of the COVID-19 Epidemic in China: A Web-Based Survey
Background: The aim of this study was to survey the general public in China to better understand their psychological state and its influencing factors after the Wuhan shutdown on 23 January. Methods: A survey was conducted on Feb 20-24 using an online self-administrated questionnaire among 4071 participants. Data on subjective indicators of daily-life change were collected, and individual scores on changes in anxiety, depression, and stress were generated from 8-item, 11-item, and 6-item question sets. After bivariate analyses, multiple linear regression analyses were conducted to investigate independent associations between socio-demographic variables, subjective indicators of changes in daily life, and summary scores including anxiety, depression, and stress scores. Results: Information from 3803 participants was available for analysis. Multivariable regression analyses showed that the anxiety (B=-1.27, 95%CI=-1.71 to -0.82), depression (B=-1.47, 95%CI=-2.06 to -0.88), and stress (B=-0.79, 95%CI=-1.13 to -0.46) scores of people in rural areas were lower than those in urban areas. Living in regions other than Hubei and higher education were independent correlates of fewer negative emotions, while people with relatively high incomes had poorer psychological status in anxiety (B=0.73, 95%CI=0.08 to 1.38), depression (B=1.45, 95%CI=0.60 to 2.30) and stress (B=0.65, 95%CI=0.17 to 1.13). Married people were less anxious (B=-0.67, 95%CI=-1.30 to -0.05), depressed (B=-1.14, 95%CI=-1.96 to -0.33), and stressed (B=-0.47, 95%CI=-0.93 to 0.00) than single people. The level of attention, self-assessed infection risk, impact on daily life and mental-health help-seeking tended to be positively associated with the scores of anxiety, depression, and stress (p<0.001). Conclusions: Usual residence, education, marital status, monthly income, level of attention, self-assessed infection risk, impact on daily life and mental-health help-seeking are important correlates of the anxiety, depression, and stress scores.
Background
Being highly contagious, COVID-19 triggered a national epidemic and spread rapidly to many countries worldwide [1]. In the wake of this global health crisis, stringent public health measures have been implemented to curtail the spread of COVID-19. In China, in order to prevent further spread, a lockdown was imposed on Wuhan on 23 January, with travel restrictions, followed by the entire Hubei Province a day later. In addition, the government took further measures to prevent dispersal, including closing entertainment venues, cancelling gatherings, extending the Chinese New Year holidays, requiring people to wear masks in public, and limiting the number and frequency of outings per household. The Chinese approach of actively treating infected patients, protecting susceptible populations, and cutting off transmission routes proved to be effective and prevented at least 700,000 cases of COVID-19 [2]. However, the outbreak itself and the measures taken to combat the epidemic could lead to widespread fear and panic, which may escalate into further negative psychological reactions including adjustment disorder and depression. With schools and businesses closed, the negative emotions experienced by individuals become more complicated [3]. At the same time, as most residents are restricted to their homes, they face a great deal of negative news every day, which may lead to a psychological crisis.
Previous evidence showed that quarantine and isolation of patients led to widespread fear and panic, resulting in negative psychological reactions including adjustment disorder and depression [4][5][6][7][8]. One recent study pointed to an increase in psychological problems in this epidemic, including anxiety, depression, and stress [9]. Two studies noted an increase of psychological problems during the epidemic and emphasized that attention should be paid to the mental health of specific groups such as children, older adults, patients, and medical staff [10,11]. However, no data are available examining the psychological impact of COVID-19 on the general population in China one month after the Wuhan shutdown. Therefore, we conducted this survey to investigate residents' changes in life and psychological condition one month after the Wuhan lockdown. This may assist government agencies and healthcare professionals in safeguarding the psychological wellbeing of the community in the face of COVID-19 outbreak expansion in China and different parts of the world.
Study participants
We conducted an online survey one month (Feb 20 to 24) after the shutdown of Wuhan (Jan 23) and Hubei province (Jan 25) against the spread of COVID-19, and we received 4071 anonymous questionnaires, covering 33 Chinese provinces and autonomous regions except Taiwan. Inclusion criteria: 1. male or female, ages 15-85 years; 2. participants must have the capacity to understand the study and provide informed consent; 3. participants must be fluent in Chinese; 4. participants currently live in China. Exclusion criteria: 1. serious neurological (specific or focal) disorders preventing full participation in the protocol; 2. illogical cases in the questionnaire, for example selecting the same option consecutively or giving widely varying answers to similar items. After eliminating the invalid samples, 3803 (93.42%) valid questionnaires were finally obtained. This study was approved by the research ethics committee of Wuhan University. All participants provided informed consent. Based on investigations of psychological states after disasters in China and compiled after discussion with experts, the self-administrated questionnaire was divided into three parts.
Changes in psychological status
In the study, 21 feeling items were used to measure changes in psychological status, including sorrow, fear, tiredness, irritability, loneliness, sleep condition, self-perceived uselessness, weight, appetite, chest tightness, feeling disturbed, muscle ache and others (see details in the Supplementary Materials). We rated these items in a 5-point response format ("-2 = significantly decreased", "-1 = decreased", "0 = unchanged", "1 = increased", or "2 = significantly increased"). Sorting the items and calculating the scores allows the changes in residents' psychological status to be analyzed in a more targeted manner. Therefore, literature review and expert interviews were first used to construct the index system. According to the literature [12][13][14], the 21 feeling items in the self-made questionnaire were classified into three categories: anxiety, depression, and stress. The total scores were calculated by simple addition based on the extent of the feeling.
A negative score indicated that the negative emotions of the participant decreased compared to the previous week, whereas a positive score indicated that the negative emotions increased. The higher the score, the worse the psychological condition. An additional file shows the questionnaire in more detail [see Additional file 1]. The reliability of the questionnaire was checked using Cronbach's alpha, and the reliability coefficient was 0.958.
Subjective indicators of changes in daily life
The status of daily life of residents after the Wuhan shutdown comprised level of attention, self-assessed infection risk, impact on daily life, self-perceived health status, mental-health help-seeking and satisfaction with community work. The first 3 items were rated as "1 = decreased", "2 = unchanged", "3 = increased"; self-perceived health status was rated as "1 = good/very good", "2 = average", "3 = bad/very bad"; mental-health help-seeking was rated as "1 = found and tried", "2 = found but not tried", "3 = not found yet", "4 = not looked for", "5 = no need to adjust"; and satisfaction with community work was rated as "1 = satisfied", "2 = general", "3 = unsatisfied".
Statistical analysis
Data were double-entered and cross-checked using Excel version 2019 (Microsoft Corp.; Redmond, USA), R 3.6.2 was used for data cleaning, SPSS 25.0 was used to conduct the corresponding statistical analysis, and a two-sided p value less than 0.05 was considered statistically significant. To identify the determinants of participants' psychological feelings, we first examined the effects of their characteristics on changes in anxiety, depression, and stress scores with one-way analysis of variance (ANOVA) or the nonparametric Kruskal-Wallis test for categorical variables, depending on the distribution of the variables. The statistically significant variables were then entered into the multiple linear regression model, and dummy variables were created when appropriate. Dichotomous variables were explored for the regression analysis in order to simplify the relationships. A series of multiple linear regression analyses (stepwise method) were used to investigate the independent associations between socio-demographics, subjective indicators of changes in daily life, and summary scores including anxiety, depression, and stress scores, after checking the assumptions of distribution and independence of the residuals as well as multicollinearity. Normality was assessed by visual inspection of the P-P plot. Linearity and homoscedasticity were investigated by visual inspection of the plot of predicted values and standardized residuals. A variance inflation factor (VIF) greater than 10 was used to identify possible multicollinearity among independent variables. Univariate analysis showed that participants with different place of living, usual residence, monthly income, and whether there were diagnosed patients in their relationship network had significant differences in the scores of anxiety, depression and stress (Table 1). In addition, the six self-perception indicators, that is, level of attention, self-assessed infection risk, impact on daily life, self-perceived health status, mental-health help-seeking and satisfaction with community work, also showed significant differences in the scores of the three psychological conditions.
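To make the scoring and modelling steps described above concrete, a minimal sketch in Python follows; it sums the 5-point item responses into a subscale score, computes Cronbach's alpha, and fits one linear regression with explicitly chosen reference categories plus a VIF check. The column names and the abbreviated item list are hypothetical placeholders, and the sketch is an illustration of the described procedure, not the authors' actual SPSS/R analysis.

# Sketch of the scoring and regression steps; item/column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

ANXIETY_ITEMS = ["fear", "irritability", "chest_tightness"]  # abbreviated; the paper uses 8 items

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Cronbach's alpha for a block of items scored -2..+2
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def fit_anxiety_model(df: pd.DataFrame):
    # Subscale score by simple addition of item scores, then OLS with reference categories
    df = df.copy()
    df["anxiety"] = df[ANXIETY_ITEMS].sum(axis=1)
    model = smf.ols(
        "anxiety ~ C(place, Treatment(reference='urban'))"
        " + C(gender, Treatment(reference='male'))"
        " + C(marital, Treatment(reference='single'))",
        data=df,
    ).fit()
    exog = model.model.exog
    # A variance inflation factor above 10 would flag multicollinearity
    vifs = [variance_inflation_factor(exog, i) for i in range(1, exog.shape[1])]
    return model, vifs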
Results
The assumptions for linear regression were met for our data. Linearity, homoscedasticity and normal distribution of residuals were validated in the models. The VIFs were less than 10, indicating that multicollinearity was not present in the models. Multivariable analyses were then performed to identify the variables with a significantly independent impact on the changes in psychological status (Table 2). The scores of anxiety, depression, and stress were the dependent variables, and the independent variables of the models were age (≥ 50 as reference), gender (male as reference), place of living (urban as reference), usual residence (other areas in China as reference), education (middle school or below as reference), marital status (single as reference), occupation (medical staff as reference), monthly income (< 2,000 yuan as reference), number of cohabitants (0 as reference), quarantine or not (yes as reference), confirmed infection in personal network (yes as reference), level of attention (increased as reference), self-assessed infection risk (increased as reference), mental-health help-seeking (not found yet as reference) and satisfaction with community work (dissatisfied as reference). Compared to people with middle school education or below, the mental health status of people with higher education was better. Married persons had lower anxiety, depression, and stress scores than single persons, that is, their mental health was relatively better. People with monthly incomes above 10,000 yuan had higher anxiety, depression, and stress scores than those with monthly incomes below 2,000 yuan. The scores of anxiety and stress increased as the frequency of attention to the epidemic increased. Anxiety, depression, and stress were higher among those who thought they had an increased risk of contracting COVID-19 in recent days and those who believed that the epidemic's impact on their lives had increased. Taking people who had sought mental health help but not yet tried it as a reference, people who had tried to adjust their mental state in some way and those who thought they did not need to adjust their mental state had lower scores of anxiety, depression, and stress. In addition, people who had received psychological help but had not yet tried it had lower anxiety and stress scores. Gender, age, occupation, confirmed infection in personal network and satisfaction with community work did not appear to be significant correlates of the anxiety, depression, and stress scores.
Discussion
Since the COVID-19 outbreak sparked a global public health crisis by spreading across China and other countries, various mandatory precautions have been taken by governments and individuals [15]. To the best of our knowledge, the present investigation is the first study to characterize people's psychological status in terms of anxiety, depression and stress one month after the Wuhan lockdown, when the government's initiatives had achieved initial success. Studying residents' psychological conditions at this time point can indirectly reflect the effectiveness of the anti-epidemic work of the government, communities and all walks of life. In the multivariable analyses, we found that urban residents were more likely to report anxiety, depression and stress than their rural counterparts. In densely populated urban areas, well-planned, efficient public transportation systems facilitate residents' travel [16]. The disruption of daily life and the absence of entertainment or recreation made it impossible for urban residents to release excess inner pressure.
A more important reason is that, due to the high density of the urban population and greater mobility than in rural areas, the risk of infection is greater. High population density increases people's exposure to infectious diseases [17], which may lead to increased negative emotions among urban residents. People living in Hubei province reported significantly higher anxiety and stress scores, perhaps because Hubei was at the center of the epidemic. More highly educated participants possibly have a better understanding of the epidemic and seek appropriate care for their condition, which might lead to lower levels of negative emotions and better coping strategies. People who were married reported better mental health status; this might partly be attributable to the fact that married people can share the burden of negative emotions and obtain psychological support from their families [18,19], indicating that family support is of great importance. It is well documented that low-income groups are more likely to suffer from depression and anxiety [20,21]. However, our study showed that the epidemic had a greater impact on high-income groups: people with monthly incomes above 10,000 yuan had higher anxiety, depression, and stress scores than those with monthly incomes below 2,000 yuan, and their concerns about delays in working hours and the subsequent loss of expected income may explain the high level of stress [22]. Another noteworthy finding of this study is that the subjective indicators of changes in daily life played an important role in the scores of people's anxiety, depression and stress. Our study found that the greater the level of attention to COVID-19, the greater the negative emotions, which is in agreement with previous research [23]. In addition to the Wuhan lockdown, relevant actions included the urgent establishment of two quarantine hospitals (Huoshenshan Hospital and Leishenshan Hospital) within a 10-day span in response to the outbreak [24,25]. Other protective measures were enacted, such as building a series of cabin hospitals to receive people who tested positive for the coronavirus but showed no severe symptoms [26]. Our investigation was carried out in mid-February, and there was no sign of improvement at that time. During this period, domestic and foreign media rushed to report on the events. Since people cannot differentiate true from false news, the more attention paid to COVID-19, the more unclear information may be received, which negatively affects respondents' psychological status. Therefore, the content of health information provided during the epidemic needs to be evidence-based to avoid adverse psychological reactions. Our findings also revealed that the level of self-assessed infection risk influenced participants' mental state: anxiety, depression and stress outcomes were elevated with increasing self-assessed infection risk. This may result from actual conditions; respondents receive signals from the surrounding environment and make a corresponding assessment of their own risk of infection. The respondents who felt severely affected by the lockdown and quarantine exhibited more obvious anxiety, depression, and stress than the rest. This indicates that guaranteeing residents' day-to-day lives will be beneficial for mental health [27].
In addition, people who had tried to adjust their mental state in some way and those who thought they did not need to adjust their mental state had lower scores of anxiety, depression, and stress, and people who had received psychological help but had not yet tried it had lower anxiety and stress scores. This reflects that, when one finds a problem with one's mental state, actively seeking a solution can effectively relieve negative emotions. During the COVID-19 epidemic, online mental health services became a mainstream form of mental health service, including online cognitive behavioral therapy for depression, anxiety, and insomnia (e.g., on WeChat) [28]. So, for people with psychological problems, seeking help from professionals on the Internet is also a good choice. This study not only supplements knowledge of the psychological status of residents during the lockdown period but also helps to better understand which groups of people are more likely to develop negative emotions when the disease is epidemic, which is meaningful for China and other countries.
Limitations
There are some limitations to the study. On the one hand, during the process of data collection, sources of bias include potential selection bias of respondents, as respondents were asked if they were willing to participate in the survey, resulting in volunteer bias, and they may not be truly representative of the general population. On the other hand, although we had sufficient respondents, the sampling method may have introduced non-response bias [29].
Conclusion
In conclusion, the life and psychological state of the urban population changed negatively after the Wuhan shutdown on 23 January. Usual residence, education, marital status, monthly income, level of attention, self-assessed infection risk, impact on daily life and mental-health help-seeking are important correlates of the scores of anxiety, depression, and stress. At present, China has achieved great success in the fight against the epidemic, but the epidemic situation in some parts of the world has not improved, and most residents are still quarantined at home. Awareness of these relevant factors could help the government and related personnel to prevent more severe psychological trauma in the later period.
Declarations
Ethics approval and consent to participate
The project was conducted through an online survey and we obtained the informed consent of the participants before the survey. Due to the epidemic situation, we cannot return to school for the time being; the statement of the Ethics Committee will be added after returning to school.
Fetal Dermal Mesenchymal Stem Cell-Derived Exosomes Accelerate Cutaneous Wound Healing by Activating Notch Signaling
Fetal dermal mesenchymal stem cells (FDMSCs), isolated from fetal skin, serve as a novel MSC candidate with great potential in regenerative medicine. More recently, paracrine actions, and MSC-derived exosomes in particular, have come into focus for their vital role in MSC-based cellular therapy. This study evaluated the therapeutic potential of exosomes secreted by FDMSCs in normal wound healing. First, the in vivo study indicated that FDMSC exosomes could accelerate wound closure in a mouse full-thickness skin wound model. We then investigated the effect of FDMSC-derived exosomes on adult dermal fibroblasts (ADFs). The results demonstrated that FDMSC exosomes could induce the proliferation, migration, and secretion of ADFs. We discovered that the Notch signaling pathway was activated after treatment with exosomes. We then found that in FDMSC exosomes the ligands of the Notch pathway were undetectable except for Jagged 1, and the results of mimicking Jagged 1 with a peptide and knocking it down with siRNA suggested that Jagged 1 may drive the activation of Notch signaling in ADFs. Collectively, our findings indicate that FDMSC exosomes may promote wound healing by activating ADF motility and secretion via the Notch signaling pathway, providing new aspects for the therapeutic strategy of FDMSC-derived exosomes for the treatment of skin wounds.
Introduction
The skin is the largest tissue of the human body and its main function is to guard the underlying tissues. Wound healing is a complex process, and successful cutaneous wound healing requires a series of steps including inflammation, new tissue formation, and remodeling. Furthermore, skin cell migration, proliferation, differentiation, and apoptosis make great contributions to this process. These steps are tightly coordinated and well regulated to restore the multilayered structure of the skin in the normal wound-healing process [1]. Dermal fibroblasts are one of the most important cell types involved in the normal wound-healing process [2]. The main functions of the dermal fibroblast are extracellular matrix (ECM) production, collagen synthesis, wound contraction, reepithelialization, and tissue remodeling. Once injury occurs, hemostasis takes place immediately. Fibroblasts, along with other cells including neutrophils, macrophages, and endothelial cells, are attracted to the wound by the blood clot. Fibroblasts are then activated by macrophages and play a vital role in the proliferative and remodeling phases. Fibroblasts start proliferating and producing ECM proteins such as collagen, hyaluronan, and fibronectin to provide a foundation for wound repair [3]. There is a paucity of pharmacological therapeutics that can accelerate the healing of large-area burn wounds and chronic, nonhealing wounds. These wounds adversely affect patients' quality of life and place great economic pressure on families and society. Therefore, it is important to seek an effective therapeutic method to promote wound healing [4]. Mesenchymal stem cells (MSCs) hold significant promise for regenerative medicine. Previous studies demonstrated the therapeutic potential of MSCs for tissue regeneration, including liver, heart, bone, cartilage, neural, and skin tissue [5][6][7][8][9][10].
Recent literature suggests that the regenerative effect of MSCs is mainly mediated through paracrine signaling to regulate host cells, rather than cell replacement [5,11]. Fetal dermal MSCs (FDMSCs), which are derived from the dermis of accidentally aborted fetuses, exhibit the advantages of high expansion potential, high differentiation capacity, and low immunogenicity. As an advantageous MSC source, FDMSCs have great potential in the tissue regeneration field owing to their scarless wound-healing characteristics [12][13][14]. In our previous research, we found that FDMSCs can inhibit the bioactivity of keloid fibroblasts in a paracrine manner. In the last decades, researchers have shown increasing interest in exosomes. Exosomes are 40-100 nm membranous vesicles secreted by most cell types. They contain nucleic acids, lipids, and proteins, and their main function is to transfer bioactive molecules in cell-cell communication [15,16]. Moreover, recent studies have shown the role of exosomes in pathogenesis, tissue regeneration, diagnosis, and drug delivery [17][18][19][20][21]. Exosomes are released from MSCs as part of paracrine signaling and transfer their cargo of proteins, RNAs, and lipids to recipient cells to regulate their state and behavior. Exosomes derived from MSCs are involved in the acceleration of wound healing [20][21][22]. We used this promising MSC type, FDMSCs, to investigate the paracrine effect on the wound-healing process in vivo and in vitro, and to analyze the signaling pathway associated with this process. Notch signaling is an evolutionarily conserved pathway with numerous ascribed functions. Studies over the past decades have shown that Notch plays key roles in stem cell maintenance, development, homeostasis regulation, and cell fate decisions, and that its dysfunction can contribute to a variety of human diseases [23]. There are 5 ligands (delta-like-(Dll-) 1, Dll-3, Dll-4, Jagged 1, and Jagged 2) in mammals which can activate Notch signaling. Once activated, Notch receptors are cleaved by tumor necrosis factor alpha converting enzyme (TACE) and γ-secretase, resulting in the release of the Notch intracellular domain (NICD). Cleaved NICD translocates into the nucleus and associates with a DNA-binding protein to regulate target gene expression [24,25]. A number of studies have identified that the Notch signal plays a critical role in wound healing by regulating the proliferation and migration of endothelial cells, keratinocytes, fibroblasts, epidermal stem cells (ESCs), and other wound-healing-related cells [25][26][27][28]. Furthermore, cell secretion ability is under the regulation of Notch signaling [29]. In this study, we hypothesized that exosomes derived from FDMSCs can promote cutaneous wound healing via the Notch signaling pathway.
Material and Method
2.1. Cell Culture. FDMSCs were extracted from the dorsal skin of fetal samples, while adult dermal fibroblasts (ADFs) were extracted from adult skin samples of patient surgical waste. The extraction and identification steps were described in our previous study [30]. These cells were cultured in DMEM/low glucose (HyClone, USA) containing 10% fetal bovine serum (FBS, Gibco, USA) and 1% 100 U/ml Penicillin-Streptomycin (Gibco, USA). Isolation and Identification of FDMSC Exosomes. The exosomes were isolated using an ExoQuick-TC kit (SBI, USA) following the instructions.
In brief, approximately 80% confluent FDMSCs were washed with PBS twice and cultured for an additional 48 hours in serum-free medium (SFM) containing 1% 100 U/ml Penicillin-Streptomycin. The CM (conditioned media) was collected and centrifuged at 3,000 × g for 15 minutes to remove cells and cell debris. The supernatant was filtered through a sterile 0.22 μm Steritop filter (Millipore, USA) and then transferred to an Amicon® Ultra-15 10K Centrifugal Filter Unit (Millipore, USA) to concentrate it to 1/5 volume. An appropriate volume of ExoQuick-TC was added to the supernatant at a ratio of 1:5 and mixed. After storing at 4°C overnight, the mixture was centrifuged at 1500 × g for 30 minutes to collect the exosomes. The exosomes were quantitated using the BCA Protein Assay Kit (Beyotime, China) following the manufacturer's protocol. The morphology of the exosomes was observed using a FEI Tecnai G2 Spirit transmission electron microscope (TEM, FEI, USA) after fixation with 2% glutaraldehyde and counterstaining with 4% uranyl acetate. The exosome markers CD63, Alix, and Tsg101 were detected by Western blot using specific antibodies. The diameter of the exosomes was measured with a ZetaView Nanoparticle Tracking Microscope (Particle Metrix Inc., USA). Animal Assay. Animal experiments were approved by the Ethics Committee of the Second Hospital of Shandong University. Studies were performed in 8-10-week-old BALB/c mice weighing 25 ± 5 g. Mice were anesthetized using tribromoethanol and the dorsal hair was shaved. 1 cm × 1 cm full-thickness dermal wounds were created in the skin on the back of each mouse. 200 μg FDMSC exosomes in 200 μl PBS, or 200 μl PBS alone, was injected subcutaneously at four sites around the wound. On days 0, 7, and 14, digital photographs of the injury site were taken. Some mice in each group were euthanized to obtain skin tissue samples from the wound site by dissection. These samples were collected for histopathological examination by hematoxylin and eosin (H&E) staining and immunohistochemistry (IHC). For IHC, primary antibodies against PCNA (Servicebio, China) and CK19 (Servicebio, China) were used. Exosome Internalization. Exosomes were labeled with PKH26 (Sigma-Aldrich, USA) according to the manufacturer's protocol. Briefly, 5 mg of exosomes was resuspended in 0.5 ml 2 × Diluent C. PKH26 was diluted in 0.5 ml 2 × Diluent C (4 × 10−6 M). The exosome and dye solutions were mixed immediately to give a final PKH26 concentration of 2 × 10−6 M, and the exosome-dye suspension was incubated for 3 min with periodic mixing. 1 ml of 1% BSA was then added to stop the staining. The labeled exosomes were washed by centrifugation in PBS in the Amicon® Ultra-15 10K Centrifugal Filter Unit, added to cultures of ADFs, and incubated for 8 hours at 37°C. ADFs were washed three times, the nuclei were stained with DAPI, and the cells were observed by fluorescence microscopy (Olympus, USA). Western Blot. Western blotting was performed following standard protocols and was used to identify the exosome markers CD63, Alix, and Tsg101. Briefly, exosomes were resuspended in PBS and loading buffer and then heated at 95°C for 5 minutes. Cell samples were lysed in RIPA lysis buffer (Beyotime, China) on ice. The samples were then loaded and separated on SDS-PAGE gels and transferred onto nitrocellulose membranes (Pall Life Sciences, USA).
After incubation with specific antibodies, protein expression and phosphorylation were detected and imaged with FluorChem Q (ProteinSimple, USA). The images were quantified using ImageJ. Primary antibodies used in this study included those against the exosome markers Alix, CD63, and Tsg101. 2.6. Cell Proliferation. 2 × 10³ fibroblasts were seeded in 96-well plates in SFM. After overnight plating, a Cell Counting Kit (CCK-) 8 (Beyotime, China) assay was performed to evaluate cell proliferation according to the manufacturer's protocol. In brief, cells were treated with MSC exosomes (1 μg/ml, 10 μg/ml, or 100 μg/ml) or SFM for 24 h, and each group contained three parallel wells. 20 μl of CCK-8 solution was added to each well and incubated for 2 hours at 37°C. The optical density of each well was measured at 450 nm using the Victor spectrophotometer (Thermo Fisher Scientific, USA). Cell Migration. A migration assay was used to analyze the migratory effect of FDMSC exosomes on ADFs. 1 × 10⁴ ADFs were seeded in the upper chamber with FDMSC exosomes (1 μg/ml, 10 μg/ml, or 100 μg/ml) or vehicle, and the bottom chambers contained culture medium with 10% FBS and 1% P/S. 24 hours later, cells were fixed with 4% paraformaldehyde and stained with 0.1% crystal violet, and the cells on the upper surface were removed with a cotton swab. The cells on the lower membrane surface were counted under a microscope (Nikon, Japan) at ×100 magnification in 5 random fields. Quantitative Real-Time PCR. The primers were synthesized by BGI (China), and the sequences are listed in Table 1 (Type I collagen: 5′-CGGCGAGAGCATGACCGATGG-3′ and 5′-TCCATGTAGGCCACGCTGTTC-3′; Type III collagen: 5′-ACAAAGAGGAGAACCTGGACC-3′ and 5′-GGAGGACCCCGGGCTCCCATC-3′). Total RNA was isolated from ADFs and mouse-excised skin wounds using a TRIzol reagent (Invitrogen, USA) according to the manufacturer's instructions. cDNA was synthesized using HiScript II Q RT SuperMix for qPCR (Vazyme Biotech, China), and real-time PCR was performed using SYBR Green Master Mix (TAKARA, China). GAPDH was used as the reference gene for calculations, and the ΔΔCt method was used to analyze the real-time PCR data. 2.10. Jagged 1 Peptide Treatment. Jagged 1 peptide (CDDYYYGFGCNKFCRPR) with Notch agonist activity and a scrambled control (SC) peptide (RCGPDCFDNYGRYKYCF) were synthesized by the Qiangyao Biological Technology Company (China) [31]. Peptide stock solutions (10 mM) were prepared in sterile distilled water and diluted to 15 μM in culture medium before use. 2.11. siRNA Knockdown. FDMSCs were transfected with Jagged 1 siRNA or control siRNA (RiboBio, China) using Lipofectamine™ RNAiMAX Transfection Reagent (Thermo Fisher Scientific, USA) according to the manufacturer's instructions. 10 hours after transfection, the cells were washed with PBS twice and cultured with SFM. 48 hours later, CM was collected from Jagged 1 siRNA- or control siRNA-transfected FDMSCs to isolate Jagged 1 siRNA exosomes and control siRNA exosomes separately. 2.12. Statistical Analysis. Statistical analyses were performed using GraphPad Prism 5. Three or more independent experiments were performed for each result, and the mean and SD were calculated. One-way ANOVA or Student's t-test was used to detect statistically significant differences. A P value < 0.05 was considered statistically significant. Characterization of FDMSC Exosomes. FDMSCs were successfully isolated from fetal dorsal skin and identified by flow cytometry analysis and differentiation potential analysis in our previous study [30].
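The ΔΔCt calculation referred to above is not written out in the text; in its standard (Livak) form, which we assume is how it was applied here with GAPDH as the reference gene and untreated ADFs as the control, the relative expression of a target gene is

ΔCt = Ct(target) − Ct(GAPDH),  ΔΔCt = ΔCt(exosome-treated) − ΔCt(control),  relative expression = 2^(−ΔΔCt),

so a relative expression greater than 1 indicates upregulation of the target gene in the treated sample.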
FDMSC exosomes were isolated and then analyzed by TEM and Western blotting. We used TEM to analyze the size, shape, and morphology of the exosomes, and the results clearly revealed that FDMSC exosomes have a size of around 100 nm with a cup-shaped or round-shaped morphology (Figure 1(a), black arrow). Western blotting showed the presence of the exosome marker proteins CD63, Tsg101, and Alix in exosome lysates (Figure 1(b)). We then measured the size of the exosomes, and the results showed that their diameter was about 100 nm (Figure 1(c)). FDMSC Exosomes Promote Cutaneous Wound Healing In Vivo. We established a mouse full-thickness dermal wound injury model to investigate the role of FDMSC exosomes in wound healing. In the exosome-treated group, the wounds healed more rapidly than in the control group (Figures 2(a) and 2(b)). H&E results indicated that in the exosome-treated group there were more cells in the ECM and the ECM proteins were more regular and denser, with a thicker layer of collagen than in the control group at 7 and 14 days post-treatment (Figure 2(c)). Furthermore, in FDMSC exosome-treated wounds, there were more cells with a higher proliferative rate in the wound area, as evaluated by PCNA IHC (Figure 2(d)). The IHC result for CK19 illustrated that reepithelialization in the exosome-treated group was accelerated and the regenerated epidermis was thicker than in the control group (Figure 2(e)). In summary, our in vivo results indicated that FDMSC exosomes can accelerate cutaneous wound healing by promoting cell proliferation, ECM deposition, and reepithelialization in the wound area. FDMSC Exosomes Enhance Proliferation, Migration, and Secretion of ADFs. The main functions of ADFs are to synthesize, secrete, and deposit collagen and elastic fibers of the ECM. Therefore, the proliferation, migration, and protein synthesis abilities of ADFs are vital factors in wound healing. To explore the mechanisms of FDMSC exosome-induced repair, we treated ADFs with FDMSC exosomes. FDMSC exosomes (red) were found to be internalized by the ADFs (Figure 3(a)). To determine the effect of FDMSC exosomes on ADF growth and motility, CCK-8 and Transwell assays were performed. The results showed that the proliferation of ADFs was significantly improved after treatment with exosomes in a dose-dependent manner (Figure 3(b)). Compared to the control group, the migratory capability of ADFs was also significantly improved in the presence of exosomes (Figures 3(c) and 3(d)). These results demonstrated that exosomes significantly enhanced the proliferation and migration of ADFs in a concentration-dependent manner. Fibroblasts, owing to their ability to synthesize and secrete ECM proteins, play a significant role in the repair of skin wounds; these proteins, to a certain extent, determine the speed and quality of wound healing. Here, we analyzed the mRNA expression of ECM proteins and wound-healing-related proteins (Type I and III collagen, fibronectin, elastin, and α-SMA) in ADFs by real-time PCR after treatment with FDMSC exosomes. We found that in ADFs incubated with exosomes (1 μg/ml, 10 μg/ml, or 100 μg/ml) for 48 hours, Type I and III collagen, elastin, and fibronectin mRNA production was increased in a dose-dependent manner (Figure 3(e)). These results suggested that FDMSC exosomes can promote ECM secretion by ADFs. FDMSC Exosomes Activate the Notch Signaling Pathway.
Recently, researchers have shown the importance of Notch signaling in skin development and tissue regeneration. Therefore, we hypothesized that Notch signaling might be involved in the exosome-mediated wound-healing process. To investigate the underlying mechanism of the effect of FDMSC exosomes on ADFs, the expression levels of Notch1 and Jagged 1, components of the Notch signaling pathway, and of hairy and enhancer of split-1 (Hes 1), a Notch target gene, were analyzed by Western blot. The results showed increased expression of active Notch1, Jagged 1, and Hes 1, illustrating the activation of Notch signaling in the presence of FDMSC exosomes (Figures 4(a) and 4(b)). To find out how Notch signaling was activated, we detected the Notch ligands in exosomes and found that Jagged 1 was packaged into FDMSC exosomes while the other ligands were undetectable by Western blot (Figure 4(c)). [Figure 3 legend: cells were counted in at least five random microscope fields; results are shown as mean ± SD from three independent experiments; panel (e) shows real-time PCR analysis of the mRNA levels of Type I and III collagen, elastin, fibronectin, and α-SMA (normalized to GAPDH) in ADFs of the different groups; two-tailed Student's t-test; ***P < 0.001, **P < 0.01, *P < 0.05.] DAPT Can Partly Block the Promoting Effect of FDMSC Exosomes on ADF Proliferation and Migration. To determine whether exosomes promote ADF proliferation and migration in a Notch-dependent manner, we treated ADFs with DAPT, a γ-secretase inhibitor, to block Notch receptor cleavage at the cell surface. ADFs were treated with SFM, 100 μg/ml exosomes, or 10 μM DAPT + 100 μg/ml exosomes. We found that DAPT partly abolished the positive regulatory effect of FDMSC exosomes on the proliferation (Figure 5(a)) and migration (Figures 5(b) and 5(c)) of ADFs. These results indicate that FDMSC exosomes can activate the wound-healing capacity of ADFs via the Notch signaling pathway and that these effects are inhibited when DAPT is used, illustrating the role of the Notch pathway in wound healing. 3.6. Jagged 1 in FDMSC Exosomes Promotes the Wound-Healing Capacity of ADFs. To further investigate the functional role of Jagged 1 expressed in exosomes in wound healing, we used the Jagged 1 peptide to mimic Jagged 1 in activating the Notch signal and knocked down Jagged 1 expression in FDMSCs by siRNA. The expression of Jagged 1 in FDMSC exosomes was reduced after siRNA knockdown (Figure 6(a)). ADFs were incubated with SFM, 100 μg/ml FDMSC exosomes, 15 μM Jagged 1 peptide, or 100 μg/ml Jagged 1 knockdown exosomes for 24 hours. We found that in the FDMSC exosome and Jagged 1 peptide treatment groups the Notch pathway was activated and the proliferation and migration abilities of ADFs were increased, while depletion of Jagged 1 in FDMSC exosomes by siRNA blocked the activation of Notch signaling and abolished the promoting effect of FDMSC exosomes on the proliferation and migration of ADFs (Figures 6(b)-6(e)). These results indicated that Jagged 1 in FDMSC exosomes can activate the wound-healing capacity of ADFs via the Notch signaling pathway. Discussion Wound healing is an integrated and coordinated process of different cells functionally relevant to skin tissue repair, along with the microenvironment around them.
There are a large number of published studies describing treatment methods for the management of cutaneous wound healing; however, questions and difficulties remain in this field. Especially for nonhealing and chronic wounds, effective therapeutic approaches need to be further explored to deal with this prevalent and costly public health issue. Thus, it is urgent to find an effective approach to promote wound healing [32]. In the last decades, research and clinical trials of MSC applications in tissue regeneration have made great progress. Studies focused on MSC transplantation suggested that, instead of direct cell differentiation and replacement, MSCs play regulatory and stimulatory roles via paracrine signaling by releasing factors that promote angiogenesis, immunomodulation, and the recruitment of different cells [5,9,33]. The literature has demonstrated the positive effect of MSC CM on tissue regeneration [5,34,35]. Growth factors, cytokines, immunomodulatory proteins, and other biologically active proteins are the major components of CM. Besides, the discovery of exosomes helps us gain a better understanding of the underlying mechanism of the multiple effects of MSCs throughout the body [36][37][38]. Exosomes can mediate cell-cell communication by transferring RNAs, proteins, and lipids to recipient cells and modifying their bioactivity [15]. Nowadays, exosomes are considered novel therapeutic tools and diagnostic markers [17,39,40]. MSC exosomes can exert repair effects on injured tissues, consistent with MSCs themselves, through modifying recipient cell gene expression, protein production, and status, as well as activating regeneration-associated pathways including Wnt/β-catenin, AKT, ERK, and STAT3 [41][42][43]. Recently, investigators have examined the regenerative effects of exosomes derived from MSCs on tissues of the lung, heart, kidney, liver, brain, and so on. Therefore, exosomes derived from MSCs may become potential therapeutic agents in cell-free tissue regeneration therapy. According to advanced research in wound healing, MSC exosomes can increase the proliferation and migration of skin cells and inhibit their apoptosis. Fetal MSCs are a new potential source of MSCs. The dorsal skin of aborted fetuses, which is usually discarded in the clinic, is an alternative, abundant source of MSCs, and its clinical significance needs to be further explored. Compared with adult MSCs, fetal MSCs exhibit low immunogenicity and higher proliferation and differentiation potential. FDMSCs are derived from accidentally aborted fetuses and are thought to be the main functional cells involved in scarless wound healing [12]. Furthermore, owing to their histological origin, FDMSCs may possess unique properties for skin regeneration. In summary, FDMSCs are better candidates than adult MSCs in wound healing. Fibroblasts, as the important target of exosomes in wound healing, are the major cell type that synthesizes, secretes, and deposits collagen and elastic fibers of the ECM [2]. Recently, there has been renewed interest in the different fibroblast lineages [24,44]. Researchers found that fibroblasts isolated from different dermal sources exhibit diverse functions, and the underlying mechanisms need to be explored further. Fibroblasts from diabetic patients showed impaired function in wound healing with reduced migration response and growth factor expression [45,46].
In summary, the proliferation, migration, and protein synthesis abilities of dermal fibroblasts are vital for wound repair. Activation of fibroblasts in the early phase of wound healing can accelerate wound closure and matrix protein production, providing a foundation for wound repair. In our study, the results suggested that FDMSC exosomes have an enhancing effect on ADF growth and migration. Further analysis by real-time PCR showed significantly elevated ECM protein levels compared to those of the control group, indicating that FDMSC exosomes can promote ECM protein synthesis. The upregulation of Notch1, Jagged 1, and Hes 1 demonstrated the activating effect of FDMSC exosomes on Notch signaling. Furthermore, Western blot analysis of exosome components showed that Jagged 1 was the only ligand that could be detected, and inhibition of Notch signaling by DAPT significantly decreased the proliferation and migration of ADFs. In contrast, ADFs treated with FDMSC exosomes or the Jagged 1 peptide showed significantly enhanced proliferation and migration, and knockdown of Jagged 1 in exosomes abolished the promoting effect. These results emphasize that the Notch pathway is a mediator of exosome communication in regulating wound repair and that Jagged 1 in exosomes plays a vital role. As one of the important Notch ligands, Jagged 1 can regulate maturation of the human epidermis by activating Notch signaling [31]. In addition, Jagged 1 is present in exosomes from different kinds of cells and is biologically active, but the role of exosomal Jagged 1 in wound healing is largely unknown [47][48][49]. In this study, we found that Jagged 1 is sorted into FDMSC exosomes to regulate Notch signal pathway activity in ADFs. However, the quantity of Jagged 1 in FDMSC exosomes is variable and unstable because the biogenesis of exosomes largely depends on cell type, cell function, and physiological status. Owing to the complexity of FDMSC exosomes, the important components of the exosomal cargo, other factors that can activate Notch signaling, and the underlying mechanism are still under study. Further research is needed to elucidate a detailed molecular mechanism of the sorting process and biological functions of Jagged 1 and the exact mechanism of FDMSC exosomes in wound healing, and to develop new therapeutic strategies for nonhealing and chronic wounds. In conclusion, we successfully obtained FDMSC exosomes and investigated their role in cutaneous wound healing. Our results demonstrated that FDMSC exosomes can exert a promoting effect on the proliferation, migration, and protein synthesis abilities of ADFs via Notch signal activation. Conclusion The results demonstrated that FDMSC exosomes could accelerate cutaneous wound healing in vivo and promote the wound-healing capacities of ADFs by activating the Notch signal pathway in vitro. Our findings provide new aspects for the therapeutic strategy of FDMSC-derived exosomes for the treatment of skin wounds. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Ethical Approval We received the ethical approval of the Ethics Committee of the Second Hospital of Shandong University for fetal skin isolation. The ethics certificate was issued on 1st Jan 2018 and the certificate number is KYLL-2018(LW)-021. Consent We received the informed consent of the patients. Conflicts of Interest The authors declare that they have no conflicts of interest.
Authors' Contributions Xiao Wang designed the experiments, performed the experiments, analyzed the data, prepared the figures, and wrote the manuscript. Yi Pan and Ya Jiao performed the experiments, analyzed the data, and proofread the manuscript. Longxiao Zhang performed the histological experiments and analyzed the results. Yongjun Qi, Hongmin Gong, and Maoying Wang performed experiments. Duyin Jiang designed the experiments and supervised the research. All authors read and approved the final manuscript. Xinglei Wang and Duyin Jiang contributed equally.
Effect of initial-state geometric configurations on the nuclear liquid-gas phase transition Within the framework of an extended quantum molecular dynamics model, we simulated $^{40}$Ca + $^{16}$O collisions at beam energies ranging from 60 to 150 MeV/nucleon for $^{16}$O with different $\alpha$-cluster configurations. The results imply that different $\alpha$-cluster configurations lead to different yields of deuteron, triton, $^3$He, and $^4$He, but not of proton and neutron. We discuss the effect of geometric fluctuations as quantified by double ratios of light nuclei, namely $\mathcal{O}_\text{p-d-t}$ and $\mathcal{O}_\text{p-d-He}$. It is found that the magnitude hierarchy of geometric fluctuations follows the chain, kite, square, and tetrahedron structures of $^{16}$O, in that order. $\mathcal{O}_\text{p-d-t}$ reaches its maximum around 80 -- 100 MeV/nucleon, which could be related to the liquid-gas phase transition; this is consistent with results from the charge distribution of the heaviest fragments in the same collisions.

I. INTRODUCTION Phase transitions are a universal property of interacting matter and are generally studied in the thermodynamic limit of macroscopic systems. For the atomic nucleus, a finite-size system, phase transitions at the nucleonic level [1-5] and at the quark level [6-11] have been extensively discussed and investigated. The interaction between nucleons resembles that between molecules in a van der Waals fluid, which led Bertsch and Siemens [1] to speculate that a nucleus may undergo a liquid-gas phase transition (LGPT) when it is heated. Theoretical and experimental efforts have been made to confirm this, especially in intermediate-energy heavy-ion collisions. In a certain excitation-energy range, the nuclear caloric curve exhibits a temperature plateau [2], which has been interpreted as a possible indication of a phase transition [3,12-17]. Experimentally, spinodal decomposition has been found to occur in nuclear multifragmentation [18], indicating the existence of a liquid-gas coexistence region in finite nuclear systems. A negative microcanonical heat capacity has also been reported in nuclear fragmentation [19], which may be related to the LGPT [20].

Clustering is a fundamental phenomenon in physics that has attracted attention for a long time. The high stability of the α cluster relative to neighboring light nuclei was proposed early on by Gamow [21] and discussed by Bethe and Bacher [22,23]. A cluster structure can emerge in excited states of nuclei, or even in ground states, especially in light nuclei, where the nucleus resembles a molecule composed of clusters [24-33]. The configuration of the α clusters is a key problem in understanding clustering in light nuclei, and there are many theoretical predictions for α-cluster configurations. For instance, $^{16}$O can be treated as a linear-chain structure of four α clusters, which is supported by the α-cluster model [34] and the cranked Skyrme Hartree-Fock method [35]. In the ground state it can be regarded as a tetrahedral structure within nuclear chiral effective field theory [36] and covariant density functional theory [37], and the same structure above the ground state is supported by the Hartree-Fock-Bogoliubov method [38]. In the last decade, many studies have focused on density fluctuations to investigate the LGPT, as in Refs. [39-41].
Obviously, different α-cluster configurations induce different geometric fluctuations, so we chose four α-cluster configurations for the projectile $^{16}$O, namely chain, kite, square, and tetrahedron, to probe the density fluctuation, and we consider how these configurations affect the LGPT. In this study, we explore the effect of geometric fluctuation on the LGPT in low-intermediate energy heavy-ion collisions. Within the framework of the extended quantum molecular dynamics (EQMD) model, central $^{40}$Ca + $^{16}$O collisions at energies ranging from 60 to 150 MeV/nucleon are simulated, and the GEMINI model [42-44] is then used to de-excite heavy fragments. The paper is organized as follows. In Sect. II we introduce the simulation model and method, including the EQMD model, the GEMINI model, and the ratios of light nuclei. The effects of geometric fluctuation on the yields and (double) ratios of light nuclei are discussed in Sect. III, where the relation to the nuclear liquid-gas phase transition is also examined through the charge distribution of the heaviest fragments in the same collisions. Finally, the conclusion is given in Sect. IV.

A. EQMD model

In the EQMD model, the wave packets of the nucleons are Gaussian-like, and the total wave function of the system is treated as the direct product of the single-nucleon wave packets [45], where $\mathbf{R}_i$ and $\mathbf{P}_i$ are the centers of position and momentum of the $i$-th wave packet, respectively. The Gaussian width is introduced as a complex quantity, $\nu_i \equiv 1/\lambda_i + i\delta_i$, where $\lambda_i$ and $\delta_i$ are dynamical variables during the initialization process. The expected value of the Hamiltonian contains, besides the effective interaction, three terms: the center momentum of the wave packet, the contribution of the dynamical wave packet, and the zero-point center-of-mass kinetic energy $-T_{\rm zero}$. The first term can be expressed as $\langle \mathbf{p}_i\rangle^2/2m$, the second as $(\langle \mathbf{p}_i^2\rangle - \langle \mathbf{p}_i\rangle^2)/2m$, and the form of the third term can be found in detail in Ref. [45].

The effective interaction $H_{\rm int}$ consists of the Skyrme potential, the Coulomb potential, the symmetry energy, and the Pauli potential, $H_{\rm int} = H_{\rm Skyrme} + H_{\rm Coulomb} + H_{\rm Symmetry} + H_{\rm Pauli}$. In the Skyrme interaction, the parameters are α = −124.3 MeV, β = 70.5 MeV, and γ = 2, obtained by fitting the ground-state properties of finite nuclei. The Coulomb potential is folded over the Gaussian wave packets and involves the relative distance $r_{ij} = |\mathbf{r}_i - \mathbf{r}_j|$ and the error function $\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-u^2}\,du$. In the symmetry potential, $C_S$ is the symmetry energy coefficient, taken as 25 MeV in this work.

It is known that the stability of nuclei in the model description is very important for studying cluster-structure effects. Therefore, in order to reproduce the saturation property and to obtain α-cluster structures after energy cooling [30], a phenomenological repulsive Pauli potential is introduced to prevent nucleons with the same spin $S$ and isospin $I$ from coming close to each other in phase space. In this potential, $f_i$ is the overlap of the $i$-th nucleon with other nucleons having the same spin and isospin, i.e. $f_i \equiv \sum_j \delta(S_i,S_j)\,\delta(I_i,I_j)\,|\langle\phi_i|\phi_j\rangle|^2$, $\theta$ is the unit step function, and $c_P = 15$ MeV is a coefficient denoting the strength of the Pauli potential; for the other two parameters we take $f_0 = 1.0$ and $\mu = 1.3$.
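For orientation, since the display equations of this subsection did not survive text extraction, the standard EQMD forms can be sketched as follows; these are assumed from the conventional presentation of the model in Ref. [45] rather than reproduced verbatim (normalization factors and the Gaussian folding of the Coulomb term are omitted):
\[
\phi_i(\mathbf{r}) \propto \exp\!\left[-\frac{\nu_i}{2}\,(\mathbf{r}-\mathbf{R}_i)^2+\frac{i}{\hbar}\,\mathbf{P}_i\cdot\mathbf{r}\right],\qquad
\Psi=\prod_i \phi_i(\mathbf{r}_i),\qquad
\nu_i=\frac{1}{\lambda_i}+i\,\delta_i ,
\]
\[
H_{\mathrm{Skyrme}}=\frac{\alpha}{2\rho_0}\int \rho^{2}(\mathbf{r})\,d^{3}r
+\frac{\beta}{(\gamma+1)\,\rho_0^{\gamma}}\int \rho^{\gamma+1}(\mathbf{r})\,d^{3}r ,
\qquad
H_{\mathrm{Pauli}}=\frac{c_P}{2}\sum_i \big(f_i-f_0\big)^{\mu}\,\theta\big(f_i-f_0\big).
\]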
For the standard QMD model the stability is insufficient, because the phase space obtained from Monte Carlo sampling does not correspond to the energy minimum [45]. The EQMD model therefore takes into account the kinetic-energy term arising from the momentum variance of the wave packets in the Hamiltonian, which is ignored as a spurious constant term in the standard QMD [46,47]. In addition, the wave-packet width is introduced into the Hamiltonian as a complex variable and treated as an independent dynamical variable. These modifications not only describe the ground state better but also make the model successful in the study of nuclear cluster states.

As a consequence, we first take the energy-minimum state as the ground state of the initial nucleus. A random configuration is then given to each nucleus, and under the time-dependent variational principle (TDVP) [48] the propagation of each nucleon is described by damped equations of motion [45], where $H$ is the expected value of the Hamiltonian and $\mu_R$, $\mu_P$, $\mu_\lambda$, and $\mu_\delta$ are friction coefficients. During the friction-cooling process the system dissipates its energy through negative coefficients, driving itself toward a stable (minimum or even eigen-) state [49]. In contrast, in the subsequent reaction-simulation stage these coefficients are set to zero to maintain energy conservation. It is worth mentioning that an improved treatment of inelastic processes, especially the incoherent p-n bremsstrahlung process, has been presented within the EQMD framework in Refs. [50,51].

B. GEMINI model

The calculation in this study is a two-step process combining dynamical and statistical codes. At the end of the dynamical evolution, the nucleons are re-aggregated and condensed into individual clusters [43]. The de-excitation of heavy clusters is then handled by the GEMINI code of R. J. Charity [52,53]. Given the properties of a primary fragment, namely its proton number $Z$, mass number $A$, excitation energy $E^*$, and spin $J_{\rm CN}$, GEMINI de-excites the fragment through a series of sequential binary decays until the excitation energy of the hot fragments reaches zero.

The GEMINI model treats the evaporation of light particles in the Hauser-Feshbach formalism [54]. In the partial decay width of a compound nucleus for the evaporation of particle $i$, $J_d$, $S_i$, $J$, and $\ell$ are the spin of the daughter nucleus and the spin, total angular momentum, and orbital angular momentum of the evaporated particle, respectively; $\varepsilon$ and $B_i$ are its kinetic and separation energies; $T_\ell$ is its transmission coefficient (barrier penetration factor); and $\rho_d$ and $\rho_{\rm CN}$ are the level densities of the daughter and compound nucleus, respectively. The description of intermediate-mass-fragment emission follows the Moretto formalism [55,56], in which $\rho_{\rm sad}$ is the level density at the saddle point, $\varepsilon$ is the kinetic energy in the fission degree of freedom at the saddle point, and $B_{Z,A}(J_{\rm CN})$ is the conditional barrier, which depends on both the mass and charge asymmetries. For symmetric divisions of heavy nuclei, the GEMINI model uses the Bohr-Wheeler formalism [57] to predict the total symmetric fission yield, with $B_f(J_{\rm CN})$ the spin-dependent fission barrier.
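The corresponding decay widths are likewise missing from the extracted text. Schematically, and assuming the standard GEMINI expressions of Refs. [52-57] (the angular-momentum coupling ranges and level-density prescriptions are not reproduced here), they take the forms
\[
\Gamma_i \;\simeq\; \frac{1}{2\pi\,\rho_{\mathrm{CN}}(E^{*},J_{\mathrm{CN}})}
\sum_{J_d,\,\ell}\int T_{\ell}(\varepsilon)\,
\rho_d\big(E^{*}-B_i-\varepsilon,\,J_d\big)\,d\varepsilon ,
\]
\[
\Gamma_{Z,A} \;\simeq\; \frac{1}{2\pi\,\rho_{\mathrm{CN}}(E^{*},J_{\mathrm{CN}})}
\int \rho_{\mathrm{sad}}\big(E^{*}-B_{Z,A}(J_{\mathrm{CN}})-\varepsilon\big)\,d\varepsilon ,
\qquad
\Gamma_{\mathrm{BW}} \;\simeq\; \frac{1}{2\pi\,\rho_{\mathrm{CN}}(E^{*},J_{\mathrm{CN}})}
\int \rho_{\mathrm{sad}}\big(E^{*}-B_f(J_{\mathrm{CN}})-\varepsilon\big)\,d\varepsilon .
\]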
C. Ratios and density fluctuation

In the analytical coalescence formula COAL-SH [58] for cluster production, the yield $N_c$ of a cluster at midrapidity, consisting of $A$ constituent particles emitted from the hadronic matter at kinetic freeze-out (an emission source of effective temperature $T_{\rm eff}$, volume $V$, and number $N_i$ of the $i$-th constituent with mass $m_i$), is expressed in terms of these source properties in Eq. (14). There, $M = \sum_{i=1}^{A} m_i$ is the rest mass of the cluster, $l_i$ is the orbital angular momentum associated with the $i$-th relative coordinate, $\omega$ is the oscillator frequency of the cluster's internal wave function, inversely proportional to $M r_{\rm rms}^2$ with $r_{\rm rms}$ the root-mean-square (RMS) radius of the cluster, and $G(l_i)$ is the suppression factor due to the orbital angular momentum on the coalescence probability [59,60]. Additionally, a spin statistical factor accounts for constituents of spin $s_i$ forming a cluster of spin $S$, $g_{\rm rel}$ is the relativistic correction to the effective volume in momentum space, and $g_{\rm size}$ is the correction due to the finite size of the produced cluster.

Taking density fluctuations of the nucleons into account, the neutron and proton densities in the emission source can be written as [61,62]
\[
n(\vec r)=\frac{1}{V}\int n(\vec r)\,d\vec r+\delta n(\vec r)=\langle n\rangle+\delta n(\vec r),
\]
and analogously for protons. Combining Eqs. (17) and (18), an important double ratio $\mathcal{O}_1$ can be defined [61,62], with $g = 4/9 \times (3/4)^{1.5} \approx 0.29$. When $\alpha\Delta n$ is much smaller than unity, the correction from $\alpha$ in Eq. (19) is of second order [61], and $\mathcal{O}_1$ can be approximated by a very simple linear dependence on $\Delta n$. The yield ratio of light nuclei can therefore be taken as a direct probe of large density fluctuations, which might be associated with critical phenomena [61]. Besides, another double ratio of light nuclei, in which the α particle is involved, has also been proposed [63]; from the results of Ref. [63], this ratio could likewise be taken as a potential probe of critical phenomena [64-67]. From this statistical point of view, the ratios $\mathcal{O}_1$ and $\mathcal{O}_4$ are considered in this work. Moreover, some single ratios, such as $N_n/N_p$ and $N_{^4\mathrm{He}}/N_{^3\mathrm{He}}$, are also considered in our simulations.

In the EQMD model, the Pauli potential prevents the system from collapsing into a Pauli-blocked state at low energies and gives the model the capability to describe α clustering. Before frictional cooling the nucleon distribution of $^{16}$O is random, but after friction cooling it forms an approximately four-α configuration. For the four-α states of $^{16}$O we have chosen four configurations: chain, square, kite, and tetrahedron. After the system has evolved for a sufficiently long time, up to 500 fm/c, the final-state heavy fragments with excitation energy greater than zero and mass greater than 4 are further de-excited by the GEMINI model. For a given α-cluster configuration and incident energy, the number of simulated events is 300,000. It should be noted that, for $\mathcal{O}_1$ and $\mathcal{O}_4$, events with a zero denominator are abandoned, and the spectra are filled event by event using only events with non-zero denominators.
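As a concrete illustration of the event-by-event accumulation just described, a minimal Python sketch for $\mathcal{O}_\text{p-d-t}$ is given below. It assumes the standard identification $\mathcal{O}_1 = N_p N_t / N_d^2$ of Refs. [61,62]; the composition of $\mathcal{O}_4$ is defined in Ref. [63] and is not reproduced here, and all variable names are illustrative only.

import numpy as np

def o_pdt(events):
    """Accumulate O_1 = N_p * N_t / N_d**2 event by event.

    `events` is an iterable of dicts holding per-event light-nuclei
    multiplicities, e.g. {"p": 12, "d": 3, "t": 1}.  Events with a
    vanishing denominator are abandoned, as described in the text.
    """
    values = []
    for ev in events:
        n_p, n_d, n_t = ev.get("p", 0), ev.get("d", 0), ev.get("t", 0)
        if n_d == 0:                      # zero denominator: skip the event
            continue
        values.append(n_p * n_t / n_d**2)
    values = np.asarray(values, dtype=float)
    return values.mean(), values.std() / np.sqrt(len(values))

# Toy usage with three hypothetical events (the second one is discarded)
toy = [{"p": 10, "d": 4, "t": 2}, {"p": 8, "d": 0, "t": 1}, {"p": 12, "d": 5, "t": 2}]
mean_o1, err_o1 = o_pdt(toy)
print(f"O_p-d-t = {mean_o1:.3f} +/- {err_o1:.3f}")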
A. The effect of a chain α-clustering projectile with different polarization modes

In this work, we refer to the plane formed by the x and z axes as the collision plane. The chain-like $^{16}$O projectile is polarized both transversely and longitudinally, as shown in Fig. 1; for the comparison case, the projectile is randomly rotated over the full 4π solid angle. One can picture that the projection of the projectile onto the x-y plane is a single α-cluster point in the case of transverse polarization, whereas it is four α-cluster points for longitudinal polarization. In this way, different initial fluctuations are set among these three cases, and one can determine whether they have any effect on the LGPT.

First, the yields of various types of fragments as a function of beam energy in chain-like $^{16}$O bombarding $^{40}$Ca under the three polarization modes are shown in Fig. 2. The yields of proton and neutron increase with incident energy and reach stable values in the 60-150 MeV/nucleon region, whereas the yields of deuteron, triton, and $^3$He first increase and then decrease as the incident energy increases. For deuteron, triton, and $^3$He, when the incident energy is less than 100 MeV/nucleon their yields increase with incident energy, because the composite system formed by $^{16}$O and $^{40}$Ca is in a fusion-evaporation regime [68-70]. At this stage, the compression and temperature of the collision system increase with incident energy, so more light clusters, such as proton, neutron, deuteron, triton, and $^3$He, are evaporated [71]. With a further increase of incident energy, however, the excitation energy of the system becomes so large that the system moves toward multiple fragmentation [68-70]. The phase-space volume occupied by protons and neutrons becomes larger [71], which reduces the formation probability of deuteron, triton, and $^3$He. These features have been observed in previous experiments [72]. In addition, for deuteron, triton, and $^3$He under the same conditions, the larger the mass number, the smaller the yield, consistent with the prediction of the thermal model [73].

In contrast, the yield of $^4$He starts at almost zero below 70 MeV/nucleon, then increases with the beam energy, and finally levels off or drops slightly (see Fig. 5(f)). Moreover, the yield of $^4$He is about ten times that of $^3$He, which is exactly opposite to the prediction of the thermal model [73]. The yield of $^4$He is greater than that of triton and $^3$He, which can be attributed to the weaker Mott effect [74] on $^4$He than on triton and $^3$He, i.e., a light nucleus is no longer bound if the phase-space density of its surrounding nucleons is too large [75-77]; $^4$He is well bound and compact, while the other light fragments are weakly bound and loose. Furthermore, as seen from the trend with incident energy, multiple fragmentation starts to occur and gradually dominates, so the $^4$He yield increases; when the incident energy is large, it is difficult to decompose $^4$He because of its large binding energy, so its yield changes little or only slightly.

In Fig. 2 (a) and (b), proton and neutron are insensitive to the polarization modes. However, deuteron, triton, $^3$He, and $^4$He display obvious differences among the longitudinal, transverse, and unpolarized modes: deuteron is more sensitive in the low-energy region, whereas the opposite holds for triton, $^3$He, and $^4$He. For the ratio $N_n/N_p$, which is usually taken as a sensitive probe of the neutron skin [78-81], Fig. 3 (a) shows that it increases with incident energy and eventually converges to 1, since the projectile and target are isospin symmetric in this work.
There is no significant difference in the value of $N_n/N_p$ among the different polarization modes. Additionally, as shown in Fig. 3(b), the ratio of $^4$He to $^3$He has a trend similar to that of $N_n/N_p$ but differs appreciably among polarization modes, and its curve resembles the dependence of the $^4$He yield on incident energy in Fig. 2 (f), indicating that the change of the $^4$He yield is dominant.

Furthermore, the ratios $\mathcal{O}_1$ and $\mathcal{O}_4$ as a function of incident energy under the different polarization modes (i.e., with different initial geometric fluctuations) are shown in Fig. 4; these ratios can reflect the nucleonic density fluctuation, and one expects such geometric fluctuation to be strongly related to it. As mentioned above, the chain-like $^{16}$O projectile polarized in the longitudinal direction has a larger geometric fluctuation than the transversely polarized one, and the geometric fluctuation of the unpolarized case lies between them. One should note that the ratios $\mathcal{O}_1$ and $\mathcal{O}_4$ are derived for an equilibrium source, and the collision system at low energy may not reach equilibrium; without this condition one can still construct the ratios from light nuclei, but their interpretation is less direct. From Fig. 4 (a), the ratio $\mathcal{O}_1$ for the unpolarized case has the largest value below 80 MeV/nucleon. As the beam energy increases, however, $\mathcal{O}_1$ for longitudinal polarization becomes the largest and that for transverse polarization the smallest, as expected. This shows that $\mathcal{O}_1$ is sensitive to the initial-state geometric fluctuation of the projectile at higher incident energies. In Refs. [82,83], the density fluctuation is enhanced as the beam energy or temperature increases, which is associated with the LGPT in nuclear matter. In Fig. 4 (a), $\mathcal{O}_1$ reaches a maximum around 90 MeV/nucleon, with the exact position depending on the polarization mode. Such a turning point could have a physical meaning associated with the LGPT, and it will be cross-checked below using the charge distribution of the heaviest fragment. The ratio $\mathcal{O}_4$ tends to a stable value as the beam energy increases, without a turning point, but it appears to be sensitive to the polarization mode. One can also see that the trends of $\mathcal{O}_1$ are similar to those of the triton yield and the trends of $\mathcal{O}_4$ to those of the $^4$He yield, from which we infer that the final-state yields of triton and $^4$He are more sensitive to the geometric fluctuation. In addition, Figs. 2, 3, and 4 show that when the incident energy is low and the system is in the fusion-evaporation stage, the yields and the various ratios of the different fragments are not sensitive to the geometric configuration of $^{16}$O; they become sensitive only when the incident energy is high and the system is in the multiple-fragmentation stage.

As in Sect. III A, we first investigate the dependence of the yields of the different fragment types on incident energy for $^{16}$O with different α-cluster configurations; the results are shown in Fig. 5. For proton and neutron, the yields increase with incident energy and show no difference among the α-cluster configurations. For deuteron, triton, and $^3$He, the yields first increase and then decrease with incident energy, while for $^4$He the yield first increases and then becomes stable.
Furthermore, when the incident energy is greater than 100 MeV/nucleon, the yields of triton, $^3$He, and $^4$He for $^{16}$O with different α-cluster configurations follow the ordering "chain > kite > square > tetrahedron", with an obvious difference. As shown in Fig. 6, the trends of $N_n/N_p$ and $N_{^4\mathrm{He}}/N_{^3\mathrm{He}}$ are similar to those described in Sect. III A. There is again no significant difference in $N_n/N_p$ between the α-cluster configurations (Fig. 6 (a)), whereas $N_{^4\mathrm{He}}/N_{^3\mathrm{He}}$ is largest for the chain-like configuration and smallest for the tetrahedron-like configuration.

The ratios $\mathcal{O}_1$ and $\mathcal{O}_4$ as a function of incident energy for the different α-cluster configurations are shown in Fig. 7. $\mathcal{O}_1$ first increases and then decreases with incident energy; below 100 MeV/nucleon, $\mathcal{O}_1$ is smallest for the chain-like configuration and largest for the tetrahedron-like configuration, while the hierarchy is reversed from 100 up to 150 MeV/nucleon. In addition, there are obvious peaks around 80 to 100 MeV/nucleon, which may be related to the LGPT as mentioned above. $\mathcal{O}_4$ first increases and then tends to a stable value with incident energy, except that the tetrahedron configuration decreases slightly above 100 MeV/nucleon. Moreover, the peak energy of $\mathcal{O}_1$ differs somewhat among the cluster configurations, and for $\mathcal{O}_4$ the influence of the configuration begins to appear at 80 MeV/nucleon and becomes stable above 100 MeV/nucleon.

As mentioned in Ref. [84], the charge distribution of the heaviest fragment in intermediate-energy heavy-ion collisions has been observed to be bimodal, which is expected as a generic signal of a phase transition. We therefore plot the probability distribution of $Z_1/Z_s$ for different incident energies and α-cluster configurations in Fig. 8, where $Z_1$ is the charge of the heaviest fragment in each collision event and $Z_s$ is the sum of the charges of projectile and target. Figure 8(a) clearly shows that, for chain-like $^{16}$O, the probability distribution of $Z_1/Z_s$ starts to show a bimodal structure when the incident energy exceeds 80 MeV/nucleon, and this structure disappears once the incident energy exceeds 100 MeV/nucleon, further indicating that the LGPT occurs within this incident-energy range. Furthermore, as shown in Fig. 8(b), at 80 MeV/nucleon the bimodal structure of the probability distribution is most obvious for the square-like and tetrahedron-like projectiles, followed by the kite-like one, and is least obvious for the chain-like one. Combined with the magnitude of the geometric fluctuation derived previously for the different α-cluster configurations, it can be inferred that the larger the geometric fluctuation, the higher the incident energy at which the LGPT occurs, which is also consistent with the peak energy of $\mathcal{O}_1$ in Fig. 7(a).

IV. CONCLUSION

The difference in geometric fluctuation caused by different α-cluster configurations is mainly reflected in the yields of deuteron, triton, $^3$He, and $^4$He, while the yields of proton and neutron are insensitive to it. By investigating the double ratios $\mathcal{O}_\text{p-d-t}$ and $\mathcal{O}_\text{p-d-He}$ of light nuclei, we find that the magnitude hierarchy of geometric fluctuations is "chain > kite > square > tetrahedron" for $^{40}$Ca + $^{16}$O reactions with different α-cluster configurations of $^{16}$O.
The maximum value of $\mathcal{O}_\text{p-d-t}$ occurs around 80-100 MeV/nucleon, which could be related to the LGPT, and this is consistent with the results from the charge distribution of the heaviest fragment in the same reaction. The current work sheds light on the effects of geometric fluctuation on the LGPT in low-intermediate energy heavy-ion collisions. In the future, the yields of light nuclei produced in $^{40}$Ca + $^{16}$O central collisions at different incident energies can be measured through experimental programs at HIRFL-CSR, FRIB at MSU, and other facilities. Since many previous studies have indicated that $^{16}$O in the ground state could have a tetrahedral 4α structure, we expect the experimental data to be compatible with the conclusions drawn in the previous sections for $^{16}$O with the tetrahedral configuration. Meanwhile, the yields of charged light nuclei are intuitive and easily measurable quantities, and the single ratio $^4$He/$^3$He as well as the double ratios $\mathcal{O}_\text{p-d-t}$ and $\mathcal{O}_\text{p-d-He}$ are even better observables, since detector inefficiencies largely cancel in such ratios; we expect the trend or saturation value of the excitation function of these ratios to give hints of the geometric fluctuation. Of course, collective observables, such as elliptic flow, may also be necessary for further study of the phenomena discussed in this work.

The authors thank Dr. Kai-Jia Sun and Song Zhang for communications. This work was supported in part by the National Natural Science Foundation of China under contract Nos. 11890710, 11890714, 12147101, and 12205049, and the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008.
De novo genome assembly of a high-protein soybean variety HJ117 Objectives Soybean is an important feed and oil crop in the world due to its high protein and oil content. China has a collection of more than 43,000 soybean germplasm resources, which provides rich genetic diversity for soybean breeding. However, this rich genetic diversity also poses great challenges to the genetic improvement of soybean. This study reports the de novo genome assembly of HJ117, a soybean variety with a high protein content of 52.99%. These data will be a valuable resource for further soybean quality improvement research and will aid in elucidating the regulatory mechanisms underlying soybean protein content. Data description We generated a contiguous reference genome of 1041.94 Mb for HJ117 using a combination of Illumina short reads (23.38 Gb) and PacBio long reads (25.58 Gb), with high-quality sequence coverage of approximately 22.44× and 24.55×, respectively. HJ117 was developed through backcross breeding, using Jidou 12 as the recurrent parent and Chamoshidou as the donor parent. The assembly was further assisted by 114.5 Gb of Hi-C data (109.9×), resulting in a contig N50 of 19.32 Mb and a scaffold N50 of 51.43 Mb. Notably, Core Eukaryotic Genes Mapping Approach (CEGMA) and Benchmarking Universal Single-Copy Orthologs (BUSCO) assessments indicated that most core eukaryotic genes (97.18%) and genes in the BUSCO dataset (99.4%) were identified, and 96.44% of the genomic sequences were anchored onto twenty pseudochromosomes.

Soybean protein content is influenced by complex factors such as genotype, environment, and genotype-environment interactions [3,4]. Due to the strong negative correlations of soy protein content and oil content [4] with yield [5], it is quite difficult to increase soy protein content. In the early stages of soybean breeding, farmers primarily relied on repeatedly selecting preferred seeds from cultivated populations [6]. Following that, artificial hybridization technology was introduced, and the first artificially hybridized cultivated soybean was released in North America during the 1940s [7]. With the development and progress of molecular biology, marker-assisted selection (MAS) has been employed to expedite the breeding process [8]. The publication of the initial reference genome of soybean (cultivar Williams 82) in 2010 [9] signaled the commencement of the soybean functional genomics research era [10,11]. The enhancement of sequencing technologies has significantly boosted the capacity to generate high-quality genome assemblies.

Data description The Glycine max sample was collected from Shijiazhuang (37°6′25″N, 114°42′47″E). Genomic DNA and total RNA were isolated from leaf tissues. High-quality DNA was extracted using QIAGEN Genomic kits. Three methods were used to quantify and check the extracted DNA: NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific), agarose gel electrophoresis, and Qubit fluorometer (Invitrogen). After this check, the DNA was purified using AMPure PB beads (PacBio 100-265-900), and the subsequent library construction used the final high-quality genomic DNA (gDNA). The size and concentration of the library fragments were assessed using an Agilent 2100 Bioanalyzer (Agilent Technologies, USA). Qualified libraries were evenly loaded onto SMRT Cells and sequenced for 30 h using the Sequel II/IIe system (Pacific Biosciences, CA, USA).
Briefly, the DNA sample was initially fixed with formaldehyde and subsequently digested using the HindIII restriction enzyme. Next, the DNA ends were repaired and labeled with biotin, and T4 DNA ligase was used to ligate the interacting fragments to form a loop. After ligation, proteinase K was added to reverse the crosslinks and digest the proteins bound to the ligated DNA fragments, yielding purified DNA. Finally, the purified DNA was fragmented into sizes ranging from 300 to 500 base pairs, and the biotin-labeled DNA fragments were isolated using Dynabeads M-280 Streptavidin (Life Technologies). The Hi-C library was then constructed and sequenced on the Illumina NovaSeq 6000 platform using 150-bp paired-end reads.

To ensure high-quality data, the raw polymerase reads were subjected to quality control using the PacBio SMRT-Analysis package (https://www.pacb.com). This involved filtering out the following types of polymerase reads: (1) polymerase reads shorter than 50 bp, (2) polymerase reads with a quality value below 0.8, and (3) polymerase reads containing self-ligated adaptors, from which the adaptor sequence was removed. SMRT Link 9.0 (parameters --min-passes = 3, --min-rq = 0.99) was then used to generate CCS reads for subsequent assembly.

Hifiasm (https://github.com/chhylp123/hifiasm) was employed to assemble the HiFi reads, yielding a preliminary genome version (primary contigs). To obtain a chromosome-level genome, we performed Hi-C-assisted assembly. For the ~114.5 Gb of raw reads (Data file 1 and Data file 2), preliminary quality control was performed using fastp [14], and the resulting clean reads were aligned to the primary contigs using HiCUP; valid read pairs were used for further analysis. ALLHiC was used for the Hi-C-assisted assembly, and Juicebox was then used to fine-tune the ALLHiC clustering results. Finally, a genome was obtained with a contig N50 length of 19.32 Mb and a total contig length of 1041.94 Mb, as well as a scaffold N50 length of 51.43 Mb and a total scaffold length of 1041.95 Mb (Data file 3 and Data file 4).

To assess the quality of the assembly, a custom script was used to compute the number of scaffolds clustered per chromosome, the chromosome sequence lengths, and the genome anchoring rate; the Hi-C anchoring rate was calculated from the numbers of sequences that were and were not assembled to the chromosome level. The chromosome-level genome was partitioned into equal-length 500 kb bins, and the number of Hi-C read pairs spanning any two bins was used as the intensity signal representing the interaction between the respective bins; heatmaps (Data file 5) were generated based on these signals. BUSCO (Benchmarking Universal Single-Copy Orthologs: http://busco.ezlab.org/) [18] was also applied to assess the quality of the genome, and the conserved genes (248 genes) present in six eukaryotes were selected to construct the core gene library for the CEGMA [19] evaluation. The evaluation results revealed that the majority of core eukaryotic genes (97.18%) and genes in the BUSCO dataset (99.4%) were successfully identified (Data file 6).
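For reference, the contig and scaffold N50 values quoted above can be recomputed from a list of sequence lengths with a few lines of code; this is a generic sketch rather than the custom script used for HJ117.

def n50(lengths):
    """Return the N50 of a list of contig or scaffold lengths (bp).

    N50 is the length of the shortest sequence in the set of longest
    sequences that together cover at least half of the total assembly.
    """
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length
    return 0

# Toy example: five contigs totalling 100 bp, so N50 = 30
print(n50([40, 30, 15, 10, 5]))  # -> 30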
Limitations Soybean is considered to have undergone an allotetraploidy event [9] that has resulted in 75% of its genes being present in multiple copies [32], and repetitive DNA makes up ~54.4% of each genome [33]. In this study, 23.38 Gb of Illumina short reads (Data file 13) and 25.58 Gb of PacBio long reads (Data file 14) were obtained, providing approximately 22.44× and 24.55× sequence coverage, respectively. Although Hi-C sequencing yielded 114.5 Gb of data with a depth of 109.9×, the overall sequencing depth was relatively low, which may result in incomplete genomic information.

The contig N50 length of the de novo assembled HJ117 genome is 19.32 Mb, and the scaffold N50 reaches 51.43 Mb, indicating that the assembly is comparable to other soybean genome assemblies from the same period. However, gaps still exist in the genome. To achieve a more accurate genome assembly, optical mapping technology could be incorporated and the HiFi sequencing depth could be increased in later stages. Alternatively, the HJ117 genome could be assembled to telomere-to-telomere level using ONT ultra-long technology to obtain more comprehensive genomic information for HJ117.

Table 1. Overview of data files/data sets
Transcranial direct current stimulation (tDCS) reduces motivation to drink ethanol and reacquisition of ethanol self-administration in female mice Transcranial direct current stimulation (tDCS) is an emerging noninvasive brain neuromodulation technique aimed at relieving symptoms associated with psychiatric disorders, including addiction. The goal of the present study was to better identify which phase of alcohol-related behavior (hedonic effect, behavioral sensitization, self-administration, or motivation to obtain the drug) might be modulated by repeated anodal tDCS over the frontal cortex (0.2 mA, 20 min, twice a day for 5 consecutive days), using female mice as a model. Our data showed that tDCS did not modulate the hedonic effects of ethanol as assessed by a conditioned place preference test (CPP) or the expression of ethanol-induced behavioral sensitization. Interestingly, tDCS robustly reduced reacquisition of ethanol consumption (50% decrease) following extinction of self-administration in an operant paradigm. Furthermore, tDCS significantly decreased motivation to drink ethanol on a progressive ratio schedule (30% decrease). Taken together, our results show a dissociation between the effects of tDCS on “liking” (hedonic aspect; no effect in the CPP) and “wanting” (motivation; decreased consumption on a progressive ratio schedule). Our tDCS procedure in rodents will allow us to better understand its mechanisms of action in order to accelerate its use as a complementary and innovative tool to help alcohol-dependent patients maintain abstinence or reduce ethanol intake. We have recently developed a procedure to apply tDCS in rodents to study its behavioral and neurobiological effects [43][44][45][46][47] ; see 48 for a detailed procedure. Our early studies showed that in mice, repeated anodal tDCS over the frontal cortex reduced nicotine-induced place preference conditioning as well as abnormal behaviors associated with chronic exposure to nicotine during adolescence 43 . We also found that tDCS produced long-lasting attenuation of cocaine-induced behavioral responses and gene regulation in corticostriatal circuits 44 . The goal of the present study was to extend these data and evaluate the efficacy of repeated tDCS treatment in animal models of ethanol exposure that reflect different aspects of alcohol consumption. In the first experiment, we used the place preference conditioning paradigm to evaluate the impact of tDCS on the hedonic effect of ethanol ("Do I like it?"). In a second experiment, we evaluated the effects of tDCS on the expression of ethanol-induced behavioral sensitization, which has been hypothesized to reflect drug-induced long-term neuroplasticity in the nucleus accumbens 49 . In these experiments, ethanol was passively administered, i.e., intraperitoneally administered by the experimenter. Finally, we tested whether repeated tDCS treatment might facilitate the extinction and/or decrease the reacquisition of ethanol consumption in an operant self-administration paradigm. The motivational component ("How hard I am willing to work to obtain a dose of ethanol?") was finally evaluated using a progressive ratio schedule. In this experiment, the consumption of ethanol was voluntary, i.e., the mice controlled their oral consumption. Materials and methods Animals. 
Female mice were housed at six to eight per cage (except during surgery, recovery, and electrical stimulation periods, during which they were individually housed) under a 12-h light/dark cycle (lights on from 07:00 to 19:00 h) at a controlled temperature (21 ± 0.5 °C) and humidity (55 ± 10%). Experiments were conducted during the light phase of the cycle. Food and water were available ad libitum (unless otherwise indicated). Female rather than male mice were used in the present work because we used females in our previous tDCS studies 43,44 and because we have already collected a significant quantity of data on alcohol-induced behavioral sensitization in female mice [50][51][52][53] . Moreover, female mice were used to ensure consistency with the majority of ethanol sensitization studies and because female rodents generally show increased susceptibility to druginduced sensitization compared to males 54 . To the best of our knowledge, there are no studies to date showing a differential effect of tDCS between male and female mice. However, several groups have reported differential impact of tDCS in men and women (with the effects often being more marked in women than in men), which they have linked to factors such as sex hormones, anatomical differences, and neuroplasticity [55][56][57][58][59][60][61][62] . All experimental procedures were performed in strict accordance with the Guide for the Care and Use of Laboratory Animals (NIH), the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines and the European Union regulations on animal research (Directive 2010/63/EU). They were approved by the University of Franche-Comté Animal Care and Use Committee (CEBEA-58) and the local ethics committees of Amiens (CREMEAP-C2EA96). Transcranial direct current stimulation. Surgery. Prior to surgery, mice were allowed 1 week of acclimation to the animal facility, during which time, they were repeatedly handled. A tubular plastic electrode holder (internal diameter 2.1 mm; DIXI Medical, France; Fig. 1A) was surgically affixed to the skull of each mouse. The animals were anesthetized with ketamine hydrochloride/xylazine (80/12 mg/kg; intraperitoneal (i.p.) injection) and were placed in a stereotaxic apparatus. The center of the electrode holder was positioned over the left frontal cortex, 1 mm rostral and 1 mm left of bregma, and fixed with glass ionomer cement (GC Fuji I, Leuven, Belgium; Fig. 1B) 43,44,48 . The animals were allowed 1 week to recover from surgery. Stimulation protocol. The electrode holder was filled with saline (NaCl 0.9%), and the stimulation electrode (anode, diameter: 2.1 mm; DIXI Medical) was screwed into the electrode holder. A larger rectangular rubberplate electrode (cathode, 9.5 cm 2 ; Physiomed Elektromedizin AG, Schnaittach, Germany) was used as a counter electrode and was placed onto the ventral thorax ( Fig. 1C) 43,44,48 . For 5 consecutive days, an anodal constant current (0.2 mA; 2 × 20 min/day, 5-h interstimulation interval, linear fade-in/fade-out: 10-s ramp) was transcranially applied over the frontal cortex using a DC-Stimulator Plus (NeuroConn, Ilmenau, Germany) or an Open-tES stimulator specifically designed for rodent research 45 . The animals were restrained and awake during tDCS to prevent a possible interaction between tDCS and anesthetic drugs. Control (sham) animals were subjected to the same procedure (surgery, restraining box, and electrode fixation), but no current was delivered. 
An important consideration is the extent to which our animal stimulation paradigm is equivalent to protocols used in humans. Indeed, our stimulation protocol is the same as that used in clinical trials in terms of time, length, and number of repetitions, but it uses a lower intensity: 0.2 mA vs. 2 mA. However, the current density was much higher in our animal model than in human protocols, owing to the smaller size of the electrode used for mice. This is important because the area stimulated by the current might differ significantly between mice and humans, especially when considered relative to the size of the brain. Liebetanz and collaborators found that lesions began to be induced after tDCS at a current density of 142.9 A/m² 63. Jackson and collaborators 64 found that when rats were stimulated using a 5.3 mm² electrode (anode), lesions began to appear at 0.5 mA (current density: 94.2 A/m²). Another team 65 published evidence that higher intensities (up to 1 mA, corresponding to a current density of 80 A/m²) are effective and safe. Even though there are some differences between our stimulation protocol and those cited above (e.g., polarity, rats vs. mice), we estimate that our stimulation protocol (current density: 57.1 A/m²) is safe. In the present study, only one current intensity and one polarity were tested (0.2 mA, anodal stimulation). In a previous study by our group, we tested distinct current intensities and polarities 46; our results indicated that the behavioral effect of tDCS (on depression-related behavior) was absent at intensities of 0.025 and 0.1 mA and emerged when the intensity was increased to 0.2 mA. At a current intensity of 0.2 mA, only anodal stimulation affected depression-like behavior; cathodal stimulation had no effect.

Briefly, the CPP apparatus consisted of two main compartments (18 cm tall × 18 cm long × 24 cm deep, manufactured in house) linked by a corridor and displaying different features, both visual (walls: plain or hatched pattern) and tactile (floor texture: smooth or textured, respectively). On day 1 (preconditioning, D1), mice were placed in the corridor and allowed free access to the compartments for 10 min. The time spent in each compartment was recorded using the EthoVision system (Noldus, the Netherlands). On days 2-4 (conditioning phase, D2-D4), the mice received an injection of ethanol and an injection of vehicle daily (interval between injections: 6 h). After each injection, the mice were immediately confined in one of the two conditioning compartments for 15 min (the drug was always paired with the less preferred of the two compartments as measured on D1). On day 5 (postconditioning, D5), the mice were again allowed free access to both compartments for 10 min without any drug injection. The percentage of time spent in the drug-paired compartment was calculated for the preconditioning (%D1) and postconditioning (%D5) phases as follows: drug-paired compartment (seconds) / (drug-paired compartment + vehicle-paired compartment (seconds)) × 100. Preference scores were then calculated as %D5 − %D1. The drug induced a conditioned place preference (i.e., a rewarding, pleasant effect) if the preference score was significantly greater than 0 and a conditioned place aversion (an aversive, unpleasant effect) if the preference score was significantly lower than 0.
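The scoring just described can be summarized in a short sketch (hypothetical variable names; the actual analysis used EthoVision exports and the statistics reported in the Results):

def percent_drug_paired(drug_s, vehicle_s):
    """Percent of time in the drug-paired compartment (corridor time excluded)."""
    return 100.0 * drug_s / (drug_s + vehicle_s)

def preference_score(d1_drug_s, d1_vehicle_s, d5_drug_s, d5_vehicle_s):
    """Preference score = %D5 - %D1; values significantly above 0 indicate a
    conditioned place preference, values significantly below 0 an aversion."""
    return (percent_drug_paired(d5_drug_s, d5_vehicle_s)
            - percent_drug_paired(d1_drug_s, d1_vehicle_s))

# Hypothetical mouse: 240 s vs 300 s on D1, 330 s vs 210 s on D5
print(round(preference_score(240, 300, 330, 210), 1))  # -> 16.7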
A preference score not significantly different from 0 indicated that the injected substance had neither rewarding nor aversive effects. Experiment 2: tDCS effects on ethanol-induced behavioral sensitization Animals. Forty-seven female DBA/2 J mice were used (8 weeks old at the beginning of the experiment; Janvier, France). This strain was chosen for its high sensitivity to the stimulatory and sensitizing effects of ethanol on locomotion 66,67 . Locomotor activity cages. Locomotor activity was assessed using an infrared actimeter (LE 8811 Model, 45-cm width × 45-cm depth × 20-cm height, Bioseb, Chaville/Vitrolles, France). Each frame of the six locomotor activity cages was equipped with 16 × 16 infrared photocell beams (2 cm above the floor) and located in a dark experimental room (indirect 20-lx white light) isolated from external noise. Horizontal locomotion was measured by determination of photobeam breaks using ActiTrack software (Bioseb, France). Ethanol sensitization. Behavioral sensitization was performed as previously described [50][51][52][53] . In this test, ethanol is passively administered, i.e., intraperitoneally administered by the experimenter. On the first 3 days of the experiment (habituation, D1-D3; Fig. 3A), the mice were injected with saline (12.5 ml/kg, i.p.) and immediately placed in the center of the actimeter, which was used to record their horizontal locomotor activity for 5 min. The mice were then divided into saline (saline, N = 20) and ethanol (EtOH, N = 27) groups, with similar baseline locomotor activity levels for both groups. The next day (D4), the sensitization procedure started (Fig. 3A). During the 10 days of the induction phase (D4-D13), the mice received daily i.p. injections of either saline or ethanol (2 g/kg). After the last day of sensitization (D13), the mice injected with ethanol were divided into two groups, sham (N = 13) and tDCS (N = 14), and were subjected to the tDCS procedure as previously described (D14: surgery, D21-D25: stimulation; Fig. 3A). Following ten sessions of stimulation, the mice were left undisturbed in their home cages for 7 days. On day 33 (ethanol challenge), all mice received an i.p. injection of 2 g/kg ethanol, and locomotor activity was evaluated (expression phase). Afterward, ethanol challenges were repeated weekly (D40, D47, D54, and D61). To avoid ethanol-induced behavioral sensitization in saline-treated mice across the five ethanol challenges, half of the control animals (N = 10) received ethanol during the first three ethanol challenges, and the other half (N = 10) received ethanol during the last two ethanol challenges. Habituation to the taste of ethanol. To habituate the mice to the taste of ethanol, they were pre-exposed to increasing concentrations of ethanol in their home cages under a standard two-bottle choice protocol between water and 6%, 12%, and 20% ethanol solution for 3 days, 4 days, and 1 week, respectively (Fig. 4A). Operant ethanol self-administration procedure (training). Each mouse was placed in an operant conditioning chamber and trained to poke the active hole for 20% ethanol solution delivery (20 μL). The mice were initially trained under a fixed ratio 1 (FR-1) schedule of reinforcement for 1 day to allow acquisition of a nose-poke response for ethanol solution. The following day, the response requirement was increased (FR-2) and maintained for 6 weeks (1-h session during the light phase of the day, 5 consecutive days per week (Monday to Friday); Fig. 4A). 
After 6 weeks of operant self-administration, the mice were randomly divided into two experimental groups: sham (N = 12) and tDCS (N = 12). Surgery was performed the following week (week 9, W9; Fig. 4A), and repeated tDCS or sham stimulation was applied as described above during week 10 (W10; Fig. 4A). During weeks 9 and 10, the mice were not placed in the operant chambers (abstinence period). Operant response during extinction of ethanol self-administration. After tDCS or sham stimulation, the mice were subjected to a period of extinction (W11-W14) during which nose pokes no longer resulted in ethanol solution delivery or light cue presentation. Extinction sessions, like the training sessions, were conducted during the light phase of the day (1-h session, 5 consecutive days per week; Fig. 4A). Progressive ratio schedule. The motivation of the mice to obtain the 20% ethanol solution was tested in a single 1-h session (beginning of W17; Fig. 4A) using a progressive ratio schedule in which the sequence of response requirements increased by a step size of 2 or 3: animals had to poke the active hole two or three times more than for the previous delivery to receive the next delivery of the ethanol solution. The ratio sequence employed was as follows: 1, 2, 3, 5, 8, 10, 13, 15, 18, 20, 23, 25, 28, 30, 33, 35, 38, 40, 43, and 45. The last ratio completed in the 1-h session was defined as the breakpoint; the higher the breakpoint, the greater the motivation of the mice to obtain the ethanol solution. Statistical analyses. The results are expressed as the mean ± standard error of the mean (SEM). Significance was set at P ≤ 0.05. For experiment 1 (CPP), we performed a two-way analysis of variance (ANOVA) with stimulation (sham, tDCS) and dose (0, 1, 2 g/kg) as between-subject variables. Newman-Keuls (NK) post hoc tests were used to describe differences between individual groups. Student's t-tests were also used to compare the mean of each group with a standard value (i.e., a preference score of 0). For experiment 2, regarding the induction of behavioral sensitization, we performed a repeated-measures ANOVA with group (saline, EtOH-sham, EtOH-tDCS) as the between-subject variable and time (D4, D13) as the within-subject variable. For the expression of behavioral sensitization, we used one-way ANOVA with group (saline, EtOH-sham, EtOH-tDCS) as a between-subject variable at each time point (D33, D40, D47, D54, D61). NK post hoc tests were used to describe differences between individual groups. For experiment 3 (ethanol oral self-administration), we performed a repeated-measures ANOVA with stimulation (sham, tDCS) as the between-subject variable and time (W11 to W14, W11 to W16, W8 and W15) as the within-subject variable. NK post hoc tests were used to describe differences between individual groups. Experiment 1: tDCS had no impact on ethanol-induced conditioned place preference. Saline injections did not induce any place preference or aversion in either the sham or tDCS groups (all P > 0.05 vs. 0, Fig. 2B). In the sham group, ethanol induced a place preference (1 and 2 g/kg, P ≤ 0.05 and P ≤ 0.01 vs. 0, respectively). This response was also observed in tDCS animals (1 and 2 g/kg, P ≤ 0.05 and P ≤ 0.01 vs. 0, respectively). Two-way ANOVA revealed that place preference was modulated by the dose of ethanol (dose effect: F (2, 48) = 20.83, P ≤ 0.001).
The higher the dose, the higher the preference score (NK: 0 < 1 g/kg, P ≤ 0.001; 1 < 2 g/kg, P = 0.08); however, tDCS did not modulate ethanol-induced conditioned place preference (stimulation effect: F (1, 48) = 33.7, P = 0.68). After tDCS treatment (from D21 to D25), the expression of behavioral sensitization was evaluated at D33, D40, D47, D54, and D61 (Fig. 3A). At each time point, a significant group effect was observed (ANOVA for D33, D40, D47, and D54: group effect, P ≤ 0.001; ANOVA for D61: group effect, P ≤ 0.01). NK post hoc analysis revealed that ethanol induced significantly higher locomotor activity in the animals previously injected with ethanol during the induction phase than in the animals treated with saline (Fig. 3C). This reflected the expression of ethanol-induced behavioral sensitization and showed that this effect was still present at D61, more than 6 weeks after the end of the induction period; however, tDCS had no significant impact on this phenomenon (all P > 0.05). Experiment 3: tDCS decreased ethanol self-administration. Acquisition of operant ethanol self-administration before tDCS. There were no differences during habituation (i.e., spontaneous oral consumption, W1 and W2) or during the training period (W3-W8) between the sham and tDCS groups (active hole and inactive hole, all P > 0.05; data not shown). During W8 (training pre-tDCS), there was no difference in the number of nose pokes in the active hole between sham and tDCS animals (Student's t-tests: active hole, P > 0.05; inactive hole, P > 0.05; Fig. 4B). Extinction of operant ethanol self-administration after tDCS. As expected, the number of nose pokes per session in the active hole decreased from W11 to W14 during the extinction phase (W11-W14; time effect, F (3,66); Fig. 4B, active hole). This effect was no longer significant the following week (W16, active hole, P = 0.13). The number of nose pokes (active hole) was significantly higher in the sham animals during W15 (reacquisition) than during W8 (training pre-tDCS) (W8 vs. W15, NK: P ≤ 0.05). This was not the case in the tDCS animals (P > 0.05). No time effect, stimulation effect, or interaction effect was observed for the inactive hole (control condition, all P > 0.05; training, extinction, reacquisition). tDCS decreases the motivation to obtain ethanol on a progressive ratio schedule. At the beginning of W17, the breakpoint to obtain a dose of ethanol during the progressive ratio procedure (1-h session) was significantly higher in the sham animals than in the tDCS animals (Student's t-test: P ≤ 0.05; Fig. 4C, active hole). This effect was not observed for the inactive hole (Student's t-test: P > 0.05; Fig. 4C, inactive hole). Discussion To the best of our knowledge, the beneficial effects of tDCS on alcohol addiction-related behaviors had never been explored in preclinical studies. The goal of the present work was to better identify which phase of alcohol-related behavior (hedonic effect, sensitization, reacquisition after extinction, motivation) might be modulated by repeated anodal tDCS over the frontal cortex in mice. In general, our results indicated that tDCS had no effect when ethanol was passively administered but was very effective in reducing voluntary ethanol consumption.
Our data showed that, in contrast to findings with nicotine 43 and cocaine 44 , tDCS did not modulate the hedonic effect of ethanol assessed in the place preference paradigm, a test that combines Pavlovian conditioning and testing in a drug-free state. Furthermore, tDCS also did not modulate the expression of ethanol-induced behavioral sensitization, which has previously been linked to high ethanol intake in operant procedures 49 . In contrast, tDCS robustly reduced the reacquisition of ethanol consumption (50% decrease) following extinction in an operant paradigm. This is appealing, since this parameter is considered an index of the risk of relapse after a period of abstinence. Furthermore, tDCS significantly decreased the motivation to drink ethanol (30% decrease). That is, the tDCS mice did not work as much as the control (sham) mice to obtain the reward ("wanting" component). These effects were observed more than 1 month after the end of the stimulation period, demonstrating that repeated tDCS has a sustainable impact on these parameters. This is encouraging because the rates of relapse are high in alcohol-dependent patients, even after a long period of abstinence. These results allow us to assume that noninvasive electrical brain stimulation could be useful to help abstinent alcohol-dependent patients avoid relapse. tDCS did not modulate ethanol-induced conditioned place preference. Our findings indicated that tDCS had no effect on ethanol-induced conditioned place preference. This is in contrast with our previous data showing that tDCS decreased nicotine-induced conditioned place preference 5 weeks after repeated tDCS (using the same protocol with 0.5 mg/kg nicotine) 43 . Moreover, the increase in nicotine-induced place preference in adults obtained after chronic exposure to nicotine during adolescence was prevented by tDCS 43 . Similarly, cocaine-induced place preference conditioning was reduced 3 weeks after tDCS (for 5 and 25, but not 10, mg/kg) 44 . Regarding ethanol, a dose response was observed in the present study (2 g/kg was more appetitive than 1 g/kg, and 1 g/kg was more appetitive than saline 71,72 ), but tDCS did not modulate this parameter. It could be argued, based on Fig. 2B, that the preference score with 2 g/kg ethanol in the tDCS animals decreased and was close to the level of preference observed with 1 g/kg ethanol. However, there was no direct significant difference between the tDCS and sham animals. Moreover, the preference with the 2 g/kg dose of ethanol (compared to a preference score of 0) remained highly robust in the tDCS animals. Clearly, tDCS did not significantly modulate the pleasant effects induced by ethanol ("liking"), in contrast to the modulation observed with other drugs such as nicotine and cocaine. The different mechanisms of action of ethanol compared to psychostimulants might be responsible for these differences. The actions of psychostimulants are limited to a smaller number of neurochemical or receptor systems. Ethanol, on the other hand, interacts with several neurotransmitters in the brain's reward and stress circuits, which involve multiple receptors at widespread neuroanatomical sites throughout the brain 73 . For example, the rewarding effects of ethanol are mediated both directly and indirectly (e.g., release of GABAergic inhibitory tone and β-endorphin release) on dopaminergic neurons from the ventral tegmental area 74 . 
Based on the data mentioned above, tDCS seems to have differential outcomes on the hedonic effects of drugs depending on their modes of action on the central nervous system.

tDCS did not modulate ethanol-induced behavioral sensitization. To date, only one study has explored the impact of tDCS on the locomotor effects of a drug of abuse. In this study, the locomotor activating effects of a high dose of cocaine (25 mg/kg) were reduced by tDCS 44. The impact of tDCS on behavioral sensitization induced by alcohol and/or other drugs of abuse has never been evaluated. Behavioral sensitization is defined as a progressive and long-lasting increase in specific behaviors after repeated drug exposure, with the most studied behavior being locomotion. A recent study showed that in mice, sensitization to the motor stimulant effects of ethanol was associated with facilitation of the acquisition of ethanol self-administration in an operant task 49. It was therefore of interest to evaluate the impact of tDCS on ethanol-induced behavioral sensitization. Locomotor sensitization can be divided into two phases: induction and expression. We focused on how tDCS could modulate the expression of behavioral sensitization once the induction was completed. This question has translational value because it would be useful to use tDCS in populations of heavy drinkers who have already been exposed to alcohol. Our data showed that repeated injections of ethanol over a period of 10 days induced, as expected, robust behavioral sensitization in DBA/2J female mice 50,51,53,75,76. This phenomenon was still present more than 6 weeks after the end of the induction, demonstrating a long-lasting neuroadaptation to repeated ethanol exposure. However, tDCS had no impact on ethanol-induced behavioral sensitization, at least its expression, in our experimental conditions. Complementary studies are needed to evaluate whether tDCS could preclude the induction of behavioral sensitization induced by ethanol. In this case, tDCS would have to be used as a preventive intervention, which is less practical/relevant for clinical use.

tDCS reduced the reacquisition of operant ethanol self-administration. The major finding of the present study was the demonstration that repeated anodal tDCS over the frontal lobe can impact addiction-related behaviors in a voluntary oral ethanol self-administration paradigm. Here, tDCS decreased the reacquisition of ethanol self-administration after an extinction period. The number of nose pokes in the active hole was decreased by 50% in the tDCS group compared to the sham group during the first week of reacquisition, 5 weeks after the end of the stimulation period. This suggests that tDCS might decrease the rate of relapse in abstinent alcohol-dependent patients. This is similar to what is observed in humans 19,77,78. The amount of ethanol intake per session was noticeably higher in sham animals during the first week of reacquisition than in the last week of the training sessions. This was not the case for tDCS animals, which displayed comparable amounts of intake relative to the last week of the training session. Overall, these data indicated that in mice, after an extinction period of 4 weeks, the increase in ethanol consumption that is typically observed was suppressed by tDCS. Therefore, tDCS does not decrease basal ethanol consumption but blocks the increase in ethanol consumption seen after an extinction period (an index of relapse rate).
This effect was no longer present the following week due to a less robust effect of extinction on ethanol intake in the sham group at this time point. We also tested the motivation to consume ethanol with a progressive ratio schedule since elevated motivation is a hallmark of addictive behavior, and we obtained another major significant result regarding ethanol addiction. tDCS decreased the motivation to work to obtain the drug (30% decrease), which indicated a decrease in "drug wanting" (drug craving). Indeed, animals exposed to tDCS displayed a lower breakpoint when tested under a progressive ratio schedule in an operant task. This effect was detected 7 weeks after the end of the stimulation period, which demonstrated a long-term impact of tDCS on this parameter. These results in mice are in line with an increasing quantity of data obtained from clinical trials. Different laboratories have reported that tDCS reduced craving 19,32-34,40,42, alcohol consumption 41, behavioral symptoms associated with alcohol 33,38, or relapse after gradual withdrawal 19,36,38,40 in humans. However, others have been unable to replicate these results regarding craving 39 or relapse 37. Recent meta-analyses have provided additional evidence that tDCS over the DLPFC reduces craving and ethanol consumption. Interestingly, larger effects were found with repeated stimulations than with single stimulation 77,78, and a level B (possible efficacy) recommendation has been proposed for tDCS in addiction/craving 79.

tDCS decreased "wanting" (motivation) and not "liking" (pleasure). Taken together, these results suggest a dissociation regarding the effects of tDCS between "liking" (no effect in the CPP) and "wanting" (decrease in ethanol consumption in the self-administration procedure). They also suggest that tDCS is effective when ethanol is voluntarily self-administered and not when it is passively administered, thus revealing that its effectiveness may be more dependent upon motivational aspects. The incentive-sensitization theory posits that the essence of drug addiction is excessive amplification specifically of psychological "wanting" (especially triggered by cues), not necessarily accompanied by an amplification of "liking" 80,81. Therefore, tDCS might be particularly relevant in the treatment of ethanol-dependent patients and the promotion of abstinence by reducing the wanting component ("craving"). The brain circuitry that mediates the psychological process of "wanting" a particular reward is dissociable from the circuitry that mediates the degree to which it is "liked" 80,81. Incentive salience or "wanting", a form of motivation, is generated by large and robust neural systems that involve mesolimbic dopamine. By comparison, "liking", the actual pleasurable impact of reward consumption, is mediated by smaller and more fragile neural systems and is not dependent on dopamine. Since tDCS seems to have a pronounced impact on "wanting", further studies should explore the effect of tDCS on dopamine release induced by alcohol and other drugs of abuse in the nucleus accumbens using in vivo voltammetry and gene induction in corticostriatal circuits 82,83.
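For readers less familiar with the breakpoint measure discussed above, the sketch below illustrates how a breakpoint is typically extracted from a progressive-ratio session. The exact response-requirement progression used in this experiment is not stated here, so the arithmetic +2 progression, the function names, and the example response counts are illustrative assumptions only, not the study's protocol.

# Illustrative sketch of breakpoint extraction under a progressive-ratio (PR) schedule.
def pr_requirements(step=2, start=1, n=50):
    """Response requirement for each successive reward (1, 3, 5, ... by default)."""
    return [start + step * i for i in range(n)]

def breakpoint(responses_per_ratio, step=2, start=1):
    """Return the last completed ratio (the breakpoint) within a PR session.

    `responses_per_ratio` lists how many active-hole responses the animal
    emitted toward each successive requirement before the session ended.
    """
    completed = 0
    for emitted, required in zip(responses_per_ratio, pr_requirements(step, start)):
        if emitted >= required:
            completed = required   # last requirement the animal finished
        else:
            break
    return completed

# Example: the animal completes requirements 1, 3, and 5, then quits during the 7-response ratio.
print(breakpoint([1, 3, 5, 4]))  # -> 5

A lower breakpoint in tDCS animals than in sham animals, as reported above, corresponds to giving up at an earlier ratio, i.e., reduced willingness to work for the reward.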
We cannot rule out the possibility that both the motivation and hedonic components of alcohol consumption might be affected by tDCS if we used a higher current density and/or more stimulation sessions. However, it is interesting to note that 0.2 mA was sufficient in our previous studies to decrease nicotine- and cocaine-induced place preference, while the exact same protocol of stimulation was not able to decrease ethanol-induced place preference in the present work 43,44. Finally, the necessary stimulation parameters clearly differ between humans and rodent models, especially concerning the current density (higher in mice). Additionally, the anatomy of the brain is not the same (different tissue thicknesses between humans and mice, gyrencephalic brain structure in humans vs. lissencephalic structure in mice, etc.). Interestingly, however, the same behavioral parameters are affected by tDCS in clinical and preclinical studies. Indeed, tDCS reduces symptoms associated with depression, improves working memory, and attenuates the desire to consume several drugs of abuse both in humans and in mice (see, for example, 43,44,46). Thus, it seems that common mechanisms may be at work in these models and that it is appropriate to study them, bearing in mind that we must be cautious in our conclusions when translating results from animals to humans. The goal of our study was to establish tDCS in animals to highlight the impact of tDCS on different aspects of ethanol addiction-related behavior. We can now take advantage of this tDCS procedure to better understand the neurobiological mechanisms underlying the behavioral effects of tDCS. Several other questions remain unanswered, notably the duration of tDCS effects on addiction-related behavior. Are they long-lasting effects, or would it be beneficial to add tDCS sessions at different time points to maintain the persistence of its effects? The protocol of the stimulation itself could be improved (e.g., intensity, number of stimulations, interstimulation intervals, position of the electrode, polarity effect, timing of stimulation relative to the extinction procedure), and other stimulation protocols could be tested (e.g., transcranial alternating current and pulsed current stimulation).

Conclusion

The present study supports the promising clinical results showing that repeated anodal tDCS over the frontal lobe may have beneficial effects on consumption, craving and relapse in users of alcohol (and other drugs of abuse). Our tDCS procedure in animals will allow researchers to better understand the mechanisms of action of tDCS and accelerate its development as an innovative complementary tool to help alcohol-dependent patients maintain abstinence or reduce ethanol intake.
Implementing Mobile HRV Biofeedback as Adjunctive Therapy During Inpatient Psychiatric Rehabilitation Facilitates Recovery of Depressive Symptoms and Enhances Autonomic Functioning Short-Term: A 1-Year Pre–Post-intervention Follow-Up Pilot Study

Objective: New treatment options for depression are warranted, due to high recurrence rates. Recent research indicates benefits of heart rate variability biofeedback (HRVBF) on symptom recovery and autonomic functioning in depressed individuals. Slow-paced breathing-induced amplification of vagus nerve activity is the main element of HRVBF. Thus, the latter represents a safe and non-invasive complementary depression treatment. However, its efficacy in patients undergoing inpatient psychiatric rehabilitation receiving highly comprehensive treatments has not been evaluated. Methods: Ninety-two inpatients were randomly assigned to an intervention group (IG) or control group (CG). While the latter received the standard treatment only, adjunctive HRVBF was provided to the IG over 5 weeks. Depression severity and heart rate variability (HRV) were assessed before (pre) and after 5 weeks (post). Moreover, 1-year follow-up depression scores were available for 30 participants. Results: Although depression improved in both groups, the IG exhibited significantly larger improvements at post-assessment (ηp² = 0.065) and significant increases in resting LF-HRV (d = 0.45) and cardiorespiratory coherence (d = 0.61). No significant effects for RMSSD, SDNN, HF-HRV, or HR were found (ps > 0.05). Additionally, the IG showed a medium- to large-sized reduction in resting respiratory rate from 13.2 to 9.8 breaths per minute (p < 0.001, d = 0.86), with the CG exhibiting only a small decrease from 13.5 to 12.4 (p = 0.49; d = 0.35). While the IG exhibited significantly lower depression scores at post-assessment (p = 0.042, d = 0.79), this effect decreased during follow-up (p = 0.195, d = 0.48). Conclusion: HRVBF as adjuvant therapy during inpatient psychiatric rehabilitation facilitated depression recovery. Additionally, amplified LF-HRV as well as cardiorespiratory coherence at rest and a decrease in resting breathing frequency were observed in the HRVBF group. These findings emphasize HRVBF's value as complementary therapy regardless of concurrent treatments. Moreover, these incremental benefits could serve as resource even after the actual training period. However, the additional antidepressant gains vanish during the long-term follow-up, indicating the need for more intense training or regular practice afterward, respectively. Thus, future studies are warranted to examine how the initial benefits of HRVBF during inpatient psychiatric rehabilitation can be preserved post discharge.
INTRODUCTION

Depression has been identified as the leading cause of disability worldwide, affecting approximately 300 million people globally (World Health Organization, 2017; James et al., 2018). While antidepressants are still the standard treatment for depression, a debate regarding their efficacy has been emerging in recent years (Davidson, 2010; Ormel et al., 2020). A recent meta-analysis suggests only minor benefits compared to placebo treatments (Cipriani et al., 2018). Importantly, taking antidepressants seems to increase suicidality and all-cause mortality (Baldessarini et al., 2017; Maslej et al., 2017). Due to these obvious limitations of pharmacotherapy, alternative and safer treatment options are considered worthwhile. Importantly, the high recurrence rates among those affected by this debilitating disease indicate the need to complement conventional therapeutic approaches to improve depression prognosis (Burcusa and Iacono, 2007). Of note, autonomic functioning is shifted toward increased sympathetic activity in depression (Koschke et al., 2009; Schumann et al., 2017). Importantly, autonomic activity can be reliably and non-invasively assessed through heart rate variability (HRV), which refers to the fluctuation of subsequent beat-to-beat intervals of the heart rate, with mathematical analysis of HRV permitting inferences onto the underlying vagal modulations (Berntson et al., 1997). HRV can be assessed in time-domain and frequency-domain measures (Shaffer and Ginsberg, 2017). A sensitive indicator of vagally mediated HRV is the respiratory sinus arrhythmia (RSA), which reflects the concomitant increase in heart rate with inspiration and decrease with expiration, with the exact phase relationship between respiration and heart rate depending on the breathing frequency (Berntson et al., 1997; Vaschillo et al., 2002). Additionally, the root mean square of successive differences between normal heartbeats (RMSSD) is an established marker of vagally mediated HRV (vmHRV; Schwerdtfeger et al., 2019). Noteworthily, a recent meta-analysis shows attenuated vagal functioning in depressed individuals, manifesting in decreased heart rate variability, including vmHRV (Koch et al., 2019). The neurovisceral integration model (NIM), first postulated by Thayer and Lane (2000), provides a framework for a possible explanation regarding the link between depression and HRV.
The NIM proposes that the regulation of affect, attention, and autonomic activity shares neural circuits, and therefore, vmHRV could index the efficacy of central-peripheral neural feedback loops (Thayer and Lane, 2009). Importantly, the prefrontal cortex, central to executive functions, is considered as a major effector regarding autonomic functioning, exhibiting top-down inhibition on sympathetic activity (Thayer and Lane, 2009). Thus, dysfunctional cognitions and emotions, respectively, could trigger the release of the prefrontal vagal brake, manifesting in decreased vmHRV (Thayer and Lane, 2009; Smith et al., 2017). Accordingly, perseverative cognition like rumination is associated with attenuated vmHRV (Gerteis and Schwerdtfeger, 2016; Ottaviani, 2018). Importantly, enhancing HRV is hypothesized to increase cerebral oscillations, supposedly strengthening functional connectivity in brain areas relevant to emotion regulation, including prefrontal areas, which in turn should improve mental well-being (Mather and Thayer, 2018). Hence, increasing HRV via heart rate variability biofeedback (HRVBF) could constitute an alternative treatment for alleviating depressive symptoms. HRVBF is based on the phenomenon of maximum RSA amplification occurring at a specific respiratory frequency, which on average is approximately 5.5 breaths per minute (0.09 Hz) (Vaschillo et al., 2002; Lehrer et al., 2003). Due to the cardiovascular resonance in response to this specific respiratory pattern, it has also been labeled resonant breathing. HRVBF supposedly amplifies autonomic reflexes, like the baroreflex, ultimately enhancing autonomic functioning, which eventually increases HRV (Vaschillo et al., 2002; Lehrer et al., 2003; Lehrer and Gevirtz, 2014). Noteworthily, breathing at such a slow rate (i.e., 0.09 Hz) shifts the RSA from the high-frequency (HF; 0.15-0.4 Hz) to the low-frequency (LF; 0.04-0.15 Hz) domain of HRV, which seems primarily vagally mediated (Lehrer et al., 2003; Kromenacker et al., 2018). Importantly, several studies have shown benefits of HRVBF on depression recovery and HRV in clinical depression (Karavidas et al., 2007; Siepmann et al., 2008; Hartogs et al., 2017; Caldwell and Steffen, 2018; Lin et al., 2019). Although compelling, small sample sizes and lack of control groups in previous research limit interpretation, and long-term outcomes of HRVBF have not been evaluated yet. Thus, the present work aims to expand prior research by evaluating for the first time the short- and long-term efficacy of HRVBF in individuals undergoing inpatient psychiatric rehabilitation. Importantly, the main intent of this study was to assess the general feasibility of HRVBF to improve depressive symptoms in patients already receiving a highly comprehensive treatment program. Since the study aimed at elucidating HRVBF's antidepressant efficacy on a more global level and in stationary psychiatric rehabilitation per se, patients with diagnoses other than depression were included. Since inpatients are exposed to the same environmental factors during the 6-week in-clinic rehabilitation period, this provides an excellent context to investigate HRVBF's efficacy, especially as HRV seems sensitive to external influences like diet, exercise, and even air quality (Levy et al., 1998; Haberfellner et al., 2008; Pieters et al., 2012; Kingsley and Figueroa, 2016; Young and Benton, 2018). Therefore, any HRV or depression differences occurring between the intervention group (IG) and the control group (CG) are likely due to HRVBF.
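As a quick numerical illustration of the band shift described above, converting a breathing rate to its spectral frequency shows why resonance breathing at roughly 5.5-6 breaths per minute moves the respiratory sinus arrhythmia out of the HF band and into the LF band. The snippet is a minimal sketch using the band limits given in the text; the example breathing rates are arbitrary.

# Breathing rate (breaths/min) -> spectral frequency (Hz), and band classification.
def breaths_per_min_to_hz(bpm: float) -> float:
    return bpm / 60.0

for bpm in (15.0, 6.0, 5.5):
    f = breaths_per_min_to_hz(bpm)
    band = ("HF (0.15-0.4 Hz)" if 0.15 <= f <= 0.4
            else "LF (0.04-0.15 Hz)" if 0.04 <= f < 0.15
            else "outside LF/HF")
    print(f"{bpm:>4} breaths/min -> {f:.3f} Hz ({band})")
# 15 breaths/min -> 0.250 Hz (HF); 6 -> 0.100 Hz (LF); 5.5 -> 0.092 Hz (LF)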
Based on the proposed benefits of HRVBF on depression recovery and autonomic functioning, we hypothesized that inpatient psychiatric rehabilitation supplemented by HRVBF will yield greater improvements in depressive symptoms and HRV than the standard treatment alone. Specifically, we expected that practicing HRVBF enhances vagal and baroreflex functioning, which should result in increased RMSSD, HF-HRV, and LF-HRV, respectively. Finally, the cumulative effect of increased vagal activity as well as improved baroreflex should manifest in improved overall variability and therefore increased SDNN. On an exploratory basis, we also evaluated whether HRVBF during rehabilitation affects 12-month recovery from depressive symptoms.

Ethics Statement

The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2013. This study was approved by the institutional ethics committee (GZ. 39/43/63 ex 2017/18). From all participants, oral and written informed consent was obtained.

Participants and Design

Participants were recruited from a local inpatient psychiatric rehabilitation clinic, where they stay on average for 42 days. Patients taking antidepressants, anxiolytics, and other medication like anti-hypertensives or supplements, respectively, were included only if intake had been started at least 3 months prior to study admission. The sampling protocol is shown in Figure 1. Patients diagnosed with a substance-use disorder were excluded. Initially, 48 participants were randomly assigned to the intervention group (IG) and 44 participants to the control group (CG). Final sample size was reduced due to dropouts (IG = 8; CG = 5), diagnosed substance-use disorder (IG = 3; CG = 2), acute illness at post-assessment (IG = 1), severe side effects due to new medication (IG = 1), missing items in the BDI-II (IG = 1; CG = 3), artifacts in the electrocardiogram (ECG; IG = 1), and missing ECG assessments (CG = 2). Thus, the depression pre-post analyses included 68 participants (IG = 34; CG = 34) aged 26-66 (M = 48.7; SD = 9.4; Table 1). Pre-post data for HRV were available from 69 participants (see Table 1). The 12-month follow-up questionnaires were returned by 30 participants (IG = 14; CG = 16). A 2 × 2 pre-post design was applied with group (IG vs. CG) as between-subject factor, time (pre-post; post-follow-up) as within-subject factor, and depression as well as various HRV measures as dependent variables. The IG practiced HRVBF in addition to the standard treatment. The CG received standard treatment only, provided with the opportunity to receive a brief HRVBF training after the study.

FIGURE 1 | Study design. N = 92 in-patients enrolled in the study and were randomly assigned to the intervention (IG) or control group (CG). Due to dropouts, missing items in the BDI-II, side effects from medication, illness, artifacts in the ECG, and missing assessments, the final sample size was reduced to 68 for the depression pre-post and to 69 for the HRV pre-post analyses, respectively. Overall, 30 participants (IG = 14; CG = 16) returned the follow-up questionnaires.

Procedure

On admission day, inpatients received an overview of the study and were assured about the confidentiality, anonymity, and possibility to withdraw from the study without negative consequences.
They completed psychometric testing and two separate short-term HRV recordings, prior to and after the 5-week intervention phase. After completing the baseline assessments, participants were randomly assigned to the IG or the CG, respectively (Figure 1). After post-assessment, a 12-month follow-up regarding depressive symptoms was conducted in written form. During the follow-up period, no further support was provided.

Demographics and Confounders

Participants filled out questionnaires regarding demographic/control variables at study entry. At admission, diagnoses and medication including supplement intake were obtained from the patient documentations. Medications and supplements were also assessed from the demographic/control questionnaires to record potential unregistered intake. Supplements were assessed, since various substances like vitamin D3 or probiotics seem to have mood-altering effects (Mocking et al., 2016; Schefft et al., 2017; Nikolova et al., 2019). Additionally, the care report of every participant was reviewed to control for changes in medication, occurring illness and extraordinary incidents.

Depressive Symptoms

Depression severity was assessed with the Beck Depression Inventory II (BDI-II) (Beck et al., 1996), which seems particularly sensitive to detect changes among psychiatric patients (Wang and Gorenstein, 2013). The BDI-II consists of 21 items and is a self-report measure, assessing cognitive, affective, and neurovegetative symptoms of depression (Beck et al., 1996; Steer and Clark, 1997). In addition to an overall depression score, the BDI-II distinguishes between a cognitive and somatic-affective subscale (Huang and Chen, 2015). Cronbach's alphas for the BDI-II overall score and the cognitive and somatic-affective subscales were 0.95, 0.91, and 0.92 at baseline; 0.95, 0.90, and 0.93 at post-assessment; and 0.95, 0.91, and 0.92 at follow-up, respectively, indicating high internal consistency (Peterson, 1994).

HRV Data Analysis

Heart rate variability was acquired by means of the "HRV Scanner," a one-channel ECG with a sampling rate of 500 Hz (BioSign GmbH, D-85570, Ottenhofen, Germany). The signal was obtained from two limb clamps, placed at participants' wrists. Data analysis was performed with the HRV-Scanner Software (BioSign GmbH, D-85570, Ottenhofen, Germany). Participants completed a 3-min short-term electrocardiogram (ECG), from which time-domain and frequency-domain measures were assessed. The ECG signal was automatically controlled for artifacts by the HRV-Scanner software, and only data containing less than five percent of artifacts were included for further analyses. Additionally, the ECG was visually controlled by two experienced examiners. One participant from the IG was excluded due to excessive artifacts (baseline: 5.04%; post-assessment: 8.65%). Of note, both groups showed similar mean artifact ratios at baseline.
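The HRV-Scanner's artifact-detection algorithm is proprietary and not described here. Purely to illustrate the general idea of an automated artifact screen with a five-percent usability threshold, the sketch below flags RR intervals that deviate strongly from a local median; the 20% tolerance and window size are illustrative assumptions, not the software's settings.

import numpy as np

def artifact_ratio(rr_ms, window=5, tol=0.20):
    """Fraction of RR intervals deviating more than `tol` from a local median.

    Generic heuristic for illustration, not the HRV-Scanner's algorithm.
    """
    rr = np.asarray(rr_ms, dtype=float)
    flags = np.zeros(rr.size, dtype=bool)
    for i in range(rr.size):
        lo, hi = max(0, i - window), min(rr.size, i + window + 1)
        local_median = np.median(np.concatenate([rr[lo:i], rr[i + 1:hi]]))
        flags[i] = abs(rr[i] - local_median) > tol * local_median
    return flags.mean()

rr = [820, 830, 815, 1600, 825, 810, 835]   # one implausible beat (e.g., a missed R peak)
ratio = artifact_ratio(rr)
print(f"artifact ratio: {ratio:.1%}, usable: {ratio < 0.05}")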
HRV Measures

Heart rate variability parameters in the time domain encompass heart rate (HR), root mean square of successive differences (RMSSD), and standard deviation of RR intervals (SDNN). RMSSD is considered as a cardinal marker of parasympathetic activity and SDNN a global measure of all autonomic influences on HRV (Umetani et al., 1998; Shaffer F. et al., 2014). Frequency-domain analysis classifies HRV into low (LF; 0.04-0.15 Hz) and high frequencies (HF; 0.15-0.4 Hz), expressed as ms². HF power primarily reflects parasympathetic (i.e., vagal) activity (Berntson et al., 1997). Although regarded as a marker of cardiac sympathetic control (Berntson et al., 1997), a major vagal influence on LF power is proposed (Billman, 2013; Reyes del Paso et al., 2013). Of note, during resting conditions, LF-HRV seems predominantly influenced by baroreflex and vagal activity, with only minor sympathetic contributions, compared to ambulatory settings, where sympathetic efference could be more dominant. Accordingly, Kromenacker et al. (2018) showed that increases in LF power due to slow breathing were predominantly vagally mediated. We analyzed HR, SDNN, RMSSD, HF, and LF from the 3-min ECG recordings. Additionally, as an indicator of RSA, the grade of rhythmization (GR) was calculated, which aims to quantify HRVBF success. This index integrates fluctuations of LF-HRV and HF-HRV. Specifically, changes in LF and HF are weighted against each other, with HF assigned a higher weight, thus quantifying the ratio of peak amplitude power compared to the remaining signals in the spectral analysis. This is due to the well-known phenomenon that, during states of enhanced cardiorespiratory coherence, an elevated peak and a narrower distribution of power can be observed in the spectrogram, shifting from the HF to the LF range. Therefore, GR increments correspond to an increase in the peak amplitude power, including a higher signal density centered around the peak and less power within the remaining frequencies, indicating a high RSA state. On the contrary, a distribution of the power across a wider frequency range and a lower power peak, respectively, should indicate a lower GR and therefore a low RSA state. Hence, the GR aims at describing the quantity (i.e., height of amplitude) and quality of the RSA (i.e., presence of non-respiratory influences on the RSA), indicating the degree of cardiorespiratory coherence (e.g., Druschky and Druschky, 2015). It should be noted, though, that the GR is of explorative nature, since published validation studies are lacking. Frequency-domain HRV parameters were analyzed applying fast-Fourier transformation. On the day of testing, participants were instructed to abstain from alcohol, nicotine, and exercise until HRV measurements were completed. Individuals were also instructed to fast at least 2 h prior to their appointments, as food intake potentially influences HRV (Hayano et al., 1990; Lu et al., 1999; Cornelissen et al., 2010; Romanowicz et al., 2011; Kingsley and Figueroa, 2016). Due to circadian HRV fluctuations, participants' pre-post measurements were taken within the same 3 h of the day (Bonnemeier et al., 2003). The ECG was taken in a supine position after participants had rested for 10 min. Pre-post ECG measurements were conducted in the same climatized room.
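As a rough illustration of how the time- and frequency-domain measures above can be derived from an RR-interval series, the sketch below computes HR, SDNN, RMSSD, and LF/HF band power, plus a simplified peak-concentration index that is only loosely analogous to the grade of rhythmization. The resampling rate, spectral settings, and the coherence index are illustrative choices; they do not reproduce the HRV-Scanner software's fast-Fourier implementation.

import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def hrv_metrics(rr_ms, fs_interp=4.0):
    """Time- and frequency-domain HRV from RR intervals (ms). Illustrative only."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                      # beat times in seconds
    hr = 60000.0 / rr.mean()                        # mean heart rate (bpm)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))

    # Evenly resample the RR tachogram before spectral analysis.
    grid = np.arange(t[0], t[-1], 1.0 / fs_interp)
    rr_even = interp1d(t, rr, kind="cubic")(grid)
    f, psd = welch(rr_even - rr_even.mean(), fs=fs_interp, nperseg=min(512, len(grid)))

    def band_power(lo, hi):
        mask = (f >= lo) & (f < hi)
        return np.trapz(psd[mask], f[mask])

    lf, hf = band_power(0.04, 0.15), band_power(0.15, 0.40)

    # Crude coherence index: share of LF+HF power concentrated around the spectral
    # peak (a rough analogue of, not identical to, the grade of rhythmization).
    mask = (f >= 0.04) & (f < 0.40)
    peak_f = f[mask][np.argmax(psd[mask])]
    coherence = band_power(peak_f - 0.03, peak_f + 0.03) / (lf + hf)
    return dict(hr=hr, sdnn=sdnn, rmssd=rmssd, lf=lf, hf=hf,
                peak_hz=peak_f, coherence=coherence)

# Synthetic 3-min tachogram oscillating at ~0.1 Hz (resonance-like breathing).
beat_rr_ms, t_now = [], 0.0
while t_now < 180.0:
    rr_s = 0.85 + 0.10 * np.sin(2 * np.pi * 0.1 * t_now)
    t_now += rr_s
    beat_rr_ms.append(rr_s * 1000.0)
print(hrv_metrics(beat_rr_ms))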
Breathing Frequency

Resting breathing frequency was analyzed pre- and post-intervention from the ECG. The HRV Scanner Software analyzes respiratory rate from the ECG signal, which is highly correlated with the actual breathing rate (Schrumpf et al., 2016). Thus, ECG-derived breathing frequency has been suggested as an accurate measure of respiration (Tong et al., 2014).

HRVBF Training Compliance

We assessed three compliance measures. First, we documented participants' number of attended group trainings, and second, self-practice frequency was analyzed from the portable HRVBF devices. Third, an overall compliance score was calculated adding up group and self-practice frequencies.

HRVBF Training Performance

To measure participants' HRVBF training performance, we assessed the relative grade of rhythmization (relGR). The relGR describes the mean achieved percentage of the set target GR (i.e., the RSA amplitude required to receive a perfect feedback) during HRVBF sessions. Thus, the relGR objectifies the difficulty of the HRVBF while simultaneously measuring training success. For example, a relGR of 76 corresponds to producing on average 76% of the set target GR, while a relGR of 108 equals a mean GR of 108% of the target GR. Hence, the relGR can exceed 100% if the achieved values are higher than the set target GR, thus indicating superior performance. Additionally, respiratory rates were estimated from the biofeedback data, calculating the power peak in the frequency domain (Karlen et al., 2011).

Technical Details HRVBF

Heart rate variability biofeedback was delivered through a portable device named Qiu (BioSign GmbH, D-85570, Ottenhofen, Germany), which allowed participants to practice HRVBF at any time. The sphere-shaped device is battery-powered, has the size of a tennis ball, and measures heart rate by an optical sensor (i.e., photoplethysmography) at the palm, second digit, or thumb. Alternatively, an ear clip can be used to sense pulse rate. The Qiu provides the option to guide the practitioner's BF by moving blue LED lights, which can be set individually. The Qiu records date, time, and the RR intervals of every session. Once heart rate is detected, the luminescent upper half of the Qiu visualizes the current relGR through a continuous visual feedback, which ranges from dark red (i.e., low relGR) to bright green (i.e., high relGR). Accordingly, the optical feedback displays the degree to which practitioners achieve their target GR, which can be set individually, based on the participants' individual values. Importantly, the Qiu applies an algorithm controlling for error variance in the GR during the biofeedback, which ensures accuracy of the short feedback latency necessary for the HRVBF. In general, the HRVBF protocol used in this study differs from the original procedure (i.e., Lehrer et al., 2000, 2013). The original protocol assesses the precise resonance frequency with a rather time-intensive procedure as a basis for the actual HRVBF (Lehrer et al., 2000, 2013). On the contrary, a 60-s deep breathing HRV test (DBT) is used to estimate the target HRV amplitude for the Qiu-HRVBF. In the DBT, participants breathe at 6 breaths per minute, which corresponds to the approximate resonance frequency, with inspiration and expiration lasting 5 s each, guided by a visual signal (Ewing and Clarke, 1982; Lehrer et al., 2003; Shields, 2009). Hence, instead of assessing the individual resonance frequency, the approximate maximum HRV amplitude is assessed from the DBT.
Precisely, the HRV Scanner software calculates the GR from the 60-s DBT, which is used by the Qiu as reference for the HRVBF. Importantly, the Qiu's target GR is set higher than the actual maximum GR amplitude achieved in the DBT. Therefore, enough margin is provided to enable practitioners to achieve their actual peak HRV during the HRVBF practice. Thus, participants have to adapt their breathing pattern in response to the visual feedback to achieve their maximum HRV (i.e., GR) amplitude. Accordingly, practitioners determine their precise resonance frequency during every training session in order to achieve a positive feedback.

Standard Treatment (ST)

The ST consisted of 240 min of daily multifaceted therapies during the week and 80 min of therapy on Saturdays. These treatments included psychotherapy, psychoeducation, music therapy, physical and exercise therapy, and relaxation methods, including progressive muscle relaxation. Importantly, inpatients have to adhere to a strict treatment curriculum, which is equal for all inpatients, with non-adherence leading to early discharge. Hence, the IG and CG were comparable regarding treatment regimens independent of the HRVBF intervention. Of note, the clinic also provided breathing training by physical therapists, as additional individual therapy. Only the CG could participate in the in-clinic breathing training to avoid any confluent effects with the HRVBF on the study outcome.

Details HRVBF Training Procedure

The IG received a 2-h introduction to the HRVBF, consisting of hierarchical steps: First, participants were taught nasal abdominal and pursed-lip breathing according to Lehrer et al. (2000). We emphasized nasal inspiration, as recent literature indicates improved entrainment of cerebral activity as compared to oral inspiration (Zelano et al., 2016; Herrero et al., 2018; Piarulli et al., 2018). In addition, switching from thoracic to abdominal breathing could improve vagal activation via slowly adapting stretch receptors during deep breathing (Noble and Hochman, 2019). Pursed-lip breathing is supposed to improve breathing economy through decreasing air turbulences during exhalation and mechanically dilating the airways (e.g., Lehrer et al., 2000). Also, participants were instructed to focus the mind on the Dan Tian, a supposed "energy center" in the mind-body technique of Qi Gong, allegedly located three centimeters below the navel inside the belly (Chan et al., 2008). We integrated this idea as focusing on the Dan Tian while breathing seems to facilitate slow, deep breathing, which eventually is a prerequisite to successfully modulate HRV (Lehrer et al., 2003; Chan et al., 2008). Importantly, we disentangled this concept from its dogmatic valence and instructed participants to focus on the center of their abdomen to facilitate deep breathing. Second, participants were familiarized with the Qiu. Third, they were trained to use the taught techniques to modify their breathing and to adapt the latter according to the Qiu's visual feedback to optimize their HRV. Thus, the goal was to maximize the GR, rather than rigidly execute a specific technique. Fourth, participants received written instructions on the breathing techniques and details regarding Qiu usage, including self-practice. The self-practice consisted of a 10-min HRVBF twice a day. Since participants had to attend the various standard treatments during the day, we recommended doing the first session in the morning and the second in the afternoon or evening, respectively.
Participants were informed that they could train more HRVBF if they wanted to. Additionally, the IG was instructed to do three cycles of resonant breathing without the Qiu throughout the day, with each cycle lasting 10 breaths, trying to emulate the breathing pattern of the biofeedback-guided training. This additional practice aimed at familiarizing participants with the taught breathing techniques in order to facilitate HRVBF training. The HRVBF introduction was supplemented by one guided HRVBF session weekly (i.e., 5 sessions), consisting of approximately 35 min of HRVBF and 25 min for discussing any questions. In order to maintain training quality throughout the study period, the set target GR necessary to achieve a positive feedback was adjusted based on each individual's progression in performance. Because groups shared the same environment (i.e., clinic) during the study, the IG was instructed not to communicate any details about the HRVBF with the CG to avoid potential transfer effects. Importantly, it was stressed that participants should not share their personal HRVBF device, since all training sessions are recorded and supposed to reflect each individual's performance and compliance, respectively.

Statistical Analyses

Data were analyzed with SPSS 25.0 software. To compare groups regarding demographic, medical, and behavioral variables, chi-square analyses and unpaired t-tests were conducted. Shapiro-Wilk tests were performed to analyze distributional characteristics (Shapiro and Wilk, 1965). Accounting for skewed distributions, HRV measures were normalized using natural logarithmic transformation. Separate two-way mixed ANOVAs were performed, with group (IG, CG) as a between-subject factor and time (pre-post; post-follow-up) as a within-subject factor. Correlations were analyzed using Pearson's product-moment correlations. As a measure of effect size, partial eta-squared (ηp²) is reported with small, medium, and large effects represented by the values 0.01, 0.06, and 0.14, respectively (Cohen, 2013). Cohen's d was reported as effect size for t-tests with small, medium, and large effects, represented by the values 0.2, 0.5, and 0.8, respectively (Cohen, 2013).
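The analyses above were run in SPSS. Purely for illustration, the snippet below shows how the reported effect-size conventions and the natural-log transform can be computed. The formulas are standard, but the paired-d convention (change-score SD) is an assumption, since the manuscript does not state which denominator was used, and the example numbers are made up rather than study data.

import numpy as np

def cohens_d_paired(pre, post):
    """Cohen's d for a pre-post contrast: mean change / SD of change scores."""
    diff = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return diff.mean() / diff.std(ddof=1)

def partial_eta_squared(f_value, df_effect, df_error):
    """eta_p^2 recovered from an ANOVA F ratio."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

def ln_transform(hrv_values):
    """Natural-log transform used to normalize skewed HRV distributions."""
    return np.log(np.asarray(hrv_values, dtype=float))

# Hypothetical numbers, for illustration only (not the study's data):
print(round(partial_eta_squared(4.7, 1, 66), 3))                  # ~0.066, a medium effect
print(round(cohens_d_paired([28, 25, 30, 27], [20, 19, 24, 22]), 2))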
Baseline Sample Characteristics

Regarding demographic and control variables, there were no significant baseline differences between the IG and CG. Both groups showed similar overall antidepressant intake and were slightly overweight, with BMI values corresponding to early-stage obesity (World Health Organization, 1998; see Table 1). However, in the IG a tendency for higher SNRI intake, significantly higher use of atypical neuroleptics, and less frequent use of antiepileptics were observed (ps < 0.05; Table 1). Inpatients were diagnosed according to ICD-10 (World Health Organization, 1992). The majority was diagnosed with affective disorders (ICD-10: F30-39), followed by neurotic, stress-related, and somatoform disorders and schizophrenia and schizotypal and delusional disorders (ICD-10: F20-29), respectively. However, no information regarding the precise number of episodes in case of recurrent depression was available. Within each group, approximately one third exhibited a comorbid (two or more) disorder, with about a fifth exhibiting an additional burnout (Z73.0) diagnosis (Table 1). Importantly, for HRV pre-post analyses, demographic characteristics including both diagnoses (including specific ICD-10 diagnoses) and control variables were similar to the depression pre-post analyses. Only the statistical tendency for higher SNRI intake in the IG was significant (p < 0.05), while the less frequent use of atypical neuroleptics in the CG was non-significant (p > 0.05; Table 1). Of note, groups did not differ in severity of depressive symptoms (including subscales), diagnoses, or HRV variables at baseline (ps > 0.05). Both groups showed moderate depression scores at baseline (Table 2).

Effects of HRVBF on Resting Breathing Frequency

Baseline breathing rates did not differ significantly between groups (p = 0.717, d = 0.09) and were within the normal range of human respiration (IG = 13.2 vs. CG = 13.5; Yuan et al., 2013). Overall, resting breathing rate decreased, as evidenced by a large-sized main effect of time in the mixed ANOVA [F(1,67) = 27.16, p < 0.001, ηp² = 0.288]. A significant medium-sized interaction of time × group illustrated a moderating effect of the HRVBF on resting respiratory rate [F(1,67) = 6.928, p = 0.011, ηp² = 0.094]. Paired t-tests showed large decreases in breathing frequency for the IG, from 13.2 to 9.8 breaths per minute [t(33) = 5.04, p < 0.001, d = 0.86], whereas the CG showed only a small, non-significant decrease from 13.5 to 12.4 breaths per minute (p = 0.49, d = 0.35).

HRVBF Training Performance

Mean relGR across sessions was 80.0%, which documents that throughout all sessions, participants achieved on average 80 percent of their individual maximum HRV peak. This indicates sufficient HRVBF training difficulty to induce potential autonomic adaptations (i.e., HRV increases). The average respiratory rate in the IG, calculated from the biofeedback data, was 5.5 (SD = 0.46; range: 4.7-6.8) breaths per minute, which is in line with the findings of previous research using more extensive assessment methods (Vaschillo et al., 2002; Lehrer et al., 2003). Therefore, the GR seems to provide a feasible feedback signal to foster each individual's resonance frequency.

Exploratory Analyses

Overall depression scores were negatively associated with lnLF (r = −0.346, p = 0.006) and lnGR (r = −0.319, p = 0.011) at pre-assessment and at post-assessment (lnLF: r = −0.286, p = 0.020; lnGR: r = −0.315, p = 0.010). None of the evaluated compliance (i.e., practice frequency) and performance measures (i.e., relGR) were associated with changes in depression or HRV, respectively (ps > 0.05). Also, the observed decreases in depressive symptoms including subscales were not associated with changes in any of the HRV parameters across the whole sample and within each group, respectively (ps > 0.05).

12-Month Follow-Up of Depression Recovery

From thirty participants (IG = 14; CG = 16), depression follow-up data was available. No group differences in the control variables (ps > 0.05) or depression severity (p = 0.511, d = 0.24) were found at baseline. A mixed ANOVA comparing depression severity between the IG and the CG from post-assessment to follow-up showed neither a significant main effect for time nor a time × group interaction (ps > 0.05). However, the IG exhibited significantly lower depression scores of large effect size compared to the CG (p = 0.042, d = 0.79) at post-assessment. These additional antidepressive benefits due to HRVBF decreased during the 12-month post-discharge period, illustrated by slightly smaller depression differences at follow-up (p = 0.195, d = 0.48). This effect seems to originate from a visible increase in depressive symptoms within the IG during the follow-up period (p = 0.118, d = 0.48). Importantly, none of the participants from the CG completing the follow-up took part in the brief HRVBF introduction at the end of the rehabilitation.
DISCUSSION

This study evaluated whether HRVBF could enhance recovery of depressive symptoms and autonomic functioning in inpatients undergoing psychiatric rehabilitation. Moreover, a 12-month follow-up regarding depression trajectories was conducted, assessing the long-term sustainability of potential effects. Within 5 weeks, the IG exhibited a medium-sized, larger recovery in depressive symptoms than the CG, which appeared to be mainly driven by the comparably strong improvements in somatic-affective symptoms. However, these additional benefits gained during the treatment period vanished during the long-term follow-up. Noteworthily, toward the end of the treatment period, the IG showed medium- to large-sized amplification of LF-HRV as well as cardiorespiratory coherence (i.e., grade of rhythmization) at rest and a large reduction in resting breathing frequency, while no significant effects for RMSSD, SDNN, HF-HRV, or HR could be found. In comparison, no significant HRV changes could be observed in the CG, which, however, showed a small decrease in resting breathing rate. Importantly, the present research complements the hitherto only randomized controlled trial by Caldwell and Steffen (2018), who showed that HRVBF facilitated depression recovery and HRV in psychotherapy patients. These effects were larger as compared to our findings, which might be attributable to differences in sample characteristics. Of note, the comparably young sample in the Caldwell and Steffen study comprised women only, who seem to respond better to depression treatment and show larger autonomic adaptations due to interventions like exercise (Genovesi et al., 2007; Donker et al., 2013). Additionally, young individuals have shown larger HRV increases in response to interventions compared to middle-aged ones (Carter et al., 2003). Noteworthily, age seems to be an important factor regarding the efficacy of HRVBF on HRV, with young samples showing more reliable increases (Alayan et al., 2019). Hence, the sample of the Caldwell and Steffen study could have been more sensitive to treatment effects regarding depression and HRV than those in the present research, who were approximately twice as old and included both sexes. It should also be mentioned that antidepressants seem to reduce HRV, with SNRIs and tricyclics having particularly unfavorable effects on vagal efferent cardiac control (Kemp et al., 2014; Alvares et al., 2016). However, SSRIs seem to attenuate vagal functioning as well, although depending on the SSRI class, with fluoxetine exhibiting the least adverse effects on HRV (Kemp et al., 2016). In this regard, it is necessary to mention that only four participants included in the HRV analyses (IG = 3; CG = 1) were taking fluoxetine. However, this was supplemented with either antipsychotics, SNRIs, additional SSRI classes, or a combination of these medications. Additionally, antipsychotics have been shown to decrease HRV as well, with atypical neuroleptics seeming especially detrimental to autonomic functioning (Agelink et al., 2001; Iwamoto et al., 2012; Linder et al., 2014). Noteworthily, the IG exhibited a high intake of SNRIs and of atypical neuroleptics, which could explain why no improvements in RMSSD, HF-HRV, or SDNN could be observed. Therefore, we suggest that the advanced age and the density of pharmacological interventions may have attenuated an increase in autonomic functioning in the IG. It is also important to note that small samples tend to exaggerate effects (Button et al., 2013).
Thus, our findings may reflect HRVBF's efficacy more accurately than the comparably smaller study of Caldwell and Steffen (2018). Of note, this study provides first insights regarding the long-term sustainability of HRVBF-induced add-on benefits during inpatient psychiatric rehabilitation. Noteworthily, while groups showed no significant differences regarding the magnitude of depressive symptoms at baseline (d = 0.24), the IG compared to the CG exhibited significantly lower symptom severity of large effect size (d = 0.79) at the end of the rehabilitation period. Although these favorable antidepressive gains due to HRVBF became statistically non-significant at the 12-month follow-up assessment, these effects were still visible and of moderate size (d = 0.48). Seemingly, HRVBF generates unique psychophysiological benefits during the training phase, serving as additional resource even after the actual training period, which, however, appears to gradually vanish during a 12-month follow-up. Nevertheless, since we did not assess depressive symptoms and HRV at any time points between post-assessment and follow-up, no conclusions regarding psychophysiological trajectories can be drawn. Furthermore, slow-paced breathing practice during the follow-up period was not assessed, thus limiting the interpretation of the findings. However, it may be assumed that more intense training during stationary rehabilitation and/or continuing HRVBF after discharge may be necessary to maintain the initial benefits. Recently, Lin (2018) reported positive effects of a mobile-based HRVBF on autonomic balance, which could provide a useful tool to secure sustainability of the effects. Nevertheless, the medium to large favorable effects of HRVBF shown in patients within 5 weeks of inpatient rehabilitation appear compelling and seemingly magnified the already large antidepressant effect of a well-validated, multidimensional treatment program (i.e., 25 h of weekly therapies). Moreover, the amplification of LF-HRV and GR, exclusively observed in the IG, could indicate enhanced autonomic efficacy. Importantly, under controlled resting conditions the LF-HRV seems predominantly influenced by baroreflex and vagal activity, with only minor sympathetic influences. Especially when breathing within the LF frequency range, LF-HRV reflects almost exclusively vagal efference (Kromenacker et al., 2018). Since the IG exhibited a resting breathing rate at the upper end of the LF spectrum at post-assessment, the increases in LF-HRV within the IG could be considered of vagal origin. Regarding the GR, a cautious interpretation of this measure is imperative, as validation studies are lacking. It should be noted, though, that participants achieved resonance breathing during the Qiu biofeedback, thus indicating the utility of the GR as a marker of cardiorespiratory coherence. Taken together, we tentatively suggest that the IG exhibited improved vagal functioning. Certainly, further studies are needed to verify or falsify this hypothesis, especially since RMSSD, a sensitive marker of vmHRV, was not affected by the resonant breathing intervention. Still, HRVBF may exhibit unique therapeutic benefits independent of concurrent treatments. Hence, these findings emphasize the distinct effect of cultivating physiological coherence through resonance breathing on human psychophysiology.
Of note, a study conducted in a similar setting found no additional antidepressive benefit of a mindfulness self-compassion training (Gaiswinkler et al., 2019), despite showing antidepressant effects in a prior study (Neff and Germer, 2013). In general, breathing-based interventions seem to be of merit in improving depressive symptoms, potentially beyond conventional treatment approaches. For example, a study by Sharma et al. (2017) found that Sudarshan Kriya Yoga (SKY), a breathing-based meditation, induced large symptom improvements in depressed individuals resistant to antidepressant medication within 8 weeks. Of note, the largest reduction in depression occurred within the first 4 weeks, with small decreases during the subsequent half of the intervention period. In a further study, practicing SKY, which includes slow-paced breathing, resulted in enhanced vagal functioning in patients suffering from depression and/or anxiety, in addition to symptom reduction (Zope and Zope, 2013; Toschi-Dias et al., 2017). Recent research indicates that the benefits of paced breathing techniques on psychological well-being could originate from breathing-induced changes in brain activation patterns. During slow-paced breathing, slowly adapting stretch receptors in the lungs are recruited and in response amplify vagal afferent input to the nucleus tractus solitarius (NTS) in the brain stem (Carr and Undem, 2003; Kubin et al., 2006). The NTS projects to cortical and subcortical areas of the brain, including prefrontal areas, the cingulate, the nucleus paraventricularis of the hypothalamus, and the amygdala, which show altered functioning in depressed individuals (Ricardo and Koh, 1978; Petrov et al., 1993; Greicius et al., 2007; Siegle et al., 2007; Bao et al., 2008; Koenigs and Grafman, 2009). Accordingly, cumulative evidence indicates that modulating respiratory patterns could entrain brain activity and, in turn, may generate a neurofunctional signature corresponding to emotional well-being (Noble and Hochman, 2019). Hence, as hypothesized by Porges (2007), afferent vagal input, including slow-paced breathing, may aid in orchestrating emotional/psychological functioning via the cerebral susceptibility to upstream (i.e., afferent) vagal stimulation. Indeed, these frequently suggested physiological pathways could be one origin of HRVBF's efficacy. However, since we did not find any associations between HRV changes and improvements in depressive symptoms, including subscales, psychological mechanisms may also contribute to its potency. For example, successfully modulating the Qiu's visual feedback might foster self-efficacy, which seems decreased in depression (Bandura et al., 1999; Maeda et al., 2013). Of note, neither depression nor HRV trajectories were associated with mean relGR. Thus, the degree of participants' exposure to negative (i.e., red), neutral (i.e., orange), or positive (i.e., green) feedback during HRVBF had no distinct effect on the main outcomes. However, the sole experience of intentionally modulating the optical feedback or feelings of relaxation independent of the actual extent may have fostered self-efficacy. To our knowledge, this study is among the first to objectively assess whether HRVBF self-practice frequency is linked to depression or HRV changes (e.g., Karavidas et al., 2007; Siepmann et al., 2008; Caldwell and Steffen, 2018; Lin et al., 2019). Astoundingly, there were no significant associations.
However, it could well be that participants generally engaged in slow-paced breathing practice independent of HRVBF, which unfortunately was not documented. Obviously, more research is necessary, targeting the psychological and neurobiological mechanisms of the HRVBF effect. Nonetheless, considering no substantial HRV increases due to the standard treatment and the disturbed sympathetic-vagal balance in depression, our results suggest supplementing conventional therapies with HRVBF to specifically target autonomic dys/functioning (e.g., Kim et al., 2009; Koschke et al., 2009; Chien et al., 2015; Schumann et al., 2017). Moreover, the high level of practice adherence observed in the present study further supports the feasibility of HRVBF as adjunctive therapy in depressed individuals.

Strengths and Limitations

This study has several strengths and limitations that should be mentioned. Overall, the highly standardized environment during inpatient psychiatric rehabilitation allowed us to control for various confounders, thus strengthening the validity of the findings. However, since this and prior studies were not placebo controlled, it has yet to be evaluated whether the promising antidepressive HRVBF effects are independent of a potential placebo effect. In addition, the CG did not receive a control intervention. Therefore, it could be that the additional attention due to the HRVBF group sessions may have fostered increased perceived social support and behavioral activation within the IG, thus contributing to the beneficial effects. Another positive aspect of this study is that we could objectively assess HRVBF training compliance. However, breathing practice independent of HRVBF was not documented, which may have diluted any association between training adherence and the outcome measures. Noteworthily, assessing depressive symptoms 1 year post-training constitutes a unique feature of this study as compared to previous research, thus elucidating for the first time potential long-term effects of HRVBF. On the contrary, neither HRV data at follow-up nor breathing practice during the 1-year period were obtained, thus limiting the interpretation of our results. Another factor potentially confounding the observed HRV increases is the relatively short ECG recording time of 3 min, as compared to the recommended 5 min (Berntson et al., 1997). However, cumulative research indicates the reliability of ultra-short-term HRV recordings, suggesting the validity of our findings. For example, Shaffer et al. (2016) propose that an ECG recording length of 180 s is sufficient to reliably calculate LF-HRV and HF-HRV, with 3-min recordings yielding almost identical results as 5-min measurements. Further, accurate measures of RMSSD and SDNN may be obtained from 2-min recordings (Munoz et al., 2015).

CONCLUSION

The present research suggests additional benefits of HRVBF on the recovery of depressive symptoms and autonomic functioning in psychiatric rehabilitation inpatients. Thus, these findings argue for the value of HRVBF as adjunctive therapy during diverse treatment contexts, since it seemingly improves the therapeutic outcome regardless of concurrent treatment diversity and diagnosis. Importantly, the observed incremental effects could serve as resource after the training period and post-discharge, respectively.
However, since the observed long-term trends did not reach significance, adequately powered follow-up studies are warranted to confirm this hypothesis and to examine ways to preserve initial therapeutic gains. Nevertheless, our findings support the implementation of HRVBF as additional intervention to foster the recovery of depressive symptoms, especially since it could provide add-on benefits, including enhanced autonomic regulation, as compared to current standard therapies. Further research is needed to examine the robustness of these findings and to control for placebo effects.

DATA AVAILABILITY STATEMENT

Due to a data policy contract with the clinic we are not allowed to provide any data to third parties. Hence, no data is available.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of the University of Graz. The patients/participants provided their written informed consent to participate in this study.